
Liron Shapira
Host of Doom Debates: the disagreements we must resolve before the world ends.
What are you talking about, man. Every time you dunk on Beff Jezos's dazzling VC-backed chip, it's like a high-IQ roll call in my likes.


Liron Shapira · October 30, 2025
Today's Extropic launch raises some new red flags.
I started following this company when they refused to explain the input/output spec of what they're building, leaving us waiting for clarification.
Here are 3 red flags from today:
1. From
"Generative AI is Sampling. All generative AI algorithms are essentially procedures for sampling from probability distributions. Training a generative AI model corresponds to inferring the probability distribution that underlies some training data, and running inference corresponds to generating samples from the learned distribution. Because TSUs sample, they can run generative AI algorithms natively."
This is a highly misleading claim about the algorithms that power the most useful modern AIs, on the same level of gaslighting as calling the human brain a thermodynamic computer. IIUC, as far as anyone knows, the majority of AI computation work doesn't match the kind of input/output that you can feed into Extropic's chip.
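To make the objection concrete, here's a minimal sketch (toy sizes, plain NumPy, my own illustration rather than anything from Extropic) of where sampling actually sits in LLM inference: almost all of the compute is deterministic matrix multiplication, and randomness enters only in a single categorical draw at the very end.

```python
# Minimal sketch: where "sampling" actually lives in LLM inference.
# The heavy computation is deterministic matrix math; randomness
# enters only at the final token draw. Toy sizes throughout.
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 64, 100                       # toy hidden size / vocabulary
h = rng.normal(size=d)                   # hidden state from the prompt

# The bulk of the FLOPs: deterministic matmuls through the layers
for W in [rng.normal(size=(d, d)) for _ in range(12)]:
    h = np.tanh(W @ h)                   # stand-in for attention/MLP blocks

# The only stochastic step: one categorical draw over the vocabulary
logits = rng.normal(size=(vocab, d)) @ h
probs = np.exp(logits - logits.max())    # softmax
probs /= probs.sum()
next_token = rng.choice(vocab, p=probs)
```

If that picture is right, a chip whose native operation is drawing samples from a distribution accelerates the cheap step, not the expensive one.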
The page says:
"The next challenge is to figure out how to combine these primitives in a way that allows for capabilities to be scaled up to something comparable to today’s LLMs. To do this, we will need to build very large TSUs, and invent new algorithms that can consume an arbitrary amount of probabilistic computing resources."
Do you really need to build large TSUs to research whether LLM-like applications can benefit from this hardware? I would've thought it'd be worth spending a couple million dollars investigating that question via a combination of theory and modern cloud supercomputing hardware, instead of spending over $30M on building hardware that might be a bridge to nowhere.
Their own documentation for THRML (their open-source library) says:
"THRML provides GPU‑accelerated tools for block sampling on sparse, heterogeneous graphs, making it a natural place to prototype today and experiment with future Extropic hardware."
You're saying you don't yet know how your hardware primitives could *in principle* be applied toward useful applications of some kind, and you created this library to help do that kind of research using today's GPUs…
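For readers who haven't seen this class of algorithms, here's a generic sketch of block Gibbs sampling on a sparse Ising model, the kind of primitive the THRML quote describes. This is my own plain-NumPy illustration, not THRML's actual API:

```python
# Block Gibbs sampling on a sparse Ising model (a ring graph).
# Even and odd sites are conditionally independent on a ring,
# so each color block can be updated in parallel.
import numpy as np

rng = np.random.default_rng(1)
n = 16
J = np.zeros((n, n))                      # sparse couplings: a ring
for i in range(n):
    J[i, (i + 1) % n] = J[(i + 1) % n, i] = 0.5
h = np.zeros(n)                           # external fields
s = rng.choice([-1, 1], size=n)           # random initial spins

blocks = [np.arange(0, n, 2), np.arange(1, n, 2)]
for _ in range(100):                      # Gibbs sweeps
    for block in blocks:
        field = J[block] @ s + h[block]   # local field on the block
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
        s[block] = np.where(rng.random(len(block)) < p_up, 1, -1)
```

The open question Extropic itself is flagging is how you get from primitives like this to anything resembling an LLM workload.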
Why not release the Python library (THRML) earlier, do the bottleneck research you said needs to be done, and engage the community to help you answer this key question by now? Why wait all this time, launching this extremely niche, tiny-scale hardware prototype before explaining this make-or-break bottleneck, and only now publicizing your search for potential partners with relevant "probabilistic workloads", when the cost of not doing so was $30M and 18 months?
2. From
"We developed a model of our TSU architecture and used it to estimate how much energy it would take to run the denoising process shown in the above animation. What we found is that DTMs running on TSUs can be about 10,000x more energy efficient than standard image generation algorithms on GPUs."
I'm already seeing people on Twitter hyping the 10,000x claim. But anyone who's followed the decades-long saga of quantum computing companies claiming "quantum supremacy" with similar hype figures knows how much care needs to go into defining that kind of benchmark.
In practice, it tends to be extremely hard to point to situations where a classical computing approach *isn't* much faster than the claimed "10,000x faster thermodynamic computing" approach. The Extropic team knows this, but opted not to elaborate on the conditions that would reproduce the hype benchmark they wanted to see go viral.
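To see how much the baseline choice matters, here's a toy calculation; every number is hypothetical and chosen purely to illustrate the sensitivity, none of them come from Extropic:

```python
# Illustrative arithmetic only: all figures are hypothetical, chosen
# to show how baseline choices swing an "N-thousand-x" efficiency claim.

def joules_per_sample(power_watts, seconds, n_samples):
    """Energy consumed per generated sample."""
    return power_watts * seconds / n_samples

# Scenario A: compare against an unoptimized, unbatched GPU baseline.
gpu_naive = joules_per_sample(power_watts=300, seconds=10, n_samples=1)
tsu_claim = joules_per_sample(power_watts=0.01, seconds=0.03, n_samples=1)
print(gpu_naive / tsu_claim)    # ~10,000,000x: looks spectacular

# Scenario B: same novel chip, but the GPU baseline is batched and tuned.
gpu_tuned = joules_per_sample(power_watts=300, seconds=1, n_samples=10_000)
print(gpu_tuned / tsu_claim)    # ~100x: five orders of magnitude smaller
```

Same hardware on both sides; the headline multiplier moved by five orders of magnitude purely from how the classical baseline was defined.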
3. The terminology they're using has been switched to "probabilistic computer": "We designed the world’s first scalable probabilistic computer." Until today, they were using "thermodynamic computer" as their term, and claimed in writing that "the brain is a thermodynamic computer".
One could give them the benefit of the doubt for pivoting their terminology. It's just that they were always talking nonsense about the brain being a "thermodynamic computer" (in my view the brain is neither that nor a "quantum computer"; it's very much a neural net algorithm running on a classical computer architecture). And this sudden terminology pivot is consistent with them having been talking nonsense on that front.
Now for the positives:
* Some hardware actually got built!
* They explain how its input/output potentially has an application in denoising, though, as mentioned, they are vague on the details of the supposed "10,000x thermodynamic supremacy" they achieved on this front.
Overall:
This is about what I expected when I first started asking about the input/output spec 18 months ago.
They had a legitimately cool idea for a piece of hardware, but no plan for making it useful, just the vague beginnings of theoretical research that had a chance to make it useful.
They seem to have made respectable progress getting the hardware into production (about as much as $30M buys you), and seemingly less progress finding reasons why this particular hardware, even after 10 generations of successor refinements, will be of use to anyone.
Going forward, instead of responding to questions about your device's input/output by "mogging" people and calling it a company secret, and tweeting hyperstitions about your thermodynamic god, I'd recommend being more open about the giant life-or-death question the tech community might actually be interested in helping you answer: whether someone can write a Python program in your simulator that gives stronger evidence that some kind of useful "thermodynamic supremacy" with your hardware concept can ever be a thing.
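For what it's worth, that experiment doesn't need their hardware at all. A rough skeleton (my own sketch; the two samplers named in the comments are hypothetical placeholders for whatever is being compared):

```python
# Skeleton of a simulator-vs-baseline comparison: run each sampler for
# a fixed wall-clock budget, then compare throughput at equal quality.
import time
import numpy as np

def effective_samples_per_second(sampler, target_mean, budget_s=10.0):
    """Draw samples for budget_s seconds; return rate and a crude error."""
    t0, draws = time.perf_counter(), []
    while time.perf_counter() - t0 < budget_s:
        draws.append(sampler())
    err = abs(np.mean(draws) - target_mean)
    return len(draws) / budget_s, err

# Plug in a THRML-style simulated-chip sampler and a tuned classical
# sampler targeting the same distribution, and publish the comparison.
```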

2.36K
A firsthand account from a father whose son suddenly developed an aggressive cancer (24-hour doubling time), and who is navigating it in real time with a preview of Gemini 3. Crazy. I've never heard a story like this.

The Cognitive Revolution Podcast · November 19, 00:29
Today, @NathanLabenz talks about navigating his 6-year-old son Ernie's sudden cancer diagnosis with AI. This is a must-listen for anyone facing a medical crisis or considering using AI for medical decisions.
The good news: Ernie's prognosis is good, and after the first wave of treatment his bone marrow and spinal fluid are clear.
In this episode, Nathan shares:
* How the exponential growth of Burkitt leukemia (24-hour doubling time) mirrors AI acceleration patterns (see the quick arithmetic sketch after this list)
* How he used Claude 3.5, Perplexity, GPT-4, and NotebookLM to analyze blood work, research protocols, and prepare for doctor meetings
* Why proactive, AI-informed advocacy led to earlier chemotherapy and an 80% reduction in the tumor
* The gap between standard care and personalized, AI-enhanced medicine
* Practical guidance on prompting AI effectively with full medical context
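As a quick sense of scale for the first bullet (toy arithmetic, not figures from the episode): with a 24-hour doubling time, burden multiplies by 2^d after d days, which is why even a few days of delay matter.

```python
# Doubling-time arithmetic: growth after d days at one doubling per day.
for d in [0, 3, 7, 14]:
    print(f"day {d}: {2 ** d:>6}x the initial burden")
# day 0: 1x   day 3: 8x   day 7: 128x   day 14: 16384x
```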
Timestamps:
(00:00) Son's diagnosis
(06:05) Exponentials and early symptoms
(13:30) Vacation crisis (Part 1)
(15:00) Sponsors: Framer | Tasklet
(17:41) Vacation crisis (Part 2)
(24:26) AI's first intervention (Part 1)
(34:34) Sponsor: Shopify
(36:31) AI's first intervention (Part 2)
(36:41) Advocating for answers
(47:56) The scariest days
(59:06) AI as a bedside copilot
(1:11:42) Managing secondary risks
(1:18:28) Planning for relapse
(1:31:54) AI's undeniable value
(1:43:02) Anger over the right to challenge
(2:02:55) Outro
2.14K
Get ready for this week's all-star Doom Debate:
Should we ban the development of superintelligent AI?
🔥 @tegmark vs. @deanwball 🔥

Max Tegmark · October 22, 2025
A stunningly broad coalition has come out against Skynet: AI researchers, faith leaders, business pioneers, policymakers, NatSec folks and actors stand together, from Bannon & Beck to Hinton, Wozniak & Prince Harry. We stand together because we want a human future.
#KeepTheFutureHuman

3.7K
