
The Compute Grid


I've been watching CS153, a Stanford class taught by Anjney Midha, founder of AMP PBC, and have been drawn to his ideas about the standardization of compute. I wanted to develop those ideas further, so I spent some time summarizing his argument and digging deeper into the insight, the preconditions, the timeline, and the areas I believe are investable. I also want to make the case for why the early version of this should be built out of Singapore.


Anjney's core claim is that compute today sits where electricity sat before AC/DC won, or where global trade sat before the shipping container. It is the most important production input of our time, and yet it is not fungible. There is no grid equivalent for computing, which results in bilateral deals, opaque pricing, and most importantly, stranded capacity, which Anjney calls the GPU wastage bubble. Billions of dollars of GPUs sit underutilized at any given moment because there is no neutral way to pool, price, and route them.


His core thesis is that this is the pre-standardization era of compute. The history of every prior infrastructure category (power, telecoms, shipping, the internet) tells us that the bespoke phase eventually gives way to a coordinated grid. The questions are when, who builds the standard, and where the value accrues. Anjney names two primary factors: trusted standards and trusted institutions. The rest of this post is my attempt to inventory what else has to be true and where I think capital should go to make this thesis a reality.


The Insight


I believe there are a few core insights worth pulling out. The first is that compute today is not a commodity, despite a lot of investor framing that treats it like one. Commodities are fungible; every megawatt-hour and every barrel of Brent is interchangeable with the next. Compute is not. The atomic unit, the FLOP, varies by precision, by memory bandwidth, by network topology, and by software stack. Until those variables are standardized, the asset cannot trade like a commodity, no matter how badly the market wants it to.


The second insight is that non-fungibility is a coordination problem, not a physics problem. There is nothing physical preventing an H100 and a TPU from being substitutable for the same training run. What prevents it is CUDA lock-in, proprietary interconnects, and the absence of portable checkpoint formats. These are software and contract problems, which means they are addressable by the right combination of standards bodies, market infrastructure, and software protocols.


The third insight, and the one I think is hiding in plain sight, is that the most valuable layer to build will not be more chips or more datacenters. It will be the coordination layer that sits on top of them. The largest tech companies have spent more on infrastructure in the last three years than in the prior thirty combined. What is missing is the protocol that turns all that capital into a single addressable resource. Whoever builds that protocol, and the institutions around it, will capture a disproportionate share of the value.


I want to apply the same value-accrual framework I've used before. Value in The Compute Grid will concentrate across three points: access points, trust nodes, and coordination hubs. Access points are how buyers and workloads plug in, which includes things like portable runtimes, scheduler APIs, and broker interfaces. Trust nodes are how delivery is verified (attestation, measurement, certification, insurance). Coordination hubs are the marketplaces and clearinghouses where compute is priced, traded, and settled. I'll come back to this framework when I get to the thesis.


The Preconditions


To realize this vision, several preconditions must be met. I believe the key prerequisites are the existence of:


  1. A standardized unit of account for compute

  2. Workload portability across hardware

  3. Verifiable delivery and attestation

  4. Market infrastructure for settlement


As an investor, I see meaningful early progress in the first two and almost nothing yet in the last two. That asymmetry is where I think the investable opportunity sits.


A standardized unit of account is the foundation. A FLOP alone is no more a tradable unit than raw horsepower is. To price a workload meaningfully, we need a composite measure that captures precision (FP8 vs BF16 vs FP32), memory bandwidth, interconnect latency, and effective throughput on representative workloads. The grid analog of 1 kWh delivered at 60 Hz / 120 V needs an equivalent for compute. MLPerf is the embryo, but it is a benchmarking exercise, not a settlement unit. I've been keeping an eye on early efforts here: the Open Compute Project's measurement working groups, the various "GPU index" launches, and academic work on effective-FLOP normalization. None of them are yet at the level where two parties could write an enforceable contract against the unit. That gap is investable.
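To make the gap concrete, here is a minimal sketch of what a composite settlement unit might look like. Everything in it is hypothetical: the field names, the FP16 reference point, and the "unit-hour" normalization are invented for illustration, not drawn from MLPerf or any existing standard.

```python
from dataclasses import dataclass

@dataclass
class AcceleratorProfile:
    peak_flops: float          # peak throughput at the stated precision, FLOP/s
    precision_bits: int        # 8, 16, or 32
    mem_bandwidth_gbs: float   # GB/s; a real unit would price this in,
    interconnect_gbs: float    # GB/s; the toy formula below ignores both
    utilization: float         # measured fraction of peak on a reference workload (0..1)

def effective_unit_hours(p: AcceleratorProfile, hours: float) -> float:
    """Normalize delivered compute to a single settlement-style unit.

    The reference unit is arbitrarily chosen here as 1e15 FP16-equivalent
    FLOP/s sustained for one hour ("1 unit-hour").
    """
    # Scale precision relative to FP16: an FP8 op counts as half the work.
    precision_factor = p.precision_bits / 16
    # Only throughput actually achieved on the reference workload counts.
    sustained_flops = p.peak_flops * p.utilization
    return sustained_flops * precision_factor * hours / 1e15
```

The point of the sketch is that two accelerators with very different headline FLOP counts can settle to the same number of unit-hours once precision and realized utilization are normalized out, which is exactly what an enforceable contract would need.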


Workload portability is the second precondition and probably the largest blocker today. A fungible market cannot exist if the asset only runs on one vendor's stack. CUDA lock-in is the AC-vs-DC fight of our era, and right now, NVIDIA is Edison. The candidates trying to break it (Triton, MLIR, OpenXLA, ROCm, Modular's MAX, and oneAPI) are still immature, but the direction of travel is clear. There is a real opportunity for a portable runtime layer that lets a frontier training run move between H100s, B200s, TPUs, and Trainium without a custom engineering project. The companies that win this layer will look more like an operating system than a compiler. State portability is the underrated subset of this: checkpointing, optimizer-state transfer, and the ability to suspend and resume across heterogeneous hardware. Without it, jobs can't migrate mid-run, and without that, you don't have a real spot market.
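To illustrate why state portability is the hinge, here is a minimal sketch of a hardware-agnostic checkpoint envelope. The format is entirely hypothetical, and real systems would need efficient tensor encodings rather than JSON, but the shape of the problem is the same: weights, optimizer state, step counter, and RNG seed have to travel in a vendor-neutral container with an integrity check before a job can migrate mid-run.

```python
import hashlib
import json

def pack_checkpoint(step, weights, optimizer_state, rng_seed):
    """Serialize training state into a portable, self-describing blob."""
    envelope = {
        "format_version": 1,
        "step": step,
        "rng_seed": rng_seed,
        # Values are plain nested lists, so no vendor tensor layout leaks in.
        "weights": weights,
        "optimizer_state": optimizer_state,
    }
    blob = json.dumps(envelope, sort_keys=True).encode()
    # A content hash lets the receiving scheduler verify integrity on resume.
    return blob, hashlib.sha256(blob).hexdigest()

def unpack_checkpoint(blob, expected_digest):
    """Verify and decode a checkpoint blob on the destination hardware."""
    if hashlib.sha256(blob).hexdigest() != expected_digest:
        raise ValueError("checkpoint corrupted in transit")
    return json.loads(blob)
```

Everything a scheduler needs to resume the run on different silicon is in the envelope, which is the property that makes mid-run migration, and therefore a spot market, possible.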


Verifiable delivery is the precondition that almost no one is building. If I rent your idle H100s, how do I cryptographically prove the FLOPs I paid for were actually delivered? How do you prove that I didn't exfiltrate model weights? Without confidential computing, TEEs, and signed proofs of compute, fungibility collapses to the boundaries of pre-existing trust relationships, which is exactly the world we have today. Early companies working on this include Phala, Marlin, and Gensyn, but I think the space is under-invested relative to its importance. Investable areas here include attestation services, secure enclave runtimes, and the audit and certification businesses that sit on top of them.
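The primitive this paragraph points at is a signed delivery receipt. The sketch below is hypothetical and uses an HMAC over a shared secret purely to show the shape; a real attestation scheme would anchor the signature in a hardware-rooted TEE key rather than a secret both parties hold, and all field names here are invented.

```python
import hashlib
import hmac
import json

def issue_receipt(provider_key: bytes, job_id: str, unit_hours: float,
                  output_digest: str) -> dict:
    """Provider-side: bind the billed quantity to a signature."""
    body = {"job_id": job_id, "unit_hours": unit_hours,
            "output_digest": output_digest}
    msg = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(provider_key, msg, hashlib.sha256).hexdigest()
    return body

def verify_receipt(provider_key: bytes, receipt: dict) -> bool:
    """Buyer-side: recompute the signature over everything but the signature."""
    body = {k: v for k, v in receipt.items() if k != "signature"}
    msg = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(provider_key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])
```

Once every billable quantity carries a proof like this, disputes become mechanical checks rather than negotiations, which is what lets strangers transact.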


Market infrastructure is the fourth precondition and the one I'm most excited about as an investor. Electricity has independent system operators running real-time and day-ahead markets, FERC regulating, NERC enforcing reliability, and a deep stack of futures, swaps, and forward contracts. Compute has none of this. To build it, we need a clearinghouse, standardized contracts with explicit SLAs and force-majeure provisions, transparent price discovery, and derivatives that let buyers hedge and sellers lock in revenue. CoreWeave, Lambda, Together, Foundry, and the various GPU marketplaces are early spot-market plays, but there is no NYMEX of compute, no ICE, and no listed compute futures contract. This is the single most under-built layer and, I believe, the most investable.
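To make the contract layer concrete, here is a minimal sketch of how a standardized compute contract might settle against an SLA. The thresholds, penalty schedule, and field names are all invented for illustration; a real clearinghouse would publish these in the contract specification.

```python
from dataclasses import dataclass

@dataclass
class ComputeContract:
    unit_price: float        # $ per delivered unit-hour
    contracted_hours: float  # total unit-hours sold forward
    sla_fraction: float      # e.g. 0.99 of contracted hours guaranteed
    penalty_rate: float      # $ per unit-hour short of the SLA floor

def settle(c: ComputeContract, delivered_hours: float) -> float:
    """Return the net payment from buyer to seller at settlement."""
    # Buyer pays only for what was delivered, capped at the contracted amount.
    payment = c.unit_price * min(delivered_hours, c.contracted_hours)
    # Seller owes a penalty for any shortfall below the SLA floor.
    shortfall = max(0.0, c.sla_fraction * c.contracted_hours - delivered_hours)
    return payment - c.penalty_rate * shortfall
```

A contract this explicit is what makes the rest of the stack possible: once settlement is a pure function of measured delivery, the contract can be cleared centrally, marked to an index, and hedged with derivatives.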


The Timeline


In addition to the preconditions, I want to lay out how I think this might develop over the next few years. What can we expect to see as The Compute Grid forms?


  1. Portable runtimes that abstract away hardware vendor lock-in for the majority of inference and a meaningful fraction of training.

  2. A published spot index for compute that real buyers and sellers transact against, not a survey-based proxy.

  3. Cross-vendor SLAs that are priced and underwritten by insurers, not just marketed by providers.

  4. Attestation-native compute, where every billable unit is accompanied by a signed proof of work.

  5. Listed compute derivatives that let labs hedge multi-year capacity exposure the way airlines hedge fuel.


I think this unfolds in roughly three phases.


The first phase, over the next 1–2 years, is about defining the unit and breaking portability lock-in. The work here is part technical and part political. On the technical side, Triton, MLIR, and OpenXLA need to mature to the point where serious training workloads run cross-vendor without large performance penalties. On the political side, a critical mass of buyers, including labs, hyperscaler customers, and sovereign nations, needs to make portability a procurement requirement. I think this will happen first in inference, where workloads are smaller, statelessness is the norm, and the latency tolerance is higher. The inference brokers (OpenRouter, Replicate, Fireworks, Together) are already most of the way to commodifying inference; the question is whether they evolve into the unit-of-account standard-setters or get displaced by something more neutral.


The second phase, in years 2–4, is about building the trust and market layer. This is where I think the institutional work happens: attestation protocols, audit firms, insurance products, and the first generation of standardized compute contracts. The forcing function will likely be a procurement decision by a major buyer, an outage that cascades into customer pain, or a regulatory action that mandates auditable compute provenance. I'm watching for whether a credible neutral entity emerges to play the standards-setter role. Hyperscalers and chip vendors won't, because non-fungibility is their margin. The most plausible candidates are a government-adjacent body, an open-source consortium led by the Linux Foundation, an insurer or reinsurer that needs standardization to underwrite, or, most interestingly, a sovereign nation with the scale and neutrality to make it stick. 


The third phase, in years 4–7, is full convergence. By this point, I expect to see a recognized unit of account, listed derivatives, multiple credible exchanges, regulated clearinghouses, and a class of buyers that operationally treats compute the way airlines treat jet fuel. Capital flows into capacity expansion at lower risk premia because the demand-side commitments are hedgeable. Stranded capacity collapses because pooling is the default rather than the exception. The boom-and-bust dynamic that every prior infrastructure category has lived through gets dampened the way mature commodity markets smooth the prices of physical goods.


The Thesis


If this is the trajectory, where should capital go?


Most investors today are deploying into the chip and datacenter layer. That is rational at the scale the hyperscalers and sovereigns are operating at, but it is not the right place for venture capital. The chip and datacenter layer is capital-intensive, vertically integrated, and dominated by incumbents. The venture-scale opportunity sits in the coordination layer above it.


Mapping back to the access-points, trust-nodes, and coordination-hubs framework: I believe the most overlooked and most investable layers right now are the trust nodes and the coordination hubs. The access points (portable runtimes, scheduler APIs, broker interfaces) are crowded with strong teams (Modular, Fireworks, Together, OpenRouter), and value here will accrue to whoever builds the operating-system equivalent for cross-vendor compute. The trust nodes (attestation, measurement, certification, insurance) are dramatically under-built. Themes I'm excited about include attestation-as-a-service for AI workloads, cryptographic compute proofs, and the audit and rating businesses that will sit on top of them. The coordination hubs (exchanges, clearinghouses, derivatives platforms, and settlement layers) are where I think the largest single outcomes will be built. The team that becomes the CME of compute will define the unit of account, capture the spread on every transaction, and become systemically important to the entire industry.


There is a parallel here to my earlier writing on The Grid. The same pattern that applies to identity, money, and energy applies to compute. The coordination hubs replace centralized intermediaries with neutral protocols. The trust nodes turn opaque relationships into verifiable ones. The access points lower the barrier to entry for both supply and demand. The big winner across all four preconditions will look less like a chip company and more like an operating system or a financial exchange.


Why Singapore


Of all the places this could be built, I believe Singapore is the strongest candidate for the early version. The argument has five parts.


The first is heritage. Singapore is the world's largest bunker fuel port, the largest physical oil trading hub in Asia, a major LNG trading center, and home to deep commodities markets in palm oil, rubber, and sugar. The infrastructure for trading goods that are physically heterogeneous but commercially standardized, like every grade of crude and every quality of bunker fuel, already exists here. The Singapore Exchange has a track record of launching novel commodity derivatives and making them stick. Building a clearinghouse for compute is not a foreign idea in Singapore; it is the natural extension of what the country has been doing for fifty years.


The second is regulatory clarity. The Monetary Authority of Singapore is one of the most respected regulators globally, and it has consistently shown a willingness to engage with novel financial products before its peers. Project Guardian on asset tokenization, the Variable Capital Companies framework, the Payment Services Act on digital tokens, and the various MAS sandboxes have made Singapore the default jurisdiction for new financial primitives in Asia. A compute clearinghouse, a compute derivatives exchange, or a tokenized compute credit would all need a regulator willing to think carefully and move quickly. MAS is that regulator.


The third is geopolitical neutrality. Trusted standards require trusted institutions, and trusted institutions require political legitimacy with all the major parties. Singapore is trusted by Washington and Beijing, by hyperscalers and sovereigns, by Western capital and Asian capital, in a way that essentially no other jurisdiction is. The neutral standards-setter role is exactly the role Singapore already plays in maritime, finance, and arbitration. Hong Kong used to share this role and is increasingly entangled with China. London and New York are deeply Western-coded and would struggle to win Asian sovereign trust. Singapore is the natural neutral ground for an asset class like compute that is currently bifurcating along US-China lines.


The fourth is the energy and datacenter angle. Singapore is one of Asia's largest data center hubs and was forced to confront the energy-compute coupling earlier than almost anyone else. The data center moratorium in 2019 and the subsequent careful re-licensing process pushed regional capacity into Johor and Batam while keeping Singapore as the coordination and operational center. The recent green-corridor agreements to import renewable power from Indonesia and Australia are explicitly about powering AI workloads. Singapore takes seriously the fact that compute is fundamentally a way to convert electricity into intelligence, and the country is structurally set up to coordinate the physical layer of the grid for the broader region.


The fifth is the sovereign capital and government-led infrastructure pattern. Temasek, GIC, and EDBI have a long history of underwriting patient infrastructure and standards-building activity, including in adjacent areas like maritime, semiconductors, and biotech. The National AI Strategy 2.0 and the various Compute@Scale initiatives are evidence that the government is willing to put real capital and policy weight behind compute infrastructure. These are all the building blocks we need to see for a credible neutral entity to emerge with a set of compute standards for the region.


I've spent time in the region at conferences in Singapore, at network state experiments in Thailand, in conversations with MAS-adjacent teams, and the appetite is here. In the same way Bhutan is in a unique position to pilot a national identity protocol, and the Philippines is in a unique position to scale digital ID, Singapore is in a unique position to build the trust and market infrastructure for compute. None of the US-based attempts to do this will have the regulatory clarity, the neutrality, or the regional buy-in that a Singapore-anchored effort would. And ASEAN as a whole, which has 600 million people, rapidly growing AI demand, and a strong preference for a regional rather than US- or China-aligned compute supply chain, is the natural first market.


The hard part is getting the first standard adopted. The pattern I'd expect to see: an MAS-led sandbox for compute settlement, an SGX-listed compute derivatives contract, a public-private clearinghouse, and a small number of anchor buyers (Singaporean sovereign labs, regional hyperscaler customers, ASEAN governments) that commit to using the standardized unit of account. Once that flywheel turns, the same standard becomes the default for the rest of the region and, eventually, for cross-regional trade.


Conclusion


We are at an unusual moment. The capital flowing into compute is larger than into any prior infrastructure category in human history, and yet the coordination layer that turns that capital into a productive grid does not yet exist. The historical pattern is clear when we look at power, telecoms, shipping, and the internet, all of which went through a pre-standardization era and emerged with a grid. Compute will too. The question is who builds it, where it gets built, and which investors are positioned to back the teams that build it.


I'm convinced that the venture-scale opportunity sits in the trust nodes and coordination hubs, that the early version of the grid is most likely to be built out of Singapore, and that the next two to three years are the window where the foundational positions get taken. If any of this resonates, or if you're building in this space, please reach out — I'd love to spend more time thinking about how to build this future together.

 
 
 
