Factory Flight Sims, Library Science, and Doc-in-a-Box
A Weekly Text on the Now, New, and Next
1. They Can Call It Gemini. Oh Wait.
Google introduced two new eighth-generation TPUs (ahem, tensor processing units... think CPUs, but for AI) instead of one: TPU 8t for training and TPU 8i for inference. The tell is that Google frames TPU 8i around agent workloads, where low latency and memory bandwidth matter because an agent may need to reason, call tools, and loop many times before a user sees the result. (Source: Google)
Why it matters: Think of this as the difference between the French fry factory and the drive-through window. Training wants giant batch production; agentic inference wants fast, repeatable service at the counter. Decidedly diff’rent strokes. The chip market is starting to bifurcate around that difference. Google continues to intrigue me here because they’re running their own “silicon-to-software” stack for AI, much as Apple has long done for its offerings. Could be an interesting differentiator over the coming few years.
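If you want the drive-through math on a napkin, here’s a tiny sketch (all numbers invented, not TPU 8i specs): in an agent loop, latency is paid on every step, so it compounds in a way batch throughput never does.

```python
# Illustrative arithmetic only: made-up latencies, not TPU specs.

def agent_wall_clock(steps: int, model_s: float, tool_s: float) -> float:
    """An agent that reasons, calls a tool, and loops pays latency serially."""
    return steps * (model_s + tool_s)

# A 12-step agent loop at 2.0s per model call and 1.0s per tool call:
print(agent_wall_clock(12, 2.0, 1.0))  # 36.0 seconds before the user sees anything

# Shave model latency to 1.2s and the same loop returns ~10s sooner, because
# every iteration pays the cost again. Batch training never compounds this way.
print(agent_wall_clock(12, 1.2, 1.0))  # 26.4 seconds
```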
2. Grid Lock
FERC (ahem, the Federal Energy Regulatory Commission) said it will act by June 2026 on reforms for connecting large loads, including data centers, to transmission infrastructure. The same week, the IEA (ahem, the International Energy Agency) reported that data center electricity use rose 17% in 2025, with AI-focused facilities climbing faster, and noted that the biggest tech companies’ capital spending has reached energy-system scale. (Source: FERC)
Why it matters: We’ve been covering this a lot here at Onward!, but as a recap: First, AI strategy was a race for talent. Then, for data. Then, for chips. All three are still “things”, but the chokepoint to rule them all continues to be power. That feeling when you show up late to your gate at the airport and see all the power outlets are taken? Yeah... That, but with server racks.
3. Doc-in-a-Box
OpenAI released GPT-Rosalind as a trusted-access model for biology, drug discovery, and translational medicine, pairing a specialized reasoning model with a Codex life-sciences plugin that connects to more than 50 scientific tools and data sources. OpenAI framed the release around governed access, including safeguards for dual-use biological risk. (Source: OpenAI)
Why it matters: Our pesky try-hard digital intern has leveled up to a regulated lab bencher. The interesting part, to me anyway, is not just smarter answers, but rather, the controlled interface between models, instruments, datasets, and safety rules. This is the stuff that niche-y consultants and SIs have historically made a fortune on, and as Scientific AI becomes more of an operating layer for discovery, the winners may well be the folks who manage workflow, provenance, and access as well as model accuracy.
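To make “controlled interface” concrete, here’s a minimal sketch of a governed tool gateway: permission check first, provenance log always. The roles, tools, and rules are hypothetical; this is the shape of the idea, not OpenAI’s implementation.

```python
# A minimal governed tool gateway: permission-check every call, log provenance
# for every call. Roles, tool names, and rules here are hypothetical.
from datetime import datetime, timezone

ALLOWED = {
    "clinician": {"literature_search", "assay_db"},
    "wet_lab": {"literature_search", "assay_db", "sequence_designer"},
}
audit_log: list[dict] = []

def call_tool(user: str, role: str, tool: str, query: str) -> str:
    entry = {"user": user, "tool": tool, "query": query,
             "at": datetime.now(timezone.utc).isoformat()}
    if tool not in ALLOWED.get(role, set()):
        audit_log.append({**entry, "allowed": False})
        raise PermissionError(f"{role} may not call {tool}")
    audit_log.append({**entry, "allowed": True})
    return f"[{tool}] results for: {query}"  # stand-in for the real dispatch

print(call_tool("dr_chen", "clinician", "assay_db", "IC50 for compound X"))
```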
4. Library Science
Keepin’ it labby, Google introduced Deep Research Max in the Gemini API, built for long-horizon web and custom-source research with MCP support, file uploads, connected stores, and cited reports that can blend the open web with proprietary data streams through a single API call. Google also pointed to work with data providers including FactSet, S&P Global, and PitchBook on MCP server designs. (Source: Google)
Why it matters: It’s like the difference between occasionally asking a librarian a question and hiring one onto your core team. Once research agents can call private stores, vendor data, tools, and the web through shared interfaces, the value kicks up a notch from searching to internal wiring. The excitement, in turn, comes with a new family of questions: who owns the connectors, permissions, and audit trail behind the answers?
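For the curious, here’s roughly what “one API call, many sources” could look like. The request shape and audit helper below are hypothetical illustrations, not the actual Gemini API.

```python
# Hypothetical request shape (not the actual Gemini API): one call, several
# connectors, and citations that can be walked back to their sources.
research_request = {
    "task": "How did data-center power demand shift in 2025?",
    "sources": [
        {"type": "open_web"},
        {"type": "mcp_server", "name": "sp_global"},        # vendor connector
        {"type": "file_store", "id": "internal-notes-01"},  # proprietary docs
    ],
    "output": {"format": "cited_report"},
}

def audit_citations(report: dict) -> list[str]:
    """Who owns the answer? Trace each claim back to its connector."""
    return [f"{c['claim']} <- {c['source']}" for c in report.get("citations", [])]

# A stand-in response, shaped the way a cited report might come back:
report = {"citations": [
    {"claim": "demand rose 17% in 2025", "source": "open_web:iea.org"},
    {"claim": "capex hit energy-system scale", "source": "mcp_server:sp_global"},
]}
print(*audit_citations(report), sep="\n")
```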
5. Codex gets its Claws
OpenAI expanded Codex from a coding assistant toward an uber-agent: it can use more apps and tools, run multiple Mac agents in parallel, connect to over 90 plugins, remember preferences, schedule future work, and operate across pull requests, terminals, browsers, and remote dev boxes. A follow-up enterprise push added Codex Labs and systems-integrator partners to scale adoption inside large companies. (Source: OpenAI)
Why it matters: On one hand, we should have figured something like this would happen when OpenAI acquired OpenClaw. On the other, it’s shocking to see it happening at such a pace. Regardless: Software workstations are going full-on dispatch board. The developer is no longer only typing into an editor; they are allocating tasks to semi-autonomous workers that touch tickets, repos, terminals, browsers, and enterprise tools. That changes the job shape, but it also changes procurement: agentic software is becoming a managed operating model, not just a whiz-bang add-on.
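Here’s the dispatch-board shape in miniature: queue tasks, let workers run in parallel, review the outcomes. The tasks are hypothetical and the worker is a stub; this sketches the operating model, not Codex internals.

```python
# The dispatch-board shape of work: queue tasks, run workers in parallel,
# review outcomes. Task names are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Stand-in for a worker touching tickets, repos, terminals, or browsers.
    return f"done: {task}"

tasks = [
    "triage open PR review comments",
    "bump dependencies and run the test suite",
    "draft release notes from merged tickets",
]

with ThreadPoolExecutor(max_workers=3) as board:
    for result in board.map(run_agent, tasks):
        print(result)  # the human reviews outcomes instead of typing each step
```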
6. There Goes the Neighborhood
IBM and the University of Illinois expanded their Discovery Accelerator with a push on quantum-centric supercomputing, explicitly pairing IBM quantum computers with NCSA’s Delta and DeltaAI supercomputers. In parallel, IBM’s quantum platform added early access to ibm_berlin, the first Nighthawk r1 QPU in the EU, with faster median two-qubit gates than the Miami system. (Source: IBM)
Why it matters: For years, quantum has been sold like a standalone magic flute. This move seats it inside the orchestra, next to GPUs, CPUs, storage, and regional cloud controls. The path to usefulness may come less from a sudden quantum leap (sorry) than from hybrid scheduling, data movement, and knowing when the exotic machine should touch the ordinary neighbors.
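A toy version of that “when should the exotic machine play” decision might look like the routing policy below; the heuristics and thresholds are invented for illustration, not IBM’s scheduler.

```python
# Toy hybrid-scheduling policy: route each sub-task to the QPU only when it
# plausibly benefits and fits the hardware. Heuristics are invented, not IBM's.
def route(subtask: dict) -> str:
    quantum_friendly = (
        subtask["kind"] in {"chemistry_sim", "combinatorial_sampling"}
        and subtask["qubits"] <= 120                          # fits the device
        and subtask["depth"] <= subtask["coherence_budget"]   # finishes in time
    )
    return "QPU" if quantum_friendly else "GPU/CPU cluster"

pipeline = [
    {"kind": "data_prep", "qubits": 0, "depth": 0, "coherence_budget": 0},
    {"kind": "chemistry_sim", "qubits": 80, "depth": 500, "coherence_budget": 800},
    {"kind": "postprocess", "qubits": 0, "depth": 0, "coherence_budget": 0},
]
for step in pipeline:
    print(f"{step['kind']} -> {route(step)}")
```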
7. Factory Flight Sim
At Hannover Messe, NVIDIA and partners positioned AI-driven manufacturing around a full physical stack: accelerated computing, AI physics, agents, robotics, real-time simulation, vision AI, and humanoid robots operating in factories. The point was not a single robot demo; it was a deployment chain from design to simulation to factory-floor action. (Source: NVIDIA)
Why it matters: Bits are cheaper (and more failure tolerant) than atoms, which is why flight simulators are, well, a thing. This is like a flight simulator for a factory floor: If factories can test, tune, and supervise physical work in simulated spaces like these before pushing changes into robots and lines, physical AI (i.e., real robots) gets real-er, faster.
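A minimal sketch of that gate, with invented names and thresholds (not NVIDIA’s pipeline): a new robot policy only ships to the physical line after clearing enough simulated trials.

```python
# Sim-first deployment gate: a policy reaches the physical line only after
# clearing enough simulated trials. Names and the 98% bar are illustrative.
import random

random.seed(42)  # keep the toy reproducible

def simulate_trial() -> bool:
    return random.random() > 0.05  # stand-in for one physics-sim rollout

def gate_to_factory(policy: str, trials: int = 500, bar: float = 0.98) -> bool:
    passed = sum(simulate_trial() for _ in range(trials))
    ok = passed / trials >= bar
    verdict = "deploy to the line" if ok else "keep iterating in the sim"
    print(f"{policy}: {passed}/{trials} sim trials passed -> {verdict}")
    return ok

gate_to_factory("pick-and-place-v2")
```

Failing this gate costs compute; failing on the line costs hardware and downtime. That asymmetry is the whole argument.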
Today, in Histories of the Future…
April 23, 2005: Jawed Karim uploaded “Me at the zoo,” the first video on a newfangled site called YouTube. The 19-second clip from the San Diego Zoo became the platform’s seed crystal. It looked... trivial... which was the point: For the first time, anybody could easily post and host any video of any sort. (Source: Smithsonian Magazine)
Why it’s peak geek: A global media empire began as a dead-simple user-upload box and a short clip with some elephants. The nerdy miracle was the interface: make publishing video cheap enough and simple enough, and the constraint moved from distribution to participation. As Marshall McLuhan said: “The medium is the message.”

