Wired's Will Knight visited Eka, a Cambridge, Massachusetts, robotics startup co-founded by MIT professor Pulkit Agrawal and ex-Google DeepMind researcher Tuomas Haarnoja. Knight reported on April 29 that he watched the company's robot arm screw a light bulb into a socket, chase a rolling bulb across a table, and reliably pick up an unfamiliar jumble (earplug box, hairbrush, plush keyring) using only its two-fingered grasp. His framing: "in more than a decade of writing about robots, I have never seen one move so naturally," and "of the few dozen robot arms on the market today, not one can screw in a light bulb." The pitch from Agrawal and Haarnoja: dexterity is the unlock, it is finally being cracked, and "trillions of dollars flow through the human hand." They think they are halfway there.
The dexterity claim is the part that actually matters, and the part Knight's piece does not technically substantiate. We see compelling video (the robot pawing for a rolling bulb, recovering from a slip, completing the screw) but no public benchmarks, no comparison runs, no published architecture. Most robotic-manipulation work today routes through Vision-Language-Action (VLA) foundation models such as RT-2, Octo, OpenVLA, and RT-X, the same architectural family Scout AI's defense-focused Fury was built on. Eka has not publicly disclosed whether it is extending an existing VLA approach or training its own. What is observable: the arm recovers from contact failures (the rolling bulb, the slipping keyring) without restarting the task, which is the behavior you expect from a model that maintains state through the manipulation rather than executing a scripted motion plan. That state-through-contact recovery is the open research frontier in dexterity. If Eka has it working reliably, that is the substance behind the demo.
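The distinction between a scripted motion plan and a policy that maintains state through contact can be made concrete with a toy sketch. Everything below (the `GraspWorld` environment, the slip probability, both controllers) is invented for illustration and is not a description of Eka's system; the point is only that the closed-loop controller re-observes after every action, so a slipped grasp triggers a re-grasp instead of derailing the task.

```python
import random

random.seed(7)  # deterministic demo


class GraspWorld:
    """Toy environment: each grasp attempt slips with some probability."""

    def __init__(self, slip_prob=0.4):
        self.slip_prob = slip_prob
        self.holding = False
        self.screwed = False

    def apply(self, action):
        if action == "grasp":
            # The grasp sometimes slips; the robot ends up empty-handed.
            self.holding = random.random() > self.slip_prob
        elif action == "screw" and self.holding:
            self.screwed = True

    def observe(self):
        return {"holding": self.holding, "screwed": self.screwed}

    def done(self):
        return self.screwed


def scripted_controller(steps, world):
    # Open-loop: execute a fixed motion plan. A slip mid-sequence
    # silently ruins every step that follows.
    for step in steps:
        world.apply(step)
    return world.done()


def closed_loop_controller(world, max_steps=20):
    # Closed-loop: re-observe after each action and choose the next
    # action from the current state, so a slip leads to a retry.
    for _ in range(max_steps):
        obs = world.observe()
        if obs["screwed"]:
            return True
        world.apply("screw" if obs["holding"] else "grasp")
    return world.done()
```

In the toy world, the scripted plan `["grasp", "screw"]` fails whenever its single grasp slips, while the closed-loop controller keeps re-grasping until the screw completes. Real VLA policies run the same loop with images and joint torques instead of booleans, which is why recovery behavior in a demo is evidence of closed-loop control rather than playback.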
Two patterns matter. First, the contrast with humanoid hardware. Earlier this session we covered the Japan Airlines humanoid-robot trial at Haneda Airport, where the staged demo showed a Unitree G1 "tottering up to a cargo container and making a vague pushing gesture" — the container actually moved when a human started the conveyor. That was hardware-with-no-dexterity. Eka is the inverse — dexterity-on-a-tabletop without the locomotion. The robotics market has separated cleanly into "we can move a humanoid body around but not manipulate" and "we can manipulate with a fixed arm but not move." Whoever cracks both first wins. Second, the "ChatGPT moment for robots" framing is now an industry-standard pitch. Every robotics startup raising money in 2026 uses some version of it. That framing lands when capability suddenly outruns expectation, which is what ChatGPT did in late 2022. For robots, that moment arrives when a non-expert can instruct a robot to do an unfamiliar manipulation task and have it succeed reliably. Eka's demo is on the path; whether it generalizes outside the demo is the open question.
For builders, three concrete things. First, if you build downstream of robotics — UI, tooling, evaluation, simulation — assume dexterity is on a 2-3 year timeline to working production deployment, but that the deployment will look like "tabletop arm doing a constrained task" before it looks like humanoids. Do not bet on a humanoid moment first. Second, the academic-cofounder plus ex-DeepMind-researcher pattern is the dominant founder fingerprint in current-frontier robotics — Skild AI, Physical Intelligence, Generalist Robotics, and now Eka all share it. If you are investing or recruiting, that pedigree concentration is the signal. Third, the missing detail in Eka's pitch is the same as the missing detail in every robotics-foundation-model claim: how does it generalize to unseen objects and unseen tasks? The honest test is not "screw in this light bulb" — it is "screw in a flathead screw with a Phillips screwdriver." Wait for that demo before you commit.
