On April 23, Toyota released video and technical notes from its halftime demonstration of CUE7, a 7-foot-2-inch, 163-pound humanoid basketball robot that stood up from a bench, walked to the line, dribbled the ball a few times, and made a clean free throw — in front of roughly 8,400 live fans at Toyota Arena Tokyo during halftime of an Alvark Tokyo game on April 12. No teleoperation. No pre-programmed motion. The story was picked up by AOL News, Interesting Engineering, CyberGuy, and Fox News.
The free throw is the headline. The hybrid RL + MPC control stack is the actual story.
What CUE7 actually did
A free throw is not easy for a robot. It looks easy — it’s a fixed distance, a fixed hoop, a nearly stationary player — which is exactly why pre-CUE7 basketball robots relied on analytically solving the projectile mechanics. CUE3, the 2019 model, held a Guinness world record for 2,020 consecutive assisted free throws — but “assisted” meant the ball was handed to it in a fixed position, the robot never walked, and the motion was scripted.
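For scale, "analytically solving the projectile mechanics" is a one-line closed-form ballistics solve. Here is a sketch in Python; the geometry (4.6 m to the rim center, 3.05 m rim height, a 2.3 m release point, a 52-degree launch angle) is illustrative, not Toyota's numbers, and drag and backspin are ignored:

```python
import math

G = 9.81  # gravity, m/s^2

def release_speed(d, h, theta_deg):
    """Closed-form ballistic speed to hit a target d meters downrange
    and h meters above the release point, launched at theta_deg degrees.
    No drag, no spin -- the classic scripted-shooter calculation."""
    th = math.radians(theta_deg)
    denom = 2.0 * math.cos(th) ** 2 * (d * math.tan(th) - h)
    if denom <= 0:
        raise ValueError("launch angle too flat to reach target height")
    return math.sqrt(G * d * d / denom)

# Hypothetical free-throw geometry: rim center ~4.6 m from the line,
# rim at 3.05 m, release point assumed at 2.3 m for a tall robot.
d, h = 4.6, 3.05 - 2.3
v = release_speed(d, h, 52.0)  # about 7.3 m/s
```

A fixed machine fed a fixed ball only has to hit that speed and angle repeatably. What it never has to do is find the stance, settle the ball, and re-derive the solve from wherever it happens to be standing.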
CUE7 is a different machine:
- Stands up autonomously from a seated position. That involves solving a full-body dynamic balance problem under gravity, not reading a scripted stand-up animation.
- Dribbles the ball — meaning the robot uses a vision loop to track the ball bouncing, adjusts hand impulse timing, and re-catches on each bounce. Dribbling has been a blocker for most humanoids because the error-correction window is about 150 ms.
- Walks to the line with basketball in hand, maintaining balance on a polished NBA-regulation court surface.
- Shoots. The shot is from a stance the robot had to find on its own, with a ball that has whatever angular momentum the dribble left it with. Not a cannon calibration.
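The dribble timing budget can be made concrete with a toy ballistic model. Every number below (rebound speed, hand height, loop latency) is an illustrative assumption, and real dribbling also involves drag, spin, and hand compliance:

```python
import math

G = 9.81          # m/s^2
WINDOW_S = 0.150  # the roughly 150 ms correction window cited above

def time_to_hand(v_up, hand_h):
    """Time (s) for a ball rebounding off the floor at v_up m/s to
    rise back to hand height hand_h (m). Ballistic model, no drag.
    Returns None if the bounce is too weak to reach the hand."""
    disc = v_up * v_up - 2.0 * G * hand_h
    if disc < 0:
        return None
    return (v_up - math.sqrt(disc)) / G  # first (ascending) crossing

def push_margin(v_up, hand_h, loop_latency):
    """Slack (s) between when a corrective palm impulse can be placed
    and ball-hand contact; negative means the loop is too slow."""
    t = time_to_hand(v_up, hand_h)
    return None if t is None else t - loop_latency

# Assumed numbers: 4 m/s rebound, hand at 0.7 m, 60 ms sense-to-act loop.
margin = push_margin(4.0, 0.7, 0.060)
```

With those numbers the whole bounce-to-contact window is about a quarter of a second, which is why a sub-150 ms sense-to-act loop is the gating requirement, not arm strength.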
The official Toyota release, summarized by Interesting Engineering and Automated Intelligence News, names the control system as a hybrid of reinforcement learning and model predictive control (MPC). That combination is the interesting engineering bit.
Why “RL + MPC” is what actually matters
Most humanoid robot demos in 2026 still lean heavily on one of two things:
- Teleoperation, where a human in a mocap suit puppets the robot (Tesla Optimus did this at the 2024 We Robot event, as Bloomberg documented). The robot has no policy; it has a puppeteer.
- Pre-scripted motion, where a trajectory is computed in simulation and replayed on hardware. Works for fixed environments. Breaks the moment the environment isn’t exactly what the script expected.
Neither generalizes to a factory floor, a warehouse aisle, a hospital room, or a home. Which is why the industry has spent three years trying to make learned policies (reinforcement learning) work on real hardware. The problem has always been that pure RL policies are brittle under edge cases — they are only as robust as their training distribution, and training distributions are never broad enough.
MPC (model predictive control) is the classical answer — solve a physics-aware constrained optimization at every time step to pick the next action. Stable, provably safe, and slow. The marriage of the two is what has been on every humanoid roadmap for 18 months: let the RL policy propose actions, let MPC validate and correct them in real time against a physics model.
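A minimal sketch of that propose-then-correct pattern, using a 1-D double integrator as a stand-in for the robot (every constant here is a toy assumption, and a real stack would solve a constrained QP over a full-body model rather than grid-search a scalar action):

```python
DT, HORIZON, X_LIMIT = 0.02, 25, 1.0  # toy model: 0.5 s lookahead

def rollout_ok(x, v, a):
    """Forward-simulate a double integrator for HORIZON steps under a
    constant action a; True if the position bound holds throughout."""
    for _ in range(HORIZON):
        v += a * DT
        x += v * DT
        if abs(x) > X_LIMIT:
            return False
    return True

def mpc_filter(x, v, a_proposed, a_max=5.0):
    """MPC-style safety filter: return the feasible action closest to
    the policy's proposal, found by scanning a 1-D action grid."""
    grid = [-a_max + i * (2.0 * a_max / 200) for i in range(201)]
    feasible = [a for a in grid if rollout_ok(x, v, a)]
    if not feasible:
        return -a_max if v > 0 else a_max  # emergency brake
    return min(feasible, key=lambda a: abs(a - a_proposed))

a_raw = 4.0  # stand-in for an RL policy output: "accelerate hard"
a_safe = mpc_filter(x=0.9, v=0.5, a_proposed=a_raw)
```

Near the position limit, the filter overrides the policy's proposed acceleration with braking: the RL policy supplies intent, the model-based layer supplies the guarantee. That division of labor is the whole design.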
CUE7 is one of the first public demonstrations of that marriage in front of a ticketed audience with no second takes. The halftime clip is the kind of artifact that gets shown to boards of directors for the next 18 months. It is the visual argument that learned humanoid control is finally legible to non-engineers.
The body Toyota is quietly showing
Specs, per Toyota’s disclosure:
- Height: 7’2” (≈2.18 m) — unusually tall by humanoid standards, designed specifically for the basketball vertical-reach envelope, not a humanoid labor form factor.
- Weight: ~163 lbs (≈74 kg) — heavier than Figure 03 (61 kg specified), but CUE7’s mass goes into reach rather than payload; it never hauls the ~20 kg totes Figure 03 is built for.
- Use case on the tin: basketball demonstration robot. Toyota has not announced a commercial CUE7.
- Prior generation records: CUE3 did 2,020 consecutive assisted free throws in 2019. CUE6 set the farthest shot by a robot record at ~80’6” in 2022.
The “basketball robot” framing is a clean way to ship a sports-entertainment-friendly story out of what is actually a platform R&D program. Toyota sells cars; it does not sell humanoids. But the company has FSR (Future Society Research) labs and has been quietly building humanoid platforms since the early 2000s. CUE is the public-facing piece of a much bigger private stack.
What this has to do with factory humanoids
A free throw is the world’s cleanest “pick and release from variable posture into a constrained target.” Which is, roughly, warehouse putaway. Same core primitives — vision-tracked object, articulated arm, impulse-constrained release to a location. The hardware-level similarities between a CUE7 shot and an Agility Digit depalletize operation run deeper than they look.
Every warehouse humanoid manufacturer — Figure, Agility, Apptronik, Agibot, Unitree, RobCo, Humanoid — needs some version of RL+MPC to get past scripted pick-and-place. Figure has Helix 02. Agibot has Genie-OP. Toyota’s CUE7 stack hasn’t been named publicly, but the capability profile is in the same neighborhood.
If CUE7’s motion can be generalized — and Toyota is an automaker with deep factory-robotics interests, so of course it will be — the relevant 2027 question is: does the CUE7 RL+MPC stack become a Toyota manufacturing humanoid? Toyota’s Agility Robotics Digit deal at Woodstock, Ontario in February 2026 is already proving the business case for humanoid RaaS. CUE7 is the autonomy stack Toyota would need to stop renting Digits and start shipping its own.
Why LostJobs cares
Every site post on a humanoid demo includes some version of: “the jobs this robot can actually do are still narrower than the marketing says.” CUE7 is different. The jobs implication is not that free-throw-shooters are going to be replaced (they are not — the NBA’s free-throw shooting role is called “player” and pays more than Toyota). The implication is that the controller behind the free throw is the same controller you need for the first mile of humanoid labor.
Three reads:
- This is the last cycle of “the robot is teleoperated” excuses. Once a hybrid RL+MPC stack does a real-world task in a ticketed arena with no second takes, the “still teleoperated” critique loses its force. The next Tesla Optimus or Figure demo has to clear this bar to be taken seriously.
- Toyota is in this race whether it’s marketing it or not. Japan’s biggest industrial employer has 375,000 employees globally, a quarter of them directly in manufacturing. A Toyota humanoid program with a working RL+MPC stack is a multi-decade employment story, not a sports demo.
- The 2,020-in-a-row to “one live shot” transition is the story. In 2019, the brag was repetition. In 2026, the brag is single-shot robustness in a stadium. That is the difference between “we built a very precise mechanism” and “we built an agent that can do the task from a realistic starting condition.” The first is a machine. The second is a robot.
The CUE3 held a world record for doing the same thing 2,020 times in a row. The CUE7 did the thing one time in front of 8,400 people and a live camera. Those are different kinds of accomplishment, and only the second one scales into a labor market.