Figure AI, a robotics company, has recently shown two of its F.03 humanoid robots autonomously resetting a full bedroom, including making a bed together, using only visual cues to coordinate their actions. The demonstration marks what the company calls the first instance of a single learned neural network performing collaborative tasks across multiple humanoid robots, mapping directly from vision to motion.

The robots are powered by Helix-02, a single Vision-Language-Action system that controls each machine's entire body. Unlike traditional robotic setups that rely on separate planners, message passing, or a central coordinator, these two humanoids read the room through their own cameras and infer each other's intent purely from motion. A head nod, a shift in stance, the angle of an arm: that is how they stay in sync.
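The pattern described above can be sketched in a few lines. This is a purely illustrative toy, not Figure AI's actual code or API: each robot runs an identical vision-to-action policy on its own camera feed, and the only "communication" channel is that each robot's camera happens to see the other. The class and function names here are hypothetical.

```python
class VisionToActionPolicy:
    """Stand-in for a learned vision-language-action network (hypothetical).

    In the real system this would be a neural network mapping camera pixels
    to whole-body joint targets; here we return a placeholder vector of the
    right shape to show the data flow.
    """

    def __init__(self, num_joints: int = 40):
        self.num_joints = num_joints

    def act(self, image: list) -> list:
        # Placeholder for a forward pass through the learned policy.
        return [0.0] * self.num_joints


def control_step(policy: VisionToActionPolicy, camera_frame: list) -> list:
    # One tick of the loop: pixels in, joint targets out. Crucially, this
    # same loop runs independently on every robot in the room, with no
    # message passing and no central coordinator.
    return policy.act(camera_frame)


policy = VisionToActionPolicy()
frame = [[0, 0, 0]] * 224 * 224  # dummy camera image (flattened RGB pixels)
targets = control_step(policy, frame)
print(len(targets))  # 40
```

The point of the sketch is the interface, not the internals: because the partner robot is simply part of the visual scene, its posture and motion feed into the policy the same way a duvet or a door handle does, which is how purely visual coordination can emerge.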

In the video, the pair opens doors, hangs clothes on a coat tree, puts away a pair of headphones, closes a book, takes out rubbish, pushes an office chair under a desk, and then works together to make the bed. The bedding task is particularly demanding: they lift, unfurl, spread, fold, and smooth a duvet, correcting wrinkles and bunched edges as the fabric settles. Every step happens at normal speed, with no teleoperation and no human intervention.

The underlying Helix-02 system was not built specifically for bedrooms. It is a single learned policy that expands its skills as it is fed more data. Earlier this year, the same approach allowed a Figure robot to load a dishwasher in a full-sized kitchen in four minutes, and in March, a solo F.03 tidied a living room, spraying and wiping surfaces, sorting toys, and replacing cushions on a sofa. The bedroom reset is the latest layer on top of those earlier capabilities, all achieved without altering the core algorithm.

[Image: Figure AI robots in a bedroom]

What makes the bed-making sequence especially difficult is the combination of three problems. First, two humanoids in one room are not just two single-robot tasks running side by side. Every move one machine makes changes the problem the other must solve, and each robot must constantly read and predict its partner's next action while its own actions are altering the scene. Second, the central object, the duvet, has no fixed shape, no rigid geometry, and no natural divide between "your half" and "mine." Each robot commits to a contact point while predicting what its partner will do, updating those predictions tens of times per second as the fabric folds, drapes, and slides under shared tension. Third, the entire sequence runs in under two minutes, requiring the robots to walk naturally between locations, balance dynamically on one leg to operate a pedal bin, and switch seamlessly between rigid, deformable, articulated, and collaborative manipulation, all without scripted handoffs between subtasks.

Brett Adcock, Figure AI's Chief Executive, wrote on X that there is "no explicit messaging between these robots; they coordinate their actions fully visually, e.g. head nods." He added that the task was fully autonomous, with no teleoperation, and ran at 1x speed.

The company describes the demonstration as an important first step toward a future in which intelligent humanoids routinely work together in homes, warehouses, and factories, handling shared goals in spaces where people, objects, and other machines are constantly on the move. For now, the robots have shown they can make the bed. The same learned system, Figure says, will continue to grow as more data is added, and the team is hiring.


