(Image Credit: Nick Donnoli/Orangebox Pictures)

“We Are No Longer Limited by the Laws of Physical Reality”: Scientists Use Virtual Reality and an ‘Invisible’ Robot to Move Real-World Objects

Scientists from Princeton University, attempting to “blur the lines” between the virtual world and the physical world, have developed a VR-robot interface that allows users to select virtual objects in a digital environment and have the equivalent real-world object ‘magically’ delivered to them in real time by an invisible robot.

Dubbed Reality Promises: Virtual-Physical Decoupling Illusions in Mixed Reality via Invisible Mobile Robots, the new project aims to enhance workplace and personal collaboration, especially for people in remote locations. The mixed reality interface could also enable entirely new forms of interactive entertainment and gaming that seamlessly blend the virtual and physical worlds.

“By decoupling virtual modes of manipulation from the physical mode of manipulation, Reality Promises create the illusion of manipulating the physical scene instantaneously, using magical forces or fantastical creatures,” the researchers write.

Blurring the Lines Between Virtual Reality and the Physical World

In an email to The Debrief, Princeton assistant computer science professor Parastoo Abtahi said this technology could “fundamentally change” how people interact with the physical world because “we are no longer limited by the laws of physical reality in our interactions.”

The professor also forwarded a video in which postdoctoral research associate and project leader Mohamed Kari and colleagues demonstrate the system. A user wearing a VR headset sees a digital rendering of the room they are physically in and can interact with the objects in that room, virtually picking them up and placing them in new locations. Once the user places a virtual object somewhere new, the digital environment begins to “render” the object there, complete with a countdown clock to completion.

A digital bee delivers a digital rendering of a tube of chips selected by the user. In reality, an invisible robot delivers a physical tube of chips to the same real-world location, making it appear as if the bee flew the selected item across the room. Image credit: Princeton, Abtahi, Kari, et al.

In the “behind the scenes” segment, the video shows how a physical-world robot collects each selected item from its initial location and moves it to the new one. Three objects, a plant, a box of coconut water, and a tube of chips, are moved virtually, and the robot completes the delivery of each real-world object just as its rendering timer expires.

Notably, the robot remains invisible to the headset wearer throughout the process. In one segment, a digital bee delivers the chips by flying through the air. In a statement announcing Reality Promises, Kari explained that removing all unnecessary technical details from the virtual rendering, “even the robot itself,” lets the user experience the illusion that the object was moved virtually “as if by magic.”

“Yes, the robot is actually delivering the chip in the last clip, but also the plant and the coconut water in the other clips,” Abtahi told The Debrief. “No footage is theoretical.”


A Peek Behind the Scenes

Although the video presents a relatively seamless interface between virtual reality and the physical world, the Princeton team employed several advanced pieces of equipment and software tools to create the experiment. The first step involved creating a digital twin of the experiment’s physical workspace.

According to Abtahi, Kari’s team collected approximately 200 images of the environment and its objects using an iPhone 14 Pro. They then used the device to capture several approximately 15-second video clips of the same environment.

Next, the photos and videos were uploaded to a computer. According to Abtahi, the team used software tools that employ a process called 3D Gaussian splatting to turn the raw image and video data into a virtual workspace that matches the scanned, real-world space.

“We train the splat in Jawset Postshot (v0.5.146), using the MCMC profile, and cap the splat count at 25k splats for scenes and 3k-5k for objects,” the professor explained.
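In broad strokes, a 3D Gaussian splat scene represents a space as a cloud of small, semi-transparent 3D Gaussians, each carrying a position, shape, color, and opacity; the counts Abtahi mentions cap how many of these primitives each scene or object may use. The sketch below is purely illustrative of that data layout and budget, and does not reflect the team’s actual code or Postshot’s internals:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianSplat:
    # Each splat is an anisotropic 3D Gaussian that a renderer projects
    # onto the image plane and alpha-blends with its neighbors.
    position: np.ndarray  # (3,) center in world space
    scale: np.ndarray     # (3,) per-axis spread of the Gaussian
    rotation: np.ndarray  # (4,) unit quaternion orienting the Gaussian
    color: np.ndarray     # (3,) RGB (real systems often store SH coefficients)
    opacity: float        # blending weight in [0, 1]

def make_scene(n_splats: int, cap: int = 25_000) -> list[GaussianSplat]:
    """Build a random scene while enforcing a splat budget, analogous to
    the 25k-per-scene cap the researchers describe. Values are random
    placeholders, not a trained reconstruction."""
    n = min(n_splats, cap)
    rng = np.random.default_rng(0)
    return [
        GaussianSplat(
            position=rng.uniform(-1, 1, 3),
            scale=rng.uniform(0.01, 0.1, 3),
            rotation=np.array([1.0, 0.0, 0.0, 0.0]),  # identity quaternion
            color=rng.uniform(0, 1, 3),
            opacity=float(rng.uniform(0.2, 1.0)),
        )
        for _ in range(n)
    ]

scene = make_scene(30_000)
print(len(scene))  # prints 25000: the budget clamps the request
```

In an actual pipeline, these parameters are optimized against the captured photos and videos rather than randomized, which is the training step Postshot’s MCMC profile performs.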

The final step involved “erasing” objects from the real world to render them invisible in the virtual world.

Once the team created their virtual environment, they acquired an off-the-shelf Stretch 3 robot and a Meta Quest 3 virtual reality headset. To control the robot, they programmed it to react to simple hand gestures from the user.

According to the team’s statement, these preprogrammed gestures command the robot to collect real-world objects selected in the virtual environment and move them to the desired location. In the video, this meant delivering the water bottle, chips, and plant to the location that the user had virtually dropped them. Because the robot is operating in a mixed reality environment, the researchers point out that it was also wearing a VR headset, “so it knows where to place objects within the virtual environment.”

Next Steps Include Making the Robot ‘Truly Invisible’

Following the successful demonstrations, the Princeton team is already exploring ways to potentially improve the system’s overall performance. Abtahi told The Debrief that they would like to use the robot to “automate” the digital scanning and digitizing of the environment by letting it drive around autonomously and potentially manipulate each object “to capture all sides.”

Because a ghostly outline of the robot can be seen in the current version, the professor told The Debrief they want to improve the visual fidelity with higher splat counts and better color matching “with passthrough.” The professor said these changes could render the robot “truly invisible” and make the objects in the scene more “photorealistic.”

Finally, the team aims to enhance the robot’s functionality. Abtahi said this would include exploring “richer manipulations beyond pick-and-place tasks,” such as pressing buttons or pouring liquids. The researcher also said the team is interested in “exploring specific applications,” including the possibility of “remote telepresence and collaboration.”

The paper “Reality Promises: Virtual-Physical Decoupling Illusions in Mixed Reality via Invisible Mobile Robots” will be presented in September at the ACM Symposium on User Interface Software and Technology in Busan, Korea.

Christopher Plain is a Science Fiction and Fantasy novelist and Head Science Writer at The Debrief. Follow and connect with him on X, learn about his books at plainfiction.com, or email him directly at christopher@thedebrief.org.