Engineers from Google DeepMind recently unveiled an AI-driven robot that can play table tennis against humans at an amateur level.
The development represents the first time a robot has been taught to play a sport at a human level and marks a significant step forward in applying AI and robotics to complex, dynamic environments. The achievement underscores the advances in machine learning and the potential for AI to excel in physical, real-world tasks.
In a recent paper published as a preprint on DeepMind’s website and arXiv, DeepMind engineers highlighted the AI-driven robot’s ability to engage in a fast-paced, reactive sport traditionally dominated by human agility and intuition.
“Achieving human-level speed and performance on real-world tasks is a north star for the robotics research community,” researchers wrote. “This work takes a step towards that goal and presents the first learned robot agent that reaches amateur human-level performance in competitive table tennis.”
The study elaborates on the intricate design and learning processes that went into developing the table tennis-playing AI-driven robot.
Central to this innovation is the combination of reinforcement learning and advanced motor control algorithms, which allow the robot to anticipate, react to, and execute precise movements in response to its opponent’s actions.
Unlike previous iterations of robotic systems, which often struggled with the unpredictability and speed of such tasks, researchers say this new system demonstrates a level of competence that rivals amateur human players.
Table tennis, also commonly known as “ping-pong,” is a sport that demands quick reflexes, strategic thinking, and the ability to adapt to an opponent’s style and speed. To be successful, a player must be able to handle the unpredictability and rapid decision-making that occurs during a match.
For a robot, this means processing vast amounts of data in real time, predicting the ball’s trajectory, and executing a counter-strike with precision, all within a fraction of a second.
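To give a sense of one piece of that problem, the sketch below shows a bare-bones way a system might extrapolate an incoming ball’s path from two camera observations and estimate where it will cross the paddle’s plane. It is an illustrative simplification under a plain ballistic model (no spin or air drag), and the function name, coordinate conventions, and numbers are hypothetical rather than details from DeepMind’s system.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2, z axis points up

def predict_crossing(p0, p1, t0, t1, paddle_x):
    """Estimate where the ball will cross the plane x = paddle_x.

    p0, p1: ball positions (x, y, z) from two consecutive camera frames.
    t0, t1: timestamps of those frames, in seconds.
    Uses constant velocity plus gravity only; a real system would also
    have to account for spin and air drag.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    v = (p1 - p0) / (t1 - t0)            # finite-difference velocity estimate
    if v[0] == 0:                        # ball is not moving toward the paddle
        return None
    dt = (paddle_x - p1[0]) / v[0]       # time until the plane is reached
    if dt < 0:
        return None
    # Constant-acceleration extrapolation: p(t) = p1 + v*dt + 0.5*g*dt^2
    return p1 + v * dt + 0.5 * GRAVITY * dt ** 2

# Example: two observations 8 ms apart, paddle plane 1.37 m away (illustrative)
hit_point = predict_crossing((0.20, 0.10, 0.30), (0.25, 0.11, 0.31), 0.000, 0.008, 1.37)
```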
To achieve this, the engineers at DeepMind employed a multi-faceted approach. The robot was equipped with high-speed cameras and sensors to track the ball’s movement and the opponent’s paddle.
These inputs were fed into a neural network trained through reinforcement learning, a process in which the robot learns from trial and error, gradually improving its performance through thousands of iterations.
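As a rough illustration of that trial-and-error idea, the toy sketch below improves a tiny “return the ball” policy by randomly perturbing it and keeping only the changes that raise its hit rate. The environment, reward, and hill-climbing update are stand-ins invented for this example; DeepMind’s actual training setup is far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)
ball_heights = rng.uniform(0.0, 1.0, size=500)    # heights of incoming balls (toy data)

def return_rate(params):
    """Toy surrogate for a rally: the paddle must reach the incoming ball's height.
    A linear "policy" maps the observed height to a paddle height, and the reward
    is the fraction of balls intercepted within 5 cm."""
    paddle = params[0] * ball_heights + params[1]
    return np.mean(np.abs(paddle - ball_heights) < 0.05)

# Trial-and-error loop: perturb the policy, keep only changes that raise the reward.
params = np.zeros(2)
best = return_rate(params)
for _ in range(2000):
    candidate = params + rng.normal(scale=0.1, size=2)
    score = return_rate(candidate)
    if score > best:
        params, best = candidate, score

print(f"return rate after training: {best:.2f}")
```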
The system also integrated advanced motor control techniques, enabling the robot to make micro-adjustments to its paddle position and swing based on real-time feedback.
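The snippet below sketches what a single cycle of that kind of closed-loop correction could look like: a simple proportional nudge of the paddle toward a predicted interception height, capped to a small step per control cycle. The gains, step limits, and update rate are illustrative placeholders, not values from the paper.

```python
def adjust_paddle(current_z, target_z, max_step=0.01, gain=0.5):
    """One control cycle: nudge the paddle toward the predicted interception
    height, limited to a small per-cycle step so the arm moves smoothly."""
    error = target_z - current_z
    step = max(-max_step, min(max_step, gain * error))
    return current_z + step

# Run the correction loop at a hypothetical 100 Hz until contact time.
paddle_z, target_z = 0.30, 0.42
for _ in range(25):                      # roughly 250 ms of lead time before contact
    paddle_z = adjust_paddle(paddle_z, target_z)
```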
This combination of technologies allowed the robot to play at an amateur level, engaging in rallies and making strategic decisions about shot placement and spin.
To evaluate the new system’s skill, researchers pitted the AI-driven robot against 29 human table tennis players whose skill levels spanned four tiers: beginner, intermediate, advanced, and advanced+.
Using a 3D-printed paddle, DeepMind’s robot won 45% of all matches, including going undefeated against every beginner-level human player. However, alleviating some fears that robots will soon send humans to the unemployment line, the AI-driven robot won only 55% of its matches against intermediate players.
Against advanced players, most of whom had been playing table tennis for more than five years and had competed in an average of 14 professional tournaments, the AI-driven robot failed to win a single match.
So, while DeepMind’s AI-driven robot is not yet at the level of a professional human player, its success against casual players still marks a significant milestone in AI development.
“Even a few months back, we projected that realistically the robot may not be able to win against people it had not played before,” Dr. Pannag Sanketi, tech lead manager on the robotics team at Google DeepMind, told MIT Technology Review. “The system certainly exceeded our expectations.”
Researchers also noted an intriguing trend: the AI-driven robot always won the first match against beginner- and intermediate-skilled competitors. Engineers hypothesized that this might be because players need time to adapt to the unique experience of playing against a robotic arm.
However, despite the initial challenge, all human competitors reported enjoying their matches against the AI. “I would definitely love to have it as a training partner, someone to play some matches with from time to time,” one competitor told MIT Technology Review.
The implications of this development extend far beyond the realm of sports. A robot’s ability to perform in a complex, high-speed environment like table tennis opens the door to a wide range of applications in industries where precision and adaptability are crucial.
For instance, this technology could be applied in manufacturing, where robots need to handle delicate tasks with precision, or in healthcare, where they could assist in surgeries that require a high degree of dexterity and real-time decision-making.
Furthermore, this project’s success underscores AI’s potential to bridge the gap between the virtual and physical worlds. Traditionally, AI has excelled in tasks confined to the digital realm, such as data analysis, pattern recognition, and strategic games like chess and Go.
The ability to translate these skills into physical actions, as demonstrated by the table tennis-playing robot, represents a significant leap forward in the field of robotics.
While this marks the first AI-driven robot able to compete with humans in a sport, it isn’t the first major breakthrough by DeepMind, the British-American artificial intelligence research lab that operates as a subsidiary of Google.
In 2020, DeepMind unveiled AlphaFold, an AI program that effectively solved the “protein folding problem” by accurately predicting the 3D structures of proteins; it has since been used to predict the structures of nearly all known proteins. The breakthrough marked a monumental achievement, significantly advancing our understanding of molecular biology and paving the way for accelerated drug development.
The DeepMind team believes that this latest innovation is just the beginning. They see the development of the table tennis-playing robot as a proof of concept that could lead to more sophisticated AI-driven systems capable of performing a wide range of tasks that were once thought to be the exclusive domain of humans.
“This is the first robot agent capable of playing a sport with humans at human level and represents a milestone in robot learning and control,” researchers wrote. “However, it is also only a small step towards a long-standing goal in robotics of achieving human level performance on many useful real world skills.”
Tim McMillan is a retired law enforcement executive, investigative reporter and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan. Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com