Many people enjoy creating a virtual representation of themselves as a virtual reality avatar. Virtual reality lets people experiment with new styles and outfits that they may not feel comfortable trying in the real world. Individuals can also try activities they might be afraid to do in real life, like skydiving, which their avatar can do safely within a game setting.
While the avatars from older games were rather crude-looking, some of the newer games have very lifelike avatars. These avatars may continue to get even more convincing as research from ETH Zurich reveals a new, more efficient way of designing avatars using AI algorithms.
Creating a Virtual Reality Avatar
Making a convincing virtual reality avatar can be quite challenging. Most avatar software uses computer animation to create realistic movements, like blinking. Companies such as Ready Player Me, which specializes in creating unique avatars, have users upload a photo of their face; from there, the software creates a virtual composite that mimics the user's actual face. Other companies, like Apple and Facebook, plan to implement motion sensor technology in virtual reality headsets to track eye movement, making the avatar more lifelike with real-time motion. However, this can be an expensive undertaking, which is why some companies are looking at alternative ways to render avatar movements.
Analysis: Artificial Intelligence Can Streamline the Process
AI algorithms have been used in virtual reality avatar creation before. Previous approaches had the algorithms map out a user's body shape using multiple points inside and outside the body. From there, the algorithm would memorize the path of motion when an individual moved, storing these movements away. To show effective movement, the AI algorithms had to store thousands of these movement paths, which took a lot of time and money. It also meant that for movements outside the memory database, the avatar could look strange, with detached arms or joints out of place, as the AI scrambled to fill in a movement.
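To make the idea concrete, here is a minimal, hypothetical sketch of the lookup-style approach described above: recorded motion clips sit in a memory, and playback simply retrieves the clip whose starting frame best matches the current pose. The class, method names, and data shapes are illustrative assumptions, not code from any of the systems mentioned.

```python
import numpy as np

# Simplified sketch (not actual research code) of a "memorized motion" approach.
# Each stored clip is an array of body-point positions over time; at runtime the
# system replays the clip whose first frame is closest to the current pose.

class MotionMemory:
    def __init__(self):
        self.clips = []  # list of (start_pose, motion) pairs; motion: (frames, points, 3)

    def add_clip(self, motion):
        motion = np.asarray(motion, dtype=float)
        self.clips.append((motion[0], motion))

    def playback(self, current_pose):
        """Return the stored clip whose first frame best matches current_pose."""
        current_pose = np.asarray(current_pose, dtype=float)
        best = min(self.clips, key=lambda c: np.linalg.norm(c[0] - current_pose))
        # For poses far from anything stored, this match can be poor,
        # which is where the detached-looking limbs come from.
        return best[1]

# Example: store one tiny two-frame "clip" of three tracked points, then query it.
memory = MotionMemory()
memory.add_clip([[[0, 0, 0], [0, 1, 0], [0, 2, 0]],
                 [[0, 0, 0], [0, 1, 0.1], [0, 2, 0.2]]])
print(memory.playback([[0, 0, 0], [0, 1, 0], [0, 2, 0]]).shape)  # (2, 3, 3)
```

The cost of this strategy scales with how many clips are captured and stored, which is why the article describes it as taking a lot of time and money.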
To make this process more efficient, researchers at ETH Zurich looked at having the AI algorithms create their own movement paths from templates of moving poses. Because these poses share similar starting points, the modeling process goes significantly faster. The scientists even found that their new method could create movements like a somersault.
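For contrast, here is a toy sketch of generating a movement path from pose templates instead of replaying a stored clip, assuming nothing more than a linear blend between a shared rest pose and a template pose. The function name and the blending scheme are assumptions made for illustration; the actual ETH Zurich method is far more sophisticated.

```python
import numpy as np

# Hypothetical illustration only: synthesize a new movement path from a pose
# template that shares a starting point, rather than looking one up in memory.

def generate_path(rest_pose, template_pose, num_frames=10):
    """Interpolate from a shared rest pose toward a template pose,
    producing a new sequence of frames on the fly."""
    rest = np.asarray(rest_pose, dtype=float)
    target = np.asarray(template_pose, dtype=float)
    weights = np.linspace(0.0, 1.0, num_frames)[:, None, None]
    return (1 - weights) * rest + weights * target  # shape: (num_frames, points, 3)

# Example: three tracked points moving from a rest pose toward a "raised arm" template.
rest = [[0, 0, 0], [0, 1, 0], [0, 2, 0]]
raised = [[0, 0, 0], [0.5, 1, 0], [1, 2, 0]]
print(generate_path(rest, raised).shape)  # (10, 3, 3)
```

Because the path is generated rather than retrieved, nothing needs to be captured and stored for every possible movement, which is the efficiency gain the paragraph above describes.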
Outlook: AI Algorithms Causing More Deep Fakes?
With more lifelike avatars, there is a general concern that these avatars could be used in deep fake products. Deep fakes are digital images or videos in which a person's face or figure has been replaced with someone else's. Deep fakes have already caused quite a few problems by creating fake scandals. Many actors have already lent their likenesses to deep fake videos, which could have major implications for the film industry as well. With more lifelike movements on digital avatars, it may get even harder to tell what is real and what isn't, spelling big problems for the public in terms of fake news.
Kenna Castleberry is a staff writer at the Debrief and the Science Communicator at JILA (a partnership between the University of Colorado Boulder and NIST). She focuses on deep tech, the metaverse, and quantum technology. You can find more of her work at her website: https://kennacastleberry.com/