Researchers at Carnegie Mellon University have developed an artificial intelligence tool that translates words into physical movements. Known as Joint Language-to-Pose, or JL2P, the tool connects natural language with 3D pose models.
JL2P predicts poses from language and was trained end to end using a curriculum-style approach: the model first learned to complete shorter, easier pose sequences before moving on to longer, more difficult ones.
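The shorter-sequences-first idea can be sketched in a few lines. This is a minimal, illustrative example of curriculum training on sequence length; the function names and the truncation strategy are assumptions for clarity, not the authors' actual code.

```python
def curriculum_schedule(max_len, stages):
    """Yield progressively longer target sequence lengths,
    ending at the full sequence length."""
    step = max(1, max_len // stages)
    length = step
    while length < max_len:
        yield length
        length += step
    yield max_len


def train_with_curriculum(pairs, max_len, stages, train_step):
    """Train on (sentence, pose_sequence) pairs, truncating each
    pose sequence to the current curriculum length at every stage.
    `train_step` stands in for one gradient update of the model."""
    for length in curriculum_schedule(max_len, stages):
        for sentence, poses in pairs:
            train_step(sentence, poses[:length])
```

In this sketch, early stages expose the model only to short pose prefixes, and later stages extend training to full-length sequences.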
JL2P's animations are currently limited to stick figures, but the ability to translate words into human-like movements could eventually enable humanoid robots to perform physical tasks. The technology could also be used to create virtual characters for games or films.
JL2P is not the first work to bridge words and images. ObjGAN, introduced by Microsoft in June, generates sketches and storyboards from text descriptions, while Disney's AI algorithm turns scripts into storyboards. Nvidia's GauGAN lets users paint landscapes with brushes labeled with words such as "tree," "mountain," and "sky."
JL2P can animate actions such as walking, running, and playing instruments like the guitar or violin; it can also follow directional instructions such as "left" or "right" and speed modifiers such as "slow" or "fast."
Finally, it is worth noting that JL2P achieved a 9 percent improvement in human-movement modeling compared with the previous state-of-the-art model proposed by SRI International researchers in 2018.