Creating realistic motion sequences with machine learning?

A five-person team from Ubisoft La Forge and York University is researching how realistic movements can be generated with the support of a neural network.

In nuce: Saeed Ghorbani, researcher at Ubisoft Toronto, published an almost six-minute video on YouTube on 19 September this year showing ZeroEGGS, a neural network that helps to create speech-driven gestures. The research project is being developed in collaboration with Ubisoft La Forge and York University.

ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech

In toto: According to the description, ZeroEGGS requires two things to generate a realistic-looking movement: a speech recording and a short example motion clip demonstrating the desired movement style. From these two ingredients, the system generates lifelike full-body movements. According to the Ubisoft research team, ZeroEGGS surpasses previous techniques. The highlight: from the same input, ZeroEGGS can generate several different outputs, i.e. varying movements.
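To illustrate the idea of a speech- and style-conditioned gesture generator, here is a minimal, hypothetical sketch in PyTorch. It is not the ZeroEGGS code; the module names, tensor shapes and the noise-based sampling are assumptions made purely for demonstration of the input/output behaviour described above (speech plus an example motion clip in, multiple differing gesture sequences out).

```python
# Illustrative sketch only: a toy, style-conditioned gesture generator.
# All names, shapes and the sampling scheme are assumptions, not the ZeroEGGS implementation.
import torch
import torch.nn as nn

class ToyGestureGenerator(nn.Module):
    def __init__(self, speech_dim=64, pose_dim=75, style_dim=16, hidden=128):
        super().__init__()
        # Encodes per-frame speech features (e.g. mel-spectrogram slices).
        self.speech_enc = nn.GRU(speech_dim, hidden, batch_first=True)
        # Pools a short example motion clip into one style embedding;
        # conditioning on such an embedding is what allows zero-shot style control.
        self.style_enc = nn.Sequential(nn.Linear(pose_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, style_dim))
        # Decodes speech context + style embedding + random noise into per-frame poses.
        self.decoder = nn.GRU(hidden + 2 * style_dim, hidden, batch_first=True)
        self.to_pose = nn.Linear(hidden, pose_dim)

    def forward(self, speech, example_motion, noise=None):
        B, T, _ = speech.shape
        ctx, _ = self.speech_enc(speech)                    # (B, T, hidden)
        style = self.style_enc(example_motion).mean(dim=1)  # (B, style_dim)
        if noise is None:
            # Different noise draws yield different gestures for the same input.
            noise = torch.randn(B, style.shape[-1])
        cond = torch.cat([style, noise], dim=-1).unsqueeze(1).expand(B, T, -1)
        h, _ = self.decoder(torch.cat([ctx, cond], dim=-1))
        return self.to_pose(h)                              # (B, T, pose_dim) joint parameters

# Usage: one speech clip and one example motion clip, sampled twice,
# give two gesture sequences of the same length but with different motion.
model = ToyGestureGenerator()
speech = torch.randn(1, 120, 64)          # ~120 frames of speech features
example_motion = torch.randn(1, 60, 75)   # short clip demonstrating the desired style
gestures_a = model(speech, example_motion)
gestures_b = model(speech, example_motion)
print(gestures_a.shape, torch.allclose(gestures_a, gestures_b))  # same shape, different output
```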

Researchers: The video lists Ubisoft La Forge and York University as the research institutions involved. In addition to Saeed Ghorbani, the researchers involved are listed as follows: Ylva Ferstl (research and development scientist), Daniel Holden (researcher on animation and machine learning), Nikolaus F. Troje (Chair for Reality Research at York University) and Marc-André Carbonneau (researcher on machine learning).

Click further: The paper entitled “ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech” is available on the corresponding page on github.com. Digital Production last reported on the research activities of Ubisoft La Forge on 30 March this year. At that time, the topic was a tool for creature rigging that makes it possible to generate movement sequences for animal characters without any motion capturing.

Source: 80.lv (news from Theodore McKenzie)