Discover “EDGE”: a diffusion-based AI model that generates realistic, long-form music-conditioned dance sequences

Many cultures place great value on dance as a means of expression, communication, and social connection. Producing new dances or dance animations is challenging, however, because dance movements are expressive and free-form while still being carefully structured by the music. In practice, this requires either time-consuming manual animation or costly motion capture. Computational methods that automatically generate dance can reduce this burden, with a wide range of applications: helping animators create new choreography and giving interactive characters in video games or virtual reality realistic, varied movements driven by user-supplied music. Computational dance generation can also illuminate how music and movement interact, an active area of study in neuroscience.

Past research has made significant strides in applying machine learning to this problem, yet it has had little success producing dances from music that adhere to user-specified constraints. Moreover, prior works frequently rely on quantitative metrics that the authors show to be unreliable, and evaluating generated dances remains a difficult, subjective process. This article covers Editable Dance Generation (EDGE), a state-of-the-art dance generation technique that produces physically plausible, realistic dance movements from input music. The approach pairs a transformer-based diffusion model with Jukebox, a powerful pre-trained musical feature extractor.

EDGE creates various physically plausible dance choreographies based on musical compositions

Thanks to its diffusion-based design, EDGE supports powerful editing capabilities such as joint-wise conditioning. Beyond the benefits these modeling decisions confer, the authors also propose a new metric that captures the physical accuracy of ground-contact behavior without explicit physical modeling. In summary, their contributions are:
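The kind of editing this enables can be illustrated with a standard inpainting-style sampling step for diffusion models: user-constrained joints or frames are kept by re-noising the known motion to the current timestep, while the model fills in the rest. This is a minimal, hypothetical sketch of that masking idea (the function names `denoise_fn` and `renoise_fn` are placeholders, not EDGE's actual API):

```python
import numpy as np

def masked_denoise_step(x_t, known, mask, denoise_fn, renoise_fn, t):
    """One edit-aware denoising step (inpainting-style sketch).

    x_t:    current noisy motion, shape (frames, features)
    known:  clean motion values the user wants to keep (same shape)
    mask:   1 where motion is user-constrained, 0 where freely generated
    """
    x_prev_gen = denoise_fn(x_t, t)            # model's proposal for step t-1
    x_prev_known = renoise_fn(known, t - 1)    # known region noised to level t-1
    # Keep the constrained region, let the model generate the rest.
    return mask * x_prev_known + (1 - mask) * x_prev_gen
```

Repeating this step across the full noise schedule yields motion that agrees with the constraints while remaining coherent elsewhere, which is what makes joint-wise and temporal editing possible without retraining.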

1. They present a diffusion-based dance generation method that can produce dance sequences of arbitrary length while combining state-of-the-art performance with powerful editing capabilities.

2. They examine metrics from previous studies and demonstrate, through an extensive user study, that they are inaccurate proxies for human-rated quality.

3. They introduce the Physical Foot Contact score, a simple new acceleration-based quantitative metric for assessing the physical plausibility of generated kinematic motion, requiring no explicit physical modeling. They also propose a novel Contact Consistency Loss that removes the physically implausible foot slippage common in generated motions.

4. Using the musical audio representations of Jukebox, a pre-trained generative model for music that has previously shown strong performance on music-specific prediction tasks, they improve on prior hand-crafted audio feature extraction methods.
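To make the foot-contact idea in contribution 3 concrete, here is a simplified, hypothetical illustration of how one might flag implausible foot slippage from kinematics alone: during frames where a foot is near the ground (treated as contact), its horizontal speed should be close to zero. This is not the paper's exact Physical Foot Contact formula, just a sketch of the underlying intuition; the function name and thresholds are assumptions:

```python
import numpy as np

def foot_slide_score(foot_pos, fps=30, height_thresh=0.05):
    """Average horizontal foot speed during near-ground frames.

    foot_pos: (T, 3) array of one foot joint's xyz positions in metres,
              with y as the vertical axis. Lower scores = less sliding,
              i.e. more physically plausible contact behavior.
    """
    vel = np.diff(foot_pos, axis=0) * fps           # (T-1, 3) finite-difference velocity
    contact = foot_pos[1:, 1] < height_thresh       # crude contact detection by height
    if not contact.any():
        return 0.0                                  # foot never touches the ground
    horiz_speed = np.linalg.norm(vel[contact][:, [0, 2]], axis=1)
    return float(horiz_speed.mean())
```

A perfectly planted foot scores 0, while a foot that glides along the floor scores proportionally to its sliding speed; a loss built on this signal penalizes exactly the slippage artifact the Contact Consistency Loss targets.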

Check out the project website, which also hosts impressive video demonstrations; it's not something you see every day.

Check out the Paper and Project. All credit for this research goes to the researchers on this project. Also, don't forget to join our Reddit page and Discord channel, where we share the latest AI research news, cool AI projects, and more.

Aneesh Tickoo is an intern consultant at MarktechPost. He is currently pursuing his undergraduate studies in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He enjoys connecting with people and collaborating on interesting projects.

