Everybody Dance Now is a new paper out of UC Berkeley that's able to create photorealistic video of someone dancing in the style of another, more professional dancer. It's like autotune for dance! They extract pose from a source subject (a trained dancer), train a generative model on video of a target subject, and then transfer the source's dance moves onto the target. In the generated video, the target subject takes on the source subject's dance moves as if they were their own! Incredible work, and it has huge implications for society as a whole. In this video, I'll explain the generative model they used with code and animations, as well as applications of this technology. Enjoy!
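To give a flavor of the pipeline before the video: because the source dancer and the target subject have different body proportions and stand in different parts of the frame, the paper normalizes the source's pose to the target before the generator renders each frame. Here's a minimal sketch of that idea (my own simplification with made-up function and parameter names, not the authors' code):

```python
# A toy version of global pose normalization: rescale 2D keypoints from the
# source dancer so the figure spans the target subject's height, anchored at
# the target's ankle line. (Simplified; the real pipeline works on full
# pose-estimation skeletons per frame.)

def transfer_pose(src_pose, src_ankle_y, src_head_y, tgt_ankle_y, tgt_head_y):
    # Ratio of the target's body height to the source's body height in pixels.
    scale = (tgt_ankle_y - tgt_head_y) / (src_ankle_y - src_head_y)
    # Scale every keypoint and re-anchor it at the target's ankle line.
    return [
        (x * scale, tgt_ankle_y + (y - src_ankle_y) * scale)
        for x, y in src_pose
    ]

# Example: a source figure spanning y=0..100 mapped onto a target spanning y=0..200.
keypoints = [(10, 0), (10, 100)]  # (head, ankle) in the source frame
mapped = transfer_pose(keypoints, src_ankle_y=100, src_head_y=0,
                       tgt_ankle_y=200, tgt_head_y=0)
print(mapped)  # → [(20.0, 0.0), (20.0, 200.0)]
```

The normalized pose stick figure is then what gets fed into the trained generator, which paints the target subject into that pose.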
Code for this video:
Please Subscribe! And like. And comment. That’s what keeps me going.
This video is a part of my Machine Learning Journey course:
Join us in the Wizards Slack channel:
Learn more about the School of AI:
And please support me on Patreon: