Recently, the second season of Street Dance of China premiered, once again sparking a wave of street-dance fever.
Not long after its premiere, this high-energy show earned a 9.6 rating on Douban. The dancers performed brilliantly in the competition, leaving viewers in front of the screen shouting "It's so hype!" and "Amazing!", unable to resist swaying along with the music.
However, if you actually tried to dance yourself, the gap between imagination and reality would be about several Luo Zhixiangs wide. In your imagination, you look like this:
But in fact it is like this:
For dancers, the moves have names like Hip-hop, Breaking, and Locking; for the rest of us onlookers, they are just shaking, rolling, and flailing about.
Maybe I'll never get the chance to street dance in this life? Fine, off to square dancing it is.
Wait! Don't give up just yet. Several experts at the University of California, Berkeley have developed an AI "secret weapon" that can instantly grant you dancing skills and make you the next dance king.
Everyone can be a dancing king.
In August last year, researchers at the University of California, Berkeley released a paper titled "Everybody Dance Now." Using generative adversarial networks (GANs), a deep learning technique, they can copy the movements of professional performers and transfer them to anyone, realizing "Do as I do."
First of all, let’s look at the results of the copy dance and feel it:
The top left corner shows a professional dancer, the bottom left shows the detected pose, and the middle and right frames are the generated videos transferred onto the target subjects.
Not long ago, Deepfake face-swapping technology went viral; now anyone can "Deepfake" a whole body! Let's see how this feat is pulled off.
The paper describes the motion transfer method in roughly the following steps:
- Given two videos, one is the motion source video and the other is the target subject video;
- An algorithm detects the professional dancer's movements in the source video and creates a stick-figure frame for each pose;
- Then, two trained generative adversarial networks (GANs) synthesize images of the target person and render them into a clearer, more realistic video.
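The steps above can be sketched as a toy pipeline. This is a hedged illustration only: every function name here (detect_pose, normalize_pose, generate_frame) is made up for the sketch; the real system uses OpenPose for detection and trained GANs for generation.

```python
# Toy sketch of the "Do as I do" pipeline; all function names are
# illustrative stand-ins, not the paper's actual code.

def detect_pose(frame):
    """Stand-in for a pose detector: returns (x, y) keypoints."""
    # A real system would run OpenPose here; we fake two keypoints.
    return [(0.2, 0.1), (0.5, 0.9)]

def normalize_pose(keypoints, scale, offset):
    """Rescale/shift source keypoints to the target subject's size and position."""
    return [(x * scale + offset[0], y * scale + offset[1]) for x, y in keypoints]

def generate_frame(keypoints):
    """Stand-in for the trained GAN generator (pose stick figure -> target image)."""
    return {"pose": keypoints}  # placeholder for a rendered image

def transfer(source_frames, scale=1.0, offset=(0.0, 0.0)):
    """Run every source frame through detect -> normalize -> generate."""
    results = []
    for frame in source_frames:
        keypoints = detect_pose(frame)
        keypoints = normalize_pose(keypoints, scale, offset)
        results.append(generate_frame(keypoints))
    return results

frames = transfer(["frame0", "frame1"], scale=0.8, offset=(0.1, 0.0))
print(len(frames))  # one generated frame per source frame
```

Each stage is swappable: a better detector or generator slots in without changing the overall flow.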
The end result is a system that can map a professional dancer's body movements onto an amateur's body. Beyond imitating the movements, it can also convincingly reproduce the target's facial expressions.
Revealing the Principle Behind the Tech
Specifically, the motion transfer pipeline is divided into three stages:
1. Pose detection:
The team used the existing pose-detection model OpenPose (a CMU open-source project) to extract keypoints of the body, face, and hands from the source video. The essence of this step is to encode the body pose while discarding body shape, appearance, and other information.
The dancer's pose is detected and encoded into a stick figure.
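The idea of "encode the pose, ignore appearance" can be illustrated by rasterizing keypoints into a stick-figure image. The keypoints and skeleton below are toy values for the sketch, not OpenPose's real output format:

```python
# Draw a pose as a stick figure: connect keypoint pairs ("bones") with
# line segments on a blank raster. Toy skeleton, not OpenPose's layout.

def draw_segment(canvas, p0, p1):
    """Rasterize a line between two (row, col) keypoints (naive interpolation)."""
    steps = max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]), 1)
    for i in range(steps + 1):
        r = round(p0[0] + (p1[0] - p0[0]) * i / steps)
        c = round(p0[1] + (p1[1] - p0[1]) * i / steps)
        canvas[r][c] = 1

def stick_figure(keypoints, bones, size=16):
    """Render named keypoints and their connections onto a size x size grid."""
    canvas = [[0] * size for _ in range(size)]
    for a, b in bones:
        draw_segment(canvas, keypoints[a], keypoints[b])
    return canvas

keypoints = {"head": (2, 8), "torso": (8, 8), "hips": (13, 8)}
image = stick_figure(keypoints, [("head", "torso"), ("torso", "hips")])
print(sum(map(sum, image)))  # number of "ink" pixels: 12
```

Note that the stick figure carries no information about the dancer's face, clothes, or body width, which is exactly why the same figure can drive any target subject.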
2. Global pose normalization:
This stage computes the difference in body shape and position between the source and target subjects in a given frame, and transforms the source pose figure into one that matches the target subject's body shape and position.
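What might such a normalization look like? Below is a minimal sketch assuming a simple linear mapping of vertical coordinates, fit from each subject's ankle and head heights in the frame. The paper describes an ankle-based scheme; the exact formulation here is an illustrative assumption.

```python
# Fit y_target = s * y_source + b from ankle/head heights, then apply it
# to every source keypoint so the pose matches the target's size and
# position. The linear model and sample heights are illustrative.

def fit_normalization(src_ankle, src_head, tgt_ankle, tgt_head):
    """Solve for scale s and offset b mapping source heights onto target heights."""
    s = (tgt_ankle - tgt_head) / (src_ankle - src_head)
    b = tgt_ankle - s * src_ankle
    return s, b

def apply_normalization(keypoints, s, b):
    """Rescale the vertical coordinate of each (x, y) keypoint."""
    return [(x, s * y + b) for x, y in keypoints]

# Source dancer fills more of the frame than the target subject:
s, b = fit_normalization(src_ankle=0.9, src_head=0.1, tgt_ankle=0.8, tgt_head=0.3)
print(apply_normalization([(0.5, 0.1), (0.5, 0.9)], s, b))
# the head maps to height 0.3 and the ankle to 0.8, the target's proportions
```

Without this step, a tall dancer's pose pasted onto a short subject would produce stretched, implausible bodies.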
3. Mapping from the normalized pose figure to the target person:
A generative adversarial network model is trained to learn the mapping from the normalized pose stick figure to an image of the target person.
Schematic of the training process (top) and the transfer process (bottom)
To build the system, the team trained and ran inference on NVIDIA TITAN Xp and GeForce GTX 1080 Ti GPUs, using cuDNN-accelerated PyTorch.
For the image translation stage, they adopted pix2pixHD, NVIDIA's adversarially trained image-translation architecture. A dedicated face GAN predicts a residual that refines the face region of the main generator's output, judged by a single 70x70 PatchGAN discriminator.
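Why is the discriminator called a "70x70 PatchGAN"? Each output unit of such a discriminator judges the realism of one 70x70 patch of the input image. Assuming the standard PatchGAN layer configuration (four 4x4 convolutions with strides 2, 2, 2, 1, plus a 4x4 stride-1 output layer; not spelled out in this article), the receptive-field arithmetic works out like this:

```python
# Compute the receptive field of one output unit of a convolutional stack.
# The layer list below is the commonly used PatchGAN configuration (an
# assumption here), given as (kernel_size, stride) pairs.

def receptive_field(layers):
    """Grow the receptive field layer by layer: rf += (kernel - 1) * jump."""
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride  # distance in input pixels between adjacent outputs
    return rf

patchgan = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
print(receptive_field(patchgan))  # → 70
```

Judging patches rather than the whole image lets the discriminator focus on local texture realism, which is cheap and works at any image size.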
During training, the source and target video data are collected in slightly different ways. To ensure the quality of the target video, the target subject is filmed in real time with a phone camera at 120 frames per second, with each video lasting at least 20 minutes.
For the source video, only reasonable pose-detection results are needed, so high-quality dance videos found online can be used.
The System's Mapping Results
As for the results, the researchers admit the system is not perfect. Although most of the videos it produces are quite realistic, flaws occasionally slip through, such as parts of the body disappearing or appearing to "melt."
In addition, because the algorithm does not encode clothing, it cannot reproduce clothes swinging with the motion, so the target subject must wear tight-fitting clothes.
Setting these shortcomings aside for the moment, the technology is genuinely exciting.
With this AI tool, even a complete dance novice with stiff, uncoordinated limbs can become a "master dancer" like Guo Fucheng, Luo Zhixiang, or any dancer you like. Even Michael Jackson's moonwalk becomes a piece of cake.
However, Berkeley is not the only team with a dance dream; Google has also set its mind on combining AI with dance.
Google AI Generates New Dance Moves
At the end of last year, Damien Henry, a technical program manager at Google Arts & Culture, and British choreographer Wayne McGregor jointly developed a choreography tool that automatically generates dance in a specific style.
McGregor, who holds an honorary Doctor of Science degree from Plymouth University, has long been interested in science and technology. Looking back at 25 years of his own dance videos, he wondered whether technology could keep the performances fresh, so he approached Henry for advice on how to use technology to continuously create new dance material.
Henry drew inspiration from a post on a science website, which described using a neural network to predict the next letter of handwriting from the letters written before it.
So he proposed a similar algorithm for predicting movement: dancers' poses are captured on video, and the most likely next dance moves are generated and displayed on screen in real time.
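The "predict the most likely next move" idea can be illustrated with a toy model. The real tool uses a neural sequence model trained on McGregor's videos; the sketch below substitutes a simple first-order transition-count table over made-up, discretized move labels:

```python
from collections import Counter, defaultdict

# Toy next-move predictor: count which move follows which in training
# sequences, then predict the most frequent successor. A stand-in for
# the neural sequence model the actual tool uses; move names are invented.

def train(sequences):
    """Build a table: table[move] counts the moves observed right after it."""
    table = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            table[prev][nxt] += 1
    return table

def predict_next(table, move):
    """Return the most likely next move, or None for an unseen move."""
    counts = table.get(move)
    return counts.most_common(1)[0][0] if counts else None

moves = [["plie", "releve", "jete", "plie", "releve", "turn"]]
model = train(moves)
print(predict_next(model, "plie"))  # → releve
```

Like the Google tool, this model can only output moves it has already "seen" in its training sequences.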
AI choreography process demonstration
This algorithm likewise ignores people's clothing, capturing only the keypoints of the performer's pose to obtain a stick-figure model.
After recording videos of McGregor and his dancers, the AI learned how to dance, and the style it generated closely resembled McGregor's.
That said, AI still has limitations in choreographic creativity: this Google tool cannot invent moves it has never "seen"; it only predicts the most likely move among those it has already learned.
The technology can also produce mixed-style choreography: insert a video of Brazilian samba alongside McGregor's footage, and the AI may produce an entirely new hybrid dance. Henry is not worried about grotesque results, because what the system learns from is still chosen by people.
AI Pose Tracking: More Than a "Dance Dream"
After seeing so many technologies that help you "dance," are you itching to try them?
Dance AI can let people who are too shy or unmotivated to move feel relaxed and at ease, and experience the fun of dancing and exercise. But the technology behind it offers more than fun.
The pose estimation underlying dance AI holds enormous potential. It can help us perform body movements more accurately, in areas such as 3D fitness coaching, posture correction in sports, patient rehabilitation training, and even virtual fitting rooms and photo pose guidance, all of which could see new breakthroughs.
Pose estimation has a wide range of applications.
As this technology develops, machines will understand us better and better, growing ever more familiar with our body features and behavior patterns, and in turn helping us understand ourselves.
All right, enough talk. I'm off to learn dancing with AI. Care to join me?