We Have Seen "Ne Zha" — So When Will the Next Blockbuster Animation Come?

  Artificial intelligence, Computer vision

What a record!

Since its release on July 26, the movie "Ne Zha: Birth of the Demon Child" has been setting new box-office records for animated films. As of August 13, its nationwide box office had exceeded 3.6 billion yuan in 19 days, with more than 100 million admissions, making it the first animated film to pass 100 million viewers.

Before that, the highest box office for a domestic animated film stood at 1.5 billion yuan.

Behind the soaring box office, its Douban rating has held steady at 8.6. Many people have begun to say that a milestone for Chinese animation has arrived.

How did a film adapted from traditional mythology become such a dark-horse box-office hit?


Did its looks alone win it such a large fan base?

Beyond the subversive character design and the fresh take on the story, which surprised audiences, the exquisite visual effects are key to "Ne Zha's" breakout. These striking visuals turned a scruffy "demon child" into one of the most "beautiful" boys of the summer.

An animation production that was nearly stillborn

Audiences captivated by this most "beautiful" boy would never have guessed that the film was nearly abandoned in production.

Behind every hit film lies the painstaking polishing of its creative team. The complicated pipeline often stretches production out over years, and some projects simply cannot hold on and die midway.

The production of "Ne Zha" was no easier, but fortunately, after five years, it finally made it to the screen.

This 110-minute animated film went through 66 script revisions, with more than 20 professional outsourcing studios and more than 1,600 animation staff involved in production. The film contains more than 1,400 effects shots.

An excellent animated film is inevitably a massive undertaking. Many staff have complained that animation is grueling work, and in interviews the director revealed that turnover at the outsourcing studios rose sharply under the heavy workload.

Designing large casts of characters, drawing storyboards, and producing effects are all labor-intensive. A shot of just over ten seconds can take several teams months of effort.

The ugly-but-adorable Ne Zha looks explosive after his demonic transformation.

Yet even with all that effort, persistence alone could not solve every problem. For example, when the Spirit Pearl and the Demon Pill merge in the film, the director wanted a visual effect of time flowing backward and all things becoming one. After several months of attempts, the shot still fell short of expectations and was cut.

Production hurdles like this are a major factor limiting the development of animation. If the pipeline were more efficient, more high-quality films might emerge. Here, artificial intelligence can bring new opportunities to the animation industry.

Freeing the Illustrators: AI Line-Art Coloring

A basic step in animation production is drawing the artwork, and coloring those drawings is an enormous task.

Notably, more than 100 versions of the character design were drawn in all.

For conventional animation at 12 frames per second, a 25-minute episode requires 18,000 drawings, and a 10-person team needs about two months to color them.
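The 18,000-drawing figure is simple arithmetic on the frame rate and running time:

```python
# Frame budget for conventional limited animation drawn at 12 frames per second.
fps = 12              # drawings per second
minutes = 25          # episode length
frames = fps * minutes * 60
print(frames)  # 18000
```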

AI-based tools, however, improve efficiency dramatically. The best current technique reportedly needs only 2.5 hours to color such an episode, an efficiency gain cited as roughly 2,000-fold.

Style2paints, which recently passed tens of thousands of stars on GitHub, is one such tool. With it, production staff can finish the tedious coloring process quickly.

The four stages: starting from the line draft, a flat color pass appears in the upper right, gradient colors are added in the lower left, and shadows in the lower right.

Style2paints was created jointly by researchers from Soochow University and the Chinese University of Hong Kong. The latest version is V4, and it is regarded as the best AI tool for line-art coloring.

Style2paints is based on deep learning: using style transfer and GAN techniques, it turns original line drawings into full-color illustrations.

The process has two stages. The first renders the sketch into a rough color draft; to perfect the picture, the second stage identifies the errors in that draft and refines them to produce the final result.

It is also very simple to use: the artist only needs to finish the line draft, and a few mouse clicks produce a full-color draft. Demanding artists may still want to fine-tune the result.
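A minimal toy sketch of the two-stage idea (not Style2paints' actual code, and the function names here are hypothetical): stage one lays rough flat color under the line art, and stage two refines it; a simple box blur stands in for the learned correction network.

```python
import numpy as np

def stage1_rough_color(line_art, hint_rgb):
    """Stage 1 (toy): spread a flat hint color under the line art."""
    h, w = line_art.shape
    color = np.ones((h, w, 3)) * np.asarray(hint_rgb)
    # keep dark line pixels black on top of the flat fill
    mask = (line_art < 0.5)[..., None]
    return np.where(mask, 0.0, color)

def stage2_refine(rough):
    """Stage 2 (toy): smooth obvious artifacts; a 3x3 box blur stands in
    for the correction network that fixes the rough draft's errors."""
    h, w, _ = rough.shape
    pad = np.pad(rough, ((1, 1), (1, 1), (0, 0)), mode="edge")
    return sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

line = np.ones((8, 8))          # toy line art: white canvas...
line[3, :] = 0.0                # ...with one black horizontal stroke
final = stage2_refine(stage1_rough_color(line, (0.9, 0.6, 0.6)))
print(final.shape)  # (8, 8, 3)
```

The real system replaces both stages with trained networks; the point here is only the draft-then-correct structure described above.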

Operating instructions on GitHub

Freeing the Animators: Automatic Inbetweening with Neural Networks

Animated films are built from key frames and intermediate frames (inbetweens). The inbetweens are the drawings strung between two key frames; they link the motion and make it look smooth. Drawing them, however, is a time-consuming process.

The three dark-colored poses are key frames; the light-colored ones are inbetweens.

If AI is given two adjacent key frames of a video and asked to fill in the frames between them, the workload drops dramatically.

A generative model released by Google AI some time ago solves the problem, to an extent, along exactly this line.

Model diagram

The system they released is a fully convolutional model consisting of a 2D convolutional image encoder, a 3D convolutional latent-representation generator, and a video generator.

The image encoder maps both the input key frames and the target frame into a latent space; the latent-representation generator fuses the two kinds of information, and the video generator finally decodes the result into the predicted intermediate frames.

The frame sequences the AI generates are stylistically consistent with the given start and end frames, and the whole looks smooth. Surprisingly, the method can generate video over fairly long time spans.
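For intuition, the crudest inbetweening baseline is a linear cross-fade between the two key frames; the Google model instead predicts the middle frames through its learned 3D-convolutional latent space. A toy sketch of the baseline (hypothetical helper, not the paper's code):

```python
import numpy as np

def inbetween_linear(start, end, n_mid):
    """Naive baseline: linearly blend n_mid frames between two key frames.
    A learned model replaces this blend with frames decoded from a latent space."""
    alphas = np.linspace(0.0, 1.0, n_mid + 2)[1:-1]  # drop the endpoints
    return [(1 - a) * start + a * end for a in alphas]

k0 = np.zeros((4, 4))   # first key frame (all black)
k1 = np.ones((4, 4))    # second key frame (all white)
mids = inbetween_linear(k0, k1, 3)
print([m.mean() for m in mids])  # [0.25, 0.5, 0.75]
```

The cross-fade only produces ghosting for real drawings, which is exactly why a generative model that understands the content is needed.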


Video created from still images using the Kinetics dataset

In their experiments, some videos came out satisfying, but more complex ones still produced strange imagery and need further improvement.

Freeing the Directors: Generating Animation from Text

Perhaps the most surprising capability of all is AI generating animation directly from a script. Scene effects that cannot be produced by hand may one day be solved by AI.

Some time ago, scientists from Disney and Rutgers University published a paper introducing an AI model that generates animated scenes from text descriptions.

To generate video from text, the AI must first "understand" the text and then produce the corresponding animation. To this end, they adopted a neural network composed of multiple modules.

The model consists of three parts: a script-parsing module, which automatically analyzes the scenes in the screenplay text; a natural-language-processing module, which extracts the key descriptive sentences and distills action representations; and a generation module, which converts the action instructions into animation sequences.
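The three-module pipeline can be sketched with trivial stand-ins (all names here are hypothetical, not Disney's code): split the screenplay into scenes, pull a coarse action out of each, and map the action to pre-built animation clips.

```python
import re

def parse_scenes(script):
    """Script-parsing module (toy): split the screenplay into scene blocks
    on blank lines."""
    return [s.strip() for s in re.split(r"\n\s*\n", script) if s.strip()]

def extract_action(scene):
    """NLP module (toy): pull a coarse (subject, verb) pair from the
    scene's opening words."""
    words = scene.split()
    return (words[0].lower(), words[1].lower().rstrip(".,"))

# Generation module (toy): a lookup from verbs to canned clip sequences.
ACTION_LIBRARY = {"walks": ["step_l", "step_r"], "sits": ["bend", "settle"]}

def to_animation(action):
    return ACTION_LIBRARY.get(action[1], ["idle"])

script = "Alice walks to the window.\n\nBob sits on the chair."
clips = [to_animation(extract_action(s)) for s in parse_scenes(script)]
print(clips)  # [['step_l', 'step_r'], ['bend', 'settle']]
```

The real system replaces each stand-in with a learned component, but the division of labor is the same.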

Flow schematic of the model

The researchers collected thousands of freely available screenplays and selected 996 of them to compile a corpus of scene descriptions.

This corpus consists of 525,708 descriptions, including 1,402,864 sentences, of which 920,817 contain at least one action.

With the mapping between descriptions and video established, simple animated clips can be generated from an input script. In the test experiments, 68% of the generated animations were judged reasonable.

Although the system still depends on its corpus and cannot yet generate video from arbitrary text, it points to a new direction for animation production.

Disney has also studied automatically generating animated mouth shapes that match speech.

How Far Is AI from Disrupting Animation?

The popularity of "Ne Zha" has once again drawn attention to the potential of animated films, and this convention-breaking work is still racing toward new records.

AI has made some promising attempts in animation production, but these technologies will only truly serve filmmakers once they become more mature and reliable.

Perhaps, as the technology develops, we will not have to wait another five long years to see more classics like "Ne Zha."