What matters most in the fourth year since TensorFlow’s release? @GDD 2019

  Google, machine learning, TensorFlow

Scenario description: GDD continued into its second day. Developers were still enthusiastic, and the TensorFlow RoadShow was still packed. Four years have passed since TensorFlow launched in 2015; Google has built a complete ecosystem around it, and its user base keeps growing. So what did Google’s TensorFlow team bring us this time?

Keywords: TensorFlow, machine learning, GDD

Yesterday at GDD, Google walked through its latest developments and new products in detail. On the second day, the focus shifted to TensorFlow, now four years old.

Early this morning, Apple held its launch event and unveiled the triple-camera iPhone 11 series, along with a new iPad, Apple Arcade, Apple TV+, and the Apple Watch Series 5.

While Apple courts consumers with new hardware, across the ocean Google’s developer conference carried on in a low-key, steady manner, detailing its latest technical progress and offering developers the most practical help.

The dedicated TensorFlow RoadShow ran a packed schedule all day. So what did the TensorFlow team bring to GDD today?

TensorFlow: the most popular machine learning framework

At the TensorFlow RoadShow, Liang Xinbing, TensorFlow’s Asia-Pacific product manager, opened with a talk on “The Present and Future of Machine Learning” and reviewed TensorFlow’s development.


  • Liang Xinbing analyzed the development of TensorFlow

At present, three factors drive the development of machine learning: datasets, compute, and models. Riding this trend, TensorFlow has become the most successful machine learning platform.

Since its release in 2015, TensorFlow has kept improving. To date it has more than 41 million downloads, over 50,000 commits, 9,900 pull requests, and more than 1,800 contributors.


  • TensorFlow has a large user base.

Thanks to its capabilities, real-world TensorFlow use cases keep multiplying, and many companies and institutions use it for research and development. The TensorFlow Chinese website has also launched, and the Chinese-language community and technical resources are growing by the day.

After this overview, the full TensorFlow showcase got under way, with the team’s engineers walking through TensorFlow’s progress in detail.

Focus: TensorFlow 2.0

The much-anticipated version 2.0 finally arrives in 2019. The TensorFlow 2.0 beta shipped in June, and at today’s GDD engineers announced that the TensorFlow 2.0 RC is now available.

Compared with version 1.0, the new version improves in three areas: ease of use, performance, and scalability.

Most notably, Keras becomes the high-level API, eager execution is enabled by default, duplicate functionality has been removed, and the API surface is unified.


  • Engineers introduced improvements made by TensorFlow 2.0 around Keras

With Keras and eager execution, TensorFlow 2.0 makes it easy to build models and to deploy them robustly in production on any platform.
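
As a rough illustration of this Keras-first, eager-by-default style, here is a minimal sketch of our own (the toy model and random input are illustrative, not from the talk):

```python
import numpy as np
import tensorflow as tf

# Define a small model with the Keras Sequential API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# With eager execution on by default, results can be inspected immediately.
predictions = model(np.random.rand(4, 32).astype("float32"))
print(predictions.shape)  # (4, 10)
```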

After covering the details of 2.0, Google engineer Liang Yanhui also walked through how to upgrade from version 1.x to 2.0.

Within Google, internal migration is already under way, and the official website provides detailed code-migration guides and tools. Users who still depend on 1.x APIs can follow the guides to migrate to 2.0.
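
One path the guides describe is the tf.compat.v1 module, which keeps 1.x-style code running under 2.x while it is being rewritten. A minimal sketch under that assumption (the placeholder/session graph below is our own hypothetical example, not from the talk):

```python
import tensorflow as tf

# Legacy 1.x-style graph code can keep running under TF 2.x via the
# compatibility module while it is gradually migrated to the 2.0 APIs.
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=[None, 3])
y = tf.reduce_sum(x, axis=1)

with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # [6.0]
```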

So which parts of TensorFlow 2.0 deserve attention? Google’s engineers went through them from the following angles.

TF.Text: text processing for NLP models

Natural language processing is an important direction in machine learning with strong market demand. TensorFlow has officially launched and upgraded TF.Text, which brings powerful text-processing capabilities to TensorFlow 2.0 and is compatible with eager (dynamic graph) mode.


  • TF.Text has several advantages.

TF.Text is a TensorFlow 2.0 library that can be installed with pip. It handles the preprocessing regularly needed by text-based models and provides text and language-modeling ops that are not included in core TensorFlow.

One of the most common operations is tokenization: the process of breaking a string into tokens. These tokens may be words, numbers, punctuation marks, or combinations of them.

TF.Text tokenizers return their results in a new kind of tensor, the RaggedTensor. Three new tokenizers are provided; the most basic is the whitespace tokenizer, which splits UTF-8 strings on the whitespace characters defined by ICU (such as spaces, tabs, and line breaks).
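
A minimal sketch of the whitespace tokenizer and the ragged result it returns (the example sentences are our own):

```python
import tensorflow_text as tf_text  # installed with: pip install tensorflow-text

tokenizer = tf_text.WhitespaceTokenizer()
tokens = tokenizer.tokenize(["everything not saved will be lost.",
                             "TensorFlow 2.0"])

# The result is a tf.RaggedTensor: each row can hold a different number of tokens.
print(tokens.to_list())
```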

The TF.Text library also includes normalization, n-gram, and label-sequence-constraint ops. Because preprocessing lives inside the model, users do not have to worry about keeping training and serving consistent, and do not have to maintain separate preprocessing scripts themselves.

TensorFlow Lite: deploying machine learning on-device

Two senior Google software engineers, Wang Tiezhen and Liu Renjie, introduced the functional updates and technical details of TensorFlow Lite.


  • TensorFlow Lite Roadmap at a glance

TensorFlow Lite is a framework for deploying machine learning on mobile phones and embedded devices. The case for on-device deployment rests on three points:

First, low latency: a stable, responsive user experience.
Second, no network required: it works even where connectivity is absent or poor.
Third, privacy: data is not sent to the cloud; all processing happens on the device.

Given these advantages, there is already a sizable market for on-device machine learning built on TensorFlow Lite, and 2.0 further strengthens model deployment.

For example, in its home-rental scenario the Xianyu app uses TensorFlow Lite to label photos automatically, making listings more efficient; Ecovacs deploys TensorFlow Lite in its robot vacuums for automatic obstacle avoidance. TensorFlow Lite is also widely used in Google products such as Google Photos, Gboard, and the Google Assistant.

According to Google, applications built on TensorFlow Lite are installed on more than 2 billion mobile devices.

However, deploying machine learning on-device still poses challenges: compared with the cloud, devices have less compute and memory, and power consumption has to be considered. TensorFlow Lite has made targeted optimizations for these constraints to make on-device machine learning easier.

TensorFlow Lite can ultimately be deployed not only on Android and iOS, but also on embedded systems (such as the Raspberry Pi), hardware accelerators (such as the Edge TPU), and microcontrollers (MCUs).


  • TensorFlow is moving toward microcontrollers

It is already used for image classification, object detection, pose estimation, speech recognition, and gesture recognition; models for BERT, style transfer, voice wake-up, and more will be released later.

How do you deploy a model with TensorFlow Lite? According to Liu Renjie, it takes just three steps:

Train a TensorFlow model, convert it to the TF Lite format, and deploy it to the device; with the TF 2.0 APIs this takes only a few lines of code.
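
A minimal sketch of those three steps using the standard TF 2.0 APIs (the toy model and file name below are our own, not from the talk):

```python
import tensorflow as tf

# Step 1: train a (toy) Keras model. Real training data is omitted here.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")
# model.fit(x_train, y_train, epochs=5)

# Step 2: convert it to the TensorFlow Lite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Step 3: write out the .tflite file and ship it to the device,
# where the TF Lite interpreter runs it.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```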

TensorFlow.js: a platform for building WeChat Mini Programs

TensorFlow.js is a deep learning platform tailored to JavaScript. With it you can run existing models, retrain them, or train new ones.


  • An engineer introducing TensorFlow.js on stage

To broaden its practical reach, TensorFlow.js supports multiple platforms: the browser, mobile (such as WeChat Mini Programs), the server, and the desktop.

Beyond running machine learning models on multiple platforms, it can also train them, and it gets GPU acceleration automatically through WebGL.

In the live demo they showed ModiFace, a virtual try-on application built on TensorFlow.js; with this framework it has become the smallest and fastest virtual makeup try-on available. Reportedly, features such as hairstyle changes, age simulation, and skin analysis will follow.


  • A virtual glasses try-on application built with TensorFlow.js

Google engineers also noted that the web and mobile surfaces TensorFlow.js targets offer many machine learning scenarios: augmented reality (AR), gesture- and body-based interaction, speech recognition, website accessibility, semantic analysis, intelligent conversation, and web-page optimization.

TensorFlow.js already supports image classification, object detection, gesture recognition, voice-command recognition, text classification, and more; the WeChat Mini Program plug-in, for example, exposes these capabilities through a simple API.

Looking forward to more surprises from Google and TensorFlow

Beyond the features above, the sessions also covered tf.distribute, the TensorFlow Model Optimization Toolkit, and several enterprise case studies.
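
The talk only mentioned tf.distribute in passing; for readers who are curious, here is a minimal sketch of our own showing how a distribution strategy wraps ordinary Keras code (the toy model is illustrative, not from the talk):

```python
import tensorflow as tf

# MirroredStrategy replicates the model across the local GPUs for
# data-parallel training; the rest of the Keras workflow is unchanged.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(20,))])
    model.compile(optimizer="adam", loss="mse")

# model.fit(dataset, epochs=5)  # training then proceeds as usual
```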

Finally, Liang Xinbing took the stage again to share the state of the TensorFlow community.


  • TensorFlow’s community has many great users.

More than 2,135 contributors work on TensorFlow’s core, there are 109 Google Developer Experts in machine learning, and 46 TensorFlow user groups operate worldwide. He also described in detail how to join the TensorFlow community.

With the TensorFlow RoadShow wrapped up, the Google Developer Days conference has concluded successfully. For developers, the practical takeaways from this gathering are probably far more tangible than watching the Apple launch.

Let’s look forward to TensorFlow’s next breakthrough, and hope Google keeps pushing ahead on the AI track. See you at GDD next year!

-End-