A few days ago, an important new version of the machine learning platform, TensorFlow 2.0, was released. It provides out-of-the-box implementations of various deep learning algorithms, a simple Python API for building models, and a low-level C++ interface that lets you control the construction and execution of computational graphs.
The platform was originally developed by the Google Brain team and is used in Google services for speech recognition, face recognition in photos, detecting image similarity, filtering spam in Gmail, selecting news in Google News, and meaning-aware translation.
TensorFlow provides a library of ready-made numerical computation algorithms implemented as dataflow graphs. The nodes in such graphs implement mathematical operations or input/output points, while the edges of the graph represent multidimensional arrays (tensors) that flow between the nodes.
Nodes can be assigned to computing devices and executed asynchronously, processing all tensors that are ready in parallel. This makes it possible to run the nodes of a neural network concurrently, by analogy with the simultaneous activation of neurons in the brain.
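To make the graph model concrete, here is a minimal sketch: each operation below (the constants and the matrix multiplication) is a node in the dataflow graph, and the tensors flowing between them are its edges. The values are illustrative, not taken from the article.

```python
import tensorflow as tf

# Two constant tensors: the graph's edges carry these multidimensional arrays.
a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
b = tf.constant([[1.0, 1.0],
                 [0.0, 1.0]])

# tf.matmul is a node that consumes both tensors; in TensorFlow 2.0 it
# executes eagerly, so the result is available immediately.
c = tf.matmul(a, b)
print(c.numpy())  # [[1. 3.] [3. 7.]]
```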
Distributed machine learning systems can be built on commodity hardware, thanks to TensorFlow's built-in support for spreading computation across multiple CPUs or GPUs (with optional CUDA extensions for general-purpose computing on graphics processing units).
TensorFlow is available on 64-bit Linux, macOS, and mobile platforms, including Android and iOS. The system code is written in C++ and Python and is distributed under the Apache license.
Main new features of TensorFlow 2.0
With this release, the main focus was on simplification and ease of use. For building and training models, the high-level Keras API is now recommended: it provides several interfaces for building models (sequential, functional, subclassing), supports immediate (eager) execution without preliminary graph compilation, and comes with a simple debugging mechanism.
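As a brief sketch of the sequential interface mentioned above (layer sizes here are illustrative assumptions, not from the article):

```python
import tensorflow as tf

# Sequential interface: layers stacked one after another.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Thanks to eager execution, the model can be inspected and debugged directly.
print(model.output_shape)  # (None, 10)
```

The functional and subclassing interfaces build the same kind of model but allow arbitrary layer topologies and custom forward passes, respectively.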
The tf.distribute.Strategy API has been added for organizing distributed model training with minimal changes to existing code. In addition to distributing computation across multiple GPUs, there is experimental support for splitting the training process across multiple independent machines, as well as the ability to use cloud TPUs (Tensor Processing Units).
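A minimal sketch of the "minimal changes" claim, using MirroredStrategy (one of the strategies in this API; it replicates the model across all visible GPUs, and falls back to a single replica on CPU-only machines):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print("replicas:", strategy.num_replicas_in_sync)

# Only model construction and compilation need to move under strategy.scope();
# the rest of the training code (model.fit, etc.) stays unchanged.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="sgd", loss="mse")
```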
Instead of the declarative graph-construction model with execution via tf.Session, it is now possible to write ordinary Python functions that can be converted into graphs by calling tf.function and then be remotely executed, serialized, or optimized for better performance.
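A short sketch of that workflow: an ordinary Python function is turned into a graph-backed callable with tf.function, with no tf.Session involved (the function itself is a made-up example):

```python
import tensorflow as tf

# An ordinary Python function using TensorFlow ops...
def dense_step(x, w):
    return tf.reduce_sum(tf.matmul(x, w))

# ...converted into a callable backed by a TensorFlow graph.
graph_fn = tf.function(dense_step)

x = tf.ones((2, 3))
w = tf.ones((3, 1))
print(float(graph_fn(x, w)))  # 6.0
```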
An AutoGraph translator has been added that converts Python control flow into TensorFlow expressions, which allows ordinary Python code to be used inside tf.function, as well as with tf.data, tf.distribute, and tf.keras.
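To illustrate, here is a sketch using plain Python `while` and `if` statements inside a tf.function; AutoGraph rewrites them into tf.while_loop and tf.cond operations because the conditions depend on tensors (the Collatz example itself is an assumption for illustration):

```python
import tensorflow as tf

@tf.function
def collatz_steps(n):
    # Plain Python control flow; AutoGraph converts it to graph ops.
    steps = tf.constant(0)
    while n > 1:
        if n % 2 == 0:
            n = n // 2
        else:
            n = 3 * n + 1
        steps += 1
    return steps

print(int(collatz_steps(tf.constant(6))))  # 8
```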
The model exchange format has been unified around SavedModel, with support for saving and restoring the state of models. Models built with TensorFlow can now be used in TensorFlow Lite (on mobile devices), TensorFlow.js (in a browser or in Node.js), TensorFlow Serving, and TensorFlow Hub.
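A minimal save-and-restore round trip might look like the following sketch (the temporary export directory and tiny model are placeholders for the example; in TF 2.x, model.save with a plain directory path writes the SavedModel format):

```python
import tempfile
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
model.compile(optimizer="sgd", loss="mse")

# Export in the unified SavedModel format, then load it back.
export_dir = tempfile.mkdtemp()  # placeholder path for the example
model.save(export_dir)
restored = tf.keras.models.load_model(export_dir)

print(restored.predict(tf.ones((1, 3))).shape)  # (1, 2)
```

The same exported directory is what TensorFlow Serving, TensorFlow Lite's converter, and TensorFlow Hub consume.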
The tf.train.Optimizers and tf.keras.Optimizers APIs have been unified; instead of compute_gradients, a new GradientTape class has been proposed for computing gradients.
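A small sketch of the GradientTape pattern together with the unified Keras optimizer (the function y = x² + 2x is a made-up example):

```python
import tensorflow as tf

x = tf.Variable(3.0)

# GradientTape records operations during the forward pass...
with tf.GradientTape() as tape:
    y = x * x + 2.0 * x

# ...and replays them backwards: d(y)/d(x) = 2x + 2 = 8 at x = 3.
grad = tape.gradient(y, x)
print(float(grad))  # 8.0

# The unified optimizer applies gradients directly, no compute_gradients.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
opt.apply_gradients([(grad, x)])
print(float(x))  # ≈ 2.2
```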
GPU performance has also improved significantly in this release: model training on systems with NVIDIA Volta and Turing GPUs is up to three times faster.
The API has been extensively cleaned up: many calls have been renamed or removed, and support for global variables in helper methods has been dropped. Instead of tf.app, tf.flags, and tf.logging, the new absl-py API is proposed. To continue using the old API, the compat.v1 module has been prepared.
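For instance, a flag that would previously have been defined with tf.flags can be defined with absl-py like this sketch (the batch_size flag is a hypothetical example; normally absl.app.run(main) parses sys.argv, but here a fixed argv is parsed so the snippet is self-contained):

```python
from absl import flags, logging

FLAGS = flags.FLAGS

# Hypothetical flag, defined via absl instead of the removed tf.flags.
flags.DEFINE_integer("batch_size", 32, "Training batch size.")

# Parse a fixed argv so the example runs standalone.
FLAGS(["demo", "--batch_size=64"])
logging.info("batch size: %d", FLAGS.batch_size)
print(FLAGS.batch_size)  # 64
```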
If you want to know more about it, you can consult the following link.