
TensorFlow, Google's Open Source AI, Signals Big Changes in Hardware Too




It's just a matter of time before someone buys Nvidia.

In open sourcing its artificial intelligence engine—freely sharing one of its most important creations with the rest of the Internet—Google showed how the world of computer software is changing.

These days, the big Internet giants frequently share the software sitting at the heart of their online operations. Open source accelerates the progress of technology. In open sourcing its TensorFlow AI engine, Google can feed all sorts of machine-learning research outside the company, and in many ways, this research will feed back into Google.

But Google’s AI engine also reflects how the world of computer hardware is changing. Inside Google, when tackling tasks like image recognition and speech recognition and language translation, TensorFlow depends on machines equipped with GPUs, or graphics processing units, chips that were originally designed to render graphics for games and the like, but have also proven adept at other tasks. And it depends on these chips more than the larger tech universe realizes.
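As a rough illustration (not Google's internal setup, and using the later TensorFlow 2.x Python API, which post-dates this article), the sketch below shows how TensorFlow enumerates attached GPUs and places supported operations on them without any change to the calling code:

```python
import tensorflow as tf

# Ask TensorFlow which GPUs it can see on this machine.
gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs available: {len(gpus)}")

# Log where each operation ends up (CPU vs. GPU).
tf.debugging.set_log_device_placement(True)

# A toy matrix multiply: with a GPU present, TensorFlow places it
# there automatically, with no changes to the surrounding code.
a = tf.random.uniform((1024, 1024))
b = tf.random.uniform((1024, 1024))
print(tf.matmul(a, b).shape)
```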

According to Google engineer Jeff Dean, who helps oversee the company’s AI work, Google uses GPUs not only in training its artificial intelligence services, but also in running these services—in delivering them to the smartphones held in the hands of consumers.

That represents a significant shift. Today, inside its massive computer data centers, Facebook uses GPUs to train its face recognition services, but when delivering these services to Facebookers—actually identifying faces on its social networks—it uses traditional computer processors, or CPUs. And this basic setup is the industry norm, as Facebook CTO Mike “Schrep” Schroepfer recently pointed out during a briefing with reporters at the company’s Menlo Park, California headquarters. But as Google seeks an ever greater level of efficiency, there are cases where the company both trains and executes its AI models on GPUs inside the data center. And it’s not the only one moving in this direction. Chinese search giant Baidu is building a new AI system that works in much the same way. “This is quite a big paradigm change,” says Baidu chief scientist Andrew Ng.

The change is good news for Nvidia, the chip giant that specializes in GPUs. And it points to a gaping hole in the products offered by Intel, the world’s largest chip maker. Intel doesn’t build GPUs. Some Internet companies and researchers, however, are now exploring FPGAs, or field-programmable gate arrays, as a replacement for GPUs in the AI arena, and Intel recently acquired a company that specializes in these programmable chips.

The bottom line is that AI is playing an increasingly important role in the world’s online services—and alternative chip architectures are playing an increasingly important role in AI. Today, this is true inside the computer data centers that drive our online services, and in the years to come, the same phenomenon may trickle down to the mobile devices where we actually use these services.

Machine learning is coming to smartphones soon.

Typically, when you use a deep learning app on your phone, it can’t run without sending information back to the data center. All the AI happens there. When you bark a command into your Android phone, for instance, it must send your command to a Google data center, where it can be processed on one of those enormous networks of CPUs or GPUs.

But Google has also honed its AI engine so that, in some cases, it can execute on the phone itself. “You can take a model description and run it on a mobile phone,” Dean says, “and you don’t have to make any real changes to the model description or any of the code.”
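Here is a minimal sketch of that train-once, run-anywhere idea, assuming a recent TensorFlow/Keras install (these APIs post-date the article) and using a toy model and made-up file names in place of Google's real ones:

```python
import numpy as np
import tensorflow as tf

# Train a tiny stand-in model: this is the "data center" step.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
x = np.random.rand(256, 4).astype("float32")
y = np.random.randint(0, 3, size=(256,))
model.fit(x, y, epochs=1, verbose=0)

# Export the trained model description once...
model.export("toy_model")  # writes a SavedModel directory

# ...then convert that same description for on-device inference,
# without changing the model itself.
converter = tf.lite.TFLiteConverter.from_saved_model("toy_model")
with open("toy_model.tflite", "wb") as f:
    f.write(converter.convert())
```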

This is how the company built its Google Translate app. Google trains the app to recognize words and translate them into another language inside its data centers, but once it’s trained, the app can run on its own—without an Internet connection. You can point your phone at a French road sign, and it will instantly translate it into English.

That’s hard to do. After all, a phone offers limited amounts of processing power. But as time goes on, more and more of these tasks will move onto the phone itself. Deep learning software will improve, and mobile hardware will improve as well. “The future of deep learning is on small, mobile, edge devices,” says Chris Nicholson, the founder of a deep learning startup called Skymind.

GPUs, for instance, are already starting to find their way onto phones, and hardware makers are always pushing to improve the speed and efficiency of CPUs. Meanwhile, IBM is building a “neuromorphic” chip that’s designed specifically for AI tasks, and according to those who have used it, it’s well suited to mobile devices.

Today, Google’s AI engine runs on server CPUs and GPUs as well as chips commonly found in smartphones. But according to Google engineer Rajat Monga, the company built TensorFlow in a way that engineers can readily port it to other hardware platforms. Now that the tool is open source, outsiders can begin to do so, too. As Dean describes TensorFlow: “It should be portable to a wide variety of extra hardware.”
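As a small example of what that portability looks like in the TensorFlow Python API that later shipped (a sketch, not Google's internal code), the same operation can be pinned to a different backend just by changing a device string:

```python
import tensorflow as tf

a = tf.random.uniform((512, 512))
b = tf.random.uniform((512, 512))

# Pin the computation to the CPU backend.
with tf.device("/CPU:0"):
    cpu_result = tf.matmul(a, b)

# If a GPU is present, only the device string changes; the math does not.
if tf.config.list_physical_devices("GPU"):
    with tf.device("/GPU:0"):
        gpu_result = tf.matmul(a, b)
```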
