Google I/O 2017: It's Really About AI And Machine Learning, People



MOUNTAIN VIEW, CA - MAY 17: Google CEO Sundar Pichai delivers the keynote address at the Google I/O 2017 Conference at Shoreline Amphitheater on May 17, 2017 in Mountain View, California. The three-day conference will highlight innovations including Google Assistant. (Photo by Justin Sullivan/Getty Images)

This week I’m attending Google’s annual developer conference, Google I/O, in Mountain View, CA. The conference comes on the heels of a number of other big tech events (Microsoft Build last week and Facebook F8 last month), and I went into I/O hoping to see a lot of innovation and some real, standout differentiation from these other tech giants. Here’s my take on the biggest announcements coming out of Day 1, along with some AR/VR news from Day 2.

Looking at the world through Google Lens

CEO Sundar Pichai wasted no time during the Day 1 keynote, forgoing the typical big-picture digital transformation talk in favor of diving right into announcements (after a brief intro lauding Google’s history and recent accomplishments).


Google CEO Sundar Pichai talks about Google Lens which lets you point your phone's camera at places and objects to get information about them, during the keynote address of the Google I/O conference, Wednesday, May 17, 2017, in Mountain View, Calif. Google provided the latest peek at the digital services and gadgets that it has assembled in the high-tech tussle to become an even more influential force in people's lives. (AP Photo/Eric Risberg)

The first thing Pichai announced was a new technology called Google Lens. The concept behind Google Lens is to use Google’s computer vision and AI technology to create a search engine of sorts for images: point the camera at a storefront, and it pulls up the name of the place, its business listing information, customer ratings, and more. Google Lens is Google Goggles for this decade, and it reminds me of Samsung’s Bixby Vision. Google Lens can also be integrated into Google Assistant, mining images for useful information to add to your calendar. In the demo, pointing the camera at a marquee for an upcoming concert let Google Lens extract the date and time of the show, and Assistant dropped the event right into the calendar. Lens can also be paired with Assistant for help with translations. I think Google Lens is a step in the right direction toward the mixed reality world the industry has been priming us for over the last several years. Needless to say, the more consumers use their cameras to capture information, the more Google knows about them, and the more tailored the ads it can serve.
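To make the marquee demo concrete, here is a rough sketch of my own (not Google’s actual Lens implementation) of how a Lens-style "marquee to calendar" flow could be approximated with Google’s public Cloud Vision API for OCR plus the third-party dateutil library for date parsing. The function name and file path are hypothetical.

```python
# Hypothetical approximation of the Lens "marquee to calendar" demo:
# OCR a photo, then fuzzily parse a date/time out of the detected text.
# Uses the public google-cloud-vision client, not Lens itself.
from google.cloud import vision
from dateutil import parser as dateparser


def extract_event_datetime(photo_path):
    client = vision.ImageAnnotatorClient()
    with open(photo_path, "rb") as f:
        image = vision.Image(content=f.read())

    # Full-image OCR; the first annotation is the entire detected text block.
    response = client.text_detection(image=image)
    if not response.text_annotations:
        return None
    marquee_text = response.text_annotations[0].description

    # Naive pass: try to parse each line of the marquee as a date/time.
    for line in marquee_text.splitlines():
        try:
            return dateparser.parse(line, fuzzy=True)
        except (ValueError, OverflowError):
            continue
    return None


if __name__ == "__main__":
    when = extract_event_datetime("marquee.jpg")  # hypothetical input photo
    print(f"Detected event time: {when}")
```

The real product presumably does far more (scene understanding, entity matching against Google’s knowledge graph), but the OCR-plus-extraction core is the same basic idea.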

Introducing the next generation of TPUs, this time for ML training

Next, Pichai introduced Cloud TPUs, the second generation of Google’s Tensor Processing Units for Google Cloud, which Google says will "greatly accelerate machine learning workloads," shifting from inference-only use to both inference and training. Google says each TPU delivers as much as 180 teraflops of floating-point performance, with the ability to combine into pods of 64 TPUs, delivering an impressive 11.5 petaflops.
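For what it’s worth, Google’s pod math checks out; a quick back-of-the-envelope calculation on the stated figures:

```python
# Back-of-the-envelope check of Google's stated Cloud TPU pod numbers.
TFLOPS_PER_TPU = 180          # per-device peak, as stated in the keynote
TPUS_PER_POD = 64

pod_tflops = TFLOPS_PER_TPU * TPUS_PER_POD   # 11,520 teraflops
pod_pflops = pod_tflops / 1000               # 11.52 petaflops

print(f"{pod_tflops:,} TFLOPS = {pod_pflops} PFLOPS")  # ~11.5 PFLOPS, matching the claim
```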

I have a lot of thoughts on this. Google’s use of TPUs for training is probably fine for a few workloads here and now, but given the rapid change in machine learning frameworks, sophistication, and depth, I believe Google is still doing much of its production and research training on GPUs. Just look at the change in the last year: Caffe has been replaced by Caffe2, and the needed sophistication has jumped from identifying, say, a park bench in a picture to identifying the context and details of a video of a family playing in the park around that bench, throwing a frisbee while others sunbathe. Getting locked into one machine learning framework like TensorFlow probably isn’t a very good idea either, as these frameworks ebb and shift; suppose, for instance, you wanted to move from TensorFlow on Google Cloud to CNTK on Azure.
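To illustrate the lock-in concern, here is a minimal sketch of my own: even a trivial model written against TensorFlow’s current (1.x) graph API leans on framework-specific constructs such as placeholders, sessions, and checkpoint formats, none of which have direct CNTK equivalents, so all of it would need rewriting in a move to Azure.

```python
# Illustration of framework lock-in (my example, not Google's):
# a trivial model written against the TensorFlow 1.x graph API,
# current as of I/O 2017. The graph definition, the session, and
# the checkpoint format below are all TensorFlow-specific.
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 784], name="pixels")
w = tf.Variable(tf.zeros([784, 10]), name="weights")
b = tf.Variable(tf.zeros([10]), name="bias")
logits = tf.matmul(x, w) + b  # nothing here translates directly to CNTK

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Checkpoints are also TF-specific; CNTK cannot read them.
    tf.train.Saver().save(sess, "/tmp/model.ckpt")
```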


Google CEO Sundar Pichai delivers the keynote address of the Google I/O conference Wednesday, May 17, 2017, in Mountain View, Calif. Google provided the latest peek at the digital services and gadgets that it has assembled in the high-tech tussle to become an even more influential force in people's lives.(AP Photo/Eric Risberg)

It’s unclear to me what Google is actually getting out of its TPU journey beyond a positioning point: its AI is better because it offers a TPU. One thing to consider is that TPUs could be siphoning off resources from other AI projects. The Google TPU is an ASIC, hard-coded to a limited set of functions that doesn’t change from year to year. We find ASICs in audio and video decoders, where standards like H.264 video are firm and don’t change, which is very different from the rapidly changing machine learning landscape. ASICs also add an extra 18 to 24 months to a development cycle, an eternity in machine learning.
