The Wonderful Speech From Google CEO About Lens! Don’t Miss It!

0:12 Good morning. Welcome to Google I/O! [Applause] I love you guys, too. I can't believe it's one year already. It's a beautiful day; we've been joined by over 7,000 people, and we are live-streaming this, as always, to over 400 events in 85 countries. Last year was the tenth year since Google I/O started, so we moved it closer to home, to Shoreline, back where it all began. It seems to have gone well: I checked the Wikipedia entry from last year, and there were some mentions of sunburn, so we have plenty of sunscreen all around. It's on us; use it liberally.

1:03 It's been a very busy year since last year, no different from my 13 years at Google. That's because we've been focused ever more on our core mission of organizing the world's information, and we are doing it for everyone. We approach it by applying deep computer science and technical insights to solve problems at scale. That approach has served us very, very well: it is what has allowed us to scale up seven of our most important products and platforms to over a billion monthly active users each. And it's not just the scale at which these products are working; users engage with them very heavily. YouTube not only has over a billion users; every single day, users watch over 1 billion hours of video on YouTube. With Google Maps, every single day users navigate over 1 billion kilometers. So the scale is inspiring to see, and there are other products approaching this scale. We launched Google Drive five years ago, and today it has over 800 million monthly active users.

2:18 Every single week, there are over 3 billion objects uploaded to Google Drive. Two years ago at Google I/O we launched Photos as a way to organize users' photos using machine learning, and today we have over 500 million active users; every single day, users upload 1.2 billion photos to Google. So the scale of these products is amazing, but they are all still working their way up toward Android, which, I'm excited to share, crossed over 2 billion active devices as of this week. As you can see, the robot behind me is pretty happy, too. It's a privilege to serve users at this scale, and this is all because of the growth of mobile and smartphones.

3:12 But computing is evolving again. We spoke last year about this important shift in computing, from a mobile-first to an AI-first approach. Mobile made us reimagine every product we were working on; we had to take into account that the user interaction model had fundamentally changed, with multi-touch, location, identity, payments, and so on. Similarly, in an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems, and we are doing this across every one of our products. So today, if you use Google Search, we rank differently using machine learning. If you're using Google Maps, Street View automatically recognizes restaurant signs and street signs using machine learning. Duo, with video calling, uses machine learning for low-bandwidth situations. And Smart Reply in Allo last year had a great reception, so today we are excited to be rolling out Smart Reply to over 1 billion users of Gmail.

4:17 It works really well. Here's a sample email: if you get an email like this, the machine learning system has learned to be conversational, and it can reply, working out, for example, that Saturday works. It's really nice to see.

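To see the shape of the idea, here is a minimal sketch of smart reply: score a small set of candidate replies against the incoming message and surface the best matches. Gmail's real system uses trained sequence models end to end; the hand-written candidate list and bag-of-words scoring below are stand-ins for illustration only.

```python
import math
import re
from collections import Counter

# Hypothetical canned-response set; the real system learns its response
# space from data rather than using a hard-coded list like this.
CANDIDATE_REPLIES = [
    "Saturday works for me!",
    "Sorry, I can't make it.",
    "Let me check and get back to you.",
]

def bag_of_words(text):
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest_replies(email, k=3):
    email_vec = bag_of_words(email)
    ranked = sorted(CANDIDATE_REPLIES,
                    key=lambda reply: cosine(email_vec, bag_of_words(reply)),
                    reverse=True)
    return ranked[:k]

print(suggest_replies("Does Saturday work for dinner?"))
```
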
4:33 Just like with every platform shift, how users interact with computing changes. Mobile brought multi-touch; we evolved beyond keyboard and mouse. Similarly, we now have voice and vision as two new, important modalities for computing: humans are interacting with computing in more natural and immersive ways. Let's start with voice. We've been using voice as an input across many of our products, because computers are getting much better at understanding speech. We have had significant breakthroughs, and the pace even since last year has been pretty amazing to see: our word error rate continues to improve, even in very noisy environments.

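The metric being tracked here, word error rate, is simply the word-level edit distance between what the recognizer heard and a reference transcript, divided by the number of reference words. A minimal implementation:

```python
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn the first i reference words into the
    # first j hypothesis words (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("ok google play some music",
                      "ok google play so music"))  # 0.2: one error in five words
```
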
5:20 This is why, if you speak to Google on your phone or Google Home, we can pick up your voice accurately even in noisy environments. When we were shipping Google Home, we had originally planned to include eight microphones so that we could accurately locate the source of where the user was speaking from. But thanks to deep learning, using a technique called neural beamforming, we were able to ship it with just two microphones and achieve the same quality.

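"Neural beamforming" means the network learns how to combine the two microphone channels. Its classical ancestor, delay-and-sum beamforming, shows what that buys: delay one channel so sound from the target direction lines up across the microphones, then sum. The mic spacing and sample rate below are assumptions for illustration, not Google Home's specs.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.07       # assumed 7 cm between the two microphones
SAMPLE_RATE = 16000      # Hz

def delay_and_sum(left, right, angle_deg):
    """Align the right channel for a source angle_deg off-axis, then average."""
    delay_sec = MIC_SPACING * np.sin(np.deg2rad(angle_deg)) / SPEED_OF_SOUND
    shift = int(round(delay_sec * SAMPLE_RATE))
    return 0.5 * (left + np.roll(right, -shift))

# Synthetic check: a 500 Hz tone reaches the right mic 2 samples later,
# roughly a source 38 degrees off-axis at this spacing and sample rate.
t = np.arange(1024) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 500.0 * t)
left, right = tone, np.roll(tone, 2)
steered = delay_and_sum(left, right, angle_deg=38.0)
print(np.max(np.abs(steered - tone)))  # ~0: the two channels add coherently
```
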
5:50 Deep learning is what allowed us, about two weeks ago, to announce support for multiple users in Google Home, so that we can recognize up to six people in your house and personalize the experience for each and every one of them. So voice is becoming an important modality in our products.

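One plausible recipe for recognizing up to six people is standard speaker verification, not necessarily Google Home's exact method: store one voice embedding per enrolled user and match each utterance to the closest profile by cosine similarity. The embedding network itself is mocked out with random vectors here.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBEDDING_DIM = 256

# Hypothetical enrolled household members. In a real system these
# embeddings come from a trained speaker-embedding network (d-vectors);
# random vectors stand in for them here.
profiles = {name: rng.normal(size=EMBEDDING_DIM) for name in ("alice", "bob")}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def identify(utterance_embedding, threshold=0.7):
    name, score = max(((n, cosine(utterance_embedding, e)) for n, e in profiles.items()),
                      key=lambda item: item[1])
    return name if score >= threshold else None  # None -> treat as a guest

# Simulate an utterance by alice: her enrollment embedding plus noise.
utterance = profiles["alice"] + 0.2 * rng.normal(size=EMBEDDING_DIM)
print(identify(utterance))  # alice
```
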
6:12 The same thing is happening with vision. Similar to speech, we are seeing great improvements in computer vision. When we look at a picture like this, we are able to understand the attributes behind the picture: we realize it's a boy at a birthday party, there was cake and family involved, and the boy was happy. We can understand all of that better now. In fact, for the task of image recognition, our computer vision systems are now even better than humans. It's astounding progress, and we are using it across our products.

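To make "understanding the attributes behind the picture" concrete, here is what running an off-the-shelf ImageNet classifier looks like. MobileNetV2 is a public model standing in for Google's internal systems, and "party.jpg" is a hypothetical input file.

```python
import numpy as np
import tensorflow as tf

# Pretrained ImageNet classifier (downloads weights on first use).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

img = tf.keras.preprocessing.image.load_img("party.jpg", target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis]
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)

preds = model.predict(x)
# Print the top-3 labels with their confidences.
for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")
```
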
6:45 If you use the Google Pixel, it has a best-in-class camera, and we do a lot of work with computer vision: you can take a low-light picture like this, which is noisy, and we automatically make it much clearer for you. Or, coming very soon: if you take a picture of your daughter at a baseball game and there is something obstructing the shot, we can do the hard work, remove the obstruction, and give you the picture of what matters in front of you.

7:23 We are clearly at an inflection point with vision, and so today we are announcing a new initiative called Google Lens. Google Lens is a set of vision-based computing capabilities that can understand what you're looking at and help you take action based on that information. We will ship it first in Google Assistant and Photos, and it will come to other products.

7:48 So how does it work? For example, if you run into something and you want to know what it is, say a flower, you can invoke Google Lens from your Assistant, point your phone at it, and we can tell you what flower it is; it's great for someone like me, with allergies. Or if you've ever been at a friend's place and crawled under a desk just to get the username and password off a Wi-Fi router, you can point your phone at it and we can automatically do the hard work for you. Or if you're walking down a street and you see a set of restaurants across from you, you can point your phone at them: because we know where you are, and we have our Knowledge Graph, and we know what you're looking at, we can give you the right information in a meaningful way.

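In the Wi-Fi example, the vision half is OCR; the remaining "hard work" is structuring the recognized text so the phone can offer to join the network. A toy sketch, where the sticker layout and field names are assumptions:

```python
import re

def parse_wifi_label(ocr_text):
    """Pull network name and password out of raw OCR text from a router sticker."""
    ssid = re.search(r"SSID[:\s]+(\S+)", ocr_text, re.IGNORECASE)
    password = re.search(r"(?:password|key)[:\s]+(\S+)", ocr_text, re.IGNORECASE)
    if ssid and password:
        return {"ssid": ssid.group(1), "password": password.group(1)}
    return None  # not enough structure recognized

ocr_text = "Model XR-500  SSID: HomeNet-5G  Password: h0rnbill42"  # made-up sticker
print(parse_wifi_label(ocr_text))  # {'ssid': 'HomeNet-5G', 'password': 'h0rnbill42'}
```
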
8:42 As you can see, we are beginning to understand images and videos. All of Google was built because we started understanding text and web pages, so the fact that computers can understand images and videos has profound implications for our core mission.

9:02 When we started working on Search, we wanted to do it at scale. This is why we rethought our computational architecture: we designed our data centers from the ground up, and we put a lot of effort into them. Now that we are evolving for this machine learning and AI world, we are rethinking our computational architecture again; we are building what we think of as AI-first data centers. This is why last year we launched the Tensor Processing Unit (TPU), custom hardware for machine learning. TPUs were about 15 to 30 times faster, and 30 to 80 times more power-efficient, than CPUs and GPUs at that time. We use TPUs across all our products: every time you do a search, every time you speak to Google. In fact, TPUs are what powered AlphaGo in its historic match against Lee Sedol.

10:03 As you know, machine learning has two components: training, which is how we build the neural network and which is very computationally intensive, and inference, which is what we do in real time, so that when you show it a picture we recognize whether it's a dog or a cat, and so on. Last year's TPU was optimized for inference. Training is computationally very intensive; to give you a sense, each one of our machine translation models trains on over 3 billion words for a week, on about 100 GPUs.

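The training/inference split in miniature, with a tiny logistic-regression model standing in for the real networks: training loops over the data many times and adjusts weights (the expensive part), while inference is a single cheap forward pass.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))              # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy labels

w, b = np.zeros(2), 0.0

def forward(inputs, w, b):
    """Inference: one cheap pass through the model."""
    return 1.0 / (1.0 + np.exp(-(inputs @ w + b)))

# Training: many passes over the data, each followed by a weight update.
for _ in range(500):
    p = forward(X, w, b)
    grad_w = X.T @ (p - y) / len(y)        # gradient of the log loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

print(forward(np.array([[2.0, 2.0]]), w, b))  # close to 1: a confident "yes"
```
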
10:39 So we've been working hard, and I'm really excited to announce our next generation of TPUs: Cloud TPUs, which are optimized for both training and inference. What you see behind me is one Cloud TPU board; it has four chips in it, and each board is capable of 180 trillion floating-point operations per second. We have designed it for our data centers, so you can easily stack them: you can put 64 of these into one big supercomputer. We call these TPU pods, and each pod is capable of 11.5 petaflops.

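The pod arithmetic checks out: 64 boards at 180 teraflops each is about 11.5 petaflops.

```python
boards_per_pod = 64
teraflops_per_board = 180
pod_petaflops = boards_per_pod * teraflops_per_board / 1000  # tera -> peta
print(pod_petaflops)  # 11.52
```
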
11:19 It is an important advance in technical infrastructure for the AI era. The reason we named it Cloud TPU is that we are bringing it to you through the Google Cloud Platform: Cloud TPUs are coming to Google Compute Engine as of today. We want Google Cloud to be the best cloud for machine learning, and so we want to provide our customers with a wide range of hardware, be it CPUs, GPUs (including the great GPUs NVIDIA announced last week), and now Cloud TPUs. This lays the foundation for significant progress.
