MOUNTAIN VIEW — Google CEO Sundar Pichai broke new ground in artificial intelligence Tuesday when he unveiled a voice assistant that sounds remarkably like a human.

At the I/O developers conference, Pichai introduced Google Duplex, which allows the Assistant to speak with human-like cadence and includes artificial intelligence that is able to comprehend context and unclear answers.

Pichai demonstrated Duplex’s ability by having Assistant make reservations with a restaurant and a hair salon in two recorded phone calls. The people receiving the calls seemed to have no idea they were speaking to an AI voice. In the phone calls, Google Assistant said “ums” and “uhs” to make itself sound more human. In its call with the restaurant, which was too busy to take a reservation, Google Assistant responded naturally to questions and remarks made in a thick accent.

AUDIO: Duplex scheduling a hair salon appointment.


Duplex will roll out as an experimental feature in the coming weeks, Pichai said.

“We’ve been working on this technology for many years,” said Pichai. “We’re still developing this technology, and we want to work hard to get this technology and the expectations right.”

Pichai also introduced a slew of artificial intelligence-powered tools across Google’s suite of apps, including Gmail’s upcoming ability to help compose emails, and Google Photos’ editing suggestions and ability to automatically convert document photos into PDF files.

“We are at an important inflection point in computing,” said Pichai. “We know the path ahead needs to be navigated carefully and deliberately.”

Analysts were impressed by Google’s continuing commitment to artificial intelligence.

“Where mobile was once the platform for Google’s development and growth, artificial intelligence is now the basis that underpins the full spectrum of Google’s endeavors,” said CCS Insights analyst Geoff Blaber. “Google is weaving its assistant deeper into services such as maps and making it more immersive through visuals.”

While some media attendees voiced concerns on social media about Duplex’s science fiction-esque potential, one analyst said the technology still had “a long ways to go.”

“At the end of the day, (the calls were) just booking an appointment at a restaurant,” said Creative Strategies’ consumer technology analyst Carolina Milanesi. “These are fairly easy interactions.”

This year’s I/O keynote speech, attended by more than 7,000 developers, Google staff and press, continued last year’s emphasis on AI. In the past year, Google doubled down on its AI efforts, which included Monday’s announcement that its artificial intelligence research will consolidate under a separate Google AI division.


AUDIO: Duplex calling a restaurant.


Artificial intelligence and machine learning ran through every segment of the keynote, from the newest Android software version, titled P, to Google Maps and its Waymo autonomous driving arm.

Augmented reality was another key point in Google’s presentation, especially on Google Maps. In a later version, Google Maps will help users find the correct direction to walk toward with an AR-generated arrow floating in a real-world environment — a feature Milanesi touted as a real game-changer.

“It solves a real-world problem,” said Milanesi. “AR used in this space is something consumers will really get.”

Pichai also preached responsible internet consumption, or “digital well-being.” Google will introduce new features for users to track how much time they spend on the internet — and where — and a new tool called “Be Internet Awesome” for parents to manage their children’s screen time.

Digital well-being features were expanded in Android P, whose beta version was made available to developers Tuesday. Android P learns its user’s behavior to predict the next app, song or action on the smartphone. It also is packed with features that seek to reduce screen time, such as a “wind-down mode” that turns the screen to grayscale after a designated bedtime. Phones running Android P also switch to Do Not Disturb mode when placed face-down on a table or other surface.

Pichai unveiled other AI-powered tools for healthcare and accessibility. One upcoming feature allows Google’s AI to predict a user’s medical events 24 to 48 hours in advance by sorting through more than 100,000 data points per user to make a quantitative prediction, such as the odds the user will be readmitted to the hospital.

Pichai’s careful tone seemed to echo his predecessors’ views on AI, expressed in their annual letter to investors last April. Google co-founder Sergey Brin wrote that artificial intelligence was the “most significant development in computing in my lifetime” and warned of safety concerns ranging from “the fears of sci-fi style sentience to the more near-term questions such as validating the performance of self-driving cars.”

But one thing Pichai and other Google executives did not touch upon in their keynote speeches was privacy and security. As Facebook — which announced a new board director and major organizational changes Tuesday — reeled from revelations that British political firm Cambridge Analytica had obtained millions of users’ personal data without their permission, Google barely mentioned the issues, focusing instead on its own AI-powered features.

“If you get too much into Facebook, then it can come across as defensive,” said Milanesi. “Google is not Facebook.”