Artificial intelligence is being woven into an array of the company’s products. But the change — for now — is subtle.
SAN FRANCISCO — There was a time when Google offered a wondrous vision of the future, with driverless cars, augmented-reality eyewear, unlimited storage of emails and photos, and predictive texts to complete sentences in progress.
A more modest Google was on display on Wednesday as the company kicked off its annual developer conference. The Google of 2022 is more pragmatic and sensible — a bit more like its business-focused competitors at Microsoft than a fantasy playland for tech enthusiasts.
And that, by all appearances, is by design. The bold vision is still out there — but it’s a ways away. The professional executives who now run Google are increasingly focused on wringing money out of those years of spending on research and development.
The company’s biggest bet in artificial intelligence does not, at least for now, mean science fiction come to life. It means more subtle changes to existing products.
“A.I. is improving our products, making them more helpful, more accessible, and delivering innovative new features for everyone,” Sundar Pichai, Google’s chief executive, said on Wednesday.
In a presentation short on wow moments, Google stressed that its products were “helpful.” In fact, Google executives used the words “help,” “helping” or “helpful” more than 50 times during two hours of keynote speeches, including in a marketing campaign for its new hardware products with the line: “When it comes to helping, we can’t help but help.”
It introduced a cheaper version of its Pixel smartphone, a smartwatch with a round screen and a new tablet coming next year. (“The most helpful tablet in the world.”)
The biggest applause came for a new Google Docs feature in which the company’s artificial-intelligence algorithms automatically summarize a long document into a single paragraph.
At the same time, it was not immediately clear how some of the other groundbreaking work — like language models that better understand natural conversation or that can break a task down into logical smaller steps — would ultimately lead to the next generation of computing that Google has touted.
Certainly some of the new ideas do appear helpful. In one demonstration about how Google continues to improve its search technology, the company showed a feature called “multisearch,” in which a user can snap a photo of a shelf full of chocolates and then find the best-reviewed dark chocolate bar without nuts from the picture.
In another example, Google showed how you can find a picture of a specific dish, like Korean stir-fried noodles, and then search for nearby restaurants serving that dish.
Many of those capabilities are powered by the deep technological work Google has done for years on so-called machine learning, image recognition and natural language understanding. It’s a sign of evolution rather than revolution for Google and other tech giants.
Many companies can build digital services more easily and quickly than in the past because of shared technologies such as cloud computing and storage, but building the underlying infrastructure — such as artificial-intelligence language models — is so costly and time-consuming that only the richest companies can afford to invest in it.
As is often the case at Google events, the company spent little time explaining how it makes money. Google brought up the topic of advertising — which still accounts for 80 percent of the company’s revenue — after an hour of other announcements, highlighting a new feature called My Ad Center. It will allow users to request fewer ads from certain brands or to highlight topics they would like to see more ads about.