Sunday, May 28, 2017

Why Google Is Suddenly Obsessed With Your Photos

Google tends to throw lots of ideas at the wall, and then harvest the data from what sticks. Right now the company is feasting on photos and videos being uploaded through its surprisingly popular app Google Photos. The cloud-storage service, salvaged from the husk of the struggling social network Google+ in 2015, now has 500 million monthly active users adding 1.2 billion photos per day. It’s on a growth trajectory to ascend to the vaunted billion-user club with essential products such as YouTube, Gmail, and Chrome. No one is quite sure what Google plans to do with all of these pictures in the long run, and it’s possible the company hasn’t even figured that out. But in a landscape fast becoming dominated by artificial intelligence, data — in this case, your photos — has become its own reward.

At the company’s annual I/O developers conference, Google touted Photos as a signature platform getting a bevy of valuable updates. Users will soon be able to automatically share all their uploaded photos with a loved one, or filter which specific photos are auto-shared by date or topic. A new Suggested Sharing feature will use facial recognition to prompt users to send photos of their friends directly to them, similar to Facebook’s Moments app. The service already uses machine-learning algorithms to classify the objects in photos and make them searchable, so that users can easily find all their pictures of dogs or beer or sunsets. With all these perks, plus unlimited storage, Google Photos is set to become the most convenient, powerful option available for managing a large media library. No wonder the app’s user base has grown so fast. (Though I have my doubts about how “active” these users are — Photos comes preinstalled on Android devices and automatically collects your photos; I mostly use it to look up a friend’s dad’s HBO password that I screencapped once in 2014.)
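To make the classify-then-search idea concrete, here is a toy sketch of how labeled photos become searchable: a classifier assigns labels, and an inverted index maps each label back to the photos that carry it. The filenames and labels are invented for illustration; Google has not published how Photos actually structures its index.

```python
from collections import defaultdict

# Hypothetical classifier output: each photo maps to the labels a model
# assigned it. These filenames and labels are illustrative stand-ins.
photo_labels = {
    "IMG_001.jpg": ["dog", "beach", "sunset"],
    "IMG_002.jpg": ["beer", "friends"],
    "IMG_003.jpg": ["dog", "park"],
}

# Build an inverted index: label -> set of photos, so a query is a lookup.
index = defaultdict(set)
for photo, labels in photo_labels.items():
    for label in labels:
        index[label].add(photo)

def search(label):
    """Return every photo the classifier tagged with this label."""
    return sorted(index.get(label, set()))

print(search("dog"))  # → ['IMG_001.jpg', 'IMG_003.jpg']
```

The expensive machine-learning work happens once, at upload time; after that, finding every picture of a dog is an ordinary dictionary lookup.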

But the question remains: Why is Google offering such a feature-rich product that doesn’t appear to be readily monetizable, outside of the few print photo books the company plans to sell? The simplest answer is that the company wants to keep people within its all-encompassing ecosystem. Today’s tech giants offer to serve as caretakers to our digital lives across a suite of services in exchange for access to our personal information. “Even if Google doesn’t make any money directly from something that it offers, it’s still gathering data,” says Pedro Domingos, a computer science professor at the University of Washington and author of The Master Algorithm. “Increasingly these days, what people perceive at companies is that data is one of your biggest assets.”

What more data could Google possibly need? The search giant has effectively achieved its longstanding goal of “organizing the world’s information,” if you consider only the written word. But even cofounder Larry Page has acknowledged that the company’s mission statement is outdated. The internet is fast becoming dominated by visual messaging, benefiting platforms such as Facebook, Instagram, and Snapchat. Google Photos, especially now that it’s been fine-tuned for sharing, is a back door into the social networking and chat functionalities that Google has been trying and failing to pitch to customers for the last decade. While we allow the company to passively track us through platforms like Chrome and Maps, Google Photos may be the first Google product that persuades people to actively share their personal information with the company en masse since Gmail.

The data obtained from a photo, though, has the potential to be much more sensitive than what’s contained in an email. Google already has plenty of pictures of objects that it’s indexed across the web with its search engine, but it still doesn’t know that much about what individual people look like. To make the Photos app’s sharing and tagging features work, Google has to analyze a photo subject’s facial structure and create a unique “faceprint” for them. The company is currently fighting a lawsuit in Illinois alleging that this facial-recognition technology violates a state law protecting citizens’ biometric data, and the tech hasn’t been rolled out in many parts of Europe for fear it might run afoul of privacy laws. (...)
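A “faceprint” is, conceptually, an embedding: a fixed-length vector a neural network derives from a face image, where two photos of the same person land close together. Here is a minimal sketch of how matching against enrolled faceprints could work, using cosine similarity; the three-number vectors and names are fabricated for illustration, not real model output.

```python
import math

# Illustrative enrolled "faceprints": tiny stand-in vectors, where a real
# system would use embeddings with hundreds of dimensions.
faceprints = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(query, threshold=0.9):
    """Return the enrolled name whose faceprint best matches, if close enough."""
    best_name, best_score = None, threshold
    for name, vec in faceprints.items():
        score = cosine_similarity(query, vec)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# An embedding from a new photo that sits near Alice's enrolled vector:
print(identify([0.88, 0.12, 0.31]))  # → alice
```

The sensitivity concern follows directly from this design: unlike a password, the enrolled vector is derived from your face and cannot be changed if it leaks.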

The cliché when criticizing free internet platforms has always been “You are the product.” Today a more accurate critique might be “You are the resource.” For a long time we worried that tech giants might sell our private information to the highest bidder. But with Silicon Valley throwing all its efforts into artificial intelligence, data itself has become its own currency. Andrew Ng, the researcher who founded the AI project Google Brain, recently called data a “scarce resource.” The firms that have the most of it can create complex machine-learning systems that power essential consumer tech products. The firms that don’t have enough of it probably never will now that we’re all firmly in the camp of Google, Amazon, Facebook, or Apple. “All those [companies] have a built-in, inherent advantage because they have tons and tons of data, and moreover they don’t have to share it with anybody else,” says Alex Rudnicky, a research professor in Carnegie Mellon University’s computer science department. “In order to get the data, they have to provide something of value to users. And that’s kind of nontrivial to figure that out. They get the data, and then they can turn around and pitch these new products that leverage data for something else.”

Google’s entire engineering workflow is fast transitioning to this model. All the AI uses mentioned above — recognizing faces, automatically replying to emails, understanding voice commands — are now organized under a broad machine-learning framework known as TensorFlow. The company is staking its future on this system, scaling it down so that it can work on an Android phone that’s not connected to the internet and scaling it up to power a new AI chip that will let outside companies leverage Google’s machine-learning advancements via the cloud. Rather than creating a bunch of siloed algorithms that execute discrete tasks, Google wants to devise an overarching AI that can deal with a wide variety of tasks, just like humans do. “Over time, what we discovered is that the same machine-learning techniques and algorithms that solve problems in one area could be used in lots and lots of other product areas and product domains,” Jeff Dean, the current leader of the Google Brain research team, said in a March blog post. “And so what you see is this general explosion of machine-learning usage across Google, across now hundreds of teams and thousands of developers using these machine learning techniques to solve problems in their areas.”
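The reuse Dean describes can be sketched schematically: one shared “backbone” turns raw input into features, and lightweight per-product heads interpret those same features for different tasks. Everything below — the normalization backbone, the label and reply heads — is an invented stand-in to show the shape of the idea, not Google’s actual architecture.

```python
def shared_backbone(raw_values):
    """Stand-in for a learned feature extractor (in reality, a deep network)."""
    total = sum(raw_values)
    # Normalize so downstream heads see features on a common scale.
    return [v / total if total else 0.0 for v in raw_values]

def photo_head(features):
    """Hypothetical Photos head: strongest feature becomes the photo label."""
    labels = ["dog", "beer", "sunset"]
    return labels[features.index(max(features))]

def reply_head(features):
    """Hypothetical email head: same features, mapped to a canned reply."""
    replies = ["Cute pup!", "Cheers!", "Beautiful view!"]
    return replies[features.index(max(features))]

# One pass through the shared backbone serves two different "products."
features = shared_backbone([3.0, 1.0, 1.0])
print(photo_head(features))  # → dog
print(reply_head(features))  # → Cute pup!
```

The point of the sketch is economic as much as architectural: the costly component is trained once, and each new product only has to add a thin head on top.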

by Victor Luckerson, The Ringer | Read more:
Image: Getty/The Ringer