Creations • Google Photos • 2014-2018

Using AI to help users experience their photos in new ways

Context

As soon as phones started shipping with good-quality built-in cameras, the number of photos people took exploded.

Instead of saving the shutter for important events and just the right moment, people were taking selfies, photographing their food, and firing off bursts of consecutive shots to make sure at least one captured the moment.

This flood of photos meant that the few truly meaningful ones got lost in the clutter of the gallery.

Google Photos started to use image recognition and AI to turn these otherwise low-value photos into moments of delight through auto-created animations, collages, movies, and more.

Bringing photos to life

Animation of a jellyfish, auto-created from a burst of source photos

Presenting AI suggestions

Early on, these creations were inserted directly into users’ main gallery view. Through user feedback and research, we came to liken this to placing an unrequested gift on someone’s mantel: to some users, it felt like an intrusion. They wanted complete control over what appeared in their gallery.

In response, we introduced the “Doorstep” model in the Assistant: every new creation was presented there as a suggestion, like leaving the gift on the doorstep. The user could open it and decide whether to keep it or throw it away.

AI can get a lot of things right, but it’s important to let the user have the final say.

Higher impact creations

After the early success of auto creations, a new PM joined with an idea: build more specially curated creations that would have a narrower reach but higher impact.

I partnered with the PM to brainstorm different curation concepts and to design a guided creation process that would let users request these specialty creations.

“… the kinds of movies you might make yourself, if you just had the time”

David Lieb, Director of Product at Google Photos, The Keyword, September 19, 2016

Part 2

Giving control to the user

The new concept movies were a big hit, and, as we had hypothesized, each concept’s impact was limited by the reach of the algorithm that selected its photos.

Users were asking how they could get the new movies. Alongside fine-tuning the backend that generated and pushed movies out to users, we wanted to give users a way to request each of the new concept movies on demand.

Goals

Target for impact

Multiple targeted themes to deliver more impactful experiences.

Scalable design

Make it easy to add more themes in the future.

Enable users to be creators

User control from start to finish gives a sense of ownership.

Team

Designer (Me)

Product Manager

UX Researcher

UX Writer

FE Engineers (Android, iOS, Web)

BE Engineers (curation, delivery)

Scalable design

Each concept needed a certain set of inputs from the user. I simplified those inputs down to a few reusable components that we could mix and match (sketched below):

  • Person selection (one person)

  • Person selection (multiple people)

  • Date/date range

Each component would serve as a step in the creation flow.
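
As a rough illustration of how these mix-and-match components keep the design scalable, here is a minimal sketch assuming a hypothetical theme configuration; the type names, theme titles, and helpers are all illustrative, not the actual Google Photos implementation.

```typescript
// Hypothetical sketch: each movie theme declares the ordered list of
// reusable input steps it needs; the client renders them as a guided flow.

type CreationStep =
  | { kind: "selectPerson" }               // pick exactly one person
  | { kind: "selectPeople"; max?: number } // pick multiple people
  | { kind: "selectDateRange" };           // pick a date or date range

interface MovieTheme {
  id: string;
  title: string;
  steps: CreationStep[]; // mix and match from the components above
}

// Example themes (names are illustrative, not the shipped concepts)
const themes: MovieTheme[] = [
  { id: "growing-up", title: "Growing Up", steps: [{ kind: "selectPerson" }] },
  { id: "best-friends", title: "Best Friends", steps: [{ kind: "selectPeople", max: 2 }] },
  { id: "trip-recap", title: "Trip Recap", steps: [{ kind: "selectDateRange" }] },
];

// Each step kind maps to one reusable UI screen in the guided flow.
function screenFor(step: CreationStep): string {
  switch (step.kind) {
    case "selectPerson":
      return "PersonPicker (single)";
    case "selectPeople":
      return "PersonPicker (multi)";
    case "selectDateRange":
      return "DatePicker";
  }
}
```

Modeled this way, adding a new theme is a data change rather than new UI work, which is what made it easy to keep adding themes over time.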

Designing the user flow

Illustrations

I worked with an external illustrator to create an illustration representing each theme.

Soundtracks

I met with an external vendor and provided creative direction for original theme music for each of the movie concepts.

Final flow

Video made by the marketing team to promote the feature