Google Photos is expanding its use of AI to help users edit and enhance their photos. While the company has already leveraged AI for tools like the distraction-removing Magic Eraser and the corrective Photo Unblur feature, it’s now turning to AI for more complex edits with the introduction of Magic Editor. The new tool will combine AI techniques, including generative AI, for editing and reimagining photos, Google says.
The company offered a sneak peek at the new experimental feature at this week’s Google I/O developer conference to show off its capabilities.
With Magic Editor, users will be able to make edits to specific parts of a photo — like the foreground or background — as well as fill in gaps in the image or even reposition the subject for a better-framed shot.
For example, Google showed off how Magic Editor could be used to improve a shot of a person standing in front of a waterfall.
In a demo of the technology, a user is able to first remove the other people from the background of the photo, then remove a bag strap from the subject’s shoulder for a cleaner look. While these types of edits were previously available in Google Photos via Magic Eraser, the ability to reposition the subject is new. Here, the AI “cuts out” the subject in the foreground of the photo, allowing the user to then reposition the person elsewhere in the photo by dragging and dropping.
This is similar to the image cutout feature Apple introduced with iOS 16 last year, which can also isolate the subject from the rest of a photo in order to do things like copy and paste it into another app, lift subjects from images found through Safari search, or place the subject of a photo in front of the clock on the iOS Lock Screen, among other things.
In Google Photos, however, the feature is meant to help users create better photos.
Another demo showed off how Magic Editor’s subject repositioning can be combined with its ability to fill in gaps in an image using AI techniques.
In this example, a boy is sitting on a bench holding a bunch of balloons, but the bench is shifted off to the left side of the photo. Magic Editor lets you pull the boy and the bench closer to the photo’s center and, while doing so, uses generative AI to create more of the bench and the balloons to fill in the rest of the frame. As a final touch, you can swap the gray, overcast sky of the original for a brighter blue one with fluffy white clouds.
The sky-replacement feature is similar to what various other photo-editing apps can do, like Lensa or Lightricks’ Photoleap, to name a couple. But in this case, it’s built into users’ main photo-organizing app instead of requiring the download of a third-party tool.
The result of the edits, at least in the demos, is natural-looking, well-composed images, rather than ones that look heavily edited or obviously AI-generated.
Google says it will release Magic Editor as an experimental feature later this year, warning that there will be times when it doesn’t quite work correctly. Testing and user feedback will help the feature improve over time, the company said, noting that users now edit 1.7 billion photos in Google Photos each month.
It’s unclear, however, whether Google will eventually charge for the feature or perhaps make it a Pixel exclusive. It could also make Magic Editor a Google One subscription perk, as it did with Magic Eraser earlier this year.
The feature will initially be available on “select” Pixel devices, but Google declined to share which phones will receive it first.
The company also said it plans to share more about the AI tech under the hood closer to the feature’s early access release, but it won’t go into detail for now.