Google’s Pixel 10 phones made their official debut this week, and with them, a bunch of generative AI features baked directly into the camera app. It’s normal for phones to use “computational photography” these days, a fancy term for all those lighting and post-processing effects they add to your pics as you snap them. But AI makes computational photography into another beast entirely, and it’s one I’m not sure we’re ready for.
We tech nerds love to ask ourselves “what is a photo?”, half-joking that the more post-processing gets added to a picture, the less it resembles anything that actually happened in real life. Night skies that are too bright, faces with fewer blemishes than a mirror would show, that sort of thing. Generative AI in the camera app is the final boss of that moral conundrum. That’s not to say these features aren’t useful, but at the end of the day, this is as much a philosophical debate as a technical one.
Are photos supposed to look like what the photographer was actually seeing with their eyes, or are they supposed to look as attractive as possible, realism be damned? It’s been easy enough to keep these questions to the most nitpicky circles for now—who really cares if the sky is a little too neon if it helps your pic pop more?—but if AI is going to start adding whole new objects or backgrounds to your photos before you even open the Gemini app, it’s time for everyone to start asking themselves what they want out of their phones’ cameras.
And the way Google is using AI in its newest phones, it’s possible you could end up with an AI photo and not really know it.
Pro Res Zoom
Maybe the most egregious of Google’s new AI camera additions is what it’s calling Pro Res Zoom. Google is advertising this as “100x zoom,” and it works kind of like the wholly fictional “zoom in and enhance” tech you might see in old-school police procedurals.
Essentially, on a Pixel 10 Pro or Pro XL, you’ll now be able to push the zoom all the way to 100x, and on the surface, the experience will be no different from a regular software zoom (which relies on cropping, not AI). But inside your phone’s processor, it’ll run into the same problems that make “zoom in and enhance” seem so ludicrous in shows like CSI.
In short, the problem is that you can’t invent resolution the camera didn’t capture. If you’ve zoomed in so far that your camera lens only saw vague pixels, then it will never be able to know for sure what was actually there in real life.
That’s why this feature, despite seeming like a normal, non-AI zoom on the surface, is more of an AI edit than an actual 100x zoom. When you use Pro Res Zoom, your phone will zoom in as much as it can, then use whatever blurry pixels it sees as a prompt for an on-device diffusion model. The model will then guess what the pixels are supposed to look like, and edit the result into your shot. It won’t be capturing reality, but if you’re lucky, it might be close enough.
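To make that distinction concrete, here’s a rough Python sketch of the two zoom paths as I’ve just described them. To be clear, none of this is Google’s actual code; the function names and the `diffusion_model` object are placeholders. But it shows where cropping ends and generation begins.

```python
# A conceptual sketch (not Google's code) of classic digital zoom
# versus Pro Res Zoom as described above.
from PIL import Image

def software_zoom(img: Image.Image, factor: float) -> Image.Image:
    """Classic digital zoom: crop the frame and upscale it.
    Pure interpolation -- no detail the sensor didn't capture."""
    w, h = img.size
    cw, ch = int(w / factor), int(h / factor)
    left, top = (w - cw) // 2, (h - ch) // 2
    cropped = img.crop((left, top, left + cw, top + ch))
    return cropped.resize((w, h), Image.Resampling.BICUBIC)

def pro_res_zoom(img: Image.Image, factor: float, diffusion_model) -> Image.Image:
    """AI zoom: the blurry crop becomes the *prompt*. A generative
    model invents plausible detail to replace the mush.
    `diffusion_model` is a stand-in for Google's on-device model."""
    blurry = software_zoom(img, factor)
    return diffusion_model.generate(conditioning_image=blurry)  # hypothetical API
```

The first function can only make blurry pixels bigger. The second replaces them with a guess; that’s the whole difference, and it’s the part the “100x zoom” label glosses over.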
For certain details, like rock formations or other mundane inanimate objects, that might be fine. For faces or landmarks, though, you could leave with the impression that you just got a great close-up of, say, the lead singer at a concert, without knowing that your “zoom” was basically just a fancy Gemini request. Google says it’s trying to tamp down on hallucinations, but if a photo spat out by Gemini is something you’re uncomfortable posting or including in a creative project, this will have the same issues—except that, because of the branding, you might not realize AI was involved.
Luckily, Pro Res Zoom doesn’t replace non-AI zoom entirely—zooming in past the usual 5x hardware zoom limit will now give you two results to pick from, one with Pro Res Zoom applied and one without. I wrote about this in more detail if you’re interested, but even with non-AI options available, the AI one isn’t clearly indicated while you’re making your selection.
That’s a much more casual approach to AI than Google’s taken in the past. People might be used to AI altering their photos when they ask for it, but having it automatically applied through your camera lens is a new step.
Ask to Edit
The casual AI integration doesn’t stop once you’ve taken your photo, though. With the Pixel 10, you can now use natural language to ask AI to alter your photos for you, right from the Google Photos app. Simply open the photo you want to change, tap the edit icon, and you’ll see a chat box where you can describe the tweaks you’d like. You can even speak your instructions rather than type them, if you prefer.
On the surface, I don’t mind this. Google Photos has dozens of different edit icons, and it can be difficult for the average person to know how to use them. If you want a simple crop or filter applied, this gives you an option to get that done without going through what could be an otherwise intimidating interface.

The problem is, in addition to using old-school Google Photos tools, Ask to Edit will also let you suggest more outlandish changes, and it won’t clearly delineate when it’s using AI to accomplish them. You could ask it to swap out your photo’s background for an entirely new one, or, if you want a less drastic change, to remove reflections from a shot taken through a window. The issue? Plenty of these edits require generative AI, even seemingly innocuous ones like glare removal, but you’ll have to use your intuition to know when it’s been applied.
For example, while you’ll usually see an “AI Enhance” button among Google Photos’ suggested edits, it’s not the only way to get AI in your shot. Ask to Edit will do its best to honor whatever request you make, with whatever tools it has access to, and based on some hands-on experience I had with it at a demo with Google, that includes AI generation. It might be obvious that it’ll use AI to, say, “add a Mercedes behind me in this selfie,” but I could see a less tech-savvy user assuming they could ask the AI to “zoom out” without knowing that changing an aspect ratio without cropping also requires generative AI. Specifically, it requires asking an AI to imagine what might have surrounded whatever was in your shot in real life. Since it has no way of knowing this, it comes with an inherently high risk of hallucination, no matter how humble “zoom out” sounds.
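If it helps to picture the problem, here’s a toy Python sketch of the dynamic. I’m speculating about the architecture here, since Google hasn’t published how Ask to Edit routes requests, but the gist is that one chat box silently feeds two very different pipelines.

```python
# A speculative sketch of why "Ask to Edit" blurs the line: one chat box,
# two very different pipelines, and nothing in the UI tells you which
# one handled your request. The categories here are illustrative guesses.

CLASSIC_EDITS = {"crop", "rotate", "apply filter", "adjust brightness"}
GENERATIVE_EDITS = {"replace background", "remove reflections", "zoom out"}

def route_edit(request: str) -> str:
    """Toy router standing in for whatever model Google actually uses."""
    if request in CLASSIC_EDITS:
        return "classic pipeline: every pixel comes from your original photo"
    if request in GENERATIVE_EDITS:
        return "generative pipeline: new pixels are invented by a model"
    return "ambiguous: the user never finds out which pipeline ran"

# "Zoom out" sounds harmless, but it lands in the generative branch:
# the model has to imagine scenery outside the original frame (outpainting).
print(route_edit("zoom out"))
```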
Since we’re talking about a tool designed to help less tech-literate users, I worry there’s a good chance they could accidentally wind up generating fiction, and think it’s a totally innocent, realistic shot.
Camera Coach
Then there’s Camera Coach. This feature also bakes AI into your Camera app, but doesn’t actually put AI in your photos. Instead, it uses AI to suggest alternate framing and angles for whatever your camera is seeing, and coaches you on how to achieve those shots.

In other words, it’s very what-you-see-is-what-you-get. Camera Coach’s suggestions are just ideas, and even though following through on them takes more work on your end, you can be sure that whatever photo you snap is going to look exactly like what you saw in your viewfinder, with no AI added.
That pretty much immediately erases most of my concerns about unreal photos being presented as absolute truth. There is the possibility that Camera Coach might suggest a photo that’s not actually possible to take, say, if it wants you to walk into a restricted area, but the worst you’re going to get there is frustration, not a photo that passes off AI generation as if it’s the same as, say, zooming in.
People should know when they’re using AI
I’m not going to solve the “what is a photo?” question in one afternoon. The truth is that some photos are meant to represent the real world, and some are just supposed to look aesthetically pleasing. I get it. If AI can help a photo look more visually appealing, even if it’s not fully true-to-life, I can see the appeal. That doesn’t erase any potential ethical concerns about where training data comes from, so I’d still ask you to be diligent with these tools. But I know that pointing at a photo and saying “that never actually happened” isn’t a rhetorical magic bullet.
What worries me is how casually Google’s new AI features are being implemented, as if they’re identical to traditional computational photography, which always uses your actual image as a base rather than making stuff up. As someone who’s still wary of AI, seeing AI image generation disguised as “100x zoom” immediately sets off alarm bells for me. Not everyone pays attention to these tools the way I do, and it’s reasonable for them to expect that these features do what they say on the tin, rather than introducing the risk of hallucination.
In other words, people should know when AI is being used in their photos, so that they can be confident when their shots are realistic, and when they’re not. Referring to zoom using a telephoto lens as “5x zoom” and zoom that layers AI over a bunch of pixels as “100x zoom” doesn’t do that, and neither does building a natural language editor into your Photos app that doesn’t clearly tell you when it’s using generative AI and when it isn’t.
Google’s aware of this problem. All photos taken on the Pixel 10 now come with C2PA content credentials built in, which record in the photo’s metadata whether AI was used. But when’s the last time you actually checked a photo’s metadata? Tools like Ask to Edit are clearly built to be foolproof, and expecting users to manually scrub through each of their photos to see which ones were edited with AI isn’t realistic, especially when these tools are specifically supposed to cut down the steps between you and your final photo.
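To be fair, the credentials are there if you go digging. Here’s a small Python sketch of what “checking” actually looks like today, using the Content Authenticity Initiative’s open-source c2patool, which, as of this writing, prints a file’s manifest as JSON when you point it at a photo. The AI-detection heuristic at the end is my own rough guess, since assertion labels vary by vendor.

```python
# A minimal sketch of inspecting a photo's C2PA content credentials with
# the open-source `c2patool` CLI. Assumes c2patool is installed on your
# PATH and that its default invocation prints the manifest as JSON.
import json
import subprocess
import sys

def read_credentials(photo_path: str) -> dict | None:
    result = subprocess.run(
        ["c2patool", photo_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest found, or the tool failed
    return json.loads(result.stdout)

if __name__ == "__main__":
    manifest = read_credentials(sys.argv[1])
    if manifest is None:
        print("No content credentials found.")
    else:
        # "trainedAlgorithmicMedia" is the IPTC digital source type for
        # AI-generated content; treat this as a rough heuristic, since
        # exact assertion labels vary by vendor.
        flagged = "trainedalgorithmicmedia" in json.dumps(manifest).lower()
        print("AI indicators found." if flagged else "No obvious AI indicators.")
```

That this takes a command-line tool at all is sort of the point: metadata is a paper trail for people who go looking, not a heads-up for the person taking the picture.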
It’s normal for someone to expect AI will be used when they open the Gemini app, but including it in previously non-AI tools like the Camera app needs more fanfare than quiet C2PA credentials and one vague sentence in a press release. Notifying a user when they’re about to use AI should happen before they take their photo, or before they make their edit. It shouldn’t be quietly marked down for them to find later, if they choose to go looking for it.
Other AI photo tools, like those from Adobe, already do this, through a simple watermark applied to any project that uses AI generation. While I won’t tell you what to think about AI-generated images overall, I will say that you shouldn’t be put in a position where you’re making one by accident. Of Google’s new AI camera features, I’d say Camera Coach is the only one that respects that. For a big new launch from the creator of Android, an ecosystem Google proudly touted as “open” during this year’s Made by Google, a one-out-of-three hit rate on transparency isn’t what I’d expect.