On Thursday morning, I attended a Q&A panel with four top Samsung smartphone executives. Until 2025, Samsung was the world’s largest smartphone manufacturer, and by extension, the world’s largest maker of cameras. It’s now the second largest, behind Apple.
Samsung handed me the microphone first. I asked:
We see a divide in society between people who want AI to do impressive things with their photos and videos, and those who don’t want AI to do anything with photos and videos because it’s eroding our ability to believe that what we have seen is real, destroying the concept of photographic evidence.
Metadata tools like C2PA have utterly failed to stem the tide. Does Samsung have any new and different ideas on how to prevent AI images from taking over the world?
Samsung’s four executives did not have any new and different ideas to share.
I will credit Won-Joon Choi, the mobile division’s COO and R&D chief, for not dodging the question. He told the room that the erosion of reality is a problem and he wants to fix it.
But he, and other Samsung executives, suggested that the company needs to balance the desire for photographic reality with letting smartphone buyers be “more creative.” They passed the buck by suggesting it’s an industry-wide problem, one that requires a broader conversation, and suggested that Samsung has already partially solved it simply by adding a watermark to AI-generated images. A watermark that can easily be removed.
Given an opening later, an exec suggested our feelings toward AI-generated content might become more favorable in the future.
Here are some of those answers.
“We recognize the issue, because a lot of content is generated by AI,” Choi began. “On the one hand, people want to be more creative, so we believe we have to provide a solution so people can be more creative,” he said. “On the other hand, it’s really hard nowadays to distinguish the real photos and videos from the fake ones. I think this is a problem, we recognize that, and I think it’s a problem we have to solve at an industry level.”
“The C2PA, you may view it as a failure, but it’s still enough to provide a mechanism if people want to validate that those pictures and videos are made by AI. I think we have to provide a mechanism so people can use it. I think it’s an ongoing effort throughout the industry to solve this problem,” he added.
“I think if we make joint efforts to solve this problem, I believe that we should be able to do it,” he concluded.
As my colleague Jess Weatherbed wrote earlier this week, we’re starting to worry that this sort of “the industry will solve this together” rhetoric, and the C2PA itself, are substitutes for actual meaningful action.
Perhaps responding to our Samsung AI slop watch reporting, Samsung America exec Dave Das also chimed in to say that the company’s still learning about how much AI is acceptable when creating its own ads. He admitted that the company has “just started utilizing some AI content in our creative, and the feedback has been pretty clear.”
“We’re trying to discern what is the right place to use it, and absolutely how to be very clear about when we are using AI generated content vs naturally generated content,” Das said.
But Das still frames it as a balancing act between business priorities, not a social responsibility. “It really boils down to giving the creator choice,” he said, suggesting that Samsung’s job is to find “the right balance.”
Later in the morning, KTLA-TV tech reporter Rich DeMuro felt out Samsung’s stance with a very different question. He asked whether Samsung might want to make it easier for customers to remove its AI watermark from AI-generated photos.
“If I’m making a Christmas card with all your fun tools, do I really want that to say ‘AI-generated content?’” asked DeMuro.
Drew Blackard, Samsung America’s SVP of mobile product management, took that one. He concluded: “If we feel like consumers really want that ability to remove watermarks, and there are other ways to address the first point around authenticity at the same time, we’ll do both things and deliver the best of what people are wanting out of the experiences.”
“At least right now, there’s enough concern from consumers around authenticity that that’s the primary thing we’ve solved for, in terms of the watermarking both in the metadata and on the photo itself. Not all services do that,” he said.
Blackard also suggested we might look back one day and realize that AI-generated content isn’t such a big deal, that our perceptions might change over time, the same way perceptions changed toward user-generated content to begin with. People were concerned then with “the amount of content now flowing into the system” and how it wasn’t the professional video product they were used to, he said.
I wonder if Samsung and fellow smartphone makers have considered the opposite: that perceptions of AI-generated imagery might become even less favorable after people lose their jobs in favor of cheap-to-produce AI slop, and after it becomes far easier to lie and cheat and steal in the absence of reliable recorded evidence.
I wonder if they’ve considered solving this problem “at an industry level” before they break the dam, and whether perception might turn against Samsung for helping create the flood that comes after.