This year’s Adobe Max 2022 was big on 3D design and mixed reality headsets, but the AI-generated elephant in the room was the emergence of text-to-image generators like Dall-E. How does Adobe plan to respond to these revolutionary tools? Slowly and cautiously, judging by its keynote, but a significant feature buried in the new version of Photoshop shows that the process has already begun.
Tucked away at the end of the release notes for the latest Photoshop v24.0 is a beta feature called ‘background neural filter’. What does it do? Like Dall-E and Midjourney, it lets you “create a unique setting based on a description”. Just describe the background you want, select ‘Create’ and choose your favorite result.
However, it is a far cry from a fully fledged Dall-E rival from Adobe. It’s only available in Photoshop Beta, a testing platform separate from the main app, and you’re currently restricted to typing in colors to produce different photo backdrops, rather than bizarre blends from the darkest corners of your imagination.
But the ‘background neural filter’ is clear evidence that Adobe is, albeit cautiously, delving further into AI imaging. And its keynote at Adobe Max shows that it sees this frictionless way of creating visual imagery as the undoubted future of Photoshop and Lightroom – once the small matter of copyright and ethical standards is resolved.
Adobe didn’t really dwell on the arrival of the ‘background neural filter’ at Adobe Max 2022, but it did reveal where the technology will end up.
David Wadhwani, president of Adobe’s digital media business, effectively said the company has the same technology as Dall-E, Stable Diffusion and Midjourney; it has simply chosen not to build it into its apps yet. “Over the past few years, we’ve been investing more and more in Adobe Sensei, our AI engine. I like to refer to Sensei as your creative co-pilot,” said Wadhwani.
“We’re working on new features that can take our core apps to whole new levels. Imagine being able to ask your creative co-pilot in Photoshop to add an object to the scene by simply describing what you want, or asking your co-pilot to give you an alternative idea based on what you’ve already built. It’s like magic,” he added. It certainly goes a few steps beyond Photoshop’s Sky Replacement tool.
He said this while standing in front of a mocked-up version of what Photoshop would look like with Dall-E-style powers. The message was clear: Adobe could do text-to-image generation at this scale now, but has chosen not to.
But it was Wadhwani’s Lightroom example that showed how this kind of technology could be more sensibly integrated into Adobe’s creative applications.
“Imagine if you could combine ‘gen-tech’ with Lightroom. So you could ask Sensei to turn night into day, or a sunny photograph into a beautiful sunset. To remove shadows, or change the weather. All of that is possible today with the latest advances in generative technology,” he explained, with a not-so-subtle nod to Adobe’s new rivals.
So why hold back while others steal your AI-generated fries? The official reason, and one that certainly has some merit, is that Adobe has a responsibility to ensure that this new power is not exercised recklessly.
“For those unfamiliar with generative AI, it can conjure up an image simply from a text description,” Wadhwani explained. “We want to do this in a way that protects, empowers and champions the needs of creators.”
What does this mean in practice? That’s still a little vague, but Adobe will clearly move more slowly and carefully than the likes of Dall-E. “Here’s our commitment to you,” Wadhwani told the Adobe Max audience. “We are approaching generative technology from a creator-centric perspective. We believe that AI should enhance human creativity, not replace it, and it should benefit creators, not replace them.”
This goes some way to explaining why Adobe has, so far, gone no further than Photoshop’s ‘background neural filter’. But that, too, is only part of the story.
The long game
Despite being the giant of creative apps, Adobe is undoubtedly still very innovative – check out some of the projects at Adobe Labs, particularly those that can transform real-world objects into 3D digital assets.
But Adobe is also susceptible to being caught off guard by fast-moving rivals. Photoshop and Lightroom were built as desktop tools, which is how Canva gained an edge with easy-to-use, cloud-based design tools. That’s why Adobe spent $20 billion on Figma last month, more than Facebook paid for WhatsApp in 2014.
Is the same thing happening with names like Dall-E and Midjourney? Quite possibly, as Microsoft has just announced that Dall-E 2 will be integrated into its new Designer graphic design app, which is part of its Microsoft 365 productivity suite. Text-to-image generation is clearly going mainstream, despite Adobe’s doubts about the speed at which this is happening.
And yet Adobe is also right about the ethical issues surrounding this fascinating new technology. There’s a sizable copyright cloud hanging over the rise of AI imaging, and it’s understandable that one of the founders of the Content Authenticity Initiative (CAI), which was set up to tackle deepfakes and other manipulated content, might be reluctant to go all-in on generative AI.
Still, Adobe Max 2022 and the arrival of the ‘background neural filter’ show that AI imaging will no doubt become a big part of Photoshop, Lightroom and photo editing in general – it may just take a little longer to appear in your favorite Adobe application.