Adobe Developing Voice-Activated Photo Editing Feature

By James DeRuvo (doddleNEWS)

Adobe is working on an AI digital assistant for photo editing, powered by its Sensei platform, which lets you find and edit your photos using your voice, much like Apple’s Siri and Amazon’s Alexa. If it’s a success, can a version for Premiere Pro and After Effects be far behind?

“Our Adobe Research team is exploring what an intelligent digital assistant for photo editing might look like. With Adobe Sensei we combined the emerging science of voice interaction with a deep understanding of both creative workflows and the creative aspirations of our customers. Our speech recognition system is able to directly accept natural user voice instructions for image editing, either locally through on-device computing or through a cloud-based natural language understanding service.

“This is a first step towards a robust multimodal voice-based interface which allows our creative customers to search and edit images in an easy and engaging way using Adobe mobile applications.” – Adobe (via Resource Magazine)

The Siri-like interface (even the voice sounds familiar) of Adobe Sensei engages when a microphone icon is tapped on an iPad, and presumably an iPhone. This activates the digital assistant and allows the user to give verbal commands describing how they would like their photos edited.

Basic commands such as cropping, flipping horizontally, undoing, and saving are featured in the demo video, plus sharing directly to Facebook and likely other photo-sharing sites. Other Sensei features include visual search of Adobe Stock, font matching, and making tools like Liquify “face aware.”
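To make the idea concrete, here’s a minimal, hypothetical sketch (in Swift) of how recognized phrases like the ones in the demo might be mapped to edit operations. Adobe hasn’t published how Sensei parses commands, so the names and logic below are purely illustrative.

```swift
import Foundation

// Illustrative only: a toy mapping from recognized phrases to edit operations.
// These operation names are hypothetical, not Adobe's API.
enum EditOperation {
    case crop
    case flipHorizontal
    case undo
    case save
    case share(destination: String)
}

func operation(for transcript: String) -> EditOperation? {
    let phrase = transcript.lowercased()
    if phrase.contains("crop") { return .crop }
    if phrase.contains("flip") { return .flipHorizontal }
    if phrase.contains("undo") { return .undo }
    if phrase.contains("save") { return .save }
    if phrase.contains("share") { return .share(destination: "Facebook") }
    return nil
}

// Example: operation(for: "Flip the photo horizontally") returns .flipHorizontal
```

A real assistant would, of course, use a full natural language understanding model rather than keyword matching, but the pipeline is the same: transcribe speech, infer an intent, then apply the corresponding edit.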

The development is a response to the voice-activated interface trend, which started with Siri and Google Assistant and has expanded rapidly since Amazon introduced Alexa and its Echo home assistant devices.

Now, it seems everyone is getting into the AI digital assistant game, and Adobe is joining the party. The proof of concept comes on the heels of Apple opening up Siri development to third parties in the iOS 10 SDK, and it works with Adobe applications on the iPhone and iPad, as seen in the demo.
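For context, the iOS 10 SDK’s SiriKit includes a photo-search domain that third-party apps can adopt through an Intents extension. A minimal sketch of such a handler is below; it’s illustrative only, not Adobe’s code, and Adobe’s prototype may rely on its own speech recognition rather than SiriKit.

```swift
import Intents

// Illustrative SiriKit photo-search handler (iOS 10+), not Adobe's implementation.
// It only shows the kind of hook the iOS 10 SDK exposes to third-party apps.
class PhotoSearchIntentHandler: NSObject, INSearchForPhotosIntentHandling {

    func handle(intent: INSearchForPhotosIntent,
                completion: @escaping (INSearchForPhotosIntentResponse) -> Void) {
        // Hand the spoken search terms off to the host app to display matching photos.
        let activity = NSUserActivity(activityType: "com.example.searchPhotos") // hypothetical identifier
        activity.userInfo = ["searchTerms": intent.searchTerms ?? []]
        completion(INSearchForPhotosIntentResponse(code: .continueInApp, userActivity: activity))
    }
}
```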

But Adobe is also bringing machine learning and AI algorithms to its Creative Cloud platform, so you can bet it will try to expand the feature to the desktop, at least in the Apple ecosystem, where Siri now works on macOS. It’ll be interesting to see whether Adobe brings it to Windows 10 with Cortana as well, but so far it’s only a prototype interface on iOS.

It’ll also be interesting to see whether Adobe expands the feature beyond basic photo editing. Can you imagine telling your computer how you want your project edited in Premiere Pro, and it just does it? Combine machine learning with automation, and it could happen.

Disney Research has been working on auto-editing apps, as has GoPro, but those only produce basic assembly edits. What if you could run an advanced post-production workflow that way? It may be several years off, but that seems to be where we’re heading.

Check out Adobe’s Sensei site here.

Hat tip: The Verge and Resource Magazine
