There’s a lot of bleeding-edge technology experimentation going on over at NVIDIA, from proving the moon landing actually happened to creating incredible 3D images with ray tracing on its latest video cards. Now there’s new research that will enable artists to casually sketch out a scene and then use artificial intelligence to automatically create a photorealistic image of somewhere in the world.
A novice painter might set brush to canvas, aiming to create a stunning sunset landscape — craggy, snow-covered peaks reflected in a glassy lake — only to end up with something that looks more like a multi-colored inkblot. But a deep learning model developed by NVIDIA Research can do just the opposite: it turns rough doodles into photorealistic masterpieces with breathtaking ease. The tool leverages generative adversarial networks, or GANs, to convert segmentation maps into lifelike images. – NVIDIA Blog
Imagine you’re the art director or location scout for your next film. It calls for unique locations, and to help, the director has given you storyboards to guide the kind of set or location you should consider shooting on. Sketches are nice, but photos can be even better. That’s where NVIDIA’s new software, called GauGAN, comes in handy.
GauGAN can take simple doodles and use artificial intelligence to create unique, photorealistic images from them. We’re not talking about going out and finding an image online that is kind of close to what you’re looking for. No. The neural network uses segmentation maps to label each element of the drawing, then uses machine learning to synthesize those elements. The result is an image that looks like it could have been taken with a DSLR on location. And it isn’t a cut-and-paste application. “This technology is not just stitching together pieces of other images, or cutting and pasting textures,” Catanzaro said. “It’s actually synthesizing new images, very similar to how an artist would draw something.”
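To make the “segmentation map” idea concrete, here is a minimal sketch of what that input looks like: a grid where each pixel holds a class label (sky, forest, water), which the doodle tool colorizes and a GauGAN-style generator would then fill with realistic texture. The class labels and palette below are hypothetical, chosen only for illustration, not GauGAN’s actual class IDs.

```python
import numpy as np

# Hypothetical palette: class label -> RGB color (not GauGAN's real class IDs)
PALETTE = {
    0: (135, 206, 235),  # sky
    1: (34, 139, 34),    # forest
    2: (65, 105, 225),   # water
}

def colorize_segmentation(label_map):
    """Turn a 2D array of class labels into a flat-colored RGB image.

    This flat "coloring book" image is the kind of input a GauGAN-style
    generator consumes; the trained network then synthesizes photorealistic
    texture, lighting, and reflections for each labeled region.
    """
    h, w = label_map.shape
    image = np.zeros((h, w, 3), dtype=np.uint8)
    for label, color in PALETTE.items():
        image[label_map == label] = color  # paint every pixel of this class
    return image

# A rough doodle: sky on top, a band of forest, and a lake below.
doodle = np.zeros((6, 8), dtype=np.int64)
doodle[2:4, :] = 1   # forest band across the middle
doodle[4:, :] = 2    # lake along the bottom
rgb = colorize_segmentation(doodle)
print(rgb.shape)  # (6, 8, 3)
```

The point of the sketch is the division of labor: the artist only supplies coarse region labels, and everything photorealistic comes from what the network learned from real images.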
“It’s much easier to brainstorm designs with simple sketches, and this technology is able to convert sketches into highly realistic images,” said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA.
Change the sky color, and the AI adds clouds and adjusts the overall lighting of the image. Draw a simple brown line across the horizon, and the AI interprets it as a mountain pass and adds a ridge. Draw a blue line from top to bottom, and suddenly you have a waterfall cascading down into a pool of water. Change the season to winter, and the scene fills with snow and overcast skies.
“It’s like a coloring book picture that describes where a tree is, where the sun is, where the sky is,” Catanzaro said. “And then the neural network is able to fill in all of the detail and texture, and the reflections, shadows and colors, based on what it has learned about real images.”
The AI has also learned that when you have a lake or a pond, there’s usually a reflection, so it creates a convincing imitation complete with ripple distortion. Users will also be able to change the time of day, and GauGAN will adjust the shadows and the color temperature of the scene to match.
I can imagine that as this technology matures, we’ll have digital location scouts that can not only find the perfect location but create it in a computer using artificial intelligence. And once again, the state of the art will change.
NVIDIA is showcasing GauGAN at the annual GPU Technology Conference, where attendees will be able to try out the app for themselves with an interactive demo in the NVIDIA booth.