NVIDIA Develops Super Slow Motion Technique with Artificial Intelligence

By James DeRuvo (doddleNEWS)

Artificial Intelligence is poised to dramatically change the way we make movies. Not only from a workflow point of view, but also from a cinematic one. Will AI get to a point that it’s going to auto-edit our movies for us? Probably not. But if you need to convert a regular sequence shot at 30fps into one that’s super slow motion, Nvidia has a way to do it using AI that’s bound to change the game.

“There are many memorable moments in your life that you might want to record with a camera in slow-motion because they are hard to see clearly with your eyes: the first time a baby walks, a difficult skateboard trick, a dog catching a ball. While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices. Our method can generate multiple intermediate frames that are spatially and temporally coherent. Our multi-frame approach consistently outperforms state-of-the-art single frame methods.” – Nvidia White Paper

Working with researchers at Cornell University, engineers at Nvidia created an algorithm that uses a technique called frame interpolation to take a conventional video clip shot at 30 fps and convert it into a super slow motion clip at 240 fps. The artificial intelligence behind it was trained on more than 11,000 video clips shot at 240 frames per second; the algorithm then learned to predict where a given clip shot at 30 fps needs additional frames, and to create and insert those frames into the clip.
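To get a feel for the arithmetic involved, here's a minimal sketch of frame interpolation in Python. It uses simple linear cross-fading between neighboring frames, which is only a stand-in for Nvidia's learned, motion-aware method, but it shows why going from 30 fps to 240 fps means synthesizing seven new frames between every original pair:

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_intermediate):
    """Generate n_intermediate frames between frame_a and frame_b by
    linear cross-fading. (Nvidia's network instead predicts motion
    between the frames; this blend is just an illustration.)"""
    frames = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)          # fractional time, 0 < t < 1
        frames.append((1 - t) * frame_a + t * frame_b)
    return frames

# 30 fps -> 240 fps is an 8x slowdown, so 7 new frames per original pair.
a = np.zeros((4, 4))   # toy "frame" at time 0
b = np.ones((4, 4))    # toy "frame" at time 1
mids = interpolate_frames(a, b, 7)
print(len(mids))       # 7
```

A plain cross-fade like this ghosts badly on fast motion, which is exactly the problem the trained network is meant to solve by reasoning about where objects move between frames.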

And to underscore just how far the technique can slow things down, the AI was stress-tested with video footage from the popular YouTube channel The Slow Mo Guys, who shoot at thousands of frames per second. Even then, the AI was able to insert additional frames without breaking the image.

The technique used Nvidia Tesla V100 GPUs and the cuDNN-accelerated deep learning framework PyTorch to train a neural network that predicts the intermediate frames in a sequence, going all the way up to 240fps. The team then used a separate algorithm to verify the accuracy of the results. The result is fluid, slow motion video that doesn't suffer from blurriness or camera shake.
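The verification step mentioned above can be illustrated with a standard image-quality metric. Peak signal-to-noise ratio (PSNR) is a common way to score a synthesized frame against a ground-truth frame from a real 240fps recording; note this particular metric is an assumption here, not a detail from the article:

```python
import numpy as np

def psnr(reference, prediction, max_val=255.0):
    """Peak signal-to-noise ratio in dB: higher means the predicted
    frame is closer to the ground-truth frame. Identical frames
    score infinity."""
    mse = np.mean((reference.astype(np.float64)
                   - prediction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

truth = np.full((4, 4), 128.0)    # toy ground-truth frame
guess = np.full((4, 4), 128.0)    # toy synthesized frame
print(psnr(truth, guess))         # inf
```

In practice, a clip shot at 240 fps is downsampled to 30 fps, the missing frames are regenerated, and each regenerated frame is scored against the original it replaced.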

The end result is magic, as it can turn an ordinary “Kodak moment” into something even more special, without having to worry about knowing visual effects. “The method can take everyday videos of life’s most precious moments and slow them down to look like your favorite cinematic slow-motion scenes, adding suspense, emphasis, and anticipation,” NVIDIA writes.

You can read more about it here.


About James DeRuvo 801 Articles
Editor in Chief at doddleNEWS. James has been a writer and editor at doddleNEWS for nearly a decade. As a producer/director/writer, James won a Telly Award in 2005 for his short film "Searching for Inspiration." James is a recovering talk show producer from KABC in Los Angeles, and a weekly guest on the Digital Production Buzz with Larry Jordan.
