Rendered #5: NVIDIA Omniverse and a Volumetric Video Standards Association

Creative production gets much needed "modern" tooling, Google demos 3D Telepresence, and a Volumetric Format Association aims to solidify volumetric video formats

Kyle Kukshtel, Blogger

May 28, 2021

Rendered is a monthly newsletter on 3D rendering technology, game engines, volumetric filmmaking, photogrammetry, and everything in between. It's your guide to emerging realities.

In this issue: creative production gets some much-needed "modern" tooling, Google shows off a 3D Telepresence demo, and a newly formed Volumetric Format Association aims to solidify volumetric video formats.

News

Promethean AI Creation Engine

This is a bit of a follow-up to the concept of "Assisted Creation" I brought up in the last newsletter. From their website:

Promethean AI is world's first Artificial Intelligence that works together with Artists, assists them in the process of building virtual worlds, helps creative problem solving by suggesting ideas and takes on a lot of mundane and non-creative work, so You can focus on what's important. All while learning from and adapting to individual tastes of every single Artist.

Typical ridiculous startup claims aside, what Promethean AI aims to offer is something like auto-complete or Gmail's "smart compose" for environment art design. It's very much the case that every project can be its own special flower that needs specific tweaks, but it's also true that many projects contain easily automatable, redundant work.

Tools like Houdini aim to solve a bit of this, but (mostly) only at a per-asset level. Houdini can help you generate millions of buildings that all look unique while staying stylistically similar, but actually arranging those buildings (or trees, or chairs, or whatever else) in a game world remains somewhat tedious work. Promethean AI aims to solve at least some of that, giving artists and creators a tool that ideally works with them to produce better environments, faster. It's sort of like SpeedTree, but for whole environments, with some hand-wavy AI on top.

The whole keynote is definitely worth a watch, and hats off to the team for really showing a lot here, not just talking about dreams or ambitions.

NVIDIA Omniverse

Speaking of tools, NVIDIA recently unveiled "Omniverse". NVIDIA's press strategy is often "if you know, you know," so at times it can be hard to parse whether something they're showing off is a piece of hardware, software, a plugin, etc.

From their own site:

Omniverse enables universal interoperability across different applications and 3D ecosystem vendors. It provides efficient real-time scene updates and is based on open-standards and protocols. The Omniverse Platform is designed to act as a hub, enabling new capabilities to be exposed as microservices to any connected clients and applications.

As far as I understand it, Omniverse leverages Pixar's USD format (basically, a "scene description" file) in the cloud: the disparate applications used in creative production essentially "commit" their changes to a cloud-hosted USD representation of the scene, and those changes then propagate to the other connected client machines. RTX is involved via "views" on the content, generated by RTX machines in the cloud, giving you high-resolution renders without needing the hardware yourself.

If it’s a bit confusing, it’s because it doesn’t seem like all parts of Omniverse are meant for all projects. VFX people can benefit from the RTX rendering, while game engine work is mostly concerned with asset synchronization.
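To make the USD piece a bit more concrete, here's a minimal sketch of authoring a "scene description" with Pixar's open-source USD Python bindings (the pxr module). The file name and prim paths are placeholders I made up, and this isn't Omniverse-specific code, just the kind of layered scene data that, as far as I can tell, Omniverse synchronizes between connected apps.

```python
# Minimal sketch: authoring a USD "scene description" with Pixar's
# open-source Python bindings (pxr). File name and prim paths are
# placeholders, not anything Omniverse-specific.
from pxr import Usd, UsdGeom

# Create a new USD stage (the scene description file itself)
stage = Usd.Stage.CreateNew("set_dressing.usda")

# Define a transform to act as the scene root, and a simple sphere under it
world = UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Prop_Sphere")
sphere.GetRadiusAttr().Set(2.0)

# Mark the root prim and write the layer to disk. Edits like these are
# (roughly) what would get "committed" to a shared, cloud-hosted stage
# that other connected applications resolve their views against.
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
```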

Creative project collaboration is a notoriously sticky problem to solve, so it's exciting to see someone tackle the issue in what feels like "the right way," and with many vendors already offering plugins and compatibility patches, it seems like it will actually take hold.

Google’s Project Starline

Google strapped “more than a dozen different depth sensors and cameras” to a TV and made a 3D telepresence demo. I think people care? But reading the Wired article that covered the project, it’s clear they have the same sort of ¯\_(ツ)_/¯ reaction I did:

Google’s Project Starline seems especially overengineered, an amalgamation of accessible tech (Google Meet), nerd tech (computer vision! compression algorithms!), and an intricately constructed, unmovable mini studio, all for the sake of … more video meetings.

3D telepresence demos are nothing new. We even did one with Depthkit a few years ago, and Or Fleisher of this very newsletter did one before that. Here's the thing about them: the novelty wears off quickly for the user, and no matter how high-res your screen is, it will still feel like looking at a screen. Yes, yes, they're doing sensor fusion and streaming the data over the internet and reconstructing it at a different location, but what's never talked about is why.

It's presupposed that people want this, that higher-fidelity remote conversations improve the human experience or something. I recognize this is one step of many toward something that may actually achieve those aims, but for now it's hard to see this and not feel like I'm being asked to sit at attention and talk to someone in the most unnatural way possible.

That aside, the fact Google was cagey about the actual depth sensors used is interesting — is Google making a standalone depth sensor? We’ll see!

Volumetric Format Association Founded

From the press release:

The first industry association dedicated to ensuring interoperability across the volumetric video ecosystem has launched. Seven companies have joined forces on the association, including Verizon, ZEISS, RED Digital Cinema, Unity, Intel, NVIDIA, and Canon. The aim of the Volumetric Format Association is to establish a collection of specifications driving adoption of volumetric capture, processing, encoding, delivery, and playback.

This is a Big Deal. Some of the largest players in camera tech, game engines, and realtime technology have collectively decided to form an association dedicated to standardizing volumetric video. This is notable for a lot of reasons, as the volumetric video “format” right now is essentially whatever data container the programmer your company hired thinks it should be.

Of course, this could all go horribly wrong if they collectively decide to develop a standard that only makes sense when paired with their own technology stacks (5G, I'm looking at you), but I'm hopeful. The diversity of companies represented makes me think this is an actual good-faith effort to standardize an ecosystem that has suffered from fragmentation, so I look forward to seeing what comes of it. There's also a legitimizing factor here, as these companies have all collectively acknowledged that volumetric video is really a thing worth caring about.

Worth noting: Microsoft and Epic are absent from the list. Maybe they weren't at the right party at the right time, but Epic's game engine tech stack and investment in virtual production are far more suited to volumetric than Unity's current position, and Microsoft operates many capture stages worldwide. Hopefully they'll both join soon enough.

Note: No papers this issue! We'll be back next month to share the latest in research!

Resources

Amazon Nimble Studio seems to be a full cloud-based VFX suite

It’s hard to parse exactly what is on offer here, but it essentially sounds like access to scalable compute through Amazon, with a flavor suited for creative work (read: creators need GPUs!). Seems like a nice solution for VFX studios that don’t want to invest in workstation machines.

Large list of "foundational or significant books or talks" compiled by people who teach, study, or work around the edges of "immersive experiences," "themed entertainment," and XR

Marie Foulston asked her Twitter followers for the above, and there's a great list of stuff here for anyone looking to get involved in this space.

Epic has published Volume 2 of the Virtual Production Field Guide

Didn't realize we were going to get a second volume! Both this and the first volume are essential reading for anyone looking to get involved with Virtual Production.

Briefly Noted

ARCore gets a Raw Depth API - Expect even more selfie apps

Lightform Project LFX experiments with using a mapped space to show you custom information on any surface - Seems pretty cool!

VTuber (kinda) CodeMiko gets profiled in The Verge - if you don’t know who CodeMiko is and you read Rendered, you are contractually obligated to read this

Google Earth adds a Timelapse feature - Watch climate change happen at a global scale. The video even seems like the intro to a dystopian sci-fi movie.

Epic hosts roundtable discussion on Virtual Humans - it's largely about MetaHumans, but there's some good discussion about the adjacent tech and implications as well

And that’s a wrap! As always, thanks so much for reading this newsletter.

You can subscribe directly on Substack at rendered.substack.com.

Additionally, if you enjoy reading Rendered, please share this newsletter around! We thrive off of our community of readers, so the larger we can grow that pot of people, the better. Thanks for reading!
