Nvidia CloudLight tech demo shows off cloud-rendered lighting
Internet-connected devices will theoretically be able to offload graphical computation onto the cloud. But what would that actually look like?
Nvidia's CloudLight demo shows off how the cloud could be used to calculate indirect lighting in games.
CloudLight essentially computes lighting data on a cloud-based server, and then transfers it back to the end user. By focusing on indirect light, Nvidia is trying to overcome the biggest potential pitfall for cloud computing in games: latency. Because direct illumination is still rendered on the local client, cloud-based indirect lighting is an additive feature.
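Conceptually, the split works out to something like the sketch below: shade direct light locally every frame, poll for whatever indirect-lighting buffer the cloud has most recently delivered, and add the two together. The function names here are illustrative assumptions, not CloudLight's actual API.

```python
# Hedged sketch of the client-side composite described above.
# All names (shade_direct, poll_cloud_indirect, etc.) are illustrative,
# not part of CloudLight's actual interface.

def shade_direct(pixel_count):
    """Local, per-frame direct illumination (cheap, latency-critical)."""
    return [0.8] * pixel_count          # placeholder shading math

def poll_cloud_indirect(last_buffer):
    """Non-blocking check for a newer indirect-lighting buffer from the server.

    If nothing new has arrived, the last known buffer is reused, which is why
    a dropped connection degrades to 'stale but plausible' lighting rather
    than to no indirect lighting at all.
    """
    new_buffer = None                   # pretend no packet arrived this frame
    return new_buffer if new_buffer is not None else last_buffer

def composite(direct, indirect):
    """Direct light is rendered locally; the cloud's indirect light is purely additive."""
    return [d + i for d, i in zip(direct, indirect)]

indirect = [0.1] * 4                    # pre-baked fallback shipped with the game
for frame in range(3):
    indirect = poll_cloud_indirect(indirect)     # may lag many frames behind
    color = composite(shade_direct(4), indirect)
```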
Nvidia argues that cloud rendering makes sense for this purpose because it is effective even when an internet connection is unreliable. "In the worst case, the last known illumination is reused until connectivity is restored, which is no worse than the pre-baked illumination found in many game engines today," Nvidia's report details. As seen in the video below, even with significant latency, the effect is rather subtle, making a scene feel a bit more ephemeral.
Nvidia is taking three approaches to cloud-based lighting, but the voxel method is especially exciting because it is ideally suited for mobile devices. With this method, light data is encoded into 3D points that get transferred via H.264 video compression. It requires "almost no computation burden on client devices." Although not practical for large scenes, the low bandwidth and computational requirements of voxel-based light rendering could make it ideal for tablets and phones. And, "in the near future, thermal limits on mobile devices are unlikely to be overcome. To continually improve visual quality at the rate consumers have come to expect, the only solution may be to move some computation to the Cloud," Nvidia says.
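A rough sketch of the idea, under some assumptions about the data layout (this is not CloudLight's actual wire format): the 3D voxel grid of irradiance values is flattened into a 2D atlas of slices, which can then be handed to a stock H.264 encoder as an ordinary video frame.

```python
# Hedged sketch: flattening a 3D voxel irradiance grid into a 2D image so a
# standard video encoder (e.g. H.264) can compress and stream it. The slice
# layout and grid size are assumptions for illustration only.

import math

def voxels_to_atlas(grid, n):
    """Lay the n slices of an n*n*n grid side by side in a 2D atlas.

    grid[z][y][x] holds a scalar irradiance value; the atlas is what would be
    handed to the video encoder as one frame.
    """
    cols = math.ceil(math.sqrt(n))
    rows = math.ceil(n / cols)
    atlas = [[0.0] * (cols * n) for _ in range(rows * n)]
    for z in range(n):
        ox, oy = (z % cols) * n, (z // cols) * n
        for y in range(n):
            for x in range(n):
                atlas[oy + y][ox + x] = grid[z][y][x]
    return atlas

n = 4
grid = [[[(x + y + z) / (3 * (n - 1)) for x in range(n)] for y in range(n)]
        for z in range(n)]
frame = voxels_to_atlas(grid, n)   # successive frames change slowly, which is
                                   # exactly what video compression exploits
```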
There are still practical concerns regarding bandwidth, but it's nonetheless a fascinating proof of concept for the potential of deferred rendering. One can only assume that Microsoft will want to get in touch.
Andrew Yoon posted a new article, Nvidia CloudLight tech demo shows off cloud-rendered lighting.
So they needed a Titan in the servers to achieve reasonable results?
Yeah, I don't think that's going to work when all of the current cloud solutions are using low-powered commodity servers. This idea of giving one user huge timeslices of cloud-based processing to do heavy computations contradicts what these cloud clusters are actually built for.
Where this will look weird, of course, is with light created at the moment a user spontaneously chooses, e.g. when he shoots a gun in a dark room and the muzzle flash illuminates the environment. Even a minor delay would be acutely noticeable in a single-player game. I suppose the game will differentiate between such instances and keep them computed client-side.

We're also talking about a significant, consistent throughput load, and I don't see the current crop of cloud servers handling that well. It sounds like game companies (or GPU manufacturers wanting to make their tech attainable) will need to purchase new, dedicated hardware for the gaming cloud to host such services. This will become a reality with time, but current constraints make me think this tech demo halts at proof of concept and lacks practical application for at least a decent number of years. Then again, Microsoft and Sony will want this tech to improve their new consoles and extend their lifespans, so it would be great if they pushed hard for implementation and the market benefited.
Hey dudebros. I didn't work on this, but I can talk about it a little.
Most games use a combination of dynamic and static lighting. The dynamic lighting comes in the form of lights and some straightforward shader math. The static lighting typically comes from lightmaps, which are just textures that have data in them that say "here's how much light is at this point (and what color it is)."
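As a rough illustration of that split (generic names, not any particular engine's API): the baked lightmap texel supplies the pre-computed static term, and a simple N·L Lambert term stands in for the "straightforward shader math" of a dynamic light.

```python
# Illustrative sketch of the static + dynamic split: a baked lightmap texel
# supplies the (pre-computed, bounce-aware) term, and a simple N.L Lambert
# term supplies the dynamic direct term. Names are generic assumptions.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sample_lightmap(lightmap, u, v):
    """Lightmaps are just textures storing how much light (and what color)
    arrives at each point."""
    w, h = len(lightmap[0]), len(lightmap)
    return lightmap[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

def shade(albedo, normal, light_dir, light_color, lightmap, u, v):
    static_term = sample_lightmap(lightmap, u, v)      # bounces baked in offline
    n_dot_l = max(dot(normal, light_dir), 0.0)         # dynamic light math
    return tuple(a * (s + n_dot_l * lc)
                 for a, s, lc in zip(albedo, static_term, light_color))

lightmap = [[(0.2, 0.05, 0.05)] * 4] * 4   # reddish baked bounce light
color = shade(albedo=(1.0, 1.0, 1.0), normal=(0, 1, 0), light_dir=(0, 1, 0),
              light_color=(0.7, 0.7, 0.7), lightmap=lightmap, u=0.5, v=0.5)
```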
The lightmap data is very high quality, and can take things into account like "bounces". Bouncing matters very much in real life. It's the property that makes things really look connected. For example, think of a red wall next to a white floor--bouncing is what makes the red wall "reflect" in the white floor.
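A tiny worked example of that red-wall / white-floor case, with an assumed 30% form factor (the fraction of the wall's light that reaches the floor):

```python
# One bounce of white light off a red wall tints the white floor.
# The 30% form factor is an assumed value for illustration.

wall_albedo  = (0.9, 0.1, 0.1)      # red wall
floor_albedo = (0.8, 0.8, 0.8)      # white floor
incoming     = (1.0, 1.0, 1.0)      # white light hitting the wall
form_factor  = 0.3                  # assumed fraction of wall light reaching the floor

bounced    = tuple(i * a * form_factor for i, a in zip(incoming, wall_albedo))
floor_tint = tuple(b * a for b, a in zip(bounced, floor_albedo))
print(floor_tint)   # ~(0.216, 0.024, 0.024): the white floor picks up a red cast
```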
By comparison, dynamic lighting almost never does anything like this, because the cost of doing so would be too high (it's more complex than simple math at that point).
The bonus complexity is that bounce lighting really should be affected by dynamic lights. For example, if you point a flashlight or spotlight at that red wall, the reflection in the floor should become even more pronounced.
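Continuing the numbers from the previous sketch: if a dynamic spotlight brightens the red wall, the bounced red on the floor should scale up with it, which is exactly the coupling a static lightmap can't capture on its own.

```python
# Same one-bounce arithmetic as before, now driven by a dynamic light.
# Albedos and form factor are the same assumed illustration values.

def bounce_onto_floor(light_on_wall, wall_albedo=(0.9, 0.1, 0.1),
                      floor_albedo=(0.8, 0.8, 0.8), form_factor=0.3):
    bounced = tuple(l * a * form_factor for l, a in zip(light_on_wall, wall_albedo))
    return tuple(b * f for b, f in zip(bounced, floor_albedo))

ambient_only = bounce_onto_floor((1.0, 1.0, 1.0))
with_spot    = bounce_onto_floor((3.0, 3.0, 3.0))   # spotlight triples light on the wall
# with_spot is three times ambient_only: the red reflection in the floor gets
# noticeably stronger, which a pre-baked lightmap would miss entirely.
```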
This system basically leverages GRID servers to do high quality, static-style lighting in a realtime way. But not quite realtime enough to do on your local GPU in addition to all of the other work you're doing to render your game. (Unless you were, say, valcan_s and wanted to have some spare GPUs sitting around to do just this).
The reason that the lighting is "no worse" than prebaked illumination in cases of connection interruption is that games would basically just fall back to their prebaked lighting in those cases.
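A minimal sketch of that fallback, assuming a simple staleness threshold (the threshold and names are assumptions, not from the paper):

```python
# If the cloud-computed indirect buffer is missing or too stale, just use the
# pre-baked lightmap the game shipped with. MAX_STALENESS_S is an assumed
# tuning value.

import time

MAX_STALENESS_S = 5.0   # assumption: how old a cloud buffer we still trust

def pick_indirect(cloud_buffer, cloud_timestamp, prebaked):
    if cloud_buffer is None:
        return prebaked                                  # never connected
    if time.time() - cloud_timestamp > MAX_STALENESS_S:
        return prebaked                                  # connection dropped
    return cloud_buffer                                  # fresh cloud lighting

indirect = pick_indirect(cloud_buffer=None, cloud_timestamp=0.0,
                         prebaked=[0.1, 0.1, 0.1])
```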
The tech is pretty cool. All of the folks involved in the research are super smart, I hope to see some games or workstation apps take advantage of the tech sooner rather than later.
I can probably answer limited questions about this if folks are interested.
Another question:
So my understanding of SLI is that you've either got each card rendering half the screen, or doing an every-other-frame type thing.
Could this technology offer a different mode for SLI? One card does the normal rendering, with the other card doing these specialized lighting calculations? Would there be any benefit to this configuration?
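Purely as a hypothetical sketch of that configuration (threads stand in for the two GPUs here; this is not an existing SLI mode): one card renders every frame while the other refreshes the indirect-lighting buffer at its own, slower cadence.

```python
# Hypothetical work split: "GPU 0" renders frames at full rate while "GPU 1"
# refreshes the indirect-lighting buffer asynchronously. Threads stand in for
# the two GPUs; nothing here reflects how SLI actually schedules work.

import threading, time

indirect = {"buffer": [0.1] * 4, "lock": threading.Lock()}

def gpu1_indirect_worker(stop):
    while not stop.is_set():
        new_buffer = [0.2] * 4          # stand-in for an expensive GI solve
        with indirect["lock"]:
            indirect["buffer"] = new_buffer
        time.sleep(0.1)                 # GI refreshes slower than the frame rate

stop = threading.Event()
threading.Thread(target=gpu1_indirect_worker, args=(stop,), daemon=True).start()
for frame in range(5):                  # "GPU 0": renders every frame
    with indirect["lock"]:
        gi = list(indirect["buffer"])
    frame_color = [0.8 + g for g in gi]
    time.sleep(0.016)
stop.set()
```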