You just described ray tracing. The problem is, it’s incredibly computationally expensive. That’s why DLSS and FSR were created to try and make up for the slow framerates.
there’s more to dynamic global illumination than just ray tracing
Not in an ideal world. Ray tracing is how light actually works in real life. Everything we do with global illumination right now is a compromised workaround, since doing a lifelike amount of ray tracing in real time, at reasonable framerates, is still too much for our hardware.
Nothing about 3D animation is ideal. It’s all about reasonable approximations. Needing to build better GPUs to support tracing individual photons is insane when you could just slightly increase ambient lighting in the area of a light source.
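A minimal sketch of the cheap approximation mentioned above, assuming a simple linear distance falloff: instead of tracing photons, just nudge the ambient term upward for surfaces near a light source. The function name and falloff curve are illustrative, not any engine's actual API.

```python
# Hypothetical sketch: raise ambient lighting near a light source instead of
# tracing photons. Falloff curve and names are illustrative only.

import math

def boosted_ambient(base_ambient, surface_pos, light_pos, light_strength, radius):
    """Return an ambient colour nudged upward the closer the surface is to the light."""
    d = math.dist(surface_pos, light_pos)
    falloff = max(0.0, 1.0 - d / radius)   # linear falloff out to `radius`
    boost = light_strength * falloff
    return tuple(min(1.0, c + boost) for c in base_ambient)

# A surface two units from a lamp with a five-unit radius gets a modest bump.
print(boosted_ambient((0.2, 0.2, 0.2), (2, 0, 0), (0, 0, 0), 0.3, 5.0))
```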
I’ve seen an awesome “kludge” method where, instead of simulating billions of photons bouncing in billions of directions off every surface in the entire world, it takes extremely low-resolution cube map snapshots from the perspective of surfaces, roughly one per square of surface area, once every couple of frames, and blends between them over distance to inform the diffuse lighting of a scene, as if it were ambient light mapping rather than direct light. That’s cool because not only can it represent the brightness of emissive textures, it also reduces the need to manually fill scenes with placed key lights, fill lights, and backlights.
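A rough, self-contained sketch of that idea, assuming probes on a regular grid over a surface, each storing an averaged colour from a tiny (stubbed) cubemap capture, and a shading point blending nearby probes by distance. Everything here (`Probe`, `capture_low_res_cubemap`, the 4x4 face resolution) is a hypothetical stand-in, not any particular engine's implementation.

```python
# Hypothetical sketch of grid-placed probes fed by tiny cubemap captures,
# blended by distance at shade time to approximate diffuse/ambient lighting.

import math
import random

CUBEMAP_FACES = 6
FACE_RES = 4  # "extremely low resolution": 4x4 texels per face

def capture_low_res_cubemap(position):
    """Stand-in for rendering the scene from `position` into a tiny cubemap.
    A real renderer would rasterize or trace the scene; here we fake texels."""
    return [[(random.random(), random.random(), random.random())
             for _ in range(FACE_RES * FACE_RES)]
            for _ in range(CUBEMAP_FACES)]

def average_irradiance(cubemap):
    """Collapse the whole cubemap into one ambient colour (a crude diffuse term)."""
    texels = [px for face in cubemap for px in face]
    n = len(texels)
    return tuple(sum(t[i] for t in texels) / n for i in range(3))

class Probe:
    def __init__(self, position):
        self.position = position          # one probe per grid square of surface
        self.irradiance = (0.0, 0.0, 0.0)

    def refresh(self):
        # Done only once every couple of frames, and amortised across probes.
        self.irradiance = average_irradiance(capture_low_res_cubemap(self.position))

def shade_ambient(point, probes):
    """Blend nearby probes by inverse distance to get the ambient term at `point`."""
    weights = [1.0 / (math.dist(point, p.position) + 1e-6) for p in probes]
    total = sum(weights)
    return tuple(sum(w * p.irradiance[i] for w, p in zip(weights, probes)) / total
                 for i in range(3))

# Usage: a 3x3 grid of probes over a floor, refreshed, then sampled at a point.
probes = [Probe((x, 0.0, z)) for x in range(3) for z in range(3)]
for p in probes:
    p.refresh()
print(shade_ambient((1.3, 0.0, 0.7), probes))
```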
I am not educated enough to understand this comment
Light probes, but they don’t update well, because you have to re-render the world from their point of view frequently, so they’re not well suited to dynamic environments.
They don’t need to update well; they’re a compromise to achieve slightly more reactive lighting than ‘baked’ ambient lights. Perhaps one could describe it as ‘parbaked’. Only the probes directly affected by a change in scene conditions need to be updated, and some tentative precalculations for “likely” changes can be tackled in advance, while pre-established probes add no extra processing load because they aren’t updated unless, as noted, something acts on them. If direct light changes and “sticks” long enough to affect the probes, any perceived lag in the lighting will be glossed over by the player’s brain as “oh, my character’s eyes are adjusting, neat how they accounted for that,” even though it’s not actually intentional, just a drawback of the technology’s limitations.
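A minimal sketch of that "parbaked" update policy, under the assumption that each probe has a radius of influence and only probes touched by a scene change are flagged and re-captured, a few per frame. The class and parameter names are hypothetical; `recapture` stands in for whatever expensive re-render the engine would actually do.

```python
# Hypothetical sketch of lazily updated ("parbaked") probes: nothing is
# re-rendered unless a scene change lands within a probe's influence radius,
# and refreshes are budgeted per frame, which is where the perceived lag comes from.

import math

class LazyProbe:
    def __init__(self, position, influence_radius):
        self.position = position
        self.influence_radius = influence_radius
        self.irradiance = (0.1, 0.1, 0.1)  # baked-in default ambient
        self.dirty = False

    def mark_if_affected(self, change_position):
        # A light moved, a door opened, etc. Only flag probes within range.
        if math.dist(self.position, change_position) <= self.influence_radius:
            self.dirty = True

    def update(self, recapture):
        # The costly re-render happens only for flagged probes; clean probes cost nothing.
        if self.dirty:
            self.irradiance = recapture(self.position)
            self.dirty = False

def frame(probes, scene_changes, recapture, budget_per_frame=2):
    """Flag affected probes, then refresh at most `budget_per_frame` of them,
    so a sudden lighting change settles in over several frames rather than instantly."""
    for change in scene_changes:
        for p in probes:
            p.mark_if_affected(change)
    updated = 0
    for p in probes:
        if p.dirty and updated < budget_per_frame:
            p.update(recapture)
            updated += 1
```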