
graphics – How is a Spherical Harmonics Lightmap generated?


The example you linked is not producing a lightmap, but rather a light probe. As they say in the Future Work section:

The next thing I'd love to do is take this spherical harmonic concept and use it to implement light probes distributed in my scene. I'm curious to see how it all turns out because, for now, the comparisons in the last section were made using a single light probe at the center of the scene (i.e., we generated maps with respect to world space location [0, 0, 0]).

When rendering with light probes, we generally have a sparse graph of sample points in our scene for which we have calculated spherical harmonics coefficients, and we interpolate between them.
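As a rough sketch of that interpolation step (the probe data and the inverse-distance weighting scheme here are my own assumptions, not from the linked example; real engines often use tetrahedral or grid-based blending instead):

```python
import math

# Hypothetical probes: a world-space position plus a flat list of
# SH coefficients (9 per channel for an order-2 expansion; one
# channel shown here for brevity).
probes = [
    {"pos": (0.0, 0.0, 0.0), "sh": [1.0] + [0.0] * 8},
    {"pos": (4.0, 0.0, 0.0), "sh": [0.5] + [0.0] * 8},
]

def interpolate_sh(point, probes):
    """Blend each probe's SH coefficients, weighted by inverse
    distance from the shaded point, coefficient by coefficient."""
    weights = [1.0 / max(math.dist(point, p["pos"]), 1e-6) for p in probes]
    total = sum(weights)
    n = len(probes[0]["sh"])
    return [
        sum(w * p["sh"][i] for w, p in zip(weights, probes)) / total
        for i in range(n)
    ]

blended = interpolate_sh((1.0, 0.0, 0.0), probes)
```

Because SH coefficients are linear, blending them per coefficient like this and then evaluating the result is equivalent to blending the evaluated lighting, which is what makes this interpolation cheap.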

Because each probe is responsible for encoding the way light hits different surfaces over a wide area, we need a lot of coefficients to accurately encode that information. But since our probe graph is quite sparse, we don't mind storing many coefficients, because we only have to store them for a "small" number of probes.

The situation with lightmaps is somewhat the opposite: we have a dense grid of samples, often every half or quarter meter along every surface in the scene. But because they're so tightly spaced, each sample doesn't need to store quite as much subtlety about the variation in light at that spot: that can come through in the differences in light captured at adjacent sample points. So we can afford to discard some of the higher-order coefficients without too noticeable a loss of fidelity.

This GDC presentation about the use of spherical harmonics lightmaps in Star Wars Battlefront II shows they used only the first 4 coefficients for each of R, G, and B, or 12 values in all, compared to 27 if using all 9 coefficients from the sample in the question.
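To make the "first 4 of 9" truncation concrete, here is a sketch of evaluating the standard real SH basis (bands L0 through L2, with the usual Ramamoorthi-Hanrahan constants) at a unit direction. Truncating the lightmap just means dotting against a shorter coefficient list:

```python
def sh_basis_9(d):
    """Real spherical-harmonic basis values (bands L0-L2) at unit
    direction d = (x, y, z)."""
    x, y, z = d
    return [
        0.282095,                         # L0: constant term
        0.488603 * y,                     # L1: three linear terms
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,                 # L2: five quadratic terms
        1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ]

def reconstruct(coeffs, direction):
    """Sum coefficient * basis; passing only the first 4 coefficients
    reproduces the L0+L1 truncation described above."""
    return sum(c * b for c, b in zip(coeffs, sh_basis_9(direction)))
```

With a full 9-coefficient sample `sh`, `reconstruct(sh, d)` gives the order-2 result and `reconstruct(sh[:4], d)` the truncated one; per color channel that is the 4-versus-9 (12-versus-27 total) trade-off.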

They stored the L0 (average illumination) coefficients in the RGB channels of one high dynamic range texture. Then the L1 coefficients were stored in three low dynamic range textures. That means the shader needs just 4 texture taps to reconstruct the lighting, instead of 9.
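A minimal sketch of what the shader does with those four taps for one color channel, assuming the common diffuse-irradiance reconstruction (cosine-lobe convolution constants from Ramamoorthi & Hanrahan; the presentation's exact reconstruction may differ):

```python
import math

# Cosine-lobe convolution constants for diffuse irradiance,
# applied to SH bands 0 and 1.
A0 = math.pi
A1 = 2.0 * math.pi / 3.0
Y0 = 0.282095   # L0 basis constant
Y1 = 0.488603   # L1 basis constant

def irradiance_l1(l0, l1, normal):
    """Diffuse irradiance for one channel from its L0 scalar (the HDR
    tap) and its three L1 values (one from each LDR tap), evaluated
    at the surface normal."""
    x, y, z = normal
    l1y, l1z, l1x = l1   # (y, z, x) ordering of the L1 band
    return A0 * Y0 * l0 + A1 * Y1 * (l1y * y + l1z * z + l1x * x)
```

The L1 band behaves like a dominant-light direction: the dot product against the normal brightens surfaces facing the stored flow of light and darkens those facing away, on top of the L0 average.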

Since lightmaps tend to be quite spatially coherent, they used block compressed formats for both: BC6H for L0 (8 bits per pixel), and BC7 (8 bpp) or BC1 (4 bpp) for the three L1 textures. So that's just 20-32 bits per pixel of lightmap. A 4096×4096 resolution chunk of lightmap would cost you 40-64 MB of VRAM using that scheme, so it's quite doable. And the presentation slides show that you can get some beautiful visuals even with these aggressive reductions in raw data.
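The VRAM figures above fall out of simple arithmetic, which a quick sketch makes explicit (20 bpp = BC6H L0 + three BC1 L1s, 32 bpp = BC6H L0 + three BC7 L1s):

```python
def lightmap_vram_mb(width, height, bits_per_pixel):
    """Raw size in MiB of a lightmap at the given total bits per pixel
    (summed across the L0 texture and the three L1 textures)."""
    return width * height * bits_per_pixel / 8 / (1024 * 1024)

low = lightmap_vram_mb(4096, 4096, 8 + 3 * 4)    # 20 bpp -> 40 MB
high = lightmap_vram_mb(4096, 4096, 8 + 3 * 8)   # 32 bpp -> 64 MB
```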

Importantly, they used this only for indirect lighting, and still evaluated direct lighting separately, which helps preserve the high frequency details that this approximation scheme crushes.
