cross-posted from: https://programming.dev/post/15448730

To learn how to program holograms, I’d like to gather some sources in this post.

The linked paper describes an already-optimised way of rendering holograms. I’d like to find a naive implementation of a hologram, e.g. in ShaderToy, that uses interferometric processing of stored interference patterns, the way a physical hologram works (I guess). I also want this to be a resource for learning how laser holograms work in real life.

To create an introductory project for holographic rendering, the following steps will be required.

  1. Store the interference patterns of a sphere or a cube in a texture. This will be the model of our physically correct hologram. Note: if this step requires saving thousands of textures, we should limit the available viewing angles (if that helps). A sketch of this step follows the list.
  2. Load the rendered patterns as a texture or an array of textures into a WebGL program
  3. Create a shader that will do the interferometric magic to render the sphere/cube from the hologram model
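
As a sketch of step 1, here is one way to do the recording with the classic point-source model: every sample point on the object emits a spherical wave, the waves are summed on the recording plane, and a plane reference wave is added before taking the intensity. All names, sizes, and counts below are illustrative choices, not a reference implementation:

```python
import numpy as np

WAVELENGTH = 633e-9              # He-Ne red laser, metres (arbitrary choice)
K = 2 * np.pi / WAVELENGTH       # wavenumber

def record_hologram(points, plane_size=2e-3, res=512):
    """Sum spherical waves from object `points` (N x 3, metres) on a
    res x res recording plane at z = 0, interfere them with an on-axis
    plane reference wave, and return the recorded intensity map."""
    xs = np.linspace(-plane_size / 2, plane_size / 2, res)
    X, Y = np.meshgrid(xs, xs)
    field = np.zeros((res, res), dtype=np.complex128)
    for px, py, pz in points:
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += np.exp(1j * K * r) / r       # spherical wave from one point
    reference = np.abs(field).mean()          # plane wave scaled to the object beam
    return np.abs(field + reference) ** 2     # what the “film” records

# Illustrative object: 200 point emitters on a 0.5 mm sphere, 5 mm away
rng = np.random.default_rng(0)
theta = np.arccos(rng.uniform(-1, 1, 200))   # uniform sampling on the sphere
phi = rng.uniform(0, 2 * np.pi, 200)
pts = np.stack([0.5e-3 * np.sin(theta) * np.cos(phi),
                0.5e-3 * np.sin(theta) * np.sin(phi),
                5e-3 + 0.5e-3 * np.cos(theta)], axis=1)
hologram = record_hologram(pts)               # the texture for step 2
```

Note that real interference fringes are on the order of the wavelength, so a physically faithful recording needs a much finer grid than 512 px over 2 mm; that, rather than the number of viewing angles, is where the data-size pressure comes from. A single recorded plane already encodes every viewpoint within the solid angle it subtends.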

The performance of the solution is irrelevant. Even if it takes an hour to generate the data and a minute to render one frame at low resolution, that’s fine.

Note: The goal is not to create anything that merely looks like a cool hologram, nor to render 3D objects with volume as with SDFs or volume rendering. It’s all about creating a basic physical simulation of viewing a real hologram.

  • GrappleHat · 3 days ago

    A couple of ideas:

    Encoding holograms:

    • Model the object in 3D space (using Blender maybe?)
    • Use the Angular Spectrum algorithm to model light propagation, its interaction with the object, and its arrival at the recording medium (a code sketch follows this list).
    • Your final recorded hologram should have two maps (aka “images”) across (x, y): a map of the light’s amplitude and another of its phase offset. This is your recorded hologram.
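
    A minimal numpy sketch of that propagation step, under the usual angular-spectrum assumptions (monochromatic light, a square uniform grid, evanescent components dropped); the function name and the grid pitch are my own choices:

    ```python
    import numpy as np

    def angular_spectrum(field, wavelength, dx, z):
        """Propagate a complex field (n x n samples, pitch dx in metres)
        over a distance z. A negative z back-propagates the field."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)                    # spatial frequencies, cycles/m
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # longitudinal wavenumber
        H = np.where(arg > 0, np.exp(1j * kz * z), 0)   # drop evanescent waves
        return np.fft.ifft2(np.fft.fft2(field) * H)
    ```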

    Decoding holograms:

    • Use the angular spectrum algorithm again except reverse the light’s propagation direction. The amplitude and phase maps from the encoding phase are the initial conditions you’ll use for the light.
    • The light’s amplitude and phase information you calculate at various planes above the recording plane is the 3D “reconstructed” image (see the usage sketch below).
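
    With the complex field kept from the encoding step, that decoding is just back-propagation to a stack of planes. In this sketch `amplitude` and `phase` stand for the two maps described above, and the pixel pitch and depths are made-up values:

    ```python
    # Reconstruct intensity slices at a few depths above the recording plane.
    recorded = amplitude * np.exp(1j * phase)            # full complex field
    slices = [np.abs(angular_spectrum(recorded, 633e-9, dx=4e-6, z=-z)) ** 2
              for z in (3e-3, 5e-3, 7e-3)]               # depths in metres
    ```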

    Last thought

    Holography is often used to record information from the real world, and in that process it’s impossible to record the light’s phase during the encoding step. Physicists call it “the phase problem” and there are all kinds of fancy tricks to try to get around it when decoding holograms in the computer. If you’re simulating everything from scratch then you have the luxury of recording the phase as well as the amplitude - and this should make decoding much easier as a result!
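
    For reference, with O the object wave and R the reference wave, an intensity-only medium records (standard textbook form):

    ```latex
    I(x, y) = \lvert O + R \rvert^2
            = \lvert O \rvert^2 + \lvert R \rvert^2 + O R^{*} + O^{*} R
    ```

    The object’s phase survives only in the cross terms against the reference; without a reference beam it is lost entirely, which is the phase problem described above.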

    • @Pawel (OP) · 12 days ago

      Thanks! Finally something concrete. Once I return to this to write a POC I’ll revisit your tips here.

  • I just learned about “Gaussian splats”, and they were the first thing I thought of that might be useful for generating a 3D model that could be converted to a hologram, because the technique specifically uses distance data from reflections to build the model and describe how it fills a volume.

    • @Pawel (OP) · 13 days ago

      I’m wondering whether that kind of data is abstract enough, follows similar principles, and contains enough information to be used as a source for generating interference patterns.

  • @Pawel (OP) · 12 days ago

    Update 1: Added a book that seems to be quite comprehensive, and found some shaders that simulate basic holographic film exposure.

    Book: Introduction to Computer Holography: Creating Computer-Generated Holograms as the Ultimate 3D Image

    This could be the beginning of step 1: https://www.shadertoy.com/view/clyyzd (based on https://www.shadertoy.com/view/DtKSDW).

    All updates will be a part of the original post https://programming.dev/post/15448730