Rendering Light Propagation / Light In Flight
WurblPT can keep track of light propagation, which makes it possible to visualize how light is transported in a scene and to simulate Time-of-Flight-based distance sensors accurately.
For this to work, WurblPT needs to know the distance a ray has travelled from the camera center into the scene.
That information is usually not available in path tracers since they employ Next Event Estimation (NEE): light paths fork at each interaction of a ray with an object, so there is no unique light travel distance per path. Instead, the radiance gathered along all path segments is simply added up in the sensor in one final step.
The simple and direct approach to keeping distance information per path is to follow just one ray at a time through the scene, without forking. But that excludes the use of NEE and therefore comes with major performance drawbacks.
With a restructuring of the central path tracing function, WurblPT can instead keep track of distance information and evaluate it at each interaction of the ray with an object in the scene, without losing the benefits of NEE:

Left: classic path tracing with Next Event Estimation (NEE) at each interaction point; right: distance-preserving approach that keeps the advantages of NEE
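The following is a minimal sketch of how such a restructured loop can look. It is not the actual WurblPT code: all type and function names are illustrative, and radiance is reduced to a single value for brevity. The point is that the accumulated path lengths are available at every interaction and can be handed to the sensor together with each NEE contribution:

```cpp
// Illustrative sketch of a restructured path tracing loop; not the WurblPT code.

struct Hit {
    bool valid;
    float distance;          // length of the last ray segment
    float refractiveIndex;   // of the medium that segment travelled through
    float emitted;           // radiance emitted at the hit point
};

struct LightSample {
    bool valid;
    float radiance;          // unoccluded direct-light contribution
    float distance;          // from the hit point to the sampled light
};

struct Sensor {
    float value = 0.0f;
    virtual ~Sensor() = default;
    // Called at every ray/object interaction instead of once per path.
    virtual void accumulateRadiance(float radiance,
                                    float geometricPathLength,
                                    float opticalPathLength)
    {
        // Simplest form: just accumulate, exactly as in classic path tracing.
        (void)geometricPathLength;
        (void)opticalPathLength;
        value += radiance;
    }
};

// Stand-ins for the usual scene queries of a path tracer.
Hit intersect();
LightSample sampleDirectLight(const Hit& hit);
float bsdfWeight(const Hit& hit);

void tracePath(Sensor& sensor, int maxDepth)
{
    float throughput = 1.0f;
    float geomLength = 0.0f;  // metric distance travelled from the camera center
    float optLength = 0.0f;   // the same distance weighted by the refractive index

    for (int depth = 0; depth < maxDepth; depth++) {
        Hit hit = intersect();
        if (!hit.valid)
            break;

        // Path lengths from the camera up to this interaction point.
        geomLength += hit.distance;
        optLength += hit.distance * hit.refractiveIndex;

        // Emitted radiance at the hit point is reported with the current path
        // lengths (handling of double counting with NEE is omitted here).
        if (hit.emitted > 0.0f)
            sensor.accumulateRadiance(throughput * hit.emitted, geomLength, optLength);

        // Next Event Estimation: the direct-light contribution is handed to the
        // sensor right here, with the lengths extended by the segment to the light.
        LightSample ls = sampleDirectLight(hit);
        if (ls.valid)
            sensor.accumulateRadiance(throughput * ls.radiance,
                                      geomLength + ls.distance,
                                      optLength + ls.distance * hit.refractiveIndex);

        // Continue the path as usual (BSDF sampling, throughput update, ...).
        throughput *= bsdfWeight(hit);
    }
}
```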
In the sensor implementation, this results in additional parameters for the geometric and optical path lengths of the accumulateRadiance() function. This function is now called at every ray-object interaction instead of just once per ray, and lets a sensor react differently depending on how far the light has to travel from the current interaction position to reach the sensor.
In its simplest form, this function does nothing more than accumulate radiance, which is exactly what happens in the classic path tracing approach, so there is no performance penalty. For special effects or simulation purposes, however, a sensor implementation can take the additional information into account and thereby keep track of how light propagates in the scene.
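As a hypothetical example of such a sensor, the following sketch (built on the illustrative Sensor interface from the loop sketch above, not on WurblPT's actual API) records only the contributions whose light travel time falls into a short exposure window:

```cpp
// Hypothetical sensor: converts the optical path length into a light travel
// time and records only contributions that arrive inside a short time window.

struct TransientSensor : Sensor {
    float windowStart;   // start of the exposure window, in seconds
    float windowEnd;     // end of the exposure window, in seconds
    static constexpr float speedOfLight = 299792458.0f;  // in vacuum, m/s

    TransientSensor(float start, float end) : windowStart(start), windowEnd(end) {}

    void accumulateRadiance(float radiance,
                            float geometricPathLength,
                            float opticalPathLength) override
    {
        (void)geometricPathLength;
        // Travel time of this contribution along its path to the sensor.
        float t = opticalPathLength / speedOfLight;
        if (t >= windowStart && t < windowEnd)
            value += radiance;
    }
};
```

Rendering a sequence of frames with successive time windows then produces an animation of light propagating through the scene, which is the basic idea behind the videos below.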
The first example video shows how light would propagate from the light source to the sensor.
On the left, the light source is only switched on for an extremely short duration, and you can
see that the first light that reaches the camera comes via the shortest path directly from the light source,
while indirect reflections take longer to reach the camera. On the right, you see an accumulation
of that effect, which simulates a very slow propagation of light: once you switch the light source on,
you can see the direct and indirect light paths building up.
Light-in-Flight Rendering in the RTTNW cover scene: camera view (also on YouTube):
The second example visualizes the same effects, but ignores the light travel time from the last interaction to the sensor. This makes it possible to see how light propagates in the scene: once it interacts with something, it becomes immediately visible. In contrast to the first example, this one has no direct physical interpretation, but it has the advantage of showing light propagation independently of the camera location.
Light-in-Flight Rendering in the RTTNW cover scene: global visualization (also on YouTube):
Have a look at wurblpt-rttnw to see how these videos were produced.