Dynamic reverb for a procedurally generated 3D world
- Finn Mitchel-Anyon
- Nov 22, 2023
- 4 min read
Updated: Dec 7, 2023
Reverb plays a huge role in immersing players in a 3D world. Used correctly, it gives virtual environments a sense of realism and atmosphere. Used incorrectly, it can create an uncanny effect and completely break immersion. If the player is in a small shed and their footsteps reverberate as though they’re in a massive cathedral, they’re almost definitely going to feel like something’s off.
How most games do reverb
Games often use ‘reverb zones’. When an audio-emitting object enters a reverb zone, the audio engine is told to apply a specific reverb to any sound effects emitted by that object. This solution works great for the vast majority of 3D games, which have pre-designed environments, allowing the audio designer to choose an appropriate reverb for each section of the game.
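As a rough illustration of the idea (and not the approach this post ends up with), a reverb zone can be as simple as a trigger volume that swaps the reverb preset on the listener whenever the player walks in or out. The sketch below uses Unity's built-in AudioReverbFilter; the preset choices and the ‘Player’ tag are assumptions made for the example.
using UnityEngine;

// Minimal reverb-zone sketch: a trigger volume that changes the listener's reverb preset.
// The presets and the "Player" tag are assumptions made for this example.
[RequireComponent(typeof(Collider))]
public class SimpleReverbZone : MonoBehaviour
{
    public AudioReverbFilter listenerReverb; // Reverb filter sitting on the audio listener
    public AudioReverbPreset insidePreset = AudioReverbPreset.Cave;
    public AudioReverbPreset outsidePreset = AudioReverbPreset.Generic;

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            listenerReverb.reverbPreset = insidePreset; // Entering the zone
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            listenerReverb.reverbPreset = outsidePreset; // Leaving the zone
    }
}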
Things get more complicated when it comes to procedurally generated worlds like Minecraft, No Man’s Sky, and Valheim. In these games, there’s no way of knowing exactly what environments the player might encounter.
Systems like Steam Audio and Google’s Resonance Audio allow you to generate physically accurate reverb from any geometry by simulating how sound waves would actually bounce around the environment. However, this is a computationally expensive process, which makes it impractical for a real-time application like this one.
For a procedurally generated world, we ideally want to create an adaptive reverb system which is:
- Computationally efficient
- Adaptable to any environment the player might encounter
- Convincing enough that the player's immersion is never broken
Voxelarium
Voxelarium is a procedurally generated sandbox created in Unity. Basically, it's Minecraft but less blocky. In the game, the player might find themselves in a wide variety of environments, including wide-open caverns, tunnels, fields, etc. It's a perfect candidate for implementing our dynamic reverb system.

The system
Our system takes inspiration from the 'Bubblespace' system used in Tom Clancy’s The Division 2, described here. It uses a combination of horizontal and vertical raycasts to scan the geometry around the player. Using distance data gathered from the raycasts, we are able to get a rough idea of what kind of environment the player might be in. If there’s a lot of space horizontally around and vertically above the player, there’s a good chance that the player is currently outdoors. If there’s very little space around the player, it’s more likely that the player is in some kind of tunnel. Or if it’s somewhere in between, it’s likely that the player is in some sort of cavern.
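As a rough sketch of that scan (using Unity's Physics.Raycast; the ray count and maximum distance are values made up for this example), we can cast a ring of horizontal rays around the player plus one ray straight up, and average the hit distances to estimate the horizontal and vertical size of the surrounding space:
using UnityEngine;

// Rough sketch of the environment scan: a ring of horizontal rays plus one vertical ray.
// The ray count and maximum distance are assumptions made for this example.
public class EnvironmentScanner : MonoBehaviour
{
    public int horizontalRayCount = 16; // Rays spread evenly around the player
    public float maxDistance = 100f;    // Treat anything beyond this as open space

    // Returns (average horizontal distance, vertical distance above the player).
    public Vector2 ScanSurroundings()
    {
        float horizontalSum = 0f;
        for (int i = 0; i < horizontalRayCount; i++)
        {
            // Rotate around the up axis to get an evenly spaced ring of directions.
            Quaternion rotation = Quaternion.AngleAxis(360f * i / horizontalRayCount, Vector3.up);
            Vector3 direction = rotation * Vector3.forward;

            horizontalSum += Physics.Raycast(transform.position, direction, out RaycastHit hit, maxDistance)
                ? hit.distance
                : maxDistance;
        }

        float verticalDistance = Physics.Raycast(transform.position, Vector3.up, out RaycastHit upHit, maxDistance)
            ? upHit.distance
            : maxDistance;

        return new Vector2(horizontalSum / horizontalRayCount, verticalDistance);
    }
}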
Once we've established the main types of environments that the player might encounter, the next step is to create appropriate reverb sends for each of these situations.

Reverb blending
With our different reverb buses set up, we can now use the raycast data to make an informed estimate about what kind of space the player is most likely to be in, and select a reverb send appropriately. But first, we'll need to set up our 'spatial definitions' — data containers which hold the approximate horizontal and vertical size ranges which define a type of space. For example, we might define a ‘Tunnel’ spatial definition, which would cover spaces with a very small horizontal and vertical size.
Each spatial definition should also be linked to a parameter in our audio engine, which will be used to control the send level to the associated reverb.
using UnityEngine;

[System.Serializable] // Serializable so instances can be edited in the inspector
public class SpatialDefinition
{
    public string Name;
    public Vector2 Size;        // The average width and height of this space
    public float Range;         // The range of sizes that this spatial definition covers
    public AK.Wwise.RTPC RTPC;  // The RTPC this definition controls (its reverb send level)
    public Color Color;         // For visualisation purposes
}
There's only one more issue: blending between different spaces. If the player starts outdoors and enters a cavern, there’s going to be a very abrupt change in the reverb as the player moves from what is determined to be one kind of environment to another. We can, of course, just smooth this transition out over time, but this doesn’t address the core issue: Sometimes you’re neither outdoors nor in a cave. Sometimes you’re halfway in between the two environments.
The solution is to instead blend between different reverbs based on the current environment’s resemblance to each spatial definition. If the space seems 75% like a cavern and 25% like a tunnel, we can mix both reverbs proportionally, rather than only applying the cavern reverb. This creates smoother, more natural transitions between different environments.
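As a sketch of how that blend might be computed (the falloff maths, the ambiguity factor and the 0-100 RTPC range are assumptions rather than the exact Voxelarium implementation, and it assumes the Wwise Unity integration's RTPC.SetGlobalValue), each spatial definition gets a weight based on how closely the measured space matches its size, the weights are normalised, and each reverb send is driven by its share:
using System.Collections.Generic;
using UnityEngine;

// Sketch of proportional reverb blending. The weighting scheme, the 'ambiguity' factor and the
// 0-100 RTPC range are assumptions made for this example.
public class ReverbBlender : MonoBehaviour
{
    public List<SpatialDefinition> definitions;
    public float ambiguity = 1f; // Higher values make neighbouring definitions overlap more

    // measuredSize = (average horizontal distance, vertical distance) from the environment scan.
    public void UpdateReverbSends(Vector2 measuredSize)
    {
        var weights = new float[definitions.Count];
        float total = 0f;

        for (int i = 0; i < definitions.Count; i++)
        {
            // Resemblance falls off with distance from the definition's typical size.
            float distance = Vector2.Distance(measuredSize, definitions[i].Size);
            weights[i] = Mathf.Max(0f, 1f - distance / (definitions[i].Range * ambiguity));
            total += weights[i];
        }

        if (total <= 0f) return; // Nothing matches at all; leave the sends as they are.

        for (int i = 0; i < definitions.Count; i++)
        {
            // Normalise so the weights sum to 1, then drive each send's RTPC (assumed 0-100).
            definitions[i].RTPC.SetGlobalValue(weights[i] / total * 100f);
        }
    }
}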
Creating a custom inspector
To help visualise how our different reverbs will be blended, it's useful to create a custom editor in our game engine. In the example below, each colour in the visualisation represents a spatial definition. The X axis represents the average horizontal space around the player, and the Y axis represents the space above the player. The ambiguity variable controls how much the different spatial definitions blend into each other. Now we can see exactly how our different reverbs will blend together, making it a great deal easier to fine-tune the parameters of our spatial definitions.
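A minimal sketch of such an inspector is shown below. It targets the hypothetical ReverbBlender component sketched earlier and colours each pixel of a preview texture by the weighted mix of definition colours; the preview resolution and the maximum size shown on the axes are arbitrary choices for the example.
using UnityEditor;
using UnityEngine;

// Sketch of a custom inspector that previews how the spatial definitions blend.
// X = horizontal space, Y = vertical space; each pixel is the weighted mix of definition colours.
[CustomEditor(typeof(ReverbBlender))]
public class ReverbBlenderEditor : Editor
{
    private const int Resolution = 128; // Preview texture size, an arbitrary choice
    private const float MaxSize = 100f; // Largest space shown on either axis (assumed)

    public override void OnInspectorGUI()
    {
        DrawDefaultInspector();

        var blender = (ReverbBlender)target;
        if (blender.definitions == null || blender.definitions.Count == 0) return;

        // Rebuilt every repaint for simplicity; a real editor would cache this texture.
        var texture = new Texture2D(Resolution, Resolution);
        for (int x = 0; x < Resolution; x++)
        {
            for (int y = 0; y < Resolution; y++)
            {
                // Map this pixel to a horizontal/vertical size and blend the definition colours
                // with the same weighting used for the reverb sends.
                var size = new Vector2(x, y) / (Resolution - 1) * MaxSize;
                texture.SetPixel(x, y, BlendColour(blender, size));
            }
        }
        texture.Apply();

        Rect rect = GUILayoutUtility.GetRect(Resolution, Resolution);
        EditorGUI.DrawPreviewTexture(rect, texture);
    }

    private static Color BlendColour(ReverbBlender blender, Vector2 size)
    {
        Color blended = Color.black;
        float total = 0f;
        foreach (var definition in blender.definitions)
        {
            float distance = Vector2.Distance(size, definition.Size);
            float weight = Mathf.Max(0f, 1f - distance / (definition.Range * blender.ambiguity));
            blended += definition.Color * weight;
            total += weight;
        }
        return total > 0f ? blended / total : Color.black;
    }
}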

Further improvements
This is just one approach to creating a dynamic reverb system. There are still many ways that the system could be expanded. Some possible improvements include:
- Simulating early reflections when the player is near a wall.
- Detecting the materials of surrounding surfaces and adjusting reverb parameters based on their acoustic properties. For example, a stone surface would likely be more reflective than dirt, meaning less high-frequency damping.
- Creating a system that allows each sound emitter to have its own unique reverb characteristics based on its surroundings. Currently, the same listener-based reverb is applied to all sound effects. This means that if you were standing outside a cave and a sound was emitted from inside the cave, that sound would not be affected by reverb.