At one point, I was tasked with creating a system that would allow sound designers to stop currently playing sounds whenever another sound is triggered. Since our engine already supported custom sound properties, I knew the most appropriate way to expose this functionality would be to introduce a new list property on each sound entry in the sound config. That way, sound designers could tweak the behavior as they saw fit, using the workflow they were already accustomed to. So far, so good.
However, it was the next step I wanted to take where my reasoning ran up against the open-closed principle, so let’s explore that case together and see what lessons can be learned from my mistake.
One disclaimer before we dive in. Since most of my career working with sound involved using Audiokinetic’s Wwise audio middleware, I will refer to each sound as a sound event throughout the article. A sound event in Wwise is a call the middleware expects to receive from the game engine whenever an action involving sound is needed. It deals with sounds and/or music as defined in the Wwise authoring application. I might write more about it in the future, but for now, let’s call sounds sound events and continue.
Open-closed principle
The open-closed principle in programming states that entities and systems should be easy to extend without the need to modify their current state. That’s a bit abstract, so let’s use the case already mentioned as an example. When I added the new property to a sound event’s properties, I wanted it to list all sound events that, when triggered, would stop the event currently being edited. Thus, a ‘door_open’ sound event could have a ‘door_close’ event in its list property, and whenever ‘door_close’ was triggered, the currently playing instance of ‘door_open’ would be stopped.

This approach stands against the open-closed principle, because whenever a sound designer wants another sound event to also stop a playing ‘door_open’ instance, they have to edit the properties of ‘door_open’ itself! As a result, to extend the functionality, we have to edit the state of an entity that has nothing to do with the event being triggered, making that entity dependent on another entity's behavior. In other words, it's really the action of playing a sound that we'd like to extend, not the behavior of an event that is already playing. After all, a sound event that's being played shouldn't have to react to each new sound event getting triggered.
The right thing to do in this case is to reverse the logic. Let’s assume we’re adding a new ‘door_lock’ sound event. Our aim is to have this sound event automatically stop any playing instance of ‘door_open’ when triggered. This logic is centered around the ‘door_lock’ event, which means that it’s this event that should ‘know’ about the other entities it needs to interact with. This implies that the ‘door_lock’ needs to know about the existence of ‘door_open’, while ‘door_open’ should have no knowledge of either ‘door_lock’ or the fact that there are other events that might stop it.
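To make the contrast concrete, here is a minimal sketch of the two configurations as plain data. The property names and layout are hypothetical, invented for illustration only - this is not actual Wwise or engine config:

```python
# Hypothetical sound config sketch - property names are invented for illustration.

# First (flawed) design: each event lists the events that can stop it.
# Adding a new "stopper" means editing 'door_open' itself.
config_v1 = {
    "door_open": {"stopped_by": ["door_close", "door_lock"]},
    "door_close": {},
    "door_lock": {},
}

# Reversed design: each event lists the events it stops when triggered.
# 'door_open' stays untouched when new stoppers are added.
config_v2 = {
    "door_open": {},
    "door_close": {"stops": ["door_open"]},
    "door_lock": {"stops": ["door_open"]},
}
```

In the first layout, adding ‘door_lock’ as a stopper meant editing ‘door_open’; in the second, ‘door_open’ never changes.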
It might seem a bit forced at first, but it really is essential that a designer understands these concepts. Let’s consider the flow of the operation for each approach. In the first design I wanted to implement, whenever a sound event is triggered, the steps needed to achieve the intended result would be:
- Start resolving the play logic for sound event x.
- Query the audio engine for all currently playing sound events.
- Enter the loop - for each sound event y in the list of currently playing sounds:
- Access the properties of y.
- Get the list of events that can stop y.
- Enter the loop - for each event z in the list:
- If the name of z matches the name of x, stop y and break out of the inner loop.
In the second design, with the logic reversed, the steps would instead be:
- Start resolving the play logic for sound event x.
- Access the properties of x.
- Get the list of events that x stops.
- Enter the loop - for each sound event y in the list:
- Stop y.
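The two flows above can be sketched in a few lines of Python. FakeEngine and its methods are stand-ins invented for illustration, not a real audio middleware API:

```python
# Minimal sketch of the two designs. FakeEngine is a stand-in for the audio
# engine; the method names are hypothetical, not a real middleware API.

class FakeEngine:
    def __init__(self, playing):
        self.playing = set(playing)

    def get_playing_events(self):
        return list(self.playing)

    def stop(self, name):
        self.playing.discard(name)

def play_first_design(engine, config, x):
    """Flawed design: scan every playing event and its 'stopped_by' list."""
    for y in engine.get_playing_events():         # query playing sounds
        if x in config[y].get("stopped_by", []):  # does x stop y?
            engine.stop(y)
    engine.playing.add(x)                         # then start x

def play_second_design(engine, config, x):
    """Reversed design: x itself lists what it stops."""
    for y in config[x].get("stops", []):          # only x's own property
        engine.stop(y)
    engine.playing.add(x)
```

Note how the first function has to inspect every playing event's configuration, while the second only reads the properties of the event being played.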
To compare the two approaches, let’s assume there are m sounds playing at one time and a total of n sound events registered in the sound engine. I will also assume that, on average, 10% of all sound events stop o other sound events.
In the first example, the operation starts with querying the audio engine for all currently playing sounds. Depending on the audio engine implementation, this step’s complexity would be either O(m), if the audio engine keeps a cached list of currently playing sound events, or O(n), if it has to loop through all n registered events and query each of them for its current state.

The next step is to access the properties of each currently playing event and loop through the list of events that can stop it. In the worst case, we’d inspect all m currently playing events and compare the queued event against up to o entries in each ‘stopped by’ list before finding a match, which gives this step a complexity of O(m*o).
The end result is thus a complexity of O(n + m*o) if the audio engine does not cache a list of currently playing sound events, or O(m*o) if it does.

The second example’s complexity is much easier to calculate and equals O(o): we enter the properties of the sound requested to play and loop through the o events in its list, stopping each of them.

Assuming a game situation in which m = 20 sound events are currently playing, there is a total of n = 2000 events registered in the audio engine, 10% of which stop o = 2 sounds on average, the upper bound of the number of operations would look like this:

| Implementation | Operation count (upper bound) |
|---|---|
| First example (audio engine has to query each registered sound event for its state) | 2,040 |
| First example (audio engine caches a list of playing sound events) | 40 |
| Second example | 2 |
The second example is a clear winner here, especially when the worst-case scenario is considered. But what does any of this have to do with the open-closed principle?

Following the principle encourages us to stop thinking about the internals of entities we really shouldn’t be considering at all. In this case, even though I only wanted one sound to stop another, I suddenly found myself wondering not only about the states and configuration of other sound events, but also about whether the audio engine keeps a cached list of playing sound events. For a concept as simple as the one discussed, the amount of information I needed access to should have raised red flags in my head immediately.

Usually, the more information gathering and state querying we need to do, the more we should question whether our approach is correct. This is especially true for simple operations, which should be easy to achieve without having to inspect the internals of the entities and systems we deal with.

The other benefit of the second approach is that it deals with far fewer external implementation details, which significantly cuts the number of its own reasons to change.
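As a sanity check, the upper bounds in the table can be reproduced with a quick back-of-the-envelope calculation of the cost model described above:

```python
# Back-of-the-envelope operation counts for the cost model above.
m, n, o = 20, 2000, 2  # playing events, registered events, stop-list length

first_uncached = n + m * o  # query all n events for state, then scan stop lists
first_cached = m * o        # cached playing list, then scan stop lists
second = o                  # only x's own 'stops' list

print(first_uncached, first_cached, second)  # prints: 2040 40 2
```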
Reasons to change
Let’s talk real life for a moment. Imagine you live with your significant other. Both of you have your own routines and lead mostly independent lives during work hours. You are so nice to each other that you drive the car out of the garage for your loved one every morning, while they take care of preparing breakfast for both of you. In short, your routines at work might be very different, but your morning activities are tightly coupled.

Unfortunately, one day you get sick and are so bedridden that there is no way you are driving the car out of the garage for your significant other. Since you’d taken care of that for ages, your better half has a hard time dealing with the predicament themselves and ends up commuting to work by public transport instead. This is a perfect example of tight coupling: your loved one’s dependency on your action forced them to change their routine as a result of a change in your behavior. This is exactly the kind of dependency we should avoid in software development, including game development.
Could you imagine the internet connection on your phone depending on whether your friend is driving their car at the moment? Or would you give up having a shower in your apartment because your neighbor has one and you could arrange to use theirs? I highly doubt it, and yet it’s commonplace to see code or designs so tightly coupled that changing anything causes a chain reaction of refactors. For this reason, whenever we design something, we should always try to achieve it in a way that:
- Does not force additional implementation details onto other components
- Is as independent from other components as possible
- Is expandable without having to edit its current state
Summary
If we’re designing an inventory system based on a carrying-weight limit, where exceeding the limit slows down movement, the system should not know about the locomotion system. It could raise a notification whenever the limit is exceeded, but it should have no knowledge of the other systems and/or entities that react to that notification. After all, the last thing we’d ever want to consider while making changes to the inventory system is the character’s stance. And yet, very often our projects are full of such intermingled dependencies, with systems tightly dependent on each other’s implementation.

If we design a clock, its timekeeping logic should be completely independent from the animation layer. If it’s not, then whenever we make a change to it, we have to keep the animation layer in the back of our heads and wonder whether our change will force the animation implementation to change too. This can be especially painful and costly in bigger teams, where each department has its own roadmap and goals to pursue and might not be able to adapt its work to every demand.
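The inventory example can be sketched as a simple notification (observer) setup, where the inventory publishes an event and the locomotion system subscribes to it. The class and method names below are illustrative, not from any specific engine:

```python
# Sketch of decoupling via notifications: the inventory only publishes,
# it has no reference to (or knowledge of) the locomotion system.

class Inventory:
    def __init__(self, weight_limit):
        self.weight_limit = weight_limit
        self.weight = 0.0
        self.on_limit_exceeded = []  # subscriber callbacks

    def add_item(self, weight):
        self.weight += weight
        if self.weight > self.weight_limit:
            for callback in self.on_limit_exceeded:
                callback(self.weight)  # notify whoever cares, whoever that is

class Locomotion:
    def __init__(self):
        self.slowed = False

    def handle_overweight(self, current_weight):
        self.slowed = True  # locomotion decides how to react on its own

inventory = Inventory(weight_limit=50.0)
locomotion = Locomotion()
inventory.on_limit_exceeded.append(locomotion.handle_overweight)

inventory.add_item(60.0)  # exceeds the limit, so locomotion slows down
```

The inventory can change its internals freely, and new reactions (a grunt sound, a UI warning) can subscribe later without the inventory being edited at all.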
The fewer reasons to change a system has, the more closed to modification we can keep it, which in turn means easier (and thus cheaper) maintenance.