Wind gameplay improvements 🛠️
As I already indicated in my previous post, I reworked the sound emitter rig in the wind path gameplay. To recap: before, I was using a fixed rig, essentially a rectangle moving along a spline with a sound emitter at each corner. Now, the rig is split into a front and a rear part that position themselves dynamically on either side of the point on the spline closest to the player. This lets the sources follow the player around corners in narrower tunnels without clipping through the cave walls. At the same time, the front and rear sources are further apart than they were on the fixed rig, making the two wind directions easier to distinguish. Following either sound source should lead the player to the respective end of the tunnel, which proved much easier with this improved setup.

An interesting side effect may be contributing to this improvement: when the two sources of either pair sit on either side of the player, i.e., 90 degrees to the left and right of the player character, that stereo pair ends up sounding “mono”. However, once the player turns to face one of them, for example so that the front sources are straight ahead, the pair begins to sound “stereo” again. This helps when orienting oneself in the dark cave.

That said, while my own testing suggests this setup is effective in leading me through the tunnel, only feedback from other players will tell whether the wind path gameplay is “production ready”. In the week of writing this post there will be some playtests, so I should get feedback on this soon. Will report back once I know more. 🫡
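For illustration, here is a minimal, engine-agnostic sketch of the positioning logic as I understand it, written in Python with the spline approximated as a polyline. The helper names and the `offset` distance are my own illustrative assumptions, not the project’s actual code.

```python
import numpy as np

def closest_arclength(points, player):
    """Return (arc length of the polyline point closest to the player, total length)."""
    best_dist, best_s = float("inf"), 0.0
    s = 0.0
    for a, b in zip(points[:-1], points[1:]):
        seg = b - a
        seg_len = np.linalg.norm(seg)
        # Parameter of the closest point on this segment, clamped to [0, 1].
        t = float(np.clip(np.dot(player - a, seg) / seg_len**2, 0.0, 1.0))
        dist = np.linalg.norm(player - (a + t * seg))
        if dist < best_dist:
            best_dist, best_s = dist, s + t * seg_len
        s += seg_len
    return best_s, s

def point_at(points, target_s):
    """Position on the polyline at arc length target_s, clamped to the ends."""
    s = 0.0
    for a, b in zip(points[:-1], points[1:]):
        seg_len = np.linalg.norm(b - a)
        if s + seg_len >= target_s:
            return a + (target_s - s) / seg_len * (b - a)
        s += seg_len
    return points[-1]

def update_emitter_rig(points, player, offset=6.0):
    """Anchor the front/rear emitter pairs `offset` metres along the tunnel
    spline on either side of the point closest to the player."""
    s, total = closest_arclength(points, player)
    front = point_at(points, min(s + offset, total))
    rear = point_at(points, max(s - offset, 0.0))
    return front, rear

# Hypothetical usage: a short tunnel spline and a player standing partway in.
spline = [np.array(p, dtype=float) for p in [(0, 0, 0), (10, 0, 0), (20, 5, 0)]]
front, rear = update_emitter_rig(spline, np.array([9.0, 1.0, 0.0]))
print(front, rear)  # front anchor ahead of the player, rear anchor behind
```

Because both anchors are constrained to the spline itself, they stay inside the tunnel even around tight corners, which is what keeps the sources from clipping through the cave walls.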
Listening modes đź‘‚
I looked at the different ways we listen to sounds. Chion (Audio-Vision: Sound on Screen) differentiates between causal, codal, and reduced listening. In causal listening, we focus on who or what is making the sound, trying to understand its origin. In codal listening, we derive meaning from a sound based on a set of learned rules and conventions. In reduced listening, we focus on the features of the sound itself, without thinking about its source or meaning.

I am thinking about these modes of listening in terms of gameplay, to understand what has been done before and possibly to find a gap for a new gameplay idea. At first glance, it appears that we can find many gameplay examples for each of these listening modes. We listen in a causal way when we hear footsteps in a competitive online shooter, trying to make out whether an enemy might be sneaking up on us. Codal listening is what we do when listening to a familiar spoken language; applied to games, this is when we develop certain expectations or make assumptions based on what some sounds mean to us. We listen in a reduced way, for example, when playing Guitar Hero, anticipating the structure of a song so we’re ready to press the right button on our plastic guitar. To come up with new gameplay that goes beyond these examples, I’m going to brainstorm some ideas for each of the listening modes. I intend to eventually pick out at least one of these ideas and test it out in Soundgarden.
Causal
Codal
Reduced
Next up 🔜
Beyond these brief ideas, I was also thinking about implementing new visual elements that react to sound as additional unlockables on the first island. I was also considering Schafer’s practice of ear cleaning and whether some game mechanics could be derived from it, for example, mapping one’s sound environment by consciously acknowledging the various surrounding sound sources; a rough sketch of that idea follows below.
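To make that last idea a little more concrete, here is a small hypothetical sketch of such an “acknowledging” mechanic: a sound source counts as mapped once the player has faced it within a narrow angle for a sustained moment. All names and thresholds here (`ACK_ANGLE_DEG`, `ACK_HOLD_TIME`) are assumptions for the example, not anything implemented in Soundgarden.

```python
import numpy as np

ACK_ANGLE_DEG = 15.0   # how precisely the player must face a source (made up)
ACK_HOLD_TIME = 2.0    # seconds of sustained attention required (made up)

class SoundSource:
    def __init__(self, name, position):
        self.name = name
        self.position = np.asarray(position, dtype=float)
        self.focus_time = 0.0
        self.acknowledged = False

def update_sound_map(sources, player_pos, player_forward, dt):
    """Mark a source as acknowledged once the player has faced it long
    enough, gradually building a map of the surrounding soundscape."""
    forward = player_forward / np.linalg.norm(player_forward)
    for src in sources:
        if src.acknowledged:
            continue
        to_src = src.position - player_pos
        to_src /= np.linalg.norm(to_src)
        angle = np.degrees(np.arccos(np.clip(np.dot(forward, to_src), -1.0, 1.0)))
        if angle <= ACK_ANGLE_DEG:
            src.focus_time += dt
            if src.focus_time >= ACK_HOLD_TIME:
                src.acknowledged = True  # e.g. reveal it on the player's sound map
        else:
            src.focus_time = 0.0  # attention must be sustained, not glancing
```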
Next, I will evaluate the playtesting feedback and possibly rework the wind path gameplay. Also, I’m planning to design and implement a new sound interaction.