hifi, lofi, wifi is an exploration of sound reactive images, technology, and interaction.
The only reason I'm designing today is because my musician friends needed posters. Nothing fancy. Nothing too complicated. Just something pretty and informative that would get noticed. For many years in my professional work, I veered away from music-based projects. I took this opportunity to return to music and see how my design work fits in the world of sound beyond pretty posters.
Sound exploration began with research into digital and analog synthesizers. The wide range of technology used for synthesizers inspired the name of the project. Familiar controllers — like a piano keyboard and a drum machine — were selected and paired with a handful of minimal sounds, to drive the audio/visual responses.
Image exploration was initially based on sound-wave principles, finding basic forms to describe rhythm and movement. I limited my color palette and used only simple, geometric shapes in two dimensions. I then expanded my color range through overprinting and blending, drawing close yet basic analogies between sound and image mixing. Next, images were rotated and translated into three-dimensional space. Lastly, I connected the shapes to sound and music, setting them in motion through interaction with the controls.
Download Process Zine
Hifi, Lofi, Wifi was a sound-reactive, motion-graphics installation. The exhibit was originally shown for ten days in April 2012 at the Maryland Institute College of Art (MICA). I created a series of simple graphics that reacted to ambient sound, from footsteps in the gallery space to the notes of a song. I also designed an audiovisual instrument that incorporated a piano keyboard, a drum machine, and an iPad. I designed and programmed a collection of sounds and accompanying images that changed in response to the pressing of a keyboard key, the thumping of a drum-machine pad, or the turning of a knob. I wanted to create an experience that incorporated sound and graphic design in a way that gave the exhibit visitor control of a multimedia experience, transforming the visitor into a performer who entertained the other exhibit visitors.
Download Sound Reactive Series
Defining My User
I had to choose between designing something that a professional would use for a performance and designing something that an inexperienced user, with little or no musical knowledge, could immediately pick up and interact with. This was an important decision and one that ultimately drove the design of the whole project. The goal of the installation was to allow someone to immerse themselves and others in sound and image. The initial idea for the installation came from wanting to learn the skills to collaborate with musicians to create immersive environments during live performances. However, as the project progressed, I found that I wanted this project to live as an ongoing, spontaneous performance in the gallery, rather than as a single choreographed professional performance.
The tools that many producers and deejays used were already available on the market and easy to learn for a general consumer with a knack for technology. These included tools that I used in my project, like keyboards and drum machines. However, I wanted to design something that was as simple and accessible as possible. I wanted to use the same tools professionals used, but I wanted to modify them for amateurs. I wanted a regular person to be able to create an enjoyable experience that seemed as choreographed as a professional performance. To make the tools easy to use, I programmed fixed sounds and did not demand that the user customize their own. I also designed the graphics to be simple and straightforward.
Simple and Minimal
I had to make sure the interactions were simple and intuitive. I began with the buttons. Anyone, even a child, can understand the simple action of pressing a button. You press a button and something happens. In the case of my installation, you pressed a button, and two things happened simultaneously—a sound reaction and a visual reaction—giving the user two different yet coupled sets of feedback. The design of the visual elements was kept as minimal as possible, limited to three simple geometric shapes and three colors. The use of simple geometric shapes allowed me to make abstract, coherent connections between sound and image. This also allowed me to program something that would not strain a network of computers and instruments running in a gallery for an extended period of time, even if many users used the system all at once. Though I did explore the use of video and photorealistic images, I decided to limit the visuals to simple shapes because I wanted the project to be accessible to a wider audience. I did not want the user, when being introduced to a new experience, to be intimidated by complex visuals.
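The one-press, two-reactions coupling might be sketched like this. The installation was built with audio and motion-graphics software rather than code like the following, and every name and value here is illustrative, not taken from the actual system:

```python
# Hypothetical sketch of the one-button coupling: a single press
# returns both a sound event and a visual event at the same time.
# The sound/shape tables are invented for illustration only.

SOUNDS = {0: "pad_low", 1: "pad_mid", 2: "pad_high"}   # fixed, pre-programmed sounds
SHAPES = {0: "circle", 1: "triangle", 2: "square"}     # the three simple shapes

def on_button_press(button_id):
    """One input produces two coupled sets of feedback: audio and visual."""
    return SOUNDS[button_id], SHAPES[button_id]
```

Because the sounds are fixed rather than user-customized, the mapping stays a simple lookup, which is what keeps the interaction learnable at a glance.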
More Complex Interactions
The keys on the keyboard were programmed for simple interactions. A distinct set of triangles on screen was paired with each key on the keyboard, and they all reacted similarly when a keyboard key was pressed. For example, the lowest note on the left side of the keyboard activated a reaction on the left side of the screen, while the highest note on the keyboard activated a reaction on the right side of the screen. The simplicity of the tool was both its strength and its limitation. One concern was that the simple shapes and limited color palette would not hold the attention of a user long enough for the user to explore all of the controllers and options in the system. For this reason, more complicated interactions were incorporated for those users who decided to spend more time with each individual controller. One of these was the pairing of buttons and knobs. These specific buttons functioned as on/off switches that toggled audio and video loops. One such behavior was a set of triangles that swayed back and forth when a button was toggled to the “on” position. Accompanying these buttons were knobs that altered the number and the sizes of the triangles seen on screen. As the triangles multiplied and grew, the sound they emitted became louder and deeper. In another case, a set of circles swayed left and right when turned on and were moved forward and backward in three-dimensional space when the knob was turned. As a shape moved further from the user in space, the sound became quieter. As the shape moved closer to the user, the sound became louder and more ominous. Despite incorporating only a minimal color palette and a few shapes, the interactions engaged and rewarded the user as the user altered the shapes with the knobs.
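The two mappings described above, note position to screen position and knob value to shape count, size, and sound, could be sketched as simple linear functions. The MIDI note range, screen width, and triangle limits below are assumptions for illustration; they are not the installation's actual values:

```python
# Hypothetical sketch of the keyboard and knob mappings.
# LOWEST_NOTE/HIGHEST_NOTE and SCREEN_WIDTH are assumed values.

LOWEST_NOTE, HIGHEST_NOTE = 36, 96   # assumed playable MIDI range
SCREEN_WIDTH = 1280                  # assumed projection width, pixels

def note_to_x(note):
    """Low notes land on the left of the screen, high notes on the right."""
    t = (note - LOWEST_NOTE) / (HIGHEST_NOTE - LOWEST_NOTE)
    return round(t * SCREEN_WIDTH)

def knob_to_triangles(knob):
    """As the knob (0.0-1.0) rises: more, bigger triangles; louder, deeper sound."""
    count = 1 + round(knob * 11)     # 1 to 12 triangles on screen
    size = 20 + knob * 80            # triangle size in pixels
    volume = knob                    # louder as the shapes multiply and grow
    pitch_shift = -12 * knob         # deeper, down to an octave below
    return count, size, volume, pitch_shift
```

A linear mapping like this is the simplest way to make a physical layout (left-to-right keys) correspond directly to a spatial layout on screen, which is part of what made the reaction legible to first-time users.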
Controlling the Number of Controllers
Earlier iterations of the installation used only one or two controllers, while others used as many as five. However, for the final installation, I wanted to have many people interact with the instrument simultaneously, while still maintaining a certain level of harmony in the images and sounds. I included three separate input devices. This allowed for a reasonable number of people in the small gallery space to all participate. For the visual component, the shapes were all designed to fit together, so that even in the worst case (in which every button was being pushed down on all of the controllers simultaneously), what appeared on screen would still be visually pleasing. The shapes were all designed to blend with one another as they overlapped, creating different patterns and new colors. For example, if a blue circle overlapped with a yellow triangle, the points where they intersected would appear light blue, or if a red circle overlapped with a green circle, the points where they intersected would appear yellow. This allowed for interesting shifts in color within the limited color palette, as well as giving the user an incentive to press down more than one button at a time, exploring what combinations could produce interesting and attractive results. The concern with this was that the users would also be creating and overlapping many different audio reactions as they activated several different visual reactions. The largest risk was that as multiple buttons were pressed, the audio would become jarring and uncomfortable. For this reason I programmed all of the audio in the same key and did not allow the user to modify this. I also used similar sounds, sticking with airy, ambient synthesizers that complemented each other. Since multiple users would be pressing many buttons at once, it was necessary to limit the sounds in order to preserve harmony, or at least present the illusion of harmony.
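The red-plus-green-reads-as-yellow behavior described above is what additive color mixing produces; a minimal sketch of that blending rule, with illustrative RGB values not taken from the installation, might look like this:

```python
# Hypothetical sketch of the overlap blending: where two shapes
# intersect, their colors mix additively, channel by channel.
# The specific RGB triples are illustrative assumptions.

def blend_additive(c1, c2):
    """Additive mix of two RGB colors, clamped to 255 per channel."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

RED = (255, 0, 0)
GREEN = (0, 255, 0)
# additive red + green mixes to yellow: (255, 255, 0)
```

Additive blending has the useful property for a shared instrument that overlaps only ever brighten the image, so a worst-case pile-up of shapes trends toward light rather than mud.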
One possibility for a tool like this would be to allow people to compose and record a visual and audio arrangement that they could take with them or perhaps publish online. This is appealing, because the system I designed makes it easy for anyone to create something engaging. However, I wanted to showcase and preserve the aspects of performance and spontaneity. Any recording, be it of what is projected on screen or of the actions of the user, would not accurately replicate the experience of being in the gallery space playing with the installation or watching someone play. Also, the fleeting nature of this type of live experience frees users from any pressure to perform a comprehensive piece of music; it lets them simply explore, play, and enjoy their experience in the moment.
You can design and plan for certain interactions, but you will always be surprised by how people actually end up interacting with something you create. Most of the thinking and effort that went into the project did produce an exciting experience in the gallery space. Just as I planned, many people interacted with the different controllers simultaneously, and the sound and images were all appealing. One interesting thing I noticed was that the person interacting with a controller often did not look up to see the screen. The only feedback they were receiving was the audio. Also, the setup for the more complex toggle interactions was not so intuitive, and the function of the knobs was not as self-evident as I had hoped. This could have been solved with more instructions presented to the users, or simply a custom, simplified controller for the toggle functions. Perhaps even separating the drum pad controller from the toggle controllers would have let people explore all of the reactions. The most popular buttons were the drum pads, which created both a drum noise and a graphic when first hit, and which sustained a note and a graphic when held down. I found most people were attracted to the drum pad controller and the iPad drum controller interface. Perhaps it goes back to what I mentioned earlier about the simplest interaction being a button push. The drum pad and drum pad interface provide the easiest interaction in the whole system, and with two sounds and an image to accompany it, perhaps it is the most rewarding.
The principles behind this project were rooted in electronic music and the technology behind music creation. It made sense that the end product used things like drum machines and keyboards. I’m very happy with the end result of the project. However, I’m most happy with the skills I learned while experimenting. One of the most valuable was learning how to program different inputs to control audio and motion-graphics software simultaneously. This project opened up a new arena for me to explore as a designer, allowing me to create multimedia interactive installations, something I had always wanted to learn and experiment with. Future projects in this arena will probably still focus on music and technology, but the research and thought put into user experience also interest me. One of the next steps is to learn how to build custom hardware, so I can improve usability and make the interactions on screen more self-evident.