This experiment is a study of how people navigate space without any visual reference. They are guided only by three different sounds located around the field. One person has a recorder attached to her shoe, while the other has a camcorder, used as a microphone, attached to his hands. (Just imagine one person has her ears on her foot while the other has them on his hands.)
Some of the footage also shows that using our hands as a navigational interface is much more active than using our feet. It also shows that without the support of vision or any verbal guidance, it is almost impossible to build an accurate map of our surroundings.
Our sight gives us far more reference about the location of things in the physical world than our hearing does. Our brain memorizes and compares the landscapes we see and builds a map in a very literal way. Our hearing can also identify different sounds, but it is not as literal as our vision: it is always easier to compare the lengths of two objects than the volumes of two sounds. However, sound adds value to what we see; it is a medium that can be even more engaging than the visual.
Another thing I noticed in the experiment is the use of touch. What happens when we lose both sound and vision? When these two media do not give us valuable input, how do we gather information? I wonder whether texture and touch could also become a navigational interface, and how effective, or how disastrous, it might be. In the end, I suppose each of our human senses has its own advantages and disadvantages, and it is a designer's job to use these senses strategically in order to accomplish our design purpose and bring more value to our work.