
Research #2 - Sound and Light

It's safe to argue that the most common sensory pairing within the world of design and new media is that of sound and visuals, but it's also a pairing that is intensely fractured by high levels of subjectivity. Don't get me wrong - as I've said several times in this research and development blog, this is absolutely fine in many ways - but I'm increasingly motivated to ensure my upcoming multimodal toolkit can help makers from all backgrounds expand beyond this. Why? Because pure or isolated 'subjectivity' is but one modus operandi for the creation of multimodal artifacts. Artistic expression isn't the only driving force behind the creation of multisensory outputs or new media artifacts. External objectives, clearly outlined via third-party 'development briefs', are ever-present within the creative industries, and these briefs demand that you work from an influencing force that sits outside your own opinions and personal experiences. So whether you're creating a soundtrack for a video game or a user interface for an app with a carefully tailored brand identity, there are messages to convey, atmospheres to terraform, and moods to conduct, all of which require methodologies that go beyond the sphere of self-expression. I feel that this landscape needs to be charted, and my next media experiment later this week will help me dig into processes that move beyond pure feelings and opinions.

But before I set that up, let's look at some outward-facing research again - this time on the relationship between sound and light. Let's start by skipping stones a few centuries back and take a look at the idea of the colour scale. This multisensory framework has been visited by countless scientists and philosophers over the years, with high degrees of subjectivity evident in even the simplest of graphics that summarise each individual's take on the topic, as you can see:

There are a few things going on here. In most of the iterations shown above, the colour scale starts with red. Intuitively this makes sense: red is the first colour that appears within the portion of the electromagnetic spectrum that is visible to us as humans. We can also see that in each of those iterations, red has been mapped against the note C, and for my money this is where some problems begin to emerge with these systems. At first glance, C feels like a fairly logical starting point for the sound component of these colour scales, but my instinct is that this method of association comes down to the influence of a highly tailored westernised music system, in which C is the root of the most basic of musical scales - and it could therefore be argued that the assignation isn't grounded in a more universal qualifier.

To look at alternatives to this colour-pitch assignation, let's now look beyond any potential influence from the principles of westernised music. If we follow the same principle used to establish the root colour (i.e. the fact that the first colour in the scale is the lowest frequency of visible light), we can check what the lowest audible frequency is for humans - around 20 Hz, by the widely agreed average - and the nearest named note just above that threshold is an E (E0, at roughly 20.6 Hz in equal temperament). So maybe this would be a much more logical starting point for the colour scale: the first frequency of light that is visible to us, paired with the first frequency of sound that is audible to us. With a pinch of boldness, I'm going to make my own personal addition to the history of colour scale configurations, as shown below:

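As a quick sanity check on that 20 Hz to E pairing, here's a minimal Python sketch - assuming 12-tone equal temperament with A4 fixed at 440 Hz, both of which are my own assumptions rather than anything inherent to the colour scales above - that prints the equal-tempered notes sitting either side of the hearing threshold:

```python
# Minimal sketch: list the equal-tempered notes around the ~20 Hz hearing
# threshold, assuming 12-tone equal temperament with A4 (MIDI 69) = 440 Hz.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi_to_frequency(midi_note: int) -> float:
    """Frequency in Hz of a MIDI note number, with A4 fixed at 440 Hz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def midi_to_name(midi_note: int) -> str:
    """Note name plus octave number (MIDI 60 = C4)."""
    return f"{NOTE_NAMES[midi_note % 12]}{midi_note // 12 - 1}"

# MIDI 14-17 covers D0 to F0, bracketing the 20 Hz threshold.
for midi_note in range(14, 18):
    print(f"{midi_to_name(midi_note):>3}  {midi_to_frequency(midi_note):6.2f} Hz")
```

Running it gives D#0 at roughly 19.45 Hz and E0 at roughly 20.60 Hz, so the 20 Hz threshold sits just beneath an E - close enough to make E a defensible root note for the scale above.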
With only a few days to go until the new decade, I've decided to date this addition 2020, because.... (*redacted vision pun*) Also, might I add... well hellooooooo, George Field! With my own addition in place I'm suddenly drawn to Field's implementation of a colour scale - the two look rather similar! I wonder what train of thought brought him to his conclusion? That's something I'll look into very soon, as I'd be thrilled if a similar logic was at work there as well.

Okay, I jumped up to some lofty heights there... I know it takes more than two paragraphs of reflection to establish a new colour scale - there's lots of dancing to be done before a more objective scaling system between these modalities can be established. All of these questions about the labelling of such a system aside, though, the notion that pitch and colour make for a great pairing between the sonic and visual modalities holds up very well indeed. Fundamentally we're looking at wave frequencies here: a change in colour and a change in pitch are both established by a change in wave oscillation frequency - one of electromagnetic radiation, the other of air pressure. Two different mediums, but the same scientific principle. Check. Then let's take this another level. We all know that when you split a beam of white light through a prism, you can see all of the colours of visible light; this is because white light is made up of all of the frequencies that produce the different visible colours (...and some invisible ones too!). Apply the same principle to sound: combine all frequencies of audible sound and you end up with white noise. Double-check. These modal relationships can begin to inform frameworks for creativity within each of these disciplines. Take the circle of fifths for music theory:

...and take the colour wheel for colour harmony theory:


Balancing these two systems could perhaps result in a 'circle of multimodal frequency'? Again, that's another thing I would like to explore via a hybrid of the two resources shown above (a very rough first sketch of the idea sits at the end of this post). So colour and pitch work well, with plenty of empirically informed, emotion-free opportunities to capitalise on here.

But unlike colours, sounds are not 'static'. Well, okay, they absolutely have the capability of creating that impression (i.e. a continuously sustained tone at a specific pitch), but the way that they are most commonly featured within creative works and new media projects is as carefully tailored sound and music. So, how can qualities of audio such as loudness, speed, rhythm, and so on be represented via the visual modality? The creation of static imagery to represent a temporal art form such as sound and music is a highly subjective process. I reflected on this in my previous post on a recent artwork project called Sono-Subjective, in regard to how the movement of the brush and the real-time application of the paint secured the strongest relationship between the sound and the visual elements - more so than the colours used, or the finished paintings themselves. A full roster of opportunities for balancing the relationship between sound and visuals comes into play with the introduction of motion graphics. So, the real opportunities for methodical connection between sonic and visual content are ultimately conducted by time. I will aim to come face to face with several of the elements discussed above in my next practical experiment, as I explore a more considered approach to the link between these modalities. Watch this space! It's coming soon.
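And before I go, as promised above, here's a very rough and purely hypothetical sketch of that 'circle of multimodal frequency' idea. All it does is spread the twelve keys of the circle of fifths evenly around the hue wheel, so that harmonic neighbours become adjacent colours; the choice to anchor C at red (echoing the colour scales discussed earlier) is an arbitrary assumption on my part, not a conclusion:

```python
# Hypothetical sketch: spread the 12 keys of the circle of fifths evenly
# around the HSV hue wheel, so harmonic neighbours (C -> G -> D -> ...)
# become adjacent hues. Anchoring C at red is an arbitrary assumption.
import colorsys

# Circle of fifths, moving clockwise from C (12-tone equal temperament).
CIRCLE_OF_FIFTHS = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]

def key_to_rgb(key: str) -> tuple:
    """Return an (R, G, B) colour (0-255) for a key, found by evenly
    spacing the circle of fifths around the hue circle."""
    hue = CIRCLE_OF_FIFTHS.index(key) / len(CIRCLE_OF_FIFTHS)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return (round(r * 255), round(g * 255), round(b * 255))

for key in CIRCLE_OF_FIFTHS:
    print(f"{key:>2} -> RGB {key_to_rgb(key)}")
```

Swapping in chromatic order, or re-anchoring the wheel at E rather than C (in line with the 20 Hz pairing above), is a one-line change - which is rather the point: a framework like this makes the assumptions explicit and easy to vary, ready for the next experiment to interrogate.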
