Here’s One You Made Earlier

Posted on June 06, 2016

By Julia Payne, director of the hub

We invited you to share with each other some of the thoughts you have about the ideas and possibilities we’re hoping we’ll all be able to explore tomorrow at Make Do and Bend. Here’s what you’ve told us, and discussed, so far, edited together (in a hopefully sensible fashion!) by my fellow hubster, Matthew Linley, and me. It makes for some really interesting reading…


WHAT DOES DIGITAL ‘PERFORMANCE’ LOOK LIKE?

Adam Stark: I actually hope it looks like nothing I can imagine. Certainly many of the great musical innovations didn’t come from using technology, but rather from mis-using technology (overdriven amplifiers, scratching turntables, guitar feedback, misuse of auto-tune [NB, this last one is rarely done well]). So I think we need to create possibilities for performers to use digital technologies in many new and interesting ways - and then watch as they ritually misuse them and dismay us with their lack of regard for our intentions. :)

Andrew Hugill: I think the challenge for artists is, as always, to find ways to “creatively abuse” the technologies. This will continue to provide the most fertile and imaginative solutions. We must embrace the engineers (indeed, become engineers ourselves), but we must do so in ways which embed humanity in the systems. We must find a way to reconcile the subjective ambiguities of human beings with the objective precisions of computers.

Hopefully like nothing in particular or, rather, like anything it wants to look like. There is room for everything, from a conventional performance ritual with screens to an immersive interactive experience using augmented reality in a locative situation, and many many other permutations. This is not to dodge the issue, however. What is ‘liveness’? How do we understand non-human agency? What’s our role as performer, or audience, or creator? And what will be the future role of, for example, an orchestra - is it inevitably a museum piece? All these questions are in the mix when contemplating digital performance, which will continually offer creative answers almost on a work-by-work basis. So, the exceptional is encountered more often than not. This is the sign of an active and flourishing area that has yet to settle into conservatism.

Daniel Jones: A whole-hearted yes to creative abuse of new technology.

Adrian McEwan: I love your call for the artists to "creatively abuse" the technology. What does public space look like online?  How does that impact what can or can't be performed?

Leo Amico: I join you guys in praising "creatively abuse" - such a great term! (On the topic, I LOVED Simon Reynolds' joint review of Guns N' Roses' “Chinese Democracy” and Kanye's “808s & Heartbreak”, and how technology is used in totally opposite ways by each artist - Kanye abusing auto-tune for artistic purposes and GNR abusing compression for commercial purposes: http://www.salon.com/2008/11/29/kanye_gnr/)

Digital broke the barrier between the instrument and the music it produces. Both became malleable, fluid, in evolution. Making and performing music with digital tools allows artists to create and interact with both the music and the instrument itself (e.g. in live coding). What are the implications of this? How can this be used in live performances? What does improvisation become when we are not just creating the music on the spot, but the instrument itself?

Thor Magnusson: It looks like live coding : ) - But seriously, there is no easy answer to this question. The research field of NIME (New Interfaces for Musical Expression) studies this topic in great depth, but the answers are many, emerging, and diverging. The exciting thing about being part of this creative field of new musical technologies is that the practices are innumerable: there are no compositional constraints, and people are confident in crossing art forms for new explorative expression. The explosion of creative coders is a positive force: it's dangerous when commercial interests drive musical innovation, but today I'd argue we have the opposite - innovation by creative people drives commercial companies to up their game and follow what's going on.

Duncan Chapman: Can I tell the difference between it and ‘analogue’ performance? 

And I was reminded (when talking with Julia) about visiting the wonderful instrument museum in Brussels. Lots of prototype saxophones made by Adolphe Sax in the process of designing the instrument that was intended to be portable, stable, “cross platform” (learn one and you can play them all), simple to learn the basics and able to be played in the rain in military bands. All this invention with no idea that it might have other completely unimagined voices.


HOW CAN THE OPPORTUNITIES FOR DIGITAL ARTIST/AUDIENCE INTERACTION SHAPE MUSIC CREATION?

Adam: I’m definitely very interested in this idea. I like the idea of a performance whereby the music created is very much a collaboration with a group of people - so it is an experience for the audience, and something that exists only in that room at that time.

However, I feel there is a limit to the size of an audience for such an experience. If the audience becomes too large, then nobody can tell what effect they have on the performance anymore. So perhaps there is a critical point where the collective moves from ‘active’ to ‘passive’?

Andrew: I suspect there will be ever more dissolving of the distinction between the two. Performances will become more of a shared, pro-active experience, like multi-player gaming. However, this does not remove the possibility of a collective appreciation of virtuosity, for example. Sometimes we like to be passive recipients, especially when one individual’s contribution stands out. So, more traditional modes of interaction will survive. But the hierarchical relationship which always places one person (or group) in the role of ‘genius’ performer and the rest in the role of grateful recipients is changing. Audiences reasonably expect to be able to shape their musical experience and today have a degree of control over that process that far exceeds anything imagined even a decade ago.

Daniel: I'm interested in the ways in which sound and technology can transform our relationships with the world, using code as a way to augment creative and scientific practices. I make sound installations that translate real-time patterns and processes into living musical forms (as one-half of Jones/Bulley and in my own solo practice), which can both deepen our understanding of composition (what if a composition were endless? nonlinear? context-aware?) and illuminate the world around us (highlighting otherwise intangible aspects of the environment, or incorporating them as emergent narrative agents).

I also develop a technology that allows information to be transmitted via sound, which may open up some unorthodox avenues for audience engagement: imagine embedding information within the actual audio of a live performance that would allow the listener to understand it from different perspectives.
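
To make that idea a little more concrete, here is a minimal sketch of the general principle of data-over-audio: encoding a short message as a sequence of audible tones. The frequency table, symbol length and lack of error correction are illustrative assumptions, not the protocol of any real system (including Daniel's).

```python
# Minimal sketch: encode a short message as a sequence of sine tones, in the
# spirit of data-over-audio transmission. The frequency table, symbol length
# and framing are illustrative assumptions, not any real system's protocol.
import numpy as np
import wave

SAMPLE_RATE = 44100
SYMBOL_SECONDS = 0.08                     # duration of each tone
BASE_HZ, STEP_HZ = 1760.0, 60.0           # 16 tones, one per 4-bit nibble

def message_to_tones(text: str) -> np.ndarray:
    """Map each 4-bit nibble of the UTF-8 message to a sine tone."""
    nibbles = []
    for byte in text.encode("utf-8"):
        nibbles += [byte >> 4, byte & 0x0F]
    t = np.linspace(0, SYMBOL_SECONDS, int(SAMPLE_RATE * SYMBOL_SECONDS), False)
    tones = [np.sin(2 * np.pi * (BASE_HZ + STEP_HZ * n) * t) for n in nibbles]
    return (np.concatenate(tones) * 0.3 * 32767).astype(np.int16)  # quiet 16-bit PCM

with wave.open("message.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes(message_to_tones("hello audience").tobytes())
```

A real deployment would add synchronisation, error correction and frequencies chosen to survive the acoustics of a venue; this only shows the basic mapping from bits to sound.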

Stefan: At Kinicho we're interested in 3D Audio. We have developed an ambisonics array system in which the listener is immersed in a sound field, which can reproduce sound from another space, an entirely fabricated sound space, or a combination of the two.

It has been used for performance and installation work. It can be fixed in place, but we also have a portable pop-up dome which can be used for production and public engagement.

We have also developed a processing system for creating 3D Audio as a binaural presentation for consumption over headphones. This can be done live, streamed over the internet, or packaged as afterlife products. Binaural presentation has a rich history, first appearing at the 1881 Paris World Expo, where it allowed audiences to dial into opera and theatre performances; most recently, Theatre Complicite performed 'The Encounter' with the audience at The Barbican listening to a 3D binaural presentation of the sound from the stage.
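
For readers curious how headphone "placement" of a sound works in principle, here is a crude sketch of interaural time and level differences, the two cues binaural rendering relies on. Real binaural systems (such as the ones Stefan describes) use measured HRTFs; the head radius, attenuation factor and geometry below are rough assumptions for illustration only.

```python
# Crude sketch of binaural positioning: apply an interaural time difference
# (ITD) and level difference (ILD) to place a mono source left or right.
# Real binaural rendering uses measured HRTFs; these constants are assumptions.
import numpy as np

SAMPLE_RATE = 44100
HEAD_RADIUS_M = 0.0875
SPEED_OF_SOUND = 343.0

def pan_binaural(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Return an (N, 2) stereo array with simple ITD/ILD cues applied."""
    az = np.radians(azimuth_deg)                      # 0 = front, +90 = hard right
    delay = int(abs(HEAD_RADIUS_M * np.sin(az) / SPEED_OF_SOUND) * SAMPLE_RATE)
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    far = far * (1.0 - 0.3 * abs(np.sin(az)))         # far ear is slightly quieter
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right], axis=1)

# Example: a 440 Hz tone placed 60 degrees to the right of the listener.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
stereo = pan_binaural(0.2 * np.sin(2 * np.pi * 440 * t), azimuth_deg=60)
```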

Thor: It is an interesting fact that composers today are capable of interacting with each and every member of the audience through the mobile computing devices they carry in their pockets. Interactive or participatory performance has always existed, but it's as if new digital technologies are offering, or rather affording, this to a much stronger degree. Server-client systems in musical performance have been explored, for example by Tim Shaw and Sébastien Piquemal or Norbert Schnell and his team at IRCAM. 
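
To give a feel for what a server-client performance system can involve, here is a minimal sketch: a performer's laptop exposes a single shared parameter over HTTP, and audience phones poll it from the browser and turn it into sound locally (e.g. with the Web Audio API). The port, endpoint and parameter name are illustrative assumptions, not the design of any of the projects Thor mentions.

```python
# Minimal sketch of a server-client performance setup: the performer's machine
# publishes one shared parameter; audience devices poll it and synthesise sound
# locally. Port, endpoint and parameter name are illustrative assumptions.
import json
import random
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

state = {"pitch_hz": 220.0}

def performer_loop():
    """Stand-in for performer input: drift the shared pitch once a second."""
    while True:
        state["pitch_hz"] = max(110.0, min(880.0, state["pitch_hz"] * random.uniform(0.9, 1.1)))
        time.sleep(1.0)

class ParamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(state).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

threading.Thread(target=performer_loop, daemon=True).start()
ThreadingHTTPServer(("0.0.0.0", 8080), ParamHandler).serve_forever()
```

In practice such systems usually push rather than poll (WebSockets, OSC over UDP) and carry far richer state, but the division of roles is the same: one shared set of parameters, many local instruments.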

It's intriguing to see the development of Christian Wolff's and Cornelius Cardew's ideas of non-musician participation in musical performance. Here we could also point to the animated scores of Ryan Ross Smith, where the distinction between playing a score and playing a game becomes fuzzy and we begin thinking about the gameplay of musical pieces. 

Duncan: How open am I really to allowing others to shape the creation of work?

Am I still hung up on the idea of creating pieces that last after I’m gone? Do the people I collaborate with care about it anyway? Experience vs Artifact (is this the right spectrum anyway?).


HOW CAN NEW WORK BE DEVELOPED TO HAVE A MEANINGFUL EXPRESSION DIGITALLY AND IN A LIVE CONTEXT?

Adam: The two should be fundamentally interlinked - not by restricting composition to using simplistic digital instruments, but by making sure we have the interfaces to digital creation that are simultaneously interesting for a composer to use and engaging for an audience to watch.

Andrew: I’m not at all sure what is meant by “meaningful expression”. It seems to me that any work developed in a digital and live context has “meaningful expression” somehow or to someone, if only to another computer. As for audiences, they will pick and choose as they have always done. But now, unusually, they will do more than just experience the work – they will make it too!

Leo: The musical object is no longer a static one. Take Radiohead's Polyfauna music app, which gained new visuals and new sounds with an update (I didn't realise the update had happened): when I opened the app after some time I was both surprised and disappointed. I was happy about the new - and better - content, but I also felt slightly "violated" by those remote hands that, without asking my explicit consent, changed something I thought I owned. The same applies every time we stream music. Are we sure that if we play Rihanna's "Umbrella" today on Spotify we are listening to the same song that we played yesterday? We have moved beyond the album as a finished product.

All this has some scary implications but also interesting applications. I can imagine some 1984-ish system that automatically revises song lyrics under the control of some sort of Ministry of Truth. On the other hand, we can think of artistic ways to make the musical object evolve over time, never repeating itself but changing according to real-time sensor data, seasons, news...

Thor: This is a good question and I would emphasise the concept of meaningful here. One of the interesting problems is where the digital instrument can be so complex (for example applying non-linear mappings, automation, or AI as part of the instrument's functionality) that it's hard for the audience to understand what is actually being played. What is the performer doing? At what level can we consider the performance "skilful"? 

I don't think it's possible to answer the question generally: it's up to each and every instrument designer/composer to consider how the performance of the particular instrument or system is meaningful to the audience, the performer, and indeed to themselves! This question matters much more when studying digital systems than it does with acoustic instruments, for the reasons mentioned above.

Duncan: Given that ears are physically the same as they were before “digital” technology, does “digital” sound different?

Given that I am not as articulate as Simon Emmerson (“Relocating the Live”) what does a “live” context feel like to me and the people I work with? 


HOW CAN NEW AND EMERGING TECHNOLOGIES SUPPORT AND INFORM COMPOSITION AND PERFORMANCE?

Adam: Personally I am interested in computer music (i.e. any music made using computers, be it live augmentation of a cello, a digital prepared piano or just some good old techno). I see computer music as being “in a box” - something incredibly powerful, but also something we can’t really see or access. It is behind glass. So my work has mainly been concerned with trying to humanise electronic music, or trying to make it more performative. Trying to get it out of the box…

I believe that developments in machine learning, sensors and movement tracking, wireless devices, audio analysis and more can help us to make electronic music composition and performance into physical and visual experiences. This would bring to electronic music the richness we experience in performances of acoustic music, but with the additional world of signal processing power, networked performance and multimedia possibilities that computer technology provides.
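
One concrete example of the audio analysis Adam mentions is onset detection: noticing when a new note or hit occurs so that electronic events can be triggered from acoustic playing. Below is a minimal spectral-flux detector; the frame size, hop and threshold are illustrative assumptions, and production systems add filtering, adaptive thresholds and so on.

```python
# Minimal sketch of spectral-flux onset detection: flag moments where the
# spectrum's energy rises sharply, which could trigger electronic events from
# acoustic playing. Frame size, hop and threshold are illustrative assumptions.
import numpy as np

def onset_times(signal: np.ndarray, sample_rate: int,
                frame: int = 1024, hop: int = 512, threshold: float = 1.5):
    """Return approximate onset times (in seconds) via positive spectral flux."""
    window = np.hanning(frame)
    mags = [np.abs(np.fft.rfft(signal[i:i + frame] * window))
            for i in range(0, len(signal) - frame, hop)]
    mags = np.array(mags)
    flux = np.sum(np.maximum(mags[1:] - mags[:-1], 0.0), axis=1)  # rises only
    mean_flux = np.mean(flux) if len(flux) else 0.0
    peaks = np.where(flux > threshold * mean_flux)[0]
    return [(i + 1) * hop / sample_rate for i in peaks]
```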

Leo: I totally second what Adam said about computers being inaccessible boxes. I think that applies beyond music too. We are generally losing control and ownership of the tools we use. Technology is pervading more and more aspects of our lives, yet we are totally unaware of the inner workings of the tools and services we use every day. And this condition is becoming even more pronounced with internet-connected devices, which often rely on software that runs in well-guarded data centres we have no access to, let alone control over (pardon the digression).

I believe that music could be a vector for education and change, towards a reappropriation of the technology we use in our lives. New musical instruments where hackability and repurposability are design features? That is something that happens in music anyway (from extended techniques to modular synthesizers to circuit bending to computer music), but it could be made even more pronounced, and designed especially for non-musicians to use.

Adrian: Digital (whatever that means...) should let us build new tools, new instruments, things to give us more agency, not less. 

I don't know how much of it we'd be able to get done next week, but I also want to remind/point out that digital doesn't have to be about things trapped behind glowing rectangles - it also encompasses things like digital fabrication: 3D printers, laser-cutters, etc. and bits of electronics like Arduino boards or Raspberry Pis.

I’d like to see pieces develop that provide a critique of the blind acceptance that "big data", "IoT", "machine learning" and "VR" will bring untrammelled benefit :-)

Leo: What could be the role of AI in musical production/execution? In the studio we've recently started to explore metaphors of AI as personas - trying to think of AI/human collaboration as something that would happen between two people. In music, that could look something like this (a toy generative sketch follows the list):

· Music through AI - AI as the producer, limiting and controlling what the human artist does.

· Music with AI - Human/AI as a duo, collaborating.

· Music by AI - AI as the musician. Human as the client. AI produces music autonomously, given certain directions.
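
Here is the toy generative sketch mentioned above, illustrating the "music by AI" mode - and only a toy: a weighted random walk, not machine intelligence. The human supplies the direction (a scale and a length) and an autonomous process returns a melody; the scale and step weights are assumptions chosen for the example.

```python
# Toy illustration of "music by AI": the human gives a direction (scale, length),
# an autonomous process returns a melody. A weighted random walk over scale
# degrees, favouring small steps over leaps - not real machine intelligence.
import random

A_MINOR_PENTATONIC = [57, 60, 62, 64, 67, 69]   # MIDI notes: A3 C4 D4 E4 G4 A4

def generate_melody(scale, length=16, seed=None):
    """Weighted random walk over a scale, favouring small steps over leaps."""
    rng = random.Random(seed)
    index = rng.randrange(len(scale))
    melody = [scale[index]]
    for _ in range(length - 1):
        step = rng.choices([-2, -1, 0, 1, 2], weights=[1, 4, 2, 4, 1])[0]
        index = max(0, min(len(scale) - 1, index + step))
        melody.append(scale[index])
    return melody

print(generate_melody(A_MINOR_PENTATONIC, length=16, seed=1))
```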

Andrew: The important thing is not so much what is technologically possible now, but what will be so in five years’ time. Technology changes that fast. But how do we know what that will be? We can make some educated guesses, and we can look at the Gartner hype-cycle and other similar projections. I’ll make a (risky) stab at some predictions…

· I predict neural control becoming mainstream quite soon. There are already basic neural controllers available quite cheaply (around £100), but the technology is not sophisticated enough yet to break through to a wide consumer market. But it’s on its way.

· Bio-technology and genetic sciences are sure bets. Environmental humanities are a growing area.

· The computers we use now will probably disappear, to be replaced by things much less binary and much more flexible (probably made of graphene). Consequently, “digital” as an idea will become as obsolete as “analog” is today.

· Current fads such as “big data” and “internet of things” will be superseded rapidly by super intelligent computational entities that will operate in ways we cannot yet predict. We must negotiate our relationship with these, starting now. Co-operation is the only way forwards.

· Commercial interests will continue to hobble progress in some areas and artificially accelerate others. Meanwhile, society as a whole will decelerate as the pace of change becomes wearisome.

· Something will have to be done about energy.

· The workplace will change yet more (see my 2013 article on “10 jobs of the future”) and we will overcome our objections to many ethical issues that affect human existence. These wider things will affect composition and performance as much as they affect anything else. 

Thor: I consider the development of music technologies a continuous history in which technologies evolve, mutate, cross-breed, etc. For millennia this history can be studied as a feedback loop between instrument makers, performers and composers (or, in some traditions, excluding the role of the "composer", but that's outside the scope here). I'm quite interested, then, in the break in continuity that happens with the digital. The break is profound and can be studied at multiple levels, but we can point to issues such as the arbitrariness of gesture-sound mapping, of representative design, and other features of the digital that break with the continuity and embodiedness of the acoustic instrumental tradition. This is not to say that centuries-old music technologies are not continuing in the new media; rather, it is a comment on the complexity of the considerations we have to make when we translate an acoustic tradition into digital media formats.

To me it's clear that emerging technologies can support both composition and performance in multiple ways. As an example, machine learning and new synthesis algorithms are developing quickly, and it's exciting to behold how musicians are always amongst the first to apply new technologies for musical purposes.

Duncan: I think one needs to think about what the contexts for these activities are: individual, social, remote, etc. Are the people doing these things the same folks that have always done them? How much is physical presence necessary? How do we engage with people so they feel a connection with what they see/hear/feel, and does technology make this more difficult?

Are my niche obsessions shared?  And if so by whom? 


THINGS THAT KEEP US AWAKE, AND THAT WE’D LIKE TO EXPLORE (OVER AND ABOVE THE ABOVE)!

Daniel: I'd like to be exploring  novel perspectives and insights into performances; new possibilities for archiving via linked data and the semantic web; whether we can create new listening contexts via the Internet of Things; open source, open data and creative commons with respect to musical performance; the pitfalls and potentials opened up by the internet's limited economy of attention. 

Leo: Exploring alternative interactions between gesture and output in music. Gesture and output do not need to relate to each other in digital music. Can we explore this potential? What if, for instance, the relation is inverted (i.e. a hard movement produces a soft sound, and vice versa)? A minimal sketch of that inversion follows.
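
The sketch below assumes a gesture sensor that reports intensity normalised to the range 0-1 (the curve exponent is an assumption used only to shape the response).

```python
# Minimal sketch of an inverted gesture-to-sound mapping: a hard movement
# yields a quiet sound, a gentle one a loud sound. Assumes a sensor reading
# normalised to 0..1; the curve exponent just shapes the response.
def inverted_amplitude(intensity: float, curve: float = 2.0) -> float:
    """Map gesture intensity in [0, 1] to an amplitude in [0, 1], inverted."""
    clamped = max(0.0, min(1.0, intensity))
    return (1.0 - clamped) ** curve

print(inverted_amplitude(0.9))   # violent gesture -> near-silence (0.01)
print(inverted_amplitude(0.1))   # gentle gesture -> loud (0.81)
```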

Andrew: I’m interested in exploring:

· Embodied Intelligence in Music

· Multimedia Installations

· Musical use of the Internet of Things

· Data Sonification

· Music and the Semantic Web

Stefan: Stuff that is exciting us at the moment at Kinicho includes the way 3D Audio opens up new ways of composing and recording: for instance, volume is a function of distance and panning a function of localisation, so recordings can have positional and kinetic audio with a centered listener, or the listener can go on a journey through the sound sources (there's a small sketch of this after the list). Also:

· The perception of time differs in stereo and 3D

· 3D Audio performance - new dimensions in the use of space for performance 

· Immersive Sonic Realities - reproducing acoustic environments

· Sonic archiving - recording the sonic fingerprint of a space so it can be preserved for future use 

· Sonic mashups - what would an orchestra sound like if they were playing in the bottom of a bott
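
Here is the small sketch promised above: a two-dimensional toy showing volume as a function of distance and panning as a function of the source's direction relative to the listener. Real ambisonic rendering encodes sources into spherical harmonics, so this only conveys the intuition; the inverse-distance law and the clamp are assumptions.

```python
# Toy 2-D sketch: gain as a function of distance, pan as a function of the
# source's direction relative to the listener. Real ambisonic rendering uses
# spherical-harmonic encoding; this only conveys the intuition.
import math

def gain_and_pan(listener_xy, source_xy):
    """Return (gain, pan), with pan running from -1 (hard left) to +1 (hard right)."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    distance = math.hypot(dx, dy)
    gain = 1.0 / max(distance, 1.0)        # inverse-distance law, clamped up close
    pan = math.sin(math.atan2(dx, dy))     # 0 = straight ahead (+y), +/-1 = hard side
    return gain, pan

# A source 4 m ahead and 1 m to the right of a listener at the origin.
print(gain_and_pan((0.0, 0.0), (1.0, 4.0)))
```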

Daniel: I’d like to be exploring:

· Data sonification - translating information into audible forms, for both artistic and scientific ends; using sound as an interface to information (a short sketch of this follows the list)

· Nonlinear composition - creating musical works that change on each rendition, based on generative techniques or in response to their context

· Live notation - generating scores for performance on-the-fly

· Emergence and self-organisation - harnessing collective behaviours to generate and learn about emergent processes that are more than the sum of their parts

· Augmented composition - using generative tools to extend our compositional practice. We're a long way from artificial creative agencies, but we can certainly get close enough to usefully incorporate semi-autonomous processes in radically augmenting our own composition

· Spatial and immersive audio - see Stefan's comments; I love the idea of recording an archive of sonic fingerprints of specific spaces
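
And here is the short data-sonification sketch referred to in the list above: a numeric series is scaled into a pitch range and rendered as a sequence of tones, so the shape of the data becomes audible. The data, pitch range and note length are all illustrative assumptions.

```python
# Small sketch of data sonification: scale a numeric series into a pitch range
# and render it as tones so the shape of the data becomes audible. The data,
# pitch range and note length are illustrative assumptions.
import numpy as np
import wave

SAMPLE_RATE = 22050
NOTE_SECONDS = 0.25

def sonify(values, low_hz=220.0, high_hz=880.0):
    """Map each value to a pitch between low_hz and high_hz and render a tone."""
    v = np.asarray(values, dtype=float)
    norm = (v - v.min()) / (v.max() - v.min() + 1e-9)
    t = np.linspace(0, NOTE_SECONDS, int(SAMPLE_RATE * NOTE_SECONDS), False)
    tones = [np.sin(2 * np.pi * (low_hz + n * (high_hz - low_hz)) * t) for n in norm]
    return (np.concatenate(tones) * 0.3 * 32767).astype(np.int16)

# Hypothetical data: a week of temperatures becomes a rising-and-falling phrase.
samples = sonify([12.0, 14.5, 17.0, 21.0, 19.5, 15.0, 11.0])
with wave.open("sonification.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes(samples.tobytes())
```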

 

That’s it, so far! There’s still time to add your thoughts into the mix ahead of tomorrow. Just join in via the email ‘string’ or share via Facebook and Twitter.

Really looking forward to seeing you all tomorrow…

 
