To navigate this site, click and drag anywhere on the window and use the mouse-wheel to zoom. Use the menu at the bottom of the screen to change pages, and the links at the left to read about each item.
This is an experimental site using CSS3 and three.js - read more in the About This Site item.
If you experience performance issues when viewing this site, use the settings tab in the bottom right to lower the flock size. The site requires a modern, up-to-date browser, and runs best in Google Chrome.
Self Typing Typewriter
I salvaged this 1930s American-made typewriter, featuring a German keyboard complete with umlauts and an "Ümschalter" key, from a bin in Kensington. It was in pretty poor condition, but with a good cleaning, oiling and a new ribbon it was fully functional - quite why anyone would want to throw away such a beautiful object is beyond me.
The idea to make it type itself came about through an assignment for a module I was studying at Queen Mary called 'cruftfest', which involved finding an interesting but obsolete piece of equipment (or cruft) to turn into a digital media art piece. The keys are operated by forty-four solenoids mounted under the typewriter on a laser-cut acrylic chassis, triggered by forty-four MOSFETs and an Arduino Uno with a multiplexer shield attached to a custom-made PCB at the back. The carriage-return and new-line mechanisms are operated by a small motor.
Due to both budget and space constraints the solenoids had to be small, but, run at their rated voltage, they have nowhere near enough power to pull down the keys with sufficient force to print characters. To get around this, taking inspiration from a disposable camera flash, I built a circuit to charge a capacitor bank to three hundred volts before discharging it through the solenoids, delivering a much more powerful tug on the keys.
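The energy available for each strike is just E = ½CV². A quick sketch of the arithmetic - note that only the 300 V figure comes from the build; the 1000 µF capacitance here is an assumed value for illustration:

```javascript
// Energy stored in a capacitor bank: E = 1/2 * C * V^2
// The 300 V is from the text; 1000 µF is an illustrative assumption.
function capacitorEnergy(capacitanceFarads, voltage) {
  return 0.5 * capacitanceFarads * voltage * voltage;
}

const bankEnergy = capacitorEnergy(1000e-6, 300);
console.log(bankEnergy); // ≈ 45 joules per discharge
```

Even a modest capacitance delivers tens of joules at 300 V, which is why the high-voltage discharge gives so much more punch than the solenoids' rated supply.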
One of the limitations of this method is time - it takes about a second to charge the capacitor bank fully and a typing speed of one character per second seems a little slow. I am currently working on a solution to improve this speed.
Silent Cacophony was an event put on by John McKiernan of Platform 7 to commemorate Remembrance Day on 11.11.2013, and to explore the effect of silence during war and conflict.
John approached me with the idea of using the typewriter in a collaboration with Nancy Esposito, an award-winning poet from Boston, who was writing a poem for the event. The idea was that she would type out her poem in a cafe in Boston, and the typewriter would type what she was typing here in London, at the fantastic St Bride's Institute on Fleet Street. The typewriter seemed to fit well with the concept behind the event, not only due to the wartime connotations of its age and its German keyboard, but also because of the sound it makes, and the extraordinary periods of silence between each stroke.
To connect Nancy and the typewriter I made a webpage which saved what Nancy typed into it onto a server, and a Processing app which grabbed the text off the server as it arrived and sent it via USB to the Arduino in the typewriter. I deliberately denied Nancy a cursor to see where she was typing and the ability to backspace, as you get neither of these when typing on a typewriter; however, this caused some confusion (see Nancy's blog post).
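The relay logic can be sketched as follows. The actual app was written in Processing and talked to the Arduino over USB; this JavaScript version is purely illustrative, and the diff-and-forward strategy is an assumption about how such a relay might work:

```javascript
// Illustrative relay: poll results arrive as the full text typed so
// far; diff against what has already been sent and forward only the
// new characters to the typewriter.
function makeRelay(sendToTypewriter) {
  let sent = "";
  return function onServerText(fullText) {
    // Forward only characters we haven't sent yet.
    if (fullText.length > sent.length && fullText.startsWith(sent)) {
      const fresh = fullText.slice(sent.length);
      for (const ch of fresh) sendToTypewriter(ch);
      sent = fullText;
    }
  };
}
```

Forwarding one character at a time suits the typewriter well, since each character needs its own solenoid strike (and capacitor recharge) anyway.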
The typewriter is available to use for events for a negotiable fee plus travel expenses. Email me for enquiries.
Readymade for eBay
Commissioned by the Barbican as part of their "Dancing Around Duchamp" season celebrating the work of the legendary artist and so-called founder of modern art, Readymade for eBay was a digital art piece which paid homage to Duchamp's readymades while bringing their concept into the digital age of the 21st century.
Using a Duchampian-style algorithm seeded from "random" tweets, items were selected and purchased from eBay. They were then converted into 3D digital models and presented through a website, where the viewer could rotate and examine them in close detail while simultaneously hearing the actual sounds of the items being handled, recorded by artist Chris Jack.
Each item was then re-listed on eBay; if one was bought, a new item was selected to replace it, leading to a slow evolution of the piece over time.
The piece serves as a reminder of how far the art world has come since it was scandalised by Duchamp with a concept so banal by contemporary standards as presenting everyday items as art. It points to the present and future possibilities afforded by digital technology both for artists and wider society, but also highlights the limitations and immateriality of digital objects and modern digital culture.
It can be viewed as it was in its final state at the end of the piece at the following link:
The Barbican blog post about the piece can be seen here:
The modelling was done with Autodesk 123D Catch, and the models were presented using three.js. The website works best in Chrome or Firefox; to view it in Safari you must first enable WebGL (Safari > Preferences > Advanced > Show Develop menu in menu bar, then Develop > Enable WebGL). Due to the high demands of viewing 3D in a browser, the page will not work in Internet Explorer, and will only work on machines with reasonably good graphics cards.
The talking guitar was a project conducted at Queen Mary University of London: essentially an augmentation to an electric guitar that affords the guitarist multi-dimensional, real-time control of their effects parameters. To demonstrate the system I built a formant-filtering effect inspired by the classic wah pedal, giving the guitar an expressive, speech-like sound.
The system consists of a ping-pong ball housing a green LED and an accelerometer attached to the headstock of the guitar. The movements and gestures produced by the guitarist with the neck of the guitar are captured using a webcam positioned off to the side of the guitarist and tracked using Jitter.
The talking effect was achieved by precisely controlling the bandwidth and centre frequency of three peaking filters (or formant filters), which simulate the effect of the vocal tract on the sound produced by the vocal cords. With this method all vowel sounds and a handful of sonorant consonants (w, y, l and r) can be simulated. The following video shows the effect with an 'r' sound and several vowel sounds:
The system is not yet finished, and I am planning to add the following features:
Better integration of accelerometer and video-tracking data, with a view to possibly eliminating the need for the ping-pong ball.
More intelligent gesture recognition.
Porting the system to an embedded solution, eliminating the need for a computer.
Integrating hardware audio effects and foot pedals for more control.
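The peaking filters at the heart of the talking effect can be sketched using the standard RBJ "Audio EQ Cookbook" coefficients. The original effect was not written in JavaScript, and the formant frequencies below are typical textbook values for an "ah" vowel rather than measurements from the actual patch:

```javascript
// Biquad peaking-filter coefficients (RBJ Audio EQ Cookbook form).
// Three of these in series, centred on a vowel's first three
// formants, give the speech-like filtering described above.
function peakingCoeffs(sampleRate, f0, Q, gainDb) {
  const A = Math.pow(10, gainDb / 40);
  const w0 = 2 * Math.PI * f0 / sampleRate;
  const alpha = Math.sin(w0) / (2 * Q);
  const a0 = 1 + alpha / A;
  return {
    b0: (1 + alpha * A) / a0,
    b1: (-2 * Math.cos(w0)) / a0,
    b2: (1 - alpha * A) / a0,
    a1: (-2 * Math.cos(w0)) / a0,
    a2: (1 - alpha / A) / a0,
  };
}

// Approximate textbook formants for "ah" (F1, F2, F3) in Hz.
const formants = [700, 1220, 2600].map(f => peakingCoeffs(44100, f, 5, 12));
```

A useful property of this form is that the gain far from the centre frequency stays at unity, so the three boosts can be cascaded without changing the overall level of the guitar signal.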
In the spring and summer of 2013 I did a six-month internship with United Visual Artists, an art and design studio based in London specialising in technological art installations. While there I worked on several projects with another intern, Martin Brooks, some of which are described below.
Vanishing Point was a laser installation for "Photography Playground", an exhibition in Berlin curated by Olympus. The piece featured a white laser located at the end of a long room, pointing towards the audience and separated from them by three sheets of a specially constructed material called laser voile. The audience would walk into the haze-filled room and look through the laser voile directly towards the laser, which was programmed to draw lines of light in the space in a generative manner. As these lines of light propagated through the space they hit the laser voile, which allowed most of each beam to pass through untouched while diffusing a small amount in all directions, showing up as a bright line drawn in the plane of the material.
With three sheets of laser voile arranged between the audience and the laser, the patterns drawn by the lines of light were repeated at diminishing sizes as the distance from the audience increased, creating a remarkable illusion of perspective inspired by the original drawings of Leon Battista Alberti, Leonardo da Vinci and Albrecht Dürer. The haze in the room diffused the beams further, converting the lines into planes which stretched from the viewer through the laser voile and all the way back to the laser. As the laser drew these planes in a random but structured way, the audience was immersed in a shifting light structure whose walls could appear or disappear at any moment.
My task was to add a soundscape to the piece - one subtly reactive to the visual content of the installation, reflecting and reinforcing the overall concept. The laser used in the piece drew lines by reflecting its beam in a given direction using a tiny mirror moving at extremely high speed to scan out the patterns programmed into it. The movement of the mirror is surprisingly loud, and fairly irritating to anyone nearby, but interestingly it changes depending on what patterns are being drawn, in a quite unpredictable way. We therefore decided to capture the sound with a piezo contact microphone and bring it into Max/MSP to process it and make it more palatable. By using a physical reflection of the very process generating the visual content of the piece, we hoped to create a much more organic, natural-sounding soundscape than would be possible by synthesising the audio from scratch.
The Serpentine Pavilion is an annually commissioned architectural installation at the Serpentine Gallery in Hyde Park, created in 2013 by Sou Fujimoto as an amorphous, cloud-like structure built from a strict grid of hollow white steel cubes. UVA were commissioned to produce a sound and light installation at the pavilion for the Serpentine Summer Party fundraiser, an A-list event attended by the likes of Mick Jagger and Kate Moss, and in keeping with the cloud concept of the architecture they decided to create a lightning storm within the structure. This was achieved by attaching cleverly disguised strips of high-intensity white LEDs to the steel struts with magnets, such that they were practically invisible under normal circumstances. Then, driven by a carefully orchestrated generative system, the strips were triggered, causing an electrical storm to rage through the structure and casting disorientating shadows on the audience below with a straight-lined illusion of geometry reminiscent of Vanishing Point.
For the soundscape to the installation the goal was to produce an explosive, explicitly responsive sound palette that made the audience believe they were in the centre of an electrical storm. With the lights being controlled generatively the strategy we deployed was to use information from the lighting control system to trigger short samples with an element of granular synthesis in sync with the light bursts. This was achieved using OSC and a Max patch.
Rag and Bone
This project was a commercial commission for UVA's sister company Artisan by the fashion brand Rag and Bone. The company wanted a hybrid installation and stage design for their autumn collection fashion show; as it was their first show in the UK, they wanted it to have a high impact. Artisan's proposal was a set of 7ft by 2ft mirrors which could be rotated in sequence as the models walked across the stage, producing a disorientating, chopped-up visual backdrop.
My role in the project was to program the stepper motor drivers which rotated the mirrors, using their proprietary code, to communicate with the central control system which sequenced the performance. As the intern, and the only one who could reach, it was also my job to polish the mirrors until they were free of oh-so-stubborn finger smudges - at roughly 11am, after the all-night install.
The main project I worked on at UVA was a development of one of their most successful installations, Chorus, which consisted of an array of pendulums with solid, inflexible rods, swinging on a fixed axis with the same period but with a controllable phase difference. Lights and speakers in the pendulum bobs created an immersive, evolving environment for the audience as the pendulums swung overhead - one with a very organic, analogue feel, in contrast to the highly technological nature of the installation.
UVA wanted to extend the Chorus pendulums to swing in three dimensions, and to increase the sense of danger by hanging the pendulums from flexible cables rather than solid rods, so close together that their swings could overlap, giving the impression they could collide.
We built a pair of three-metre-long pendulums from laser-cut acrylic, and a control system consisting of a pair of stepper motors, an accelerometer and a gyroscope.
The trouble with trying to control a pendulum is that you can't - not really. A pendulum's period is determined by its length and nothing else, and trying to change that, especially if the pendulum is hung on a flexible cable, is impractical. What you can do, though, is nudge it a little, subtly delaying or boosting its swing by a handful of milliseconds. By doing this in a controlled way across an array of pendulums, you can control the phase and produce some fascinating effects.
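For small swings this is just the textbook formula T = 2π√(L/g); plugging in the three-metre pendulums described above:

```javascript
// Small-angle period of a simple pendulum: T = 2π * sqrt(L / g).
function pendulumPeriod(lengthMetres, g = 9.81) {
  return 2 * Math.PI * Math.sqrt(lengthMetres / g);
}

console.log(pendulumPeriod(3).toFixed(2)); // ≈ 3.47 seconds per full swing
```

Note that the bob's mass and the swing's amplitude don't appear anywhere - which is exactly why the only practical lever of control is a well-timed nudge.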
The pendulums were swung at their natural frequency by a pair of stepper motors, each attached to an arm which was attached in turn to the pendulum's cable (think of how you would swing a yoyo). To control the phase difference between the pendulums, a Teensy 3.0 microcontroller in each bob constantly monitored the accelerometer and gyroscope to work out precisely when the bob switched direction, at the peak of each swing. The microcontroller then sent a timestamp back to a computer, which compared the phase of each pendulum's bob with that of its stepper motor. By subtly changing the speed of the motor, this phase difference could be adjusted, which equally subtly affected the period of the pendulum, allowing one pendulum's phase to be shifted relative to another.
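The phase comparison at the heart of this loop can be sketched as follows. This is illustrative only - the actual control code is UVA's, and the function names and proportional gain here are assumptions:

```javascript
// Given the timestamps (ms) at which two bobs last reversed
// direction, compute their phase difference as a fraction of a
// cycle, then derive a gentle speed correction for the follower.
function phaseError(leaderPeakMs, followerPeakMs, periodMs) {
  // Wrap the timing difference into (-period/2, period/2].
  let diff = (followerPeakMs - leaderPeakMs) % periodMs;
  if (diff > periodMs / 2) diff -= periodMs;
  if (diff <= -periodMs / 2) diff += periodMs;
  return diff / periodMs; // fraction of a cycle
}

function motorSpeedCorrection(error, gain) {
  // Positive error: the follower peaks late (lags), so speed its
  // motor up slightly; negative error: slow it down.
  return gain * error;
}
```

A small gain is essential: each correction only nudges the drive a fraction of a percent, so the pendulum's motion stays smooth while its phase drifts slowly to the target.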
The work I did on the pendulums was continued by UVA after I left and developed into a large exhibition which will open at a major London gallery in early 2014.
About This Site
My aim with this site was to build a three-dimensional, immersive interface using only the DOM - no proprietary Flash, no patchily-supported WebGL. Everything you can see is an HTML element, manipulated using the CSS3 matrix3d transform (with the help of the three.js CSS3DRenderer) and the standard DOM event model.
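As a rough illustration of what the CSS3DRenderer does for each element - serialising a 4x4 transform into the browser-native matrix3d() function - here is a minimal sketch for a pure translation (the function name is my own; the renderer's internals differ):

```javascript
// CSS matrix3d() takes 16 values in column-major order; for a pure
// translation the offsets occupy the final column.
function translationMatrix3d(x, y, z) {
  const m = [
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    x, y, z, 1,
  ];
  return "matrix3d(" + m.join(",") + ")";
}

// In the browser you would apply it to an element, e.g.:
// element.style.transform = translationMatrix3d(0, 0, -200);
```

Because the browser composites these transforms in hardware, hundreds of elements can be repositioned every frame without repainting their contents.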
The movement of the elements is controlled by a flocking algorithm, using the concepts of alignment, separation and cohesion. Each element looks at all the other elements of its group within a certain distance and adjusts its movement so that its direction is similar to theirs (alignment) but also heads towards the centre of the group (cohesion), while at the same time being repelled by any elements that come too close (separation).
With these three simple rules, complex group behaviours can arise. To see some of these in action, click the "change flock behaviour" button in the settings tab in the bottom right corner. This randomises the weightings for the three flocking behaviours, and can produce some quite surprising results.
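The three rules can be sketched in a minimal 2D version - the site applies the same idea to DOM elements in three dimensions, and the neighbourhood radius and rule weights here are illustrative, not the site's actual values:

```javascript
// One step of a minimal 2D boids simulation. Each boid is
// { x, y, vx, vy }; wAlign, wCohere and wSeparate weight the
// three flocking rules.
function flockStep(boids, radius, wAlign, wCohere, wSeparate) {
  return boids.map(b => {
    let ax = 0, ay = 0, cx = 0, cy = 0, sx = 0, sy = 0, n = 0;
    for (const other of boids) {
      if (other === b) continue;
      const dx = other.x - b.x, dy = other.y - b.y;
      const d = Math.hypot(dx, dy);
      if (d > radius || d === 0) continue;
      n++;
      ax += other.vx; ay += other.vy; // alignment: average heading
      cx += other.x;  cy += other.y;  // cohesion: group centre
      sx -= dx / d;   sy -= dy / d;   // separation: push away
    }
    let { vx, vy } = b;
    if (n > 0) {
      vx += wAlign * (ax / n - vx) + wCohere * (cx / n - b.x) + wSeparate * sx;
      vy += wAlign * (ay / n - vy) + wCohere * (cy / n - b.y) + wSeparate * sy;
    }
    return { x: b.x + vx, y: b.y + vy, vx, vy };
  });
}
```

Randomising the three weights, as the "change flock behaviour" button does, is enough to flip the group between tight swarming, loose drifting and mutual avoidance.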
I'm very much hoping these problems will be tackled by famo.us, but in the meantime I would like to acknowledge the fantastic three.js library and its creator Mr Doob for their inspirational work.
The Internet in 3D

It's an exciting time to be a web designer. The slow but sure progression of browsers away from the closed technology of Flash and the historical proprietary quirks of IE and Netscape, towards the nirvana of open-standard HTML5, is opening many doors. Soon, the web designer's primary job will not be endlessly tricking IE into doing things it should simply do without question, but rather working out how best to leverage the wealth of features offered by CSS3, WebGL and HTML5 to improve their users' experience.
Ever since the excellent three.js library provided an easy-to-use API for manipulating 3D graphics with WebGL, I've been sure that 3D is the future. The level of immersion and satisfaction it has the potential to offer simply dwarfs the 2D, jQuery-UI-style interactivity that is now omnipresent in modern websites, and it is my opinion (although I know many disagree) that it will come to replace it in many cases.
That's why I'm so interested in CSS3's 3D transforms, and why I was so excited when I heard that Mr Doob, the creator of three.js, had produced an extension to his library which allows DOM elements to be rendered in 3D using CSS3 transforms. Finally, HTML content can be manipulated within the DOM in mathematically simplified 3D, powered by browser-native, hardware-accelerated CSS. 3D interfaces are possible!
My name is Liam Donovan and I am a researcher, artist, electronics hacker and freelance web designer.
I am a PhD candidate on the Media and Arts Technology programme at Queen Mary University of London, and am part of the Augmented Instruments Laboratory research group within the Centre for Digital Music. My research is primarily concerned with creating new acoustic musical instruments which create sound via physically vibrating objects such as strings, plates and air cavities, as opposed to synthesising sound electronically and outputting it through speakers.
As an artist I am interested in exploring the relationship between the digital and the physical, and in particular in highlighting the limitations and unsatisfying nature of immaterial, digital things. These notions have heavily influenced my research.
As a web designer I am interested in exploring the cutting edge of new web technologies like HTML5, CSS3 and WebGL; see my Internet in 3D blog post for more details.