Home made Virtual Soup

Home made Virtual Soup was developed by artist Avital Meshi to connect Second Life avatars with Real-Life guests. The meeting was held at St. Columba Catholic School in Second Life during soup time. The artist set up the meeting by visiting the Second Life boarding school multiple times, where she talked with Miss Sarah Sandalwood, the editor of O’Hare’s Gap. Together, Miss Tali (Avital Meshi’s Second Life avatar) and Miss Sarah Sandalwood “opened a mystical portal between”[1] the two worlds. The meeting was a wonderful real-time experience in which realities collided. The artist invited her “school family”[1] to enjoy soup with the Second Life students of St. Columba, and they all experienced a “moment of communal fellowship.”[1]


The artist chose St. Columba after visiting the site several times and joining the community for soup. The boarding school’s residents discussed their lives during soup time, and Avital was intrigued that, even though the users were playing roles within the school, there was a strong, genuine bond they all shared over soup.

The artist wanted the Home made Virtual Soup meeting to be an extension of St. Columba’s dining room in Second Life. She dressed her table with a red tablecloth, white placemats and dishes, making an “almost exact”[1] replica of the table in Second Life. The soup prepared for the Real-Life guests was St. Columba’s super-secret yellow cream-based recipe (butternut squash).

Once the meeting started, the Real-Life guests spoke directly to the Second Life avatars, who responded in text. A screen on the wall of St. Columba’s dining room showed the Real-Life guests, while the Second Life avatars were projected on a wall in the artist’s “almost replica”[1] room.

Avital Meshi has done multiple projects involving Second Life. She began studying Second Life to research virtual worlds and how people form relationships and family bonds within them. She has questioned role-playing and identity, exploring the different identities players assume and act out. She has used virtual reality as an artistic medium to create a space where role-playing Second Life avatars can interact with Real-Life personalities, colliding different realities. In Home made Virtual Soup, Avital explores ideas of “networked communication”[2] and how it “has shaped behavior and consciousness within and beyond the realm of what is conventionally defined as art.”[2]


[1] http://www.avitalmeshi.com/home-made-virtual-soup-2017.html

[2] http://rhizome.org/community/10459/



Skinonskinonskin is a 1999 digital art piece that displays a love correspondence between two artists in a unique and fascinating manner. The artists Auriea "Entropy8" Harvey and Michaël "Zuper!" Samyn met on a website for digital artists. Their creative digital love "letters" are what now make up the art piece Skinonskinonskin[1]. Currently the piece can be accessed via a Windows 98 emulator on the referenced archived website.

In total there are 25 letters, each with some form of interactive graphic. While not letters in a traditional sense, each piece is a file that contains a graphic as well as some form of message, whether that is embedded in the source code or within the letter itself. An example of this is the very first letter, breath.html, in which Michaël created an image of a torso with a dynamic extra cursor that mirrors the user's cursor position. His own heavy breathing can be heard playing in the background. Hidden messages to Harvey can be found in the code for breath.html.


Another exemplary letter features an interactive picture of Auriea Harvey against a green background. The face repositions itself to stay directed towards the cursor's position, allowing for an intimate interaction that often gives the participant a sense of voyeurism into someone else's intimate life.
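The cursor-following behaviour described above can be sketched in a few lines. This is not Harvey and Samyn's actual code (the original letters were hand-built late-1990s web pages); it is a minimal, hypothetical illustration of the underlying geometry, and `angleToCursor` is an invented helper name:

```typescript
// Hypothetical sketch of a "face follows the cursor" interaction.
// Not the artists' code; only an illustration of the technique.
function angleToCursor(
  faceX: number,
  faceY: number,
  cursorX: number,
  cursorY: number
): number {
  // atan2 returns the angle (in radians) of the vector pointing
  // from the face's position toward the cursor.
  return Math.atan2(cursorY - faceY, cursorX - faceX);
}

// In a browser, a mousemove handler could apply the angle as a CSS
// transform so the image keeps turning toward the pointer, e.g.:
//   document.addEventListener("mousemove", (e) => {
//     img.style.transform =
//       `rotate(${angleToCursor(cx, cy, e.clientX, e.clientY)}rad)`;
//   });
```

Tiny as it is, this mapping from pointer position to rotation is what makes the portrait feel watched and watching at once.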

Almost all of the letters are created in a way that the user can interact with the simulated object, environment, or representation, usually in a fairly simplistic manner. Doing so reveals love notes, messages, and other meanings. There are often personal representations of the two artists, either in the form of a rendered avatar of sorts, or actual images and pictures taken of their bodies.

Skinonskinonskin offers a unique insight into the novelty of online intimacy, especially from a time before our modern-day connectedness. While today we have video calls, instant image sharing, and the like, at the time of Skinonskinonskin's creation the artists had no such luxuries, and instead turned to creative and deeply intimate methods. We are offered a somewhat voyeuristic window into a deeply personal dialogue between paramours. The release of the piece was, in large part, an expression of change and rebirth; according to Harvey, "It seemed that the time was right to release our love into the world and to leave our old lives behind." At the time such expressions were rare on the internet, quite unlike how relationships can be broadcast in our modern times.

[1] http://archive.rhizome.org/anthology/skinonskinonskin.html



This is so cool. I like how you chose a project that was created in a time outside of instant messaging. You did a great job explaining the project and providing examples of the different letters. Great analysis. I wish there was a video, maybe you could do a screen recording of the online love letters. Great job! -Ashley McDougal

We Are All Made of Light

We Are All Made Of Light is an immersive art exhibition which exemplifies the interconnectedness that we share with our fellow humans and the universe itself. The release of the installation coincided with Seattle’s BOREALIS: a festival of light, debuting in October 2018 in South Lake Union and across Seattle. The work was created by the Seattle-based artist Maja Petrić, who set out to demonstrate the thought that we are all just individuals in a vast and expansive universe, yet we all share some form of expanded consciousness.


The work asks us to allow ourselves to experience an environment of interconnectedness and explore the world of connected consciousness, allowing an individual to step back from their narrow perspective in order to see humans in a different light. This installation strips our identities from us at the door, abstracting away the physical qualities that so often separate us as humans and allowing us to truly appreciate the human form and love one another no matter our differences. Such is the beauty of a collective consciousness: it has a way of exposing the natural beauty of the universe by prompting us to explore the connections between us and the rest of the world that so often stay hidden in plain sight.


To create such an intense realization, the installation uses interactive light, spatialized sound, and AI to create audiovisual trails of each individual as they move through the space. These human-like light threads accumulate as the exhibition progresses, forming a constellation of lights created by visitors, who are connected to one another through past, present, and future. The result is an immersive starscape, evoking the beauty of the universe and revealing the invisible ties between each one of us and the world at large.

Through the revelations provided by such a work of art, Maja Petrić hopes that humans can learn to love each other a little more and respect the wider universe in which we live. In her description of the work, Petrić says the following: "My desire is that such experience leads to answering the following questions. If we could glimpse the paths crossed, trails left by our fellow humans, would we see how we all alike shape our environments, the spatial memory of our environment? What would being immersed in this mesh of trails lend us in the understanding of each other and our collective experience?"[1]


[1] https://www.majapetric.com/we-are-all-made-of-light


What an amazing looking piece! It is like walking among a night sky. I wonder how many visitors will be recorded into the project while it remains active?


The placement of images and the video in this article is very well done – it gives the sense of continuously falling stardust from the top of the article to the bottom.  An issue that could be easily changed is the use of first-person grammar – the article should be informative without an obvious hint of opinion.  Additionally, would it be possible to add more direct quotes from the artist about their thoughts of their own work?

-Eric Mitchell


In 2015, London-based artist John Walter premiered his multimedia project Alien Sex Club, which sought to explore the relationship between visual culture and HIV as well as open a public discussion about this often-taboo topic.  But even after Alien Sex Club had its final exhibitions, Walter felt that there was still more to say on the connection between art and the nature of viruses.

Three years later, Walter, alongside his collaborator at University College London (UCL), Professor Greg Towers, presented his new multimedia project CAPSID, featuring a short film titled A Virus Walks Into a Bar, which draws on the "bar" motif of British soap operas to present an analogous tale of how HIV enters human cells and destroys them from within [1].

Walter spent those three years in UCL's Towers Lab, studying under Professor Towers to better understand the interactions between a virus and its host, specifically HIV-1 and the effect of AIDS on the human body's immune system.  His meticulous research is shown in the presentation of the HIV capsid, which is a protein shell surrounding the virus and acting as a "sphere of invisibility," preventing its DNA from being detected by the immune system [1].

In A Virus Walks Into a Bar, the HIV character within his capsid is initially barred from entry into the bar (white blood cell) by the bouncers, figuring him to be a troublemaker, but is allowed in by a few friendly chaps who presume that the new guy just needs a few pints to warm him up.  As HIV's DNA progresses through the bar, with some pushback by other patrons (cytoplasmids, proteins, etc.), he eventually reaches the "bartender" – the cell's nucleus.  And then, he singlehandedly takes control of the cell by replacing the "bartender," allowing himself to replicate and take over more "bars" [1].

Although the idea of equating a real-life visual motif, such as a pub, to a microscopic topic like HIV seems too eccentric to be factually sound, CAPSID is, at its core, scientifically rooted.  During its production, Walter became invested in the scientific basis of Towers' research, immersing himself in intellectual discourse with undergraduate students and participating in the lab's science outreach programs.  This was a unique opportunity for Professor Towers' team to approach their research from different perspectives, and it challenged them to introspectively question their own investigations.  Professor Towers himself actively engaged in discussion with Walter about the artistic merit of his research as well.  This surprising role-reversal produced spectacular results, as Walter mixed elements of scientific jargon and factual observations to craft other analogous artifacts for his pieces [1].

This is not as surprising as one might first think, though; much like 9 Evenings by E.A.T., the artist and the scientist seemed to share an understanding that they each possess elements of Techné and Agnos, and that equal contribution by both parties would form a greater whole [2].

In one portion of his exhibit, Walter took the likenesses of pop-culture characters and corporate logos to act as co-factors (the protein particles in cells that allow the capsid access to the nuclei).  In another, plush toys and silicone foreskin are framed as the defense mechanism in cells that detects and kills foreign material.  Through these paintings and installations, Walter's work not only uncovers new ways of expressing how HIV behaves, but presents his findings in a way that the non-scientifically inclined can understand.  Additionally, with the collaboration between John Walter and Professor Greg Towers, this exhibition displays an uncommon situation in new media – the infection of science into the art world, and art into the oft-secluded science community [1].

The final exhibition of Walter's CAPSID closed on January 6, 2019, but more information can be found on his official site, johnwalter.net.


[1] Regine. “A Virus Walks into a Bar. Or How Art and Science Can Infect Each Other.” We Make Money Not Art, 4 Dec. 2018, http://we-make-money-not-art.com/a-virus-walks-into-a-bar-or-how-art-and-science-can-infect-each-other/.

[2] Shanken, Edward A. Art and Electronic Media. Phaidon, 2014.

Walter, John. CAPSID | Multi-Media Maximalist Installation by London-Based British Artist John Walter, http://www.johnwalter.net/capsid/.






The Tunnel under the Atlantic

The Tunnel under the Atlantic, by Maurice Benayoun, is a televirtual installation that connects visitors of the Pompidou Centre in Paris to visitors of the Museum of Contemporary Art in Montreal through a simulated “digging” through a skewed virtual medium.  By using televirtuality, people in each museum can dig toward each other, and then interact within this virtual channel.  They are brought together through communication as they survey their virtual surroundings, which reveal geological strata as iconographic strata, bringing both shape and recognizable textures.

This televirtuality was made possible by Tristan Lorach, who created the so-called “3D Digging Engine,” which allowed users to "dig" anywhere they wanted within the 3D space.  As the video (below) shows, this 3D virtual space is very similar to Minecraft: you dig down through the virtual blocks under the surface of the world, slowly creating caves and paths that become part of a larger system.
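As a toy illustration of what such a "dig anywhere" voxel space involves (the real 3D Digging Engine's implementation is not described in the sources, so every name below is invented), a solid block of cells that visitors carve into tunnels might look like:

```typescript
// Toy voxel space loosely inspired by the description of Lorach's
// "3D Digging Engine". An invented sketch, not the engine itself.
type Voxel = "earth" | "air";

class DiggingSpace {
  private grid: Voxel[];

  constructor(private size: number) {
    // The world starts as a solid cube of earth.
    this.grid = new Array<Voxel>(size * size * size).fill("earth");
  }

  private index(x: number, y: number, z: number): number {
    return x + this.size * (y + this.size * z);
  }

  // Carving out voxels one by one is what slowly turns the solid
  // block into caves and paths, Minecraft-style.
  dig(x: number, y: number, z: number): void {
    this.grid[this.index(x, y, z)] = "air";
  }

  isOpen(x: number, y: number, z: number): boolean {
    return this.grid[this.index(x, y, z)] === "air";
  }
}
```

In the installation, two visitors digging from opposite sides would "meet" once their tunnels share an open voxel, which is where the communication channel opens.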


This type of art, connecting two groups of people in two separate locations through a virtual medium, recalls Kit Galloway and Sherrie Rabinowitz's HOLE-IN-SPACE (1980), an artwork that connected people in public spaces in Los Angeles and New York City via satellite.  People who happened to walk by the large, window-display-sized screen in a busy shopping center in Los Angeles could instantaneously talk and communicate with people in New York City. Today, with the advent of Skype and other services that have made videoconferencing mundane, it is hard to imagine what the experience might have been like in 1980. Interviews (see video below) show that people were extremely excited about it, even though some were not completely sure of what was happening, since videoconferencing would not become widespread for at least two decades. Perhaps not surprisingly, young men and women tried to pick each other up!  After the first day, even though it was not publicly announced, people began to plan to meet friends and loved ones via Hole in Space.

1) http://www.artelectronicmedia.com/artwork/the-tunnel-under-the-atlantic

2) http://youtu.be/gQ2Ti6nB9mg

3) http://www.benayoun.com/projet.php?id=14

4) http://youtu.be/QSMVtE1QjaU

5) http://artelectronicmedia.com/artwork/hole-in-space

METRO Re/De-construction

METRO Re/De-Construction is a six-minute video compilation of a series of 3D-rendered scenes from a trip along the Denver Light Rail. Artist Chris Coleman created this thought-provoking animation by bringing a handheld 3D scanner onto the train he takes to work every morning. He scanned the inside of the light-rail cars while the train was in motion, and hopped off the train to walk around and scan stations before catching the next one.


Because Coleman scanned the insides of cars by walking through them with the scanner in hand as the train was moving, the bumps and jostles that resulted from his steps and the movement of the cars manifested themselves in the final image as distortions, moving objects and ripped holes in the readings from a device that is otherwise accurate to the millimeter. The result is a digitally fractured and frayed re/de-construction (as per the title) of an otherwise familiar setting. Because the medium of a 3D render allowed Coleman to explore the world his scanner had recorded, something not unlike an "out of body" experience is created in the final video, in which the camera approaches the boundaries of what the 3D scanner could sense. As the viewer travels further down successive train cars, the picture becomes more and more abstracted, until it finally collapses at the edge of perception.





What makes this artwork truly special is its perspective from the center of motion. Most artworks that involve digital perception place their instruments as externally as possible, outside the immediate area of change and movement. Morphovision by Toshio Iwai engages with concepts of distortion similar to METRO's, combining a spinning 3D object with projections of lines onto that object to create controlled distortion effects that make viewers see a 3D image that does not actually exist. Sanctum by James Coupe and Juan Pampin also collects data at a well-traveled commute location, but explores the concept of automated profiling based on artificial intelligence. Legible City by Jeffrey Shaw is highly similar to METRO in that it is an exploration of a 3D rendering, but allows for greater immersion and less realism; it is more immersive in that viewers operate a stationary bicycle to virtually ride through a model of a city, but less realistic since the model city has large 3D-rendered words instead of buildings. These and many other works place their digital devices above, around, or outside of the action in some way, as this is the best way to gather accurate data or project a stable image. METRO breaks this paradigm by subjecting its sensor to the random, jerky, high-velocity conditions that we as humans experience every day of our lives. We do not sit perched on high all day watching the goings-on below us; we are, at our core, transient beings, and thus always in motion. Our perception of the world around us is born out of that state. In particular, this piece draws to mind the words of Herbert W. Franke:

It is now possible through information psychology to give a quantitative expression for optimally perceivable information aggregates. We now find that in the case of periodically changing patterns not more than 16 bits per second of information enter consciousness and that a maximum of 0.7 bits per second can enter the memory; here one bit stands for the unit of information represented by 1 or 0 in a computer store. In the case of static arrangements, the complexity of the picture must not exceed 160 bits – at least not in one plane of observation. (Shanken 205-6) 

In this excerpt from Franke's Theoretical Foundations of Computer Art [1971], he explains that an average person can't store more than about 0.7 bits of information per second in memory, and can consciously perceive only about 16 bits of information per second. The 3D scanner can store much more information much faster than we can, so the fact that the world in motion around us, and later our memory of that world, does not seem as distorted as the image produced by the scan speaks to the enormous power of the brain. It fills in missing bits and smooths out the rough corners of the pictures we build in our minds. It makes you wonder: if you could go back and fly through your memories, would they really be any more organized or complete than METRO?


Shanken, Edward A. "Documents: Herbert W. Franke: Theoretical Foundations of Computer Art" Art and Electronic Media. London: Phaidon, 2009. 205-6. Print.

Giant Ice Bag

‘I am for an art … that does something other than sit on its ass in a museum,’ stated Claes Oldenburg in 1961.[1] With the production of his mechanised ‘Icebags’ in 1971 this desire was realised. Icebag—scale B moves almost imperceptibly in the exhibition space, slowly winding and undulating this way and that, creating the rather eerie sensation that it is following one around the room. The soft, amorphous folds of bright yellow nylon lend the object an organic feel as it gently, almost reassuringly, ‘breathes’.

With works such as Icebag—scale B, Oldenburg was at the forefront of the avant-garde sculpture that transformed art in the 1960s. Soft sculpture was Oldenburg’s preferred medium and he is widely credited as the catalyst for the movement, which evolved out of the soft, oversized props he created for performances at the Ray Gun theatre in New York City in the early 1960s. Oldenburg’s art has always been preoccupied with the ordinary, and his subjects are consistently taken from his immediate surroundings. This practice of elevating humble, everyday objects to the realm of high art has its origins in the Surrealist obsession with the objet trouvé (found object), the readymade tradition set in train by Marcel Duchamp. In Icebag—scale B the Surrealists’ absurdist disregard for scale and functionality is married to a Pop art fixation on consumerism. Forfeiting functionality for style, the everyday utilitarian object—the icebag—becomes an ideal, reversing the original Modernist dictum that form must follow function; here, function follows form.

Icebag—scale B is a variant [101cm or 40 inches tall] of Oldenburg’s first mechanised sculpture, Giant Icebag [left, 600cm or about 20 feet tall] which he created for the 1970 World’s Fair in Osaka, Japan [and as part of the Art and Technology program at the Los Angeles County Museum of Art – Ed.] Harnessing technology to realise the movement of the ‘icebags’, Oldenburg was able ‘to take something which is formidable in its complexity, and make it do some very foolish thing—I sort of like the idea that all this time and effort was spent on the Icebag.’[2] And, indeed, much time and effort was spent on the creation of the very complex work: over 14 months were devoted to the collaboration between Oldenburg, Gemini GEL and Krofft enterprises. Although the icebags are identical, each of the 25 examples in the edition was individually assembled and fitted with the specially designed hydraulic system that regulates inflation and deflation. For Oldenburg, who is particularly concerned with ‘movement and the conversion of states’,[3] the resultant works were well worth the long process.

Oldenburg’s soft sculptures―mechanised or not―never simply ‘sit on their arses’. Fashioned from malleable materials, and ranging in size from miniature to mammoth, the works take on their own life and exist in a constant state of flux. In Icebag—scale B this idea is taken to its extreme: distending and deflating like a lung, the object goes beyond its utility as a therapeutic aid, to become an uncanny extension of the body itself.

Note: Above description extensively quoted from:

Brooke Babington and Emilie Owens, "Soft Sculpture"
International Art
National Gallery of Australia, Canberra


[1] The artist, 1961, cited in Charles Harrison and Paul Wood (eds), Art in theory: 1900–2000, Blackwell Publishing, New York, 2005, p 744

[2] The artist, 1971, quoted in Claes Oldenburg, Claes Oldenburg: an anthology, Guggenheim & National Gallery of Art, New York & Washington, 1995, p 323

[3] Cited at http://www.tate.org.uk accessed March 2009

From : http://artsearch.nga.gov.au/Detail.cfm?IRN=37808

The Virtual Museum

This article is a STUB please make edits and adjustments as suggested on Wikipedia to make it more robust.  Thanks!


An exact reproduction of the exhibition space is shown on a large monitor placed on a circular, motorized platform. Sitting in front of the screen in an armchair, visitors can navigate their way through four further virtual spaces by using the weight of their body to tip or swivel the chair. Each of these spaces contains different things: a gallery of pictures with running captions; an accumulation of sculptures consisting of letters of the alphabet; characters from the kanji alphabet on which sequences of film can be seen; and floating letters that become a source of light. This ‘Virtual Museum’ functions only in part as a visual memory facility. Although every artistic medium is represented in it, paintings, sculptures, films and the computer-generated space itself are all transformed into signs that can be interpreted only with the help of specialist knowledge.


Source : http://www.medienkunstnetz.de/works/the-virtuel-museum/

Nano Mandala

This article is a STUB please make edits and adjustments as suggested on Wikipedia to make it more robust.  Thanks!


The Nanomandala is an installation by media artist Victoria Vesna, in collaboration with nanoscience pioneer James Gimzewski.

The installation consists of a video projected onto a disk of sand, 8 feet in diameter. Visitors can touch the sand as images are projected in evolving scale, from the molecular structure of a single grain of sand – achieved by means of a scanning electron microscope (SEM) – to the recognizable image of the complete mandala, and then back again.

This coming together of art, science and technology is a modern interpretation of an ancient tradition that consecrates the planet and its inhabitants to bring about purification and healing. The sand mandala of Chakrasamvara seen in this installation was created by Tibetan Buddhist monks from the Gaden Lhopa Khangtsen Monastery in India, in conjunction with the “Circle of Bliss” exhibition on Nepalese and Tibetan Buddhist Art at the Los Angeles County Museum of Art. This particular sand mandala had never before been made in the United States.

To complement the video, sound artist Anne Niemetz has developed a meditative soundscape derived from sounds recorded during the creative process of making the sand mandala.

Of the installation the artist says: “Inspired by watching the nanoscientist at work, purposefully arranging atoms just as the monk laboriously creates sand images grain by grain, this work brings together the Eastern and Western minds through their shared process centered on patience. Both cultures use these bottom-up building practices to create a complex picture of the world from extremely different perspectives.”

With generous support from the David W. Bermant Foundation.

The Nanomandala was premiered at the exhibition NANO at the Los Angeles County Museum of Art.


Source : http://nano.arts.ucla.edu/mandala/mandala.php

The World Generator

This article is a STUB please make edits and adjustments as suggested on Wikipedia to make it more robust.  Thanks!


The World Generator was developed in 1996/97 with programmer Gideon May and specifically enlarged for “p0es1s.” Participants can use this digital machine to generate, in real-time, virtual surroundings out of different digital elements, within which they can then navigate. 3-D objects can be selected, put in scale, given texture; pictures and videos can be placed and repeatedly rearranged in an ever growing world; music, as well as written and spoken texts, change the projected world into an audio-visual space of experience. These “Recombinant Poetics” are used to examine the way meanings arise out of different medial contexts. For “p0es1s,” Bill Seaman has used this electronic machine in order to transpose text from the context of the World Generator into the real exhibition area: i.e., the text installed in the stairway of the Kulturforum “describes” physical space in a manner derived from the virtual space of the World Generator.


Source : http://www.p0es1s.net/en/projects/bill_seaman.html

Frontiers of Utopia


«Frontiers of Utopia» is the final part of an interactive series exploring the «history and nature of idealism, technology and design». In the first two parts, «Machinedreams» (1991) and «Paradise Tossed» (1993), the relationship between desire, design and memory was explored in a stylised and dreamy manner.

The viewer was able to construct a collage of sounds and images from 1900, 1930, 1960 and 1990 by moving through both real and virtual space (their movement triggering sounds) or by using icons on a touch-sensitive screen. In this way they became time travellers, making interesting associations as well as learning about history in a new way.

«Frontiers of Utopia» presents the viewer with the politics of the ideal society from the points of view of eight different female characters from these same time zones. «Frontiers of Utopia» creates and illustrates the various moods, criticisms and attitudes toward Utopia along with the articulation of these eight characters and their views about society. In doing so, it presents a rich tapestry of ideas, attitudes, locations and historical perspectives.

The work has a theatrical Brechtian nature, as the viewers can speak to the virtual characters about the struggles of workers, the plight of students, the relationship between women and the attitudes toward the implications of media and technology. Its epic sweep is impressive, traversing as it does decades and continents and drawing parallels between periods and locations based on shared class and gender.

In the installation the viewers can move interactively through the four time zone layers of «Frontiers of Utopia,» via touching icons and objects in the space as if they were time travellers. In another part of the installation they can attend a virtual dinner party where all the characters and their personal objects are available for comparison.

(Source: Jill Scott Homepage)

Jill Scott

from http://www.medienkunstnetz.de/works/frontiers-of-utopia/


Lover’s Leap

This article is a STUB please make edits and adjustments as suggested on Wikipedia to make it more robust.  Thanks!


Lovers Leap, 1995

Interactive environment produced in two forms simultaneously as an interactive multimedia installation (using the viewer’s body as a triggering device) and as a CD-ROM (using the viewer’s hand as a triggering device). Thus the work occupies both a public space and a personal space. This created an environment where the audience self-selected themselves into participants and observers. Within each experience there exists a nexus between the work itself and the viewer’s interpretation of it. Collaboration with Ludger Hovestadt and Ford Oxaal. Produced at ZKM /The Center for Art and Media Technology, Karlsruhe, Germany, Artist in Residency Fellowship.

Source : http://www.rogala.org/LoversLeap.htm

The Forest

This article is a STUB please make edits and adjustments as suggested on Wikipedia to make it more robust.  Thanks!


After finishing the computer-animation version of “The Forest”, Waliczky began to work on the second, interactive variant of “The Forest”, in collaboration with Jeffrey Shaw and Sebastian Egner. Here, the animation becomes part of an interactive installation based on a flight simulator whose cockpit is replaced by a simple platform with a seat and a large monitor. Using a joystick mounted in the arm of the seat, the viewer can negotiate his own path through the forest shown on the screen in front of him. The flight simulator reacts accordingly, so that changes of speed or direction are experienced as physical sensations. For this version, Sebastian Egner, who wrote the program and designed the control system for the platform, also designed a new method of constructing the visual image, as for technical reasons it was not appropriate to use the previous solution. In the new version of the work, the drawings of the trees are not mounted on transparent cylinders but randomly arranged inside a huge cube, in which the camera is free to move in any direction the viewer chooses. In theory, when the camera reaches the side of the cube, it passes through into a new box of the same type, with exactly the same trees; in fact, however, it re-enters the same cube from the opposite side. Thus this space, too, appears infinite.

The flight simulator version of “The Forest” differs significantly from other flight simulator installations. It does not have a pre-recorded, linear structure to follow, like the technically similar installations at amusement parks. Nor is it a computer game: it has no goal to reach (for example, an airport at which to land or an enemy to destroy). On the contrary, “The Forest” speaks about the purposelessness of human actions. Trying the flight simulator version of “The Forest” is a meditative experience.

Source : http://www.waliczky.net/pages/waliczky_forest2.htm

Portrait One




“Portrait One (1990) by Luc Courchesne is a fictional work and a framed encounter with a character. But unlike other interactive works, it is not a narrative piece, as multi-layered as it may be. It is structured so that the viewer can converse with Marie. (…) To experience Portrait One, is, simply put, to encounter Marie.” (1)

“I use hypermedia to make portraits. A portrait of someone is an account of an encounter between the author and the subject. Painted portraits were made over long periods of time and therefore are more conceptual than photographic portraits. They encapsulate in one single image hours of interaction between the model and the painter. Photography, on the other hand, makes realistic portraits. The talent of the portrait photographer is to wait and pick the right moment – the moment when the person expresses the density of his or her being; the subject and the photographer wait for the magic moment in complicity. In my portraits, the entire encounter is recorded, and material is extracted to construct a mechanics of interaction that will allow visitors to conduct their own interviews. As this happens over time, the conversations will evolve toward more intimate considerations.” (2)

Portrait One was shown at the Montreal Museum of Fine Arts from September 20 to December 9, 2007, for the exhibition e-art: New Technologies and Contemporary Art, Ten Years of Accomplishments by the Daniel Langlois Foundation.



Source : http://www.fondation-langlois.org/html/e/page.php?NumPage=157

(1) Jean Gagnon, excerpt from “Blind Date in Cyberspace or the Figure that Speaks” [Text originally published in Artintact 2 : CD-ROMagazin interaktiver Kunst = Artists’ interactive CD-ROMagazine (Karlsruhe : ZKM/Zentrum für Kunst und Medientechnologie Karlsruhe; Ostfilden : Cantz Verlag, 1995).]

(2) Luc Courchesne “Family Portrait : The Art of Portraiture,” Luc Courchesne : Interactive Portraits (Ottawa : National Gallery of Canada, 1994) : 3.


Bar Code Hotel



Bar Code Hotel (1994) recycles the ubiquitous symbols found on every consumer product to create a multi-user interface to an unruly virtual environment. The installation makes use of a number of strategies to create a casual, social, multi-person interface. The public simultaneously influences and interacts with computer-generated objects in an oversized three-dimensional projection, scanning and transmitting printed bar code information instantaneously into the computer system. The objects, each corresponding to a different user, exist as semi-autonomous agents that are only partially under the control of their human collaborators.

Each guest who checks into the Bar Code Hotel dons a pair of 3D glasses and picks up a bar code wand, a lightweight pen with the ability to scan and transmit printed bar code information instantaneously into the computer system. Because each wand can be distinguished by the system as a separate input device, each guest can have their own consistent identity and personality in the computer-generated world. And since the interface is the room itself, guests can interact not only with the computer-generated world, but with each other as well. Bar code technology provides a virtually unlimited series of low-maintenance sensing devices (constrained only by available physical space), mapping every square inch of the room’s surface into the virtual realm of the computer.

The projected environment consists of a number of computer-generated objects, each one corresponding to a different guest. These objects are brought into being by scanning unique bar codes that are printed on white cubes that are dispersed throughout the room. Once brought into existence, objects exist as semi-autonomous agents that are only partially under the control of their human collaborators. They also respond to other objects, and to their environment. They emit a variety of sounds in the course of their actions and interactions. They have their own behaviors and personalities; they have their own life spans (on the order of a few minutes); they age and (eventually) die.
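The behaviour described above – objects only partially obeying their guests, ageing over a life span of a few minutes, and eventually dying – can be sketched as a toy agent loop. Every name and number here is an assumption for illustration, not Hoberman's implementation:

```python
import random

# Toy sketch (all names and numbers hypothetical) of Bar Code Hotel's
# semi-autonomous objects: each guest's object follows scanned commands
# only partially, ages over a short life span, and eventually dies.

class HotelObject:
    def __init__(self, guest_id, life_span=180.0, obedience=0.5):
        self.guest_id = guest_id
        self.life_span = life_span  # seconds until the object dies
        self.age = 0.0
        self.obedience = obedience  # chance of obeying a scanned command

    def update(self, dt, scanned_command=None):
        """Advance time and return the object's next action."""
        self.age += dt
        if self.age >= self.life_span:
            return "dead"
        if scanned_command is not None and random.random() < self.obedience:
            return scanned_command       # follows the guest...
        return "autonomous behaviour"    # ...or acts on its own

obj = HotelObject("wand-1", life_span=2.0, obedience=1.0)
print(obj.update(1.0, "jump"))  # -> jump
print(obj.update(1.5))          # life span exceeded -> dead
```

The `obedience` parameter is the sketch's stand-in for the "only partially under control" quality of the installation's objects.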

Since any bar code can be scanned at any time, the narrative logic of Bar Code Hotel is strictly dependent on the decisions and whims of its guests. It can be played like a game without rules, or like a musical ensemble. It can seem to be a slow and graceful dance, or a slapstick comedy. And because the activities of Bar Code Hotel are affected both by its changing guests and by the autonomous behaviors of its various objects, the potential exists for the manifestation of a vast number of unpredictable and dynamic scenarios.


 Source : http://www.perryhoberman.com/page24/index.html



SEE BANFF! is an interactive stereoscopic installation. It bears a strong – and intentional – resemblance to an Edison kinetoscope, which made its public debut one hundred years ago, in April 1894. The kinetoscope achieved instant popularity, but its reign was short-lived: one and a half years later, in December 1895, the Lumière brothers publicly exhibited projected film, and cinema as we know it was born. The kinetoscope became a transitional symbol during a turbulent era in the media arts.

Physically, SEE BANFF! is a self-contained unit about the size of a podium, made out of walnut and brass, with a viewing hood on top and a crank on the side, as well as a selector for choosing one of the silent "views."

These views were filmed around Banff and rural Alberta in autumn 1993. They were recorded with two stop-frame 16mm film cameras mounted on a "super jogger" baby carriage. Stereoscopic recording was triggered either by an intervalometer (for timelapse) or by an encoder on one of the carriage wheels (for dollies and moviemaps). Since the filming was "stop-frame" (rather than "real-time"), time and space appear compressed.

The imagery is part of an investigation of the role of media and its relationship to landscape, tourism, and growth. Recordings were made dollying along waterfalls, glaciers, mountains, and farmland; moviemapping up and down popular natural trails; and timelapsing tourists.

SEE BANFF! looks and feels like a real kinetoscope. Turning the crank allows the user to browse back and forth – to "move through" the material.


Source : http://www.naimark.net/projects/banff.html



Brenda Laurel is a designer, researcher and writer. Her work focuses on interactive narrative, human-computer interaction, and cultural aspects of technology. Her career in human-computer interaction spans over twenty-five years. She holds an M.F.A. and Ph.D. in theatre from the Ohio State University. Her doctoral dissertation was the first to propose a comprehensive architecture for computer-based interactive fantasy and fiction. Brenda was one of the founding members of the research staff at Interval Research Corporation in Palo Alto, California, where she coordinated research activities exploring gender and technology, and where she co-produced and directed the Placeholder Virtual Reality project. She was also one of the founders and VP/Design of Purple Moon, a spinoff company from Interval formed to market products based on this research. In 1990 she co-founded Telepresence Research, Inc. to develop virtual reality and remote presence technology and applications. She has worked as a software designer, producer, and researcher for companies including Atari, Activision, and Apple. She currently serves as Chair and graduate faculty member of the graduate Media Design Program at the Art Center College of Design in Pasadena, California. Brenda has published extensively on topics including interactive fiction, computer games, autonomous agents, virtual reality, and political and artistic issues in interactive media.

Placeholder was a Virtual Reality project produced by Interval Research Corporation and The Banff Centre for The Performing Arts, and directed by Brenda Laurel and Rachel Strickland, which explored a new paradigm for multi-person narrative action in virtual environments at the Banff Centre in 1992. The Placeholder archive – composed of over 600 megabytes of working papers, published reports, video clips, sounds, and retrospective comments – is available on a Macintosh CD-ROM by special arrangement.

Source : http://digitalarts.lmc.gatech.edu/unesco/vr/artists/vr_a_blaurel.html

Spectral Bodies


Spectral Bodies 1991 – artist statement

from http://www.catherinerichards.ca/artwork/spectral_statement.html

Spectral Bodies is a videotape that considers the issues at stake around simulation and subjectivity in contemporary technology. Specifically, it is concerned with virtual reality technology, which literally reads and writes our bodies. Using emerging technology, it traverses the boundaries between the imaginary and the 'real' in the realm of scientific metaphor, physiological testing and the construction of our own subjectivity – of our selves. Spectral Bodies could be seen as an approach to science fiction that disrupts the metaphors we have woven around ourselves and technology. Technology is treated as instrumenting (if I can use this word), or rendering instrumental our preoccupations and desires in today's culture. As a result, this science fiction is as much a fiction of science as it is a fiction about contemporary subjectivity.

Through puzzles and short narratives, the tape describes how we may perceive a loss or change of body and, quite often along with it, a loss of personal self. The image of the body as an intimate part of perceptual and technological processes has been extended through the development of computer technological instrumentation, such as VR. The conceptual reinscription implied by this and the possibility of occupying other senses is explored in my recent works such as this one. Spectral Bodies focuses on narratives that explore one of these other senses – the sense of presence. It is a form of intimacy with our own bodies that is relatively unacknowledged in everyday life and yet enables us to function as a complex relation of parts, despite body changes. This sense is formally known as the proprioceptive sense. As a sense of presence it is, in many ways, how we know we are.

The Spectral Bodies narratives can be seen as equally fictional and 'real' testimonies to how we think we know where our senses and body boundaries begin and end. The tape begins with a reference to mid-nineteenth century scientists who discovered through the study of after-images how the corporal body is involved in the process of vision. At that time scientific activity in this area was so compelling and exhilarating that several scientists went blind experimenting with their own vision by staring directly at the sun. This new work on vision destabilized the earlier camera obscura model in which the observer was autonomous and sacrosanct. It marked a beginning: the body was seen subsequently as both influencing our perceptual process and as a site to read and to write upon.

Two stories in the tape from neurological histories poignantly testify to the loss of the sense of the body. The first is the most touching: a woman attempts to describe what it is like to completely lose, through neurological disease, any sense of where her body is at any moment in time. She is like a rag doll, incapable of motion or position. The proprioceptive sense is taken for granted to such a degree that both she and our culture lack words to describe it. It is like the body is blind, she says, it cannot see itself. This is part of the reason I play with images of blindness and sight, external and internal, throughout the tape.

We often assume our body limits are fixed, and the boundaries clear. Other stories in the tape tell how bodies can change in impossible ways, however, and very quickly. These testimonies are excerpts of physiological experiments, which I performed, creating body illusions with 'subjects'. The experiments suggest that our imaginary bodies (or are they our real bodies?) can be destabilized.

After destabilization, 'bodies' are ready to be re-mapped. A segment of interactive work then begins the process of re-representing the body as a virtual body. Working with scientists at Brandeis University in Boston and the University of Alberta in Edmonton, I combined body illusions of impossible arm and hand transformations with VR technology. A spectral hand and arm are portrayed as spectral dots and skeletal lines in a virtual environment. The VR participant recognizes this as a conventional schematic representation of his or her arm and hand. However, these initial representations of hand/arm begin to transform. The spectral arm and fingers lengthen, the palm spreads open and so on. Combined with body illusions, the participant's limb is mapped into an insect-like thing. As this mapping takes place there seems to be a physiological sense that this transformed virtual body is oneself.

VR computer technology can be approached as a kind of fiction made physical. At the same time it is a machine to create fictions of ourselves. This short excerpt of virtual reality (VR) work in the middle of the tape is one narrative among many – and one irony among others. As a whole, Spectral Bodies opens the door on the permeable and flexible mapping of the imaginary body with the material one, pointing to the role of technology as an instrument that intervenes in both the physiological and the imaginary.


More Catherine Richards on Art and Electronic Media : http://artelectronicmedia.com/artwork/curiosity-cabinet-at-the-end-of-the-millennium



Liquid Architectures



Marcos Novak defines liquid architecture: “What is liquid architecture? A liquid architecture is an architecture whose form is contingent on the interests of the beholder; it is an architecture that opens to welcome you and closes to defend you; it is an architecture without doors and hallways, where the next room is always where it needs to be and what it needs to be. It is an architecture that dances or pulsates, becomes tranquil or agitated. Liquid architecture makes liquid cities, cities that change at the shift of a value, where visitors with different backgrounds see different landmarks, where neighborhoods vary with ideas held in common, and evolve as the ideas mature or dissolve.”

Source : http://www.zakros.com/liquidarchitecture/liquidarchitecture.html

Watch a video here : https://sma.sciarc.edu/video/marcos-novak-liquid-architecture/



Finding Eutaw and North

Finding Eutaw and North is a 34-minute animation created in 2005 by WeWorkForThem, the collaboration of Michael Paul Young and Michael Cina. The piece consists of a series of abstract vignettes of Young’s 3D animation and Cina’s audio work, which were mixed together live using custom software.

The piece combines abstract, dynamic architectural models with ambient music and audio samples, intended to convey an interpretation of “the mean streets of Baltimore”. Rather than an interpretation of the visual aspects of the city, Finding Eutaw and North is a response to “the mood, the lifestyle, the ideas, the language.”






Outerspace is a reactive robotic creature with lifelike interactive behaviour. The robot wants to explore the world surrounding it – the outer space – exhibiting curiosity and wariness as an apprehensive animal might. A participant may put a hand up to the robot and cause it to pull away, as if surprised at the recognition of another being, then move forward searching for the thing that caught its attention. The concept that inspired the work was that an object, inherently not living, cannot have emotion. In order to create an emotional object (the goal), the thing must first be aroused – feel, have an emotion; then comes emotional expression. In technical terms, it must read input and display output.

The robot comprises three ‘limbs’: the head, the midsection, and the lower body. The head contains five photo sensors that detect light and shadows, as well as simple movements. The body of the robot has two capacitive sensors which detect contact, so that it may react to human touch. Motion is controlled by four motors that power a system of wires and pulleys. Input from the sensors is interpreted by computer software that informs the robot’s movements, which are executed by two microcontrollers.
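The read-input/display-output loop described above can be sketched as a simple sensor-to-behaviour mapping. Sensor names, thresholds, and behaviours here are illustrative assumptions, not the robot's actual firmware:

```python
# Hypothetical sketch of the sensor-to-behaviour loop: a shadow falling
# over the head's photo sensors draws the robot forward to investigate,
# while contact on the capacitive sensors makes it pull away, startled.

def react(photo_levels, touched, baseline=0.8):
    """Map one frame of sensor readings to a motor behaviour."""
    if touched:
        return "pull_away"          # startled by contact
    if min(photo_levels) < baseline:
        return "approach"           # something nearby blocks the light
    return "idle"

print(react([0.9, 0.9, 0.3, 0.9, 0.9], touched=False))  # -> approach
print(react([0.9] * 5, touched=True))                    # -> pull_away
```

In the installation this mapping would run continuously, with the microcontrollers translating each behaviour into motor commands for the wire-and-pulley system.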

The work is the result of a university project whose guidelines were to create a reactive physical object by means of microcontrollers and sensors. The students, Markus Lerner and Andre Stubbe, were awarded an Honorary Mention at Prix Ars Electronica in 2006.

Technology: http://www.outerspace-robot.com/technology/

Object of Desire: Philosophical Conflict : http://www.markuslerner.com/news/lustobjekt-philosophical-conflict/?lang=de&imageID=1&order=ASC&by=bestof&zoom=


Given Time


In Given Time, artist Nathaniel Stern works across different forms of media but maintains his focus on the relationship between bodies and art. Given Time is a video installation that uses the 3D social network platform Second Life to comment on our role in the real world as compared to the online world.

In Second Life, users create an avatar that inhabits a virtual world in a 3D environment. Users interact with one another as they would in real life, only virtually, in a computer graphics setting. Each individual you may come across in a Second Life world is a representation of a real human at his or her own computer, who could conceivably be any distance away from you. 

For this installation, Stern created two avatars in Second Life who stand apart, facing one another. Their location within the in-game world is never disclosed, but they do occupy virtual space somewhere. In physical space, the two avatars are projected onto screens opposite one another in the gallery, as in Second Life. Viewers can approach the two virtual performers while they hover as video projections, forever staring at one another. These representational characters are, in fact, nobody. They exist only on the virtual plane, yet they are brought into existence via light and electricity.

To enhance the tangible effect on the viewer, the characters are given a stylized appearance and ambient noises can be heard in their proximity. Their features and embodiments in Second Life are as real and legitimate as any other avatar's in the game world, except that they have no real-life counterpart – no physical representation of their digital existence. Essentially ghosts in the machine, they are present only as electronic impulses and data configurations in a network thousands of miles long.

Given Time elicits questions of reality, consciousness, and physical space. The fact that a network like Second Life exists illustrates the questions and curiosities we have about identity. This piece further probes those inquiries and offers a paradox to reflect on.







Chinese new media artist Feng Mengbo has worked with the iconic first-person shooters Doom and Quake numerous times throughout his career. In Q3 (1999), Feng recorded footage of the game Quake III Arena and superimposed over it live-action video of himself, toting a camera around the battlefield and interviewing contestants. Feng expanded upon this idea in 2002's Q4U (a play on the common abbreviation for the game, Q3A) by completely reworking the game's code to replace all character models with a model of himself, bespectacled and shirtless, with a gun in one hand and a video camera in the other. Feng's final version of the game, released in 2004 and pictured here, saw the addition of a dance pad used to control all of the player character's motions.


Feng's modded version of Q3A was made available to players online, and he regularly gave installation demos involving live matches against anonymous opponents. With every character bearing the image of the artist, friends and foes alike found themselves locked in a bloody deathmatch with no way of telling who should or shouldn't be killed. Cameras in tow, a dozen Fengs repeatedly encounter each other, given no other choice of interaction but violence. While this may seem to deliver a deliberately negative message about our interpersonal interactions, Feng instead hopes the viewer will come to appreciate the "beauty of virtual violence…which can be switched on or off at any moment" [1]. The dance pad controller gives an added element of physicality and personal investment to the piece; dance games, played on similar controllers, were also highly popular in China at the time of AHQ's creation, and thus gathered the interest of viewers on a popular cultural level as well.  



Tim Hawkinson works in a variety of media, but perhaps most famously in the realm of kinetic sculpture. A well-known tinkerer, Hawkinson makes pieces that often have an exposed ruggedness to them: a simplicity in their presentation that exposes the complex mechanical underpinnings and thought processes within. Signature Chair, a 1993 sculpture, deals with themes of rhythmic repetition and self-obsession through the use of a small motorized contraption attached to a vintage school desk. The contraption, which contains a pen, copies out “Tim Hawkinson” endlessly on rolls of paper, discarding the sheets around the desk itself. The process becomes the piece, as the quirky, Rube Goldberg-esque contraption continually copies the artist’s name and then litters it on the ground. As the paper slowly envelops the desk, viewers are forced to consider the expectations created by the slow engulfment, raising questions about ego and our habitual acts of self-declaration and self-worth.



Nemo Observatorium


Lawrence Malstaf - Nemo Observatorium

In this single-viewer experience, a person is invited to enter a transparent PVC cylinder, about 6 feet in radius, and sit in a comfy armchair. To the right is a button that, when pressed, activates five fans that whip bits of polystyrene into a simulated typhoon. The viewer is engulfed in a whirlwind of activity that is at once awesome, overwhelming, and mysterious: there is obviously no real storm cell overhead, yet the energy of the moving air is made apparent in a form that begs an emotional response. One might find the chaotic activity unpleasant and entrapping, but more likely the viewer will find calm in the rhythmic undulations of the beads. After a time the wind dies down, and the participant may leave or begin round two. The mesmeric environment causes one to lose one's sense of time, and the piece has been called a practice in meditation. One must ask oneself whether it is the enveloping energy or the evoked connection to a storm that is so affecting.










Peter d’Agostino’s VR/RV is an interactive installation piece which simulates a recreational vehicle as it navigates a virtual space in which the roles of territory and map have been reversed. Participants don head-mounted displays and “data gloves” in order to drive through a landscape that mixes Philadelphia, the Rocky Mountains, Kuwait City, and Hiroshima. As the car traverses the environment, the radio picks up snippets of audio from historical music and events in addition to simulated audio from the landscape. The piece examines our “technologically-determined” culture by bombarding riders with both the utopian possibilities and the dystopian consequences of technological advancement.


The piece received an honorary award for interactive art at Prix Ars Electronica in Linz, Austria.




Miao Xiaochun’s recent work transforms paintings from the canon of Western art history into photographic and animated computer models. Microcosm is based on Hieronymus Bosch’s 15th century masterpiece The Garden of Earthly Delights. It is an imaginative reinvention of its sumptuous landscape of sin and salvation, where new digital means and computer technologies have allowed Miao Xiaochun to conjure a contemporary visual vocabulary. He abolishes the traditional fixed single-point perspective aesthetic, instead favouring the Chinese tradition of multiple points of view, which he constructs into a world of radically different metaphors and tangled relations between reality and virtual reality. In his remake of this work for the AVIE 360 degree immersive 3D cinema, Miao has taken Microcosm a spectacular step further, literally placing the viewer in the centre of its phantasmagoric universe.

Miao Xiaochun successfully uses 3D technology to build a virtual 3D scene upon a 2D image, transforming a still canvas into moving images. He thereby changes the traditional way of viewing paintings and gives a completely new interpretation and significance to a masterpiece of art, especially through the striking use of his idiosyncratic imagination about history and the future. His works add another important example to contemporary negotiations with art history, and open up new potential spheres for art as he experiments with new possibilities.


The Jew of Malta

Jew of Malta - medial stage and costume design


The Jew of Malta – Scenic Concept: Content-driven Interaction

The Jew of Malta – Development of Costume Design

Description of the project

The goal of the project was to transform the traditionally static stage setting into a reactive, dynamic stage design that plays its own vital role in the narration.

On the stage, large planes were arranged onto which architecture, generated in real time, was projected. The projection screens formed clipping planes through an imaginary virtual architecture positioned on stage. The movements and gestures of Machiavelli, the opera’s protagonist, were camera-tracked, and the virtual architecture moved accordingly. This concept closely linked the staged action and the architecture: Machiavelli, as a powerful and dominant character in the play, has power over the stage (and consequently over his co-actors) through the possibilities of interaction given to him.

In addition to the architecture, the costumes of the actors were also augmented with digital media. Via a tracking system developed especially for this opera, digital masks were generated in real-time, according to the silhouettes of the actors. Textures were then pasted onto these masks, and the ensuing “media costumes” were projected to fit exactly onto the singers. This way, it was possible to depict the characters’ conditions and feelings with dynamic textures on their bodies.
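The silhouette-masking idea can be sketched as a threshold-and-composite step. This is a toy illustration with assumed data shapes, not ART+COM's actual tracking pipeline:

```python
# Toy illustration (assumed data shapes) of the silhouette-masking step:
# threshold a camera frame into a binary mask of the actor, then show
# the costume texture only inside the mask.

def silhouette_mask(frame, threshold=0.5):
    """Binary mask: 1 where the frame is brighter than the threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in frame]

def apply_costume(mask, texture):
    """Keep texture pixels only where the mask is on; black elsewhere."""
    return [[t if m else 0 for m, t in zip(mrow, trow)]
            for mrow, trow in zip(mask, texture)]

frame = [[0.9, 0.2],
         [0.8, 0.7]]   # hypothetical 2x2 camera frame
texture = [[5, 5],
           [5, 5]]     # hypothetical costume texture
print(apply_costume(silhouette_mask(frame), texture))  # -> [[5, 0], [5, 5]]
```

Projecting the composited result back onto the stage, aligned with the camera's view, is what makes the "media costume" appear to cling to the singer's body.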

Despite the complexity of the software and hardware developed for this project, technology was never at the forefront. The exclusive aim was to generate new ways of expression for the director and the actors.

The project was commissioned by the Opera Biennale Munich in 1999 and premiered in 2002. Composer: André Werner; libretto based on the play by Christopher Marlowe. The project is a co-production between ART+COM and bureau+staubach, supported by ZKM Karlsruhe. Co-authors and developers: Nils Krueger, Bernd Lintermann, Andre Bernhardt, Jan Schroeder, Andreas Kratky.

source:  http://www.artcom.de/en/projects/project/detail/medial-stage-and-costume-design/

Selective Memory Theater

Selective Memory Theatre pulls images from Flickr and manipulates them to form a metaphor for how memory and perception work in the brain.

“Digital installations that claim to mimic the ineffable processes of our minds usually do nothing of the sort, but Matthias Dörfelt‘s “Selective Memory Theatre” is subtler than most. To him, the main difference between our memories and digital files is that our minds can actually forget things stored within them, whereas computers — outside of server crashes and file corruptions — never do. “Selective Memory Theatre” pulls images off of the image-sharing site Flickr, then uses two layers of data-processing to distort, remix, and display them in a way that metaphorically mimics the way our own brains store and reconstruct memories.”

“Neuroscience tells us that memories, unlike digital data files, are re-built every time we recall them. Dörfelt’s art makes a lot of other conjectures about how the brain turns raw perceptions into coherent memories, and if you feel like fact-checking them, head over to Mindhacks.com and go nuts. But here’s how “Selective Memory Theatre” works. First, a programmed “perception layer” sucks in new images from Flickr and mixes them into a kind of raw noise in the “memory layer.””

“Then, the two layers communicate: as new images come into the perception layer, it uses the photo’s Flickr tags to associate it with other, similar images in the memory layer. Those images then get called back up and displayed at 30 frames per second, as do the connections themselves (visualized as glowing nodes in a network).”
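The tag-based recall between the two layers might be sketched as an inverted index from tags to stored images. The data model below is an assumption for illustration, not Dörfelt's code:

```python
from collections import defaultdict

# Rough sketch (assumed data model) of the tag association: a new
# image's Flickr tags recall similar images already held in the memory
# layer, then the new image is stored there as well.

memory_layer = defaultdict(set)  # tag -> ids of stored images

def store(image_id, tags):
    for tag in tags:
        memory_layer[tag].add(image_id)

def perceive(image_id, tags):
    """Recall stored images sharing a tag, then memorise the new one."""
    recalled = set()
    for tag in tags:
        recalled |= memory_layer[tag]
    store(image_id, tags)
    return sorted(recalled)

store("img1", ["cat", "street"])
store("img2", ["dog"])
print(perceive("img3", ["cat"]))  # -> ['img1']
```

In the installation, the recalled images (and the associations themselves) are what get displayed at 30 frames per second.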

“According to Dörfelt, “this demonstrates the interrelation between perception and memory, which oblivion results from.” Er, ok. In case you couldn’t already tell, the best way to appreciate “Selective Memory Theatre” is just to bask in its mesmerizing visuals, not their quasi-scientific interpretations. And there’s plenty to appreciate.”

John Pavlus, “Selective Memory Theater Uses Flickr to Mimic the Brain,” Co.Design, 16 February 2011.

AR Magic System

AR Magic System is an interactive system inspired by the magical illusions performed by illusionists. Clara Boj and Diego Diaz, the artists behind Lalalab, recount that an Ancient Egyptian magician interchanged the heads of chickens and ducks and made people believe it was really magic; this story was the inspiration for the work.

Based on an augmented reality system, AR Magic System allows users to exchange faces with their neighbours in real time by looking at themselves in a mirror-like video projection. Audiences participate enthusiastically in the augmented reality system; their interactions are intuitive. The produced effect is funny and magical; people pull rather unbecoming faces when their faces appear on their neighbours’ bodies, as if they were trying to make the rest of their neighbour’s body look silly. In an interview, the artists said, “It is really amazing how people react when they look at themselves and see another face that is smiling or talking and they cannot control the expression. It is as if somebody has supplanted your identity. For us it was a real surprise how people enjoyed this very simple idea and they played for a long time and called their friends to see how it feels to be the other. […] Almost everybody who played with the piece took a picture of their transformed face. We found dozens of those pictures on Flickr, which for us is a sign of how people enjoyed the experience.” [1]

It is a rather confusing sight, as the participants’ faces and bodies are mixed up, resulting in a sense of disembodiment. This somehow distances the participants from their sense of visual identity which then allows them to interact intuitively and freely with the other person’s body. By playing with technological simplicity, Lalalab raises complex questions about one’s sense of identity and how it is constructed.

AR Magic System recalls Paul Sermon’s artwork Telematic Dreaming (1993, see linked entry), which created a hybrid space that joins real and virtual forms of presence. Two separate beds, video cameras and a projector were connected by a telecommunications line. These were used to create a very eerie effect where a participant would lie on the bed with the projection of the other participant next to him or herself. Both AR Magic System and Telematic Dreaming challenge human perception, space and privacy through the intermingling of real and virtual forms of presence. However, AR Magic System differs from Sermon’s work by including identity in its subject matter, which heightens the sense of disembodiment and freedom in the virtual world.

Lalalab’s augmented reality system could be seen as a visualised form of Eduardo Kac’s concept of telepresence: he saw “telepresence art as challenging the teleological nature of technology.” In AR Magic System the camera is no longer used to record what stands before it, nor does the screen display what has been recorded, revealing what Kac calls a “phenomenology of perception” [2] and a new form of communication with others and with one’s own identity.



[1] WMMNA Interview with Lalalab

[2] Kac, Eduardo, ‘Telepresence Art’ (1993). In: Edward Shanken, ed. Art and Electronic Media. London: Phaidon, 2009, p. 237.

DeResFX.Kill (KarmaPhysics < Elvis)

DeResFX.Kill (Karma Physics < Elvis) is a self-playing modification of the science-fiction first-person shooter Unreal Tournament 2003. When a small custom computer is plugged into a projector or monitor and powered on, it starts up and displays the work. The viewer is pulled slowly through an endless chasm of pink fog filled with countless floating, flailing bodies of Elvis Presley. The representations of Elvis are appropriated archetypes that represent a certain type of character; they are empty shells that one can inhabit during a video game.

The choice of pink background and the use of Presley’s image imbue the artwork with a rather humorous quality. The artist’s sense of humour seems rather dark when observers discover that the convulsions of Elvis are controlled by the original game’s Karma real-time physics system – a type of procedural animation that applies real-world physics to video games, especially when enacting death animations. Movements are calculated at the moment they are executed, enabling the video game character to interact more naturally with its virtual environment. The “reality” of the game is not predetermined when karma physics is used; it unfolds from the player’s actions within the virtual environment. Condon states that he misuses karma physics as a “new representation of death via code, not just the visual surface of trauma, but the physical dynamics of the falling figure.” [1] DeResFX.Kill (Karma Physics < Elvis) seems even darker in its subject matter when one realises that the artist is keeping the allegedly immortal ‘King of Rock ‘n Roll’ performing an eternal ‘dance of death’ in his artwork. This realisation is striking in both its gravity and its lightness: death is seen as traumatic, yet one realises that millions of game characters ‘lose their lives’ daily. Death is reduced to a simple, repetitive action on a pink background in the virtual world.
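
The difference between a pre-recorded death animation and the procedural kind Condon appropriates can be illustrated with a toy integrator. This is a drastic simplification of a ragdoll system like Karma, reduced to a single falling point mass; the gravity, restitution and time-step values are arbitrary.

```python
GRAVITY = -9.81    # m/s^2
RESTITUTION = 0.4  # fraction of velocity kept on impact (arbitrary)
DT = 1 / 60        # simulation step, 60 frames per second

def simulate_fall(y, vy=0.0, steps=600):
    """Procedurally animate a falling body: nothing is keyframed,
    each position follows from the forces acting at that instant."""
    path = []
    for _ in range(steps):
        vy += GRAVITY * DT          # integrate acceleration into velocity
        y += vy * DT                # integrate velocity into position
        if y < 0.0:                 # the body hits the ground
            y = 0.0
            vy = -vy * RESTITUTION  # bounce back with energy loss
        path.append(y)
    return path
```

A full ragdoll does this for every limb, with joint constraints linking them, which is why each of Condon’s Elvises falls slightly differently every time.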

Just as Condon used the Unreal Tournament 2003 gaming engine to create endless simulations of death (which jarringly contrast with his game environment and character choice), Mary Flanagan used the same technology in her work [domestic] where the 3D computer environment simulates memory and spaces in a house fire. It is another traumatic subject which raises questions about re-experiencing memories in the virtual world and how computer games create meaning.



[1] Condon, Brody, New Planes of Existence, 2008, p. 20

Brody Condon website



MovieMovie

MovieMovie was an expanded cinema performance created specially for the 4th Experimental Film Festival in Belgium. Film, slides and liquid-light show effects were projected on and through a thirty-foot inflatable plastic dome and on the people interacting in and around it. Expanded cinema, as theorized by Gene Youngblood, sought to extend the range of possibilities of film by incorporating unconventional media, performance, and contexts (AEM, pp. 218-19). MovieMovie anticipates Shaw’s subsequent artworks incorporating virtual reality and other technological media.

“Three performers – Jeffrey Shaw, Theo Botschuyver, and Sean Wellesley-Miller – dressed in white overalls, first carried in the inflatable structure and unrolled it on the floor. While it was being gradually inflated, film, slides and liquid-light show effects were projected onto its surface. Its fully inflated shape was a 7m diameter and 10m high cone with an outer transparent membrane and an inner white surface. The projected imagery first impinged lightly on the outer envelope and then appeared on the semi-inflated inner surface; in the intermediate space various material actions were performed.” [1]

The intention of this work was to transmute the conventional flat cinema projection screen into a three-dimensional kinetic and architectonic space of visualization. The multiple projection surfaces allowed the images to materialize in many layers, and the physical bodies of the performers and then of the audience became part of the cinematic spectacle. In this way the immersive space of cinematic fiction included the literal and interactive immersion of the viewers, who modulated the changing shapes of the pneumatic architecture, which in turn modulated the shifting deformations of the projected imagery. With speakers placed both outside and inside the structure, its acoustic environment was also modulated in this way.


Relations to other artworks

Since the late 1960s Jeffrey Shaw has pioneered the use of interactivity and virtuality in his many art installations. His works have been exhibited worldwide at major museums and festivals. [2] Shaw’s early works were often various forms of expanded cinema performance. Other air-structure/mixed-media events by Shaw include 'Disillusion of a Fish Pond' (1967) and 'Corpocinema' (1967).

MovieMovie can also be seen as a predecessor of virtual spaces. "A sensual conjunction of actuality and fiction was achieved through a mediated dematerialisation of their respective boundaries. Such a convergence of architectonic and cinematic space clearly prefigures the modalities of mediated architecture that today are being built in cyberspace." [3]

In 2001, Jeffrey Shaw realised this vision with his work 'conFIGURING the CAVE' (2001). See AEM, p. 177.

Jeffrey Shaw's website <http://www.jeffrey-shaw.net>


[1] Official Jeffrey Shaw Website, MovieMovie, <http://www.jeffrey-shaw.net/html_main/show_work.php3?record_id=11>

[2] Jeffrey Shaw's Biography <http://www.jeffrey-shaw.net/html_titles/titles_biography.php3>

[3] Jeffrey Shaw, "Movies After Film – The Digitally Expanded Cinema" (2002). In: Edward Shanken, ed. Art and Electronic Media. London: Phaidon, 2009: p. 264.

Your Destiny

Your Destiny is an immersive, interactive installation based on tarot cards. When the audience comes into the room, they first see a projection of the backs of the tarot cards on the screen. A camera recognizes the visitor’s movements. When the visitor stands in front of the camera, tarot cards will flip over in the shape of the visitor’s silhouette, revealing their fronts. A pre-recorded voice is heard as the tarot cards are displayed, voicing the symbolic meaning of each revealed card. As the visitor moves and creates new silhouettes, new front-facing tarot cards and voices are revealed.

Through programming, this work responds to the viewer’s movements and form and shows the digital image randomly and vertically within their figure on the screen. It allows visitors various possibilities of interaction through image, animation, and sound. The audience can enjoy colorful and varied visualizations as a result of their movement.

“In this project, the main theme is the use of tarot cards as a visual. My tarot deck has been designed and reinterpreted in a modern photomontage style. For these photomontage tarot cards, I have taken pictures of models to represent the characters in the tarot card deck, and photos of New York for the background.” [1]

The inspiration for this work comes from the book “The Castle of Crossed Destinies” by Italo Calvino [link]. “It is said that we would like to be able to see our destinies. In many parts of the world, tarot cards are used as a tool to see into the future. Even though some parts of our destiny are pre-determined, in my opinion, we can manipulate other parts. Moreover, one person’s destiny is mixed with the destiny of others. We can make a new destiny as we can change the tarot by our own movement.” [1]

The idea of motion tracking using a video camera has been around for a couple of decades, and is researched in a field known as “computer vision”. Historically, Myron Krueger created some of the first camera-based body-interaction works in the design field. He used a computer and a camera to create a real-time relationship between the participants’ movements and the environment. His project Videoplace (1974, see linked entry) coordinated the movement of a graphic element with the actions of the audience. [2]

Your Destiny uses a camera to feed the input into a computer, which in turn processes the image data into cells, detecting and processing the color value of each cell. Each cell is assigned a random tarot card. Differences in the color value of a cell determine whether its card will be flipped or not. The data is sent to the projector and shown as tarot cards on the screen. A similar technique is used in Daniel Rozin’s Wooden Mirror (1999, see linked entry) [3]. The process used for these artworks was already envisioned by Myron Krueger in 1977. In ‘Responsive Environments’ he describes it as follows: “The environments described suggest a new art medium based on commitment to real-time interaction between men and machines. The medium is comprised of sensing [motion tracking], display [projection] and control systems [computer software]. It accepts inputs from or about the participant and then outputs in a way he can recognize as corresponding to his behavior. The relationship between inputs and outputs is arbitrary and variable, allowing the artist to intervene between the participants’ actions and the results perceived.” [4] In the case of Your Destiny, Kwon intervenes by converting the input into cells and assigning tarot cards to them.
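
The cell-based detection described above can be sketched roughly as follows. This is a simplified illustration, not Kwon’s actual implementation: the grid resolution, the brightness threshold and the way cards are assigned are all assumptions.

```python
import random

GRID_W, GRID_H = 8, 6    # assumed grid resolution
THRESHOLD = 40           # assumed colour-difference threshold

# Each cell is assigned a random tarot card once, as the entry describes.
# A standard tarot deck has 78 cards.
random.seed(1)
DECK = ["card_%d" % i for i in range(78)]
cards = [[random.choice(DECK) for _ in range(GRID_W)] for _ in range(GRID_H)]

def cell_value(frame, cx, cy, cell_w, cell_h):
    """Mean brightness of one grid cell of a greyscale frame."""
    total = count = 0
    for y in range(cy * cell_h, (cy + 1) * cell_h):
        for x in range(cx * cell_w, (cx + 1) * cell_w):
            total += frame[y][x]
            count += 1
    return total / count

def flipped_cards(background, frame):
    """Cells whose brightness differs enough from the empty-room
    background fall inside the visitor's silhouette: those cells are
    'flipped', revealing the tarot card assigned to them."""
    h, w = len(frame), len(frame[0])
    cell_w, cell_h = w // GRID_W, h // GRID_H
    revealed = []
    for cy in range(GRID_H):
        for cx in range(GRID_W):
            diff = abs(cell_value(frame, cx, cy, cell_w, cell_h)
                       - cell_value(background, cx, cy, cell_w, cell_h))
            if diff > THRESHOLD:
                revealed.append((cx, cy, cards[cy][cx]))
    return revealed
```

Each revealed card would then be drawn at its cell position by the projector, with its pre-recorded voice-over triggered.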

Jieun Kwon wrote a paper (Your destiny: dynamic interactive installation for digital Gesamtkunstwerk) on her installation and the techniques she used. If you have an ACM Web Account you can download her paper here.

Your Destiny was exhibited at the 404 Festival in 2008.

More information about the artwork can be found here (Artwork #4) and in Kwon’s paper.



[1] Jieun Kwon on 404 Festival <http://www.404festival.com/eng/autores2008.htm> (artwork #4)

[2] Edward Shanken, ed. Art and Electronic Media. London: Phaidon, 2009: p. 44 & p. 166

[3] Edward Shanken, ed. Art and Electronic Media. London: Phaidon, 2009: p. 31

[4] Krueger, excerpted in Edward Shanken, ed. Art and Electronic Media. London: Phaidon, 2009: p. 258

The Tunnel under the Atlantic

The Tunnel under the Atlantic is an interactive art installation by Maurice Benayoun that was first exhibited in September 1995. Visitors were invited to dig, inside memory, a virtual tunnel between the Pompidou Centre in Paris and the Museum of Contemporary Art in Montreal, enabling hundreds of people from both sides to meet.

The Tunnel under the Atlantic is a work of televirtuality that allows users to meet each other and to interact in the virtual space they have created together. A person enters at each end of the virtual tunnel that links the museums, and the two users progressively move forward until they meet. The surface they open up in the process of digging reveals the equivalent of geological strata, converted here into iconographic strata. These discoveries enhance the dialogue between the two participants. Thus, each participant shares a complementary part of the same musical composition.

Technically, it uses a sophisticated “3D Digging Engine” developed by Tristan Lorach, who notes that it “allows you to dig holes everywhere. At the end of the day if you made holes everywhere, the caves start to look really cool. The whole level in which I am flying is made by the user itself and not a level that I loaded. You can create miles and miles of caves without slowing-down the framerate of the engine. It was meant to be multi-players…” See https://www.youtube.com/watch?v=1qrQZrv_02E

This VR artwork exemplifies what Benayoun calls an “architecture of communication.” Exploring the limits of communication in the wake of Hole in Space (1980) by Kit Galloway and Sherrie Rabinowitz, The Tunnel under the Atlantic introduces the concept of a dynamic, semantic shared space.

Visit the artist's website for more information


Hand From Above

‘Hand From Above’ is a public art piece shown on the ‘BBC Big Screen’ in Liverpool. “It encourages us to question our normal routine when we often find ourselves rushing from one destination to another.” [1] When pedestrians walk by, they see themselves on the big screen, where they are tickled, stretched, flicked or removed by a giant hand.

The screen is connected to a CCTV camera, linked to a computer running software that can pick out passers-by based on their proportions and how far apart they are from other people. When the crowd is too big, it resorts to tickling randomly selected people.
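
The selection logic described above might be sketched as follows. This is a guess at the behaviour, not O’Shea’s code: the size bounds, the spacing radius and the crowd cut-off are invented values.

```python
import random

MIN_HEIGHT, MAX_HEIGHT = 80, 260   # assumed on-screen size bounds (pixels)
MIN_SPACING = 120                  # assumed clear radius around a target
CROWD_LIMIT = 12                   # assumed "too big a crowd" cut-off

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def pick_target(people):
    """people: list of dicts with 'pos' (x, y) and 'height' in pixels,
    as detected in the CCTV image. Returns one person to single out."""
    if len(people) > CROWD_LIMIT:
        # Too crowded to isolate anyone: tickle someone at random.
        return random.choice(people)
    for p in people:
        # Plausible human proportions on screen?
        if not MIN_HEIGHT <= p["height"] <= MAX_HEIGHT:
            continue
        # Far enough from everyone else to be singled out?
        if all(distance(p["pos"], q["pos"]) >= MIN_SPACING
               for q in people if q is not p):
            return p
    return None  # nobody suitable this frame
```

The giant hand would then be composited over whichever person this returns, frame by frame.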

 In a certain way this is an augmented reality, especially made to shake people out of their normal routine. As we can see in the video, people clearly react to it; they mostly have fun with it. But it also makes you think; a higher power (in this case a hand from above) can easily wipe you away. Our lives are all very precious to us, yet they’re also very fragile. Maybe we should stop once in a while and think about our lives and the world that surrounds us. This is in line with David Rokeby’s ideas about interactive technologies. He defines an interactive technology as a mirror which provides us with a self-image and which also provides us “with a sense of the relation between this self and the experienced world. This is analogous to our relationship with the universe” [2].   


‘Hand From Above’ investigates how outdoor screens can be used to enhance the feeling of community in a city. There’s even an entire project, with conferences, devoted to this phenomenon, called Urban Screens. Urban Screens defines its goals as follows: “We want to network and sensitise all engaged parties for the possibilities of using the digital infrastructure for contributing to a lively urban society, binding the screens more to the communal context of the space and therefore creating local identity and engagement”. [3]

 ‘Hand From Above’ is an engaging urban screen that playfully transforms its passers-by. It will get people’s attention and temporarily wake them up from their daily routine. 




[1] Chris O’Shea: http://www.chrisoshea.org/projects/hand-from-above/

[2] Rokeby, David. ‘Transforming Mirrors: Subjectivity and Control in Interactive Media’ (1995). In: Edward Shanken, red. Art and Electronic Media. London: Phaidon, 2009: p. 223.

[3] Urban Screens: http://www.urbanscreens.org/




Topshot Helmet

‘Topshot Helmet’ is a device for seeing yourself from above. It is inspired by the so-called ‘GTA view’. GTA (Grand Theft Auto) is a video game in which a virtual camera follows the avatar from a top view. The camera always stays right above the avatar, giving the player a bird’s-eye, godlike view.

 With the ‘Topshot Helmet’ you are able to experience this view in reality. You basically wear a big, white helmet that covers your head. A big helium balloon is mounted with wires onto the helmet, and always floats right above you. At the bottom of the balloon, a camera is positioned that is connected to the helmet. In this way it is possible to see the world, and yourself from above.


Wearing the ‘Topshot Helmet’ gives you an out-of-body experience. Through the helmet your vision is released from your body, suggesting that you can control yourself from above. This is almost godlike. In her article ‘Embodied Virtuality: Or How to Put Bodies Back Into the Picture’, N. Katherine Hayles explains that the dream of transcending the body has always been expressed through certain kinds of spirituality. But today we have new possibilities: “to achieve this apotheosis one does not need spiritual discipline, only a good robot surgeon”. [1] Or in the case of the ‘Topshot Helmet’: maybe one only needs a camera and a helium balloon.

This godlike view is, however, very restricted; a view from above doesn’t reach very far. It is only possible to see things that are in your direct surroundings. Whilst wearing the ‘Topshot Helmet’ you have to adapt yourself to a new way of seeing and navigating, which is more restricted than most of us are used to.   

Also, the design isn’t optimal. Because the helmet is tied to a helium balloon, it is difficult to keep your balance. You can’t easily walk around; a gust of wind, or even a low ceiling, will destroy your experience.


When you wear the ‘Topshot Helmet’ you do get a lot of attention. In the video above we see the artist walking around town with his helmet. This almost looks like some kind of performance; bystanders must be wondering how the artist can navigate his way around. What’s more, the artist sort of looks like a cyborg: half man, half machine. The helmet with the helium balloon also resembles Sputnik, the first Earth-orbiting satellite.

Therefore the artwork doesn’t just simulate a video-game aesthetic; it also reflects on other kinds of electronic views of the world. Thinking about video games, video cameras, and even satellites, we come to realize how much our view of the world is influenced by electronic media and technological inventions. We all live in a hyperreal world.


More information:

Website ‘Topshot Helmet’: http://www.juliusvonbismarck.com/topshot-helmet/index.html

Website Artist: http://www.juliusvonbismarck.com/index.html



[1] Hayles, N. Katherine. ‘Embodied Virtuality: Or How to Put Bodies Back Into the Picture’ (1996). In: Edward Shanken, ed. Art and Electronic Media. London: Phaidon, 2009: p. 261.


World Skin

‘World Skin’ is an immersive, interactive virtual reality installation. It is a CAVE installation featuring a three-dimensional landscape made up of photographs and news pictures from different war zones. Visitors thus enter a universe filled with violence. The audio reproduces the sounds of breathing, because in this world, to breathe is to suffer.

When entering the CAVE, visitors are equipped with cameras and become war tourists. Only one visitor can “drive” using a “wand”; he or she takes on the role of a “bus driver”, but everyone can ask for path alterations.

Visitors can take pictures, but in this virtual reality photography is a weapon of erasure. When you take a picture of a fragment of the virtual world, this fragment disappears from the screen and is replaced by a blank silhouette. Each picture therefore extinguishes a part of the world, and is printed out for the visitors to take home. So taking a picture in this CAVE means literally “taking” something with you and making it your own. You rip off the skin of the world you are visiting, and you can take it with you as a personal trophy.


War is a world filled with pain. Through the media, war can become a public space in which this pain can be exposed. Without this exposure, people from lands without war feel like they cannot get a grasp of the world around them. But can a picture take us closer to reality? Maurice Benayoun seems to suggest the contrary: “One “takes” the picture, and the world “proffers” itself as a theatrical event. The world and the destruction constitute the preferred stage for this drama-tragedy as a play of nature in action.” [1]

We could actually see this artwork-simulation as a critique of the concept of ‘Simulations and Simulacra’ as formulated by Jean Baudrillard. Baudrillard, in writing about simulations, believes that there is no ‘real’, no ‘truth’. He writes: “the simulacrum is never that which conceals the truth–it is the truth which conceals that there is none. The simulacrum is true.” [2] This artwork seems to suggest that there is in fact a reality, a reality that can hurt.


Jeffrey Shaw states in his article ‘Movies After Film – The Digitally Expanded Cinema’ that the holy grail in all forms of art is: “presence, the experience of being in that place that induces a totality of engagement in the aesthetic conceptual construct of the work”. [3] Shaw’s ‘conFiguring the CAVE’ is one of the most important works to be created using the CAVE in the mid-1990s. He suggests that this work was a “totally immersive experience for the viewers”. [4]

‘World Skin’ also uses the CAVE. However, this artwork seems to comment on the notion of ‘immersion’ in a different way. Although the visitors of this virtual reality feel as if they are immersed in this war-environment, they can never know what it’s like to really live in such a universe of violence; as tourists, they can never fully immerse themselves in the everyday realness of the men, women and children who do live in these worlds. Maurice Benayoun explains: “Some things cannot be shared. Among them are the pain and the image of our remembrance. The worlds to be explored here can bring things closer to us – but always simply as metaphors, never as a simulacrum.” [5]




[1] Maurice Benayoun: http://www.benayoun.com/projet.php?id=16

[2] Jean Baudrillard, Selected Writings, ed. Mark Poster (Stanford; Stanford University Press, 1988): p. 166.

[3] Shaw, ‘Movies After Film – The Digitally Expanded Cinema’ (2002). In: Edward Shanken, ed. Art and Electronic Media. London: Phaidon, 2009: p. 264.

[4] Ibid.

[5] Maurice Benayoun: http://www.benayoun.com/projet.php?id=16



Life Writer

“Life Writer consists of an old-style typewriter that evokes the era of analogue text processing. In addition, a normal piece of paper is used as a projection screen, and the position of the projection is always matched to the position of the typewriter roll. When users type on the keys of the typewriter, the resulting letters appear as projected characters on the paper. When users then push the carriage return, the letters on screen transform into small black-and-white artificial-life creatures that appear to float on the paper of the typewriter itself. The creatures are based on genetic algorithms, where text is used as the genetic code that determines the behaviour and movements of the creatures.” [1]

The philosophy behind Life Writer is that of emergent interaction by user interaction: “By connecting the act of typing to the act of creation of life, Life Writer deals with the idea of creating an open-ended artwork where user-creature and creature-creature interaction become essential to the creation of digital life and where an emergent systems of life-like art emerges on the boundaries between analog and digital worlds.” [1]
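
The idea of typed text acting as a genome can be sketched as follows. This is purely illustrative: the real mapping from letters to creature behaviour in Life Writer is not documented here, so the traits below are invented.

```python
def genome_from_text(text):
    """Derive behaviour parameters for a creature from the characters
    typed on the machine: each letter contributes to the 'genes'."""
    codes = [ord(c) for c in text.lower() if c.isalpha()]
    if not codes:
        return None  # no letters, no creature
    return {
        "speed":    1 + sum(codes) % 10,          # how fast it crawls
        "size":     5 + len(codes) % 20,          # body size in pixels
        "appetite": max(codes) - min(codes) + 1,  # letters eaten per tick
        "wiggle":   codes[0] % 7,                 # movement-pattern variant
    }
```

The same line of text always hatches the same creature, while different lines produce visibly different behaviour – the essential property of text as genetic code.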

The interactive virtual-life environment is also present in A-Volve (1994-5), where participants create creatures – in this case fish – by profiling their virtual fish. The fish evolve by surviving the attacks of other fish and procreate, passing along their code profile. The creatures in Life Writer reproduce similarly, but survive by eating the letters the participant sends into the virtual world rather than each other.

In ‘Artificial Life and Interactivity in the Online Project TechnoSphere’ (1996), Jane Prophet discusses her project TechnoSphere as an application of artificial life as a medium. Participants create a virtual creature and then let it develop on its own in the virtual landscape. “For example, a creature can splice digital DNA with another if they are similar, but only if both creatures are more than 50% full of food, otherwise cybersex is out of the question and the search for food takes priority.” [2]

TechnoSphere is also similar to Life Writer in its approach to the evolution of artificial life: “The notion of self-organising artificial life systems which we have used in TechnoSphere depend on a ‘bottom-up’ approach, with behaviour emerging as artificial creatures interact, rather than us imposing a ‘top down’ control on behaviour.” [2]

The only ‘top-down’ control in Life Writer is death. In the complete document, Jane Prophet also states that “Our intention is that users should not be able to interfere with creatures by, for example, killing them” [3]. This is a distinct difference from Life Writer, where the participant has control over both creation and death, by striking the typewriter keys and by using the typewriter’s scrolling mechanism respectively.

[1] Project website: http://www.interface.ufg.ac.at/christa-laurent/WORKS/CONCEPTS/LifeWriterConcept.html

[2] Edward Shanken, ed. Art and Electronic Media. London: Phaidon, 2009: p. 249

[3] Artist website: http://www.janeprophet.com/leonardo1a.html

“Silicon Remembers Carbon” by David Rokeby

Silicon Remembers Carbon (Version 1)

“The central element in Silicon Remembers Carbon is a large video image projected down onto a bed of sand on the floor of the installation space. Visitors’ movements subtly affect the mixing and dissolving of video images and sounds. Each visitor leaves traces which affect the experience of the work for later visitors. The installation presents a fragile illusion, a consensual hallucination, requiring the visitors’ participation for its continuation, through their body movements, a willingness to blur their eyes slightly to hide the scan-lines, and their ability to project depth into the flat image. They are offered a range of possibilities from sustaining the illusion by creating and maintaining distance, to dispelling it by stepping into the illusionary space itself. For the artist, the visitors’ movement through this range of possibilities represents a more important interaction than the direct interaction with the technical system itself.” 1.

Technical information on exactly how this piece works can be found on David Rokeby’s website. To understand the following text, one should know that movement along the side of the projection causes a second image to be projected.

'Live Virtual Shadow' in Version 2

“The new image usually contains shadows or reflections of people along the edge of the clip that is visible. One tends to interpret those reflections and shadows as being generated by people actually in the room, either oneself or others, rather than as being present in the image itself. So this installation is a sort of fake reflecting pool, an inversion of Narcissus’s experience. Whereas Narcissus’s tragedy is that he cannot recognize himself in his reflection, the visitors to the space find themselves identifying with shadows and distorted reflections that have only a circumstantial relation to them. The identification is momentary and elusive. My intention is to play along the boundary of identification.” 1.

In the second version, the technology was adapted so that instead of casting their usual shadows, the audience cast shadows of video. At the same time a previously recorded shadow is projected in this image. More info on the second version can also be found at Rokeby’s website.

The interaction with the artwork that is so important to Rokeby recalls Bruce Nauman’s 1970 piece Live-Taped Video Corridor. That work plays with its audience and disorients the viewer through the relative placement of video camera and monitors. Both use a combination of live and pre-recorded video images. Additionally, Silicon Remembers Carbon constitutes an interplay between projected image and audience that can also be experienced in the work of Studio Azzurro, such as La Camera Astratta.

In an Oxford Journal review of the Silicon Remembers Carbon exhibition, Rokeby is praised by the author Maryleen Deegan of London’s King’s College for converging digital humanity and art. (2.) In his important essay, “Transforming Mirrors: Subjectivity and Control in Interactive Media”, Rokeby describes interactive art as a medium through which we can communicate with ourselves; in other words, it reflects like a mirror. When this is applied in conjunction with Marshall McLuhan’s ‘The medium is the message’, the mirror becomes the message. Reflection makes us evaluate and affirm ourselves, and feel engaged and disembodied at the same time. But above all it raises our consciousness. (AEM 223)

Version 2: the shadow of the person walking by smears into a second image, in which a recorded shadow appears.

1. Source: http://homepage.mac.com/davidrokeby/src.html

2. Source: http://llc.oxfordjournals.org

Embodied Virtuality: Or How to Put Bodies Back Into the Picture, 1996

This is a video I edited from an interview with Dr. Hayles at the 2008 Thomas R. Watson Conference in Louisville, KY

N. Katherine Hayles, professor of literature at Duke University, is interviewed by Stacey Cochran for the Raleigh Television Network program The Artist’s Craft. Directed by Marnie Cooper Priest and Michael Graziano.

Download the complete article here.