Los Angeles-based Adam Ferriss's art installation from June 2018, Likeness, takes one person's face and transforms it into another. It "reimagines face-filtering and remolds people's appearances in real time." [1] Likeness, curated by Alex Czetwertynski, a digital artist and curator, was produced for Google IO 2018 in the Museum of Developer Art and commissioned by UCLA Conditional Studio.

Ferriss began his installation by presenting the computer with two sets of images: generic photographs of people, and what Ferriss calls "label maps" that "identify and highlight the different facial features, like eyes, eyebrows, jawline, mouth, and nose, present in the photographs." [2] The computer then juxtaposes the two sets of images and makes connections between the labels and photos; Ferriss states that this is called training, and it can take hours or days at a time. His computer trained for thirty to forty hours over the span of a week.
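The source does not say which model Ferriss used, so the following is only a drastically simplified Python sketch of the paired-training idea he describes: the system sees label maps alongside photographs and learns which pixel appearances go with which facial-feature labels. Here that "learning" is reduced to computing an average color per label; the labels, palette, and toy dataset are all invented for illustration, and a real system would be a trained neural network.

```python
import numpy as np

# Toy "dataset": 4 paired examples. Each pixel carries a facial-feature
# label (0 = skin, 1 = eye, 2 = mouth) and an RGB color from the "photo".
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=(4, 8, 8))       # 4 label maps, 8x8 pixels
palette = np.array([[200, 160, 140],              # illustrative true colors
                    [40, 40, 60],
                    [170, 60, 70]], dtype=float)
photos = np.empty((4, 8, 8, 3))
for k in range(3):
    n = int((labels == k).sum())
    # Photo pixels are the feature's color plus a little noise.
    photos[labels == k] = palette[k] + rng.normal(0, 5, size=(n, 3))

# "Training": learn the appearance associated with each feature label
# by averaging the photo pixels that fall under that label.
learned = np.stack([photos[labels == k].mean(axis=0) for k in range(3)])
```

The real training Ferriss describes is far richer (it learns shape and texture, not just average color), which is why it takes hours or days rather than milliseconds.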

Once training is finalized, Ferriss presents "a label map to the computer, which then alters and reshapes the face." [3] The final result can change any characteristic, including gender or skin color, or layer attributes on top of one another, as pictured. Likeness was interactive: visitors could stand in front of a camera and pose for the AI face generator, displayed on an eight-by-ten-foot LED wall.
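Inference in this setup can be sketched the same way, again under purely illustrative assumptions: a hypothetical learned lookup stands in for the trained network, and presenting a new label map yields a rendered image with one learned appearance per labeled region.

```python
import numpy as np

# Hypothetical result of training: a lookup from facial-feature label
# (0 = skin, 1 = eye, 2 = mouth) to a learned RGB appearance.
learned = np.array([[200, 160, 140],
                    [40, 40, 60],
                    [170, 60, 70]], dtype=float)

# A brand-new label map "presented to the computer".
new_map = np.array([[0, 0, 1],
                    [0, 2, 2],
                    [0, 0, 0]])

# Inference: paint each region with its learned appearance.
rendered = learned[new_map]    # shape (3, 3, 3): a tiny RGB image
```

Editing the label map (moving an eye region, widening the mouth) changes the output image accordingly, which is the mechanism behind the real-time reshaping the visitors saw.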

Ferriss began coding seven years ago during his undergraduate photography program at the Maryland Institute College of Art, where he aspired to manipulate photographs. After taking a new-media class and learning JavaScript and rudimentary coding, he began experimenting with image manipulation, and his curiosity led him to where he is today. His "driving principle is to find new ways to interpret, distort, and redraw images and photos." [4]




Excellent entry. Interesting work chosen, well-written, good use of media and layout. It would be useful to add a comparison with a work in AEM that seems similar, for example Tim Hawkinson's Emoter (that's in a different section, so you probably would not have been aware of it). But maybe that could be included for the midterm.