WORDS: HELENE FURJÁN & LEE NENTWIG | IMAGES: ROSA MENKMAN
In the 1990s Kodak created Kodacolor Gold film, a product the company championed for its ability to capture diverse skin tones more sensitively. Until that point, Kodak had made little effort to acknowledge or accommodate non-white consumers, nor had it tested its film against darker skin. In fact, it was complaints from chocolate and wood-product manufacturers, whose types of chocolate and grains of wood Kodak’s original color film failed to distinguish, that gave the company the impetus to develop Kodacolor Gold. Nonetheless, the product’s accompanying "True Color" marketing campaign presented racial inclusivity as the motivation for the new film stock’s more dynamic range. Each television ad focused on capturing the important life moments of a young black child, often in contrast with surrounding white faces.
“Resolutions inform both machine vision and human ways of perception. They are the material of everyday life, ubiquitous.”
A film stock’s color balance is determined by tests made on what are known as “Shirley” cards, taking their name from one of the first color test-strip-card models in the 1940s. Until the late 1980s, these reference cards always showed a Caucasian woman wearing colorful, high-contrast clothing. There were numerous “Shirley” models, but the test cards were never designed to accommodate variation: different skin tones require different hue swatches and respond differently to reflection and luminosity.
Yet, for more than 40 years, skin-color “balance” in still photography was engineered to a racist standard in which white women set the “ideal”. In motion picture testing, producers also hired white women as test models. Nicknamed “China Girls”, these models wore makeup that made them appear as porcelain mannequins on screen. Red-haired model Marie McNamara was famous in the 1950s for calibrating NBC television cameras; Carole Hersee was the well-known face of Test Card F (and later J, W, and X) for the BBC from 1967 to 1998.
While an unsuspecting public may believe modern image-making devices provide direct representations of reality, color test cards remain fundamental to shaping our visual world. Chemicals, machines, and algorithms only capture what they are tested or programmed to “see”. Digital imaging still relies on test images, which evaluate processing, compression, rendering, display quality, or resolution. Behind White Shadows, a recent exhibition by researcher and artist Rosa Menkman, surveyed behaviors of modern image processing technologies that reveal the discriminatory biases still embedded in our visual apparatus.
“A large set of protocols intervene in the processes of saving an image of a face to memory, which cater to techno-conventional, political, and historically biased settings.”
Like fellow artists Phillip Stearns and Adam Ferriss, Menkman comes from the glitch scene. “Glitch first came into my life in 2005,” she notes in her book, The Glitch Moment(um), “when I visited the World-Wide Wrong exhibition, a retrospective of the Dutch/Belgian artist collective JODI (Joan Heemskerk and Dirk Paesmans) at MonteVideo/Time Based Arts in Amsterdam.” Menkman was excited by JODI’s critique of digital culture and technology, at a time when the notion of glitch art was “just crossing over from sound culture, and leaking into visual art cultures only sporadically.” Her “taste for glitch, and for its potential to interrogate conventions through crashes, bugs, errors and viruses,” was spawned by JODI’s work, which amplified disruptive miscommunications and failures to investigate and subvert the conventions of the Internet, computer programs, and video games. In 2007, she began collaborating with the musician Goto80 (Anders Carlsson), who was exploiting bugs in the Commodore 64 sound chip (the SID chip) to generate noise artifacts; Menkman developed visual equivalents.
Glitch art is now mainstream, a recognized visual language, and part of popular culture. Every medium has its glitches and imperfections that can be exploited by artists, and that confound the expectations of the user. But the corollary for Menkman is the influx of people into the genre who lack a critical viewpoint, and the appropriation of glitch effects as mere fashion and style. For this reason, Menkman relocated her practice of glitch art to its source in resolution studies. She seeks to uncover the ways in which resolutions inform both machine vision and human perception, as well as the compromises made within the resolution standards set for our media. Menkman scrutinizes the ways in which resolution standards determine what is seen and unseen in the digital realm, understanding that resolution settings are never neutral but carry historical, economic, and political ideologies.
Similar to Phillip Stearns' view of technological objects as embodiments of the will of a society at large, Menkman sees resolution settings as regulations set in place to standardize the behavior of technologies, decided upon by actors with inherent socio-political biases and motivations, favoring certain technological possibilities over others. She is interested in who sets these standards, why, and with what impacts; and in how these standards confine information to specific platforms, preventing the syphoning of data from one platform to the next. To challenge these standards, Menkman captures noise artifacts arising from visual accidents in digital and analogue media systems. These artifacts, and the stories behind them, provide important insights while presenting traces of alternative, unconsidered resolution possibilities.
Behind White Shadows, Menkman’s installation created for Brooklyn-based Transfer Gallery in 2017, reveals ways in which resolutions inform machine vision and consequently affect modes of human perception. The central piece was called DCT:SYPHONING. The 1000000th (64th) interval, a VR installation that pulled viewers along a fictional journey through the historical progression of image complexities. DCT:SYPHONING is a modern interpretation of the satirical novella Flatland: A Romance of Many Dimensions, written by Edwin A. Abbott in 1884. Abbott’s original work tells the story of a fictional two-dimensional world occupied by geometric figures only able to think in terms of length and width, within which women are line-segments and men are polygons.
The narrator of Flatland, A. Square, has a dream in which he visits a one-dimensional world called Lineland, inhabited by citizens unable to see anything beyond the level of points on a line. It is, therefore, an impossible task for A. Square to convince the realm's monarch of a second dimension. In the end, the monarch of Lineland attempts to kill A. Square for his unfathomable claims. A. Square is later visited by a sphere from a three-dimensional world beyond his own, and is unable to see the sphere as anything other than a circle. The story not only illustrates how difficult it is for the reality of alternative dimensions to be understood, it asks readers to question the limitations of our own assumptions, and envision worlds beyond the constraints of given circumstances.
“The only way to tell these abstract stories is to give them a personal touch, to make them a metaphor to real life, because that is what people finally understand: being human, and the issues that come with it. I try to take these algorithms as a metaphor for emotions, feelings, experiences, or fears.”
While Abbott’s writing was a critique of social hierarchies in 19th-century Britain, Menkman’s VR story illustrates the contemporary dangers and disputes surrounding resolution. She develops two DCTs (Discrete Cosine Transforms), “Senior” and “Junior,” who travel through the different ecologies of image-field complexity to examine the biases inherent to, but concealed within, the DCT standard. A mathematical technique in use since 1973, the DCT became widely implemented in 1992, when the JPEG image compression technology adopted it as a core component. The DCT describes image data through a finite set of 64 patterns, assembled into macroblocks, the ‘characters’ making up a JPEG image, with luma and chroma values (light and color) added as ‘intonation.’
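The transform that Senior and Junior personify can be sketched in a few lines. Below is a minimal, illustrative implementation of the 2D DCT-II on an 8x8 block (the block size and normalization follow the JPEG convention; this is a sketch, not a JPEG encoder):

```python
import math

N = 8  # JPEG operates on 8x8 blocks; each block is rebuilt from 64 cosine patterns

def alpha(k):
    # Normalization factor for the DCT-II basis
    return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)

def dct2(block):
    """2D DCT-II of an 8x8 block: 64 coefficients, one per cosine pattern."""
    return [[
        alpha(u) * alpha(v) * sum(
            block[x][y]
            * math.cos((2 * x + 1) * u * math.pi / (2 * N))
            * math.cos((2 * y + 1) * v * math.pi / (2 * N))
            for x in range(N) for y in range(N))
        for v in range(N)] for u in range(N)]

# A flat gray block concentrates all its energy in the first ("DC") coefficient;
# every other coefficient is approximately zero.
flat = [[128] * N for _ in range(N)]
coeffs = dct2(flat)
print(round(coeffs[0][0]))  # DC term = 8 * 128 = 1024
```

Compression then follows from the fact that for most photographic blocks, only a handful of these 64 coefficients matter; the rest can be coarsely quantized or dropped.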
DCT:SYPHONING is based on the open-source Syphon software developed by Tom Butterworth and Anton Marini, which provides an ecosystem for real-time frame-sharing between applications and new-media development environments. Running their first Syphon together, DCT Senior introduces DCT Junior to simulated realms representing different levels of compression, from dither (the illusion of color depth by approximating an unavailable color from a mixture of other colors), to lines, to macroblocks (the realm in which they normally resonate), to the ‘future’ realms of wavelets and vectors (compression transforms).
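The ‘dither’ realm the two characters pass through can be illustrated with a toy example. This is a minimal sketch of ordered (Bayer) dithering, one common way to fake intermediate tones with only two colors; the 2x2 threshold map is the smallest standard Bayer matrix:

```python
# Ordered (Bayer) dithering: simulate gray levels using only black (0)
# and white (1) pixels by comparing each pixel against a tiled threshold map.
BAYER_2X2 = [[0, 2],
             [3, 1]]  # threshold map, values 0..3

def dither(gray_rows):
    """Map 0-255 grayscale values to 0/1 using a 2x2 Bayer matrix."""
    return [[
        1 if g / 255 > (BAYER_2X2[y % 2][x % 2] + 0.5) / 4 else 0
        for x, g in enumerate(row)]
        for y, row in enumerate(gray_rows)]

# A mid-gray region comes out as a checkerboard: half the pixels on, half off,
# which the eye averages back into gray.
mid = [[128] * 4 for _ in range(4)]
result = dither(mid)
print(result)
```

Larger Bayer matrices (4x4, 8x8) give more apparent gray levels at the cost of a coarser repeating texture.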
Menkman aims to create an access point for new audiences to engage with the obfuscated processes that guide the creation of technology, to reveal the very real ethical dilemmas inherent in them. Take, for example, her essay The White Shadows of Image Processing: Shirley, Lena, Jennifer and the Angel of History. She tells the true story of Lena Söderberg, a Playboy model whose photo was widely used as a standard test image for JPEG image processing.
In 1973, a team at USC’s Signal and Image Processing Institute was assembling test images for image-compression research, work that would later feed into the DCT compression at the core of the JPEG format. They wanted to replace existing test images with a glossy photo that had dynamic range. As the engineers contemplated which image to use, a colleague happened to walk into the room with a copy of Playboy in hand. The team scanned the issue’s centerfold photo of Söderberg. Because they had chosen to scan a magazine page instead of a photographic print, dot-matrix imaging distortions stretched Lena’s figure thinner on screen than on the page.
Scanners were still scarce and expensive at the time, so Lena’s image became a standard reference for imaging technologists testing processing algorithms. The image was among the first uploaded to ARPANET, the network that laid the technical foundation for the Internet. Lena became one of the most used pictures in image-processing research, and the first female figure to go viral online.
“These images need to lose their elusive power. The stories of standardization belong in high-school textbooks, and the possible violence of standardization should be studied in any curriculum. By illuminating the histories of standardization, we will also expose its white shadows.”
But what if the choice of Lena’s image was deliberate, like that of John Knoll’s 1987 photo taken of his then-girlfriend, Jennifer, sitting topless on a beach in Bora Bora? That photo, which Knoll nicknamed “Jennifer In Paradise”, was chosen as the demo-image for Photoshop. It was used to showcase how the software could help image editors mold more “perfect” depictions of reality.
Are racial and gender biases embedded in JPEG compression code? How well do these standard settings function when capturing other kinds of color complexity? Do the technologies that make things visible to us also make certain things invisible? Menkman raises such questions as she points to image compression protocols, such as the scaling, reordering, decomposing, and reconstituting of image data, which favor certain affordances shaped by biased influences. She contemplates the consequences of these decisions and the ways in which the people behind them (e.g. white male coders in Silicon Valley) cast their shadows over these technologies. In her essay she asks:
What would it have meant for the standardization of digital image compression if the image chosen for the test card would have been the first African-American Playboy centerfold Jennifer Jackson (March 1965), or if the 512 x 512 pixel image had instead featured the image of Grace Murray Hopper?
- Rosa Menkman, The White Shadows of Image Processing: Shirley, Lena, Jennifer and the Angel of History (2017)
Menkman’s own image has been the subject of viral appropriation. In 2010, for A Vernacular of File Formats, she used her own face to test the internal limits of image compression protocols. The project used a series of corrupted self-portraits to illustrate the language of compression algorithms. She did so by covering her hair and face in white makeup to create an excessively Caucasian, “porcelain” portrait. Then she manipulated that image via different compression languages and a series of introduced file errors, creating visual disturbances that allowed the normally invisible compression language to present itself on the surface of the image. A Vernacular of File Formats tested how image compression algorithms fell apart—what their internal limits and errors were—as a way of demonstrating that the technologies that make things visible to us also make certain things invisible to us. Ironically, the photoshoot itself created visual disturbances: an allergic reaction to the makeup hurt Menkman’s eyes, causing temporary loss of sight.
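The kind of error-introduction A Vernacular of File Formats relies on is often called databending: flipping bytes in a compressed file so that the decoder’s assumptions visibly break. A minimal sketch of the idea (the 512-byte header guess, the error count, and the filename are my assumptions, not part of Menkman’s actual process):

```python
import random

def databend(data: bytes, n_errors: int = 5, seed: int = 0) -> bytes:
    """Flip random bytes in compressed image data, skipping a crude
    header region so the file stays openable while the compression
    language surfaces as visual disturbance."""
    rng = random.Random(seed)
    out = bytearray(data)
    header = 512  # rough guess; a real tool would parse the JPEG markers
    for _ in range(n_errors):
        pos = rng.randrange(header, len(out))
        out[pos] ^= rng.randrange(1, 256)  # XOR with nonzero: guaranteed change
    return bytes(out)

# Stand-in for real image bytes; in practice you would read a JPEG file,
# e.g. databend(open("portrait.jpg", "rb").read())
sample = bytes(range(256)) * 8
bent = databend(sample)
```

Because JPEG entropy-codes its DCT coefficients, a single flipped byte can desynchronize the decoder and smear blocks, shift colors, or repeat stripes for the rest of the image, which is exactly the surfacing of the format’s internal language the project explores.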
The project exceeded its thesis, like a mild virus mutating into a pandemic. To begin with, images from the project were appropriated out of context, used without Menkman’s approval. Images from the series ended up on the covers of books and magazines. The German DJ Phon.o used one for the cover of his vinyl, Fractions. A rapper, Yung Joey, photoshopped his face into another one of the images. Another was used in a poster campaign for a Valencian festival. Two were featured as icons for glitch software apps on smartphones. And one appeared in a sponsorship campaign for a Hollywood movie about a woman being stalked. Perhaps most ironic (and sinister) of all was Kevin Benisvy’s description of Menkman’s ghostly-white, awkwardly blind self-portrait. In The Queer Identity and Glitch: Deconstructing Transparency, he described the image series as appearing “with an almost ‘come hither’ expression, as if caught by surprise, having an intimate moment in a Playboy erotic fiction.” Menkman saw how she herself had become complicit in the long history of using white women’s faces as the standard for testing compression.
For Menkman, the most important question for Resolution Studies to ask is “what does it compromise?” A resolution—or rather the resolving—of an image means more than just a superficial setting of width and height, or frames per second. Resolutions shape the material of everyday life in a pervasive fashion, a network of protocols inside protocols, sensitive to some things and not others. The more complex and higher definition an image processing technology is, the more actors it entails, and the more these actors and their inherent complexities are positioned beyond our awareness, beyond the fold of the everyday settings of an interface, and the more naturalized the images may appear. Unknowingly, both user and audience suffer from technological hyperopia, fixated on innovation, but blind to fundamental processes at work and compromises at stake.
In 2015, Menkman started the institutions for Resolution Disputes (iRD), a (virtual) space to share knowledge and awareness regarding the biases and compromises hidden in resolution standards and compression algorithms. The project emerged out of her doctoral studies on the ecology of compression complexities. After a grant to write her book on this topic was suddenly revoked, she was left without work and fed up with institutional authority.
Soon after the withdrawal of her grant, the first “Crypto-Design Challenge” was announced by the very institute that had let her go. Menkman decided to enter. She created a language out of DCT compression formats designed to make use of things we are conditioned not to see or read. The legibility of an encrypted message does not depend only on the complexity of the encryption algorithm, but also on the placement of the message’s data. If an image is compressed correctly, its macroblocks become invisible. Keeping this in mind, Menkman developed “DCT”, a font that appropriates the algorithmic aesthetics of JPEG macroblocks to mask a secret message as error. The encrypted message, hidden on the surface of the image, is only legible to those with decryption keys (a physical “patch” or fabric badge with the DCT key). The institute had no idea that the project was a critical response to the loss of her fellowship and, in an ironic turn of events, she won a shared first prize. Menkman’s prize, a new laptop computer, was small recompense for what she had lost.
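The logic of hiding a message in plain sight as macroblock noise can be loosely sketched. This is an illustrative substitution scheme of my own, not Menkman’s actual encoding: each character of the hidden message selects one of 64 block patterns, so to anyone without the key the message reads as a field of JPEG-like compression error.

```python
# Hypothetical key: which pattern index stands for which character.
# Menkman's real DCT font maps glyphs onto JPEG-macroblock aesthetics;
# here we only model the indexing step, not the rendering of the blocks.
KEY = "abcdefghijklmnopqrstuvwxyz .,!?"

def encode(message: str) -> list[int]:
    """Turn characters into macroblock pattern indices (each < 64)."""
    return [KEY.index(c) for c in message.lower()]

def decode(indices: list[int]) -> str:
    """Recover the message; only possible for holders of KEY."""
    return "".join(KEY[i] for i in indices)

blocks = encode("a secret in plain sight")
print(decode(blocks))
```

The security here comes less from the mapping itself than from the camouflage: a grid of such blocks is indistinguishable from ordinary compression artifacts unless you know to read it.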
iRD was born out of anger towards institutions. As Menkman explains, “If you lose your job at an institution the best way to take revenge is to start multiple institutions yourself.” Even though the iRD mimics an institute, it multiplexes the term institution, revisiting its usage in the late 1970s by Joseph Goguen and Rod Burstall. Goguen and Burstall formulated the term institution as a “more compound framework” that dealt with the growing complexities at stake when connecting different logical systems (such as databases and programming languages) within computer science. While these institutions were put in place to connect different logical systems, they were not logical themselves. With the iRD, Menkman developed a fluid concept of “institution” that people can join, and which gives certifications (the “patch” with the DCT key), but does not operate formally as an institution. She encrypted five statements critiquing institutions into the five “institutions” making up the iRD.
“Technological standards have compiled into resolution clusters; media platforms that form resolutions like tablelands, flanked by steep cliffs and precipices looking out over obscure, incremental abysses that seem to harbor a mist of unsupported, obsolete norms. The platforms of resolution now organize perspective. They are the legitimizers of both inclusion and exclusion of what can not be seen or what should be done, while ‘other’ possible resolutions become more and more obscure.”
- Rosa Menkman, Resolution Theory (2015)
The iRD is an anti-protocological institute, an institute for anti-utopic, obfuscated, or dysfunctional resolutions, but it is also about finding a productive, practice-based approach to critiquing the apparent neutrality of institutions, standardizations, and the protocols embedded in our technologies. In doing so, the “disputes” at the heart of Menkman’s work keep the stakes at play firmly in perspective, revealing compromises and biases, and keeping critical resistance alive, “within the ever growing digital territories.”
“I got stuck in these algorithms and technologies, I got stuck in life, and they got stuck together somehow.”
The effort to uncover and critique human biases behind compression algorithms, file standards, and software packages that enable histories of discrimination to thread through new forms of visual technology is vital. As the role that intelligent systems play within our society continues to expand, patterns of discrimination must be eliminated from the machine-learning algorithms which inform their decision-making processes. While solutions to these complex issues are not simple, critical dialogue is key.
Artificial intelligence, virtual reality, and other emerging technologies will all continue to reflect the values of their creators. Computer vision cameras now enable these systems to gather visual data, which they use to analyze their environments. A system’s outputs are based on the information that it is provided, but data is never neutral. If A.I. systems are developed based on a culture of white skin-tone standards, they may not consider non-white faces fairly. Any technology that is engineered will mirror the biases of its maker. Therefore, inclusivity matters. Relentless attention must be given to the obfuscated protocols set in place for these technologies to demand accountability, fairness, and equity.