Geography of Hidden Faces

Explore the beauty of the world and find faces along the way.

Geography of Hidden Faces applies AI to aerial imagery to “see” faces in the landscape.

At any given moment, dozens of satellite and aircraft sensors look down from above, taking pictures of the rapidly changing landscapes below. These images, stitched together, processed, and transmitted, become maps that ultimately come to shape the spaces and places they represent.

There is an ambivalence to the ways that these mappings constantly re-imagine and re-form the world. In the best cases, aerial images help inform aid and relief efforts after a disaster, expose the locations of fires, and provide evidence of critical environmental changes. All too often, however, aerial images are used to exploit natural resources, displace Native and Indigenous peoples, and justify the killing of innocent people.

How we interpret the “God's eye view”—its privileges, politics, responsibilities, and opportunities—is effectively how we draw the map. And with the unrestricted use of automated, machine learning-based systems for interpreting images, we only further codify our own problems and flaws, as well as our imaginations and politics, onto the world.

Geography of Hidden Faces is a project about exploring and questioning what we see versus what we've trained computers to see. The project provides an interface to renderings of the world from above, offering both a critique and a window into a kind of "algorithmic imagination." By applying facial recognition algorithms to aerial imagery, the project begins training you to see "algorithmically." Just as the algorithms have learned to identify patterns in their training data, longer engagement with this project begins to cue you in to the conditions and environmental factors that produce a face detection in the landscape. By pointing a facial recognition algorithm at data it was never meant to interpret, we are given an opportunity to explore the quirky artifacts these systems produce.

In 2013, Onformative Studio in Germany published their Google Faces project. Their investigation explored the phenomenon of pareidolia—the tendency for humans to see shapes or faces in inanimate or abstract objects and things—in machines. Onformative employed machine learning algorithms designed for face tracking, turning the gaze of the algorithms away from people and onto views of the earth from space.

Geography of Hidden Faces extends and reinterprets Onformative's early work. Here, the application prompts viewers to zoom and pan; with each interaction, they are left gazing into the map, wondering whether the AI sees anything more than what is in view.

Image: a face detected in a mix of urban and natural landscape.

Seeing something that isn't there

This project applies machine learning-based facial recognition algorithms to aerial imagery to find "faces" on Earth's surface. The exercise produces surprising results—faces are detected in landscapes in often unintuitive and sometimes unexplainable ways. What do the algorithms see in those pixels that we don't?

The facial recognition algorithms used in this project are accessed through the face-api.js application programming interface (API) as implemented in ml5.js. At its core, the face detection model was trained on the WIDER FACE dataset: roughly 30,000 images containing approximately 390,000 labeled faces, which were used to create a model capable of demarcating 68 key face points, also known as "face landmarks." When these algorithms are pointed at images of people, the models generally perform well, meaning that they can quickly detect faces and their landmarks. It must be acknowledged, however, that facial recognition algorithms tend to perform worse for women and people of color.

For this project, the minimum confidence threshold for detections was lowered to 0.0001, meaning that nearly anything the facial recognition model might believe to be a face is returned to the viewer.
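To make this concrete, here is a minimal sketch of such a configuration using ml5.js's faceApi wrapper. It is illustrative rather than the project's actual code: the element id, callback structure, and logging are assumptions.

```javascript
// Minimal sketch (not the project's actual code): run ml5.js's faceApi
// over a single aerial image with an intentionally tiny confidence
// threshold, so nearly every candidate "face" is returned.
const detectionOptions = {
  withLandmarks: true,     // also return the 68 face landmark points
  withDescriptors: false,  // descriptors aren't needed just to draw faces
  minConfidence: 0.0001,   // accept nearly anything the model proposes
};

const faceapi = ml5.faceApi(detectionOptions, () => {
  // 'aerial-tile' is an assumed <img> element holding an aerial image.
  const tile = document.getElementById('aerial-tile');
  faceapi.detect(tile, (err, results) => {
    if (err) return console.error(err);
    // Each detection carries a bounding box and 68 landmark positions
    // (following face-api.js result conventions).
    results.forEach((face) => console.log(face.landmarks.positions));
  });
});
```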

Works such as Philipp Schmitt's “A Computer Walks into a Gallery” or “Introspection” play with this idea of purposefully feeding abstract data to machine learning models. The exercise offers a means of interrogating otherwise black-box systems and architectures (the layers of the neural networks). It creates opportunities to question the impressive and opaque mechanics of what Kyle McDonald has aptly described as “programming with examples versus with instructions.” What happens to those pixels when they are mashed up, sorted, and arithmetically pummeled with calculus? I would argue that these interventions surface signals from those layers of computational abstraction in a way that can and must be seen and felt.

Image: a face detected in Portugal.

Putting a face on geography

These notes describe some of my early observations, as well as questions that the exercise of building and using this tool has surfaced.

Some faces look eerily expressive and proportional, quite likely a ghost of the data the models were trained on. Other faces are rendered as mangled scribbles where, for example, the eyebrows might be located unrealistically far from the rest of the face.

Some faces appear to express surprise or sadness, while others appear to be speaking, as if responding to a question. Other faces are so small that it is impossible to read what kinds of expressions they hold. How the algorithm subsets the image feels mysterious. Do faces appear more frequently in urban environments or in natural landscapes? Is there a relationship between detected faces and land cover or elevation?

Scale introduces new patterns as well. In the domain of remote sensing, a general rule of thumb is that the resolution of an image must be twice as fine as the feature one is attempting to resolve; to “see” a car that is 3 meters long, for example, your pixel resolution must be at least 1.5 meters. In the context of “seeing faces in geography,” this remote sensing rule of thumb takes on new meaning.
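As a back-of-the-envelope illustration of that factor-of-two sampling rule (a hypothetical helper, not part of the project):

```javascript
// Rule-of-thumb sketch: to resolve a feature from above, the pixel size
// (ground sample distance) should be at most half the feature's size.
function maxPixelSizeMeters(featureSizeMeters) {
  return featureSizeMeters / 2;
}

console.log(maxPixelSizeMeters(3));   // 1.5 -> a 3 m car needs <= 1.5 m pixels
console.log(maxPixelSizeMeters(100)); // 50  -> a 100 m landform needs <= 50 m pixels
```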

This project highlights how scale, image and feature resolution, and geography are linked. For example, faces detected at one zoom level might not be detected in the same location at a different zoom level; a face is detected based on that specific constellation of pixels, at that specific zoom level, and in that specific moment. The shadows cast by chimneys, the specific colors of the vegetation in that growing season, and the view angle of the satellite or airplane all come together to provide the specific conditions for a face to be detected by these specific models. There is an ephemerality to these faces as seen from above that reflects the changing nature of the places below.
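One way to probe this zoom dependence is to request the same location at several zoom levels and run the detector on each rendering. The sketch below assumes the Mapbox Static Images API that the project builds on, and reuses the faceapi instance from the earlier sketch; the coordinates and token are placeholders.

```javascript
// Sketch: fetch the same location at several zoom levels from the Mapbox
// Static Images API and hand each rendering to the face detector.
// The coordinates and token below are placeholders.
const [lon, lat] = [2.1744, 41.4036]; // e.g., near the Sagrada Família
const token = 'YOUR_MAPBOX_TOKEN';

const urls = [15, 16, 17, 18].map((zoom) =>
  `https://api.mapbox.com/styles/v1/mapbox/satellite-v9/static/` +
  `${lon},${lat},${zoom}/512x512?access_token=${token}`
);

urls.forEach((url) => {
  const img = new Image();
  img.crossOrigin = 'anonymous';
  img.onload = () => faceapi.detect(img, (err, results) => {
    // A face found at one zoom may vanish at the next as the
    // constellation of pixels changes.
    if (!err) console.log(url, results.length, 'face(s) detected');
  });
  img.src = url;
});
```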

Image: a face detected in Gaudí's Sagrada Família in Barcelona.

Beyond faces on the map

Geography of Hidden Faces highlights how we must practice looking at and questioning what we see on maps: we must acknowledge what is and what is not represented, ask who (or what) made the map and why, and consider who wins and loses as a result of what has been mapped. The AI-detected faces on the map remind us that the data collected from satellites and aircraft are being interpreted in myriad ways, some for the better and others for the worse, and more than ever automatically, through opaque algorithms.

By interfacing with these aerial images, my hope is that you gain an appreciation for “reading” maps in a new way. With AI constantly interpreting what is in the field-of-view, each pan and zoom of the map may leave you to question, “does the algorithm see what I see?” And similarly, does such an exercise train you to “see more algorithmically?”

I hope you find yourself getting as lost in the map through this project as I have. Take notes of the things you see, the faces you find, and all the (un)familiar places you encounter—these views won't last forever.

A note on the r,g,b colors: the face landmarks are rendered in red (r), green (g), and blue (b) as a reference to remote sensing artifacts, specifically the multispectral "blurs" (as James Bridle calls them) that occur when a fast-moving object is captured at slightly different moments in different spectral bands.
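A sketch of that rendering idea follows. This is not the project's actual drawing code; the channel offsets and canvas setup are assumptions, with landmark positions following face-api.js conventions.

```javascript
// Sketch (not the project's actual drawing code): stroke a face's 68
// landmark points three times with small horizontal offsets, once per
// color channel, echoing the misregistered multispectral "blur".
function drawLandmarkBlur(ctx, positions) {
  const channels = [
    { color: 'rgba(255, 0, 0, 0.8)', dx: -1 }, // red, nudged left
    { color: 'rgba(0, 255, 0, 0.8)', dx: 0 },  // green, in place
    { color: 'rgba(0, 0, 255, 0.8)', dx: 1 },  // blue, nudged right
  ];
  channels.forEach(({ color, dx }) => {
    ctx.strokeStyle = color;
    ctx.beginPath();
    positions.forEach((p, i) =>
      i === 0 ? ctx.moveTo(p.x + dx, p.y) : ctx.lineTo(p.x + dx, p.y)
    );
    ctx.stroke();
  });
}
```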

Last updated 2019-10.

→ Explore

References

  • Google Faces, Onformative. 2013
  • A Computer Walks into a Gallery, Philipp Schmitt. 2018
  • Introspection, Philipp Schmitt. 2019
  • Weird Intelligence, Kyle McDonald. 2018
  • WIDER FACE: A Face Detection Benchmark. 2017
  • Mapping’s Intelligent Agents, Mattern. 2017
  • Facial Recognition Is Accurate, if You’re a White Guy, NYTimes. 2018
  • An Introduction to Critical Cartography, Crampton. 2005
  • The War Lawyers: U.S., Israel and the Spaces of Targeting, Jones. 2017
  • Additional Inspiration: Aerial Bold, Groß & Lee. 2014. Conversations with Benedikt Groß.
  • This project was built during an artist residency at NYU's Interactive Telecommunications Program. Thank you to ITP and the ITP community for all your generous support. Special thanks to Andy Anzollitto, Benedikt Groß, Philipp Schmitt, & Sarah B.
  • Built with: ml5.js (face-api.js), MapboxGL.js, & Mapbox Static Map API.