Earlier this year, dancer Mor Mendel took the stage with Boston Dynamics’ Spot at 7×7, an annual symposium organized by the born-digital art and culture organization Rhizome, to perform a work choreographed by multidisciplinary artist Miriam Simun. Mendel ran, danced and romped with the robot (piloted by Hannah Rossi) to a soundtrack by Igor Tkachenko and DJ Dede in front of an audience eager to witness collaboration between artists and scientists in action. Spot, as one might expect, stole the show, but there would have been no show without Simun, who conceived of the piece with questions, not answers, in mind. Specifically: “What kind of relationships with machines do we want? What will we get? What can we dream of?”
Simun’s dreams encompass everything from cheese made from human milk to technology that captures the scent of endangered flowers to bees and their conspicuous absence. The artist, who works in video, installation, painting, performance and communal sensorial experiences, trained in sociology and carries that background into a role she calls artist-as-fieldworker—“conducting first-person research with diverse places and communities: from scientific laboratories to rewilded forests, from freedivers to human pollinators” that “dictates the form of the final artworks,” as per her bio.
The result is art not only informed by science but that informs science, and Simun’s explorations of what is possible have been exhibited around the world at venues including Gropius Bau in Berlin, Montreal’s Momenta Biennale, the New Museum in New York, the Himalayas Museum in Shanghai, the MIT List Visual Arts Center and Colombia’s Bogotá Museum of Modern Art. Following 7×7, Observer connected with the artist to discuss her practice, artificial intelligence and what it was like to choreograph for a robot.
Why did you decide to engage so deeply with science and technology in your practice?
I have always worked with science and technology. I am fascinated by science and the natural world. I am also very interested in how we decide we know things—and in the dominant Western culture, science holds the highest authority in the construction of knowledge, and thus in defining reality. This makes it a particularly interesting subject: I work not only with the findings of science and what they tell us about our world but also with the social function we as humans have ascribed to it as reality maker—and what this tells us about ourselves.
Technology is interesting to work with because it is evolving rapidly, and changing how we live and who we are in the process. The technology itself is interesting but what I’m more concerned with in my practice is its entanglement with social, political and ecological systems. How do we decide what to build? What to adopt? What does that say about us and the world we are building for ourselves? How do those decisions get made and who does and doesn’t get a seat at the table? What forms of knowledge and value are privileged?
What was it like working with a robot at the 7×7 event—had you done that before?
It was my first time working with a robot. It was fascinating, unnerving and exciting. It’s also really important to point out that I was not just dancing with Spot, the robot, but also with Hannah Rossi, the robot’s handler, carer and operator; with David Robert, the director of Human Robot Interaction; and with the entire team at Boston Dynamics, responsible for how the robot functions and how it is presented in the world.
Do you think people experience artworks rooted in technology differently than traditional artworks?
Making art with a new technology as spectacular as the robot Spot is a challenge—the technology itself is such an amazing feat, such a spectacle, that it can be hard to compete, to have the voice of the artwork still be heard above the (quite loud!) stomping of the robot feet. Rhizome and Hyundai Art Lab granted me this amazing privilege to be among the first people to get to spend time with this technology, move with it, and think about what it will mean for humans to live with such machines in the future, in our daily lives.
I hope the performance I created, as danced by Mor Mendel, and featuring the music of Igor Tkachenko and DJ Dede, enabled the audience to gain a new and different perspective on the adoption of robots in our daily lives. How are these robots being programmed to behave? To interact with us? To interact with their surroundings? Will they be built to accommodate us, or will we need to accommodate them? How will we learn to predict their behavior, to see what they see? What kind of relationships with machines do we want, will we get, can we dream of?
Is A.I. a threat to art, a tool for art or potentially both? What about other technologies?
A.I. is a tool. Like every tool, it enables some things while making other things more difficult. As Marshall McLuhan wrote about technology decades ago, “every extension is also a mutilation.” We need to think carefully about how we build and deploy A.I.—not only about what it enables but also what it obscures or takes away. Something I am concerned about with A.I. is the data we are training it on. If this so-called intelligence is trained only on a narrow set of past data (specific data sources that are largely corporate, largely English and largely easy to scrape, meaning created or digitized in the last few decades), then there is so much missing from what this A.I. can “know.”
As A.I. is being tasked with making more and more predictions based on a relatively narrow view of the recent past, are we canceling the possibility for a new, more diverse, more imaginative future? At the same time, it’s a totally seductive tool and super fun to play with in the process of making art, and I love that people will always find ways to bend, break and come up with unimagined uses for technology. I have faith in artists especially.
The other question that is really important to me is how we define “intelligence,” especially in relation to A.I. A question I asked during my [7×7] performance is: what would happen if we defined intelligence less by how much someone or something knows and more by how well they react to unexpected, ambiguous and uncertain situations? If that were the metric by which we defined intelligence, how might we build our robots and our A.I. differently?