In Conversation with NonCoreProjector collective

NonCoreProjector is a collective of visual artists, technologists, scientists, and musicians experimenting with physical, biological, conceptual, and political data systems, along with human/AI symbiosis. In their projects they explore consequential relationships between spoken and written language and multisensory, visceral experience. Their project Rec Lobe TV is currently on view at the University of Wyoming Art Museum through December 23, 2022. The NCP collective members – Jack Colton, Elias Jarzombek, John O’Connor, Rollo Carpenter, and Nat Clark – are in conversation with Art Spiel about their projects and collaborative work.
You are a collective of visual artists, technologists, scientists, and musicians. Tell me about the genesis of your collective – how did you come together and how does your collaboration work?
Back in 2011, John heard Rollo discussing his chatbot, Cleverbot, on the Radiolab podcast. John had been working with AI-generated language in his drawings but was curious about working with it in other ways, so he emailed the generic Cleverbot address on a whim, describing a project he envisioned in which Cleverbot would speak to itself for a month, nonstop, 24 hours a day. Rollo responded and was interested in collaborating, but said that allowing Cleverbot to speak to itself might have unexpected consequences. This idea was the start of our collective’s work and formed the basis for our first exhibition at The Boiler in 2017, called Verbolect.
Verbolect explored relationships between AI and human language – words as the evidence of what’s happening in the human mind, and AI language as evidence of how the human mind has animated machines. Language is the tangible evidence of how beings think, be they human or machine. The installation also drew on pop culture’s obsession with an intelligence that surpasses ours and the implications of a Frankenstein-like system that we create and initially understand, but which grows unpredictably and eventually overtakes us.
While working on Verbolect, Rollo and John wanted to more fully embody the bot’s words. They asked Jack Colton, then at Sarah Lawrence College, to join the group. Jack’s projection and programming works were widely admired at SLC. Then Jack invited Elias Jarzombek to join, a close friend and collaborator who was then studying computer science at Tufts. Elias has programmed most of our work and just got his MPS from NYU’s Interactive Telecommunications Program. For this most recent project at the museum, which is more complex, we invited Nat Clark to join the group. He and John were in high school art classes and a band together. Nat’s a brilliant programmer and sound artist, among many other things.
Moving forward, we see NonCoreProjector as a collective that will continually evolve in sync with our projects. We work like a band, each playing a different instrument. Or like a film production, where we all take on different roles in order to see a project through.


In your current exhibition, Rec Lobe TV, you seem to explore how media transforms experience. Can you tell me more about this exhibition? What will we see there?
Our new project features relationships between aesthetics – of information, abstraction, and representation – through sound, projection, and print. Visitors will see the language and information of the project’s structure, along with immersive, sensory projections of light, color, video, and sound. The projections are cinematic in scale while other elements, like our laptop, scanner, printer, tables and chairs, are meant to elicit the feeling we have while surfing the internet – like what we do when we’re alone with our computers. We like positioning visitors in between these very different spaces, confusing their familiarities.
In terms of the progression of the piece, some moments will only occur once, even if no one is there to witness them. We set up systems in our work and let them run, giving up control. This lets us fully immerse ourselves in the structures (psychological, political, scientific, linguistic) that we’re investigating. We want the systems we initiate to act on their own, to grow and change in unpredictable ways, and to present experiential moments that surprise or even scare us. Below is an outline we’ve written of the project’s sequence. Hopefully it gives you an idea of how our Rube Goldberg-like system will unfold.
Starting each day, RLTV listens to local police scanner conversations, transcribing what it hears. Somewhere in Laramie, an event occurs, independent of language. This physical, emotional, or psychological event is then described via language and communicated, which begins our sequence. RLTV searches this language across local news sources, then aggregates various state news outlets where similar language has been used. It then searches for the headline in the national media that is most consistent with the local one. This headline is searched on YouTube, and the resulting videos and sounds are altered, played, and projected. The same headline is fed into Cleverbot, and the AI responds, attempting to make sense of the news. Each step of this process is positioned on our projected graph, which analyzes the emotional potency and divergence (confusion between incoming emotion and outgoing reaction) of all words spoken and written. This cycle repeats daily, tracking the mutation of the original event from beginning to end. In short, a hyperlocal event explodes out into the world of the internet, AI, and social media, where time and space are as malleable as the language used to describe them.
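(A note for readers who think in code: below is a minimal Python sketch of how one pass through that daily loop might be wired together. It is not the collective’s actual software; every helper here, from transcribe_scanner to ask_cleverbot, is a hypothetical placeholder standing in for the real scanner transcription, news and YouTube searches, and the Cleverbot exchange.)

```python
# A hypothetical, simplified sketch of one pass through the RLTV daily loop.
# Every helper below is a placeholder: in the installation these steps would
# involve real police-scanner transcription, news and YouTube searches, and
# a live exchange with Cleverbot.

from dataclasses import dataclass, field


@dataclass
class CycleRecord:
    """Everything produced by one daily cycle, ready to plot on the graph."""
    scanner_text: str
    local_headline: str = ""
    national_headline: str = ""
    video_ids: list = field(default_factory=list)
    bot_reply: str = ""


def transcribe_scanner() -> str:
    # Placeholder: would transcribe a stretch of local police-scanner audio.
    return "report of a downed power line near the edge of town"


def search_news(query: str, scope: str) -> str:
    # Placeholder: would query local, state, or national news sources and
    # return the headline whose language best matches the query.
    return f"[{scope}] storm damage leaves hundreds without power"


def search_videos(headline: str) -> list:
    # Placeholder: would return YouTube results found for the headline.
    return ["video_a", "video_b"]


def ask_cleverbot(prompt: str) -> str:
    # Placeholder: would send the headline to Cleverbot and return its reply.
    return "Why does the power always go out when it rains?"


def run_daily_cycle() -> CycleRecord:
    """One full pass: scanner -> local news -> national news -> video -> bot."""
    record = CycleRecord(scanner_text=transcribe_scanner())
    record.local_headline = search_news(record.scanner_text, scope="local")
    record.national_headline = search_news(record.local_headline, scope="national")
    record.video_ids = search_videos(record.national_headline)
    record.bot_reply = ask_cleverbot(record.national_headline)
    return record


if __name__ == "__main__":
    print(run_daily_cycle())
```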


You say that you are looking for patterns that link past to present. Can you give me a few examples of how you implement that notion in your project from different disciplinary perspectives?
We begin by tracking a hyperlocal event through the language that’s initially used to describe it. When we search this language across the internet, related words or even connected stories are discovered. It’s like when you search for something online and it leads you in multiple directions, away from the original source and into tangentially related yet new geographies, times, histories, cultures, etc.
In RLTV, language is the catalyst for all visual and sonic elements (or lack thereof). The words we cull from police scanners, news headlines, and Cleverbot conversations are all entered into our graph (information aesthetics), which tracks the patterns in language across different times. The phrases, coupled with an analysis of their emotional tone, are used to search for videos and sounds, creating a collage of image and audio that builds and dissipates in relation to how the conversation flows. Do angry words move more frantically, or do we sense them more acutely, thus slowing down time? We hope that these “emotional” acts will cause empathy or even belief to arise in visitors. If this happens, even momentarily, the work can be transportive: the data and language become the fuel for the sensory. One state is replaced by another. Maybe it’s like having a visceral or emotional experience and breaking it down in therapy at the same time. Experience becomes analysis becomes experience.
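(As a rough illustration of what “emotional potency” and “divergence” could mean computationally, here is one toy way such scores might be derived. The lexicon, weights, and formulas are invented for the example; they are not taken from RLTV’s own scoring.)

```python
# A toy, invented scoring of "emotional potency" and "divergence" between an
# incoming phrase and an outgoing reaction, using a tiny hand-made lexicon.
# None of these numbers or formulas come from RLTV itself.

VALENCE = {
    "fire": -0.8, "injured": -0.9, "rescue": 0.4,
    "calm": 0.6, "love": 0.9, "lost": -0.6,
}


def valence(text: str) -> float:
    """Signed emotional tone of the words we recognize, negative to positive."""
    scores = [VALENCE[w] for w in text.lower().split() if w in VALENCE]
    return sum(scores) / len(scores) if scores else 0.0


def potency(text: str) -> float:
    """Average absolute emotional weight: how charged the language is."""
    scores = [abs(VALENCE[w]) for w in text.lower().split() if w in VALENCE]
    return sum(scores) / len(scores) if scores else 0.0


def divergence(incoming: str, outgoing: str) -> float:
    """Gap between the emotion coming in and the reaction going out."""
    return abs(valence(incoming) - valence(outgoing))


if __name__ == "__main__":
    headline = "two injured in apartment fire"
    bot_reply = "i love a calm morning"
    print("potency:", potency(headline))                    # 0.85
    print("divergence:", divergence(headline, bot_reply))   # 1.6
```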
We were also thinking about the new James Webb Space Telescope. The telescope looks back through time and space, getting closer than ever to the Big Bang. RLTV tries to look back in time, too, but maybe more in the way a forensic scientist finds evidence to recreate a crime scene: once an event has occurred and time lapses, our understanding of it becomes less and less accurate, or even possible. We need to track the evidence back in time, from the outside in. The looping sequence of RLTV attempts to see back in time to approximate the original (yet never actually perceived) event, then proposes and tracks how it might have mutated as it is digested, translated, and reformed.

Unpredictability and entropy seem to be central in your work. In what ways do you reflect that through sound, technology, visual data, and other imagery?
We create systems and let them loose, watching them interact with forces that are beyond our control, like internet algorithms, AI language processing, news reporting, and social media. RLTV acts and reacts incrementally, churning and mutating according to its feelings, accumulating data and re-presenting it to us in the exhibition. The project acts (or mimics) emotionally, like we humans do, at times seeming logical and analytical, and at other moments frantic and irrational – poignant and scary, funny and/or sad. When visitors attribute meaning to these moments, relating to them personally at all, the loop closes. At those times, visitors might, even for a moment, feel something that has some (albeit tenuous) relationship to the original event, which was experiential before it was changed by language. Then they’re back in time, empathizing with the event. But this could all occur by chance, without any substantive connection at all. That’s ok, too. We like working in these spaces between randomness, entropy, meaning, and belief.

What is the role of silence in your work?
Silence has been central to our work since our first project. It happened kind of by chance in Verbolect. The moments when everything went quiet and no voices were speaking were haunting. It felt like everything we could hear in that cavernous boiler room and the outside neighborhood was heightened and began to blur with the project itself. Time and space collapsed, and the chance sounds seemed to be a predetermined element in the work: where the project ended and the “real” world began became confusing. This disorientation jolted us back into the space itself, while simultaneously fusing the environment with the system we created.
To build on this, RLTV is often silent. Nothing will be audible other than the ambient sounds of the exhibition space. During these silent moments, visitors become hyper-aware of the sounds of the gallery. Like when a television commercial goes quiet and we pay closer visual attention, one sense is amplified when another is deprived. We hope that these moments are tangible interruptions, snapping visitors out of the virtual and into an observable, temporal reality that they can see, hear, smell, and feel, much like the original event was “real,” and devoid of the language that would eventually transform it.
You invite the visitor to spend some time with your multi-media installation. What would you like them to take away?
We hope visitors can spend time with the project, and even visit on multiple occasions. We all experience it much like visitors do – it moves in unpredictable ways, and the connections or patterns that arise are really exciting. There will be moments when it seems like very little is happening, and then flashes when everything moves in sync or becomes frantically dissonant, and something has happened. Like Boetti’s Lampada annuale – a lamp that shines for eleven seconds per year, at a completely random moment, even if no one is there to witness it – there could be a moment in RLTV when something unique, maybe even revelatory, occurs; we just can’t know when. Maybe it’ll make sense in a way that you didn’t even think possible. RLTV plays on our mind’s desire to categorize and attribute meaning where there might not be any. But there may be something there.

All photos courtesy of NCP unless otherwise indicated
NonCoreProjector Members:
Jack Colton – Interdisciplinary artist working primarily in video
Elias Jarzombek – Musician/Programmer/Maker
John O’Connor – Visual Artist
Nat Clark – Experimental Musician/Sound & Generative Artist
Rollo Carpenter – Author/musician/AI researcher/inventor of Cleverbot
Cleverbot – Chatterbot web application that uses artificial intelligence to have conversations with humans. Cleverbot’s responses are not pre-programmed but are derived from its memory of past conversations: learning exclusively from human input, it responds to what someone says to it by finding how a human being previously responded to those same words. Since launching in 1997, it has engaged millions of people around the world in conversation through over 10 billion interactions. Cleverbot is credited as the author of Do You Love Me, a short film with unexpected dialogue.
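(As a toy illustration of that learn-from-humans mechanism, and emphatically not Cleverbot’s real implementation, a few lines of Python can sketch the basic loop of remembering human replies and reusing them when the same words come in again.)

```python
# A toy illustration of the retrieval idea described above, not Cleverbot's
# real implementation: remember what humans said in reply to a given line,
# then answer a new input by reusing one of those remembered human replies.

from collections import defaultdict
import random

memory = defaultdict(list)  # input line -> human replies previously seen


def learn(prompt: str, human_reply: str) -> None:
    """Store how a human responded to this prompt."""
    memory[prompt.lower().strip()].append(human_reply)


def respond(prompt: str) -> str:
    """Reply with something a human once said to the same words."""
    replies = memory.get(prompt.lower().strip())
    return random.choice(replies) if replies else "Tell me more."


learn("how are you?", "I'm fine, and you?")
print(respond("How are you?"))  # -> "I'm fine, and you?"
```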
Curator’s bio:
Michelle Sunset (she/her) is a curator at the University of Wyoming Art Museum. She holds two MAs from Florida State University, one in the History and Criticism of Art with specializations in Museum and Cultural Heritage Studies and the Visual Cultures of the Americas and the other in Visitor-Centered Curation. She has a BA in History from the University of North Florida. Her curatorial work is driven by the principles of learning theory, collaboration, and social justice. Since returning to the Mountain West, she has become increasingly interested in subverting the myth of the cowboy through exhibitions like The West on Horseback and Luke Gilford: Portraits of the Queer Frontier.
Rec Lobe TV at the University of Wyoming Art Museum, July 2 to December 23, 2022