A framework for productive play — AI as blade, not author.
The Cut-Up and the Abduction
In 1959, Brion Gysin accidentally sliced through a stack of newspapers with a razor blade while cutting a mount for a drawing. The strips of text fell together in new arrangements. He showed them to William Burroughs, who recognized immediately what had happened: meaning had been liberated from its original intent. The cut-up was born — not as creation, but as recombination. Not authorship, but surgery.
Sixty years earlier, Charles Sanders Peirce had given a name to what happens next — the cognitive act that follows the cut. The moment when two previously unrelated fragments land next to each other and your mind, unbidden, produces a hypothesis about what they mean together. Peirce called this abduction: the logic of discovery, the inferential leap that generates new explanations from surprising observations. Not deduction (which proves), not induction (which confirms), but abduction — the flash of insight that says maybe this means that.
And Peirce understood something else. He knew that abduction doesn't arrive on demand. It tends to arise through a specific mode of cognition he called Musement — a state of undirected, receptive play in which the mind moves freely through real material without a fixed goal. Not daydreaming. Not work. Play — but play that, surprisingly often, produces. What interested Peirce was that this seemingly idle wandering has a remarkable tendency to generate genuinely novel insights. Not always. But more reliably than you'd expect from something that feels so much like doing nothing.
Archive Harvest is an attempt to build a framework for that kind of productive play. It's a live visual performance system I built as part of an ongoing practice in lighting design and real-time image — a practice grounded in the conviction that meaning isn't something you plan in advance, it's something you discover through doing. The system pulls forgotten archival footage from the Internet Archive — mid-century industrial films, Cold War propaganda reels, amateur home movies, scientific documentation — and layers them into live visual compositions projected in performance. The footage already exists. It's been sitting in digital vaults for decades, unseen. The tool doesn't generate anything new. It resurfaces what was lost, hands you the blade, and tries to create the conditions for abductive leaps to occur — in the performer, and maybe in the audience.
What It Looks Like in Practice
This clip is a good example of how the system works and what it feels like when the materials find each other. Two archival layers are composited here: footage of a historical witch burning overlaid with a piece of trippy mid-century Americana — the kind of oversaturated, optimistic promotional film that radiates a confidence its era didn't earn. Neither clip was selected to comment on the other. They were loaded into adjacent layers during a session and blended through the effects chain.
But something happened in the collision. The burning figure and the cheerful Americana started to rhyme in a way that felt genuinely unsettling — the violence embedded in the culture that produced both images became visible in a way that neither clip could articulate alone. The patriotic colors of one became the fire of the other. The domestic optimism became a kind of erasure.
This wasn't planned. It was noticed. That's the distinction Peirce makes between construction and abduction — you don't build the insight, you encounter it. The performer's task is to sustain the conditions in which encounters like this can happen, and then to recognize them when they arrive. In practice that means shuffling, blending, tweaking effects, staying in the state of alert receptive attention that Peirce called Musement — and then holding the moment when something clicks.
This particular look became one I kept coming back to. The system lets you save these discoveries as scene snapshots that can be recalled during performance. What starts as Musement — undirected play — produces a specific visual composition that can be developed, refined, and performed. The play was productive.
Against Generative Slop
The dominant narrative around AI and creative work goes something like this: describe what you want, and the machine produces it. Text-to-image. Text-to-video. Text-to-music. The human becomes a prompter — a client issuing briefs to a tireless intern who never pushes back, never misunderstands productively, never introduces the friction that makes art interesting.
The output of this process is what the internet has started calling slop — technically competent, aesthetically vacant, spiritually dead. It looks like something. It feels like nothing. The problem isn't that AI made it. The problem is the relationship: AI as author, human as consumer. That's the wrong topology.
There's a deeper issue too. Generative AI produces exactly what you ask for. It closes the loop between intention and output. But Peirce understood that genuine discovery seems to require surprise — an encounter with something you didn't expect and can't immediately explain. That surprise is what triggers abduction. A system that only gives you what you asked for is unlikely to surprise you. It can only confirm what you already knew you wanted. There's not much room for play in that.
Archive Harvest inverts this. AI doesn't generate the visuals — it can't. The footage is real. It was shot by real people, in real places, decades ago. What AI does instead is act as librarian, curator, and analytical eye. It watches the clips through sampled frames, describes what it sees, tags the content with semantic metadata — mood, era, visual texture, motion quality — and then, when you tell it what you're after, it suggests how to layer those materials together. It's the research assistant who's read everything in the archive and can tell you which reels rhyme with each other. But it can't predict what will happen when those reels collide in real time, in a specific room, with a specific audience. That's where the surprise lives. That's where play might become productive.
Musement: Play as Method
In his 1908 essay "A Neglected Argument for the Reality of God," Peirce describes Musement as a pure play of ideas in which the mind freely associates across the three categories of experience: Firstness (raw quality, feeling), Secondness (brute fact, resistance), and Thirdness (mediation, pattern, law). It's the condition in which abductive hypotheses tend to arise — not through deliberate effort, but through sustained, alert attention to the material at hand. The mind wanders, but it wanders through real things. And in that wandering, new connections sometimes emerge.
What seems to make Musement productive rather than idle is the material it plays with. Peirce is fairly precise about this: you need genuine encounters with the real — experiences that resist your expectations and force the mind to accommodate something it didn't anticipate. This is Secondness, the brute "thisness" of the world pushing back against your habits of thought. Without that resistance, there's less room for surprise. Without surprise, fewer openings for abduction. Without abduction, not much that's genuinely new. The quality of the play seems to depend on the quality of the material.
This, at least, is what I notice happening during a live visual performance with archival footage. You're not executing a predetermined lighting cue sheet. You're musing — something close to what Peirce described. The footage presents raw visual qualities (Firstness): the particular amber warmth of 1950s Kodachrome, the harsh fluorescent green of a factory training film, the blown-out whites of amateur 8mm. It presents brute facts (Secondness): these are images of real things that happened — a steel mill operating, a family eating dinner, a missile test. And in the collision of layers, patterns start to emerge (Thirdness): the factory and the family dinner begin to feel like a commentary on industrial domesticity; the missile test and the child's birthday party seem to speak to the parallel realities of Cold War America.
None of these readings were planned. They feel abducted — hypothesized in the moment of encounter, by a mind at play with materials whose juxtaposition produced something unexpected. The performer's task is to sustain this state: to keep shuffling the deck, adjusting the blend, tuning the effects until something clicks — until a combination produces a flash of recognition that can be held and refined for the audience.
The Cut-Up Method, Digitized
Burroughs understood intuitively what Peirce formalized: that meaning isn't only constructed by the author — it's also an emergent property of juxtaposition. Place two unrelated texts side by side and a third meaning appears in the gap between them. The reader's pattern-recognition machinery fires whether you want it to or not. "When you cut into the present, the future leaks out."
What Burroughs provided was a technique for generating productive encounters — a method for arranging the conditions in which new meaning could emerge. The cut-up isn't a creative act in the traditional sense. It's a framework for play. You provide the materials (the newspaper, the novel, the transcript), you apply the operation (the cut), and then you attend to what arrives. The art is in the selection — which cuts to keep, which juxtapositions to pursue, which accidents to honor.
Archive Harvest extends this to moving image and light. Three layers of archival footage composite in real time on a projector. A 1952 industrial film about steel manufacturing overlays a 1967 Department of Defense training reel overlays a home movie of a child's birthday party in suburban Ohio. None of these materials were meant to coexist. Their collision produces something none of them could produce alone — not a new image, but a new reading of old images. The meaning lives in the superposition.
What the AI Actually Does
The AI's role in this practice is specifically non-generative. It operates in three modes, each designed to support and extend the conditions for productive play:
Vision analysis. When a clip is imported, Claude samples frames across its duration and describes what it sees — not just objects and scenes, but visual qualities that matter for live mixing. Film grain texture. Camera movement patterns. Dominant color temperatures. Whether the footage is static or frenetic. The era it belongs to. This is something like Peircean observation — attending to the Firstness and Secondness of the material so the performer doesn't have to watch every minute of every clip. The AI builds an index of qualities that the human mind can then freely associate across. It doesn't interpret the footage. It makes it available for play.
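To make the frame-sampling pass concrete, here is a minimal sketch of how it might be structured. Everything here is illustrative rather than the system's actual code: the function names, the schema keys, and the sampling strategy are assumptions; in the real pipeline the sampled frames and this prompt would go to the Anthropic vision API, which is omitted.

```python
def sample_timestamps(duration_s: float, n_samples: int = 8) -> list[float]:
    """Pick n evenly spaced timestamps across a clip, avoiding the very ends."""
    step = duration_s / (n_samples + 1)
    return [round(step * (i + 1), 2) for i in range(n_samples)]

# Hypothetical metadata schema the vision pass fills in for each clip.
CLIP_SCHEMA = {
    "mood": None,     # e.g. "eerie", "optimistic"
    "era": None,      # e.g. "1950s"
    "texture": None,  # e.g. "heavy 16mm grain"
    "motion": None,   # e.g. "static", "frenetic"
    "palette": None,  # dominant color temperatures
}

def build_vision_prompt(filename: str, timestamps: list[float]) -> str:
    """Assemble the instruction sent to the model alongside the sampled frames."""
    return (
        f"These frames were sampled from '{filename}' at {timestamps} seconds. "
        "Describe the footage for a live-mixing index and return JSON with keys: "
        + ", ".join(CLIP_SCHEMA)
    )

print(build_vision_prompt("steel_mill_1952.mp4", sample_timestamps(120.0, 4)))
```

The point of the schema is that every clip ends up described in the same vocabulary, so the curation step can compare clips it has never "watched" side by side.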
Curation. Given a prompt like "red industrial tones" or "eerie underwater dreamscape," the AI doesn't generate images matching that description. Instead, it searches through the metadata of your existing library and suggests which clips to layer on which channel, what effects settings might enhance the aesthetic, and how the layers could interact. This is something like provisional abduction — the AI generating hypotheses about which combinations might produce interesting juxtapositions. But the performer evaluates those hypotheses in real time, in something closer to the state of Musement. The AI proposes; the human plays.
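The matching step can be sketched as a toy ranking over the tag index. This is a deliberately naive stand-in: the real system asks the model to reason over the metadata, not a set-intersection score, and the clip IDs and tags below are invented for illustration.

```python
# Hypothetical in-memory library index built from the AI-written tags.
LIBRARY = [
    {"id": "steel_1952",  "tags": {"industrial", "red", "machinery", "1950s"}},
    {"id": "reef_1961",   "tags": {"underwater", "eerie", "slow", "blue"}},
    {"id": "parade_1955", "tags": {"optimistic", "americana", "saturated"}},
]

def suggest_layers(prompt_tags: set[str], library: list[dict], n_layers: int = 3) -> dict:
    """Rank clips by tag overlap with the prompt, assign the top hits to layers."""
    ranked = sorted(
        library,
        key=lambda clip: len(clip["tags"] & prompt_tags),
        reverse=True,
    )
    return {f"layer_{i + 1}": clip["id"] for i, clip in enumerate(ranked[:n_layers])}

print(suggest_layers({"red", "industrial"}, LIBRARY))
```

Even in this reduced form the division of labor is visible: the function proposes an assignment of clips to channels, and everything after that (blend modes, effects, timing) happens live in the performer's hands.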
Discovery. Based on your library's existing aesthetic profile, the AI suggests search terms for the Internet Archive that would find complementary footage. It knows the Archive's collections — Prelinger, the Cold War film vault, the amateur cinema holdings — and can recommend specific queries that are likely to surface materials you didn't know existed but that rhyme with what you already have. This expands the field of play — increasing the number of possible encounters and therefore the chances of something unexpected clicking into place. More material means more possibility for surprise.
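The discovery queries ultimately resolve to calls against the Internet Archive's public search API. The sketch below builds an `advancedsearch` URL; the endpoint, the `collection:prelinger` and `mediatype:movies` fields, and the JSON output flag are standard Archive query conventions, while the helper name and default parameters are my own assumptions about how the system might wrap them.

```python
from urllib.parse import urlencode

def archive_query_url(terms: list[str], collection: str = "prelinger",
                      rows: int = 20) -> str:
    """Build an Internet Archive advancedsearch URL for film queries."""
    # Quote each term so multi-word phrases survive the Lucene query syntax.
    joined = " OR ".join(f'"{t}"' for t in terms)
    query = f"({joined}) AND collection:{collection} AND mediatype:movies"
    params = {"q": query, "fl[]": "identifier", "rows": rows, "output": "json"}
    return "https://archive.org/advancedsearch.php?" + urlencode(params)

url = archive_query_url(["steel mill", "civil defense"])
print(url)
```

In the running system the `terms` list would come from the AI's reading of the library's aesthetic profile; fetching the URL returns identifiers that can then be downloaded, processed, and tagged like any other clip.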
Materials Lost to Time
The Internet Archive holds millions of public domain videos that almost nobody watches. Industrial training films from companies that no longer exist. Government propaganda that was screened once in a high school gymnasium in 1954 and never again. Scientific documentation of processes and phenomena shot on 16mm by researchers who are now dead. Home movies donated by families who didn't want them thrown away but had no idea anyone would ever look at them.
This footage has an aesthetic quality that can't be replicated by generative AI, because it's the product of real material constraints — the specific grain structure of Kodachrome, the optical characteristics of lenses that haven't been manufactured in decades, the way fluorescent lighting in a 1960s factory interacts with Ektachrome film stock. These aren't effects. They're artifacts of a specific moment in the history of image-making. When you composite them in a live performance, you're not simulating a vintage look — you're using the actual materials. The light that passed through that lens, that hit that film stock, that captured that moment — it's still in the image. You're projecting it again.
I think Peirce would recognize why this matters. Productive play seems to require genuine Secondness — encounter with something that resists your expectations, something with the brute facticity of the real. Generative imagery, no matter how photorealistic, lacks this resistance. It was made to satisfy your prompt. It has no history, no context, no life before you summoned it. Archival footage has all of these things. It was made by someone else, for some other purpose, in some other time. When you encounter it in a new context, that distance — between its original meaning and its present collision — feels like precisely the kind of gap in which abduction operates. The surprise that might trigger insight.
In an era where image generation threatens to drown the visual commons in synthetic content, working with archival footage is a deliberate choice to stay rooted in the real. These images are evidence of a world that existed. Remixing them isn't nostalgia — it's archaeology. The cut-up doesn't destroy the original meaning; it reveals latent ones. A Cold War civil defense film, stripped of its narration and layered with footage of children playing, seems to say something neither could say alone about the parallel realities Americans inhabited in the mid-twentieth century. That saying — if it's real, and not just my projection — is the abductive hypothesis. The new explanation that emerges when play encounters surprise.
How It Works
The system has two components. A web application handles the archive — searching, downloading, processing, AI-tagging, reviewing, and extracting clips — while TouchDesigner runs the real-time video engine and projector output. A browser-based controller provides the performance interface: knobs for per-layer effects, clip pads with video thumbnails, faders for opacity and speed, and an audio-reactive section with tap tempo and frequency-band parameter binding.
The video engine composites three layers through a master effects chain — contrast, edge detection, chromatic aberration via a GLSL shader, and resolution pixelation — before outputting to the projector. A scene system allows snapshots to be saved and recalled instantly during performance. Shows can be saved as complete configurations and loaded for different venues and sets.
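The scene system is essentially a bank of deep-copied parameter snapshots. A minimal sketch, with the caveat that the state shape and parameter names here are invented for illustration; in the real system the live state lives inside TouchDesigner rather than a Python dict.

```python
import copy

class SceneBank:
    """Save and recall named snapshots of the full parameter state."""

    def __init__(self) -> None:
        self._scenes: dict[str, dict] = {}

    def save(self, name: str, state: dict) -> None:
        # Deep copy so later live tweaking can't mutate the saved look.
        self._scenes[name] = copy.deepcopy(state)

    def recall(self, name: str) -> dict:
        return copy.deepcopy(self._scenes[name])

state = {
    "layers": [{"clip": "witch_burning", "opacity": 0.8, "speed": 1.0},
               {"clip": "americana_promo", "opacity": 0.6, "speed": 0.5}],
    "master": {"contrast": 1.2, "edge": 0.0, "aberration": 0.35, "pixelate": 1},
}
bank = SceneBank()
bank.save("fire_rhyme", state)
state["master"]["contrast"] = 2.0  # live tweaking continues after the save...
print(bank.recall("fire_rhyme")["master"]["contrast"])  # prints 1.2
```

The deep copy is the whole trick: a snapshot is a discovered look frozen mid-play, so recall has to return exactly what was found, untouched by everything tried since.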
The audio reactive system captures sound from the room, splits it into bass, mid, and high frequency bands, and allows any visual parameter to be bound to any band. Bass kicks punch the opacity. Hi-hat patterns trigger edge detection. Synth pads shift the hue. The performer sets up the bindings, taps the tempo, and then the system responds to the room.
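The band-splitting and binding described above can be sketched with an FFT over one audio frame. The band boundaries, binding format, and normalization here are plausible choices rather than the system's actual values, and a real implementation would run this per frame against a live audio input rather than a synthetic test tone.

```python
import numpy as np

# Hypothetical frequency-band boundaries (Hz) for bass / mid / high.
BANDS = {"bass": (20, 250), "mid": (250, 4000), "high": (4000, 16000)}

def band_energies(samples: np.ndarray, sample_rate: int) -> dict[str, float]:
    """Split one audio frame into per-band energy via an FFT magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return {
        name: float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
        for name, (lo, hi) in BANDS.items()
    }

# Each binding maps a band to a visual parameter with a scale factor.
bindings = [("bass", "layer_1.opacity", 1.0), ("high", "master.edge", 0.5)]

def apply_bindings(energies: dict, bindings: list, total: float) -> dict:
    """Normalize band energy and write it into the bound visual parameters."""
    return {param: round(scale * energies[band] / max(total, 1e-9), 3)
            for band, param, scale in bindings}

# A 100 Hz test tone: its energy should land almost entirely in the bass band.
t = np.arange(2048) / 44100.0
tone = np.sin(2 * np.pi * 100 * t)
e = band_energies(tone, 44100)
print(max(e, key=e.get))                        # prints "bass"
print(apply_bindings(e, bindings, sum(e.values())))
```

Any-parameter-to-any-band routing is what makes the coupling semi-autonomous: the performer chooses the bindings, but the room's sound decides what actually happens to the image.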
This audio coupling seems important for sustaining productive play. The performer isn't only juxtaposing images — they're juxtaposing images with sound, and the system's audio reactivity introduces a semi-autonomous element that the performer configured but can't fully predict. The bass drops and the entire visual field shifts in ways you didn't anticipate. This productive unpredictability — this ongoing stream of minor surprises — tends to keep the performer in something like the state of alert, receptive attention that Peirce identified with Musement. You can't settle into routine. The system won't let you. It keeps offering surprises that keep the play going.
The Blade, Not the Hand
Burroughs' cut-up method was criticized as anti-literary — a negation of craft. But Burroughs never claimed the technique replaced authorship. The cut-up is a framework for discovering connections the conscious mind would suppress. The author still chooses which newspaper to cut, where to place the blade, which arrangements to keep and which to discard. The randomness is a collaborator, not a replacement.
Peirce would likely agree, and he might add something. Abduction isn't random — he considered it the form of reasoning most likely to produce new ideas, because deduction and induction can only work with what you already have. Abduction reaches beyond the given to hypothesize something new. But it seems to require raw material — surprising facts, unexpected combinations, encounters with the genuinely other. The cut-up generates these encounters. AI, in this system, multiplies them by making a vast archive navigable. The abductive leap itself — the moment of insight, the new reading — feels like something that remains stubbornly human. It seems to arise in play, or not at all.
This is the model for AI in creative practice that interests me. Not AI that replaces the artist's hand, but AI that builds the playground. The footage is real. The performance is human. The audience is in the room. The light is on the wall. What the machine contributes is the ability to navigate an impossibly large archive, to see patterns across thousands of clips, to suggest juxtapositions that a single human couldn't hold in memory. It doesn't make the art. It tries to create the conditions in which art might emerge — through play, through surprise, through the abductive leaps that seem to happen when a mind moves freely through real material.
When I perform with Archive Harvest, the AI has already done its work — tagging, curating, suggesting. What happens live is entirely analog: a person in a room, responding to sound, adjusting light, trying to sustain the productive play that Peirce called Musement and Burroughs practiced with a razor blade. The machine built the framework. The play is mine.
The future leaks out.
Archive Harvest was built with Python, FastAPI, TouchDesigner, and the Anthropic API, connecting to the Internet Archive's public domain collections. The system — including AI-powered curation, real-time video compositing, audio reactivity, and browser-based performance control — was designed and built in collaboration with Claude.