
What I've Learned Watching My Husband Paint Through the AI Boom

Why "tool versus replacement" is the wrong way to think about AI in art — and what a better frame might look like

It's been a long time. About two years since I last sent anything, which is longer than I meant it to be.

I'm coming back with something a little different. Not a tutorial or a roundup of new tools — a personal piece about a question I've been sitting with for a while, sparked by watching my husband (an oil painter) try to figure out his own relationship with AI. The question is one I think a lot of you have been turning over too: how much can you hand over to AI before the work stops being yours?

If that's something you've been thinking about, I'd love for you to read it.

Beloved (2022) by Mykola Koidan — no AI involvement

I have spent the last eight years researching AI — analyzing where the technology is going, writing about new tools, trying to understand what each new model actually changes. My husband paints portraits in oil. The classical kind, where a single composition can take days before any paint touches the canvas, and the canvas itself takes weeks after that.

For a long time, our work lived in different parts of the house and different parts of our conversations. Then image generation got good, and I started watching him try to figure out what to do with it.

What surprised me was what didn't worry him. He wasn't afraid of being replaced. He knows what his work is and who it's for; nobody who commissions an oil portrait is going to be satisfied with a generated image. The question for him was something else, and it was harder.

He could see what these tools might give him. Faster reference gathering. Composition studies without spending days on them. A way to test color ideas before committing. He wanted those things. He's not a purist. But every time he opened one of these tools, he ran into the same question: how much could he hand over before the work stopped being his? If he generated a composition and then painted it, was the composition still his? If he didn't generate it, was he just making things harder for himself out of pride?

I didn't have good answers for him. And the public conversation wasn't helping either. One side was saying AI is just a tool, like Photoshop, and artists who push back are being precious about it. The other side was saying AI is a replacement engine built on stolen work, and any artist who uses it is part of the problem. Both sides were sure of themselves. Neither was useful for someone sitting in a studio on a Tuesday, trying to decide whether to open Midjourney.

The more I listened to both arguments, the more I started to think the framing itself was the problem. "Tool versus replacement" sounds like a real debate, but it borrows its shape from older technology shifts and assumes the lessons carry over. Some of them do. A lot of them don't.

Why the Binary Persists

Before taking the framing apart, it's worth asking why it's stuck around for so long. Smart people on both sides keep returning to it. There are reasons for that.

The "tool" framing is useful if you want to use AI without feeling bad about it. It puts generative models in the same category as a brush, a camera, or a copy of Photoshop — things nobody asks you to justify. If AI is just another tool, then the only interesting question is whether you're using it well. Ethics, training data, displaced illustrators — all of that becomes someone else's problem, or a problem for "the industry," which is another way of saying nobody in particular.

The "replacement" framing does something different but equally useful for the people who hold it. It names a real thing. Illustrators have lost work. Stock photographers have lost work. Concept artists are watching their pipelines compress. Calling AI a replacement engine puts that loss on the record and refuses to let it be reframed as progress. It also draws a clean line: if you use these tools, you're on one side of it.

Both framings work because they're easy to repeat. They fit in a tweet. They sort people into camps quickly, which is what most online arguments are actually for. And once you've picked a side, you don't have to keep thinking — the framing does the thinking for you.

That's the real cost. The binary isn't wrong because each side has nothing going for it. Each side has something. The binary is a problem because it makes the harder questions invisible. What does it mean to "use" a tool that makes most of the decisions? What do you owe an artist whose work was in the training data? When does AI assistance change what a piece of art is, and when is it just a faster way to do something you were already doing? Those questions don't fit on either side of the line, so they get dropped.

My husband's question — how much can I hand over before the work stops being mine — is one of the dropped ones. It doesn't have a tool-side answer or a replacement-side answer. It needs a different conversation, and the binary is the reason we're not having it.

The Photography Parallel

The comparison you hear most often is photography. When painting met the camera in the 1840s, the argument went the same way it's going now. Painters worried they'd be obsolete. Critics dismissed photography as mechanical, not real art. A few decades later, painting was still here, photography had become its own art form, and the people who'd predicted the end of one or the other looked silly. The lesson, supposedly: this always happens, it always works out, calm down.

There's something to this. The cultural pattern really does rhyme. Early reactions to a new image-making technology tend to be more intense than the eventual reality. Painters did adapt — Impressionism and abstraction can be read, in part, as painting moving toward what photography couldn't do. New professions did emerge alongside the old ones. And the people who insisted photography would never count as art now sound exactly as wrong as you'd expect.

But the parallel starts to fall apart as soon as you look at how the two technologies actually work.

A photograph requires a photographer to be somewhere. Someone has to point the camera, choose the moment, decide what's in the frame. Even at its most automatic, photography is tied to a real scene and a person standing in front of it. Generative AI requires none of this. You type a sentence. The image comes from a model's interpretation of that sentence, shaped by everything it was trained on. Nobody has to be anywhere.

Photography also didn't need painting to function. Cameras worked because of optics and chemistry, not because someone had fed them a million paintings first. Generative AI is different in a way that has no real equivalent in the photography story: it was built using the work of the artists it now competes with. Whatever you think about whether that's fair, it's a structural fact about the technology, and it changes the moral shape of the comparison.

There's a skill question too. Photography kept a high floor for competent work and a high ceiling above it. Almost anyone could press a shutter, but making a photograph that was actually good — composition, light, timing, printing — took years. The camera democratized image-making at the bottom without flattening the difference between a snapshot and an award-winning photograph. Generative AI has almost no floor at all. A first-time user can produce something that looks competent on the first try. Whether the ceiling is high is still being argued, but the bottom of the skill range has dropped to zero in a way it never did with photography.

So the photography parallel is half-useful. It's a decent guide for thinking about cultural reception — the panic, the dismissal, the eventual settling-in. It's a poor guide for thinking about the technology itself. If you only take the photography lesson, you'll end up reassuring yourself with a story that doesn't actually fit what's in front of you.

Let It Burn (2025) by Mykola Koidan — the process started with a very clear idea for the painting, then dozens of reference photos, hundreds of AI-generated images based on those references, lots of Photoshop editing to arrive at the final reference, and finally painting on canvas

The Digital Workflow Parallel

The other comparison that comes up a lot is closer to home: the shift from traditional media to digital. Painters moving to Wacom tablets. Photographers moving from the darkroom to Photoshop. Illustrators learning to work in Procreate instead of on bristol board. This one feels more relevant than photography, and in some ways it is.

The cultural pattern is familiar. There was a real purist resistance — "real artists use real paint" — that softened over a generation. The new tools turned out to require new skills, often substantial ones; digital painting didn't kill craft, it just moved it somewhere else. Hybrid practice became normal. Plenty of working artists today move between traditional and digital without thinking much about it. And the economic disruption was real: faster iteration meant fewer hands needed for the same output, and some kinds of illustration work quietly disappeared. That part of the story does map onto what's happening now.

But the parallel breaks in two specific places, and they're the places that matter.

The first is the question of who's making the marks. A digital painter still makes every brushstroke. The tablet is faster and more forgiving than oil on canvas, but the artist's hand is still the thing that decides where each line goes. Photoshop edits a photograph that the photographer took. The tools changed the medium and the workflow, but the person stayed in the same role: they were still the one executing the work. Generative AI moves the artist out of that role. You describe what you want and the model produces it. You can edit, you can iterate, you can paint over the result — but the moment of making the image has been handed to something else. That's not a faster brush. It's a different relationship to the work.

The second is where the technology came from. Photoshop was built by engineers Adobe paid to build it. Generative AI tools were built by paid engineers too — Midjourney, OpenAI, Google and the rest all employ them. But the engineers weren't enough. To make these models work, the companies also needed enormous datasets of existing images, many of which came from working artists who weren't asked, weren't paid, and in many cases weren't told. That second ingredient — the training data — has no equivalent in the history of art-making tools. Photoshop didn't need to ingest a million paintings to function. Generative AI did. Fair or not, that's a structural fact about how these tools came to exist, and it's something the digital parallel can't help you think about.

There's also a smaller difference worth naming, and it cuts in an interesting direction. Learning Photoshop rewarded the same things that being a good artist already rewarded — drawing skill, color sense, an eye for composition. The transition cost was real, but it built on what artists already knew. Generative AI rewards a different mix. Color sense and an eye for composition still matter; if anything, they matter more, because those are the things that separate a striking AI image from a generic one. But drawing skill — which used to be the main barrier to entry into visual art — doesn't. In its place, you need aesthetic taste and the ability to actually work the tools: knowing how to push a model toward a specific look, how to keep a character or a style consistent across images, how to combine multiple tools to get to the result you want.

That's not a smaller skill set than digital painting. It's a different one. And it lets in a different group of people. Some artists who spent years learning to draw will find that part of their training doesn't transfer the way it did when they moved to a tablet. Some people who never had the patience to learn to draw will find they can finally make the kind of images they always wanted to. Whether you call that democratization or displacement depends on where you're standing, but it's a real shift, and it's not what happened with the move to digital.

So the digital parallel gets you further than the photography one. It's genuinely useful for thinking about workflow disruption, hybrid practice, and the way an industry absorbs a new tool over time. But it can't help you with the two questions that matter most: who's actually making the work, and what was used to build the thing that's making it.

What All These Parallels Miss

If you stack the photography and digital parallels next to each other, the same set of things keeps falling outside what they can explain. Three of them, specifically.

The first is the training data. The obvious objection here is that human artists also learn from other artists — nobody picks up a brush in a vacuum, and influence is part of how every art form works. That's true and worth saying out loud. But the comparison breaks down at scale and at flexibility. A human artist might study a few hundred paintings closely over a career, and might learn to imitate one or two styles well enough that you could mistake their work for someone else's. A generative model trains on millions of images and can produce a convincing imitation of almost any style on request. That's not the same thing as influence happening faster. It's a different kind of relationship to the work it learned from — one that has no precedent in how artists have ever related to other artists' work, and one the historical comparisons don't help with.

The second is where the image actually gets made. Earlier tools — the brush, the camera, the tablet, Photoshop — gave the artist a faster or more flexible way to do something they were already doing. The artist was still the one making the image. With generative AI, the model does the producing: you can still edit, iterate, and paint over what comes back, but the moment of generating the image happens somewhere other than your hand. That isn't necessarily a loss. But it is different, and it raises questions about authorship and credit that earlier tools didn't raise in the same way.

The third is the skill floor. Photography, digital painting, every previous tool — they all required real time before you could produce something competent. Generative AI gives you a competent-looking image on the first try. Getting what you actually want is harder than that first try suggests, and the skills that lead to genuinely good AI work are real. But the bottom of the range has dropped to a place it has never been before, and that changes who shows up. When the floor drops, the room fills with people who couldn't get in before.

None of this makes AI bad, and none of it makes it good. These are just things that are true about it, and the familiar comparisons can't help you think about them. If you reach for the photography or the digital parallel and stop there, you end up with a story that feels reassuring but doesn't quite fit what's actually in front of you.

Which brings me back to my husband's question — how much can I hand over before the work stops being mine. It isn't a question the past can answer for him. It needs a different way of thinking about what AI actually is.

A Better Frame: AI as a Strange Kind of Collaborator

If "tool" doesn't fit and "replacement" doesn't fit, what does?

The frame I keep coming back to, and the one that seems to actually help artists I've talked to, is something like non-human collaborator with peculiar properties. It's a clunky phrase. But it does something the other two framings don't: it acknowledges that the AI is contributing something real to the work, while being honest about what kind of thing is doing the contributing.

Support (2021) by Mykola Koidan — no AI involved

A brush doesn't make decisions. A camera doesn't choose what's interesting in a scene. Photoshop doesn't have opinions about composition. Generative AI does all of these things, in its own way. When you give a model a prompt, it makes thousands of small choices about what your sentence might mean — what the lighting looks like, how the figure is posed, what gets emphasized, what gets left out. Those choices come from somewhere. They come from the patterns in the training data, filtered through whatever the model has learned to do with them. They aren't human choices, but they aren't no-choices either. Calling that "a tool" doesn't quite describe what's happening.

At the same time, the model isn't a collaborator in the way another artist would be. It doesn't have intentions. It doesn't have taste, exactly — it has something that mimics taste, a composite of patterns drawn from the millions of images it was trained on. It doesn't know what your work is about. It doesn't care whether the piece succeeds. Treating it as a real creative partner gives it credit for things it can't actually do.

So the useful framing sits in between. The AI is contributing, but it's contributing something strange — choices without intention, taste without experience, output without a maker. The artist's job is to figure out how to work with that. What am I bringing to this piece? What is the model bringing? What does the final image owe to each of us, and how do I want to describe that to whoever sees it?

This frame also makes room for the fact that AI use isn't one thing. There's a real difference between generating an image with one prompt and calling it done, using AI for reference and painting the actual piece by hand, and training a model on your own work to extend your own style. "Tool" flattens all of these into the same thing. "Replacement" flattens them into the same thing too, just with the opposite verdict. The collaborator frame lets you tell them apart, because it asks the same question of each — what was the division of labor here — and gets a different answer every time.

Arya (2023) by Mykola Koidan — Idea → hundreds of generated images → final reference image → translating it onto canvas

There are other frames worth considering. Some artists I've spoken to think of AI as something closer to a commissioned worker: you describe what you want, it executes, and the relationship is more like working with an illustrator than working with a tool. Others think of it as a kind of found material — the way a collage artist works with images they didn't make. Others compare it to a synthesizer: an instrument with its own voice, capable of sounds the player doesn't create from scratch but still shapes.

Each of these frames lights up something different. None of them is the final answer. The point isn't to crown one — it's to give up the binary and let artists pick the framing that actually fits what they're doing in their own studio, on their own piece.

Living Without the Binary

I started this with my husband, sitting in his studio, trying to decide whether to open Midjourney. I want to come back to him, because the honest answer to where this article ends up is: he still hasn't fully figured it out. Some weeks he uses these tools for reference and color studies and finds them useful. Other weeks he doesn't open them at all, and the work goes fine without them. He hasn't landed on a rule. I don't think he's going to.

What changed for him isn't the answer. It's the question. He stopped asking whether AI was a tool he should learn or a threat he should refuse, because neither version of that question matched what he was actually doing in the studio. He started asking, piece by piece, how much of a given work he wanted to be his and how much he was willing to share with something else. Sometimes the answer is "all of it, mine, no AI involved." Sometimes it's "the composition came from a generated reference, the painting is mine." Sometimes it's something else. The question is the same; the answer keeps changing.

I think this is closer to what most working artists are actually doing right now, underneath the public arguments. The people who say "it's just a tool" and the people who say "it's a replacement engine" are mostly arguing on the internet. The people in studios are making smaller, more specific decisions, one piece at a time, and they're doing it without much help from the discourse that's supposed to be helping them.

My husband still paints in oil. He still takes weeks on a canvas. Sometimes there's a generated image somewhere in the process and sometimes there isn't. I don't think that makes him an AI artist or a traditional artist. I think it makes him an artist who's figuring it out, like most of the artists I know. The framing we've had hasn't been helping with that. Maybe a different one can.
