No book review this week. Instead, please read this recent post by Chuck Litka in its entirety. Chuck is a wise and insightful writer who has lots of experience in the indie publishing game, so you would be well-advised to heed his words.

I can’t claim to be “wise” or “insightful”, and I certainly have not achieved anything like Chuck’s success in indie publishing. Perhaps the most I can aspire to is the role of the Shakespearean Fool in this drama. But like Jack Point, “winnow all my folly and you’ll find / a grain or two of truth among the chaff.”
Obviously, Chuck is quite right when he says that “AI is going to eliminate jobs that produce art, which is sad, but that doesn’t mean AI is eliminating art. It’s just eliminating jobs.” We can still make art, at a financial loss, on our own time. Which, as Chuck observes, is what most indie authors have been doing already anyway.
Of course, we all hope our books will be read by someone. And by that I mean someone human, not merely an LLM incorporating our words into its training data. But as AI-generated works proliferate, it will become harder for human readers to find human authors, even if they want to. AI can generate content vastly faster than humans, which means it will be much easier to find books by AI than books by people. Which means game over for human writers and artists. Sorry, guys; go home. Gotta hand it to those neural networks, they just wanted it more.
Ah, but wait a moment… go back. A key word just struck me in that conclusion: “easier.” We automatically assume that doing what is easier is the logical choice. This seems intuitively correct. However, imagine if you told a bodybuilder or a marathon runner that there is an “easier” alternative to their activities. Obviously, it’s easier to sit on the couch eating chips than to go to the gym. Yet, some people choose the gym anyway.
Economists speak of individual preferences as a way of predicting people’s decisions, and it’s normal to assume that, all else being equal, any given individual will prefer the easier choice.
But what if it’s not? What if we adjust our preferences so that we prefer the harder thing to the easier one? Suddenly, AI’s ability to make access to books “easier” is no longer a competitive advantage. If we prefer difficulty to ease, a whole lot of things get scrambled. For instance, it also becomes clear that using AI to write books is a non-starter as well, since part of the fun is the challenge of writing.
Sticking with thinking like an economist for now, we might next ask, “what is our incentive to change our preferences?” Well, what’s our incentive to go to the gym? Short-term pain for long-term gain. (Just ignore that one economist who famously said, “in the long run we are all dead.”)
Being an indie author means actively choosing something harder over something easier. I think most of us know this instinctively, but seeing it written down helps us incorporate it into a whole system of action. Because once you tell yourself that you prefer something harder, it no longer feels so hard. And once you make a habit of exercise, it feels worse not to exercise.
But there is still the question of how to monetize this. This is the central problem of being subject to market forces. You and I may prefer human-written books all we like; the majority of the market is indifferent, and will still choose the easier option. So if it’s just you and me selling our books to each other, we can never hope to expand the total income of our two-person market.
One possible answer to this is prestige: if human-authored books are seen as more impressive than AI-generated ones, a viable market for them may still exist. This is possible, but not likely. Another possibility is the return of patronage systems, in which wealthy benefactors sponsor promising artists and intellectuals. Mass-market capitalism gradually replaced this system, but to the extent that AI content-generation is essentially the automation of capitalism, we may return to more human-centered networks of creation, like the one that gave us the Renaissance.
These are just ideas, half-formed theories; nothing more. As of now, that’s Berthold’s Plan A for ensuring AI doesn’t completely eradicate humanity. There is no Plan B. Well, actually, there is, but no one wants that.
