Notes from the Changelog interview with Cory Doctorow on Chokepoint Capitalism
Lemonade stand operated (or perhaps supervised?) by a robot. Image by Bing Image Creator.
While driving Millie (the car) in for maintenance, and then waiting it out, I listened to Examining capitalism's chokepoints, an interview with Cory Doctorow discussing his book Chokepoint Capitalism. It touched on all sorts of things, but right up front came an off-topic question: how is Cory so prolific?
Among other things, Cory suggested that ... well, that practice makes perfect. And that instead of just copy/pasting a link into Slack like I usually do, I should write up some notes trying to capture, for other people, what I found interesting or special about the link. So yeah. Good idea. Let's do that.
Problem: I don't really have or want "blog" software. I ran my site as a publicly-editable wiki for like 20 years. It is now a nuxt-content-driven static site, basically. Super easy to open a file in vim and edit more-or-less markdown. But do I do that? No, not really. So of course, then I'm like ... if only I could run a command to pop open my editor and start writing writing writing away. Great!
Turns out I already did that, made a horrid little shell script that goes like this:
```bash
#!/bin/bash
set +x                              # tracing off; flip to -x when debugging

FILE_DATE=$(date '+%Y.%m.%d')       # date for the filename, e.g. 2023.04.01
TEMPLATE_DATE=$(date '+%Y-%m-%d')   # date format the template expects
TITLE="${*:-Untitled}"              # title from the command line, or "Untitled"
NEWFILE="TLT - $FILE_DATE - $TITLE.md"

echo "Creating: $NEWFILE"
cd ~/tlt/thelackthereof/content || exit 1

# Copy the template (without clobbering an existing post), then fill in the blanks.
cp -n "Blog Template TLT - 2023.01.01 - Blog Template.md" "$NEWFILE"
perl -pi -e "s/TEMPLATE_TITLE/$TITLE/" "$NEWFILE"
perl -pi -e "s/TEMPLATE_DATE/$TEMPLATE_DATE/" "$NEWFILE"

vim "$NEWFILE"
```
But now, if I'm thinking of Cory's advice, I might actually be juggling several drafts or jotting down some partial thoughts. Next thing you know, I'm trying to write a CLI in TypeScript (for reasons) and ... not writing a blog post. As my niece says, The Struggle Is Real!!!
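In the spirit of writing it down instead of actually building it, here is a rough sketch of what that TypeScript CLI might look like. This is not a real tool: the content directory and template name are borrowed from the shell script above, but the subcommands, helper names, and draft-listing behavior are all invented for illustration.

```typescript
// Hypothetical sketch only. Same content directory and template as the shell
// script above; the "list" subcommand is made up for juggling several drafts.
import { promises as fs } from "node:fs";
import path from "node:path";
import { spawn } from "node:child_process";

const CONTENT_DIR = path.join(process.env.HOME ?? ".", "tlt/thelackthereof/content");
const TEMPLATE = "Blog Template TLT - 2023.01.01 - Blog Template.md";

function dates(now = new Date()) {
  const iso = now.toISOString().slice(0, 10); // UTC; the shell script used local time
  return { fileDate: iso.replace(/-/g, "."), templateDate: iso };
}

async function newDraft(title = "Untitled"): Promise<void> {
  const { fileDate, templateDate } = dates();
  const newFile = path.join(CONTENT_DIR, `TLT - ${fileDate} - ${title}.md`);
  const template = await fs.readFile(path.join(CONTENT_DIR, TEMPLATE), "utf8");
  // flag "wx" refuses to overwrite an existing draft, like cp -n
  await fs.writeFile(
    newFile,
    template.replace("TEMPLATE_TITLE", title).replace("TEMPLATE_DATE", templateDate),
    { flag: "wx" }
  );
  console.log(`Creating: ${newFile}`);
  spawn("vim", [newFile], { stdio: "inherit" });
}

async function listDrafts(): Promise<void> {
  // Crude "juggling": list everything that looks like a post, newest first.
  const files = (await fs.readdir(CONTENT_DIR)).filter((f) => f.startsWith("TLT - "));
  for (const f of files.sort().reverse()) console.log(f);
}

const [cmd, ...rest] = process.argv.slice(2);
if (cmd === "list") {
  listDrafts();
} else {
  newDraft([cmd, ...rest].filter(Boolean).join(" ") || undefined);
}
```

Something like `npx tsx blog.ts My New Post` or `npx tsx blog.ts list` would run it, assuming the sketch lives in a file called blog.ts and tsx (or ts-node) is installed; those names are also made up.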
Anyway. Where was I? Oh, capitalism.
I've been playing with ChatGPT a lot and even set myself up with the paid version to experiment with GPT-4. It's a fascinating tool and intriguing to explore. I find that to get interesting or useful things out of it, I must change what I feed in. It's not quite like a Markov chain bot or an Alexa Q&A; it's more like a brainstorming tool that has been fed The Internet. Right off the bat, I can't really expect answers that The Internet doesn't know, especially technical ones. If there's no hint of an answer on Stack Overflow, it's probably not going to come up with one here. Next, you have to feed it with some context. If you approached someone in a café and asked them to give you an itinerary for a vacation you're going on, they could do it, but they'd do a poor job if you didn't give them some information about your tastes and goals.
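To make the café analogy concrete, here is a minimal sketch of the same question asked twice against OpenAI's chat completions endpoint from TypeScript, once cold and once with context. The `ask()` helper, the prompts, and the traveler's tastes are all invented for illustration; the endpoint and request shape are just the public chat completions API as I understand it, and it assumes Node 18+ (for global fetch) with an OPENAI_API_KEY in the environment.

```typescript
// Sketch: the same request with and without context. The ask() helper and the
// example prompts are made up; the endpoint is OpenAI's chat completions API.
type Message = { role: "system" | "user" | "assistant"; content: string };

async function ask(messages: Message[]): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "gpt-4", messages }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

async function main() {
  // The stranger in the café, cold: a generic, probably useless itinerary.
  console.log(await ask([
    { role: "user", content: "Plan a one-week vacation itinerary for me." },
  ]));

  // The same stranger, given tastes and goals: something worth reading.
  console.log(await ask([
    {
      role: "system",
      content:
        "The traveler likes quiet coastal towns and long walks, dislikes museums, " +
        "travels in May, and wants to keep the whole trip under $1500.",
    },
    { role: "user", content: "Plan a one-week vacation itinerary for me." },
  ]));
}

main();
```

Same question both times; the only thing that changes is how much context I bother to feed in.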
A significant part of this is exploring the question for myself: now what? What comes next? What do we do with this? How will it change things?
Many of my friends, acquaintances, and connections on Mastodon and elsewhere discuss how these Large Language Models (LLMs) violate their rights as creators and copyright holders by slurping their work in, mashing it all up, and spitting it back out (sometimes without much mashing) without citing the original. Some of these same friends think Aaron Swartz was over-prosecuted for exfiltrating JSTOR science articles. My sympathies are mixed here. I want people to be compensated for their labor and creativity, I want information to be free, and I don't want monopolistic middlemen leveraging their position in the middle without adding value. I personally publish most things under a license that explicitly or implicitly allows my content to be republished, remixed, etc., ideally with a citation. But I'd be happy just to be cited as one incorporated part of every output. I think the LLM publishers should say, "Here is the dataset that we trained on and every contribution to it." And then I'd even be proud to be part of it.
The podcast. Remember the podcast? The podcast delves into what happens when humans are not only augmented by machines, LLMs among them, but also controlled by the machines. Cory calls this the Reverse-Centaur. And what happens when the economic system is twisted to the point where you are stuck in that situation, controlled by monopolistic, possibly non-value-adding middle players? This he calls Chickenized Reverse-Centaurs. They break down in some detail what this currently means, and it is scary. I have to remember how fortunate I am, celebrate it, and see if I can help others gain some control over their lives as well.
Still, the Centaur concept is powerful. Tools can help us do things faster and better. Having the right tool for something can change a project from hours to minutes. I'd like to know how LLMs can accelerate us, and selfishly me, in day-to-day life.
I sometimes discuss global warming with my father-in-law. He thinks it's a scam on many levels. I generally take the mainstream view that we should probably be more scared than we are, though I also believe we should accept that the Earth is increasingly becoming a garden, and that we need to make it an awesome one. Anyway, he sends me some lengthy and somewhat complicated videos on climate science and alternative models. YouTube has transcripts. I could feed a transcript to ChatGPT and have it give me a summary and a critique of some of the points.
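If I ever actually do that, it would probably look a lot like the earlier sketch: save the transcript to a file, hand it over with instructions to summarize and critique. The filename, function name, and prompt wording below are all made up for illustration; only the chat completions endpoint is real.

```typescript
// Sketch: summarize and critique a saved YouTube transcript. The filename and
// prompt wording are invented; the endpoint is OpenAI's chat completions API.
import { readFile } from "node:fs/promises";

async function critiqueTranscript(file: string): Promise<string> {
  const transcript = await readFile(file, "utf8");
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [
        {
          role: "system",
          content:
            "Summarize the argument in this video transcript, then list its " +
            "strongest points and the claims that deserve skepticism.",
        },
        { role: "user", content: transcript },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

critiqueTranscript("climate-video-transcript.txt").then(console.log);
```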
But can I trust it?
One of the differences between Modern Capitalism and Ideal Capitalism is that Ideal Capitalism is founded on the idea of information transparency. Efficiency in a market is all about setting the right price based on all the correct factors, as I understand it. So what happens when we don't know the real cost of something, like global warming? The price is too low. But what happens when we have the wrong information, when things we think are true are actually bad summaries from an LLM that added a bit of false detail for fluidity? When humans do that on purpose, we call it fraud, corruption, embezzlement, and scams. Will LLMs and similar technology accidentally amplify that, making markets even less efficient? Not to mention that efficient and humane are not the same thing. Ahhhhhhh.
So many things.