My GenAI/LLM usage:
negligible to non-existent
I don’t want to use the hallucination machine.
You may perhaps gauge my general attitude by the fact that I like to call LLMs hallucination machines.
At this time, I don’t use LLMs at all,
beyond occasional attempts to get useful information out of one,
via DuckDuckGo’s Search Assist/Duck.ai (it seldom helps).
I do all my own stuff. Writing, code, diagrams, whatever.
Here are some of my personal reasons for not using this stuff.
People are what matter.
I believe that Jesus will return and establish God’s kingdom on earth.
I’m confident that there won’t be computers in God’s kingdom:
I suspect our high-tech society may collapse before Jesus returns
(there are a handful of passages of scripture that I think hint in this direction),
but if not, the internet and all our manufacturing supply chains will be destroyed by his return.
So, I should focus on the things that can endure to the kingdom: people.
(Yes, I still make my living with computers.
It’s not inherently a reason not to use them,
but it is part of my reticence to put effort into them.)
I believe LLMs are a net negative for humanity.
Do note that this doesn’t preclude my using them;
I also believe the internet is a net negative for humanity,
yet I have built my career and hobbies on the internet.
Good is possible with LLMs, but overall I think they empower evil far more
(and I think this has become obvious far more quickly than with the internet).
I don’t trust the model-makers.
They all have barrows to push, and they’re trying their best to push them, with mixed success.
So far, LLM power and influence have been fairly centralised:
training is expensive, and that doesn’t look likely to change.
A lot of power centralises in a few companies,
and no one should trust them with this power.
History is clear about the corrupting influence of power.
When they succeed in guiding their hallucination machine, you should be worried:
they have politics which will not entirely align with yours.
When they fail in guiding their hallucination machine, you should be worried:
these are the experts, and they can’t control their creation,
don’t even understand why it’s doing what it’s doing.
(There’s quite a body of fiction exploring such scenarios.)
I’m loath to get into specific political examples,
but I think the easiest place to see this was early image generators
on topics of racial diversity, which showed both problems at once.
Data sets were biased in one direction (disproportionately Caucasian);
and system prompts (or their equivalents) were biased in the other direction (be diverse!).
The end result was a lot of errors, both the ludicrous and the insidious:
prompted to recreate historical scenes or scenes of particular locales,
diversity was missing in places where it should be (and where a human artist would have put it),
and forced in places where it was wildly incorrect.
Over time such things become more subtle, yet they are no less present.
OpenAI’s Where the goblins came from (2026-04-29) should be deeply alarming, in both directions.
On the one hand, OpenAI has gained a shadowy control of parts of society and culture.
They do deliberately influence things, and cannot be trusted with the power.
And yet they cannot be a malevolent djinn, for they bumble along unable to fully control their machine!
This is no science experiment; there is no control;
we’re just throwing humanity in the deep end and hoping for the best.
A lot of Progress is like that, but it’s getting more extreme, more dangerous.
(Doomsaying is a time-honoured tradition—
“kids these days”, “no one wants to work any more”, &c.—
but I think honest examiners will admit things are different this time.)
I don’t want to pay for it, and especially not them.
I can understand companies that already pay employees being willing to pay for LLMs to replace them.
But I’m just me, of limited means and sporadic paid work. I don’t have the money for LLMs.
I’m not employing anyone, and I’m not employing an LLM.
I also don’t like the idea of us all paying these few big companies.
It’s a nasty centralising effect.
One of the brilliant things about the software field, compared to many others,
is how easy it is for people to get into, if they’re interested.
The ascendancy of locked-down phones upset this somewhat, but it’s still not too bad.
But LLMs look like they might seriously destabilise this.
If I end up with enough money and projects that return on investment,
I’ll find a human to hire and work with.
I’d really quite like to take an apprentice at some point.
I like coding.
Some want to get things done, others are artists.
I have come to realise over time that I am an artist.
I don’t want the machine to take that away from me,
especially when it’s a worker and not an artist.
I don’t want my skills to atrophy.
Even if I were using an LLM,
I expect I’d insist on still writing all of the actual code.
I don’t want to be a manager.
This is most applicable to the emerging trend of agentic workflows.
My oldest brother headed into the engineering management line, and struggled with delegation,
because he was simply a better developer than all his underlings,
and could do a better job faster.
He steadily worked out how to do it and make it work.
But me? I never had any interest in managing others.
(Specifically careerwise, I’m singularly unambitious.)
Teaching others, sure: I delight in that and will take any opportunity I can find.
But managing? I have enough trouble managing myself.
Agentic coding sounds like having a particularly enthusiastic, speedy, superficially-skilled and unwise underling.
This sounds like my nightmare: all the managing, none of the teaching, little of the coding.
I despise and scorn AI tone.
Full of unction, unwarranted exuberance, frivolous emojis, and excessive verbosity.
I care about writing and words. I do not choose words lightly.
(Back when I lived in Australia, I would sometimes pause mid-sentence for multiple seconds
because the exact right word was not quite coming.
Now that I live in India, where hardly anyone’s English is as strong as mine,
I have to constrain my language and never hit this problem.
I have also come to realise just how bad the average person’s literacy is,
and these models’ verbosity doesn’t interact well with that.)
Some of these issues are specific to chat-style usage,
others apply to code generation as well.
I’m lazy.
Seems like getting good results out of LLMs takes effort and practice.
I’d prefer to spend my effort on other things.
I am not satisfied with their quality.
For chat and search
Since DuckDuckGo added Search Assist (LLM answers),
I think every single answer has fallen into one of three categories:
Obvious. The links below said it, most of the time even in their summaries.
So it’s not necessary.
Wrong. Very often obviously.
So it’s not useful.
Unverifiable. After hunting, I couldn’t find any further information.
So it’s not trustworthy.
On some programming topics where I’ve tried conversations about deeper questions
(especially things like TypeScript keywords and techniques),
I have consistently found it to confidently spout bad information.
Occasionally it will still be useful,
because it will use some keyword or technique in a way not correct for my case,
but relevant enough that I can (with difficulty) find more about it in the TypeScript handbook
(seriously, TypeScript’s docs on its keywords and built-in types are so badly structured and hard to find).
Bear in mind I’m a competent developer,
and I’m mostly only looking things up when they’re complex.
Were I more junior, they might be more useful,
and for more general knowledge, not software-related, maybe it’s better.
Also, the “citation” links DuckDuckGo provides are consistently woefully bad:
when I have followed them (sometimes to confirm something it’s said,
sometimes because what it said was nonsense and maybe the link is useful,
sometimes out of idle curiosity, deliberately trying to counteract selection bias),
they almost never actually support the claims.
Other LLM providers might do better, I don’t know,
but it is indicative of the problems of the general design.
For writing code
I’ve read LLM-generated code.
It’s said to be improving, but ugh that stuff is still tasteless,
even when by some chance it’s correct.
I care about my code.
A combination of a couple of the earlier points, really.
I don’t trust the machine.
No, actually, I do—I trust that if I let it loose it will destroy my code and my reputation.
And you can’t teach the thing.
Those are the reasons that I will list, for now.
Others will choose other reasons.
See also Jacob Harris’s Why I Don’t Vibe Code, which generally resonated with me.
One other reason that some suggest is, I think, worth discussing.
Non-argument: copyright infringement/moral hazard
Some hold that LLMs are embodied copyright infringement.
For my part, I’m not willing to think too much about this,
and the legitimacy or otherwise of the “fair use” exemption claim,
because I disagree so comprehensively with the state of the copyright system.
I consider it plausible that what LLMs do could be considered “learning” for legal purposes,
though they definitely have a tendency to regurgitate, which is problematic;
I do humour “plagiarism machine” as an alternative to “hallucination machine”.
The battle to strike down the fair-use claim completely, I will mention, is lost:
all current LLMs would have to be thrown away and their makers would have to start almost from scratch,
and neither commercial interests nor geopolitics will let that happen.
They have developed faster than is compatible with regulation by Western-style judiciary,
and legislation waited too long—I suspect that ship sailed no later than early 2024.
But as I say, I’m not sympathetic to the copyright infringement argument.
Copyright terms have become ridiculous, as has enforcement in some areas.
The commons is robbed of much.
I’d like to return to something closer to the Statute of Anne, with its 14 years plus a 14-year renewal.
Automatic copyright is the only improvement made since then that I can think of.
And convince ’em, if you can,
that the reign of good Queen Anne
was Culture’s palmiest day.
My preferred scheme gives ten years of free copyright protection,
then charges $10 for year 11, increasing the amount by 50% each subsequent year.
(Details are negotiable, but I think it’s the right direction.)
So year 11 is $10, year 12 is $15, year 13 is $22.50, and so on.
Years 11–20 total $1,133.30; the next decade (years 21–30) costs another $65,351.83.
If your work is actually lucrative you can pay to keep it.
If after 40 years it’s still raking in well more than a million dollars of profit per year,
maybe you’re still renewing ($1,278,340).
But I doubt more than one or two works are still renewing at year 50,
and 60 years is probably pretty much the hard cap ($4.25 billion).
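For the curious, the schedule is just a geometric series, and a few lines suffice to check the figures. This is my own illustrative sketch of the scheme as described (free through year 10, $10 at year 11, then 1.5× per year); the function names are mine:

```typescript
// Sketch of the proposed renewal-fee schedule: years 1–10 are free,
// year 11 costs $10, and each later year costs 50% more than the last.
function renewalFee(year: number): number {
  if (year <= 10) return 0;
  return 10 * 1.5 ** (year - 11);
}

// Total fees over an inclusive range of years.
function totalFees(from: number, to: number): number {
  let total = 0;
  for (let y = from; y <= to; y++) {
    total += renewalFee(y);
  }
  return total;
}

console.log(renewalFee(13));               // 22.5
console.log(totalFees(11, 20).toFixed(2)); // 1133.30
```

The exponential growth is the point: the first couple of decades stay affordable, and the fee itself prices everything else back into the commons.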
The general idea is that we want everything to end up in the commons as soon as it’s not helping the creator,
and that in practice, copyright is helping very few people after even ten years.
Anyway, I digress.
I should leave complaining about the copyright system
and the abuses large companies add atop it to another page.