[kwlug-disc] On usage ... on simplistic usage of GenAI in Node project

Chris Irwin chris at chrisirwin.ca
Thu Mar 19 13:40:09 EDT 2026


On Wed, Mar 18, 2026 at 11:29:44PM -0400, Paul Nijjar via kwlug-disc wrote:
>
>Putting aside the environmental externalities of LLMs, this seems like
>a pretty good policy.
>
>Given the strong anti-LLM bias from many in open source (the majority
>of whom are on Mastodon, it seems) this crisis might result in a
>fascinating natural experiment:

It is certainly getting annoying in the corporate world right now, and I 
don't deal with 1000 would-be contributors testing their "prompt 
engineering" "skills" on me.

I could spend a few days doing R&D on a project, weighing the options, 
and make a request to purchase a product. And the feedback I receive 
from up the chain is "Did you consider X? It's designed as a drop-in 
for Y and Z, and solves all of the requirements you've outlined". 
Complete with emoji checkboxes confirmation-biasing this to be the 
obvious solution.

Ignoring that it couldn't have been designed as a replacement for Y and 
Z, because those are different technologies separated by 30 years. 
And that's saying nothing of the suggestion that it's "drop-in", when 
the project scope specifically includes major rewrites and a language 
change.

Human research, reasoning, and experience are being ignored because a 
computer said so. I expect everything I say will be run through an AI to 
see if I'm correct, ignoring decades of personal experience. And at 
significant environmental impact.

>Popular project X has a schism over whether to allow LLM-generated
>contributions, and has a fork. One side of the fork allows
>LLM-generated contributions, and the other prohibits it.
>Which project is more successful? Why?

I think that realistically, outlawing AI-assisted coding isn't going to 
be entirely feasible. At its core, it's a tool, and it *can* be useful. 
However, the same rules need to apply as for human-generated code: 
minimal changesets for the task, adhere to coding standards, etc. Don't 
refactor a 10k-line file because you've added a single param to a single 
function. (Note: VSCode was bad for this *before* AI infested it.)

I do worry about the growing reliance on these technologies, though. 
Right now, a skilled developer can review generated code because they've 
*written* similar code. But skills atrophy, and you'll start to trust 
the tools more over time. Who's to say you'll still catch mistakes later 
on, or that you'll even keep up to date with changes in standards and 
the like, and be able to properly audit what the AI is writing?

>but looking at the outcomes of such natural experiments could be
>helpful in understanding which of these variables matter.

Not sure if anybody has been following the bcachefs saga recently (being 
excised from the kernel, etc), but it's recently taken some very strange 
AI-related turns, where AI-generated code is the least weird part of the 
story.

-- 
Chris Irwin

email:   chris at chrisirwin.ca
   web: https://chrisirwin.ca

