May 13, 2025

Thoughts on AI

👋 I'm currently looking for a new role. Hire me!

How many of these posts have been written in the last year? Well, here you go. One more.

For context ... I am a generalist technologist who writes code, diagrams, and documentation. I work right at the crux of where AI is improving daily and that, recently, has been scaring the shit out of me. I use Copilot occasionally on a side project and I'm still trying to get an LLM (Claude to be exact) to do the kinds of amazing things I keep hearing about both from people I know (trustworthy individuals who have my phone number) and people I don't (strangers writing on the internet). I have spent the last year or so generally unimpressed with what AI can do with regard to what I do professionally, but in the last month I've realized that this technology will absolutely change how I do my job in big ways, starting right about now.

I'm going to start with my initial feeling on the matter: sadness.

I like writing code and solving technical problems. I feel both sad and angry when I imagine a world where I can't make a good living doing that.

Are humans going to stop writing code anytime soon? No. Are humans ever going to run out of technical problems to solve? No. If either happened, would my predicament be any worse than, say, manufacturing jobs leaving the US? No.

But this is what immediately ran through my head when I first started hearing and seeing that you could write working code with an LLM.

I truly enjoy writing clear, organized code that's meant for other people to read. I like the process of solving those little challenges that you tackle throughout the day as a programmer. I like solving a problem first, then going back and making the code look nice and read more clearly. I like deciding what lines need a comment and what can be written so that they don't. And I like giving other people kind, constructive feedback on all of this in a team setting.

I will say right now: there is clearly a technique for using an LLM to help you code that I don't currently have. Coding along with an LLM is not a fun or interesting experience for me and has not, thus far, been more productive than going at it on my own. I've felt for a long time that this was a clear sign that AI coding or vibe coding or CHOP (chat-oriented programming) was going to be a lot of hype and not much more than an interesting new tool that I could probably (hopefully) ignore.

Up until about a month ago. I don't know what changed (perhaps spending more time on LinkedIn) but my intuition shifted.

Everyone who writes code needs to know how to do it in concert with AI. Starting yesterday (or, if that's not possible, then right now).

I'm hoping that this admission will sound like "wow, this guy has been living under a rock" to half of you reading this and like "oh no, he's falling for this too" to the other half.

Despite a less-than-optimal experience, I've used Claude to very quickly solve a couple of technical problems where I understood the space I was working in but not the details (specifically, Kubernetes and an Obsidian plugin). It's clear that asking it to just write a huge chunk of code does not get good results but giving it small tasks with clear instructions can help write boilerplate code quickly.
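To give a sense of the scale I mean by "small tasks with clear instructions," here's a hypothetical illustration (not the actual Kubernetes or Obsidian code, and the names are made up): an ask roughly like "write a Node function that loads config.json and exits with an error if apiKey is missing" tends to come back usable on the first try.

// Hypothetical illustration of a small, clearly-scoped ask and the boilerplate it produces.
const fs = require('fs');

function loadConfig(path = './config.json') {
  // Read and parse the config file, then fail loudly if the required key is absent.
  const config = JSON.parse(fs.readFileSync(path, 'utf8'));
  if (!config.apiKey) {
    console.error('❌ apiKey not found in config!');
    process.exit(1);
  }
  return config;
}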

I've also found Copilot to be a helpful advancement in code completion. Typing:

if (!has

... and having it suggest:

if (!hasThisImportantProperty) {
  log(`❌ Important property not found!`)
  process.exit(1);
}

... is pretty neat and saved me a solid 20 seconds of typing. Help in the 10-100 character range is pretty good. Writing a method name and then reviewing 10+ lines of generated code is not good, and I have yet to accept a suggestion longer than a couple of lines.
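For contrast, here's the kind of longer suggestion I mean (a hypothetical sketch, not an actual Copilot transcript, and the function is made up): you type a signature and a whole block appears.

// Type this much:
function groupEventsByDay(events) {
// ... and the suggestion fills in the rest:
  const groups = {};
  for (const event of events) {
    // Bucket each event by its UTC calendar day (YYYY-MM-DD).
    const day = new Date(event.timestamp).toISOString().slice(0, 10);
    if (!groups[day]) {
      groups[day] = [];
    }
    groups[day].push(event);
  }
  return groups;
}

Even when a block like that turns out to be fine, I still have to read every line and check the assumptions it made (UTC days, a timestamp field) before I can trust it, which usually costs more than the typing it saved.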

Again, I acknowledge my lack of skill here. I am reluctant to adopt this way of working, even if I know it will have to happen at some point. That's probably because:

I fucking hate AI.

I really, truly fucking hate it right now.

The generated images on blog posts (hard skip when I see them), the AI summaries that tell me nothing (I'm looking at you, Apple and Strava), the acronym infecting the description of every product and company out there. Unhelpful at best, mostly just gross.

I'm having a tough time coming to terms with the idea that my open source software, written for people to learn and solve problems, is being used to train a model that then takes the credit for helping people solve problems. I feel icky using Copilot.

"What's the hurry?" my neighbor asked, rhetorically, about the pace at which AI is infecting everything. I don't know ... but, also, of course I do. It's the same thing it always is when humanity is pushed aside in the name of "progress." @itsophieschoice on Substack said it so brilliantly:

The story always sounds the same: Technology will change everything. Adapt or perish. Yet adaptation mysteriously always means producing more for less, with an ever-smaller slice of the pie.

AI could be a very interesting, long-term experiment in the advancement of computers, the assimilation of human knowledge, and so many other things. But we're past that point now because it's here, there, and everywhere. I didn't have any say or any control over how it will be used, to my benefit or detriment. It's just here now. Like billboards, microplastics, and social media.

sigh ... I'll leave that there for now.

The fact is, AI has changed and will continue to change technology, for the good and the bad. I still enjoy making technology and I'm not retiring anytime soon so what is there to be done?

Back to my beginner's mind.

As always, the road leads back to the beginning. I can hem and haw about the newfangled widget and how the old way is the right way and how craft will win and how an AI could never do what I do. And I definitely reserve the right to continue doing that. But I also don't want to be what Steve Yegge calls a "stubborn developer" and miss out on an opportunity to grow with this technology and possibly even help shape it.

Capitalism critiques aside, LLMs are an astounding technology that can be used to do incredible things in the right hands. A friend of mine built an entire project management system in Obsidian using AI, scratching a major itch for himself and learning a ton along the way. He paired with me to try and solve my own Obsidian problem and we got something working much more quickly than I expected.

So, yes, I now know that this is the end of programming as we know it. But I'm starting to see it a tiny bit more like Tim O'Reilly does:

When there’s a breakthrough that puts advanced computing power into the hands of a far larger group of people, yes, ordinary people can do things that were once the domain of highly trained specialists. But that same breakthrough also enables new kinds of services and demand for those services. It creates new sources of deep magic that only a few understand ... we’re beginning a profound period of exploration and creativity, trying to understand how to make that magic work and to derive new advantages from its power ... This is not the end of programming. It is the beginning of its latest reinvention.

If I take a deep breath and pause, I can use this as a counterpoint to the AI dystopia I occasionally lose myself in. I remember that I have a personal data project that I care deeply about and getting that operational and in more hands means that more folks are taking control over their personal data. I remember that my kids and I have been talking about making a video game together and now we could probably get something working in a weekend.

I am sick to my stomach thinking about watching movies and reading books and comics made with AI. I will never pass my blog posts through an AI before I publish them and I will never let an AI write a text message to my wife. The words that I use to connect with people need to come from me.

But the code that I write, the wiring I use between systems? Maybe the preciousness and ownership that I feel over these purpose-built names and phrases is misplaced. How differently would I write code if I knew no one else, myself included, would ever see it again? Would I care about clear method names and setting variables instead of inline logic and writing guard clauses instead of another layer of indentation?
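To make that question concrete, here's a toy example of my own (nothing from a real codebase): the version written to be read, next to the version written only to run.

const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Written for a reader: guard clauses and a named value instead of another layer of nesting.
function daysUntilDue(task) {
  if (!task) return null;
  if (!task.dueAt) return null;

  const msRemaining = task.dueAt - Date.now();
  return Math.ceil(msRemaining / MS_PER_DAY);
}

// Written only to run: same behavior, one more level of indentation, the math inlined.
function daysUntilDueNested(task) {
  if (task) {
    if (task.dueAt) {
      return Math.ceil((task.dueAt - Date.now()) / 86400000);
    }
    return null;
  }
  return null;
}

Functionally identical; the difference only matters if a person ever reads it.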

These are all things I'm still thinking about and contending with. I don't know exactly where I stand with it all, but I do know that it's moving forward, with or without my blessing. The best I can do right now is treat it like the newest JS framework: take a moment to facepalm and then give the docs a read.
