There has been so much conversation about artificial intelligence over the last five years that I hesitate to write anything at all. I wanted to make my experience clear, make the research I had done available, and elicit discussion. But it seems like that's all anyone ever talks about on the couple of social media outlets I hang out on. And there is a lot of heavy breathing.

There is a good reason for that. Some of this is kinda life changing. I mean, none of this just popped up out of nowhere. We have been working on language models and machine learning - the two technologies at the heart of this mess - for nearly 50 years. Only in the last seven years has the hardware existed that could run a (once theoretical) large language model. Only in the last five have we had it in place. Only in the last twelve months have we figured out how to use it. Only in the last four or so months have there been products built that can make use of it.
Things are moving fast.
My path so far
I read Matt Shumer's article recently. He's a techbro, sure, but he is a pragmatic developer and well respected. His thoughts gave me pause.
https://shumer.dev/something-big-is-happening
He says "I know this is real because it happened to me first." I feel you, bro. When OpenAI first published its API, I wrote a simple application that would accept the list of vulnerabilities from my pentesting template and write quick descriptions. It didn't work very well, so I set it aside, but I kept thinking about it.
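For the curious, that early experiment was conceptually something like the sketch below. This is not the original code: the prompt wording, the model name, and the finding names are all illustrative, and it assumes the current `openai` Python client rather than whatever the API looked like back then.

```python
# Hypothetical sketch: feed pentest-template vulnerability names to a chat
# completion API and get short report descriptions back. Prompt text, model
# name, and findings are illustrative assumptions, not the original app.
import os


def build_prompt(finding: str) -> str:
    """Turn one template vulnerability name into a description request."""
    return (
        "Write a two-sentence description of the following web application "
        f"vulnerability for a penetration test report: {finding}"
    )


def describe_findings(findings: list[str]) -> list[str]:
    """Call the OpenAI API once per finding; requires OPENAI_API_KEY."""
    from openai import OpenAI  # pip install openai

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    out = []
    for finding in findings:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat-capable model would do here
            messages=[{"role": "user", "content": build_prompt(finding)}],
        )
        out.append(resp.choices[0].message.content)
    return out
```

The interesting part then, as now, was all in the prompt: the API plumbing is a few lines, and the quality of the descriptions lives or dies on how the finding is framed.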
Later, when Microsoft introduced Copilot, I used it for writing little bits of JavaScript whose syntax I couldn't remember - the kind of thing I'd otherwise have to look up. It was OK at that. When I asked it to build a class, though, not so much. Still, it was useful for fixing my shitty asynchronous JavaScript code.
Then I went to CodeMash, where Cory House - yes, that Cory House - was giving a talk on using AI tools in development. I thought, "OK, I'll give this a try." He was using these new vibe tools, so I installed Claude and followed along while we built an entire working, tested, functional, and fairly secure app in a couple of hours. For comparison, if I had started with a template, I could have probably written it in 40 hours, at best. It was pretty impressive, but it was a demo, effectively, and I have been fooled before.
Also, I disliked Claude. It was sycophantic even when I asked it not to be. I like my tools to be tools, but that doesn't really have any bearing on this post. I bring it up because I tried them all and ended up with Mistral, which is way better for me and has the benefit of being European and subject to their privacy laws. Also, ya know, not being used to spy on me by my government. But that's another post altogether.
So back to the story at hand. I spent some of my research time working with Mistral to analyze source code, review data for analysis, and format documents. It was nice. But I kept wondering about actually making software, from soup to nuts. I never thought it would be doable. Testing, review, everything would just be more work than using template code and writing stuff myself.
Then I learned how to do projects. I literally asked Mistral how it liked to do projects and it told me. I suggested some improvements to the way it was laid out and it made them. I started to get a feel for how this could be used. I asked it to do something for me that I had been putting off, which was converting OWASP ASVS 5.0 to a test plan that I use in my application pentesting. I had a short conversation with the console, explaining how I use the ASVS and what kind of document I'd need to use as the test template.
It created exactly what I needed. In 5 minutes. No notes. Seriously, I have used it for the last 3 tests I have done, and it is way easier than the ones I made for myself with V3 and V4. I was, frankly, blown away. But it still wasn't code.
Then Devstral 2 came out. It is Mistral's equivalent to GPT-5.3-Codex and should be able to do the kinds of things Matt talked about above. So I tried it. I had an actual project: review log records from the Azure APIM WAF and recommend revisions to the rules based on the findings. It needed to speak natively to Splunk, process the massive datasets in the logs, store the resultant analysis, and produce Azure-specific rules ready for import.

Took about 20 minutes. All written in Python, with unit tests, and judiciously commented. Needed an API key for Splunk, then ran its own integration testing. Ran the report, decided it could make stronger correlations, figured out how, changed the code, ran all of the tests again.
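To give a flavor of what a pipeline like that involves, here is a minimal sketch of just the aggregation step: group WAF log records by rule and flag the rules that are blocking enough traffic to warrant tuning. The field names (`ruleId`, `action`) and the threshold are hypothetical, not taken from the actual project or from Azure's real log schema.

```python
# Sketch of one step in such a pipeline: count blocked requests per WAF
# rule and surface the rules worth reviewing. Field names and the default
# threshold are illustrative assumptions, not the real Azure log schema.
from collections import Counter


def rules_to_review(records: list[dict], threshold: int = 100) -> list[str]:
    """Return WAF rule IDs that blocked at least `threshold` requests."""
    blocks = Counter(
        rec["ruleId"] for rec in records if rec.get("action") == "Blocked"
    )
    return sorted(rule for rule, count in blocks.items() if count >= threshold)


# Usage: 150 blocks from one rule crosses the (made-up) threshold.
logs = [{"ruleId": "942100", "action": "Blocked"}] * 150
print(rules_to_review(logs))  # → ['942100']
```

The real work, of course, was in the parts around this - talking to Splunk, correlating across fields, and emitting importable Azure rules - which is exactly the plumbing the agent wrote and tested on its own.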
I watched.
The human take
So Matt's key point is tangential to my experience. He posits that normies should really start to take heed. Are they gonna use Claude Code? No, of course not. But their bank will. And their doctor will. And their builder will. And their lawyer will. Folks need to understand that we are really, really close to computers being able to do a lot of things better than us. Again.
My son Adam is an anthropology student and put together a few words about the human take on all of this. You should read that too.
https://adamthropology.ghost.io/a-small-complaint-about-the-current-state-the-world/
I'll leave the anthropology to the anthropologists, but I've got some economics background and forty years of business ownership, and I find a lot to agree with in Matt's and Adam's articles.
What I'm doing now
The number one thing I am doing as a software engineer is keeping my feet wet. I already have several code generators that I have written over the years so I still use those to make common things I have to build for customers. But I will take an extra few minutes and see what Mistral has to say. Will I use it? Maybe, maybe not, but I want to see how close it is coming because - as I think I mentioned - things are moving fast. I didn't let internet development pass me by. I didn't let web services pass me by. I don't think I am going to let this pass me by.

As an appsec engineer, I am recognizing the reality. I have been asked to test a double handful of agentic AI bits already and have a blog post incoming on that soon. I have been keeping up with the AI features in the tools I use, like Burp Suite by PortSwigger. I advise clients on how to verify code their developers produce using AI consoles, and how to test it with an eye to the common mistakes such tools have been shown to make. (Pro tip: wanna know what kinds of mistakes a developer AI console might make in security? Ask it. The answer has been right, in my experience.)
In life, I'm learning to herd goats.
No, not really. I don't think there will be a global shift where all developers are replaced with 20 bros with Claude Code. It's not going to happen, probably ever. But I recognize that it is foolish to ignore popular trends even if they don't directly impact me. So I am watching. Keeping an eye on bits that I know might be being automated and taking care with the results they put out on my behalf. Even phone-based order systems. I almost bought a new iPhone the other day calling AT&T because I coughed at an inopportune time. I'm not jumping on a bandwagon for anyone, sorry. But I'll watch and listen.