Casual Productivity with LLMs

Using LLMs to implement workflow enhancements you wouldn't have attempted otherwise

September 1, 2025


I think AI gets a lot of heat because it doesn’t one-shot some complex task on a poorly architected codebase. Sure, maybe that is the success case for AI that would calm the wolves (it won’t), but because I’ve written about AI a fair bit in the last few blog posts, I thought it was worth demonstrating some Real Unlocked Value in AI outside of vibecoding whatever app slop I may want to pursue on any given weekend.

A New Game Appears

So first, some background. I’m working on a New Game Thing that I can’t talk about directly but rest assured it’s Very Exciting and Up My Alley. And unlike other projects I’ve tackled in the past year, I’m writing most all of the code (for now) instead of the AI. Why?

First, in the project’s current state, it involves writing a lot of the code I really enjoy writing. It’s the sweet little nascent form of a codebase prior to maturity. And it’s also load-bearing. I’m defining and speccing out a domain and architecture that will be absolutely punished and pushed with every edge case possible as part of normal operation. Given this, every line is precious. I know I need to fully understand the bedrock of this game, otherwise I’ll have an untold number of issues later on.

Additionally, as part of this effort, I’m testing out some personal novel theses around game architecture. Trying to do this with an AI would defeat the point, as it would 1) deprive me of the joy of figuring this out myself, 2) likely reach for common patterns for doing the type of thing I’m doing, or 3) require so much guidance on my part that coaching it would take the same effort as just doing it directly. So largely I’m writing everything.

BUT I’m also not trying to write a lot of code. I’m in fact trying to write as little code as possible. This is both a goal of the architecture itself and a practical constraint: given I’m BUSTED (BUsy Tired Endeavoring Dad), I simply don’t have time to crank out lots of lines.

Implied Architecture Requirements

There is a tension here though — especially when it comes to architecture design and engineering. Namely, there is very often some amount of glue code that needs to be written to hook things up. Not only that, but outwardly “simple” architectures often have a LOT of glue underneath. It’s hard work to make things look easy.

In a previous era, this plumbing work was seen as just the grunt work of programming. You develop some architecture and pattern that needs some specific hookups to run, and you pay that as a tax on the design decisions you made.

However, this sort of plumbing work is also more insidious: if a pattern implies a large amount of plumbing to make things work, it’s possible that the pattern won’t be chosen at all, even if the upstream pattern/design is superior for the use case.

LLMs Can Help with Plumbing (but we can do more!)

With LLMs though, this changes. It can be easy to get an LLM to work in your wake, hooking up the things you need as you make your own changes. Writing code and vibecoding with LLMs don’t have to be mutually exclusive — they can be complementary.

LLMs are just Tools you can choose to use. They don’t have to be Everything. I think people often miss this, the same way people don’t often reach for code to solve a coding problem. As in, someone working on a React project won’t reach for a scripting language to pre-generate templates or something, but will instead try to stay in their domain as much as possible (maybe leverage a library, etc.).

But LLMs are flexible tools, and I want to describe how.

Game Background

In the new thing I’m working on, there are Cards. Cards usually have Actions (or some Effect).

How to Make Card Do Thing is a big topic, and there are many different schools of thought on this (that I won’t be talking about in this post — sorry!). For me, I knew that I wanted some sort of composable action system so card behavior could be dictated by smaller atomic units of state change (instead of mapping a card to some specific function directly).

Now, this is easy if cards themselves have atomic actions, but in this game, and many others, cards react to some other event taking place in the game that can be wholly unrelated to the card itself. Meaning a card must anticipate actions performed by something it can’t necessarily account for directly at authoring time.

This is often where the spaghetti meets the road. It’s possible you only get here after coding a lot of other dependent reference stuff and hardcoded general card abilities, and only after it’s stood up do you realize “oh shit, how does Ability X react to Ability Y?”

Again, many schools of thought, but we’re talking about LLMs remember.

For this game, I’m using interfaces as the way to receive events. An interface declares something like IListensToUnitMovement, and will usually have a function on it like OnUnitMove that the implementing class defines. The signature of OnUnitMove will match (but not be literally coupled to) some event defined elsewhere in the code — the actual OnUnitMoved event. Then at runtime, the game uses Reflection to determine if a type implements a given interface, and if so, binds the implemented interface function to the event itself.

So to recap, for one event, a programmer must:

  1. Declare an event that will be emitted
  2. Declare an interface to listen to that event
  3. Update runtime reflection code to bind classes implementing the interface to the event itself

And notably, none of this is coupled. It’s not quite duck typing, but it’s not not duck typing, as a way to get around inheritance trees. And a game can have literally hundreds of these. So not only is this finicky, it’s tedious, with three separate steps for doing a single thing.
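To make the recipe concrete, here’s a hedged sketch of the three steps in C#. Aside from IListensToUnitMovement and OnUnitMove/OnUnitMoved from above, every type name is invented for illustration, and the binder uses a direct type check where the real runtime would reflect over every interface/event pair:

```csharp
using System;
using System.Collections.Generic;

// Illustrative domain types (not the game's real code).
public record Unit(string Name);
public record Position(int X, int Y);

// Step 1: declare the event that will be emitted.
public static class GameEvents
{
    public static event Action<Unit, Position>? OnUnitMoved;
    public static void RaiseUnitMoved(Unit unit, Position pos) =>
        OnUnitMoved?.Invoke(unit, pos);
}

// Step 2: declare the listener interface. OnUnitMove's signature mirrors
// OnUnitMoved, but nothing in the type system actually ties them together.
public interface IListensToUnitMovement
{
    void OnUnitMove(Unit unit, Position newPosition);
}

// Step 3: at startup, find implementers and bind them to the event.
// (A real codebase would discover interface/event pairs via reflection;
// a pattern match is shown here for brevity.)
public static class EventBinder
{
    public static void Bind(IEnumerable<object> cards)
    {
        foreach (var card in cards)
        {
            if (card is IListensToUnitMovement listener)
                GameEvents.OnUnitMoved += listener.OnUnitMove;
        }
    }
}

// A hypothetical card that reacts to movement.
public class AmbushCard : IListensToUnitMovement
{
    public int MovesSeen;
    public void OnUnitMove(Unit unit, Position newPosition) => MovesSeen++;
}
```

Note how nothing forces OnUnitMove and OnUnitMoved to stay in sync — change the event’s signature and you get to chase down every interface by hand.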

It’s also how a lot of people handle this type of problem. They assume this is the Plumbing Tax and move on. But it can be better, and that’s the point of this post: to show you how.

Source Generators to the Rescue

The fact that there is a pre-defined list of things to do per event made me realize this was actually a perfect use case for source generators. I’d only need to do step 1 or 2 and generate the rest of the code. I could declare an event or interface in a predefined class and have the generator pick that up and emit the other parts I needed. It’s a perfect fit, AND it actually couples signatures together, since everything is generated together, so you don’t risk messing up some part of the interface <-> event coupling.

One option here is to then just have the LLM do this for you. You make some changes, and periodically ask it to wire up everything else. This is already an improvement, but we can get much better.

Because the code itself is so formulaic, C# Source Generators are a natural fit. In any other era though, building a source generator just to do this would be A Lot. Aren’t source generators kind of a lot to get set up? (Yes.) Isn’t this just me being sort of lazy and not doing the Work of programming? (Maybe.) Would it be worth it though? (…probably?)
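For flavor, here’s a minimal sketch of what such a generator might look like, assuming the Roslyn IIncrementalGenerator API. The GameEvents marker class, the IListensTo* naming scheme, and the elided handler arguments are all illustrative, not the game’s actual generator:

```csharp
using System.Linq;
using System.Text;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Text;

// Sketch: find events declared on an assumed marker class ("GameEvents")
// and emit the matching listener interface for each one.
[Generator]
public class ListenerInterfaceGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        // Pick out event declarations that live on the GameEvents class.
        var eventNames = context.SyntaxProvider.CreateSyntaxProvider(
                predicate: static (node, _) => node is EventFieldDeclarationSyntax,
                transform: static (ctx, _) =>
                {
                    var decl = (EventFieldDeclarationSyntax)ctx.Node;
                    if (decl.Parent is not ClassDeclarationSyntax owner ||
                        owner.Identifier.Text != "GameEvents")
                        return null;
                    // e.g. "OnUnitMoved"
                    return decl.Declaration.Variables.First().Identifier.Text;
                })
            .Where(static name => name is not null)
            .Collect();

        // Emit one IListensTo* interface per discovered event.
        context.RegisterSourceOutput(eventNames, static (spc, names) =>
        {
            var sb = new StringBuilder("// <auto-generated/>\n");
            foreach (var name in names)
            {
                sb.AppendLine($"public interface IListensTo{name}");
                sb.AppendLine("{");
                sb.AppendLine($"    void {name}Handler(/* args mirroring the event */);");
                sb.AppendLine("}");
            }
            spc.AddSource("Listeners.g.cs", SourceText.From(sb.ToString(), Encoding.UTF8));
        });
    }
}
```

A real version would also read the event’s delegate type to generate matching handler signatures and the reflection-binding table, but the shape is the same: one declaration in, all the plumbing out.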

Building the Source Generator with an LLM

Instead of musing on a will-they/won’t-they here, I thought, “what if I have an LLM build the generator for me?” and I had it working in less than 5 minutes.

5 minutes! This is the kind of task that could have easily derailed a project for a week or two (given my generally limited side project time), and also one that would eat away at the back of my mental space, knowing that Things Could Be Better.

But I did it in 5 minutes and started immediately seeing the benefits. LLMs make it so you can Just Do Stuff. And I’m being efficient here — instead of some setup where I need to continually poll an LLM to generate glue code for me, I had it generate a generator that runs as part of normal C# compilation and produces all the code for me, no LLM required.

I called this post “casual productivity” because this sort of “workflow boosting hack” from a previous era may have taken an outsized amount of effort to manifest in the first place (such that you’d net lose the benefit of the addition due to the time it took to create it), but with LLMs it’s immediately possible, and the benefits show up right away.

Casual Productivity, in my mind, is the idea of being able to aggressively adopt and move toward avenues for doing things Better than your current tools would allow. With LLMs those gains feel much more eminently possible, because making workflow improvements mid-stream used to be such a huge time risk. Now you can at least attempt stuff.

And not only that, but getting this generator in has proven even more valuable: it’s unlocking other workflow enhancements. This is obviously a big endorsement for Source Generators generally (people should use them more!), but also, right now, the generator is completely vibecoded with Claude and it works perfectly.

Conclusion

I don’t know if I ever explicitly wrote this on my blog, but one thing LLMs really do is sort of taunt you from the sidelines. Their Immense Purpose remains over there, beckoning you to engage with them, if only because of the negative space you can easily feel when not using them. They sort of challenge you to Be Better, Be Smarter — you have an effectively free well of talent and effort at your disposal and to not use it does yourself a disservice.

It feels more now like you need specific reasons NOT to use them. I outlined my own near the top of the post. But for putting together the generator — hell yeah, throw it to an LLM. It’s eagerly (maybe too eagerly) waiting to do that work for you.

That’s been a bigger task for me this past year with LLMs — figuring out how to best understand what work they can take on and how that can complement my own process. I think the above is a perfect use case, and I’m sure there will be more down the line!

Thanks for reading!
