December 2025 Lessons
Cursor and AI Agent Development
I have been using Cursor’s AI agents off and on for a while, but December was the first time I really applied them to real projects. AI coding agents are shockingly good at troubleshooting issues and developing features. That may not surprise those who have been using these tools for a while, but it was my first experience leaning on them heavily.
Over the past month, there were three situations in which Cursor’s (and Warp’s) agents significantly helped with my development tasks.
First, I ran into a performance issue in a production system that did not reproduce in the dev environment. I asked the agent to generate a list of possible causes and steps that could be taken to rectify them. The AI did this rather well: it understood the queries that NestJS was issuing to the database and which indexes the tables were missing. After adding those indexes, page load performance improved significantly. I was impressed with the solutions the AI provided and how directly they applied to the codebase I was working in.
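The real fix happened in a NestJS service against a production database, but the underlying lesson is easy to reproduce. Below is a minimal sketch in Python using SQLite from the standard library (the orders table and its columns are hypothetical, not from my codebase): the same query flips from a full table scan to an index lookup once the index exists, and with dev-sized data the scan is often too fast to notice.

```python
import sqlite3

# Hypothetical schema standing in for the real production tables.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
# Production-scale row counts are what expose the missing index.
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(100_000)],
)

query = "SELECT * FROM orders WHERE customer_id = ?"

# Without an index, the planner does a full table scan
# (the detail column typically reads something like 'SCAN orders').
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)).fetchall())

# Add an index on the filtered column, as the agent suggested for my tables.
conn.execute("CREATE INDEX idx_orders_customer_id ON orders (customer_id)")

# Now the planner reports an index search instead of a scan
# (e.g. 'SEARCH orders USING INDEX idx_orders_customer_id (customer_id=?)').
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)).fetchall())
```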
Second, I added a new feature to a Flutter Android app despite having no mobile app development experience. This is exactly the kind of task I like AI for: I have years of software development experience but none with Flutter, so I know what I want the app to do and how to ask the AI for that functionality. It took a little back and forth with the agent, as well as some manual code changes, but overall I enjoyed the collaborative approach. For this one I actually used both Cursor’s agent mode and Warp.
Third, I wrote unit tests for a Python API. I have been doing Python development for a while now, but unit testing has never been something I invested the time to learn. The API already had a suite of tests, so I thought it might be worth having the agent take a crack at it. This task had mixed results. The agent came up with a good set of tests; however, the way they were mocked led to several errors. I prompted the AI a few times to fix the tests, but that never actually worked. I had to step through the code and understand what was happening in order to make the changes myself. The issue ended up being that the service mock’s return value was not configured correctly.
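For what it is worth, the mistake the agent kept making is a common one. Here is a minimal sketch using Python’s unittest.mock (the UserService and user_greeting names are hypothetical, not from the actual API) showing the difference between merely mocking a method and actually configuring its return value:

```python
from unittest.mock import MagicMock

# Hypothetical service and handler standing in for the real API code.
class UserService:
    def get_user(self, user_id):
        raise RuntimeError("talks to the real database")

def user_greeting(service, user_id):
    user = service.get_user(user_id)
    return f"Hello, {user['name']}!"

service = MagicMock(spec=UserService)

# The broken pattern: the method is mocked, but its return value was never
# set, so the code under test receives a default MagicMock instead of data
# shaped like the real response, and assertions fail in confusing ways.
print(user_greeting(service, 1))  # "Hello, <MagicMock ...>!"

# The fix: configure the mocked method's return value explicitly.
service.get_user.return_value = {"name": "Ada"}
assert user_greeting(service, 1) == "Hello, Ada!"
```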
This all leads me to believe that AI agents are great tools in the developer tool belt. However, I can see the temptation to just have the agent write a ton of code and completely trust the output (YOLO mode). I have to be an even more disciplined developer and actually review the output from these agents and understand the code. I still need to think critically about what is being done, whether it actually addresses the request, and whether there are any hidden traps. I fully anticipate that I will keep using AI agents like the ones in Cursor and Warp. That means I will likely be able to solve more difficult tasks more quickly than before, but I am going to have to be very purposeful about reviewing and understanding their output. The unfortunate thing is that reviewing code is the least fun part of the job. But maybe I can get past that if AI coding agents help me get working software out the door and into the hands of users faster.
More lessons to come in 2026! What has been your experience with coding agents in 2025?