Key Principles for Developing with AI

This article was originally posted on LinkedIn.

During earlier experiments, I discovered key guidance for framing prompts and conversations when using AI as a "pair programmer" or "tasked agent." Extensive research exists on how to optimize model output, with various approaches like RISEN offering different strategies. My own journey involved moving from loosely worded, output-focused prompts to highly prescriptive ones. I generally found that for code assistance and development, the most crucial signals for the model came down to a few guidelines:

  • Reinforce context
  • Identify the valuable output
  • Leave more artifacts than just code

These guidelines helped produce consistent outputs that I could review (sometimes with the AI) and iterate on to achieve desired outcomes. I identified the second principle, "Identify the valuable output," through prior explorations. The other two became significant during more complex efforts.

Deep Dive into Prompting Principles

Reinforce Context

Creating digital solutions is as much an art as it is logic. Modern AI tools can rapidly generate scaffolding or constructs, making scope creep a natural consequence of the speed of feature implementation. The mantra "just one more thing" can lead to conversational streams with outputs that break previous functionality, ignore important criteria, or simply don't align with the overall approach and design.

Reinforcing context is crucial for making the implicit explicit. I quickly realized two things:

  1. Each conversation with the AI needed to be bounded, as far as possible, to a single concept: implementing a feature, troubleshooting a problem, or discussing comparative approaches. Engaging the AI on too many topics, much like a person, can lead to confusion. This is primarily because, unlike a human, the model consumes only a portion of the chat history as a prelude to the current conversation.
  2. As solution complexity increased, consistently reinforcing the environment became vital. I started using phrases like "without breaking previously implemented stories, features, or capabilities" and "based on the current script and dependencies." These phrases became standard inclusions in new AI conversations. For example: "I want to implement feature X / I want to implement the following user story: as a user I want to do A so that I can accomplish B. I think it should be implemented using Y. I would like you to implement this feature as part of file Z, while taking the time to ensure that no stories, features, or capabilities are lost, broken, or orphaned in the process." (A reusable template along these lines is sketched just after this list.)
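
To keep that phrasing consistent, it can help to treat it as a fill-in-the-blank template rather than retyping it for every conversation. Here's a minimal sketch in Python; the build_prompt helper and its field names are purely illustrative, not part of any AI tool's API:

```python
# A minimal sketch of a reusable context-reinforcement prompt.
# The helper name and field names are illustrative only; they are
# not part of any AI tool's API.

CONTEXT_REINFORCED_PROMPT = (
    "I want to implement the following user story: as a {role}, "
    "I want to {action} so that I can {outcome}. I think it should "
    "be implemented using {approach}. I would like you to implement "
    "this feature as part of {target_file}, based on the current "
    "script and dependencies, while taking the time to ensure that "
    "no stories, features, or capabilities are lost, broken, or "
    "orphaned in the process."
)

def build_prompt(role: str, action: str, outcome: str,
                 approach: str, target_file: str) -> str:
    """Fill in the template so each new conversation opens with the
    same explicit guardrails."""
    return CONTEXT_REINFORCED_PROMPT.format(
        role=role, action=action, outcome=outcome,
        approach=approach, target_file=target_file,
    )

print(build_prompt(
    role="user",
    action="reset my password",
    outcome="regain access to my account",
    approach="the existing auth module",
    target_file="auth.py",
))
```

The point isn't the code itself; it's that the guardrail language ("no stories, features, or capabilities lost, broken, or orphaned") ships with every new conversation by default.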

Identify the Valuable Output

Defining the valuable output in an AI conversation matters: it tells the model what kind of answer is acceptable, and an undefined output can lead to unexpected behavior. In a development context, this often means explicitly stating how the model's output should be structured. Phrases like "as a bash file script" or "integrated into the current X script" are important signals, and their inclusion (or exclusion) will alter the AI's behavior.
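
As a rough illustration (the wording and the script name backup.sh are mine, and results will vary by tool), compare an underspecified request with one that names the valuable output:

```python
# Two versions of the same request. Both the wording and the script
# name (backup.sh) are hypothetical examples.

VAGUE_PROMPT = "Add retry logic to the backup process."

EXPLICIT_PROMPT = (
    "Add retry logic to the backup process, integrated into the "
    "current backup.sh script rather than written as a new file. "
    "Show the full updated script."
)
```

Naming the target script is what keeps the tool from spawning a new file, which is exactly the failure mode I ran into next.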

For instance, and likely based on my usage history, Amazon Q Developer began automatically writing new scripts. This behavior devolved into a "script within a script within a script" scenario, sometimes resulting in a complete rewrite of a section that would have been better handled by integrating the changes into the original script. Clearly defining the valuable output was key; asking "show me how to integrate this back into the original (x) script" yielded more meaningful changes. Even so, there were points where the AI stopped "listening," necessitating a new chat altogether.

Leave More Artifacts Than Just Code

This last principle balances speed of delivery with developing valuable, long-lasting solutions. At one point in development, I was adding functions every 20 minutes: user management, roles, database interactions, API interactions. Code was generated so quickly that after completing an internal "milestone" (a feature deemed usable), I'd think, "Oh, I'll implement this other thing to enhance the base functionality." By the time I finished, I had completely broken the initial functionality.

One late night, I corrupted my workspace so thoroughly that I had to revert to the Git repository and pull everything down again. So, although this section is titled "Leave more artifacts than just code," the hardest thing when using AI tools is slowing down enough to remember to commit and push. This was a consistent struggle, as the temptation of "that was done in 10 minutes, I can just push forward and do this other thing" can have significant consequences that cost far more time.
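
One way to make that habit mechanical, sketched here purely as an illustration rather than something from the project, is a tiny pre-conversation check that nags when the working tree has drifted too far without a commit:

```python
# An illustrative "checkpoint" reminder, not a tool from the project.
# It warns when the git working tree has drifted too far without a
# commit, as a guardrail against the "just one more thing" trap.
import subprocess

DIRTY_FILE_THRESHOLD = 5  # arbitrary; tune to taste

def uncommitted_files() -> int:
    """Count files git reports as modified, added, or untracked."""
    result = subprocess.run(
        ["git", "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    )
    return len([line for line in result.stdout.splitlines() if line.strip()])

if __name__ == "__main__":
    dirty = uncommitted_files()
    if dirty >= DIRTY_FILE_THRESHOLD:
        print(f"{dirty} uncommitted changes -- commit and push "
              "before asking the AI for the next feature.")
```

Run it before opening a new AI chat; the threshold is arbitrary and worth tuning to your own pace.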

As the project grew more complex, it also became clear that the AI was no longer reading all the files necessary to assist in development. This reinforced the "make the implicit explicit" rule while still leveraging AI for faster execution. While developing the concept of a user, feedback made it obvious that we needed to add user roles and a concept of user groups (teams) to manage access to stored data. At one point, I hit a wall: new work was clearly overwriting old work, despite applying all the previous rules. I took a break, genuinely concerned I'd reached a capability limit for scale.

After a water break, I asked the AI to describe the current data model and create a Markdown document reflecting it. I then used this schema document to frame subsequent requests and establish a new baseline. It worked—I was able to expand my work and keep moving forward. Credit where credit is due, this idea came to me after an early project test where I tasked the AI with creating a readme.md for the project repository to describe the code.
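
The pattern generalizes into a two-step request that can be reused: first have the AI write the artifact, then anchor everything that follows to that artifact. A sketch, with an assumed file name (docs/data-model.md):

```python
# Sketch of the two-step "artifact baseline" pattern. The file name
# docs/data-model.md is an assumption for illustration.

DESCRIBE_SCHEMA = (
    "Describe the current data model and create a Markdown document, "
    "docs/data-model.md, that reflects it exactly as implemented."
)

ANCHORED_REQUEST = (
    "Using docs/data-model.md as the authoritative baseline, add a "
    "concept of user groups (teams) without breaking previously "
    "implemented stories, features, or capabilities."
)
```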

What do you think?

Wrapping this up, what I've learned about working with AI as a dev partner boils down to being intentional with your prompts. By reinforcing context, getting clear on what valuable output actually looks like, and forcing myself to leave more artifacts than just code, I've found I can steer the AI toward building something useful and reliable. The speed is amazing, but you have to balance it with a deliberate approach to keep things stable long-term and to know where you stand. It's about turning the AI from a code generator into a genuine collaborator, one that helps you tackle complex dev challenges and actually get where you want to be.