Coding with AI - High Leverage LLM Techniques
Steven Mays · Software Engineering
Unlock the full potential of your coding workflow by mastering high-leverage techniques with AI.

I’ve been integrating large language models (LLMs) into my coding workflow for quite some time now, and they’ve fundamentally transformed how I approach software engineering tasks. Despite the hype and skepticism around their capabilities, I’ve found these tools incredibly valuable when used correctly.
- Understanding the Capabilities (and Limitations) of LLMs
- Account for Training Cut-Off Dates
- Give Clear and Specific Instructions
- Context is Everything
- Architectural and Design Decisions
- Develop Step-by-Step Plans
- Iterate Through Conversation
- Validate Everything Through Testing
- Leverage Code-Execution Tools
- Rapid Prototyping through “Vibe-Coding”
- Use LLMs as Knowledge Explorers
Understanding the Capabilities (and Limitations) of LLMs
Forget the overblown AGI narratives—think of LLMs more as ultra-efficient coding assistants. They’re brilliant at quickly generating boilerplate, exploring libraries, or writing first drafts of functions, but they aren’t infallible. You’ll see weird mistakes, hallucinated libraries, or outdated recommendations due to training cut-off dates. For instance, OpenAI’s models are typically trained up until October 2023, meaning they’re unaware of newer libraries or breaking API changes.
Practical Takeaway: Stick to well-established libraries or provide LLMs with context and recent documentation snippets.
Account for Training Cut-Off Dates
A significant limitation to keep in mind is the training data cut-off date. Most publicly available LLMs have a clearly defined endpoint to their knowledge base (for instance, OpenAI’s models often cut off around October 2023). When using new technologies or recently updated libraries, it’s crucial to supplement the LLM’s understanding with updated documentation or your own contextual knowledge. This ensures the responses and code samples you receive remain relevant and accurate.
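For example, when a library has changed since the model’s cut-off, I paste the current documentation directly into the prompt. A rough template (the bracketed parts are placeholders to fill in):
Documentation Supplement Prompt:
I’m using [LIBRARY] version [X], which was released after your training cut-off. Here is the relevant section of the current documentation:
[PASTED DOCUMENTATION SNIPPET]
Base your answer and any code samples on this API, not on older versions you may have seen during training.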
Give Clear and Specific Instructions
The best results come from treating LLMs as highly capable but inexperienced junior developers. Your role becomes that of a technical lead, giving detailed specifications. Here’s a TypeScript example illustrating how I might prompt for a specific functionality:
// Prompt: Create a function in TypeScript that fetches JSON data, checks if the payload size exceeds a specified limit, and throws an error if it does.
export async function fetchJsonWithLimit(url: string, maxSizeKB: number = 512): Promise<any> {
  const response = await fetch(url);
  // Content-Length may be absent; fall back to 0, which skips the size check.
  const contentLength = parseInt(response.headers.get('Content-Length') || '0', 10);
  if (contentLength > maxSizeKB * 1024) {
    throw new Error(`Payload exceeds the maximum allowed size of ${maxSizeKB}KB.`);
  }
  return response.json();
}
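Calling it is then a one-liner (the URL and size limit here are purely illustrative):
const config = await fetchJsonWithLimit('https://api.example.com/config', 256);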
I find writing these concise prompts faster than hand-coding, especially because the LLM handles the minor details like error handling and type annotations.
Context is Everything
Providing rich, relevant context is crucial. Fill the context window with code samples, related project files, or detailed descriptions of the task. Tools like GitHub Copilot in VS Code are especially effective when you explicitly add the relevant files or directories to the query, guiding the LLM toward more accurate output.
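As a rough sketch, a context-rich prompt might look like this (the file names and task are hypothetical):
Here are the relevant files for this task:
// src/api/client.ts
[file contents]
// src/types/user.ts
[file contents]
Task: add retry logic with exponential backoff to the request method in client.ts, following the existing error-handling conventions.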
Architectural and Design Decisions
When using LLMs for architectural or design decisions, I prefer describing the problem or pattern in depth and prompting: “Give me 5 common architectural patterns that would work for this problem. Include pros and cons for each, and sort them in order of the best first. Best means the most pros with the fewest cons.” This provides a structured starting point and helps clarify the best approach.
Architecture Prompt:
I need architectural design assistance for the following software system:
[SYSTEM DESCRIPTION]
Please include details about:
- Business domain and core functionality
- Scale and performance requirements
- Technical constraints (languages, platforms, etc.)
- Integration points with other systems
- Non-functional requirements (security, compliance, etc.)
Based on this information, please:
1. Give me 5 common architectural patterns that would work for this problem. Include pros and cons for each, and sort them in order of best first. "Best" means the pattern that offers the most advantages with the fewest disadvantages for my specific context.
2. For each pattern, explain which aspects of my problem specifically match well with that pattern's strengths.
3. For the top two recommended patterns, provide high-level implementation guidance including:
- Key components and their responsibilities
- Critical interfaces and data flows
- Example code structure or package organization
4. Identify potential implementation pitfalls, technical debt risks, or scaling issues for the recommended approaches.
5. Discuss how the top recommendation might evolve as requirements change or scale increases over time.
6. Suggest a phased implementation approach if appropriate.
Develop Step-by-Step Plans
Another high-leverage technique is using the LLM to develop a step-by-step plan before diving into implementation. Each step can then be addressed in a separate conversation window, ensuring clarity, minimizing unnecessary context, and maintaining an organized workflow.
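A planning prompt I reuse looks roughly like this:
Planning Prompt:
I want to implement the following feature in [PROJECT / TECH STACK]:
[FEATURE DESCRIPTION]
Before writing any code, give me a numbered, step-by-step implementation plan. Each step should:
- Be small enough to complete in a single focused conversation
- Name the files or modules it touches
- State what “done” looks like for that step
Wait for my confirmation of the plan before starting on step 1.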
Iterate Through Conversation
If an initial response isn’t perfect, don’t stop there—refine through iteration. Simple directives like “refactor this into a helper function” or “make the error handling clearer” are perfect for quickly evolving an implementation. Prompting the LLM to ask clarifying questions also helps sharpen ambiguous requirements.
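For example, a follow-up on the fetch helper from earlier might be a single sentence, and the model typically responds with something like this (the helper name is my own illustrative choice):
// Prompt: Extract the payload-size check into a reusable helper function.
function assertPayloadWithinLimit(contentLength: number, maxSizeKB: number): void {
  if (contentLength > maxSizeKB * 1024) {
    throw new Error(`Payload exceeds the maximum allowed size of ${maxSizeKB}KB.`);
  }
}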
Validate Everything Through Testing
Never blindly trust LLM-generated code. Consider it high-quality scaffolding that requires human oversight. Integrate unit tests right away, for instance:
// Jest test for fetchJsonWithLimit
import { fetchJsonWithLimit } from './fetchJsonWithLimit';
it('throws error if payload size exceeds limit', async () => {
  // Stub fetch so the reported Content-Length (2KB) exceeds the 1KB limit
  global.fetch = jest.fn().mockResolvedValue({
    headers: { get: () => '2048' },
    json: async () => ({}),
  }) as unknown as typeof fetch;
  await expect(fetchJsonWithLimit('http://example.com/largePayload', 1)).rejects.toThrow();
});
Leverage Code-Execution Tools
LLMs that integrate code execution, such as ChatGPT’s Code Interpreter or Claude’s sandbox environments, further speed up development by quickly verifying functionality.
Standalone REPL environments like RunJS are also handy for quickly testing generated code.
Rapid Prototyping through “Vibe-Coding”
The term “vibe-coding,” coined by Andrej Karpathy, aligns perfectly with how I use LLMs for quick prototyping. I iterate quickly by throwing loose prompts at an LLM and refining based on the immediate results, which accelerates my learning curve with new technologies or APIs.
Use LLMs as Knowledge Explorers
Even if coding entirely via LLMs doesn’t appeal to you, their ability to digest large codebases and produce quick architectural summaries or deep insights is incredibly valuable. A well-crafted prompt can surface in minutes what would otherwise require extensive manual review.
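A codebase-exploration prompt I often start from (the bracketed parts are placeholders):
Codebase Exploration Prompt:
Here is the file tree and a handful of key files from the repository:
[FILE TREE]
[KEY FILES]
Summarize the overall architecture: which modules own which responsibilities, how data flows between them, and where the main extension points are.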
The real power of LLMs in software development is their ability to amplify expertise and drastically speed up iterations. By intelligently combining human oversight and LLM efficiency, I can build more ambitious projects, experiment more freely, and ultimately enhance my effectiveness as a developer.