AI and Large Language Models (LLMs) like ChatGPT, Claude, and Grok are empowering data science, engineering, and software teams to accelerate coding efforts. Far from a mere technical novelty, this AI-driven shift is a strategic lever that reduces development cycles, optimises resource use, and delivers high-quality solutions aligned with business goals. Yet achieving these outcomes requires precise direction, rigorous oversight, and strong leadership, as the quality of our prompts directly drives project success.
In what follows, I've shared notes from my personal observations while trying various approaches to improve coding efficiency with LLMs. These tips are particularly useful for mid to large-sized projects and may be less necessary for smaller scopes.
LLMs are continuously improving and becoming more powerful, so I expect these recommendations will remain valid only until LLMs advance to the point where many of these instructions become unnecessary.
Before Starting Coding with AI
From a project management standpoint, defining the scope is a must. Map a high-level architecture to align technical components with business priorities, ensuring scalability and stakeholder buy-in before coding begins.
- Define project scope: Clearly articulate your needs, inputs and outputs, main functionality, and features with deployment requirements in mind.
- Create a high-level architecture: Break down your project into major components and define the relationships between them.
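As a rough illustration of this architecture step, the major components and their relationships can be captured as a code skeleton before any prompting begins. The sketch below assumes a hypothetical data-reporting pipeline; the module, class, and field names are illustrative only, not part of any particular project.

```python
# architecture_skeleton.py
# Hypothetical high-level architecture for a small data-reporting tool.
# Component names and responsibilities are illustrative assumptions only.

from dataclasses import dataclass
from typing import Iterable


@dataclass
class Record:
    """A single input record flowing through the pipeline."""
    user_id: str
    value: float


class DataLoader:
    """Reads raw records from a source (CSV, API, database)."""
    def load(self, source: str) -> Iterable[Record]:
        raise NotImplementedError


class Transformer:
    """Applies business rules and cleans the loaded records."""
    def transform(self, records: Iterable[Record]) -> list[Record]:
        raise NotImplementedError


class ReportWriter:
    """Renders the transformed records into the required output format."""
    def write(self, records: list[Record], destination: str) -> None:
        raise NotImplementedError


def run_pipeline(source: str, destination: str) -> None:
    """End-to-end flow: defines how the major components relate."""
    records = DataLoader().load(source)
    cleaned = Transformer().transform(list(records))
    ReportWriter().write(cleaned, destination)
```

Each stub then becomes a well-scoped target for a later prompt, with its inputs and outputs already fixed by the skeleton.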
Effective Prompting
Effective prompting is less about technical details and more about strategic clarity. It’s critical to engineer prompts that steer the AI toward outcomes matching your expectations in the shortest time, keeping leadership firmly in control.
- Explain as if to a child: Imagine you're talking to a child who happens to be extremely knowledgeable about coding but needs specific explanations about your code. Don't assume it knows what you want; it mostly knows how to build things, not what you need. Clearly specify what you need, state whether you prefer particular tools or libraries, and explain why. Avoid overloading prompts with unnecessary details while ensuring critical information isn't missed.
- Share beyond coding: It's sometimes useful to share information with the AI beyond the code description itself. If possible, let it know why you need this code, what the application is, how users will interact with it, and what problem it solves. The LLM may add insights and suggest better approaches, which can lead to architectural and user-experience improvements.
- Verify logic understanding: Before requesting code, ask the LLM to explain what it understood from your instructions. Verify its interpretation to ensure all details align with your expectations. Your descriptions can easily be misinterpreted; this verification saves significant time on corrections and debugging later.
- Avoid endless clarification loops: Asking an LLM if it has questions can lead to an endless cycle, especially with complex code. Eventually, after some back-and-forth, you realise its questions aren't critical to your needs. At that point, directly request the code if you feel sufficient context has been provided.
- Request improvement suggestions: LLMs often suggest valuable optimisations or functionality improvements. You can selectively agree or disagree with these suggestions when finalising your code description.
- Maintain a master prompt: Throughout clarification and debugging, you'll develop more detailed explanations of your code. Add these to a master prompt draft; it becomes valuable when starting a new chat or creating a new conversation branch. Maintain master prompts for the entire codebase, for individual classes/functions, or both (a minimal sketch of one appears after this list).
- Use examples and attach files: Use smart examples that cover critical points to help the LLM better understand your goals. Include these in your master prompt for reference.
- Provide environmental details: Specify the operating system, programming language, compiler version, and relevant hardware specifications (e.g. CPU, GPU, memory) if needed. For example, AI might generate Windows-specific code by default when you need Linux or macOS compatibility. Also, clearly identify target platforms (web, mobile, desktop) if applicable.
- Ask for the code!
- Review the code: Even if the code output appears correct, always inspect the code itself to ensure it aligns with your instructions and project standards.
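As a minimal sketch of the master prompt idea above, the prompt can be kept as a versioned constant alongside the project and reused whenever a fresh chat or branch is started. Every project detail below (the tool, columns, libraries, environment) is a made-up placeholder, not a recommendation.

```python
# master_prompt.py
# A living "master prompt" kept alongside the code. All project details
# below are hypothetical placeholders; replace them with your own.

MASTER_PROMPT = """
Project: command-line tool that summarises daily sales CSV files.

What I need:
- Read one CSV per day (columns: date, store_id, amount).
- Output a single summary CSV with totals per store.

Environment:
- Python 3.11 on Ubuntu 22.04, no GPU required.
- Prefer pandas for CSV handling because the team already uses it.

Constraints and preferences:
- Keep classes and functions in separate files.
- Include error handling for missing or malformed columns.

Example input row:  2024-03-01,store_17,129.90
Example output row: store_17,3842.50
"""

if __name__ == "__main__":
    # Paste or attach MASTER_PROMPT when starting a fresh chat or branch.
    print(MASTER_PROMPT)
```

Keeping it in the repository means the scope, examples, and environment details evolve together with the code they describe.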
Managing Conversation Flow
Each prompt you send includes the full chat history of the current conversation branch (platforms may condense older parts of it). It helps to manage what portion of that history reaches the LLM, steering the conversation carefully so you can pivot without cluttering the project roadmap and stay aligned with project goals.
- Create new branches strategically: LLMs may retain previously edited code sections in memory; these remnants can creep back into the code later and introduce bugs. To eliminate them when making significant changes, go back in the conversation and create a new branch from an appropriate point, giving it the updated instructions and code segments. This prevents sending the unnecessary history of edits to the LLM. You can use comments or tags in your prompts (e.g., [New Branch: UI Updates]) to track your focus.
- Start fresh chats when needed: For lengthy conversations, it's better to start a new chat with updated master prompts and code. You can provide only certain portions of your code (even if they depend on excluded parts) and continue the conversation; the LLM will examine inputs and outputs and request additional code if necessary. Keep the context manageably short.
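To make branching and fresh starts concrete, here is a minimal sketch of how a trimmed context might be assembled when you manage the conversation programmatically. The role/content message format loosely follows the convention common to chat APIs, but the helper functions are assumptions for illustration, not any vendor's actual SDK; sending the resulting list is left to whichever client you use.

```python
# context_trimming.py
# Sketch of assembling a reduced chat history before sending it to an LLM.
# These helpers only build the message list; they are not a vendor API.

from typing import TypedDict


class Message(TypedDict):
    role: str      # "system", "user", or "assistant"
    content: str


def branch_from(history: list[Message], branch_point: int,
                new_instruction: str) -> list[Message]:
    """Keep messages up to branch_point and append updated instructions,
    discarding everything that came after the branch point."""
    trimmed = history[:branch_point]
    trimmed.append({"role": "user", "content": new_instruction})
    return trimmed


def fresh_chat(master_prompt: str, current_code: str) -> list[Message]:
    """Start a new conversation with only the master prompt and the
    code that is actually relevant."""
    return [
        {"role": "system", "content": master_prompt},
        {"role": "user", "content": f"Current code:\n{current_code}"},
    ]
```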
Cost Management
From a cost perspective, every token matters: long chats inflate usage, driving up expenses on paid platforms and risking context limits. Remember that with each message you send, the entire conversation history in the current branch goes to the LLM. In a lengthy chat, LLMs usually start discounting older parts of the conversation; even so, reducing the size of the chat history significantly helps save costs (a rough token-counting sketch follows the list below).
- Refresh conversations regularly: For lengthy exchanges, start fresh with all necessary code and required master prompts.
- Leverage branching efficiently: When creating a new branch from an earlier point, messages that occurred after that point in the old branch aren't included in the new branch's context and are not sent to the LLM. This streamlines the conversation.
- Request condensed responses: Ask for only the modified parts of the code during iterations rather than the entire updated code. Shorter responses both reduce cost and help avoid hitting chat limits by preserving more conversation history.
- Craft concise queries: Avoid repetition and unnecessary wording in prompts while maintaining the context. This extends the chat by delaying when limits are reached.
- Make comprehensive queries: Anticipate follow-up questions and provide all relevant information in one prompt to minimise back-and-forth exchanges.
- Request separate files: Keep classes and functions in distinct files where appropriate, for easier management and more cost-effective new chat sessions when needed.
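As the rough token-counting sketch promised above: estimating the size of the history before sending it makes the savings from trimming visible. The sketch assumes the tiktoken library (which approximates OpenAI-style tokenisation; other providers tokenise differently) and falls back to a crude character-based estimate if it is not installed.

```python
# token_estimate.py
# Rough comparison of the token cost of a full history versus a trimmed one.
# tiktoken approximates OpenAI-style tokenisation; other providers differ.

def estimate_tokens(text: str) -> int:
    try:
        import tiktoken
        enc = tiktoken.get_encoding("cl100k_base")
        return len(enc.encode(text))
    except ImportError:
        # Crude fallback: roughly 4 characters per token for English text.
        return len(text) // 4


def history_cost(messages: list[str]) -> int:
    """Estimated tokens sent if the whole history accompanies each prompt."""
    return sum(estimate_tokens(m) for m in messages)


if __name__ == "__main__":
    full_history = ["An earlier message in this long conversation."] * 40
    trimmed = full_history[-10:]   # e.g. keep only the recent, relevant part
    print("full history tokens:   ", history_cost(full_history))
    print("trimmed history tokens:", history_cost(trimmed))
```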
Preventing Unexpected Issues
Mitigating risk is a priority: without managing the LLM's focus and memory, we risk misalignment and drifting away from effective collaboration.
- Request minimal changes: For working code that needs only minor tweaks, explicitly ask the LLM to make minimal or no changes elsewhere; without this guidance, it may introduce unnecessary alterations that cause issues in other parts of the larger codebase.
- Communicate the changes you make: If you modify the code yourself, inform the LLM before any other step so its context remains current. Otherwise, you will receive the previous version of the code from its memory. Think of this as a collaborative process.
- Implement regression testing: Consistently verify that outcomes remain on track with your goals. This helps identify unexpected changes made outside the parts you intended to modify.
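A lightweight way to catch such unintended side effects is a small regression suite that pins down behaviour you already rely on and runs after every LLM-assisted change. The sketch below assumes pytest and a hypothetical `summarise()` function in a hypothetical `mymodule`; adapt it to your own code.

```python
# test_regression.py
# Minimal regression tests for a hypothetical summarise() function.
# Run with pytest after every LLM-assisted change to catch side effects
# in parts of the code you did not intend to touch.

import pytest

from mymodule import summarise  # hypothetical module under test


def test_known_input_still_produces_known_output():
    # Pin down behaviour that already works before asking for "minor tweaks".
    assert summarise([1.0, 2.0, 3.0]) == {"count": 3, "total": 6.0}


def test_empty_input_still_handled():
    assert summarise([]) == {"count": 0, "total": 0.0}


def test_invalid_input_still_raises():
    with pytest.raises(TypeError):
        summarise(None)
```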
Additional Tips
- Start simple: Simplify the problem and build an end-to-end code with AI, implementing your architecture at a high level. You can work with AI to add more complexity later. For example, begin with a basic script before scaling to a full app with UI and database integration.
- Build incrementally: For mid-to-large projects, construct your code piece by piece rather than in large chunks. Build critical components and foundation classes/functions first, then work on a specific class or function by defining its inputs and outputs for the LLM. This approach works better than developing an entire large codebase at once. Describe what your target function does and how it interacts with the rest of the code.
- Request supplementary elements: Ask for code comments, error handling, logging, and output indicators at critical points when needed.
- Request test code: LLMs can provide separate test functions with various test cases based on your specifications.
- Modularise projects: Create separate projects for major components and develop them in parallel, though this isn't recommended for tightly related components.
- Seek targeted improvements: Request optimisation of specific functions or security reviews for sensitive areas.
- Generate documentation: LLMs can create operation guides, testing documentation, and other supportive materials.
- Utilise platform-specific features: Some platforms offer additional capabilities. For example, Claude has a Projects section that supports more and larger attachments while maintaining longer conversations for more persistent context. Explore the features of your preferred platform.
- Consider prompt templates: Develop reusable prompt templates for common coding tasks to maintain consistency and save time.
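As a small illustration of such templates, common requests can be stored as parameterised strings and filled in per task. The task names and wording below are only an example of the idea, not a canonical set.

```python
# prompt_templates.py
# Reusable prompt templates for recurring coding requests.
# Template wording and placeholders are illustrative assumptions.

from string import Template

TEMPLATES = {
    "unit_test": Template(
        "Write pytest unit tests for the function `$function_name` below. "
        "Cover normal cases, edge cases, and invalid input. "
        "Return only the test file.\n\n$code"
    ),
    "optimise": Template(
        "Optimise `$function_name` for $goal without changing its public "
        "interface. Explain the changes briefly, then show only the "
        "modified code.\n\n$code"
    ),
}


def build_prompt(task: str, **fields: str) -> str:
    """Fill a named template with task-specific details."""
    return TEMPLATES[task].substitute(**fields)


if __name__ == "__main__":
    print(build_prompt("unit_test",
                       function_name="summarise",
                       code="def summarise(values): ..."))
```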
And most importantly:
- Be hands-on when necessary: At the current level of LLM maturity, they sometimes just don't give you what you need. For complex code, you'll eventually need to get your hands dirty and improve it manually.
Conclusion
Large Language Models (LLMs) are more than tools - they're force multipliers that streamline development, cut costs, and align technical output with enterprise goals. Their ability to generate robust code fast-tracks our roadmaps, but only if we steer them with precision through clear prompts, managed workflows, and rigorous oversight. This isn't just coding; it's project orchestration, where strategic planning and hands-on governance ensure we deliver value, not just functionality, positioning us to lead in an AI-driven future.