AI coding tools have become indispensable for many tasks. In our tests, popular AI coding tools such as Cursor generated over 70% of the code required for our tasks. Since AI agents are still relatively new, I observed some useful patterns in my workflow that I want to share. Below, I explain how to optimize these AI IDEs and analyze Cursor's system prompt for agentic coding.
How did I develop my agentic coding approach?
Over the past few months, I have refined my coding workflow using AI tools, focusing on vibe coding, building simple apps, and performing mid-level code editing tasks. I have been working with AI coding tools like Claude Code, Cursor, Replit, and Windsurf (by Codeium), and I benchmark various AI coding tools. Here is the agentic coding approach, along with the workflows, that has been working well for me:
Optimizing agentic coding
When to apply this approach: This approach is not necessary for simple tasks like basic bug fixes, which can be solved with a straightforward prompt. For these situations, you can skip creating the planning steps yourself and jump straight into execution after refining and accepting the plan proposed by your AI code assistant.
1. Choosing what tools & platforms to use:
- Claude Code for everything, including workflow, project planning, documentation, and managing the project’s memory files.
- Cursor for agentic coding and automating workflows.
- Markdown files (e.g., claude.md, plan.md) for each project to store specific coding guidelines and project context.
- GitHub (Optional) for version control, code reviews, and pull requests.
2. Creating the plan
While creating a plan, I usually use Claude because it offers contextual memory, task delegation, and read-only exploration. Think of Claude as a task manager that helps to:
- Read files and examine code
- Search through codebases
- Analyze project structure
- Gather information from web sources
- Review documentation
Here’s my approach when creating a plan:
Write clear and precise prompts: I write detailed, specific prompts to make sure the AI coding editor captures the context. For example, rather than just asking for a “modern design,” I specify “a Linear-like app UI design.” Avoid sharing excessive information, thinking it will improve Claude’s results, as it may lead to confusion instead.
Start new conversations regularly: One of the first steps I take when creating a plan is ensuring that I regularly start fresh conversations, especially for tasks that require focus. I use the /clear command to avoid any confusion from previous prompts.
Make sure the AI assistant reads the docs: Before jumping into any coding or implementation, I ensure that the planning tool reads the relevant documentation. This could include anything from API docs or framework manuals. By giving the planning tool access to these resources, I can make sure that the plan I create is grounded in the most up-to-date information.
3. Choosing the architecture
Choosing the right architecture is crucial for keeping the project structured.
In AI code editing, I usually use a flow-based architecture where the system’s components are organized into distinct nodes, each responsible for specific tasks such as decision-making, file operations, code analysis, and code modification.
The flow from one task to another is handled automatically. For example:
- User input: Describing the type of website (e.g., a blog or e-commerce site).
- Design node: AI generates layout based on user preferences.
- Content generation node: Text and image generation based on inputs.
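The steps above can be sketched as a minimal flow-based pipeline. This is an illustrative sketch, not a real framework: each node class, the shared context dict, and the node names (mirroring the design and content-generation nodes above) are assumptions I made for the example.

```python
# Hypothetical sketch of a flow-based pipeline: each node handles one task
# and passes a shared context dict to the next node automatically.
class Node:
    def run(self, ctx: dict) -> dict:
        raise NotImplementedError

class DesignNode(Node):
    def run(self, ctx: dict) -> dict:
        # Generate a layout based on the user's site type (stubbed here).
        ctx["layout"] = f"layout-for-{ctx['site_type']}"
        return ctx

class ContentNode(Node):
    def run(self, ctx: dict) -> dict:
        # Generate placeholder content for the chosen layout.
        ctx["content"] = f"content-for-{ctx['layout']}"
        return ctx

def run_flow(nodes: list[Node], ctx: dict) -> dict:
    # The flow from one node to the next is handled automatically.
    for node in nodes:
        ctx = node.run(ctx)
    return ctx

result = run_flow([DesignNode(), ContentNode()], {"site_type": "blog"})
```

The key design property is that each node only reads and writes the shared context, so nodes can be added, removed, or reordered without touching the others.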

For building scalable systems like screenshot-to-code tools, I use service-oriented architecture (SOA), where distinct components (such as UI extraction, code generation) need to be scaled independently.
In this hybrid approach, the flow-based approach manages the flow of tasks across distinct nodes, where each node handles a specific function, such as screenshot to code generation. For instance:
- UI extraction node processes the screenshot.
- Code generation node converts the identified UI components into structured code.
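A hedged sketch of how those two nodes could sit behind service boundaries: each component implements its own interface so it could be replicated or scaled independently. The `Protocol` classes, stub services, and component names are all illustrative assumptions, not a real screenshot-to-code API.

```python
from typing import Protocol

# Each service could run as its own independently scaled process; here they
# are in-process stubs behind Protocols so either side can be swapped out.
class UIExtractor(Protocol):
    def extract(self, screenshot: bytes) -> list[str]: ...

class CodeGenerator(Protocol):
    def generate(self, components: list[str]) -> str: ...

class StubExtractor:
    def extract(self, screenshot: bytes) -> list[str]:
        # A real service would run a vision model; we return fixed components.
        return ["navbar", "hero", "footer"]

class StubGenerator:
    def generate(self, components: list[str]) -> str:
        # A real service would emit full markup; we emit one tag per component.
        return "\n".join(f"<div class='{c}'></div>" for c in components)

def screenshot_to_code(extractor: UIExtractor, generator: CodeGenerator,
                       screenshot: bytes) -> str:
    # UI extraction node processes the screenshot...
    components = extractor.extract(screenshot)
    # ...then the code generation node converts components into code.
    return generator.generate(components)

html = screenshot_to_code(StubExtractor(), StubGenerator(), b"fake-image")
```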
Other architectural approaches:
While flow-based and SOA are my preferred approaches for scalable systems, you can also use MABA, DDD with vertical slices, or CRUD, depending on the project's nature:
- Modular agent-based architecture (MABA): For autonomous agents performing specialized tasks like code generation, debugging, and optimization.
- Domain-driven design (DDD) with vertical slices: For feature-driven development that breaks the system down into isolated features. This is particularly useful in complex coding systems (code generation, error detection, and security checks) where features can be developed and deployed independently.
- Create, read, update, delete (CRUD): For data-centric applications with basic operations like creating, reading, updating, and deleting code.
4. Refining the plan
Once the initial plan is created, I refine it to ensure it aligns with the project’s goals. I do this by:
- Document key findings and context: I ask Claude to document everything, including context and any important details that might help during implementation. This serves as a reference and helps ensure the necessary information is readily available during the coding process.
- Create phase breakdown: After documenting the plan, I ask Claude to create a phase breakdown. This file (named phase.md) will list the distinct phases of the project.
- Refine task lists: I generate a task list stored in a markdown file with checkboxes to track progress. It ensures I stay focused on each task and maintain continuity between development sessions.
- Context and memory management: To maintain consistency, I also create a memory.md file that holds the current state of the project. This document serves as the bridge between development sessions.
- Clarifying the plan: As the plan is refined, I make sure to ask Claude, “Do you have any clarifying questions for me?”
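The planning files above can be scaffolded once per project. This is a minimal sketch under my own assumptions: the file names (phase.md, memory.md, tasks.md) and the starter headings are the layout I use, not a required convention.

```python
from pathlib import Path

# Starter contents for the planning files described above (illustrative).
PLANNING_FILES = {
    "phase.md": "# Phases\n\n## Phase 1: Setup\n## Phase 2: Core features\n",
    "memory.md": "# Project Memory\n\nCurrent state: not started\n",
    "tasks.md": "# Tasks\n\n- [ ] Scaffold project\n- [ ] Implement Phase 1\n",
}

def scaffold(project_dir: str) -> list[str]:
    """Create any missing planning files; return the names created."""
    root = Path(project_dir)
    root.mkdir(parents=True, exist_ok=True)
    created = []
    for name, content in PLANNING_FILES.items():
        path = root / name
        if not path.exists():  # never clobber an existing plan
            path.write_text(content)
            created.append(name)
    return created
```

Re-running the scaffold is safe: existing files are left untouched, so refinements made during a session are never overwritten.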
5. Coding process
The coding process is centered around task-based development. I use a consistent prompt structure for each task:
Prompt structure: “Continue working on the project in @project_folder. Follow the development guidelines in @development_guidelines, and remember everything in @memory.”
Key components:
- Project memory: The task list markdown file, memory.md (created during the plan-refinement step), is regularly updated to store the current state of the project.
- Development guidelines: These are the rules that define how tasks should be approached (e.g., creating code).
In the coding process, I typically implement the following:
- Start with Phase 1: Start a fresh conversation for each task and ask Claude to handle the first step of the implementation. Create a plan.md file to track implementation phases.
- Iterative testing: When coding begins, I test the code after every task. If the code does not align with the intended implementation, I share more context or modify the plan as needed. I typically iterate 3-5 times to ensure the plan is solid.
- Review before accepting: After writing code, Cursor asks whether to accept or reject the changes. Accepting everything outright can lead to mistakes. Cursor provides an explanation for the changes it made; read it and approve only if it makes sense, otherwise reject.
- Prompt feedback loop: As coding progresses, try to avoid prompting “fix this”. Detail what went wrong and what should have happened. For example, I prompt Claude with specific prompts like:
“Claude, the function handling user input is not dealing with empty strings as expected. It should return a default value when the input is empty, but instead, it’s throwing an error. Please update the function to handle this case properly. Here’s the specific issue: input(“”) causes an error, but it should return ‘default’.”
- Memory management: Use memory.md to store essential context and re-align past decisions during the coding process.
- Updating documentation: After each task, make sure to update both plan.md and phase.md to track progress and the current state of the project.
- Optional: Leveraging GitHub integration: You can also integrate GitHub with tools like Cursor and Cline to streamline code reviews, commit tracking, and pull requests.
How does agentic coding work in your AI IDE (e.g., Cursor)?
The diagram illustrates the underlying mechanics of AI IDEs. These systems streamline the process for the main agent by shifting the “cognitive load” to other LLMs.

When working with these IDEs, the system first injects @-tags into the context, which helps the model know where to look for specific data or instructions.
It then calls on multiple tools to gather additional context and information, such as analyzing code or reviewing documentation.
After this, the IDE makes the necessary changes to the code using a special “diff syntax.” This means that instead of rewriting entire sections of code, only the changed parts are sent, with a clear indication of what has been modified. Finally, the IDE provides a summary response to the user, detailing what was updated or changed.
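The diff-syntax idea can be sketched as a search/replace edit applied by the IDE. This format is illustrative only; it is not Cursor's actual diff syntax, and the `apply_edit` helper is a hypothetical name.

```python
# The model emits only the changed region as a search/replace pair;
# the IDE locates the search block and swaps in the replacement.
def apply_edit(source: str, search: str, replace: str) -> str:
    # Fail loudly if the target region is missing or ambiguous, so a
    # stale edit never lands in the wrong place.
    if source.count(search) != 1:
        raise ValueError("search block must match exactly once")
    return source.replace(search, replace)

code = "def greet(name):\n    return 'hi ' + name\n"
patched = apply_edit(code,
                     search="    return 'hi ' + name",
                     replace="    return f'hi {name}'")
```

Requiring exactly one match is the safety property that makes partial edits workable: if the file drifted since the model last read it, the edit is rejected instead of silently misapplied.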
Limitations of this agentic coding approach:
- Loss of context over time: As the conversation progresses, Claude may lose critical context.
- Difficulty with open-ended questions: Asking open-ended or vague questions can lead to ambiguous responses.
- Dependence on consistent input: The approach heavily relies on clear, detailed prompts and consistent task breakdowns. Any lack of clarity or missed details in the planning or coding process can result in misalignment.
How about model context protocol (MCP) servers?
MCP servers haven’t proven to add much value to my agentic coding workflow. In my experience, they often consume unnecessary tokens while tools like Claude Code’s built-in features are sufficient for most tasks.
I also think MCP may have vulnerabilities when it comes to prompt injection. Since MCP servers are typically user-provided, yet their tool instructions are embedded within the agent's system-level instructions, a malicious server could exploit this to set arbitrary system instructions for the agent.
Line-by-line Cursor system prompt analysis for agentic coding
Here’s an agentic coding implementation in Cursor. This is the prompt I used.

1. Proactive coding assistance
"You are an AI coding assistant, powered by Claude Sonnet 4. You operate in Cursor."
"You are pair programming with a USER…"
"Your main goal is to follow the USER's instructions at each message, denoted by the <user_query> tag."
How this supports agentic coding:
These lines assign a precise role that allows the assistant to act as a proactive collaborator, taking the initiative to execute tasks rather than just responding passively.
The agent will be aware of its purpose and the context, enabling it to utilize available tools, modify code, and complete tasks independently.
2. Organize long prompts using XML-style tags
<communication>, <tool_calling>, <making_code_changes>, etc.
How this supports agentic coding:
This approach utilizes XML-style tags to organize and break down lengthy prompts.
By segmenting instruction sets into manageable chunks, the LLM can locate and follow the relevant behavioral instructions. This reduces the chance of context loss and helps ensure tool commands are executed correctly.
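A stripped-down sketch of how such a prompt might be segmented. The tag names mirror those quoted above; the body text of each section is illustrative, not the actual Cursor prompt:

```xml
<communication>
Reply concisely; format file names and symbols as inline code.
</communication>

<tool_calling>
Only call tools when necessary, and explain why before each call.
</tool_calling>

<making_code_changes>
Apply edits through the edit tool instead of printing code to the user.
</making_code_changes>
```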
3. Parallelism
<maximize_parallel_tool_calls>
“CRITICAL INSTRUCTION: For maximum efficiency, whenever you perform multiple operations, invoke all relevant tools simultaneously rather than sequentially…”
How this supports agentic coding:
Cursor asks the agent to perform tool calls like read_file, grep_search, or codebase_search in parallel when gathering information.
This instruction ensures the agent operates efficiently by executing multiple tasks in parallel, reducing wait time and accelerating task completion.
4. Autonomous execution
“If you make a plan, immediately follow it, do not wait for the user to confirm…”
“Bias towards not asking the user for help if you can find the answer yourself.”
How this supports agentic coding:
A core requirement for an agentic AI system is autonomy. These lines instruct the agent to complete tasks from start to finish.
5. Search and verify before execution
<search_and_reading>
“If you're unsure... gather more information… use more tools before ending your turn.”
How this supports agentic coding:
Agents should verify assumptions before proceeding. This instruction ensures the assistant cross-checks and fills gaps using its tools. It prevents hallucinated responses and ensures edits or answers are based on actual code context.
6. Tool use
“Check that all the required parameters for each tool call are provided or can reasonably be inferred…”
“Carefully analyze descriptive terms in the request…”
How this supports agentic coding:
Agentic systems reason through tasks. This instruction asks the agent to reverse-engineer what’s needed, build solid tool calls, and match its reasoning to actual inputs. This is similar to what a developer would do when using APIs or CLI tools.
7. Restrict unnecessary file creation
“NEVER create files unless they're absolutely necessary…”
“NEVER proactively create documentation…”
How this supports agentic coding:
Unless the user requests it, the AI should not generate unnecessary files. This constraint ensures that the agent only makes essential changes, avoiding unnecessary additions to the project.