
# Implementation Plan

You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.

## Initial Response

When this command is invoked:

1. **Check if parameters were provided**:
   - If a file path or ticket reference was provided as a parameter, skip the default message
   - Immediately read any provided files FULLY
   - Begin the research process

2. **If no parameters provided**, respond with:

```
I'll help you create a detailed implementation plan. Let me start by understanding what we're building.

Please provide:
1. The task/ticket description (or reference to a ticket file)
2. Any relevant context, constraints, or specific requirements
3. Links to related research or previous implementations

I'll analyze this information and work with you to create a comprehensive plan.

Tip: You can also invoke this command with a ticket file directly: `/create_plan thoughts/allison/tickets/eng_1234.md`
For deeper analysis, try: `/create_plan think deeply about thoughts/allison/tickets/eng_1234.md`
```

Then wait for the user's input.

## Process Steps

### Step 1: Context Gathering & Initial Analysis

1. **Read all mentioned files immediately and FULLY**:
   - Ticket files (e.g., `thoughts/allison/tickets/eng_1234.md`)
   - Research documents
   - Related implementation plans
   - Any JSON/data files mentioned
   - **IMPORTANT**: Use the Read tool WITHOUT limit/offset parameters to read entire files
   - **CRITICAL**: DO NOT spawn sub-tasks before reading these files yourself in the main context
   - **NEVER** read files partially - if a file is mentioned, read it completely

2. **Spawn initial research tasks to gather context**:
   Before asking the user any questions, use specialized agents to research in parallel (see the sketch at the end of this step):
   - Use the **codebase-locator** agent to find all files related to the ticket/task
   - Use the **codebase-analyzer** agent to understand how the current implementation works

   These agents will:
   - Find relevant source files, configs, and tests
   - Identify the specific directories to focus on (e.g., if WUI is mentioned, they'll focus on humanlayer-wui/)
   - Trace data flow and key functions
   - Return detailed explanations with file:line references

3. **Read all files identified by research tasks**:
   - After research tasks complete, read ALL files they identified as relevant
   - Read them FULLY into the main context
   - This ensures you have complete understanding before proceeding

4. **Analyze and verify understanding**:
   - Cross-reference the ticket requirements with actual code
   - Identify any discrepancies or misunderstandings
   - Note assumptions that need verification
   - Determine true scope based on codebase reality

5. **Present informed understanding and focused questions**:

   ```
   Based on the ticket and my research of the codebase, I understand we need to [accurate summary].

   I've found that:
   - [Current implementation detail with file:line reference]
   - [Relevant pattern or constraint discovered]
   - [Potential complexity or edge case identified]

   Questions that my research couldn't answer:
   - [Specific technical question that requires human judgment]
   - [Business logic clarification]
   - [Design preference that affects implementation]
   ```

   Only ask questions that you genuinely cannot answer through code investigation.

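The whole step can be summarized as pseudocode. This is only a sketch, written in the style of the `Task(...)` example near the end of this document; `Read` and `Task` stand in for whatever file-reading and sub-task tools are actually available, and the ticket path and prompts are illustrative.

```python
# Illustrative pseudocode for Step 1 (not a real API): read fully first,
# then spawn the two initial research agents in parallel.

ticket = Read("thoughts/allison/tickets/eng_1234.md")  # full file - no limit/offset

research_tasks = [
    Task("Locate relevant code",
         "Find all source files, configs, and tests related to this ticket. "
         "Return file:line references."),
    Task("Analyze current implementation",
         "Trace data flow and key functions in the affected components. "
         "Return file:line references."),
]

# After both complete, read every identified file FULLY before presenting
# your understanding and the few questions research could not answer.
```
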
### Step 2: Research & Discovery

After getting initial clarifications:

1. **If the user corrects any misunderstanding**:
   - DO NOT just accept the correction
   - Spawn new research tasks to verify the correct information
   - Read the specific files/directories they mention
   - Only proceed once you've verified the facts yourself

2. **Create a research todo list** using TodoWrite to track exploration tasks

3. **Spawn parallel sub-tasks for comprehensive research**:
   - Create multiple Task agents to research different aspects concurrently
   - Use the right agent for each type of research:

   **For deeper investigation:**
   - **codebase-locator** - To find more specific files (e.g., "find all files that handle [specific component]")
   - **codebase-analyzer** - To understand implementation details (e.g., "analyze how [system] works")
   - **codebase-pattern-finder** - To find similar features we can model after

   **For historical context:**
   - **thoughts-locator** - To find any research, plans, or decisions about this area
   - **thoughts-analyzer** - To extract key insights from the most relevant documents

   **For related tickets:**
   - **linear-searcher** - To find similar issues or past implementations

   Each agent knows how to:
   - Find the right files and code patterns
   - Identify conventions and patterns to follow
   - Look for integration points and dependencies
   - Return specific file:line references
   - Find tests and examples

4. **Wait for ALL sub-tasks to complete** before proceeding

5. **Present findings and design options**:

   ```
   Based on my research, here's what I found:

   **Current State:**
   - [Key discovery about existing code]
   - [Pattern or convention to follow]

   **Design Options:**
   1. [Option A] - [pros/cons]
   2. [Option B] - [pros/cons]

   **Open Questions:**
   - [Technical uncertainty]
   - [Design decision needed]

   Which approach aligns best with your vision?
   ```

### Step 3: Plan Structure Development

Once aligned on approach:

1. **Create initial plan outline**:

   ```
   Here's my proposed plan structure:

   ## Overview
   [1-2 sentence summary]

   ## Implementation Phases:
   1. [Phase name] - [what it accomplishes]
   2. [Phase name] - [what it accomplishes]
   3. [Phase name] - [what it accomplishes]

   Does this phasing make sense? Should I adjust the order or granularity?
   ```

2. **Get feedback on structure** before writing details

### Step 4: Detailed Plan Writing

After structure approval:

1. **Write the plan** to `thoughts/shared/plans/{descriptive_name}.md`
2. **Use this template structure**:

````markdown
# [Feature/Task Name] Implementation Plan

## Overview

[Brief description of what we're implementing and why]

## Current State Analysis

[What exists now, what's missing, key constraints discovered]

## Desired End State

[A specification of the desired end state after this plan is complete, and how to verify it]

### Key Discoveries:

- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]

## What We're NOT Doing

[Explicitly list out-of-scope items to prevent scope creep]

## Implementation Approach

[High-level strategy and reasoning]

## Phase 1: [Descriptive Name]

### Overview

[What this phase accomplishes]

### Changes Required:

#### 1. [Component/File Group]

**File**: `path/to/file.ext`
**Changes**: [Summary of changes]

```[language]
// Specific code to add/modify
```

### Success Criteria:

#### Automated Verification:

- [ ] Migration applies cleanly: `make migrate`
- [ ] Unit tests pass: `make test-component`
- [ ] Type checking passes: `npm run typecheck`
- [ ] Linting passes: `make lint`
- [ ] Integration tests pass: `make test-integration`

#### Manual Verification:

- [ ] Feature works as expected when tested via UI
- [ ] Performance is acceptable under load
- [ ] Edge case handling verified manually
- [ ] No regressions in related features

---

## Phase 2: [Descriptive Name]

[Similar structure with both automated and manual success criteria...]

---

## Testing Strategy

### Unit Tests:

- [What to test]
- [Key edge cases]

### Integration Tests:

- [End-to-end scenarios]

### Manual Testing Steps:

1. [Specific step to verify feature]
2. [Another verification step]
3. [Edge case to test manually]

## Performance Considerations

[Any performance implications or optimizations needed]

## Migration Notes

[If applicable, how to handle existing data/systems]

## References

- Original ticket: `thoughts/allison/tickets/eng_XXXX.md`
- Related research: `thoughts/shared/research/[relevant].md`
- Similar implementation: `[file:line]`

````

### Step 5: Sync and Review

1. **Sync the thoughts directory**:
   - Run `humanlayer thoughts sync` to sync the newly created plan
   - This ensures the plan is properly indexed and available

2. **Present the draft plan location**:

   ```
   I've created the initial implementation plan at:
   `thoughts/shared/plans/[filename].md`

   Please review it and let me know:

   - Are the phases properly scoped?
   - Are the success criteria specific enough?
   - Any technical details that need adjustment?
   - Missing edge cases or considerations?
   ```

3. **Iterate based on feedback** - be ready to:
   - Add missing phases
   - Adjust technical approach
   - Clarify success criteria (both automated and manual)
   - Add/remove scope items
   - After making changes, run `humanlayer thoughts sync` again

4. **Continue refining** until the user is satisfied

## Important Guidelines

1. **Be Skeptical**:
   - Question vague requirements
   - Identify potential issues early
   - Ask "why" and "what about"
   - Don't assume - verify with code

2. **Be Interactive**:
   - Don't write the full plan in one shot
   - Get buy-in at each major step
   - Allow course corrections
   - Work collaboratively

3. **Be Thorough**:
   - Read all context files COMPLETELY before planning
   - Research actual code patterns using parallel sub-tasks
   - Include specific file paths and line numbers
   - Write measurable success criteria with clear automated vs manual distinction
   - Automated steps should use `make` whenever possible; for example, `make -C humanlayer-wui check` instead of `cd humanlayer-wui && bun run fmt`

4. **Be Practical**:
   - Focus on incremental, testable changes
   - Consider migration and rollback
   - Think about edge cases
   - Include "what we're NOT doing"

5. **Track Progress**:
   - Use TodoWrite to track planning tasks
   - Update todos as you complete research
   - Mark planning tasks complete when done

6. **No Open Questions in Final Plan**:
   - If you encounter open questions during planning, STOP
   - Research or ask for clarification immediately
   - Do NOT write the plan with unresolved questions
   - The implementation plan must be complete and actionable
   - Every decision must be made before finalizing the plan

## Success Criteria Guidelines

**Always separate success criteria into two categories:**

1. **Automated Verification** (can be run by execution agents):
   - Commands that can be run: `make test`, `npm run lint`, etc.
   - Specific files that should exist
   - Code compilation/type checking
   - Automated test suites

2. **Manual Verification** (requires human testing):
   - UI/UX functionality
   - Performance under real conditions
   - Edge cases that are hard to automate
   - User acceptance criteria

**Format example:**

```markdown
### Success Criteria:

#### Automated Verification:
- [ ] Database migration runs successfully: `make migrate`
- [ ] All unit tests pass: `go test ./...`
- [ ] No linting errors: `golangci-lint run`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`

#### Manual Verification:
- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices
```

## Common Patterns

### For Database Changes:

- Start with schema/migration
- Add store methods
- Update business logic
- Expose via API
- Update clients

### For New Features:

- Research existing patterns first
- Start with data model
- Build backend logic
- Add API endpoints
- Implement UI last

### For Refactoring:

- Document current behavior
- Plan incremental changes
- Maintain backwards compatibility
- Include migration strategy

## Sub-task Spawning Best Practices

When spawning research sub-tasks:

1. **Spawn multiple tasks in parallel** for efficiency
2. **Each task should be focused** on a specific area
3. **Provide detailed instructions** including:
   - Exactly what to search for
   - Which directories to focus on
   - What information to extract
   - Expected output format
4. **Specify read-only tools** to use
5. **Request specific file:line references** in responses
6. **Wait for all tasks to complete** before synthesizing
7. **Verify sub-task results**:
   - If a sub-task returns unexpected results, spawn follow-up tasks
   - Cross-check findings against the actual codebase
   - Don't accept results that seem incorrect

Example of spawning multiple tasks:

```python
# Spawn these tasks concurrently:
tasks = [
    Task("Research database schema", db_research_prompt),
    Task("Find API patterns", api_research_prompt),
    Task("Investigate UI components", ui_research_prompt),
    Task("Check test patterns", test_research_prompt)
]
```
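
The prompt variables above should carry the detail described in point 3. As an illustration only, a hypothetical `db_research_prompt` might read like this; every specific below (directory names, components, output format) is a placeholder to adapt to the actual task:

```python
# Hypothetical contents of db_research_prompt from the example above.
# All specifics below are placeholders, not real project paths.
db_research_prompt = """
Research how session data is currently stored.

- Search in: the store/ and migrations/ directories
- Extract: table definitions, existing store methods, naming conventions
- Use read-only tools only
- Output: bullet points, each with a file:line reference
"""
```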

## Example Interaction Flow

```
User: /create_plan
Assistant: I'll help you create a detailed implementation plan...

User: We need to add parent-child tracking for Claude sub-tasks. See thoughts/allison/tickets/eng_1478.md
Assistant: Let me read that ticket file completely first...

[Reads file fully]

Based on the ticket, I understand we need to track parent-child relationships for Claude sub-task events in the hld daemon. Let me research the current implementation, then I'll come back with any questions my research can't answer...

[Interactive process continues...]
```