---
name: critique
description: Evaluate design from a UX perspective, assessing visual hierarchy, information architecture, emotional resonance, cognitive load, and overall quality with quantitative scoring, persona-based testing, automated anti-pattern detection, and actionable feedback. Use when the user asks to review, critique, evaluate, or give feedback on a design or component.
version: 2.1.1
allowed-tools:
  - Bash(npx impeccable *)
---

## STEPS

### Step 1: Preparation

Invoke /impeccable, which contains design principles, anti-patterns, and the **Context Gathering Protocol**. Follow the protocol before proceeding. If no design context exists yet, you MUST run /impeccable teach first. Additionally, gather what the interface is trying to accomplish.

### Step 2: Gather Assessments

Launch two independent assessments. **Neither may see the other's output**, to avoid bias.

You SHOULD delegate each assessment to a separate sub-agent for independence. Use your environment's agent spawning mechanism (e.g., Claude Code's `Agent` tool, or Codex's subagent spawning). Sub-agents should return their findings as structured text. Do NOT output findings to the user yet.

If sub-agents are not available in the current environment, complete each assessment sequentially, writing findings to internal notes before proceeding.

**Tab isolation**: When browser automation is available, each assessment MUST create its own new tab. Never reuse an existing tab, even if one is already open at the correct URL. This prevents the two assessments from interfering with each other's page state.

#### Assessment A: LLM Design Review

Read the relevant source files (HTML, CSS, JS/TS) and, if browser automation is available, visually inspect the live page. **Create a new tab** for this; do not reuse existing tabs. After navigation, label the tab by setting the document title:
```javascript
document.title = '[LLM] ' + document.title;
```
Think like a design director. Evaluate:

**AI Slop Detection (CRITICAL)**: Does this look like every other AI-generated interface? Review against ALL **DON'T** guidelines in the impeccable skill. Check for AI color palette, gradient text, dark glows, glassmorphism, hero metric layouts, identical card grids, generic fonts, and all other tells. **The test**: If someone said "AI made this," would you believe them immediately?

**Holistic Design Review**:
- Visual hierarchy: eye flow, primary action clarity
- Information architecture: structure, grouping, cognitive load
- Emotional resonance: does it match brand and audience?
- Discoverability: are interactive elements obvious?
- Composition: balance, whitespace, rhythm
- Typography: hierarchy, readability, font choices
- Color: purposeful use, cohesion, accessibility
- States & edge cases: empty, loading, error, success
- Microcopy: clarity, tone, helpfulness

**Cognitive Load** (consult [cognitive-load](reference/cognitive-load.md)):
- Run the 8-item cognitive load checklist. Report the failure count: 0-1 = low (good), 2-3 = moderate, 4+ = critical.
- Count visible options at each decision point. If >4, flag it.
- Check for progressive disclosure: is complexity revealed only when needed?

**Emotional Journey**:
- What emotion does this interface evoke? Is that intentional?
- **Peak-end rule**: Is the most intense moment positive? Does the experience end well?
- **Emotional valleys**: Check for anxiety spikes at high-stakes moments (payment, delete, commit). Are there design interventions (progress indicators, reassurance copy, undo options)?

**Nielsen's Heuristics** (consult [heuristics-scoring](reference/heuristics-scoring.md)):
Score each of the 10 heuristics 0-4. This scoring will be presented in the report.

Return structured findings covering: AI slop verdict, heuristic scores, cognitive load assessment, what's working (2-3 items), priority issues (3-5 with what/why/fix), minor observations, and provocative questions.

#### Assessment B: Automated Detection

Run the bundled deterministic detector, which flags 25 specific patterns (AI slop tells + general design quality).

**CLI scan**:
```bash
npx impeccable --json [--fast] [target]
```

- Pass HTML/JSX/TSX/Vue/Svelte files or directories as `[target]` (anything with markup). Do not pass CSS-only files.
- For URLs, skip the CLI scan (it requires Puppeteer). Use browser visualization instead.
- For large directories (200+ scannable files), use `--fast` (regex-only, skips jsdom).
- For 500+ files, narrow the scope or ask the user.
- Exit code 0 = clean, 2 = findings; see the sketch below.
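
A minimal sketch of this step, assuming `--json` writes its report to stdout; the `src/components` path is a stand-in for the actual target:

```bash
# Hypothetical target path; substitute the files or directory under review.
npx impeccable --json src/components > impeccable-findings.json
status=$?

if [ "$status" -eq 2 ]; then
  echo "Findings detected; review impeccable-findings.json"
elif [ "$status" -eq 0 ]; then
  echo "No findings"
else
  echo "Scan failed with exit code $status"
fi
```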

**Browser visualization** (when browser automation tools are available AND the target is a viewable page):

The overlay is a **visual aid for the user**. It highlights issues directly in their browser. Do NOT scroll through the page to screenshot overlays. Instead, read the console output to get the results programmatically.

1. **Start the live detection server**:
   ```bash
   npx impeccable live &
   ```
   Note the port printed to stdout (auto-assigned). Use `--port=PORT` to pin it to a specific port.
2. **Create a new tab** and navigate to the page (use the dev server URL for local files, or the direct URL). Do not reuse existing tabs.
3. **Label the tab** via `javascript_tool` so the user can distinguish it:
   ```javascript
   document.title = '[Human] ' + document.title;
   ```
4. **Scroll to top**: ensure the page is scrolled to the very top before injection. For example:
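   A minimal one-liner via `javascript_tool`, using the standard DOM API:
   ```javascript
   // Scroll to the very top before injecting the detector
   window.scrollTo(0, 0);
   ```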
5. **Inject** via `javascript_tool` (replace PORT with the port from step 1):
   ```javascript
   const s = document.createElement('script'); s.src = 'http://localhost:PORT/detect.js'; document.head.appendChild(s);
   ```
6. Wait 2-3 seconds for the detector to render overlays.
7. **Read results from console** using `read_console_messages` with pattern `impeccable`. The detector logs all findings with the `[impeccable]` prefix. Do NOT scroll through the page to take screenshots of the overlays.
8. **Cleanup**: Stop the live server when done:
   ```bash
   npx impeccable live stop
   ```

For multi-view targets, inject on 3-5 representative pages. If injection fails, continue with CLI results only.

Return: CLI findings (JSON), browser console findings (if applicable), and any false positives noted.

### Step 3: Generate Combined Critique Report

Synthesize both assessments into a single report. Do NOT simply concatenate. Weave the findings together, noting where the LLM review and detector agree, where the detector caught issues the LLM missed, and where detector findings are false positives.

Structure your feedback as a design director would:

#### Design Health Score
> *Consult [heuristics-scoring](reference/heuristics-scoring.md)*

Present scores for Nielsen's 10 heuristics as a table:

| # | Heuristic | Score | Key Issue |
|---|-----------|-------|-----------|
| 1 | Visibility of System Status | ? | [specific finding or "n/a" if solid] |
| 2 | Match System / Real World | ? | |
| 3 | User Control and Freedom | ? | |
| 4 | Consistency and Standards | ? | |
| 5 | Error Prevention | ? | |
| 6 | Recognition Rather Than Recall | ? | |
| 7 | Flexibility and Efficiency | ? | |
| 8 | Aesthetic and Minimalist Design | ? | |
| 9 | Error Recovery | ? | |
| 10 | Help and Documentation | ? | |
| **Total** | | **??/40** | **[Rating band]** |

Be honest with scores. A 4 means genuinely excellent. Most real interfaces score 20-32.

#### Anti-Patterns Verdict

**Start here.** Does this look AI-generated?

**LLM assessment**: Your own evaluation of AI slop tells. Cover overall aesthetic feel, layout sameness, generic composition, missed opportunities for personality.

**Deterministic scan**: Summarize what the automated detector found, with counts and file locations. Note any additional issues the detector caught that you missed, and flag any false positives.

**Visual overlays** (if browser was used): Tell the user that overlays are now visible in the **[Human]** tab in their browser, highlighting the detected issues. Summarize what the console output reported.

#### Overall Impression
A brief gut reaction: what works, what doesn't, and the single biggest opportunity.

#### What's Working
Highlight 2-3 things done well. Be specific about why they work.

#### Priority Issues
The 3-5 most impactful design problems, ordered by importance.

For each issue, tag with **P0-P3 severity** (consult [heuristics-scoring](reference/heuristics-scoring.md) for severity definitions):
- **[P?] What**: Name the problem clearly
- **Why it matters**: How this hurts users or undermines goals
- **Fix**: What to do about it (be concrete)
- **Suggested command**: Which command could address this (from: /animate, /quieter, /shape, /optimize, /adapt, /clarify, /layout, /distill, /delight, /audit, /harden, /polish, /bolder, /typeset, /critique, /colorize, /overdrive)

#### Persona Red Flags
> *Consult [personas](reference/personas.md)*

Auto-select 2-3 personas most relevant to this interface type (use the selection table in the reference). If `AGENTS.md` contains a `## Design Context` section from `impeccable teach`, also generate 1-2 project-specific personas from the audience/brand info.

For each selected persona, walk through the primary user action and list specific red flags found:

**Alex (Power User)**: No keyboard shortcuts detected. Form requires 8 clicks for primary action. Forced modal onboarding. High abandonment risk.

**Jordan (First-Timer)**: Icon-only nav in sidebar. Technical jargon in error messages ("404 Not Found"). No visible help. Will abandon at step 2.

Be specific. Name the exact elements and interactions that fail each persona. Don't write generic persona descriptions; write what broke for them.

#### Minor Observations
Quick notes on smaller issues worth addressing.

#### Questions to Consider
Provocative questions that might unlock better solutions:
- "What if the primary action were more prominent?"
- "Does this need to feel this complex?"
- "What would a confident version of this look like?"

**Remember**:
- Be direct. Vague feedback wastes everyone's time.
- Be specific. "The submit button," not "some elements."
- Say what's wrong AND why it matters to users.
- Give concrete suggestions, not just "consider exploring..."
- Prioritize ruthlessly. If everything is important, nothing is.
- Don't soften criticism. Developers need honest feedback to ship great design.

### Step 4: Ask the User

**After presenting findings**, pose targeted questions based on what was actually found, and ask the user directly to clarify anything you cannot infer. These answers will shape the action plan.

Ask questions along these lines (adapt to the specific findings; do NOT ask generic questions):

1. **Priority direction**: Based on the issues found, ask which category matters most to the user right now. For example: "I found problems with visual hierarchy, color usage, and information overload. Which area should we tackle first?" Offer the top 2-3 issue categories as options.

2. **Design intent**: If the critique found a tonal mismatch, ask whether it was intentional. For example: "The interface feels clinical and corporate. Is that the intended tone, or should it feel warmer/bolder/more playful?" Offer 2-3 tonal directions as options based on what would fix the issues found.

3. **Scope**: Ask how much the user wants to take on. For example: "I found N issues. Want to address everything, or focus on the top 3?" Offer scope options like "Top 3 only", "All issues", "Critical issues only".

4. **Constraints** (optional; only ask if relevant): If the findings touch many areas, ask if anything is off-limits. For example: "Should any sections stay as-is?" This prevents the plan from touching things the user considers done.

**Rules for questions**:
- Every question must reference specific findings from the report. Never ask generic "who is your audience?" questions.
- Keep it to 2-4 questions maximum. Respect the user's time.
- Offer concrete options, not open-ended prompts.
- If findings are straightforward (e.g., only 1-2 clear issues), skip questions and go directly to Step 5.

### Step 5: Recommended Actions

**After receiving the user's answers**, present a prioritized action summary reflecting the user's priorities and scope from Step 4.

#### Action Summary

List recommended commands in priority order, based on the user's answers:

1. **`/command-name`**: Brief description of what to fix (specific context from critique findings)
2. **`/command-name`**: Brief description (specific context)
...

**Rules for recommendations**:
- Only recommend commands from: /animate, /quieter, /shape, /optimize, /adapt, /clarify, /layout, /distill, /delight, /audit, /harden, /polish, /bolder, /typeset, /critique, /colorize, /overdrive
- Order by the user's stated priorities first, then by impact
- Each item's description should carry enough context that the command knows what to focus on
- Map each Priority Issue to the appropriate command
- Skip commands that would address zero issues
- If the user chose a limited scope, only include items within that scope
- If the user marked areas as off-limits, exclude commands that would touch those areas
- End with `/polish` as the final step if any fixes were recommended

After presenting the summary, tell the user:

> You can ask me to run these one at a time, all at once, or in any order you prefer.
>
> Re-run `/critique` after fixes to see your score improve.