SKILL.md

---
name: critique
description: Evaluate design from a UX perspective, assessing visual hierarchy, information architecture, emotional resonance, cognitive load, and overall quality with quantitative scoring, persona-based testing, automated anti-pattern detection, and actionable feedback. Use when the user asks to review, critique, evaluate, or give feedback on a design or component.
version: 2.1.1
user-invocable: true
argument-hint: "[area (feature, page, component...)]"
allowed-tools:
  - Bash(npx impeccable *)
---

## STEPS

### Step 1: Preparation

Invoke /impeccable, which contains design principles, anti-patterns, and the **Context Gathering Protocol**. Follow the protocol before proceeding. If no design context exists yet, you MUST run /impeccable teach first. Additionally, find out what the interface is trying to accomplish.

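A minimal pre-flight sketch for the "no design context" check, assuming the layout described later in this skill (context from `/impeccable teach` stored under a `## Design Context` heading in `AGENTS.md`):

```shell
# Bail out early when no design context has been captured yet.
# The AGENTS.md path and heading name are assumptions about the project layout.
if grep -q '^## Design Context' AGENTS.md 2>/dev/null; then
  has_context=yes
else
  has_context=no
  echo "No design context found: run /impeccable teach first."
fi
```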
### Step 2: Gather Assessments

Launch two independent assessments. **Neither may see the other's output**, to avoid bias.

You SHOULD delegate each assessment to a separate sub-agent for independence. Use your environment's agent spawning mechanism (e.g., Claude Code's `Agent` tool, or Codex's subagent spawning). Sub-agents should return their findings as structured text. Do NOT output findings to the user yet.

If sub-agents are not available in the current environment, complete each assessment sequentially, writing findings to internal notes before proceeding.

**Tab isolation**: When browser automation is available, each assessment MUST create its own new tab. Never reuse an existing tab, even if one is already open at the correct URL. This prevents the two assessments from interfering with each other's page state.

#### Assessment A: LLM Design Review

Read the relevant source files (HTML, CSS, JS/TS) and, if browser automation is available, visually inspect the live page. **Create a new tab** for this; do not reuse existing tabs. After navigation, label the tab by setting the document title:
```javascript
document.title = '[LLM] ' + document.title;
```
Think like a design director. Evaluate:

**AI Slop Detection (CRITICAL)**: Does this look like every other AI-generated interface? Review against ALL **DON'T** guidelines in the impeccable skill. Check for the AI color palette, gradient text, dark glows, glassmorphism, hero metric layouts, identical card grids, generic fonts, and all other tells. **The test**: If someone said "AI made this," would you believe them immediately?

**Holistic Design Review**:
- Visual hierarchy (eye flow, primary action clarity)
- Information architecture (structure, grouping, cognitive load)
- Emotional resonance (does it match brand and audience?)
- Discoverability (are interactive elements obvious?)
- Composition (balance, whitespace, rhythm)
- Typography (hierarchy, readability, font choices)
- Color (purposeful use, cohesion, accessibility)
- States & edge cases (empty, loading, error, success)
- Microcopy (clarity, tone, helpfulness)

**Cognitive Load** (consult [cognitive-load](reference/cognitive-load.md)):
- Run the 8-item cognitive load checklist. Report the failure count: 0-1 = low (good), 2-3 = moderate, 4+ = critical.
- Count visible options at each decision point. If >4, flag it.
- Check for progressive disclosure: is complexity revealed only when needed?

**Emotional Journey**:
- What emotion does this interface evoke? Is that intentional?
- **Peak-end rule**: Is the most intense moment positive? Does the experience end well?
- **Emotional valleys**: Check for anxiety spikes at high-stakes moments (payment, delete, commit). Are there design interventions (progress indicators, reassurance copy, undo options)?

**Nielsen's Heuristics** (consult [heuristics-scoring](reference/heuristics-scoring.md)):
Score each of the 10 heuristics 0-4. This scoring will be presented in the report.

Return structured findings covering: AI slop verdict, heuristic scores, cognitive load assessment, what's working (2-3 items), priority issues (3-5 with what/why/fix), minor observations, and provocative questions.

#### Assessment B: Automated Detection

Run the bundled deterministic detector, which flags 25 specific patterns (AI slop tells + general design quality).

**CLI scan**:
```bash
npx impeccable --json [--fast] [target]
```

- Pass HTML/JSX/TSX/Vue/Svelte files or directories as `[target]` (anything with markup). Do not pass CSS-only files.
- For URLs, skip the CLI scan (URL scanning requires Puppeteer); use browser visualization instead.
- For large directories (200+ scannable files), use `--fast` (regex-only, skips jsdom).
- For 500+ files, narrow scope or ask the user.
- Exit code 0 = clean, 2 = findings.

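As a sketch, the scan-and-branch logic might look like this (the `src/` target and `findings.json` filename are placeholders; the exit-code contract is the one documented above):

```shell
# Run the deterministic scan on a placeholder target and branch on exit code.
# 0 = clean, 2 = findings; anything else means the scan itself failed.
npx impeccable --json --fast src/ > findings.json
status=$?
if [ "$status" -eq 0 ]; then
  echo "clean: no anti-patterns detected"
elif [ "$status" -eq 2 ]; then
  echo "findings captured in findings.json"
else
  echo "scan error (exit $status)" >&2
fi
```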
**Browser visualization** (when browser automation tools are available AND the target is a viewable page):

The overlay is a **visual aid for the user**. It highlights issues directly in their browser. Do NOT scroll through the page to screenshot overlays. Instead, read the console output to get the results programmatically.

1. **Start the live detection server**:
   ```bash
   npx impeccable live &
   ```
   Note the port printed to stdout (auto-assigned). Use `--port=PORT` to pin it.
2. **Create a new tab** and navigate to the page (use the dev server URL for local files, or the direct URL). Do not reuse existing tabs.
3. **Label the tab** via `javascript_tool` so the user can distinguish it:
   ```javascript
   document.title = '[Human] ' + document.title;
   ```
4. **Scroll to top** so the page is at the very top before injection.
5. **Inject** via `javascript_tool` (replace PORT with the port from step 1):
   ```javascript
   const s = document.createElement('script'); s.src = 'http://localhost:PORT/detect.js'; document.head.appendChild(s);
   ```
6. Wait 2-3 seconds for the detector to render overlays.
7. **Read results from console** using `read_console_messages` with pattern `impeccable`. The detector logs all findings with the `[impeccable]` prefix. Do NOT scroll through the page to take screenshots of the overlays.
8. **Cleanup**: Stop the live server when done:
   ```bash
   npx impeccable live stop
   ```

For multi-view targets, inject on 3-5 representative pages. If injection fails, continue with CLI results only.
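Steps 1 and 8 can be bracketed in the shell like so. This is a sketch: the pinned port 4321 is an arbitrary choice, used only so the injection snippet can hard-code the URL; `--port` and `live stop` are the flags documented in the steps above.

```shell
# Start the live detection server on a pinned port, review, then stop it.
npx impeccable live --port=4321 &
LIVE_PID=$!
# ... create the [Human] tab, inject http://localhost:4321/detect.js,
# wait for overlays to render, read the [impeccable] console messages ...
npx impeccable live stop || echo "live server already stopped" >&2
```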

Return: CLI findings (JSON), browser console findings (if applicable), and any false positives noted.

### Step 3: Generate Combined Critique Report

Synthesize both assessments into a single report. Do NOT simply concatenate. Weave the findings together, noting where the LLM review and detector agree, where the detector caught issues the LLM missed, and where detector findings are false positives.

Structure your feedback as a design director would:

#### Design Health Score
> *Consult [heuristics-scoring](reference/heuristics-scoring.md)*

Present the scores for Nielsen's 10 heuristics as a table:

| # | Heuristic | Score | Key Issue |
|---|-----------|-------|-----------|
| 1 | Visibility of System Status | ? | [specific finding or "n/a" if solid] |
| 2 | Match System / Real World | ? | |
| 3 | User Control and Freedom | ? | |
| 4 | Consistency and Standards | ? | |
| 5 | Error Prevention | ? | |
| 6 | Recognition Rather Than Recall | ? | |
| 7 | Flexibility and Efficiency | ? | |
| 8 | Aesthetic and Minimalist Design | ? | |
| 9 | Error Recovery | ? | |
| 10 | Help and Documentation | ? | |
| **Total** | | **??/40** | **[Rating band]** |

Be honest with scores. A 4 means genuinely excellent. Most real interfaces score 20-32.

#### Anti-Patterns Verdict

**Start here.** Does this look AI-generated?

**LLM assessment**: Your own evaluation of AI slop tells. Cover overall aesthetic feel, layout sameness, generic composition, and missed opportunities for personality.

**Deterministic scan**: Summarize what the automated detector found, with counts and file locations. Note any additional issues the detector caught that you missed, and flag any false positives.

**Visual overlays** (if the browser was used): Tell the user that overlays are now visible in the **[Human]** tab in their browser, highlighting the detected issues. Summarize what the console output reported.

#### Overall Impression
A brief gut reaction: what works, what doesn't, and the single biggest opportunity.

#### What's Working
Highlight 2-3 things done well. Be specific about why they work.

#### Priority Issues
The 3-5 most impactful design problems, ordered by importance.

For each issue, tag it with a **P0-P3 severity** (consult [heuristics-scoring](reference/heuristics-scoring.md) for severity definitions):
- **[P?] What**: Name the problem clearly
- **Why it matters**: How this hurts users or undermines goals
- **Fix**: What to do about it (be concrete)
- **Suggested command**: Which command could address this (from: /animate, /quieter, /shape, /optimize, /adapt, /clarify, /layout, /distill, /delight, /audit, /harden, /polish, /bolder, /typeset, /critique, /colorize, /overdrive)

#### Persona Red Flags
> *Consult [personas](reference/personas.md)*

Auto-select the 2-3 personas most relevant to this interface type (use the selection table in the reference). If `AGENTS.md` contains a `## Design Context` section from `impeccable teach`, also generate 1-2 project-specific personas from the audience/brand info.

For each selected persona, walk through the primary user action and list the specific red flags found, for example:

**Alex (Power User)**: No keyboard shortcuts detected. Form requires 8 clicks for primary action. Forced modal onboarding. High abandonment risk.

**Jordan (First-Timer)**: Icon-only nav in sidebar. Technical jargon in error messages ("404 Not Found"). No visible help. Will abandon at step 2.

Be specific. Name the exact elements and interactions that fail each persona. Don't write generic persona descriptions; write what broke for them.

#### Minor Observations
Quick notes on smaller issues worth addressing.

#### Questions to Consider
Provocative questions that might unlock better solutions:
- "What if the primary action were more prominent?"
- "Does this need to feel this complex?"
- "What would a confident version of this look like?"

**Remember**:
- Be direct. Vague feedback wastes everyone's time.
- Be specific. "The submit button," not "some elements."
- Say what's wrong AND why it matters to users.
- Give concrete suggestions, not just "consider exploring..."
- Prioritize ruthlessly. If everything is important, nothing is.
- Don't soften criticism. Developers need honest feedback to ship great design.

182
183**After presenting findings**, use targeted questions based on what was actually found. STOP and call the `question` tool to clarify. These answers will shape the action plan.
184
185Ask questions along these lines (adapt to the specific findings; do NOT ask generic questions):
186
1871. **Priority direction**: Based on the issues found, ask which category matters most to the user right now. For example: "I found problems with visual hierarchy, color usage, and information overload. Which area should we tackle first?" Offer the top 2-3 issue categories as options.
188
1892. **Design intent**: If the critique found a tonal mismatch, ask whether it was intentional. For example: "The interface feels clinical and corporate. Is that the intended tone, or should it feel warmer/bolder/more playful?" Offer 2-3 tonal directions as options based on what would fix the issues found.
190
1913. **Scope**: Ask how much the user wants to take on. For example: "I found N issues. Want to address everything, or focus on the top 3?" Offer scope options like "Top 3 only", "All issues", "Critical issues only".
192
1934. **Constraints** (optional; only ask if relevant): If the findings touch many areas, ask if anything is off-limits. For example: "Should any sections stay as-is?" This prevents the plan from touching things the user considers done.
194
195**Rules for questions**:
196- Every question must reference specific findings from the report. Never ask generic "who is your audience?" questions.
197- Keep it to 2-4 questions maximum. Respect the user's time.
198- Offer concrete options, not open-ended prompts.
199- If findings are straightforward (e.g., only 1-2 clear issues), skip questions and go directly to Step 5.
200
### Step 5: Recommended Actions

**After receiving the user's answers**, present a prioritized action summary reflecting the user's priorities and scope from Step 4.

#### Action Summary

List recommended commands in priority order, based on the user's answers:

1. **`/command-name`**: Brief description of what to fix (specific context from critique findings)
2. **`/command-name`**: Brief description (specific context)
...

**Rules for recommendations**:
- Only recommend commands from: /animate, /quieter, /shape, /optimize, /adapt, /clarify, /layout, /distill, /delight, /audit, /harden, /polish, /bolder, /typeset, /critique, /colorize, /overdrive
- Order by the user's stated priorities first, then by impact
- Each item's description should carry enough context that the command knows what to focus on
- Map each Priority Issue to the appropriate command
- Skip commands that would address zero issues
- If the user chose a limited scope, only include items within that scope
- If the user marked areas as off-limits, exclude commands that would touch those areas
- End with `/polish` as the final step if any fixes were recommended

After presenting the summary, tell the user:

> You can ask me to run these one at a time, all at once, or in any order you prefer.
>
> Re-run `/critique` after fixes to see your score improve.