SKILL.md

---
name: critique
description: Evaluate design from a UX perspective, assessing visual hierarchy, information architecture, emotional resonance, cognitive load, and overall quality with quantitative scoring, persona-based testing, automated anti-pattern detection, and actionable feedback. Use when the user asks to review, critique, evaluate, or give feedback on a design or component.
version: 2.1.1
argument-hint: "[area (feature, page, component...)]"
---

## STEPS

### Step 1: Preparation

Invoke $impeccable, which contains design principles, anti-patterns, and the **Context Gathering Protocol**. Follow the protocol before proceeding. If no design context exists yet, you MUST run $impeccable teach first. Additionally, establish what the interface is trying to accomplish.

### Step 2: Gather Assessments

Launch two independent assessments. **Neither may see the other's output**, to avoid bias.

You SHOULD delegate each assessment to a separate sub-agent for independence. Use your environment's agent spawning mechanism (e.g., Claude Code's `Agent` tool, or Codex's subagent spawning). Sub-agents should return their findings as structured text. Do NOT output findings to the user yet.

If sub-agents are not available in the current environment, complete each assessment sequentially, writing findings to internal notes before proceeding.

**Tab isolation**: When browser automation is available, each assessment MUST create its own new tab. Never reuse an existing tab, even if one is already open at the correct URL. This prevents the two assessments from interfering with each other's page state.

#### Assessment A: LLM Design Review

Read the relevant source files (HTML, CSS, JS/TS) and, if browser automation is available, visually inspect the live page. **Create a new tab** for this; do not reuse existing tabs. After navigation, label the tab by setting the document title:
```javascript
document.title = '[LLM] ' + document.title;
```
Think like a design director. Evaluate:

**AI Slop Detection (CRITICAL)**: Does this look like every other AI-generated interface? Review against ALL **DON'T** guidelines in the impeccable skill. Check for AI color palette, gradient text, dark glows, glassmorphism, hero metric layouts, identical card grids, generic fonts, and all other tells. **The test**: If someone said "AI made this," would you believe them immediately?

**Holistic Design Review**:
- Visual hierarchy: eye flow, primary action clarity
- Information architecture: structure, grouping, cognitive load
- Emotional resonance: does it match brand and audience?
- Discoverability: are interactive elements obvious?
- Composition: balance, whitespace, rhythm
- Typography: hierarchy, readability, font choices
- Color: purposeful use, cohesion, accessibility
- States & edge cases: empty, loading, error, success
- Microcopy: clarity, tone, helpfulness

**Cognitive Load** (consult [cognitive-load](reference/cognitive-load.md)):
- Run the 8-item cognitive load checklist. Report failure count: 0-1 = low (good), 2-3 = moderate, 4+ = critical.
- Count visible options at each decision point. If >4, flag it (a counting sketch follows this list).
- Check for progressive disclosure: is complexity revealed only when needed?
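
When browser automation is available, the option count can be sanity-checked in the page itself. A rough sketch; the selector is a placeholder for whatever container holds the decision point, not a hook this skill defines:
```javascript
// Hypothetical selector: adjust to the actual decision point's container.
const options = document.querySelectorAll('.primary-nav a, .primary-nav button');
console.log(options.length + ' visible options' + (options.length > 4 ? ' (flag it)' : ''));
```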

**Emotional Journey**:
- What emotion does this interface evoke? Is that intentional?
- **Peak-end rule**: Is the most intense moment positive? Does the experience end well?
- **Emotional valleys**: Check for anxiety spikes at high-stakes moments (payment, delete, commit). Are there design interventions (progress indicators, reassurance copy, undo options)?

**Nielsen's Heuristics** (consult [heuristics-scoring](reference/heuristics-scoring.md)):
Score each of the 10 heuristics 0-4. This scoring will be presented in the report.

Return structured findings covering: AI slop verdict, heuristic scores, cognitive load assessment, what's working (2-3 items), priority issues (3-5 with what/why/fix), minor observations, and provocative questions.
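
One possible shape for that structured return, sketched below. The field names are illustrative, not a schema this skill defines:
```javascript
// Illustrative only: field names are assumptions, not a fixed schema.
const assessmentA = {
  aiSlopVerdict: '...',
  heuristicScores: { /* one 0-4 score per Nielsen heuristic */ },
  cognitiveLoad: { checklistFailures: 2, level: 'moderate' },
  working: ['...', '...'],
  priorityIssues: [{ what: '...', why: '...', fix: '...' }],
  minorObservations: ['...'],
  provocativeQuestions: ['...']
};
```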

#### Assessment B: Automated Detection

Run the bundled deterministic detector, which flags 25 specific patterns (AI slop tells + general design quality).

**CLI scan**:
```bash
npx impeccable --json [--fast] [target]
```

- Pass HTML/JSX/TSX/Vue/Svelte files or directories as `[target]` (anything with markup). Do not pass CSS-only files.
- For URLs, skip the CLI scan (it requires Puppeteer). Use browser visualization instead.
- For large directories (200+ scannable files), use `--fast` (regex-only, skips jsdom).
- For 500+ files, narrow scope or ask the user.
- Exit code 0 = clean, 2 = findings.
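
A hypothetical invocation, assuming a `src/components` directory exists; only the `--json` flag and the 0/2 exit codes above are documented behavior:
```bash
# Illustrative scan; src/components is a placeholder path.
npx impeccable --json src/components > impeccable-findings.json
if [ $? -eq 2 ]; then
  echo "findings recorded in impeccable-findings.json"
fi
```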

**Browser visualization** (when browser automation tools are available AND the target is a viewable page):

The overlay is a **visual aid for the user**. It highlights issues directly in their browser. Do NOT scroll through the page to screenshot overlays. Instead, read the console output to get the results programmatically.

1. **Start the live detection server**:
   ```bash
   npx impeccable live &
   ```
   Note the port printed to stdout (auto-assigned). Use `--port=PORT` to pin it.
2. **Create a new tab** and navigate to the page (use the dev server URL for local files, or the direct URL). Do not reuse existing tabs.
3. **Label the tab** via `javascript_tool` so the user can distinguish it:
   ```javascript
   document.title = '[Human] ' + document.title;
   ```
4. **Scroll to top** so the page is at the very top before injection.
5. **Inject** via `javascript_tool`, replacing PORT with the port from step 1 (a combined single-call sketch of steps 3-5 follows this list):
   ```javascript
   const s = document.createElement('script'); s.src = 'http://localhost:PORT/detect.js'; document.head.appendChild(s);
   ```
6. Wait 2-3 seconds for the detector to render overlays.
7. **Read results from console** using `read_console_messages` with pattern `impeccable`. The detector logs all findings with the `[impeccable]` prefix.
8. **Cleanup**: Stop the live server when done:
   ```bash
   npx impeccable live stop
   ```
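If the environment supports it, steps 3-5 can go out as a single `javascript_tool` call. A minimal sketch, reusing only the snippets above (PORT still comes from step 1):
```javascript
// Label the tab, reset scroll, then inject the detector script.
document.title = '[Human] ' + document.title;
window.scrollTo(0, 0);
const s = document.createElement('script');
s.src = 'http://localhost:PORT/detect.js'; // replace PORT before running
document.head.appendChild(s);
```
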
For multi-view targets, inject on 3-5 representative pages. If injection fails, continue with CLI results only.

Return: CLI findings (JSON), browser console findings (if applicable), and any false positives noted.
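
As with Assessment A, a sketch of one possible return shape; field names are illustrative:
```javascript
// Illustrative only: not a fixed schema.
const assessmentB = {
  cliFindings: [],     // parsed objects from the CLI's --json output
  consoleFindings: [], // '[impeccable]'-prefixed lines, when injection ran
  falsePositives: []   // detector hits judged not to be real issues
};
```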

### Step 3: Generate Combined Critique Report

Synthesize both assessments into a single report. Do NOT simply concatenate. Weave the findings together, noting where the LLM review and detector agree, where the detector caught issues the LLM missed, and where detector findings are false positives.

Structure your feedback as a design director would:

#### Design Health Score
> *Consult [heuristics-scoring](reference/heuristics-scoring.md)*

Present scores for Nielsen's 10 heuristics as a table:

| # | Heuristic | Score | Key Issue |
|---|-----------|-------|-----------|
| 1 | Visibility of System Status | ? | [specific finding or "n/a" if solid] |
| 2 | Match System / Real World | ? | |
| 3 | User Control and Freedom | ? | |
| 4 | Consistency and Standards | ? | |
| 5 | Error Prevention | ? | |
| 6 | Recognition Rather Than Recall | ? | |
| 7 | Flexibility and Efficiency | ? | |
| 8 | Aesthetic and Minimalist Design | ? | |
| 9 | Error Recovery | ? | |
| 10 | Help and Documentation | ? | |
| **Total** | | **??/40** | **[Rating band]** |

Be honest with scores. A 4 means genuinely excellent. Most real interfaces score 20-32.

#### Anti-Patterns Verdict

**Lead with the verdict**: Does this look AI-generated?

**LLM assessment**: Your own evaluation of AI slop tells. Cover overall aesthetic feel, layout sameness, generic composition, and missed opportunities for personality.

**Deterministic scan**: Summarize what the automated detector found, with counts and file locations. Note any additional issues the detector caught that you missed, and flag any false positives.

**Visual overlays** (if browser was used): Tell the user that overlays are now visible in the **[Human]** tab in their browser, highlighting the detected issues. Summarize what the console output reported.

#### Overall Impression
A brief gut reaction: what works, what doesn't, and the single biggest opportunity.

#### What's Working
Highlight 2-3 things done well. Be specific about why they work.

#### Priority Issues
The 3-5 most impactful design problems, ordered by importance.

For each issue, tag with **P0-P3 severity** (consult [heuristics-scoring](reference/heuristics-scoring.md) for severity definitions):
- **[P?] What**: Name the problem clearly
- **Why it matters**: How this hurts users or undermines goals
- **Fix**: What to do about it (be concrete)
- **Suggested command**: Which command could address this (from: $animate, $quieter, $shape, $optimize, $adapt, $clarify, $layout, $distill, $delight, $audit, $harden, $polish, $bolder, $typeset, $critique, $colorize, $overdrive)

#### Persona Red Flags
> *Consult [personas](reference/personas.md)*

Auto-select 2-3 personas most relevant to this interface type (use the selection table in the reference). If `AGENTS.md` contains a `## Design Context` section from `impeccable teach`, also generate 1-2 project-specific personas from the audience/brand info.

For each selected persona, walk through the primary user action and list specific red flags found:

**Alex (Power User)**: No keyboard shortcuts detected. Form requires 8 clicks for primary action. Forced modal onboarding. High abandonment risk.

**Jordan (First-Timer)**: Icon-only nav in sidebar. Technical jargon in error messages ("404 Not Found"). No visible help. Will abandon at step 2.

Be specific. Name the exact elements and interactions that fail each persona. Don't write generic persona descriptions; write what broke for them.

#### Minor Observations
Quick notes on smaller issues worth addressing.

#### Questions to Consider
Provocative questions that might unlock better solutions:
- "What if the primary action were more prominent?"
- "Does this need to feel this complex?"
- "What would a confident version of this look like?"

**Remember**:
- Be direct. Vague feedback wastes everyone's time.
- Be specific. "The submit button," not "some elements."
- Say what's wrong AND why it matters to users.
- Give concrete suggestions, not just "consider exploring..."
- Prioritize ruthlessly. If everything is important, nothing is.
- Don't soften criticism. Developers need honest feedback to ship great design.

### Step 4: Ask the User

**After presenting findings**, ask targeted questions based on what was actually found, and ask the user directly to clarify what you cannot infer. These answers will shape the action plan.

Ask questions along these lines (adapt to the specific findings; do NOT ask generic questions):

1. **Priority direction**: Based on the issues found, ask which category matters most to the user right now. For example: "I found problems with visual hierarchy, color usage, and information overload. Which area should we tackle first?" Offer the top 2-3 issue categories as options.

2. **Design intent**: If the critique found a tonal mismatch, ask whether it was intentional. For example: "The interface feels clinical and corporate. Is that the intended tone, or should it feel warmer/bolder/more playful?" Offer 2-3 tonal directions as options based on what would fix the issues found.

3. **Scope**: Ask how much the user wants to take on. For example: "I found N issues. Want to address everything, or focus on the top 3?" Offer scope options like "Top 3 only", "All issues", "Critical issues only".

4. **Constraints** (optional; only ask if relevant): If the findings touch many areas, ask if anything is off-limits. For example: "Should any sections stay as-is?" This prevents the plan from touching things the user considers done.

**Rules for questions**:
- Every question must reference specific findings from the report. Never ask generic "who is your audience?" questions.
- Keep it to 2-4 questions maximum. Respect the user's time.
- Offer concrete options, not open-ended prompts.
- If findings are straightforward (e.g., only 1-2 clear issues), skip questions and go directly to Step 5.

### Step 5: Recommended Actions

**After receiving the user's answers**, present a prioritized action summary reflecting the user's priorities and scope from Step 4.

#### Action Summary

List recommended commands in priority order, based on the user's answers:

1. **`$command-name`**: Brief description of what to fix (specific context from critique findings)
2. **`$command-name`**: Brief description (specific context)
...

**Rules for recommendations**:
- Only recommend commands from: $animate, $quieter, $shape, $optimize, $adapt, $clarify, $layout, $distill, $delight, $audit, $harden, $polish, $bolder, $typeset, $critique, $colorize, $overdrive
- Order by the user's stated priorities first, then by impact
- Each item's description should carry enough context that the command knows what to focus on
- Map each Priority Issue to the appropriate command
- Skip commands that would address zero issues
- If the user chose a limited scope, only include items within that scope
- If the user marked areas as off-limits, exclude commands that would touch those areas
- End with `$polish` as the final step if any fixes were recommended

After presenting the summary, tell the user:

> You can ask me to run these one at a time, all at once, or in any order you prefer.
>
> Re-run `$critique` after fixes to see your score improve.