---
name: auditing-repositories
description: Audits open source repositories for security, privacy, and unexpected behavior. Use when asked to audit, review, or assess a repo or codebase for vulnerabilities, privacy or telemetry concerns, suspicious behavior, dependency or supply chain risk, or OSS due diligence.
license: GPL-3.0-or-later
metadata:
  author: Amolith <amolith@secluded.site>
---

Be objective and evidence-based: acknowledge what's well-designed, and don't invent issues to appear thorough. Each finding needs a source→sink path, unsafe default, or concrete misconfiguration as evidence.

Guardrails

  • Prefer static inspection. Don't execute untrusted artifacts. If running anything (linters, npm audit, cargo audit), document exactly what and why.
  • Avoid speculative findings. Each issue needs concrete evidence, not hypotheticals.
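As a concrete sketch of static-only inspection: an npm package's lifecycle scripts can be read without running any install step. The `package.json` below is a hypothetical fixture created inline for illustration, not a real project:

```shell
# Hypothetical fixture: a package.json declaring a lifecycle script.
cat > package.json <<'EOF'
{
  "name": "example",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node scripts/setup.js"
  }
}
EOF

# Static inspection: read lifecycle hooks without executing anything.
# preinstall/postinstall/prepare are common supply-chain execution points.
grep -nE '"(preinstall|install|postinstall|prepare)"' package.json
```

Any hook found this way is a lead to read, not a script to run.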

Workflow

  1. Understand the project: Read README, docs, and config to learn what the software claims to do
  2. Identify the stack: Languages, frameworks, build systems, deployment targets
  3. Map attack surface: Locate entry points and trust boundaries (CLI args, HTTP routes, RPC handlers, webhooks, config/env, file format parsers, plugins, deserialization)
  4. Analyze systematically: Work through each analysis area below, starting with highest risk
  5. Check dependencies: Review dependency manifests for known vulnerabilities and supply chain risk
  6. Review build and release pipeline: Install scripts, CI workflows, build-time downloads, release signing
  7. LLM-specific checks: If the project involves LLMs, AI assistants, or MCP servers, always apply references/llm-security.md
  8. Write the report: Use the output format below; rate each finding

Assess from both an external-attacker and a malicious-insider perspective. Focus on practical, realistic threats, not theoretical edge cases requiring unrealistic exploitation scenarios.
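The attack-surface step can often be bootstrapped with a targeted search for framework entry points. A minimal sketch against a hypothetical Flask handler (fixture created inline; the pattern list would be adapted to the project's actual stack):

```shell
# Hypothetical fixture: a tiny Flask app with one route.
mkdir -p demo/src
cat > demo/src/app.py <<'EOF'
from flask import Flask, request
app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    # Untrusted input enters here.
    return request.files["f"].filename
EOF

# Entry points: route registrations and direct reads of request data
# mark where untrusted input crosses a trust boundary.
grep -rnE '@app\.route|request\.' demo/src
```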

Analysis Areas

Correctness / Unexpected Behavior

  • Does the software function as described? Any undocumented behaviors?
  • Verify claims in README/docs against actual implementation

Security

  • Authentication/authorization weaknesses
  • Input validation and injection (SQL, command, XSS, CSRF, SSRF, path traversal)
  • Insecure cryptographic implementations or misuse
  • Unsafe deserialization (YAML, JSON, pickle), archive extraction (zip-slip), template engines
  • Race conditions and concurrency issues
  • Hardcoded credentials or secrets
  • Insecure default configurations
  • Memory safety concerns (C/C++/Rust unsafe blocks): bounds checks, use-after-free, unsafe FFI
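Several of the checks above can be seeded with a sink sweep; per the evidence rule, a hit is only a lead until a source→sink path is shown. The file below is a hypothetical fixture:

```shell
# Hypothetical fixture containing two classic unsafe-deserialization sinks.
mkdir -p demo2
cat > demo2/loader.py <<'EOF'
import pickle
import yaml

def load_blob(blob):
    return pickle.loads(blob)  # executes arbitrary code on load

def parse(text):
    return yaml.load(text)     # unsafe default Loader in old PyYAML
EOF

# Sweep for deserialization sinks; each hit still needs evidence that
# attacker-controlled data can reach it.
grep -rnE 'pickle\.loads|yaml\.load\(' demo2
```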

Privacy

  • What data is collected, how it's stored, and where it's sent
  • Data retention policies and third-party sharing
  • Tracking mechanisms: cookies, persistent identifiers, telemetry, analytics SDKs
  • Whether data collection is disclosed to users and proportionate to functionality
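One cheap way to surface candidate telemetry paths is name-based triage of the dependency manifest, followed by tracing what each flagged package actually transmits. The manifest below is a hypothetical fixture and the SDK names are merely common examples:

```shell
# Hypothetical fixture: a requirements file mixing utility and analytics deps.
cat > deps-demo.txt <<'EOF'
requests==2.31.0
segment-analytics-python==2.2.3
sentry-sdk==1.40.0
EOF

# Name-based triage: flag likely telemetry/analytics packages, then
# verify what data each one collects and where it is sent.
grep -inE 'analytics|sentry|telemetry|mixpanel|amplitude' deps-demo.txt
```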

Network Communications

  • All endpoints contacted during operation
  • Authentication and encryption of connections
  • Types of data transmitted; interception risk
  • Whether each communication is necessary for core functionality
  • Unexpected or suspicious outbound connections
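Hardcoded endpoints can be enumerated straight from source; plaintext `http://` URLs are immediate interception-risk candidates. A sketch against a hypothetical client module:

```shell
# Hypothetical fixture: a client module with two hardcoded endpoints.
mkdir -p demo3
cat > demo3/client.py <<'EOF'
API_BASE = "https://api.example.com/v1"
UPDATE_CHECK = "http://updates.example.net/check"  # plaintext HTTP
EOF

# Extract every hardcoded URL; review each for necessity, encryption,
# and whether its traffic matches the project's disclosures.
grep -rnoE 'https?://[^" ]+' demo3
```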

Permissions and Sandboxing

  • System permissions requested (filesystem, network, hardware)
  • Whether permissions align with stated functionality
  • API scopes and integration breadth
  • Running as root, setuid, container escape potential
  • Background services, startup hooks, file permission modes
  • Temp directory safety, symlink attacks
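File-mode problems in a shipped tree can be found statically with `find`. The fixture below fakes a setuid helper and a world-writable directory for illustration (behavior assumes a POSIX filesystem):

```shell
# Hypothetical fixture: a tree with a setuid file and a world-writable dir.
mkdir -p demo4/drop
touch demo4/helper
chmod 4755 demo4/helper   # setuid bit, for illustration only
chmod 0777 demo4/drop     # world-writable

# Setuid files and world-writable directories are classic local
# privilege-escalation and symlink-attack footholds.
find demo4 -perm -4000 -o -type d -perm -0002
```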

Code Quality (Security-Relevant)

  • Error handling: fail-open vs fail-closed
  • Logging of sensitive data; secret leakage in logs or crash dumps
  • Debug endpoints exposing config or secrets
  • Input sanitization at boundaries
  • Secure coding patterns (or lack thereof)

Dependencies

  • Known vulnerabilities in direct and transitive dependencies
  • Supply chain risks (unmaintained, single-maintainer, typosquat)
  • Unnecessary dependencies increasing attack surface
  • Dependency pinning and verification practices
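Pinning practice is fully checkable without running anything: list every requirement that is not exactly pinned. Hypothetical fixture below:

```shell
# Hypothetical fixture: one pinned and one floating dependency.
cat > reqs-demo.txt <<'EOF'
flask==3.0.2
requests>=2.0
# comment line
EOF

# Anything not pinned with '==' is non-reproducible and widens the
# window for a malicious upstream release to slip in.
grep -vE '^[[:space:]]*(#|$)' reqs-demo.txt | grep -v '=='
```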

Build, Release, and Maintenance

  • Install scripts (postinstall, Makefile, setup.py, cargo build scripts, GitHub Actions)
  • Downloading binaries at build time
  • Signed releases, checksums, provenance
  • Versioning policy, security patch cadence, dependency update automation
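A recurring build-pipeline red flag is a remote download piped straight into a shell. A sketch against a hypothetical workflow file:

```shell
# Hypothetical fixture: a CI step that pipes a remote script into bash.
mkdir -p ci-demo
cat > ci-demo/workflow.yml <<'EOF'
steps:
  - run: curl -sL https://example.com/install.sh | bash
EOF

# Remote code piped to a shell bypasses review, pinning, and checksums;
# each hit should become a pinned, checksum-verified artifact instead.
grep -rnE '(curl|wget)[^|]*\|[[:space:]]*(ba|z)?sh' ci-demo
```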

Risk Rating

For each finding, provide:

  • Severity: Critical / High / Medium / Low / Informational
  • Impact: What breaks or leaks if exploited
  • Likelihood: How plausible is exploitation in practice
  • Complexity: How difficult is exploitation (Low / Medium / High)
  • Affected components: File paths with short code snippets as evidence

Output Format

# Due diligence for [Project Name]

## Executive summary

[2–3 sentences: overall posture, most significant findings]

## Scope and methodology

[What was analyzed, what approach was taken]

## Findings

### 🔴 Critical / High

#### [Finding title]

**Severity:** Critical | **Likelihood:** High | **Complexity:** Low
**Location:** `path/to/file:42`

**Description**

[What the issue is, with evidence]

**Impact**

[What an attacker could achieve]

**Recommendation**

[Specific fix]

### 🟠 Medium

[Same structure]

### 🟡 Low / Informational

[Same structure]

## Dependency summary

| Dependency  | Version | Known CVEs    | Risk   |
| ----------- | ------- | ------------- | ------ |
| example-lib | 1.2.3   | CVE-XXXX-YYYY | Medium |

## Recommendations

[Prioritized list of actions]

## Strengths

[What's done well; acknowledge good security practices]

## Not assessed

[Areas outside the scope of this audit or not reachable via static analysis]

Omit empty severity sections. If the project is well-built and secure, say so clearly.