dev docs · evaluate

Evaluate before you deploy

You're about to run code from the internet. Read it first. The bot is small enough that this is genuinely practical — about 3000 lines across 24 files. Here are three reading depths, plus how to use AI review tools effectively. The point isn't that most people audit — it's that the option is real.

On this page

  1. Three reading depths
  2. Verify the hash first
  3. The 5-minute smoke inspection
  4. The 30-minute tour
  5. The afternoon review
  6. Using AI review tools
  7. Suspicious-looking patterns (and why they're there)
  8. What to do after reviewing

Three reading depths

Pick one based on how much time you have. None of these is "wrong" — each gives you a useful answer. The deeper you go, the more confident your verdict.

5 minutes

Smoke inspection

Answer: does anything look immediately wrong?

  • Verify the SHA-256 hash
  • Open package.json, scan dependencies
  • Grep for fetch, http, network calls
  • Confirm no obfuscated code, no unexpected binaries
30 minutes

Guided tour

Answer: do I understand what it's doing?

  • Read src/api/server.ts end to end
  • Skim each chain adapter
  • Skim each platform adapter
  • Understand the two listeners
an afternoon

Full review

Answer: does it do what it claims, only that, and safely?

  • Read every file in src/
  • Read the tests — they document expected behavior
  • Run npm audit fresh
  • Run the three-gate: typecheck, lint, test

Verify the hash first

Before any code review, confirm the tarball you downloaded matches the published hash. An attacker who swapped the tarball in transit is a bigger risk than a subtle bug we missed, and a hash check catches that in one command.

# Linux
sha256sum justthetips-0.1.0-alpha.7.tar.gz

# macOS
shasum -a 256 justthetips-0.1.0-alpha.7.tar.gz

# Windows (PowerShell)
Get-FileHash justthetips-0.1.0-alpha.7.tar.gz -Algorithm SHA256

Compare the output to the hash published on the releases page. Or — if you want a double check that catches the case where someone compromised the releases page too — check the hash file published separately at /downloads/justthetips-0.1.0-alpha.7.tar.gz.sha256.

If the hashes don't match, don't extract the tarball. Re-download from the site. If it still doesn't match, something is compromised — tell us before running anything.
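If you'd rather script the comparison, here's a minimal Node sketch. The expected hash is a placeholder — paste the value from the releases page; the file-existence guard just keeps the snippet safe to run anywhere.

```typescript
import { createHash } from "node:crypto";
import { existsSync, readFileSync } from "node:fs";

// SHA-256 of raw bytes (or a UTF-8 string) as a lowercase hex digest.
function sha256Hex(data: Buffer | string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Placeholder — use the hash published on the releases page.
const expected = "<hash from the releases page>";
const tarball = "justthetips-0.1.0-alpha.7.tar.gz";

if (existsSync(tarball)) {
  const actual = sha256Hex(readFileSync(tarball));
  console.log(actual === expected ? "hash OK" : "HASH MISMATCH — do not extract");
}
```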

The 5-minute smoke inspection

Goal: rule out the obvious. You're not trying to find subtle bugs — you're checking that nothing is blatantly wrong at a glance.

Unpack and orient yourself

tar -xzf justthetips-0.1.0-alpha.7.tar.gz
cd justthetips
ls -la

What you should see:

justthetips/
├── package.json         # dependencies, scripts, version
├── tsconfig.json        # TypeScript config
├── biome.json           # linter config
├── config.example.env   # template for user config
├── README.md
├── LICENSE.md           # BSL 1.1
├── CONTRIBUTING.md
├── SECURITY.md
├── src/                 # the actual bot code
│   ├── api/             #   HTTP server + routes
│   ├── chains/          #   XRPL, Base, Arbitrum, Solana watchers
│   ├── adapters/        #   Discord, Telegram, Twitch, X
│   ├── config.ts        #   env loader
│   ├── logger.ts        #   pino setup
│   └── status.ts        #   /check page state registry
├── tests/               # 132 tests
├── scripts/             # release, smoke, pack-for-site
└── Dockerfile           # optional, for Docker deploys

No compiled binaries. No pre-built artifacts. No dist/ or build/ in the tarball. Everything is source.

Scan dependencies

cat package.json

Look at dependencies — there should be about 9 entries. Cross-reference each package name against npmjs.com to confirm each one is the well-known package it appears to be, not a lookalike.

Confirm there are no postinstall, preinstall, or install scripts in package.json. Install scripts are the biggest supply-chain attack vector, and we deliberately don't use any.
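You can also check for install-time scripts mechanically. A hypothetical helper — not part of the bot — that reads a package.json and flags the npm lifecycle hooks that execute code during `npm install`:

```typescript
import { existsSync, readFileSync } from "node:fs";

// npm lifecycle hooks that run code at install time.
const INSTALL_HOOKS = ["preinstall", "install", "postinstall"];

// Return the install-time hooks a package.json declares (should be none here).
function installScripts(pkgJsonText: string): string[] {
  const scripts = JSON.parse(pkgJsonText).scripts ?? {};
  return INSTALL_HOOKS.filter((hook) => hook in scripts);
}

if (existsSync("package.json")) {
  console.log("install-time scripts:", installScripts(readFileSync("package.json", "utf8")));
}
```

An empty result is what you want; any hit means code runs the moment you install.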

Grep for network calls

# What does the bot actually contact?
grep -rn 'fetch\|http\.request\|https\.request\|WebSocket\|ws://' src/ | grep -v 'import\|@type'

Expected destinations (all configurable):

  • Chain RPC endpoints (XRPL, Base, Arbitrum, Solana)
  • Platform APIs (Discord, Telegram, Twitch, X)

Nothing else. If you see calls to domains you don't recognize, that's a flag.

Quick red-flag checks

# Obfuscated strings?
grep -rn 'eval\|Function(' src/

# Base64-encoded code?
grep -rn 'atob\|Buffer.from.*base64' src/

# Shell commands?
grep -rn 'child_process\|exec\|spawn' src/

Any hits should have an obvious, benign reason. child_process in scripts/release.ts is fine (release script calls npm). eval or Function() in chain or platform code is a serious flag — we don't use either.

If all checks pass: you haven't proven the code is safe, but you've ruled out the obvious attacks. Move on to the 30-minute tour, or stop here if you've decided to trust.

The 30-minute tour

Goal: understand what the bot does. After this, you can explain the architecture to someone else.

Start at src/api/server.ts

This is the entry point and orchestrator. It loads config, builds watchers and adapters, wires them together, starts the HTTP server. Read it top to bottom — about 450 lines.

By the end you'll know how config, watchers, adapters, and the HTTP server fit together.

Then each chain adapter

Four chains, four folders under src/chains/. Each folder has the same four files.

Read XRPL's watcher.ts first — it's the simplest (native transaction format, native memos, one asset filter). Then Solana's — similar pattern but uses token accounts. Then EVM (base/arbitrum share code under evm/) — this one uses the 60-second message_buffer for memo pairing since EVM has no native memos.

Then the platform adapters

Four platforms, same pattern. Each implements the same interface — see src/chains/types.ts for the contract (chains and platforms use analogous contracts).

Each platform adapter has two jobs: listen for bot mentions and reply with a tip link, and post thank-you announcements when a watcher reports a confirmed tip.

Finally, src/thankyous.ts

Templating layer. Loads thankyous.json (user-provided), picks a random template per tip, substitutes {name}. About 60 lines. Read it to confirm there's no code injection — the substitution is plain string replacement, not template evaluation.

After 30 minutes, you should be able to answer: "what does the bot do when a USDC payment arrives at my Base address?" Walk through the data flow: RPC event → watcher decodes transfer → status registry records → platform adapter announces. That's the whole shape.
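That shape can be sketched end to end. All names here are illustrative, not the bot's actual identifiers:

```typescript
// Illustrative: RPC event → watcher decodes → registry records → adapter announces.
interface ConfirmedTip { chain: string; asset: string; amount: string }

const statusRegistry: ConfirmedTip[] = []; // what a /check page would read
const posted: string[] = [];               // what a platform adapter would post

function onConfirmedTip(tip: ConfirmedTip): void {
  statusRegistry.push(tip); // record for the status page
  posted.push(`Thank you for the ${tip.amount} ${tip.asset} tip on ${tip.chain}!`);
}

// A watcher would call this after decoding a transfer from an RPC event:
onConfirmedTip({ chain: "base", asset: "USDC", amount: "5.00" });
```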

The afternoon review

Goal: read every line, form a full picture, produce an opinion others can act on.

Everything under src/

Roughly 3000 lines across 24 files. Order I'd suggest:

  1. src/config.ts — how env vars become typed config. Catches many misconfigurations before the bot even starts.
  2. src/chains/types.ts — the contract every chain adapter implements. Short, worth internalizing.
  3. src/api/server.ts — the orchestrator.
  4. Each chain folder in turn (XRPL, Solana, EVM).
  5. Each platform adapter.
  6. src/status.ts — how /check gets its data.
  7. src/api/routes/*.ts — the three HTTP routes.
  8. src/thankyous.ts, src/logger.ts, src/node_version.ts — smaller utilities.

Then the tests

132 tests under tests/. Skim them — every test documents what the code is supposed to do. If a test contradicts your mental model, your model is probably wrong; read the code it tests.

Fresh audit

# Install deps and run a full audit at today's database state
npm install
npm audit

# Run the three-gate
npm run typecheck
npm run lint
npm test

Expected at release: 0 vulnerabilities, typecheck clean, lint clean, 132/132 tests passing. If anything differs, check the releases page to see if a security patch has been cut that you missed.

Form your opinion

After an afternoon, you'll know more about this bot than most people who run it. Your opinion is useful to others — consider publishing what you checked, what you found, and whether you'd run it yourself.

Using AI review tools

Modern AI is good at this. Use it.

AI review tools (Claude, ChatGPT, Copilot, Cursor, Windsurf, Gemini) are well-suited for auditing codebases this small. A good LLM will read all 3000 lines, hold them all in context, and answer specific questions about the code with useful accuracy.

How to do it:

  1. Unpack the tarball.
  2. If your tool supports folder upload (Claude Projects, ChatGPT with files, Cursor, etc.), drop the unpacked folder in.
  3. If not, concatenate the source: find src -name '*.ts' -exec cat {} \; and paste the output into a long message.
  4. Ask specific questions. Not "is this safe" — that's too vague. Try prompts like the ones below.

Prompts that work well:

I've attached the source for a non-custodial stablecoin tipping bot. Review the code for:
  1. Any path where a user's funds could be at risk
  2. Any place that makes a network call to a domain that isn't chain RPC, Discord, Telegram, Twitch, or X
  3. Any obfuscated or suspicious code patterns
  4. Any place that deviates from its claim of being stateless (no persistent state besides a 60-second in-memory buffer)
For each finding, cite the specific file and line.
Walk me through what happens when a Base USDC tip arrives at the owner's address. Start from the RPC event and end with the thank-you message being posted in Discord. Cite files and line numbers.
What would an attacker need to do to make this bot spend funds that don't belong to them? Analyze each chain watcher for this specific risk.

What AI tools are good at here: holding the whole codebase in context, spotting inconsistencies between claims and code, catching simple supply-chain red flags, walking through data flows, producing code summaries. They're a force multiplier for a short review.

What they're worse at: evaluating novel cryptographic weaknesses, assessing the security of specific chain protocols, judging whether a library's maintainer is trustworthy. For those, human expertise still wins.

Bottom line: if you have 20 minutes and an AI tool, you can get further than the 5-minute smoke inspection but less far than a full afternoon review. That's a genuinely useful middle gear. We welcome it explicitly.

Suspicious-looking patterns (and why they're there)

Some things in the codebase can look alarming at first glance but have legitimate reasons. Here's our honest inventory, so you don't waste review time on false positives.

child_process.execSync — scripts/release.ts
  The release script calls npm run commands and tar. Only runs when you explicitly run npm run release, never at bot runtime.

Inline HTML string building — src/api/routes/tip.ts, src/api/routes/check.ts
  Serving static HTML for the signing page and status page. All dynamic values are escaped. No user input is reflected into the HTML unescaped.

WebSocket connection to XRPL cluster — src/chains/xrpl/watcher.ts
  How you read XRPL — it's a WebSocket-native protocol. The connection URL is configurable via XRPL_WS_URL.

Dynamic Buffer.from with base64 input — src/chains/solana/watcher.ts
  Decoding Solana memo bytes from on-chain log entries. The input comes from the chain RPC, not from user strings.

Direct file reads at startup — src/thankyous.ts
  Loading thankyous.json — a file the user provided. Read once at startup; no user-input paths.

Global in-memory buffer — src/chains/evm/message_buffer.ts
  60-second message buffer for EVM memo pairing. Contains only message text — no user IDs, usernames, or addresses. Entries expire after 60 seconds. This is the only persistent-ish state in the bot.

Fastify() with default options — src/api/server.ts
  Standard Fastify initialization. Our custom error handler narrows exception types safely. No JSON body parsing on the bot's routes since none accept POST.

If you find something that isn't in this table and looks concerning, trust that instinct and investigate. If you find something we missed, please tell us — see the responsible disclosure policy.
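The 60-second buffer in that inventory is simple to model. A minimal TTL-buffer sketch with illustrative names — timestamps are passed explicitly here so expiry is easy to see:

```typescript
class MessageBuffer {
  private entries: { text: string; at: number }[] = [];
  constructor(private readonly ttlMs = 60_000) {}

  add(text: string, now = Date.now()): void {
    this.prune(now);
    this.entries.push({ text, at: now });
  }

  // Live message texts; anything at or past the TTL is dropped first.
  live(now = Date.now()): string[] {
    this.prune(now);
    return this.entries.map((e) => e.text);
  }

  private prune(now: number): void {
    this.entries = this.entries.filter((e) => now - e.at < this.ttlMs);
  }
}
```

Nothing is written to disk, and an entry that's never paired simply ages out — which is the whole "persistent-ish" claim.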

What to do after reviewing

If you're satisfied

If you found something concerning

If you want to modify it

Related