For ninety-three minutes on April 22, 2026, Bitwarden's official npm package for the CLI resolved to a credential harvester. The affected window was 5:57 PM to 7:30 PM ET, according to Bitwarden's statement. The bad version was @bitwarden/cli@2026.4.0.

Bitwarden says the issue was limited to the npm delivery path for the CLI, not vault data, production systems, browser extensions, or the normal Bitwarden product line.

That scope matters. This was not "Bitwarden vaults were breached." It was worse in a narrower, more operationally interesting way: a trusted developer tool became an install-time credential harvester. If a developer workstation or CI runner pulled that version during the window, the interesting question is not whether the password vault was safe. The question is what that host could read, and where it was allowed to send it. That is the same pattern I wrote about after the LiteLLM PyPI compromise: code execution is bad, but filesystem secrets and open egress turn it into an incident.

Socket's writeup and SafeDep's analysis describe the same basic shape: a loader executed during install, bootstrapped Bun, ran an obfuscated payload, harvested developer and CI secrets, and exfiltrated them to infrastructure impersonating Checkmarx. It also carried fallback exfiltration paths through GitHub and logic to propagate itself into other npm packages when it found publish credentials.

The Real Boundary

An npm install step should download packages from a registry. It should not be able to POST encrypted blobs to arbitrary hosts, create GitHub repositories, inject workflows, or publish packages just because a transitive dependency got code execution.

[Figure: A CI package conveyor passing through a controlled network gateway while outbound paths are blocked at the boundary.]

What Actually Ran

The package was not subtle in its goals. Researchers reported collection paths for npm tokens, GitHub tokens, SSH keys, cloud credentials, shell history, .env files, Git credentials, and AI tool configuration. That last bucket is no longer a footnote. Files like ~/.claude.json, MCP configs, Cursor state, and other agent wiring often contain long-lived tokens or point at services that do.

The payload did not need a kernel exploit. It needed the same three things every install-time supply-chain payload wants:

  1. Execution through npm lifecycle hooks.
  2. Reachable secrets on the filesystem or in the process environment.
  3. Outbound network to carry the loot away or extend the compromise.
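
The first item deserves a concrete picture, because the entry point is so small. One line in a manifest is enough to run arbitrary code on every install. A defanged, illustrative sketch (names invented here, not the actual package contents):

package.json, illustrative shape
{
  "name": "useful-looking-package",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node loader.js"
  }
}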

Most remediation guidance starts with rotation, and it should. If you installed the malicious version, rotate exposed GitHub, npm, SSH, cloud, CI, and agent credentials. But rotation is cleanup. The durable control is breaking at least one link in that chain before the next poisoned package shows up.

Start With Lifecycle Scripts

The highest-leverage change is also the least glamorous: stop running dependency lifecycle scripts by default.

CI default
npm config set ignore-scripts true

That single setting disables npm lifecycle scripts: preinstall, install, postinstall, prepare, and the rest of the hook surface that runs during npm ci and npm install. It is not free. Some packages need scripts for native builds or binary downloads: sharp, esbuild, bcrypt, and friends. Good. Make those exceptions explicit.

scripts/npm-install-ci.sh
#!/usr/bin/env bash
set -euo pipefail

# Packages allowed to run native build scripts. Additions get code review.
ALLOW_BUILD_SCRIPTS=(
  sharp
  esbuild
)

# Install the tree with every lifecycle script disabled.
npm ci --ignore-scripts

# Re-run scripts only for the reviewed allowlist.
for pkg in "${ALLOW_BUILD_SCRIPTS[@]}"; do
  npm rebuild --foreground-scripts "$pkg"
done

npm rebuild is the native npm escape hatch here. When you pass package names, npm rebuilds only the matching packages. That turns lifecycle execution from an ambient permission into a reviewed list.
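
Building that reviewed list does not have to be guesswork. Recent npm versions ship npm query, which can enumerate every installed package that declares an install-time hook (jq is assumed to be available):

Enumerate install hooks in the tree
for hook in preinstall install postinstall; do
  npm query ":attr(scripts, [$hook])" | jq -r '.[].name'
done | sort -u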

Also set a release-age floor for new resolutions:

.npmrc
ignore-scripts=true
min-release-age=3

min-release-age tells npm to build the dependency tree using versions that have been available for more than the configured number of days. It would not save you from a lockfile already pinned to a bad version, but it does remove the most dangerous default behavior: accepting a package version published minutes ago just because a range or dist-tag points at it.

That is a much better failure mode than "every package in the graph may run code during install." New package needs scripts? It gets a line in the wrapper and a code review. New version published five minutes ago? It waits. The friction is the control.
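
The floor is also easy to spot-check by hand, because the registry exposes publish timestamps for every version. A quick look at the five most recent entries (jq assumed available):

Check publish timestamps
npm view @bitwarden/cli time --json | jq 'to_entries | sort_by(.value) | .[-5:]'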

Then Remove Ambient Egress

CI runners should not reach the public internet during dependency installation. They should reach DNS, your internal package proxy, and whatever internal services the build actually requires. Everything else should fail closed.

Put Verdaccio, Artifactory, Nexus, or another registry proxy in front of npm. Point CI at it. Cache known-good packages there. Quarantine bad versions there. Audit every package pull there. Then make the network agree with the policy.

.npmrc
registry=https://npm.internal.example.com/
ignore-scripts=true

CI runner egress shape
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ci-runner-egress
  namespace: ci
spec:
  podSelector:
    matchLabels:
      role: ci-runner
  policyTypes:
    - Egress
  egress:
    # DNS to the cluster resolver only.
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    # HTTPS to the internal registry proxy, and nothing else.
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: package-registry
      ports:
        - protocol: TCP
          port: 443

A malicious postinstall in that environment has a boring day. It can try to read files. It can try to open a socket. The socket goes nowhere. If it wants GitHub, npmjs.org, a lookalike telemetry domain, or a random IP, it gets a timeout instead of a credential dump.
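
That boring day is worth proving rather than assuming. A throwaway pod that matches the policy's selector should time out everywhere except the allowed paths (the namespace, label, and registry host match the examples above):

Prove the egress policy bites
kubectl run egress-test -n ci -l role=ci-runner \
  --image=curlimages/curl --restart=Never --rm -it \
  --command -- curl --max-time 5 https://registry.npmjs.org/
# Expected: a timeout. The same probe against https://npm.internal.example.com/
# should return a response.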

Use Domain Filtering as a Backstop

Developer laptops are harder. People need browsers, docs, package managers, cloud consoles, weird vendor portals, and the occasional brand-new SaaS tool. You probably cannot give every laptop the same default-deny egress policy you give CI.

That is where DNS and secure web gateway controls earn their keep. Block newly seen domains in enforcement mode where you can. Run them in log-only mode where the false-positive cost is still unknown. The exact signal varies by provider: some use registration age, some use first-seen age, and some use a reputation category. The point is not purity. The point is to make first-contact exfiltration noisy or impossible.

For this incident, the primary endpoint was audit.checkmarx.cx, a Checkmarx lookalike under a different TLD. Even if a domain-age rule missed it, a "new destination from npm/node during install" rule should not.

Inspect Payloads, But Do Not Lead With It

Payload inspection is useful as a detective layer. Envoy access logs, an eBPF sensor such as Tetragon, a commercial EDR, or a proxy that records process lineage can all answer the question that matters: why is node or npm sending outbound data during install?

Alert on the shape, not only the IOC:

  1. node, npm, or a freshly dropped runtime like Bun opening outbound connections during install.
  2. Build-time processes reading ~/.ssh, .env files, or agent configs they have no reason to touch.
  3. Writes to .github/workflows, repository creation, or npm publish attempts from jobs that do none of those things.
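
How you express those shapes depends on the sensor. As one sketch, in Falco rule syntax (Falco is one eBPF option; the condition uses Falco's stock outbound macro, and the process list is an assumption you would tune). On CI runners that only ever reach the internal proxy, this should be near-silent:

Falco-style rule sketch
- rule: Install-time egress from package tooling
  desc: node, npm, or a dropped runtime opening outbound connections during install
  condition: outbound and proc.name in (node, npm, npx, bun)
  output: "install-time egress proc=%proc.name cmdline=%proc.cmdline dest=%fd.name"
  priority: WARNING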

Do not make this your first wall. Attackers can slow-drip data, use trusted destinations, or shape traffic to look less suspicious. Payload inspection tells you something got past the outer controls. Egress restriction prevents the easy win. If you already have cluster logs in Loki, this is also the kind of signal that belongs in the lightweight LLM-driven detection loop I described in the Sentinel post.

MCP Configs Are Secrets Now

The most important new wrinkle is the agent tooling. The Bitwarden payload reportedly looked for Claude, MCP, Cursor, Kiro, Codex, and related configuration. That is exactly what I would expect the next wave to do.

[Figure: Agent configuration files stored in a vault drawer with abstract connections to cloud and CI systems.]

Agent configs are not harmless preferences. They often contain bearer tokens, local service endpoints, cloud account mappings, GitHub auth, Slack bot tokens, and enough context to make a stolen credential immediately useful. Treat them like SSH keys: lock down their file permissions, inventory where they live, rotate what is inside them after any exposure, and prefer short-lived credentials fetched at runtime over long-lived tokens sitting on disk.
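
A starting point is plain file hygiene, before any architecture work. A minimal sketch, assuming the config paths reported in this incident (extend the list for whatever your team actually runs):

Tighten agent config permissions
#!/usr/bin/env bash
set -euo pipefail

# Paths drawn from the reported collection targets; extend as needed.
AGENT_CONFIGS=(
  "$HOME/.claude.json"
  "$HOME/.cursor/mcp.json"
)

for f in "${AGENT_CONFIGS[@]}"; do
  if [ -f "$f" ]; then
    chmod 600 "$f"
  fi
done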

This connects directly to the earlier Bitwarden Secrets Manager MCP work. The goal is not "agents never touch secrets." That is not how real infrastructure work happens anymore. The goal is that an agent gets a narrow, audited path to the secret it needs, and a poisoned install script does not get to inherit the same power by accident.

The Rollout I Would Actually Ship

If I were walking into a team cold after this incident, I would not start with a six-week architecture program. I would ship the boring controls first.

  1. Today: set ignore-scripts=true and min-release-age=3 in CI. Add the reviewed rebuild wrapper for the handful of packages that actually need lifecycle scripts.
  2. This week: put CI behind an internal npm proxy, point every pipeline at it, and make the proxy the only registry path CI can use.
  3. Next: apply default-deny egress to CI runners. Allow DNS, the internal registry, and required internal services. Do not allow general GitHub, npmjs.org, or arbitrary HTTPS during dependency installation.
  4. Then: enable newly seen domain blocking or logging for developer egress, because laptops will always be messier than CI.
  5. Finally: add process-aware detection for install-time outbound writes, GitHub workflow injection, unexpected repository creation, and package publish attempts.

Against the Bitwarden CLI compromise, that stack changes the outcome. Lifecycle scripts disabled? The loader never runs. Release-age gating enabled? Freshly published versions stop being the default answer to a broad range or dist-tag. CI egress locked to a registry? The payload cannot exfiltrate or propagate. Domain filtering on laptops? First-contact infrastructure becomes harder to use. Payload inspection? You get evidence quickly when something slips through.

The next npm compromise will have different filenames, different domains, and a different writeup. It will still need execution, secrets, and egress. Design around that, and you stop betting your incident response on being faster than a 93-minute window.