Your upstream registry is not your friend: Why security teams are demanding package cooldown policies

Many engineering teams treat PyPI, npm, Maven Central, and other upstream registries as trustworthy by default. They're not. These registries have no admission controls, no malware guarantees, and no obligation to notify downstream consumers when a vulnerability emerges in a previously clean package. They offer convenience and availability, but teams should be careful not to conflate ease with safety.

Every time a developer pulls a dependency, they’re making a trust decision. But who vets that trust decision, especially with transitive dependencies? What often happens is the package enters the environment unchecked, moves into the build, gets stored in an artifact repo, and is assumed safe indefinitely. No one explicitly decided to trust it. It just arrived.

This is the structural vulnerability in modern software supply chains, and better scanning alone won't close the gap. The problem isn't just that teams occasionally pull bad packages. It's that teams delegate the trust decision to the registry itself, with no policy enforcement at the moment that actually matters.

The cooldown period: A governance posture, not a product feature

One response to this problem gaining traction in DevSecOps is the package cooldown period.

The concept is straightforward. A newly published package, or a new version of an existing package, doesn't go straight to your development environment. It enters a controlled holding state for a defined period, where it undergoes continuous evaluation against evolving threat intelligence before any build can consume it. Ideally, teams can set the length of that window in customizable policies, calibrated to the risk profile of their organization and of each ecosystem.
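To make the mechanics concrete, here's a minimal sketch of the core check in Python, assuming a hypothetical 14-day window and invented function names; real platforms run this logic inside their policy engines:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: hold any version published less than 14 days ago.
COOLDOWN = timedelta(days=14)

def cooldown_verdict(published_at: datetime, now: datetime | None = None) -> str:
    """Return 'hold' while a version is inside its cooldown window, else 'allow'."""
    now = now or datetime.now(timezone.utc)
    return "hold" if (now - published_at) < COOLDOWN else "allow"

# A version published three days ago is still inside its window.
fresh = datetime.now(timezone.utc) - timedelta(days=3)
assert cooldown_verdict(fresh) == "hold"
```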

A cooldown is distinct from quarantine, which is an open-ended hold triggered by a specific CVE or malware indicator on a known-bad package. A cooldown period is time-based and preventive. The cooldown policy exists because the highest-risk window for a newly published package is in the first few days, when a compromised or malicious package is most likely to surface and when threat intelligence is still trying to catch up to it.

Enforced intake holds are standard practice in regulated industries, like pharmaceuticals, food manufacturing, and medical devices. You don't distribute what you haven't cleared. The software industry is arriving at the same conclusion, just later, under more duress (thanks to AI), and with considerably more attack surface.

The difference between a cooldown policy and the vague notion of "we'll scan before we use it" is precision. Cooldown policies are written as code. Teams define the threshold, the scope – which package formats, which registries, which versions – and what happens when a package triggers the policy: hold it, tag it for review, block it with a message. The parameters are configurable because risk tolerance, regulatory context, and build velocity differ for every organization.
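As an illustration, a cooldown policy's parameters might be modeled like this (a hypothetical schema, not any platform's actual policy language):

```python
from dataclasses import dataclass

# Hypothetical schema: real platforms express this in their own policy
# language, but the shape is the same: threshold, scope, and outcome.
@dataclass
class CooldownPolicy:
    days: int                 # length of the hold window
    formats: list[str]        # package formats in scope (npm, python, maven, ...)
    action: str               # "hold", "tag-for-review", or "block"
    message: str = ""         # shown to the developer on block or hold

npm_policy = CooldownPolicy(
    days=14,
    formats=["npm"],
    action="block",
    message="Version is in its 14-day cooldown window; request an exception.",
)
pypi_policy = CooldownPolicy(days=7, formats=["python"], action="tag-for-review")
```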

Why cooldown policies matter now

The anatomy of a modern supply chain attack

The Shai-Hulud campaign made the failure mode visible at scale.

Shai-Hulud is a self-replicating worm that spread through compromised npm developer accounts, inserting malicious code into legitimate packages belonging to those developers. The mechanism that Shai-Hulud used is what matters here: it didn't introduce obviously malicious new packages. It compromised packages that were previously legitimate, with real publishing histories and real downstream users. Those packages would have passed a point-in-time scan at first pull.

The second wave, detected in November 2025, escalated the technique – executing during the pre-install phase rather than post-install, which moved execution ahead of most pipeline scanning controls. Within hours of detection, the campaign had compromised over 700 npm packages, created more than 27,000 malicious GitHub repositories, and exposed approximately 14,000 secrets across 487 organizations.

A cooldown period, combined with continuous re-evaluation against new threat intelligence, changes the exposure window. Not because it catches everything at intake – it doesn't – but because the hold creates a window in which intelligence can catch up. A package that passed on Monday may be flagged by Wednesday. Without a cooldown policy, your build already consumed it on Monday.
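Here's a toy sketch of that catch-up effect, with invented package names and verdict states:

```python
# Toy timeline: a package enters cooldown with a clean scan on Monday,
# and Wednesday's threat-intel update flags it before release.
held = {"pkg@1.2.3": "clean"}           # packages inside their cooldown window

def reevaluate(held: dict, flagged: set[str]) -> dict:
    """Re-run verdicts for held packages against the latest intel feed."""
    for name in held:
        if name in flagged:
            held[name] = "quarantined"  # intel caught up during the hold
    return held

reevaluate(held, flagged={"pkg@1.2.3"})  # Wednesday's feed lists the package
print(held["pkg@1.2.3"])                 # quarantined: no build ever consumed it
```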

Similar tactics, different ecosystems

The axios compromise in March 2026 ran the same playbook on a package with an even larger blast radius. Axios is the dominant HTTP client library in the JavaScript ecosystem – present in roughly 80% of cloud and code environments and downloaded approximately 100 million times per week. The attacker compromised the npm account of the project's primary maintainer and published two backdoored versions within a 39-minute window. Both versions introduced a hidden dependency that deployed a cross-platform remote access trojan to macOS, Windows, and Linux, targeting developer secrets, cloud credentials, SSH keys, and CI/CD pipeline tokens. The malicious versions were live for roughly three hours. Even within that short exposure window, the sheer prevalence of axios meant the backdoored versions saw measurable execution across environments.

LiteLLM, compromised by a separate threat group in the same period, demonstrated what that blast radius looks like in the AI stack specifically. LiteLLM is a unified proxy layer used to route requests across more than 100 LLM providers, including OpenAI, Anthropic, AWS Bedrock, and Google Vertex. Compromising LiteLLM didn't just expose a build environment; it could hand attackers credentials spanning an organization's entire AI infrastructure. The attack reached LiteLLM indirectly, through an earlier compromise of Trivy, a security scanner that LiteLLM's CI/CD pipeline trusted. The attacker poisoned that trusted dependency, harvested PyPI publishing credentials through it, and published malicious versions from there.

The vector changes across every one of these campaigns. The tactic – compromise something the ecosystem already trusts and weaponize dependency resolution – does not. The scope, speed, and cross-registry reach of these attacks show the limits of manual review in the Age of AI. When attackers can weaponize 100 million weekly downloads inside a three-hour window, the response requires automation and enforcement at the point of pull.

Cooldown policies and developer experience

Security teams that propose cooldown policies consistently encounter the same pushback: they add delays to developer workflows, create friction in the build process, and slow down teams already under delivery pressure.

These are legitimate concerns, even from engineering teams that buy into a tighter risk posture. Fortunately, the right tooling resolves most of the tension.

The goal of a cooldown policy isn't to hard-stop every new dependency. It's to introduce a controlled hold with a clear, automated release path. Done correctly, a cooldown policy is largely invisible to developers working with packages that already cleared policy. The friction targets new packages in their highest-risk window, not the entire dependency graph on every build. For packages that pass continuously without incident, the cooldown period is a one-time delay on first use.

There's also a practical counterargument that often goes unmade: most engineering teams aren't looking to upgrade packages the day they're published. Unless teams configure a pipeline to pull the latest version automatically – an explicit choice, not a universal default – a developer may not know an update is available for days or weeks after publication. For those teams, a cooldown period introduces no meaningful delay. The package sits in its hold state, clears policy, and is available by the time the team is ready to consume it. In this case, the friction is largely theoretical.

The objection to cooldown policies becomes harder to sustain when considering the alternative. The delay from a cooldown period is likely hours or days when first pulling a new package. Compare that to the blast radius of a compromised dependency that entered your environment unchecked. Teams measure that impact in incident response hours, remediation scope, and, increasingly, regulatory exposure under compliance frameworks like DORA, the CRA, and SOC 2, which treat supply chain controls as auditable requirements, not optional best practices.

The friction argument also assumes the only thing at stake is speed. It isn't. Uncertainty carries a cost as well. A supply chain where teams can answer the "are we affected?" question the moment a CVE drops, because they tracked everything on entry, removes most of that uncertainty.

Building a mature supply chain governance posture

Cooldown policies are one layer in an enforcement stack, not a standalone fix. A mature posture has four components:

Enforcement at first pull: Evaluate every package against policy before it reaches the developer – not after the build, not in a separate scanning tool downstream, but at the moment the request occurs. This is where cooldown and quarantine policies activate (see the sketch after this list).

Continuous re-evaluation: Threat intelligence changes week to week, so packages need ongoing re-evaluation against it. CVEs don't announce themselves in advance. Malware indicators evolve. The assumption that a package is safe because it passed once is precisely the assumption that Shai-Hulud exploited.

Clear, actionable policy outcomes: Context matters. Block with a message. Hold with a tag. Allow with a warning. Developers need clear, actionable answers – not a silent failure that sends them looking for a workaround that bypasses your controls entirely.

Unified visibility across formats and registries: "Are we affected?" needs a fast, authoritative answer. That's only possible if there's a single source of truth for what's in the environment and what state it's in based on policy.
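Here's a minimal sketch of how the first three components compose at pull time, using invented names and a hypothetical 14-day window; a real gate would also log each decision for the visibility layer:

```python
from datetime import datetime, timedelta, timezone

COOLDOWN = timedelta(days=14)  # hypothetical window

def evaluate_at_pull(pkg: dict, known_bad: set[str]) -> tuple[str, str]:
    """One decision point at pull time that always returns an actionable outcome."""
    if pkg["id"] in known_bad:                   # quarantine: indicator match
        return "block", f"{pkg['id']} is quarantined: matched a malware indicator."
    age = datetime.now(timezone.utc) - pkg["published_at"]
    if age < COOLDOWN:                           # cooldown: time-based hold
        remaining = COOLDOWN - age
        return "hold", f"{pkg['id']} is in cooldown for {remaining.days + 1} more day(s)."
    return "allow", f"{pkg['id']} cleared policy."

pkg = {"id": "example-pkg@2.0.0",
       "published_at": datetime.now(timezone.utc) - timedelta(days=2)}
print(evaluate_at_pull(pkg, known_bad=set()))
# ('hold', 'example-pkg@2.0.0 is in cooldown for 12 more day(s).')
```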

Making cooldown policies operationally viable at scale

The conceptual case for cooldown periods isn't difficult to make to a CISO. The operational challenge is implementing them in a way that doesn't break builds, generate alert fatigue, or require a dedicated team to manage exceptions.

This is where the underlying platform matters. Cloudsmith prioritizes enforcement: it evaluates packages against your policies at the point of first pull, before a package reaches the environment or the developer. Cooldown policies, quarantine actions, and blocking responses are configurable and written as code, giving security teams precise control over parameters without creating blanket friction for engineering.

The critical piece is what happens after intake: continuous re-evaluation. A cooldown period without ongoing re-evaluation is still a point-in-time scan – just one with a delay attached. Cloudsmith’s vulnerability data sources update multiple times per hour. The platform uses a lightweight matching process to associate new vulnerability data with existing packages in your registry continuously. When the threat landscape changes for one of your packages, your policies run again and act on the impacted packages. That combination – policy enforcement at intake plus continuous monitoring after the fact – is what narrows the exposure window.
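As a generic illustration of that matching step (invented names, not Cloudsmith's internal implementation): each intelligence update is joined against the registry inventory, and any hit re-triggers policy.

```python
# Inventory of packages already in the registry, mapped to their versions.
inventory = {"requests": "2.31.0", "litellm": "1.40.0"}

def on_intel_update(advisories: list[dict], inventory: dict) -> list[str]:
    """Return the package names whose policies should re-run after this update."""
    return [a["package"] for a in advisories if a["package"] in inventory]

update = [{"package": "litellm", "id": "CVE-XXXX-YYYY"}]  # placeholder advisory
for name in on_intel_update(update, inventory):
    print(f"re-running policy for {name}")  # may hold, quarantine, or notify
```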

For organizations with DORA, CRA, or SOC 2 obligations, this also produces the audit trail auditors will ask for: a traceable record of what entered the environment, when, under what policy conditions, and whether it is still considered safe under current policy.

The gate doesn't protect you if it only checks once. Supply chain governance is not a point-in-time moment – it's a discipline. Cooldown periods are one piece of that discipline: a concrete, policy-driven practice that treats trust as something that has to be earned constantly, not assumed at the point of first pull.

Want to talk through what supply chain governance looks like for your organization? Whether you're building the case internally or evaluating tooling, we're happy to help you tackle the problem. Book a conversation with our team.