Securing AI-generated code with Cloudsmith

How well do you really know your LLM-generated code? LLM-powered copilots now write a lot of code, and they tend to suggest new dependencies your organization may never have used before. Without systematic security checks, a single hallucinated package can expose your production systems to a supply chain attack. Growing reliance on open source and other third-party code already puts your builds at risk, and AI-generated code exacerbates that risk.

Let’s talk about how Cloudsmith secures your AI-generated code. By leveraging policy-as-code, private repository caching, and other best practices, Cloudsmith helps software development teams adopt a “verify, then trust” architecture, balancing AI-assisted software development with the need to defend system integrity.

Stop LLM hallucinations from breaking your build

LLMs are a great example of an untrusted contributor. They generate code without a complete understanding of your internal security standards. Their code may reference non-existent packages, a phenomenon known as AI hallucination, and attackers can publish malicious packages under those hallucinated names, a new attack vector dubbed slopsquatting. Cloudsmith’s 2025 Artifact Management Report revealed that 79% of professionals believe that AI will exacerbate open-source malware threats, including typosquatting and dependency confusion.

The traditional "trust, then verify" model is no longer viable, because there is no true human intent behind an LLM’s package selection. Only 67% of developers review AI-generated code before every single deployment, per the 2025 report. To ship AI-generated code safely, we recommend moving to a "verify, then trust" architecture. This means treating every AI-suggested dependency as a potential threat until it passes a programmable gate.

Cloudsmith acts as that gate. By sitting between your LLM-powered copilots and your package ecosystem, we provide the metadata layer and policy enforcement needed to ensure that AI-generated code only utilizes verified, licensed dependencies within your private, controlled repositories.

Here is how you use Cloudsmith to stop AI-induced threats at the front door.

The role of programmable security policies

What does “good” software supply chain security look like? It should do a few things:

  • Pull packages only from trusted sources.
  • Scan new dependencies for malware and other vulnerabilities.
  • Automatically block vulnerable packages from production environments.
  • Generate an SBOM for every build.
  • Check a package’s license(s) before it enters a build.
  • Perform these tasks continuously and systematically, at scale, for end-to-end protection.

When you introduce AI into the equation, these tasks become more important. Implementing automated security processes and policies is the best way to make sure they happen.

Limitations of “checkbox security”

Developers often face a tradeoff between power and simplicity. “Checkbox security” is simple and straightforward, but it has its limits. You set policies through a UI, using checkboxes, radio buttons, and other basic interface elements. These security interfaces are easy to understand, but they’re static and can’t be fully customized to a user, organization, or project. Limited options and functionality mean that checkbox security doesn’t scale to enterprise-level development: it’s hard to apply unique, fine-grained security policies to different projects, teams, environments, formats, and contexts.

Scaling security with policy-as-code

Contrast that with policy-as-code, which gives you granular control over security policies and is an inherently more scalable approach. Open Policy Agent (OPA) is the industry standard for policy-as-code. It pairs with the Rego language to define rules for software usage and consumption. Rego introduces a learning curve, but its expressiveness is what makes OPA work at enterprise scale.
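
To give a feel for the language, here is a minimal Rego sketch of the “verify, then trust” idea: deny every package by default, and allow it only if it comes from an approved source. The input shape and source names are hypothetical illustrations, not a real Cloudsmith schema:

package example.supplychain

import rego.v1

# Hypothetical allowlist of trusted package sources
approved_sources := {"cloudsmith-prod", "internal-mirror"}

# Deny by default: a package is trusted only once every check passes
default allow := false

allow if {
	input.package.source in approved_sources
}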

Cloudsmith’s Enterprise Policy Manager (EPM) acts as an OPA engine, so you can use the same Rego you may already be familiar with to create policy-as-code controls in Cloudsmith. With Rego, you can define universal, baseline policies for every artifact in your Cloudsmith repos.
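
For instance, a baseline policy might quarantine any artifact that carries a critical-severity vulnerability. The sketch below follows the same match/reason pattern as the version-check example later in this post; the vulnerabilities field is an assumption for illustration, so check the EPM input schema documentation for the exact shape of scan data:

package cloudsmith

import rego.v1

default match := false

match if count(reason) > 0

reason contains msg if {
	# `vulnerabilities` is an assumed field name for illustration only
	some vuln in input.v0.vulnerabilities
	vuln.severity == "CRITICAL"
	msg := sprintf("Package '%s' has critical vulnerability '%s'", [input.v0.package.name, vuln.id])
}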

With EPM and Rego, you can create custom rules for specific repos, projects, or applications that cater to fine-grained security needs. EPM policies can evaluate SBOM data to ensure end-to-end consistency throughout your data pipeline. Our Rego Cookbook highlights some of the different approaches to building policies, and serves as a great starting point for creating your own. We also developed the Cloudsmith Policy Advisor GPT to help flatten the EPM learning curve.
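
As a sketch of what a repo-scoped rule could look like, the hypothetical policy below keeps copyleft-licensed packages out of a single repository. The repository and license field paths are assumptions for illustration; consult the EPM input schema for the real ones:

package cloudsmith

import rego.v1

default match := false

match if count(reason) > 0

# Licenses disallowed in this repo (illustrative list)
denied_licenses := {"GPL-3.0-only", "AGPL-3.0-only"}

reason contains msg if {
	# Repository and license field paths are assumptions for illustration
	input.v0.repository.name == "production"
	input.v0.package.license in denied_licenses
	msg := sprintf("License '%s' is not allowed in the 'production' repo", [input.v0.package.license])
}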

For example, the Rego policy below enforces an organization-wide minimum version for a specific package. It compares each evaluated package’s version against a version threshold and triggers an action, like quarantining the package, when the evaluated version is older. Each violation produces a reason, and the top-level match rule fires whenever at least one reason exists.

package cloudsmith

import rego.v1

default match := false

# The policy matches when at least one violation reason is produced
match if count(reason) > 0

reason contains msg if {
	pkg := input.v0.package
	pkg.name == "h11"
	# Compare versions using OPA's built-in semantic version comparator;
	# -1 means pkg.version is lower than the 0.16.0 threshold
	semver.compare(pkg.version, "0.16.0") == -1
	msg := sprintf("Package 'h11' version '%s' is older than 0.16.0", [pkg.version])
}

Cloudsmith functions as a central source of truth for all your development packages and artifacts. From a security perspective, it acts as a gatekeeper, blocking or quarantining malicious, vulnerable, or non-compliant packages, in accordance with your policies, before they reach your build systems. Because you always pull packages through Cloudsmith, every package you consume has already met your security requirements.

AI-generated code introduces new complexities to securing your software supply chain. Discover how Cloudsmith can help automate the way you secure AI-generated code at enterprise scale with a free demo.
