Dependency Confusion Attacks

Aug 6, 2021 / Security / 4 min read
by Dan McKinney
A new class of software supply chain attack shows that a lot of organizations are vulnerable to the injection of malicious code into their build processes. Are you one of them?

You must secure your software supply chain. Now, more than ever, it is vital.

For a long time, a primary concern in security was malicious actors exploiting inherent weaknesses in software: privilege escalations, SQL injections, race conditions and so on. These are, of course, still a concern and should be afforded the attention that they deserve. But now there is another worry, one that is arguably even more important: the supply chain attack.

Rather than discovering an existing flaw in a piece of software, a supply chain attack lets a malicious actor target a specific piece of software and insert a weakness of their choosing during the build process.

One recent example was the SolarWinds breach, where a build process itself (or rather, a build host) was compromised and modified to insert malicious source code into a build of one of SolarWinds' most popular products. This product was then distributed to SolarWinds customers, and as it came directly from the vendor, none of the customers were aware they were running a compromised build of the product.

That’s scary enough, but it did involve first penetrating an internal network and the build host (using various existing exploits) before the malicious actor could insert their malware into a SolarWinds product. But now another approach has been demonstrated that can achieve the same outcome with a lot less effort.

Just a few weeks ago it was revealed by a security researcher, Alex Birsan, that over 35 major companies were found to be vulnerable to the injection of malicious packages, via public sources, into their software supply chains.

It’s been dubbed the Dependency Confusion Attack or Package Namesquatting Attack, and the exploit process goes like this:

  1. Identify the names of private internal packages used in software builds, primarily through leaked information in JavaScript files and other packages.
  2. Upload a malicious package with the same name as one of these private internal packages to one of the public repositories. Note: anyone can upload, and few or no checks are performed.
  3. Wait for a build process or individual developer at a victim organization to try to pull the private internal package, alongside public packages that the build or developer relies upon.
  4. The malicious package gets pulled from the public upstream instead of the private internal package.

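The steps above hinge on how resolvers choose between indexes. A minimal sketch in Python, assuming a resolver that (like pip with an extra index configured) simply picks the highest version across all indexes it knows about; the package name and versions are hypothetical:

```python
# Sketch of why step 4 happens: many resolvers treat all configured
# indexes as one pool and install the highest version they find.
# "acme-internal-utils" is a hypothetical internal package name.

def parse_version(v: str) -> tuple:
    """Turn '1.4.2' into (1, 4, 2) for comparison (simplified versioning)."""
    return tuple(int(part) for part in v.split("."))

def resolve(candidates: list) -> dict:
    """Pick the highest-versioned candidate, ignoring which index it came from."""
    return max(candidates, key=lambda c: parse_version(c["version"]))

candidates = [
    {"index": "internal", "version": "1.4.2"},   # the real private package
    {"index": "public", "version": "99.0.0"},    # attacker's copy, inflated version
]

print(resolve(candidates)["index"])  # prints "public" -- the malicious copy wins
```

Because the attacker controls the version number of their upload, they can always win this comparison.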
By the time the package has been pulled and an install attempted, remote code execution has already been achieved, thanks to things like package pre-install scripts. And in an automated CI environment, where there is likely no one actively monitoring builds, this could go unnoticed for a while.
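For Python packages, the install-time hook is the setup script itself. A purely illustrative setup.py (the package name and version are hypothetical, and this sketch only prints rather than doing anything harmful):

```python
# Illustrative only: setuptools executes this whole file during
# `pip install`, so any top-level statement is code execution on the
# installing machine -- no one has to import the package first.
from setuptools import setup

# An attacker's package could do anything here: read environment
# variables, contact a remote host, etc. This sketch just prints.
print("setup.py ran at install time")

setup(
    name="acme-internal-utils",  # hypothetical internal package name
    version="99.0.0",            # inflated to win version resolution
)
```

npm's `preinstall`/`postinstall` scripts in package.json give an attacker the same foothold in the JavaScript ecosystem.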

Let’s also consider for a moment that the malicious package implements the functionality of the original package as well as its own malicious code. Given that there have been major leaks of internal source code in the past, it’s not impossible that a malicious package could do this. In that case, the build using the malicious package would succeed. The result is something akin to the SolarWinds breach, with a product or package now compromised, except that it wasn’t necessary to penetrate a network or attack a build host or other infrastructure to achieve the same outcome.

So, how do you defend against this type of attack?

One way to mitigate this attack is simply not to use any public upstreams at all and to manually verify every package that is used. But given the large (and ever-growing) number of open-source public packages that most modern software relies upon, this is a difficult task, and certainly one that will not scale.

But, using a package management service like Cloudsmith, another approach could look like this:

  1. Two private repositories - one “unsafe” with upstream proxying and caching from a public repository enabled, and the other one “vetted” with public upstreams proxying/caching disabled.
  2. Packages are pulled from upstreams into the “unsafe” repo, then automation uses a private repository management API to pull package checksums and signatures and runs vulnerability scans.
  3. If all checks are clear, and nothing untoward is discovered, the packages can then be promoted to the “vetted” repo, ready for use by development teams and build / deployment processes.

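The promotion gate in steps 2 and 3 can be sketched as follows. The fetch-and-scan plumbing is left out; `should_promote` is a hypothetical helper (Cloudsmith's actual API differs), and only the sha256 check uses real standard-library calls:

```python
# A sketch of the promotion decision, assuming a hypothetical workflow:
# the artifact bytes, an expected checksum recorded at vetting time, and
# a list of findings from whatever vulnerability scanner you run.
import hashlib

def sha256_of(data: bytes) -> str:
    """Compute the sha256 hex digest of a downloaded artifact."""
    return hashlib.sha256(data).hexdigest()

def should_promote(artifact: bytes, expected_sha256: str, scan_findings: list) -> bool:
    """Promote from 'unsafe' to 'vetted' only if the checksum matches
    and the vulnerability scan came back clean."""
    if sha256_of(artifact) != expected_sha256:
        return False   # tampered or corrupted download
    if scan_findings:
        return False   # the scanner flagged something
    return True

# Example: a clean artifact with a matching pinned checksum is promotable.
pkg = b"example package bytes"
pinned = sha256_of(pkg)  # in practice, recorded when the package was first vetted
print(should_promote(pkg, pinned, scan_findings=[]))  # True
```

The point of the sketch is that promotion is an explicit, auditable decision rather than an automatic side effect of a build pulling from an upstream.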
Importantly, internal build processes and developers never connect directly to a public upstream – this is the most likely attack vector, and it offers an almost unlimited attack surface. By pulling packages into a private repository, scanning/checking and then promoting based on the results, you have taken back control: control over your software supply chain, control of the “single source of truth” for all your packages and dependencies.

You’ve closed the door. And cut off the previously exposed entry point into your supply chain.


Supply chain attacks are here, and they are here to stay. In addition, npm and PyPI (for example) are now actively removing the proof-of-concept security packages used to test for this vulnerability, so external testing is becoming increasingly difficult. The answer is to take back control. Control covers a lot of things, but in essence:

  • Minimise trust. Scan at source, during the build and at packaging.
  • Isolate from third parties. Control the single source of truth.
  • Restrict publishers. Be a curator, not just a consumer.
  • Pin dependencies. Prefer repeatable builds.
  • Use environment segregation.
  • Use package promotion.
  • Draw a thread from build to deployment.
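The “pin dependencies” item above can be checked mechanically. A simplified sketch (not a full requirements-file parser) that mirrors what pip's hash-checking mode enforces; the example lines and hash are dummies:

```python
# Reject any requirements line that is not pinned to an exact version
# with an accompanying sha256 hash. With pinned hashes, a same-named
# public package can never silently substitute for the internal one.
import re

PINNED = re.compile(r"^[A-Za-z0-9._-]+==\S+")

def is_fully_pinned(line: str) -> bool:
    """True only for 'name==version ... --hash=sha256:...' style lines."""
    line = line.strip()
    if not line or line.startswith("#"):
        return True  # comments and blank lines are fine
    return bool(PINNED.match(line)) and "--hash=sha256:" in line

print(is_fully_pinned("requests==2.26.0 --hash=sha256:deadbeef"))  # True
print(is_fully_pinned("requests>=2.0"))                            # False
```

Running a check like this in CI turns the “prefer repeatable builds” bullet into a hard gate rather than a convention.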

Secure the supply chain.
