
ChatGPT: Friend or Foe? Protect your dev team from the Dark Side!

Jun 29, 2023 / 4 min read

It’s no surprise that AI is rapidly becoming the most powerful tool for developers. Tools like ChatGPT are so easy to use, and so capable, that they can generate working code for a website sketched entirely on a napkin! While the potential of AI is exciting, it is crucial to address the vulnerabilities within AI systems that bad actors can exploit.

Regardless of scale, whether it's a school assignment or a large corporate venture involving multiple developers, every development project has one thing in common: packages! These range from the usual NPM or Python packages to the RPM/DEB packages running on your core Linux infrastructure servers.

Potential Vulnerabilities: Exploiting AI Systems for Malicious Purposes


Bad actors have already found ways to exploit ChatGPT for software supply chain attacks, leveraging the service to introduce malicious packages into downstream systems. These packages may carry critical known vulnerabilities, be abandoned, or be deliberately crafted to be malicious while still providing the functionality the developer needs, avoiding suspicion. By seeding misleading information about specific packages and their functionality across popular online sources, attackers can influence what the model later recommends. Particularly challenging is the fact that ChatGPT's training data is cut off at a fixed point in time, a serious problem for large organizations progressively integrating ChatGPT into their development workflows.

To illustrate these risks, let's consider a simple scenario in which I asked ChatGPT to create a Node.js application for displaying local weather information:

[Screenshot: Node.js example generated by ChatGPT]
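For readers without the screenshot, here is a minimal sketch of the kind of code ChatGPT typically produces for this prompt. The axios dependency and the OpenWeatherMap API used here are illustrative assumptions; the actual response varies from run to run:

```javascript
// weather.js — display the current weather for a city
// Install the dependency first: npm install axios
const axios = require('axios');

// Assumes an OpenWeatherMap API key is set in the environment
const API_KEY = process.env.OPENWEATHER_API_KEY;
const city = process.argv[2] || 'London';

async function showWeather(city) {
  const url = 'https://api.openweathermap.org/data/2.5/weather';
  const { data } = await axios.get(url, {
    params: { q: city, appid: API_KEY, units: 'metric' },
  });
  console.log(`${data.name}: ${data.weather[0].description}, ${data.main.temp}°C`);
}

showWeather(city).catch((err) => {
  console.error('Failed to fetch weather:', err.message);
  process.exit(1);
});
```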

From ChatGPT's response, we can see that its first instruction is to install the package dependencies from NPM. While these packages may seem safe at first glance (they are!), their future integrity cannot be guaranteed, because ChatGPT cannot verify their safety beyond its training data.

Understanding the Impact of Outdated Training Data

Notably, as of the publication of this blog, the free version of ChatGPT relies on training data from September 2021, making it almost two years out of date. This is a significant concern for packages that frequently receive security updates.

Interestingly, the install command suggested by ChatGPT pulls in the most recent version of a package whenever no specific version is pinned. For actively maintained, community-supported packages, that works in your favour. For packages with limited maintenance or a history of security vulnerabilities, however, it exposes you to substantial risk, and during my testing ChatGPT did not always suggest an unpinned install. For older projects where dependencies are harder to update, its suggestions may pin a specific, potentially vulnerable version of a package.
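To make the difference concrete, here is a quick illustration (the package name and version are examples, not recommendations):

```bash
# Unpinned: installs whatever the latest published version is today
npm install axios

# Pinned: installs exactly the version a (possibly stale) answer named,
# even if newer, patched releases exist
npm install axios@0.21.0

# Check the installed dependency tree against npm's advisory database
npm audit
```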

The absence of real-time data access means ChatGPT is unaware of the latest updates and security patches released for packages, leaving its users vulnerable to potential security breaches. Malicious actors can exploit known flaws in outdated packages to inject malicious code, leading to devastating consequences such as data breaches, system compromises, or complete takeover of affected systems.

Enhancing Package Security with Cloudsmith

To mitigate this risk, Software as a Service (SaaS) solutions like Cloudsmith provide invaluable assistance. Cloudsmith centralizes and secures the storage, management, and distribution of software packages.

By leveraging Cloudsmith, developers gain far greater control over the packages integrated into their applications. Instead of relying solely on ChatGPT's recommendations, they can actively scrutinize and verify each package's authenticity, ensuring it originates from a trusted source. Even if a developer still copies and pastes the instructions from ChatGPT, an organization using Cloudsmith ensures those packages are pulled through Cloudsmith, keeping the entire organization protected.
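In practice, routing npm through Cloudsmith comes down to registry configuration. The sketch below assumes a hypothetical organization acme with a repository acme-repo; check Cloudsmith's documentation for your exact registry URL and authentication options:

```ini
# .npmrc — route all npm installs through a Cloudsmith repository
# (acme/acme-repo and the token variable are hypothetical placeholders)
registry=https://npm.cloudsmith.io/acme/acme-repo/
//npm.cloudsmith.io/acme/acme-repo/:_authToken=${CLOUDSMITH_NPM_TOKEN}
```

With this in place, even a copy-pasted npm install from ChatGPT resolves against the curated Cloudsmith repository rather than the public registry.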

Cloudsmith also offers advanced features such as package vulnerability monitoring and real-time alerts through webhooks. Through continuous scanning of packages within its repository, Cloudsmith identifies known vulnerabilities, enabling developers to remain informed about potential security risks. Consequently, even if ChatGPT suggests a package with security vulnerabilities, Cloudsmith can promptly flag it and notify developers to take appropriate action.
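As a sketch of what acting on those alerts can look like, the Express server below receives a Cloudsmith webhook and logs any reported vulnerability. The payload field names are assumptions for illustration; consult Cloudsmith's webhook documentation for the actual event schema:

```javascript
// webhook.js — minimal receiver for Cloudsmith vulnerability alerts
// Install the dependency first: npm install express
const express = require('express');
const app = express();

app.use(express.json());

app.post('/cloudsmith/alerts', (req, res) => {
  const event = req.body;
  // Field names below are illustrative assumptions, not Cloudsmith's schema
  if (event.severity && event.severity !== 'none') {
    console.warn(`Vulnerability reported in ${event.package}: severity ${event.severity}`);
    // e.g. open a ticket, post to Slack, or fail a deployment gate here
  }
  res.sendStatus(200);
});

app.listen(3000, () => console.log('Listening for Cloudsmith webhooks on :3000'));
```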

The consequences of relying solely on ChatGPT's recommendations for package installation become even more apparent when you consider how dynamic the software development landscape is. New vulnerabilities are discovered and new security patches released all the time, so even a recently installed package can quickly become obsolete or vulnerable. Since ChatGPT cannot see these updates, staying current on package vulnerabilities and security patches has to happen elsewhere in your workflow.

Proactive Measures: Automated Vulnerability Monitoring and Real-Time Alerts

Fortunately, there are proactive measures that developers can take to mitigate these risks. One such measure is to implement automated vulnerability monitoring and real-time alerts within the development workflow. By integrating tools like Snyk or WhiteSource into the package management process, developers can receive immediate notifications when a package used in their project is discovered to have a vulnerability. These alerts enable developers to take swift action, such as updating to a patched version or finding an alternative package, thus minimizing the window of opportunity for attackers.
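As one concrete example, the Snyk CLI can be wired into a local or CI workflow in a few commands (shown here as a sketch; see Snyk's documentation for CI-specific integrations):

```bash
# One-time setup: install and authenticate the Snyk CLI
npm install -g snyk
snyk auth

# Scan the project's dependencies for known vulnerabilities
snyk test

# Record a snapshot so Snyk can alert you when new vulnerabilities
# are later disclosed against these exact dependency versions
snyk monitor
```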

Developers should prioritize establishing a culture of security within their organizations. This includes fostering a mindset of continuous learning and staying informed about emerging security threats and best practices. By investing in regular training and knowledge-sharing sessions, developers can strengthen their understanding of secure coding practices, vulnerability management, and the potential risks associated with AI-powered tools. Ultimately, a well-informed and security-conscious development team is better equipped to identify and mitigate potential vulnerabilities introduced by AI systems.

Conclusion

In conclusion, while ChatGPT's lack of access to live data means it can recommend outdated or vulnerable packages, robust package management practices and SaaS tools like Cloudsmith significantly mitigate that risk. By prioritizing stringent package vetting and harnessing tools that provide real-time security insights, developers can enhance the security and integrity of their software projects.
