OWASP CI/CD Part 10: Insufficient Logging and Visibility

Why developers must prioritise logging and visibility in CI/CD pipelines

This marks the final entry in our 10-part series on the OWASP Top 10 for CI/CD security risks. While production environments typically benefit from rigorous hardening, logging, and observability, CI/CD pipelines often remain critically under-addressed in this respect, despite being a key focus of OWASP. In this post, we’ll unpack the security risks posed by insufficient logging in CI/CD environments, why it matters, and how you can strengthen your defences using modern artifact management solutions like Cloudsmith, alongside powerful open-source tools like Falco and GUAC.

Risk of attacks going undetected in CI/CD pipelines

Without proper logging and visibility, attackers can move through your CI/CD systems undetected - whether stealing credentials, injecting malicious code, or tampering with artifacts. This lack of auditability makes post-incident investigations nearly impossible, delaying containment and increasing the potential damage.

With regulatory pressure mounting through the EU’s Cyber Resilience Act (CRA), the Network and Information Systems Directive 2 (NIS2), and the Digital Operational Resilience Act (DORA), adequate auditing and visibility capabilities are more urgent than ever for meeting compliance requirements.

At the same time, threat actors are increasingly exploiting programmatic access, automated bots, and compromised tokens to breach and persist within CI/CD pipelines. While strong observability might feel like a "nice-to-have" on a checklist, it should be your first line of defence and your best tool for recovery.


Why logging in CI/CD is often overlooked

While logging is typically mature in production environments (covering servers, apps, and network infrastructure), CI/CD systems often fall through the cracks. Consider how many tools touch your pipeline:

  • Container registries
  • CI servers (Jenkins & CircleCI)
  • Package managers (npm, pip & more)
  • Source code management (GitHub & GitLab)
  • Deployment/orchestration platforms (Kubernetes)
  • Artifact repositories (Cloudsmith, Nexus & Artifactory)

Each of these components introduces new attack vectors, and most lack security logging by default unless it is explicitly configured.


How to build effective logging and visibility in CI/CD

There are several elements to achieving sufficient logging and visibility:

1. Map your CI/CD environment
Before you can secure your pipeline, you need to inventory every system involved, from source code managers, build servers, artifact storage, through to deployment systems, whether that be Kubernetes or Terraform. Open-source tools like GUAC (Graph for Understanding Artifact Composition) help you map relationships between artifacts, identify missing SBOMs/SLSA attestations, and catch policy violations. GUAC ingests metadata like SBOMs and builds a graph to trace how artifacts are produced, increasing transparency across your pipeline.

The GUAC Visualizer lets you explore the nodes and relationships in the GUAC graph interactively. The CLI command below searches the graph for any vulnerabilities affecting a given Package URL (PURL), including transitive ones.

guacone query vuln --package-url pkg:docker/myimage@sha256:abc123

From here, you can generate a summary report for a given PURL, covering all SBOMs, SLSA attestations, vulnerabilities, and licensing information that GUAC knows about for the artifact:

guacone query known --package-url pkg:docker/myimage@sha256:abc123

2. Enable all relevant log sources
In the case of GitHub, make sure audit logs, workflow run logs, system logs and security event logs are enabled and accessible. The same rule naturally applies to other SCM tools like GitLab and to CI servers like CircleCI. These sources should include user access logs (such as logins and permission changes) as well as pipeline events (build triggers and artifact uploads). Developers also need to be able to track API usage and access token activity, deployment actions, runtime configurations, and more. Don’t forget programmatic access: API tokens, webhooks, and CLI-based interactions are commonly exploited, and visibility must cover these too.
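Once these sources are enabled, it pays to actually query them. As a minimal sketch, you could take an audit-log export and filter for token and membership events with jq. The entries below are invented for illustration, though the "action", "actor" and "@timestamp" field names follow the format of GitHub audit-log events:

```shell
# Sample audit-log export (invented entries; field names follow
# GitHub's audit-log event format).
cat > audit-log.json <<'EOF'
[
  {"action": "org.update_member", "actor": "alice", "@timestamp": 1714000000000},
  {"action": "personal_access_token.access_granted", "actor": "bob", "@timestamp": 1714000500000},
  {"action": "workflow_run.triggered", "actor": "ci-bot", "@timestamp": 1714001000000}
]
EOF

# Surface token and permission-change events - the entries most often
# tied to credential abuse.
jq -r '.[] | select(.action | test("token|member")) | "\(.actor)\t\(.action)"' audit-log.json
```

A filter like this makes a quick triage pass over thousands of routine workflow events feasible, though a SIEM (covered next) is the right home for this at scale.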

3. Centralise logs for correlation & alerting
Shipping logs to a centralised Security Information and Event Management (SIEM) platform like Datadog allows you to correlate events across tools and detect anomalies early. Thankfully, Cloudsmith now offers a prebuilt Datadog Agent container with the Cloudsmith integration already installed. This streamlines observability by eliminating manual integration steps and avoiding version drift. You’ll get dashboards and alerts covering artifact usage and delivery, audit logs, compliance status, and suspicious access patterns. For example, you can monitor whether a previously unused artifact is suddenly downloaded from a production repository - a potential red flag for your team.
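A SIEM does this correlation for you, but the underlying idea is simple enough to sketch. Assuming a hypothetical CSV export of download activity (the column names below are illustrative, not Cloudsmith's actual export schema), flagging the "previously unused artifact" red flag is a one-liner:

```shell
# Hypothetical export of package download activity (illustrative
# columns, not Cloudsmith's actual export schema).
cat > client-logs.csv <<'EOF'
timestamp,repository,package,downloads_last_90d
2025-05-01T09:00:00Z,production,internal-auth-lib-0.0.1,0
2025-05-01T09:05:00Z,production,company-base-image-2.3.1,412
EOF

# Flag downloads of packages with no recent history - skip the header
# row, then alert on any row whose 90-day download count is zero.
awk -F, 'NR > 1 && $4 == 0 {print "ALERT: first-seen download of " $3 " from " $2}' client-logs.csv
```

In a real deployment the same logic would live as a SIEM detection rule rather than a script, so the alert fires the moment the anomalous download event arrives.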

4. Detect unwanted SCM activity in real-time with Falco
In some cases you might prefer not to send all events and logs to a centralised SIEM solution like Datadog, Dynatrace or Splunk (whether due to the verbosity of the logs or due to storage costs). In these scenarios, Falco can be a great lightweight approach for monitoring real-time events from source code management (SCM) platforms like GitHub as well as the hosts and orchestration layers (Linux and Kubernetes). Falco is an open-source runtime security tool that integrates seamlessly with GitHub and Kubernetes via dedicated plugins. It listens to events in real time and can alert you when something suspicious happens - for example, when a user pushes sensitive credentials like Kubernetes Secrets to a public-facing repository.

- rule: Secret pushed into a public repository
  desc: A secret (AWS keys, github token...) was committed into a public repository
  condition: github.type=push and github.diff.has_secrets = true and github.repo.public=true
  output: One or more secrets were pushed into a public repository (repository_name=%github.repo.name repository_url=%github.repo.url repo_owner=%github.owner org=%github.org user=%github.user secret_types=%github.diff.committed_secrets.desc file=%github.diff.committed_secrets.files line=%github.diff.committed_secrets.lines url=%github.diff.committed_secrets.links)
  priority: CRITICAL
  source: github
  tags: [github]


For GitHub, Falco can set up webhooks on your repositories, analyse push events, new branch creation, and token usage, and alert you via Slack or email, or send those alerts back into your SIEM. Falco doesn’t store or index your data, making it a fast, lightweight, and cost-effective route to sufficient logging and visibility for GitHub. Think of it as a security camera for your repositories and clusters.

5. Enforce security policies and log decisions
As a comprehensive artifact management platform, Cloudsmith allows users to define and enforce security policies around artifact usage and publishing. Every decision made by these policies is logged and can be queried via API. We call these Decision Logs.

Example API call to retrieve policy logs:

curl -X GET \
  "https://api.cloudsmith.io/v2/workspaces/$CLOUDSMITH_WORKSPACE/policies/decision_logs/?policy=$POLICY_SLUG" \
  -H "Accept: application/json" \
  -H "X-Api-Key: $CLOUDSMITH_API_KEY" | jq .


You’ll get detailed insight into the evaluation start and end times of Cloudsmith security policies, the inputs used to evaluate those policies, and the results and actions taken - providing traceability and supporting regulatory audits and incident investigations.
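During an incident, you rarely want every decision - you want the ones where a policy actually intervened. The sketch below filters a decision-log response down to those; note that the field names (results, action, policy, package, evaluated_at) are illustrative placeholders, not Cloudsmith's exact decision-log schema:

```shell
# Illustrative decision-log response (field names are placeholders,
# not Cloudsmith's exact schema).
cat > decision-logs.json <<'EOF'
{"results": [
  {"policy": "block-critical-cves", "action": "quarantine", "package": "left-pad-1.0.0", "evaluated_at": "2025-05-01T10:00:00Z"},
  {"policy": "block-critical-cves", "action": "allow", "package": "lodash-4.17.21", "evaluated_at": "2025-05-01T10:05:00Z"}
]}
EOF

# Keep only the decisions where the policy intervened - a quick
# triage view for audits and incident investigations.
jq -r '.results[] | select(.action != "allow") | "\(.evaluated_at) \(.policy) -> \(.action) \(.package)"' decision-logs.json
```

Piped from the API call above instead of a local file, the same jq filter gives you a live triage view without leaving the terminal.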

Separate to decision logs, Cloudsmith provides client logs and audit logs. Client logs provide a single pane of glass to visualise, filter, and export information about package usage across your entire workspace. They are artifact-agnostic, unifying information about all of your repositories, packages, and users in one place. Whether your goal is gaining visibility, performing exploratory analysis, or reporting to stakeholders, client logs provide the information you need. For example, you could analyse which Maven artifacts were downloaded last month, or better understand the different artifacts used in a staging pipeline.

Similarly, Cloudsmith’s repository Audit Logs provide a timeline view of non-package (typically administrative) actions that have occurred within the repository, such as creating or modifying repository Retention Rules or creating an Entitlement Token.

Key takeaways for developers

You can’t protect what you can’t see.
Logging and visibility are your eyes and ears in fast-moving CI systems.


Invest the time to get it right, and your response time in a real-world attack could shrink from hours to minutes. That’s where different approaches matter. SIEMs are powerful for auditability of log data aggregated from multiple CI/CD sources, while a dedicated event-streaming approach with Falco might be ideal for triggering real-time detections when a user makes an anomalous change to your GitHub repository.

CI/CD systems are increasingly targeted, so treat them like production. Logging must cover both human and programmatic access. Centralising logs unlocks correlation, detection, and faster incident response. Open-source tools like Falco and GUAC, and platforms like Cloudsmith and Datadog, make observability scalable and actionable. If you're deploying code to production, you need full visibility into your build and release pipeline. If you enjoyed this content, you can download the OWASP Top 10 for CI/CD Guide for free today.
