EVENTS / webinar

How to securely source your LLMs from Hugging Face

Learn how to safely ingest, verify, and manage LLMs from Hugging Face in this live webinar. See a real workflow for quarantining, approving, and promoting models into production without slowing developers down.

  • Thu, Feb 5 · 4:00PM UTC

Things you'll learn

  • The real risks of sourcing models directly from public registries
  • How model drift and mutable artifacts impact reproducibility and safety
  • How to create a trusted intake path for Hugging Face models
  • How to quarantine, verify, and promote models before they reach production
  • What “artifact management for AI” actually looks like in practice
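The "quarantine, verify, and promote" flow above hinges on one check: an artifact is only promoted if it still matches the digest recorded when it was approved. A minimal sketch of that verify step, assuming a hypothetical allowlist of approved SHA-256 digests (in practice these would live in your artifact manager, not a hard-coded dict):

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: digests recorded when each file was approved
# during quarantine review. Filenames and hashes here are illustrative.
APPROVED_DIGESTS = {
    "model.safetensors": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large model weights never need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path) -> bool:
    """Promote only if the on-disk digest matches the approved one exactly."""
    expected = APPROVED_DIGESTS.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

Because the check is a pure digest comparison, it catches both tampering after approval and silent re-publishing of the same filename upstream.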

Speakers

Nigel Douglas
Head of Developer Relations, Cloudsmith

Liana Ertz
Product Manager, Cloudsmith

Summary

Open model ecosystems like Hugging Face have transformed how teams build with AI. But with that speed comes risk: unverified publishers, mutable artifacts, dependency confusion, and models that change beneath your feet.

If you’re pulling models straight into production pipelines, you’re importing all the uncertainty of the public internet into some of your most sensitive systems.

In this live session, Cloudsmith experts will show you how to take control of your AI supply chain. You’ll learn how to securely ingest, verify, and distribute LLMs from Hugging Face without slowing your teams down.
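One common defense against the mutable artifacts mentioned above is to pin every Hugging Face repo to an immutable commit hash rather than a branch name like `main`. A minimal sketch, assuming a hypothetical lockfile (the repo name and hash below are made up for illustration):

```python
# Hypothetical "model lockfile": each Hugging Face repo is pinned to an
# immutable commit revision instead of a mutable ref such as "main".
MODEL_LOCK = {
    "example-org/example-model": "0123456789abcdef0123456789abcdef01234567",
}

def resolved_revision(repo_id: str) -> str:
    """Return the pinned revision; refuse to fall back to a mutable branch."""
    try:
        return MODEL_LOCK[repo_id]
    except KeyError:
        raise ValueError(f"{repo_id} is not in the approved model lockfile")

# Downstream, the pinned revision would be passed to the download call, e.g.:
#   snapshot_download(repo_id, revision=resolved_revision(repo_id))
# so two builds a month apart fetch byte-identical weights.
```

Combined with digest verification at intake, pinning makes model pulls reproducible even if the upstream repository changes underneath you.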