License
- Apache-2.0
  - Attribution: yes
  - Linking: permissive
  - Distribution: permissive
  - Modification: permissive
  - Patent grant: yes
  - Private use: yes
  - Sublicensing: permissive
  - Trademark grant: no
One framework to 🧑‍💻 Develop, ▶️ Deploy, and 📊 Operate
data workflows with Python and SQL
🔄 Ready-to-use data ETL/ELT patterns.
🧩 Lego-like extensibility.
🚀 Single-click deployment.
🛠 Operate and monitor.
Introduction to the VDK SDK
- Framework to simplify data ingestion and data processing.
- Write any code using Python or SQL.
- A toolset enabling you to run data jobs.
Get started with VDK SDK:
➡ Install Quickstart VDK. The only requirement is Python 3.7+.

```
pip install quickstart-vdk
vdk --help
```
➡ Develop your First Data Job if you are impatient to start quickly.
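For orientation, here is roughly what a data job step looks like. A data job is a directory of steps that VDK runs in alphabetical order, and a Python step is any file that defines a run(job_input) function. This is a minimal sketch assuming the default local SQLite database that ships with quickstart-vdk; the file and table names are illustrative.

```python
# 10_hello_step.py - one step inside a data job directory.
# VDK discovers the run() entry point and injects the job_input API.
def run(job_input):
    # Create and populate a table in the job's configured database
    # (a local SQLite file by default with quickstart-vdk).
    job_input.execute_query("CREATE TABLE IF NOT EXISTS hello (greeting TEXT)")
    job_input.execute_query("INSERT INTO hello VALUES ('hello from VDK')")
```

You can scaffold a job with vdk create and execute it locally with vdk run <job-directory>.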
Data Ingestion
- Extract data from various sources (HTTP APIs, Databases, CSV, etc.).
- Ensure data fidelity with minimal transformations.
- Load data to your preferred destination (database, cloud storage).
Ingestion examples:
➡ Ingesting data from REST API into Database
➡ Ingesting data from DB into Database
➡ Ingesting local CSV file into Database
➡ Incremental ingestion using Job Properties
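To make the ingestion flow concrete, here is a minimal sketch of a Python step that pulls records from an HTTP API and queues them for ingestion. The URL and destination table are placeholders; send_object_for_ingestion hands each record to whichever ingestion plugin the job is configured with.

```python
import requests


def run(job_input):
    # Extract: fetch JSON records from a (hypothetical) REST endpoint.
    response = requests.get("https://example.com/api/users", timeout=30)
    response.raise_for_status()

    # Load: queue each record; the configured ingestion plugin writes
    # it to the destination table with minimal transformation.
    for record in response.json():
        job_input.send_object_for_ingestion(
            payload=record,
            destination_table="users",
        )
```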
Data Transformation
- SQL and Python parameterized transformations.
- Extensible templates for data modeling.
- Creates a dataset or table as a product.
Get started with transforming data:
➡ Data Modeling: Treating Data as a Product
➡ Processing data using SQL and local database
➡ Processing data using Kimball warehousing templates
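As a sketch of a parameterized transformation (the property name and table names are illustrative): a Python step can read a job property and use it to build a derived table that downstream consumers treat as the data product.

```python
def run(job_input):
    # Read a job property (set via deployment or the Properties API),
    # falling back to a default; names here are illustrative.
    source_table = job_input.get_property("source_table", "orders")

    # Transform: build the derived table exposed as the data product.
    job_input.execute_query(
        f"""
        CREATE TABLE IF NOT EXISTS customer_summary AS
        SELECT customer_id, COUNT(*) AS order_count
        FROM {source_table}
        GROUP BY customer_id
        """
    )
```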
Data Job Deployment (build, deploy, release)
VDK Control Service provides a REST API for users to create, deploy, manage, and execute data jobs in a Kubernetes runtime environment.
- Scheduling, packaging, dependency management, deployment.
- Execution management and monitoring.
- Source code versioning and tracking. Fast rollback.
- Manage state and credentials using Properties and Secrets.
Get started with deploying jobs in control service:
➡ Install Local Control Service with vdk server --install
➡ Scheduling a Data Job for automatic execution
➡ Using VDK DAGs to orchestrate Data Jobs
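To illustrate the orchestration piece, here is a sketch of a DAG job based on the vdk-dags plugin's documented job-list format; the job and team names are placeholders, and the plugin must be installed.

```python
# The single step of a DAG data job: it triggers other, already
# deployed jobs in dependency order (requires the vdk-dags plugin).
from vdk.plugin.dag.dag_runner import DagInput

JOBS = [
    {"job_name": "ingest-job", "team_name": "my-team", "depends_on": []},
    {"job_name": "transform-job", "team_name": "my-team", "depends_on": ["ingest-job"]},
]


def run(job_input):
    DagInput().run_dag(JOBS)
```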
Operations and Monitoring
- Use the Operations UI to monitor and troubleshoot data workloads in production.
- Notifications for errors during Data Job deployment or execution.
- Route errors to correct people by classifying them into User or Platform errors.
Get started with operating and monitoring data jobs:
➡ Versatile Data Kit UI - Installation and Getting Started
➡ VDK Operations User Interface - Versatile Data Kit
Lego-like extensibility
- Modular: use only what you need. Extensible: build what you miss.
- Easy to install any plugin as a Python package using pip.
- Plugins can enhance data processing, ingestion, job execution, and the command-line lifecycle.
Get started with using some VDK plugins:
➡ Browse available plugins
➡ Interesting plugins to check out:
Track Lineage of your jobs using vdk-lineage
Import/Ingest or Export CSV files using vdk-csv
➡ Write your own plugin
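For a taste of the plugin mechanism, a minimal sketch: VDK plugins are ordinary Python packages that implement pluggy hooks and are discovered through the vdk.plugin.run entry point. The hook shown is real; the configuration key is illustrative.

```python
# my_vdk_plugin.py - registered under the vdk.plugin.run entry point
# in the package's setup so VDK loads it at startup.
from vdk.api.plugin.hook_markers import hookimpl


@hookimpl
def vdk_configure(config_builder):
    # Contribute a configuration option that jobs and other plugins
    # can read; the key name here is illustrative.
    config_builder.add(
        key="my_option",
        default_value="demo",
        description="Example option registered by this plugin.",
    )
```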
Support and Contributing
For support, you can join our Slack channel, or create an issue or pull request on GitHub to submit suggestions or changes.
If you are interested in contributing as a developer, visit the contributing page.
Contacts
- Message us on Slack:
  ☝️ Join the CNCF Slack workspace.
  ✌️ Join the #versatile-data-kit channel.
- Join the next Community Meeting.
- Follow us on Twitter.
- Subscribe to the Versatile Data Kit YouTube Channel.
- Join our development mailing list, used by developers and maintainers of VDK.
Code of Conduct
Everyone involved in working on the project’s source code, or engaging in any issue trackers, Slack channels, and mailing lists is expected to be familiar with and follow the Code of Conduct.