
Run Cribl Like You Run Everything Else — on Kubernetes.

Your applications are containerized. Your infrastructure is declarative. Your deployments are automated. But your data pipeline is still a VM you SSH into. Blue Cycle deploys Cribl Stream and Edge as fully cloud-native, Kubernetes-managed infrastructure — Helm charts, auto-scaling worker groups, GitOps promotion workflows, and infrastructure-as-code from day one.

Talk to a Pipeline Architect

Your Pipeline Shouldn't Be the One Thing That Isn't IaC

The Problem

Most Cribl deployments start as VM-based installs — leader nodes on EC2, workers provisioned manually, configs managed through the UI. That works at small scale. It does not work when your platform team manages 50 microservices through Terraform and ArgoCD, and the data pipeline is the one piece of infrastructure that requires console access to change.

VM-based Cribl can't auto-scale with traffic spikes. It can't be promoted through staging to production with a pull request. It can't be rolled back in 30 seconds when a config change breaks routing. Worker groups are manually sized, and capacity planning is guesswork.

The Solution

Blue Cycle deploys Cribl Stream and Edge natively on Kubernetes — leaders, workers, and edge nodes all running as container workloads managed by the same orchestration platform as your applications. Helm charts define the deployment. HPA policies auto-scale workers based on actual throughput. Pipeline configurations live in Git and promote through the same CI/CD workflows as application code.

This isn't just Cribl in a container. It's Cribl as infrastructure-as-code — declarative, version-controlled, and operated with the same tooling your platform team already uses. ArgoCD or Flux for GitOps. Terraform for the underlying infrastructure. Prometheus and Grafana for observability of the pipeline itself.

Architecture Patterns We Deploy

Helm-Managed Cribl Stream

Leader and worker nodes deployed via Helm with values files per environment. Worker groups auto-scale using HPA based on CPU, memory, or custom metrics. Rolling updates with zero-downtime deployments. Persistent volumes for queuing and replay.
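As a rough sketch of what per-environment values look like: Cribl publishes Helm charts for leader and worker-group deployments, and each environment gets its own values file. The keys below are illustrative, not the chart's actual schema — consult the chart's `values.yaml` for the real ones.

```yaml
# values-prod.yaml -- illustrative keys; check your chart's values schema
criblImage:
  tag: "4.8.0"
config:
  host: "leader.cribl.svc.cluster.local"   # leader endpoint workers register with
  group: "prod-workers"
resources:
  requests: { cpu: "2", memory: 8Gi }
  limits:   { cpu: "4", memory: 16Gi }
autoscaling:
  enabled: true
  minReplicas: 4
  maxReplicas: 24
  targetCPUUtilizationPercentage: 70       # HPA scales workers on CPU by default
persistence:
  enabled: true                            # PVC-backed persistent queues for replay
  size: 100Gi
```

Promotion then becomes `helm upgrade --install -f values-staging.yaml` versus `-f values-prod.yaml` — same chart, different declared state.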

GitOps Pipeline Promotion

Pipeline configurations stored in Git — routes, pipelines, packs, and destinations — versioned alongside your application infrastructure. Changes promote through dev to staging to production via pull request. ArgoCD or Flux sync ensures the running config matches the declared state.
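A minimal ArgoCD `Application` shows the shape of this: one Application per environment, each pointing at a path (or branch) in the config repo. The repo URL and project name here are hypothetical placeholders.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cribl-stream-prod
  namespace: argocd
spec:
  project: data-pipeline
  source:
    repoURL: https://git.example.com/platform/cribl-config.git  # hypothetical repo
    targetRevision: main       # prod tracks main; staging tracks a release branch
    path: envs/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: cribl
  syncPolicy:
    automated:
      prune: true              # remove resources deleted from Git
      selfHeal: true           # revert out-of-band drift back to declared state
```

With `selfHeal` on, a console change that bypasses Git is reverted automatically — the pull request becomes the only path to production.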

Multi-Region & Multi-Cluster

Cribl worker groups deployed across regions and clusters with cross-region failover. Persistent queuing ensures zero data loss during outages or maintenance windows. Regional worker isolation for data residency requirements.

Edge Nodes as DaemonSets

Cribl Edge deployed as Kubernetes DaemonSets for node-level telemetry collection. Consistent agent deployment across all nodes without per-host configuration. Automatic deployment to new nodes as the cluster scales. Collection at the source before data hits the network — reducing east-west traffic and enabling pre-pipeline shaping.
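A stripped-down DaemonSet sketch illustrates the pattern — one Edge pod per node, mounting host log paths read-only. The environment variable names are illustrative; use the values from Cribl's Edge documentation or their published Edge chart.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cribl-edge
  namespace: cribl
spec:
  selector:
    matchLabels: { app: cribl-edge }
  template:
    metadata:
      labels: { app: cribl-edge }
    spec:
      containers:
        - name: cribl-edge
          image: cribl/cribl:4.8.0          # pin the tag; avoid :latest
          env:
            - name: CRIBL_DIST_MODE         # illustrative var names -- see Cribl docs
              value: managed-edge
            - name: CRIBL_DIST_LEADER_URL
              value: tcp://leader.cribl.svc.cluster.local:4200
          volumeMounts:
            - { name: varlog, mountPath: /var/log, readOnly: true }
      volumes:
        - name: varlog
          hostPath: { path: /var/log }      # node-level log collection
```

Because it's a DaemonSet, the scheduler places an Edge pod on every new node automatically — no per-host agent rollout.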

FedRAMP & Compliance-Ready

Cribl now has its FedRAMP ATO, and Blue Cycle has the deployment experience to match. Kubernetes deployments that satisfy compliance boundaries: network policies for pod isolation, audit-ready logging, data residency controls, and RBAC enforcement that passes assessor review. StateRAMP-ready architectures included.
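Pod isolation in practice means default-deny plus explicit allows. The sketch below is one example policy, assuming a hypothetical `telemetry-source` namespace label and a source listener on port 9997; real boundaries depend on your authorization package.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cribl-workers-isolate
  namespace: cribl
spec:
  podSelector:
    matchLabels: { app: cribl-worker }
  policyTypes: [Ingress, Egress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels: { telemetry-source: "true" }   # hypothetical label
      ports:
        - { protocol: TCP, port: 9997 }                 # example source listener
  egress:
    - to:
        - namespaceSelector:
            matchLabels: { kubernetes.io/metadata.name: cribl }
    # plus explicit egress rules for approved destinations (SIEM, object storage)
```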

RBAC & Multi-Tenant Governance

Kubernetes namespaces and Cribl RBAC aligned to organizational boundaries. Security teams, SRE, platform engineering, and compliance each get isolated pipeline access without stepping on each other. From 2-team environments to 15+ distinct Cribl user groups — the access model scales with the org.

WHAT YOU WALK AWAY WITH

✓ Cribl Stream & Edge running natively on Kubernetes (production-ready)

✓ Helm charts with per-environment values (dev / staging / prod)

✓ GitOps pipeline promotion workflow (ArgoCD or Flux)

✓ HPA auto-scaling tuned to your throughput patterns

✓ Persistent queuing & cross-region failover architecture

✓ RBAC model aligned to your org (Kubernetes + Cribl layers)

✓ Terraform / IaC for the underlying infrastructure

✓ Prometheus + Grafana dashboards for pipeline health

✓ Runbooks for upgrades, rollbacks, DR, and capacity planning

✓ Compliance documentation (FedRAMP/StateRAMP as applicable)

The Infrastructure-as-Code Stack

Platform Layer

Terraform / Pulumi for cluster & networking. EKS / GKE / AKS provisioning. VPC, subnets, security groups as code. IAM roles & service accounts (IRSA/Workload Identity). Secrets management (Vault, AWS Secrets Manager, SOPS).
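On AWS, the cluster layer often looks like the sketch below, using the community `terraform-aws-modules/eks` module. Names, versions, and instance types are placeholders — size node groups to your throughput.

```hcl
# Illustrative EKS provisioning; all names and sizes are placeholders.
module "cribl_eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "cribl-prod"
  cluster_version = "1.29"
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.private_subnets

  eks_managed_node_groups = {
    cribl_workers = {
      instance_types = ["c6i.2xlarge"]   # CPU-heavy for pipeline processing
      min_size       = 3
      max_size       = 12
    }
  }
}
```

The VPC, IAM roles (IRSA), and secrets backends live in the same Terraform state, so the whole platform layer is reproducible from a plan/apply.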

Deployment Layer

Helm charts for Cribl Stream & Edge. ArgoCD / Flux for GitOps sync. HPA policies tuned to pipeline metrics. PersistentVolumeClaims for queuing & replay. Rolling updates & canary deployments.
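"HPA policies tuned to pipeline metrics" can mean more than CPU. The sketch below combines a CPU target with a pods metric; the custom metric name is hypothetical and requires a metrics adapter such as prometheus-adapter to surface it to the HPA.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: cribl-workers
  namespace: cribl
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet                  # or Deployment, depending on the chart
    name: cribl-worker
  minReplicas: 4
  maxReplicas: 24
  metrics:
    - type: Resource
      resource:
        name: cpu
        target: { type: Utilization, averageUtilization: 70 }
    - type: Pods                       # needs a metrics adapter (e.g. prometheus-adapter)
      pods:
        metric:
          name: cribl_events_in_per_second   # hypothetical exported metric
        target: { type: AverageValue, averageValue: "50k" }
```

Scaling on events-per-second rather than CPU alone keeps workers ahead of traffic spikes instead of reacting to saturation.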

Operations Layer

Prometheus + Grafana for pipeline observability. Alert rules for queue depth, lag, worker saturation. Config drift detection & automated remediation. Staging → production promotion via PR. Disaster recovery & cross-region failover runbooks.
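As one example of an alert rule for queue depth: the metric name below is illustrative — substitute whatever your Cribl metrics exporter actually emits, and set the threshold against your PVC sizing.

```yaml
groups:
  - name: cribl-pipeline
    rules:
      - alert: CriblQueueDepthHigh
        # Metric name is illustrative; use the series your exporter provides.
        expr: sum by (worker_group) (cribl_persistent_queue_bytes) > 50e9
        for: 10m                       # sustained, not a transient spike
        labels:
          severity: warning
        annotations:
          summary: "Persistent queue above 50 GB on {{ $labels.worker_group }}"
```

A rising queue usually means a destination is backpressuring or workers are undersized — either way, it should page before the PVCs fill.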

Related Use Cases