The Golden Bough – Zepl and Kubernetes
Elyse Phillips
April 17, 2018
Cloud-native architecture is revolutionizing the way we design, implement, and deploy software. Here at Zepl, we are fully embracing this modern approach, using Kubernetes, containers, and a scalable microservice architecture to create the foundation for the next generation of data analytics.
Kubernetes is an open source platform that provides many powerful features for managing modern containerized applications in a distributed environment, including:
- Elastic scalability
- Fault tolerance
- Streamlined deployments and upgrades
- Storage management
To better understand how we have benefited from Kubernetes orchestration, Docker containers, and cloud-native architecture, let’s look back at our original approach to application design and deployment.
Zepl is fundamentally a distributed system composed of many interoperating, independently deployed microservices. When we first began building our platform, we used an automated process to build and package our applications as RPMs, and deployed them to servers using a combination of SSH scripts and Ansible playbooks. This approach had a number of limitations:
- Automating and managing the deployment process was complex and difficult to debug when failures occurred, which limited our ability to perform frequent production deployments.
- Application versioning was not automated. Rolling back to a previous version was a time-consuming manual process.
- Application configuration and secrets were managed manually with each deployed component, making it difficult to maintain consistency and update configurations reliably across the distributed environment.
- Our automation was focused on basic build and deployment, but we needed more – runtime monitoring of components, elastic scalability, centralized logging, and rolling updates/rollbacks of components in the distributed system.
- Packaging and deploying our application services as RPMs tied each service to a full server, limiting our ability to maximize infrastructure and resource utilization.
By adapting our application architecture to a containerized model and using Kubernetes for orchestration, we were able to make significant improvements to our build, deployment, and runtime management processes. We can now:
- Run a full build and test cycle for every merge commit to the master branch of our source repository, packaging the application as a new, versioned Docker image in our AWS container registry (ECR).
- Automate the deployment of new versions of application components (or rollbacks to previous versions) in a reliable, controlled way using Kubernetes rolling updates that reference our versioned Docker images in ECR (see the Deployment sketch after this list).
- Manage application configuration centrally and consistently using Kubernetes ConfigMaps and Secrets (sketched after the list).
- Implement a centralized logging and monitoring solution by capturing standard output from our Docker containers with Fluentd agents on each node, then ingesting that data into an Elasticsearch cluster running in Kubernetes for analysis and monitoring through Kibana dashboards (a DaemonSet sketch follows the list).
- Incorporate automated health checks into our runtime environment using Kubernetes readiness and liveness probes (sketched below). With these health checks, not only can we detect and resolve problems more quickly, but we can also automate network traffic management through our load balancers.
- Support elastic scalability in our runtime environment by utilizing Kubernetes Deployments and ReplicaSets, allowing us to easily add or remove instances of our services when necessary (see the replicas field in the Deployment sketch below).
- Save on AWS infrastructure costs: by containerizing our applications, we deploy lightweight containers onto shared nodes instead of spinning up a full EC2 instance for each application instance. And by taking advantage of Kubernetes Ingress controllers for load balancing and traffic management (an Ingress sketch follows the list), we can run with fewer Elastic Load Balancers (ELBs).
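To make the rolling-update workflow concrete, here is a minimal Deployment sketch of the kind described above; the service name, labels, ECR account, region, and image tag are hypothetical placeholders rather than our actual configuration.

```yaml
# Hypothetical sketch: names, labels, and the ECR image URI are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3                  # scaling is a one-line change (or `kubectl scale`)
  selector:
    matchLabels:
      app: example-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # keep full capacity during an upgrade
      maxSurge: 1              # replace pods one at a time
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: example-service
          # Versioned image from ECR; an upgrade or rollback just changes this tag.
          image: 123456789012.dkr.ecr.us-west-2.amazonaws.com/example-service:v1.4.2
          ports:
            - containerPort: 8080
```

With a Deployment like this, an upgrade is a `kubectl set image` to the new tag and a rollback is `kubectl rollout undo deployment/example-service`; in both directions, Kubernetes replaces pods gradually rather than all at once.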
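Configuration follows the same declarative pattern. The sketch below pairs a hypothetical ConfigMap with a Secret; the keys and values are purely illustrative.

```yaml
# Hypothetical sketch: keys and values are illustrative only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-service-config
data:
  LOG_LEVEL: "info"
  CACHE_TTL_SECONDS: "300"
---
apiVersion: v1
kind: Secret
metadata:
  name: example-service-secrets
type: Opaque
stringData:                    # the API server stores these values base64-encoded
  DB_PASSWORD: "change-me"
```

A container spec can then inject both as environment variables via `envFrom` (with `configMapRef` and `secretRef`), turning a configuration change into a `kubectl apply` plus a rolling restart instead of an SSH session per server.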
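For log collection, a DaemonSet runs one Fluentd agent per node, tailing the container logs the kubelet exposes on the host. This is a rough sketch: the image tag and Elasticsearch address are assumptions, and the ServiceAccount/RBAC that Fluentd's Kubernetes metadata filter needs is omitted for brevity.

```yaml
# Hypothetical sketch: image tag and Elasticsearch host are placeholders;
# the ServiceAccount/RBAC for metadata enrichment is omitted for brevity.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.2-debian-elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch.logging.svc.cluster.local"
          volumeMounts:
            - name: varlog
              mountPath: /var/log                    # symlinks to container logs
            - name: dockercontainers
              mountPath: /var/lib/docker/containers  # the underlying log files
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: dockercontainers
          hostPath:
            path: /var/lib/docker/containers
```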
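Health checking is configured per container. The fragment below belongs inside the container spec of the Deployment sketch above; the /healthz and /ready endpoints are hypothetical stand-ins for whatever a given service exposes.

```yaml
# Container-spec fragment; endpoint paths and timings are illustrative.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15      # give the service time to start up
  periodSeconds: 10            # a container that keeps failing is restarted
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5             # a failing pod is removed from Service endpoints
```

The readiness probe is what drives the automated traffic management mentioned above: a pod that fails it is dropped from the Service's endpoints, so traffic stops flowing to it without any manual intervention.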
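Finally, a single Ingress resource, served by one ingress controller behind one ELB, can route traffic to many services. The hostname, paths, and backend services below are hypothetical; `extensions/v1beta1` is the current Ingress API version as of this writing.

```yaml
# Hypothetical sketch: hostname, paths, and backend services are placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            backend:
              serviceName: example-service   # Service fronting the Deployment above
              servicePort: 8080
          - path: /
            backend:
              serviceName: web-frontend
              servicePort: 80
```

One controller fanning out like this replaces what would otherwise be a separate ELB per service.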
By embracing cloud-native architecture, we’ve dramatically improved the speed and reliability with which we can deploy new versions of our applications, while at the same time gaining more robust scalability, monitoring, and configuration management. The result is greater agility: we can deliver exciting new features to our customers quickly while providing a stable, reliable runtime environment for our users.