I attended the kubernetes meetup in Amsterdam on 2019-10-02. Here are my summaries of the talks :-)
Alex is both a founder of StorageOS and a co-chair of the CNCF storage SIG. So he’s got two hats. More details on the SIG: https://github.com/cncf/sig-storage
Why is storage important? Well, there’s no such thing as a stateless architecture, in the end: something needs to be stored somewhere. Containers are nicely portable, but if the storage they need isn’t portable, you have a problem. That’s why storage is important.
The SIG wrote a summary of the storage landscape: https://github.com/cncf/sig-storage . Previously, you had to work with whatever storage your company’s IT department was using. Now developers get a say in it.
Storage has attributes: availability, performance, scalability, consistency, durability. But they can mean different things to different people. Performance might mean “throughput” but also “latency”, for instance.
You can categorize storage solutions: hardware, software, cloud services. “Software” then means “software-defined storage on commodity hardware” and often tries to “scale out”. “Hardware” is much more specialized and tries to “scale up”.
Another categorization: access via volumes (blocks, filesystem) and access via an API (like object stores). Kubernetes mostly deals with the volumes kind.
Data access: file system, block, object store. All of them are better/worse suited for different tasks. You won’t use an object store for low-latency work, for instance.
A big differentiator: storage topology. Centralised, distributed, sharded, hyperconverged. “Centralised” often means proprietary hardware. “Distributed” often uses a shared-nothing architecture with regular hardware. “Sharded” is often good at spreading your load, but it can be very tricky to get right. “Hyperconverged” means that nodes are used for both storage and computing.
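As an aside on why sharding “can be very tricky to get right”: here is a minimal Python sketch (my illustration, not from the talk) of naive hash-modulo sharding. It shows that merely changing the shard count remaps most keys, which is exactly the kind of pitfall that techniques like consistent hashing exist to avoid.

```python
# Naive hash-based sharding: deterministic, but fragile under resizing.
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Map a key deterministically to one of `num_shards` shards."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

keys = [f"volume-{i}" for i in range(1000)]
with_4 = [shard_for(k, 4) for k in keys]
with_5 = [shard_for(k, 5) for k in keys]
moved = sum(a != b for a, b in zip(with_4, with_5))
# With modulo hashing, roughly 80% of keys land on a different shard
# after growing from 4 to 5 shards.
print(f"keys that move when growing 4 -> 5 shards: {moved} of {len(keys)}")
```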
Another aspect: data protection. RAID and mirrors for local disks. Or replicas of entire nodes. Erasure coding is quite extreme distribution; that’s how Amazon’s S3 can claim eleven 9’s of durability.
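To make the erasure coding idea concrete, a minimal sketch (my illustration, not the talk’s) using the simplest possible code: single XOR parity, as in RAID-5. Real systems like S3 use more elaborate Reed–Solomon-style codes spread over many shards, surviving multiple simultaneous losses.

```python
# XOR parity: store N data shards plus one parity shard;
# any single lost shard can be rebuilt from the remaining ones.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(shards):
    """Parity is the XOR of all data shards."""
    parity = shards[0]
    for s in shards[1:]:
        parity = xor_bytes(parity, s)
    return parity

def rebuild(remaining_shards, parity):
    """Reconstruct the single missing data shard by XOR-ing everything left."""
    missing = parity
    for s in remaining_shards:
        missing = xor_bytes(missing, s)
    return missing

data = [b"aaaa", b"bbbb", b"cccc"]
parity = make_parity(data)
# Lose shard 1 ("bbbb") and rebuild it from the survivors plus parity:
print(rebuild([data[0], data[2]], parity))  # b'bbbb'
```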
Kubernetes has the CRI (Container Runtime Interface) and the CNI (Container Network Interface). It now also has the CSI: the Container Storage Interface. Kubernetes is a container orchestration solution, so it really also needs to talk to the storage layer.
How k8s progressed:
K8s native (in-tree) drivers: hard to debug and update.
The Docker volume driver interface.
K8s flex volumes, the first outside-of-the-core solution. It still works.
CSI, the Container Storage Interface. 1.0 was released in 2018; it is now the standard.
Now the second part of the presentation: StorageOS. “Software defined cloud native storage”. It is a containerised project, so there are no other dependencies.
It consists of two parts: the control plane manages the actual storage; the data plane manages the volumes (both block and file system).
It is normally deployed as a single lightweight container on every individual node (via a daemonset, for instance). Every container exposes an API. One of the integrations available for it is k8s’ CSI.
StorageOS creates a pool of storage that spans the entire cluster. An admin will configure/register storage classes. Developers put persistent volume claims in their k8s configuration.
As soon as you get a volume in the storage pool, it is available on any node in the entire cluster. This gives you lots of flexibility in moving containers between nodes.
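For illustration, this is roughly what such a developer-side volume claim looks like. It is rendered here as a Python dict dumped to JSON (which kubectl also accepts); the storage class name “storageos” and the requested size are my assumptions, not something from the talk.

```python
# A hedged sketch of a PersistentVolumeClaim manifest, built as a dict.
import json

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "my-data"},  # hypothetical claim name
    "spec": {
        # Assumed to match a storage class the admin registered:
        "storageClassName": "storageos",
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "5Gi"}},
    },
}
print(json.dumps(pvc, indent=2))
```

You would save the JSON output to a file and apply it with kubectl; the provisioner behind the storage class then carves a volume out of the cluster-wide pool.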
Behind the scenes, it uses synchronous replication between a primary volume and a user-defined number of replicas to protect data from disk or node failure. Nodes can have different numbers/sizes of disks.
They’ve tried to make StorageOS usable for a “hyperconverged” environment where every node is used for both storage and compute. StorageOS will run quite happily on a single CPU and a GB of RAM.
Most people will manage StorageOS via k8s, but you can also use the command line or a GUI. For monitoring, they provide lots of Prometheus endpoints.
Some extra features:
Locality: you can get the workload to run on the node where the data is.
There’s encryption at rest. Keys are stored as kubernetes secrets. The advantage is that you have your keys, instead of your cloud provider having the keys to your data.
Sergey works at Everon/EVBox (https://evbox.com), the host of the meeting.
They knew from day one that they had to run in the cloud, so they were lucky to be cloud-native from the start. They chose Google’s cloud platform then. And in general, it has been working fine for them.
They had a small team originally and didn’t want to “waste” time on infrastructure. They started using Google App Engine. Google at that time used the marketing term “NoOps”, which sounded fine to them :-)
When they switched to kubernetes, it took seven months. That was a bit long. They tried to get buy-in for the process by involving lots of people from most teams. This wasn’t such a good idea (making decisions took a lot of time); it would have been better to do it with a smaller ad-hoc team. Another reason for the slow switch was that the company was growing a lot at that time: they needed to get the new developers up to speed at the same time.
Another problem: slow development environments. They used Docker Desktop. That used 25% CPU when idle. Kubernetes just isn’t designed to run on a laptop. (Note: there were some other suggestions, like minikube, from the audience)
A third problem: cluster configuration. Configuring anything within a kubernetes cluster works fine. But once you have to interact with something in the outside world (like some IP ranges), you can run into problems.
Some lessons learned:
Try it with one product first. Only then move on to the rest of your products. You have some initial pain because you have to maintain two infrastructures, but it is worth it.
Spread the knowledge, but focus. Don’t let knowledge-spreading hold your migration back.
Set a scope by prioritizing: first application servers; then configuration/scheduling/service mesh; then messaging/storage.
Know the cost of a configuration change.
Know if cloud-agnostic is important for you.
Monitoring is important. The rest of the talk is about monitoring.
Monitoring: there’s a lot of it! Zabbix, Prometheus, Splunk, Nagios, Datadog, Graphite, etc.
A book he suggests: “The Art of Monitoring”. From the same author there’s also “Monitoring with Prometheus”.
Monitoring: there are lots of sources. Your code, libraries, servers, the OS, your infrastructure, services from your cloud provider, external services, etc. And there are many destinations: storage, visualisation, alerting, diagnostics, automation, etc.
So: make an inventory of what you want to monitor and how you want to use it.
In kubernetes, you additionally want to monitor containers, pods, nodes and your cluster. There are some extra sources, too: the kubelet, the scheduler and the proxy. Interestingly, there are also more destinations: the scheduler (they’re not that advanced that they need to customise it, yet), autoscalers (they’re using this), the dashboard and more.
Note: there is no built-in monitoring data storage solution in kubernetes. You’ll need to use something else for that (like Prometheus).
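The “make an inventory” advice can be sketched as a simple mapping from sources to destinations; once it is written down, you can see which destinations you actually have to provision. All the names here are illustrative, not from the talk.

```python
# A toy monitoring inventory: each metrics source mapped to the
# destinations that will consume its data.
inventory = {
    "application code": ["storage", "dashboards", "alerting"],
    "kubelet": ["storage", "autoscaler"],
    "node OS": ["storage", "alerting"],
    "cloud provider services": ["dashboards"],
}

# The set of distinct destinations tells you what to set up.
destinations = sorted({d for dests in inventory.values() for d in dests})
print("destinations to provision:", destinations)
```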
What you need to design is a monitoring pipeline:
Some public clouds have their own default monitoring solution. With Google, you get “Stackdriver”. Amazon: CloudWatch. Azure: Monitor. It is relatively cheap and it is preconfigured for the tooling you’re using.
If you don’t want to use such a specific monitoring stack… and if you want an OSS stack… Very common: Prometheus (https://prometheus.io/). And for visualisation, Grafana.
Prometheus itself is just a monitoring gatherer/forwarder, but there are several other projects under its umbrella, like a TSDB for storing the monitoring data. There’s also the Alertmanager. There’s no visualisation, but you can use Grafana for that. Prometheus uses a pull model, so you need to provide metrics via endpoints for it to collect. If you need to push metrics, you can configure a “pushgateway” to work around this.
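A minimal sketch of the pull model (stdlib-only and hand-rolled; a real application would use the official prometheus_client library instead): expose metrics in the Prometheus text exposition format over HTTP, and the Prometheus server scrapes the endpoint on a schedule.

```python
# A tiny /metrics endpoint in the Prometheus text format, plus one
# self-scrape to show what a Prometheus server would fetch.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

REQUESTS = 0  # toy counter metric

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global REQUESTS
        REQUESTS += 1
        body = (
            "# HELP app_requests_total Requests served.\n"
            "# TYPE app_requests_total counter\n"
            f"app_requests_total {REQUESTS}\n"
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MetricsHandler)  # ephemeral port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
metrics = urllib.request.urlopen(f"http://127.0.0.1:{port}/metrics").read().decode()
server.shutdown()
print(metrics)
```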
For OSS, you can also look at InfluxData (InfluxDB, Telegraf, Chronograf, Kapacitor).
Open source stacks: they’re cheap, cloud-agnostic and highly customizable, with a healthy ecosystem. There is still a bit of competition in this area: Graphite, ELK, Zabbix/Nagios.
And… there are loads of commercial solutions that promise to solve all your monitoring problems. For instance Datadog. Datadog inside kubernetes means installing an agent container on every node. Once the metrics are collected, Datadog handles everything else for you.
Commercial solutions: they cost you a lot of money. But they’re often quick to configure! So if you have the money to spend, you can get up and running with pretty good monitoring real quick. There’s lots of competition in this area. Lots of companies offering this kind of service.
There was a question about logging. He answered that Google’s Stackdriver works quite OK here. If they move to OSS, they’ll probably use Prometheus for monitoring and an ELK stack for logging. Doing the monitoring inside ELK, too, wouldn’t give you good monitoring, he thinks.
Kubernetes 1.16: watch out, some deprecated APIs have been removed. When deploying a new cluster for a workshop, with infrastructure-as-code, two days after 1.16 came out, his code broke down: Helm and all the Helm charts he used were broken. He flies close to the sun by always directly using the latest of the latest, but be aware that the change to 1.16 can be somewhat more bothersome.
Something to look at: Octant, made by VMware. It is a bit like the kubernetes dashboard, but it works on the client (it uses your kubectl config file). It visualizes ‘kubectl’. https://github.com/vmware-tanzu/octant
Kapp (https://get-kapp.io/). It is part of https://k14s.io/, “kubernetes tools that follow the unix philosophy to be simple and composable”. Kapp is a bit comparable to Ansible, especially in its output. It is a simple deployment tool, focused on the concept of a “kubernetes application”.
My name is Reinout van Rees and I work a lot with Python (programming language) and Django (website framework). I live in The Netherlands and I'm happily married to Annie van Rees-Kooiman.