Theo Andresier, resident Kubernetes man, sustainability expert extraordinaire, and all-round tech whiz at Devoteam M Cloud Denmark, talks about recent challenges in building a climate-aware Kubernetes (K8s) solution while adhering to best practices.
At Devoteam M Cloud we always rise to the challenge when it comes to state-of-the-art Cloud solutions and services! And this time, a customer asked our resident Kubernetes expert, Theo Andresier, for advice on bringing sustainability into the Kubernetes solution they are using. Read his thoughts on sustainability in tech, and specifically sustainable Kubernetes, below.
Spoiler: We measure and calculate the ‘cleanliness’ of the electricity in different Microsoft Azure Datacenter regions. We found a significant difference in carbon-cost when comparing the same workloads on K8s in different regions!
Can the Cloud daredevils go green?
A new client has brought an interesting challenge to our doorstep. And, being the Cloud daredevils we are at M Cloud, we rose to meet it:
How does one run a Kubernetes environment while being sustainably minded? And is there a way to run Cloud resources dynamically in different regions, depending on each region's climate impact throughout the day?
Not ones to shirk or hide from difficult projects, we scoured the internet and professional forums for examples of other consultancies or SaaS [Software as a Service] providers that had succeeded in going green or that offered a comparable solution.
Inspiration from the internet!
It turned out none were to be found. So I volunteered to tackle the architectural and technical challenge myself. I found help and inspiration in two different publications:
A brilliant K8s scheduler paper from 2018:
‘A Low Carbon Kubernetes Scheduler’ – ceur-ws.org/Vol-2382/ICT4S2019_paper_28.pdf
An insightful blog post from Azure:
‘Carbon Aware K8s’ – devblogs.microsoft.com/sustainable-software/carbon-aware-kubernetes/
In the end, we agreed to treat the carbon emissions caused by the workloads and Cloud architecture as a general indication of their climate impact. The next challenge was to figure out how to measure the data and apply the knowledge in a new solution.
The Idea
If we can isolate the electricity usage of our Kubernetes clusters, we can weight each watt-hour used with a rating of CO2 emitted per MWh generated – the Marginal Operating Emissions Rate (MOER).
Using this measurement is not as arbitrary as it seems at first – I am aware that it excludes other greenhouse gases and the emissions from manufacturing – but the MOER data itself is readily available through different APIs and government websites.
Different Cloud regions/availability zones (and probably individual solutions) have different MOERs, and this makes it possible to place Cloud workloads in regions according to their environmental impact.
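To make the weighting concrete, here is a back-of-the-envelope calculation with made-up numbers (providers such as WattTime commonly express MOER in lbs of CO2 per MWh): a node pool drawing a steady 500 W consumes 12 kWh, or 0.012 MWh, per day, so

\[
\text{emissions} = \text{energy} \times \text{MOER}
\]
\[
0.012\ \text{MWh} \times 700\ \text{lbs CO}_2/\text{MWh} \approx 8.4\ \text{lbs CO}_2\ \text{per day}
\qquad \text{vs.} \qquad
0.012\ \text{MWh} \times 200\ \text{lbs CO}_2/\text{MWh} \approx 2.4\ \text{lbs CO}_2\ \text{per day}
\]

The same workload, simply placed on a cleaner grid, emits roughly a third of the CO2 – and that is exactly the kind of gap we see between Azure regions.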
If we look at the figure below we can, for instance, see the difference in Azure between two different regions (with the $ cost included for comparison):
Depending on the bias/weight your company gives its carbon emissions, this data can be used in a variety of ways. A single-minded focus on reducing your average MOER can lead to devastating cost increases – and vice versa.
Technical shenanigans
Once the MOER is defined for each of the locations where we are running a workload – an example could be the different nodes within a Kubernetes cluster – we can take our region-agnostic workloads and schedule them in the regions where the environmental impact will be lowest (the lowest MOER).
Without doing any programmatic updating, we can then write the average MOER data for each of the regions our cluster will run in into a secret variable and drive a scheduling bias off that data – an example YAML file is sketched below:
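The manifest below is a minimal sketch of that idea rather than the full delivery: the node label key is the standard topology.kubernetes.io/region label, but the region names, MOER figures, and affinity weights are illustrative assumptions. The MOER data is recorded in a Secret, and a preferred node affinity biases the scheduler towards the lower-MOER region:

```yaml
# Illustrative sketch – region names, MOER values and weights are examples only.
apiVersion: v1
kind: Secret
metadata:
  name: region-moer
type: Opaque
stringData:
  # Average MOER (lbs CO2/MWh) per region, kept here for reference and
  # translated by hand into the affinity weights below.
  westeurope: "210"
  eastus: "690"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: region-agnostic-workload
spec:
  replicas: 3
  selector:
    matchLabels:
      app: region-agnostic-workload
  template:
    metadata:
      labels:
        app: region-agnostic-workload
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            # Heavier weight for the region with the lower average MOER.
            - weight: 80
              preference:
                matchExpressions:
                  - key: topology.kubernetes.io/region
                    operator: In
                    values:
                      - westeurope
            - weight: 20
              preference:
                matchExpressions:
                  - key: topology.kubernetes.io/region
                    operator: In
                    values:
                      - eastus
      containers:
        - name: app
          image: nginx:1.25
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```

Because the affinity is only preferred rather than required, the scheduler can still fall back to the higher-MOER region when the greener one runs out of capacity, so availability is not traded away for sustainability.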
An even starker difference between two regions in the US:
In conclusion
It is incredibly exciting to see that the technologies we use for the harshest of our workloads can be biased to reduce our environmental impact. One can only imagine what is truly possible for those who understand the control plane of a cluster to a greater extent than I do.
While I don’t believe most companies will accept a direct cost increase for (even significant) environmental savings, I have high hopes for the future of the K8s and Azure ecosystems and their ability to promote and ease access to technologies that are beneficial and impactful. I intend to release this scheduler as an open-source project so that these savings can be passed on and utilised to the best of our ability.
Only with everyone doing something can we hope to do anything!
Further reading
Without going into too much technical detail, I will include the inspiration for the technical side of this delivery below:
‘A Low Carbon Kubernetes Scheduler’ – ceur-ws.org/Vol-2382/ICT4S2019_paper_28.pdf
‘WattTime Grid Identifier’ – watttime.org
‘WattTime API Documentation’ – watttime.org/api-documentation/#introduction
‘Carbon Aware K8s’ – devblogs.microsoft.com/sustainable-software/carbon-aware-kubernetes/
‘Sustainability with Rust – AWS’ – aws.amazon.com/blogs/opensource/sustainability-with-rust/