Our Digital Carbon Footprint is Not Negligible
The pandemic ushered in a greater acceptance of digital business. Unlike pre-2019, many companies are now willing to conduct more of their business in virtual domains.
Even though we are moving back to an in-person world, and face-to-face meetings and interaction still carry great benefits, there remain numerous upsides to conducting business digitally. To name just a couple: convenience and a reduced environmental impact.
It's pretty clear that conducting a meeting online is going to have less of a carbon impact than having to take a flight. In most cases, it is probably also more environmentally beneficial than driving or taking a train.
But we shouldn't be lured into thinking that everything digital is a good choice for the planet.
For example, meetings where multiple participants from the same location join individually have a bigger impact than if those participants were to meet in person or share a single stream.
Of course, there are variables in that equation too. Do you meet in a conference room that is heated and well lit? Do you make a coffee or tea?
It's complicated, but adopting the reduce, reduce, reduce mindset can help move you in the right direction.
Comparing Video Streaming to Boiling a Kettle
According to the International Energy Agency, one hour of HD video streaming using a laptop over WiFi in the UK generates 11g of CO2e. Compare that to the 17g of CO2e generated by boiling a kettle.
It sounds odd, but it means that two unnecessary HD video streams could generate more CO2e emissions than boiling a kettle.
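As a quick sanity check on that comparison, using only the IEA-derived figures quoted above (these are rough per-hour and per-boil estimates, not measurements):

```python
# Rough comparison of streaming vs kettle emissions, in grams of CO2e,
# using the figures quoted above for the UK.
STREAM_G_PER_HOUR = 11  # one hour of HD streaming on a laptop over WiFi
KETTLE_G_PER_BOIL = 17  # boiling a kettle once

# Two unnecessary simultaneous streams for one hour:
two_streams = 2 * STREAM_G_PER_HOUR
print(two_streams)                      # 22 g CO2e
print(two_streams > KETTLE_G_PER_BOIL)  # True: two streams exceed one boil
```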
Data Has a Carbon Footprint Too
Every character, every image, every byte of data created and transferred requires energy throughout its lifetime.
In fact, if we were to think of this amorphous, nebulous thing we call the Internet and the Cloud as a country, it would rank near the top of the list of the highest CO2-emitting countries.
Requiring 20% of the planet's energy production, the Internet generates between 2% and 5% of global CO2 emissions - more than any individual country except the USA, India and China.
So we should be mindful of needlessly generating, transmitting and storing data. Simply storing data in the Cloud also requires vast amounts of energy.
(Like everything, there are exceptions. Long-term storage, for example, could be offline on magnetic tape, which requires less energy - though there is still some overhead in creating the tapes and managing the ambient storage conditions.)
Insights Not Data
For many years, companies have been building vast sets of data known as big-data lakes. The intention has been to collect as much data as you can, wherever you can, because someday it will be valuable.
Not only has this approach often failed to deliver on its promises, but it is incredibly costly, both financially and environmentally.
In a previous article, I looked at how, if the Internet of Things (IoT) industry reached the projected figures and all data was sent to/via the Cloud, a total of 79.4 zettabytes of Cloud data could be responsible for the production of 158.8 billion tons of greenhouse gases. 😳 That's a lot! Read the article to put that into a galactic context.
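To get a feel for what those two projections imply together, they work out to roughly 2 kg of CO2e per gigabyte of stored Cloud data - a back-of-the-envelope figure derived only from the two numbers above, not an independent estimate:

```python
# Back-of-the-envelope: implied emissions intensity from the projections above.
ZETTABYTES = 79.4               # projected Cloud data, in zettabytes
EMISSIONS_BILLION_TONS = 158.8  # projected greenhouse gases, in billion tons

gigabytes = ZETTABYTES * 1e12                     # 1 ZB = 10^12 GB
kilograms = EMISSIONS_BILLION_TONS * 1e9 * 1000   # billion tons -> kg

kg_per_gb = kilograms / gigabytes
print(round(kg_per_gb, 1))  # 2.0 kg CO2e per GB
```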
Related article: Using Edge AI for privacy-conscious computer vision
Evolving from data towards insights
Due to the dual high costs of a big data approach, many companies are looking towards using local compute power, known as Edge Computing, to keep as much data out of the Cloud as possible, storing only what is necessary.
One such company is Ekkono.ai, and I had the opportunity to ask their CEO, Jon Lindén a few questions.
Jon, when it comes to improving digital sustainability, what would you suggest we focus on?
Across the IT sphere, energy is one of the biggest costs, if not the biggest, and it has the biggest CO2 impact, so reducing energy consumption is absolutely necessary.
That goes for all sorts of things, from buildings to vehicles, laptops and sensors. Our focus is really in the world of IoT. We look to see how we can help reduce energy consumption by deriving insights from sensor data using AI at the Edge, also known as Edge AI.
As you mention, though, data transmission and storage can be a huge burden, from a pure cost basis as well as a carbon footprint point of view. The transmission, or communication of data is the single biggest energy consuming function of most connected devices.
So our approach uses compute power local to the data source to run machine learning Edge AI algorithms and derive important insights from the data - or simply separate the information from the noise.
This means you only have to worry about data that really adds value, and you don't have to spend all that energy moving and storing pointless data.
So you'd advocate using Edge over Cloud?
The Cloud is incredibly powerful for many reasons, but you don't have to put everything there. We aim to put only the really important data, or insights, in the Cloud.
We use computing power at the Edge to sift through the massive quantities of data, derive insights and only then communicate those across the net to the Cloud.
In fact, by taking this approach we see that you can reduce the carbon footprint of the Cloud, in terms of both data storage and processing, quite significantly.
Do you have any tips on best practice?
It's tricky to come up with something generic, other than blanket statements like "reduce energy consumption", because in my experience, every problem is unique.
Actually, that's a big part of my vision. Manufacturers have spent a lot of effort optimising their designs and products to make sure they operate at their best and most efficient. But when you deploy or install these devices, be they cars, computers, air conditioning units or sensors, each becomes unique.
For example, the same model of air conditioning unit supplied by the same manufacturer would need to operate in entirely different ways in New Delhi vs Stockholm.
With that, I focus on how to solve the individuality problem. How do we optimise each and every single piece of equipment, knowing that its specific circumstances make it unique?
We refer to this as the next big sustainability leap for technology - helping optimise each and every device for its specific use case and deployment, whether that is an air conditioning unit, robot lawn mower, car or a bank of photovoltaic solar panels.
I'd also say that an incremental approach is always a winning strategy.
It can be tempting to try to go for the big-bang win, but often the route to quicker success is through smaller, incremental steps.
Our AI solutions work on that basis. We look to start small, deriving insights from the smallest data sets possible. Over time, we feed them new data sets, which they learn from and evolve.
The reason for this is that at the beginning of a project no one has the data they need. We take the Tamagotchi approach, the more you feed it the better it gets.
Are you talking about Big Data and Data Lakes?
There's been a lot of focus (and money) spent on building data lakes, but we're not fans of collecting data for the sake of data. As you mentioned earlier, much of this data never really translates into value.
We believe that the Cloud should be used wisely to store and process only the data that really matters. I refer to this as cleaning the water up-river, before it enters the lake.
I want to say thank you again to Jon for sharing his opinions with me. If you'd like to know more about his work, check out Ekkono.ai.