The Ericsson T95 launched in 2001. It had a 720 mAh Li-Ion battery, a standby time of 300 hours and a talk time of 11 hours. A decade later Sony Ericsson launched the Xperia X10 mini, with a 910 mAh Li-polymer battery and, on 3G, a standby time of approximately 285 hours and a talk time of approximately 3.5 hours.
Such is progress! Yet we cannot fault the hardware designers. Clock gating, dynamic voltage and frequency scaling, and ever smaller components mean silicon is more energy-efficient today than ever. And when thinking about energy, hardware designers know that the greatest energy savings come from starting at the highest abstraction level. Research by LSI Logic and Mentor Graphics found that the right architectural decisions could reduce energy usage by 80%, whereas optimizing at the detailed gate or layout stages saved just 5-10%.
Software can undo this at a stroke. Famously, one Linux system wasted 70% of its energy waking up to blink the console cursor. I was part of a consumer electronics project where the software demands of a standard codec meant raising the processor frequency, and hence its operating voltage. With dynamic power proportional to frequency and to the square of the voltage, and static leakage also rising with voltage, the projected battery life plummeted. The project became non-viable and was cancelled.
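The damage compounds because the two knobs are linked. A back-of-envelope sketch using the standard CMOS dynamic power model, P ≈ C·V²·f; the capacitance, voltages and frequencies below are purely illustrative, not figures from the project:

```python
# Back-of-envelope check of the CMOS dynamic power model, P ~ C * V^2 * f.
# All numbers here are illustrative, not measurements from any real device.

def dynamic_power(c, v, f):
    """Dynamic switching power in watts: P = C * V^2 * f."""
    return c * v * v * f

C = 1e-9  # effective switched capacitance in farads (illustrative)

# Baseline: 200 MHz at 1.0 V.
baseline = dynamic_power(C, 1.0, 200e6)

# A 50% clock boost to 300 MHz forces a higher operating voltage, say 1.2 V.
boosted = dynamic_power(C, 1.2, 300e6)

print(f"power increase: {boosted / baseline:.2f}x")  # 1.5x from f times 1.44x from V^2
```

Running this prints `power increase: 2.16x`: a 50% frequency increase costs more than double the dynamic power once the accompanying voltage bump is accounted for.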
Ultimately software controls the hardware. The choice of algorithms and data structures has a huge impact on energy consumption. The traditional compiler focus on speed at the expense of all other considerations is very bad news for energy usage. Yet energy usage is invariably a secondary software requirement, if it is a requirement at all. We need a new way of thinking in software, and, as our hardware colleagues have learned, the higher the abstraction level at which we start, the greater the potential saving.
As part of the Energy Aware Computing Initiative (EACO), Embecosm funded research at Bristol University during the summer of 2012 to quantify rigorously the impact of compiler optimization options on energy consumption. Before we can improve the energy performance of the code generated by compilers, we need to measure how they perform today. We looked at detailed energy consumption across a number of embedded processors as they ran a range of benchmark programs compiled with different optimizations selected. I'll be covering this work and the results in more detail in future blog posts, but the good news is that the compiler offers considerable scope for reducing the energy consumption of compiled programs.
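Measurements of this kind boil down to energy, not just instantaneous power: energy is the integral of power over the run, E = ∫ P(t) dt. A minimal sketch of turning sampled power readings into an energy figure using the trapezoidal rule; the sample values are made up for illustration and are not data from the Bristol experiments:

```python
# Energy is the integral of power over time: E = integral of P(t) dt.
# Sketch of converting sampled power readings into an energy figure
# with the trapezoidal rule. The sample data below is invented.

def energy_joules(times_s, powers_w):
    """Trapezoidal integration of power samples; returns energy in joules."""
    total = 0.0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        total += 0.5 * (powers_w[i] + powers_w[i - 1]) * dt
    return total

# A 100 ms benchmark run sampled every 25 ms, power in watts (illustrative).
t = [0.000, 0.025, 0.050, 0.075, 0.100]
p = [0.40, 0.55, 0.60, 0.50, 0.45]

print(f"{energy_joules(t, p) * 1000:.2f} mJ")  # prints "51.88 mJ"
```

Comparing such per-run energy figures across the same benchmark compiled with different optimization flags is what lets us rank the flags by energy rather than by speed alone.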
Meanwhile, when your smartphone next runs flat before the day is over, and you ask “who ate my battery?” you’ll know the answer. The software ate it.