Embecosm’s reputation stands on our ability to deploy the latest open source technology. Since its development is a collaborative effort, we must also invest in it if we wish to benefit from it.

We invest in academic and industrial research that will lead the development of the next generation of software and electronics. Central to this is a commitment to open source technology. As such, the projects we fund are fully open: all outputs are published under free and open source licenses, and progress is made public, typically using a wiki for reporting and collaboration.

We are very fortunate to work with world-class academic partners, including INRIA (France), Bristol University (UK) and the UK Science & Technology Facilities Council (STFC) high performance computing centre at Daresbury, UK. We are grateful to the UK Government — in particular the Technology Strategy Board — and to the European Union for the research support they provide.

We only publish in journals that offer open access.

Active

Total Software Energy Reporting and Optimization (TSERO)

The UK has world-leading technology for monitoring high performance computing (HPC) systems and for compiling programs for energy efficiency. This project brings these technologies together to provide an end-to-end system that significantly reduces the energy used by such systems, using the STFC’s HPC facilities to demonstrate its effectiveness. Advancing such technology carries risks, but is made possible through Innovate UK support.

As well as HPC, the technology is applicable to the much larger data centre market, whose annual electricity bill is $7.6Bn in the US alone. This project has the potential to reduce that by 20%. This will yield commercial benefits to the industrial participants (all small, export-based UK companies addressing this global market) as well as to the wider UK economy. It will also have major environmental benefits by reducing CO2 emissions.

An Altruistic Processor (AAP)

Embecosm is developing an architecture specification designed for experimenting with compiler back-end implementation, and which in particular has features common to small, deeply embedded systems. It is also designed to be easy to use in demonstrations and for education and training, with hardware and simulator implementations as well as the tool chain and standard libraries.

A particular objective is to improve LLVM for deeply embedded systems. At present only one of the 12 architectures officially supported in the LLVM distribution, the MSP430, is not a 32- or 64-bit design. But even the MSP430 is a RISC architecture with a uniform address space, and its LLVM implementation still has experimental status.

For further information please see the announcement blog post.

Completed

Machine Guided Energy Efficient Compilation (MAGEEC)

From our previous work, “Identifying Compiler Options to Minimise Energy Consumption for Embedded Platforms”, we confirmed that the choice of compiler optimization flags can have a significant effect on the energy usage of a compiled program.

This project builds on earlier MILEPOST work to create a generic compiler framework which combines machine learning with accurate energy measurement, in order to select the most energy-efficient compilation options.
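
By way of illustration only (this is not MAGEEC’s actual design or data), the sketch below shows one simple way machine learning can select flags: each previously measured program is summarised as a small feature vector together with the flag set that minimised its energy, and a new program is given the flags of its most similar neighbour. The feature choices, training entries and flag sets are all hypothetical.

    /* Illustrative only: nearest-neighbour selection of compiler flags from
     * program features.  The features, training data and flag sets below are
     * hypothetical and stand in for a real trained model. */
    #include <math.h>
    #include <stdio.h>

    #define NUM_FEATURES 3   /* e.g. basic block count, loop depth, memory ops */

    struct training_entry {
        double features[NUM_FEATURES];  /* feature vector of a measured program */
        const char *best_flags;         /* flag set that minimised its energy   */
    };

    /* Tiny hypothetical training set built from prior energy measurements. */
    static const struct training_entry training[] = {
        { { 12.0, 1.0,  40.0 }, "-O2 -fno-inline"    },
        { { 90.0, 3.0, 400.0 }, "-O3 -funroll-loops" },
        { {  5.0, 0.0,  10.0 }, "-Os"                },
    };

    /* Euclidean distance between two feature vectors. */
    static double distance(const double *a, const double *b)
    {
        double sum = 0.0;
        for (int i = 0; i < NUM_FEATURES; i++)
            sum += (a[i] - b[i]) * (a[i] - b[i]);
        return sqrt(sum);
    }

    /* Return the flag set of the most similar previously measured program. */
    static const char *predict_flags(const double *features)
    {
        size_t best = 0;
        for (size_t i = 1; i < sizeof(training) / sizeof(training[0]); i++)
            if (distance(features, training[i].features) <
                distance(features, training[best].features))
                best = i;
        return training[best].best_flags;
    }

    int main(void)
    {
        /* Feature vector extracted from a new, unseen program. */
        double new_program[NUM_FEATURES] = { 15.0, 1.0, 55.0 };
        printf("Suggested flags: %s\n", predict_flags(new_program));
        return 0;
    }

A production framework would use richer program features, a far larger set of measured programs and a more capable learner, but the shape of the prediction step is similar.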

Embecosm led this 18-month collaboration with Bristol University, which was funded through the Technology Strategy Board’s Energy-efficient Computing programme.

All the design materials, source code and data can be accessed via the project wiki at mageec.org.

Key to enabling this research, the project also developed a low-cost, high-performance energy measurement platform, the MAGEEC WAND. Embecosm has funded the manufacture of a final run of complete energy measurement kits, which have been made available to purchase at cost.

This area of research is now being taken forward by the TSERO project.

Superoptimization Feasibility Study

This was a follow-on from the research into factoring memory modeling (see below), funded by the TSB for four months of James Pallister’s time (summer 2014). The premise was that superoptimization had been around for nearly 30 years and had always been considered purely academic. We thought that after 30 years of work on heuristics, and with computers around 10^6 times more powerful, it might now be commercially feasible (a toy sketch of the basic exhaustive approach follows the findings below).

We had a 3-fold answer:

  • First, current superoptimizers have specialist applications in identifying machine-specific peephole optimizations and accelerating emulation libraries.
  • Second, with a modest amount of industrial research, ideas such as stochastic superoptimization could be made commercially robust. These approaches are powerful enough to tackle complete (small) programs and can be justified for critical software.
  • Third, generally applicable superoptimization will need constructive methods, and the theoretical and practical basis of this remains an academic research topic.
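
To make the technique concrete, the following toy sketch (our illustration, not any existing superoptimizer) exhaustively enumerates short sequences over a four-operation instruction set and reports the shortest one that matches a target function on a set of test inputs. Real superoptimizers search genuine machine instruction sets and must prove equivalence rather than merely test it, which is what makes the problem so expensive.

    /* Toy superoptimizer: exhaustively search for the shortest sequence of
     * simple operations that computes a target function.  Real superoptimizers
     * work on actual machine instructions and prove equivalence rather than
     * testing a handful of inputs. */
    #include <stdint.h>
    #include <stdio.h>

    enum op { ADD_X, SUB_X, DBL, NEG, NUM_OPS };
    static const char *op_name[] = { "acc += x", "acc -= x", "acc *= 2", "acc = -acc" };

    /* The function we would like a cheaper operation sequence for. */
    static int32_t target(int32_t x) { return 7 * x; }

    /* Execute a candidate sequence; the accumulator starts as a copy of x. */
    static int32_t run(const enum op *seq, int len, int32_t x)
    {
        int32_t acc = x;
        for (int i = 0; i < len; i++) {
            switch (seq[i]) {
            case ADD_X: acc += x;   break;
            case SUB_X: acc -= x;   break;
            case DBL:   acc *= 2;   break;
            case NEG:   acc = -acc; break;
            default:                break;
            }
        }
        return acc;
    }

    int main(void)
    {
        static const int32_t tests[] = { 0, 1, 2, 3, -5, 100, -977 };
        const int num_tests = sizeof(tests) / sizeof(tests[0]);
        enum op seq[6];

        for (int len = 1; len <= 6; len++) {
            long total = 1;                       /* NUM_OPS^len candidates */
            for (int i = 0; i < len; i++) total *= NUM_OPS;

            for (long code = 0; code < total; code++) {
                long c = code;                    /* decode code into a sequence */
                for (int i = 0; i < len; i++) { seq[i] = (enum op)(c % NUM_OPS); c /= NUM_OPS; }

                int ok = 1;
                for (int t = 0; t < num_tests; t++)
                    if (run(seq, len, tests[t]) != target(tests[t])) { ok = 0; break; }

                if (ok) {                         /* first match is also shortest */
                    printf("Found a %d-operation sequence for 7*x:\n", len);
                    for (int i = 0; i < len; i++) printf("  %s\n", op_name[seq[i]]);
                    return 0;
                }
            }
        }
        printf("No sequence of length <= 6 found.\n");
        return 0;
    }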

The work from this project is now being further developed under the TSERO project.

Factoring Memory Modeling Into Intelligent Compilation

Factoring Memory Modeling started by considering superoptimization, but that proved peripheral; it mainly led us to realize we needed the Superoptimization Feasibility Study described above.

This HiPEAC internship worked on optimizations aimed specifically at saving energy. Two emerged: 1) aligning code in flash so that inner loops do not light up multiple rows of flash (which can save 12% of energy); and 2) moving commonly executed loops into RAM, which consumes much less energy than flash.
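
As a concrete, simplified illustration of the second technique, GCC’s section attribute can mark a hot routine for placement in a RAM-backed section. This assumes a linker script that provides such a section (here called .ramfunc, a name chosen for illustration) and startup code that copies it from flash to RAM before use; the details vary between toolchains and devices.

    /* Simplified illustration: place a frequently executed routine in RAM
     * rather than flash.  Assumes the linker script provides a ".ramfunc"
     * output section located in RAM and that startup code copies its
     * contents out of flash before main() runs; details differ between
     * toolchains and devices. */
    #include <stdint.h>

    #define RAMFUNC __attribute__((section(".ramfunc"), noinline))

    /* Hot inner loop: executing it from RAM avoids repeated flash accesses. */
    RAMFUNC uint32_t checksum(const uint8_t *buf, uint32_t len)
    {
        uint32_t sum = 0;
        for (uint32_t i = 0; i < len; i++)
            sum += buf[i];
        return sum;
    }

The first technique is applied at compile time rather than in the source, for example through GCC’s -falign-functions and -falign-loops options; the profitable alignment depends on the flash row width of the particular device.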

This work led directly to the Superoptimization Feasibility Study described above.

Identifying Compiler Options to Minimise Energy Consumption for Embedded Platforms

It has often been suggested that the compiler can have a significant impact on the amount of energy consumed by a processor running compiled code. However, there have been very few studies that actually measure whether this is true, and none that have been comprehensive.

During the summer of 2012, James Pallister of Bristol University was seconded to Embecosm, where he assembled a broad suite of benchmark programs appropriate to deeply embedded systems, and then measured the energy consumed by those programs when compiled with different compiler options. This approach was applied across a range of architectures (ARM, Epiphany and XMOS) and across different processors sharing the same architecture (ARM Cortex-M0, Cortex-M3 and Cortex-A8).
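
The measurement loop can be pictured with a small host-side driver along the following lines. This is a sketch under stated assumptions: the cross-compiler invocation is simplified, bench.c stands in for one benchmark, and measure_energy_joules() is a placeholder for downloading the binary to the board and reading the external energy measurement hardware.

    /* Sketch of the measurement loop: compile one benchmark at several
     * optimisation levels and record the energy each binary consumes.
     * The compiler invocation is simplified, and measure_energy_joules()
     * is a placeholder for downloading the binary to the target board and
     * reading the external energy measurement hardware. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Placeholder stub: a real harness would flash the board, run the
     * benchmark and read the measurement hardware. */
    static double measure_energy_joules(const char *binary)
    {
        (void)binary;
        return 0.0;
    }

    int main(void)
    {
        const char *opt_levels[] = { "-O0", "-O1", "-O2", "-O3", "-Os" };
        char cmd[256];

        for (size_t i = 0; i < sizeof(opt_levels) / sizeof(opt_levels[0]); i++) {
            snprintf(cmd, sizeof(cmd),
                     "arm-none-eabi-gcc %s -o bench.elf bench.c", opt_levels[i]);
            if (system(cmd) != 0) {
                fprintf(stderr, "compile failed for %s\n", opt_levels[i]);
                continue;
            }
            printf("%s: %.3f J\n", opt_levels[i], measure_energy_joules("bench.elf"));
        }
        return 0;
    }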

MILEPOST GCC

Embecosm’s Joern Rennecke was part of the original MILEPOST research team, which over the period 2006-2009 developed a version of GCC which used machine learning to determine the best compiler options to use for different types of program, based on features in the source code. This concluded with MILEPOST GCC 4.4 being released in 2009.

In 2010, Joern was seconded to INRIA, funded by the EU HiPEAC scheme to migrate MILEPOST to GCC 4.5, which is now the latest version of GCC with MILEPOST support.