Energy system performance evaluation methods: problems & solutions
Vilnis Vesma, UK
Speaker's credentials
- Graduate in engineering science and economics, University of Oxford
- Worked as an energy manager
- Specialist in the analysis and presentation of energy consumption data
- Author, Energy Management Principles and Practice
- Committee member, International Performance Measurement and Verification Protocol
- Committee member, EN 16001, ISO 50001, EN 16247-1
The problem
A fundamental and widespread problem with energy-systems performance evaluation is the use of simple energy intensity as a measure of performance. As this chart from a survey of UK iron foundries shows, energy use per tonne of good castings shows a huge spread even when plants running similar processes are grouped together.
- A: sites melting by cupola
- B: sites melting by cupola and holding by electricity
- C: sites melting and holding by electricity
- D: sites melting by cupola, holding by electricity, all output heat treated
Source: UK government Energy Thrift Scheme survey, 1980
The problem
One of the problems with the earlier slide is that consumption was evaluated per tonne of good castings. This means that variations in product yield affect the apparent energy efficiency. It is impossible to separate the two: a poor figure may represent poor quality and a high reject rate, or poor energy efficiency, or both. But even when consumption is assessed against the quantity of metal melted, as in this slide, there is as much or more apparent variation in energy efficiency.
The problem
- Energy-intensity values (energy per unit of output) are clearly not comparable, even within groups of similar sites
- Benchmarks are likely to be meaningless
The wide spread in performance-indicator values suggests huge variations in energy efficiency from one plant to another, and casts doubt on the reliability of this kind of indicator.
Another problem
The cause of the variability between plants can be traced to a phenomenon that is clearly evident when one looks at the history of an individual melting shop. This chart shows the history of MWh per tonne in a melting shop, at monthly intervals. Note the wild apparent swings in performance, particularly the apparently extremely bad performance in July 2004 (seventh chart point from the left).
Another problem
Look at two months in particular:
- January 2008: 0.89 MWh/tonne
- February 2009: 0.975 MWh/tonne
February 2009 appears to have the worse energy performance.
Another problem
But now plot the data on a scatter diagram of monthly electricity consumption against monthly melting output. It is perfectly clear that there is a strong correlation between the two. I have superimposed a regression line which shows the relationship. We can see that February 2009 falls on the regression line; this means that performance that month was typical. But in January 2008 consumption was higher than might have been expected.
Another problem
We can see here that there was substantial excess consumption in January 2008.
A further problem
Correct analysis:
- February 2009 (0.98 MWh/tonne): OK
- January 2008 (0.89 MWh/tonne): inefficient
This is the opposite of the natural conclusion. In short, the month with the higher energy-per-unit-output figure was actually the better performer: not what most people would have thought.
Weaknesses of energy-intensity methods
- Affected by things other than energy efficiency
- Only a ratio: tells us nothing about absolute waste or savings
- Cannot be calculated at all if there are multiple product grades
The big problem here is that there is some fixed overhead consumption which distorts the picture. At higher throughputs, a lower performance-indicator value would be expected anyway. It is therefore meaningless to compare simple energy performance indicators even within the history of one process, let alone between one plant and another. Energy performance indicators also suffer from being only ratios: absolute estimates of energy savings or losses would be more useful. And that is without mentioning their fatal weakness: they can only be computed if there is a single factor driving variation in consumption. In any other real-life scenario they cannot be computed at all.
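The distorting effect of fixed overhead consumption can be shown with a toy calculation; all the numbers below are assumptions for illustration, not measured values:

```python
# Assume a plant with a 100 MWh/month fixed load plus 0.6 MWh per tonne melted.
fixed_mwh = 100.0
marginal_mwh_per_t = 0.6

for tonnes in (200, 400, 800):
    # Simple energy intensity: total consumption divided by throughput.
    intensity = (fixed_mwh + marginal_mwh_per_t * tonnes) / tonnes
    print(f"{tonnes:4d} t -> {intensity:.3f} MWh/t")
```

The intensity falls from 1.100 to 0.850 to 0.725 MWh/t purely because throughput rises, with no change whatsoever in the plant's underlying efficiency.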
Towards a solution
The straight-line relationship which we observed offers a solution. It allows us to estimate the expected quantity of energy for any given level of output.
Towards a solution
The ability to compute expected consumption in each period gives us a dynamic yardstick for correct consumption. Here we see the history of actual and expected consumption compared.
Towards a solution
This is a history of the deviation from expected consumption. The dotted lines are set at about +/-15%, and most of the time variances are much less than this.
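A deviation history of this kind, checked against a +/-15% band, might be computed as in this sketch; the actual and expected figures are invented for illustration:

```python
# Hypothetical monthly actual and expected consumption, in MWh.
actual   = [480, 560, 600, 700, 720, 905]
expected = [475, 550, 610, 690, 730, 780]

BAND_PCT = 15.0  # the +/-15% dotted lines on the chart

for month, (a, e) in enumerate(zip(actual, expected), start=1):
    # Percentage deviation of actual from expected consumption.
    pct = 100.0 * (a - e) / e
    flag = "OUT OF BAND" if abs(pct) > BAND_PCT else ""
    print(f"month {month}: {pct:+6.1f}% {flag}")
```

Only the final month breaches the band in this invented series; in the real history, such exceptions are the months worth investigating.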
Towards a solution
This shows how misleading a simple energy-intensity metric can be. Remember that July 2004 had the highest energy performance indicator by a wide margin: but we now see that in fact consumption was well below what might otherwise have been expected. Performance was actually good that month, not the worst ever.
Towards a solution
The International Performance Measurement and Verification Protocol (IPMVP) introduces the idea of a mathematical model relating consumption to its influencing factors. Comparing actual consumption with a good estimate of expected consumption, derived from a performance model, is the basis of the methodology developed over the last 17 years in IPMVP.
Towards a solution
IPMVP shows how a model of prior performance can be used, after the implementation of energy systems optimisation, to answer the question: how much energy would we have used in the absence of the energy conservation measures? It provides a dynamic yardstick against which to assess the actual quantity of energy used, and thereby to calculate what IPMVP calls the avoided energy use.
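An avoided-energy-use calculation in this spirit might look like the following sketch; the baseline coefficients and post-retrofit figures are assumptions for illustration, not measured values:

```python
# Baseline model fitted to pre-retrofit data: expected MWh = a + b * tonnes.
baseline_intercept = 92.0   # MWh fixed load (assumed)
baseline_slope = 0.74       # MWh per tonne melted (assumed)

# Post-retrofit months: (tonnes melted, actual MWh used).
post = [(650, 520.0), (720, 560.0), (800, 610.0)]

# Avoided energy use: what the baseline model says we would have used
# under post-retrofit conditions, minus what we actually used.
avoided = sum(
    (baseline_intercept + baseline_slope * tonnes) - actual
    for tonnes, actual in post
)
print(f"avoided energy use: {avoided:.1f} MWh")
```

Note that the baseline model is applied to the post-retrofit output levels, so the comparison automatically adjusts for throughput rather than assuming consumption would have been the same as before.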
Towards a solution
- Changes in performance can be evaluated even if it is impossible to express performance as a single number
- All you need is a mathematical model of the baseline relationship between consumption and the factors which cause consumption to vary
- The model can be as simple as a straight-line relationship, or more complex if necessary
What I have demonstrated here deals only with the evaluation of savings achieved at a particular plant. What about benchmarking between plants?
The final problem
Simple energy-intensity performance indicators are dangerously misleading. So how can we compare one plant with another?
One possible solution
We could compare performance characteristic lines instead.
Another possible solution
Continue to use energy performance indicators, but adjust them to standard conditions: for example, adjusted to full-capacity output. If we do this, an energy-efficient plant would not be penalised for having low throughput.
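Adjusting an indicator to a standard full-capacity output, using the straight-line model, could be sketched as follows; all figures here are illustrative assumptions:

```python
# Straight-line model of the plant: expected MWh = fixed + marginal * tonnes.
fixed_mwh = 92.0            # intercept (assumed)
marginal_mwh_per_t = 0.74   # slope (assumed)
full_capacity_t = 1000.0    # standard condition: full-capacity output (assumed)

def adjusted_intensity():
    """Expected MWh per tonne if the plant ran at full capacity."""
    return (fixed_mwh + marginal_mwh_per_t * full_capacity_t) / full_capacity_t

print(f"adjusted indicator: {adjusted_intensity():.3f} MWh/t")
```

Because every plant is evaluated at the same standard output, the comparison reflects each plant's fixed and marginal consumption rather than whatever throughput it happened to achieve.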
Summary
Energy-per-unit-of-output indicators:
- are dangerous
- often yield wrong conclusions
- are not necessary for evaluation of savings
Methods based on expected-consumption models:
- are more accurate
- can be deployed in many more circumstances
For benchmarking:
- compare performance characteristics, or
- adjust indicator values to standard conditions