Energy economics – why system costs matter

In our last post, we quoted from recent reports that clearly lay out the environmental benefits of nuclear power.  This month we want to start off the year by launching a short series addressing some of the issues that impact energy economics.  Today we will talk about the importance of system costs in understanding the relative costs of different generation technologies. 

Last year at this time we wrote about the IEA/NEA report, Projected Costs of Generating Electricity 2020, which shows nuclear is competitive with alternatives in most jurisdictions using the traditional Levelized Cost of Electricity (LCOE) approach.  LCOE is a useful way to compare the cost of electricity from two or more options that would be deployed at the same point on the grid with similar system characteristics.  With intermittent variable renewables on the system, however, LCOE alone no longer provides a sufficient basis for direct comparison.  By their very nature, these renewables add costs to the system that must be incurred to deliver electricity as reliably as traditional dispatchable resources like nuclear, hydro and fossil generation.
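To make the arithmetic concrete, here is a minimal sketch of the LCOE calculation in Python; the function and the plant figures are illustrative assumptions, not numbers from the report.

```python
def lcoe(annual_costs, annual_energy_mwh, discount_rate):
    """Levelized cost of electricity: discounted lifetime costs
    divided by discounted lifetime generation ($/MWh)."""
    costs = sum(c / (1 + discount_rate) ** t
                for t, c in enumerate(annual_costs))
    energy = sum(e / (1 + discount_rate) ** t
                 for t, e in enumerate(annual_energy_mwh))
    return costs / energy

# Hypothetical plant: $5B to build in year 0, $200M/yr to operate,
# generating 7.9 TWh/yr over a 40-year life, discounted at 7%.
annual_costs = [5e9] + [2e8] * 40
annual_energy = [0] + [7.9e6] * 40   # MWh per year
print(f"LCOE: ${lcoe(annual_costs, annual_energy, 0.07):.0f}/MWh")
```

Note that both the numerator and the denominator describe a single plant in isolation; nothing in the formula accounts for what the rest of the grid must do to accommodate it.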

What are system costs?  In a report issued by the OECD Nuclear Energy Agency (NEA), system costs (see the report for a full definition) are essentially the additional costs of maintaining a reliable system once intermittent variable renewables are deployed.  These costs arise because the renewables produce electricity only for the limited hours when the resource is available (e.g. daytime for solar), because their output is uncertain given the potential for days with little resource (e.g. rainy or cloudy days), and because of the grid investment needed to access them given their more distributed nature (e.g. a good wind resource far from demand).
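One way to picture this is as cost layers added on top of the plant-level LCOE, one per category above. A minimal sketch, with purely hypothetical figures (see the NEA report for actual estimates):

```python
# All figures hypothetical, in $/MWh, for illustration only.
plant_lcoe     = 45.0  # plant-level generation cost (the LCOE)
profile_cost   = 15.0  # overbuild/backup for limited, variable hours
balancing_cost =  5.0  # managing short-term uncertainty in output
grid_cost      = 10.0  # connecting distant, distributed resources

system_cost = plant_lcoe + profile_cost + balancing_cost + grid_cost
print(f"System cost: ${system_cost:.0f}/MWh vs plant LCOE ${plant_lcoe:.0f}/MWh")
```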

A 2018 MIT study, “The Future of Nuclear Energy in a Carbon-Constrained World”, considers the impact of nuclear power on the cost of electricity systems when deep decarbonization is desired.  It looks at various jurisdictions around the world and the conclusion is always the same: the cost of electricity is lower with a larger nuclear share than when trying to decarbonize with intermittent variable renewables (and storage) alone.

The reason is fundamentally the relatively small fraction of time these resources produce electricity.  Solar and wind only generate when the sun shines and the wind blows, meaning they produce only some of the time and not always when needed.  Average capacity factors vary by location, with a world average of just below 20% for solar and about 30 – 35% for wind (capacity factor is the ratio of a resource’s actual output to what it would produce running at full capacity 100% of the time).  Contrast this with the 24/7 availability of nuclear power, which can operate at capacity factors of more than 90%.
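The arithmetic behind capacity factor is straightforward; a minimal sketch, with hypothetical plants chosen to match the typical figures above:

```python
HOURS_PER_YEAR = 8760

def capacity_factor(energy_mwh, capacity_mw):
    """Actual annual output as a share of output at full power all year."""
    return energy_mwh / (capacity_mw * HOURS_PER_YEAR)

# A 1,000 MW nuclear plant producing 7.9 TWh in a year:
print(f"{capacity_factor(7_900_000, 1000):.0%}")  # ~90%
# A 1,000 MW solar farm producing 1.6 TWh in a year:
print(f"{capacity_factor(1_600_000, 1000):.0%}")  # ~18%
```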

The impact on electricity systems is clear.  Given the limited hours that intermittent variable renewables operate, capacity must be dramatically overbuilt so that enough electricity can be captured while the resource is available to cover the periods when the sun is not shining and the wind is not blowing (all assuming reasonably efficient storage is available, which is not yet the case).  The result is a system with much larger capacity than one that includes nuclear (or any other dispatchable resource).  In the MIT study, for example, the system in Texas would be 148 GW including nuclear but would require 556 GW of capacity with renewables alone.  In New England a system with nuclear would have a capacity of 47 GW but would require 286 GW with renewables alone.  In the UK this would mean 77 GW with nuclear compared to 478 GW without.  And so on.  The costs of adjusting the system to accommodate these much larger capacities are significant.
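The overbuild implied by these figures is easy to quantify; a quick sketch using the capacities quoted above:

```python
# Installed capacity (GW) from the MIT study figures quoted above.
systems = {
    "Texas":       (148, 556),
    "New England": (47, 286),
    "UK":          (77, 478),
}
for region, (with_nuclear, renewables_only) in systems.items():
    ratio = renewables_only / with_nuclear
    print(f"{region}: {ratio:.1f}x the capacity without nuclear")
# Texas: 3.8x, New England: 6.1x, UK: 6.2x
```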

Since then, study after study has found the same result.  This includes a study in Sweden in which 20 different scenarios for full decarbonization all come out the same: in every scenario the most cost-effective system includes continued long-term operation of existing nuclear.  More recently, a study in France showed that decarbonizing without nuclear means a system more than twice as large as one with nuclear, and that the more nuclear in the system, the lower the overall average cost of production.

So, what does this mean for planning?  The approach to implementing a reliable, economic, low-carbon electricity grid must start with looking at the entire system.  A study should assess the total costs of deploying the system under a range of scenarios using different shares of available resources.  Different forms of generation have different capabilities, and these need to be modelled.  Once an efficient mix is determined, a plan should be put in place to implement it (i.e., X% nuclear, Y% solar, Z% wind, A% storage, etc.).  When looking to deploy each technology, LCOE can then be used to compare various options: for example, one solar project with another, or one nuclear project with another.  And of course, should the costs of any given technology deviate too significantly from the assumptions in the system study that determined the efficient mix, the system study should be updated.
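As a sketch of this workflow (the mixes and per-GW costs below are entirely hypothetical), a system study first compares the total cost of candidate mixes; only after a mix is chosen does LCOE come into play within each technology:

```python
def total_system_cost(mix):
    """mix: {technology: (capacity_gw, annualized_cost_per_gw)},
    where each cost term includes that technology's integration costs."""
    return sum(gw * cost for gw, cost in mix.values())

# Entirely hypothetical capacities (GW) and costs ($B/yr per GW).
mix_with_nuclear = {"nuclear": (50, 0.40), "solar": (30, 0.10),
                    "wind": (25, 0.15)}
mix_renewables   = {"solar": (200, 0.10), "wind": (180, 0.15),
                    "storage": (120, 0.25)}

for name, mix in [("with nuclear", mix_with_nuclear),
                  ("renewables only", mix_renewables)]:
    print(f"{name}: ${total_system_cost(mix):.1f}B/yr")
```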

Today’s energy markets are most often based on the assumption that all electricity generated is the same (to be discussed in a future post).  This is true at the moment of generation: an electron is an electron.  Unfortunately, the ability of any given technology to actually produce at the moment it is needed varies substantially.  A direct comparison of the LCOE of one option versus another is therefore only part of the story.

To fully understand the costs of electricity generated, the costs of integrating any given technology into a reliable system must also be considered.  After all, what really matters is how much we pay as customers for our electricity, and the studies are clear: nuclear as part of a fully decarbonized system is always lower cost than a system based on renewables alone.