Energy economics – why system costs matter

Published by mzconsultng

In our last post, we quoted from recent reports that clearly lay out the environmental benefits of nuclear power.  This month we want to start off the year by launching a short series addressing some of the issues that impact energy economics.  Today we will talk about the importance of system costs in understanding the relative costs of different generation technologies. 

Last year at this time we wrote about the IEA/NEA report, Projected Costs of Generating Electricity 2020, which shows that nuclear is competitive with the alternatives in most jurisdictions using the traditional Levelized Cost of Electricity (LCOE) approach.  LCOE is a good way to compare the cost of electricity from two or more options being considered for the same spot on the grid with similar system characteristics.  Once intermittent variable renewables are on the system, however, LCOE alone no longer provides a sufficient basis for direct comparison.  By their very nature, deploying these renewables adds costs to the system if it is to deliver reliable electricity in the same way as more traditional dispatchable resources such as nuclear, hydro and fossil generation.
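
For readers less familiar with the metric, LCOE is simply discounted lifetime costs divided by discounted lifetime generation.  The sketch below illustrates that calculation with invented round numbers; none of the inputs are taken from the IEA/NEA report.

```python
# Illustrative LCOE calculation: discounted lifetime costs divided by
# discounted lifetime generation. All numbers are made up for the example.

def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    """Levelized cost of electricity in $/MWh."""
    disc_costs = capex          # overnight capital cost, assumed spent at year 0
    disc_energy = 0.0
    for year in range(1, lifetime_years + 1):
        factor = (1 + discount_rate) ** year
        disc_costs += annual_opex / factor
        disc_energy += annual_mwh / factor
    return disc_costs / disc_energy

# A 1 GW plant at a 90% capacity factor, purely illustrative inputs
print(lcoe(capex=6e9, annual_opex=120e6, annual_mwh=1_000 * 8760 * 0.9,
           lifetime_years=60, discount_rate=0.07))   # roughly 70 $/MWh
```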


What are system costs?  In a report issued by the OECD Nuclear Energy Agency (NEA), system costs (see the report for a full definition) are essentially the additional costs of maintaining a reliable system once intermittent variable renewables are added: these resources produce electricity only for the limited hours when the resource is available (e.g. daytime for solar), their output is uncertain because some days bring little resource (e.g. rainy or cloudy days), and there are costs to the grid to access them given their more distributed nature (e.g. a good wind resource located far from demand).
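
As a purely illustrative way to picture this, system costs can be thought of as adders sitting on top of plant-level LCOE.  The figures below are invented for the example; the NEA report provides actual estimates by technology and penetration level.

```python
# Toy illustration of system costs as adders on top of plant-level LCOE.
# All values ($/MWh) are invented; see the NEA report for real estimates.

plant_lcoe = {"solar": 45.0, "wind": 50.0, "nuclear": 70.0}
system_adders = {
    "solar":   {"profile": 20.0, "balancing": 5.0, "grid": 8.0},
    "wind":    {"profile": 15.0, "balancing": 5.0, "grid": 10.0},
    "nuclear": {"profile": 1.0,  "balancing": 1.0, "grid": 2.0},
}

for tech, plant in plant_lcoe.items():
    total = plant + sum(system_adders[tech].values())
    print(f"{tech:8s} plant {plant:5.1f} + system {total - plant:5.1f} = {total:5.1f} $/MWh")
```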

A 2018 MIT study, “The Future of Nuclear Energy in a Carbon-Constrained World,” considers the impact of nuclear power on the cost of electricity systems when deep decarbonization is desired.  It looks at various jurisdictions around the world, and the conclusion is always the same: the cost of electricity is lower with a larger nuclear share than when trying to decarbonize with intermittent variable renewables (and storage) alone.

The reason for this impact is fundamentally the relatively small fraction of time these resources produce electricity.  Solar and wind only generate when the sun shines and the wind blows, meaning they produce only some of the time and not always when needed.  Average capacity factors vary by location, with world averages of just below 20% for solar and about 30–35% for wind (capacity factor is the ratio of the electricity a resource actually produces to what it would produce running at full output all of the time).  Contrast this with the 24/7 availability of nuclear power, which can operate at capacity factors of more than 90%.
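
A quick back-of-the-envelope calculation, using the round capacity factors above, shows how much nameplate capacity is needed just to match the annual energy of a unit of nuclear capacity (ignoring storage losses, curtailment and the timing of output):

```python
# Nameplate capacity needed to match the annual energy of 1 GW at a 90%
# capacity factor, using the round capacity factors quoted in the post.

target_energy = 1.0 * 8760 * 0.90            # GWh/year from 1 GW of nuclear

for tech, cf in [("solar", 0.20), ("wind", 0.33)]:
    nameplate = target_energy / (8760 * cf)  # GW needed for the same energy
    print(f"{tech}: ~{nameplate:.1f} GW of nameplate for the same annual energy")
# solar: ~4.5 GW, wind: ~2.7 GW (before considering when that energy arrives)
```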

The impact on electricity systems is clear.  Given the limited hours of operation of intermittent variable renewables, the system must be dramatically overbuilt to capture enough electricity while the resource is available to cover the periods when the sun is not shining and the wind is not blowing (all assuming reasonably efficient storage is available, which is not yet the case).  The result is a system with much larger capacity than one that includes nuclear (or any other dispatchable resource).  In the MIT study, for example, the system in Texas would be 148 GW including nuclear but would require 556 GW of capacity with renewables alone.  In New England a system with nuclear would have a capacity of 47 GW but would require 286 GW with renewables alone.  In the UK this would mean 77 GW with nuclear compared to 478 GW without.  And so on.  The cost of adjusting the system to accommodate these much larger capacities is significant.
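
Putting the MIT figures quoted above side by side makes the scale of the overbuild explicit:

```python
# Capacity ratios implied by the MIT study figures quoted above
# (with-nuclear vs. renewables-only system capacity, in GW).

cases = {"Texas": (148, 556), "New England": (47, 286), "UK": (77, 478)}

for region, (with_nuclear, renewables_only) in cases.items():
    ratio = renewables_only / with_nuclear
    print(f"{region}: {ratio:.1f}x more capacity without nuclear "
          f"({with_nuclear} GW vs {renewables_only} GW)")
# Texas ~3.8x, New England ~6.1x, UK ~6.2x
```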

Since then, study after study has found the same result.  This includes a study in Sweden in which 20 different scenarios for full decarbonization all come out the same way: in every scenario the most cost-effective system includes continued long-term operation of existing nuclear.  More recently, a study in France has shown that decarbonizing without nuclear means a system more than twice as large as one with nuclear, and that the more nuclear in the system, the lower the overall average cost of production.

So, what does this mean for planning?  The approach to implementing a reliable, economic, low-carbon electricity grid must start with looking at the entire system.  A study should assess the total cost of deploying the system under a range of scenarios using different shares of the available resources.  Different forms of generation have different capabilities, and these need to be modelled.  Once an efficient mix is determined, a plan should be put in place to implement it (i.e., X% nuclear, Y% solar, Z% wind, A% storage, etc.).  When deploying each technology, LCOE can then be used to compare options of the same kind, for example one solar project against another or one nuclear project against another.  And of course, should the cost of any given technology deviate too far from the assumptions in the system study that determined the efficient mix, the system study should be updated.
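
Below is a minimal sketch of the kind of screening comparison described above.  It deliberately omits the hourly dispatch, transmission and reliability modelling a real system study would include, and every input is invented for illustration only.

```python
# Minimal sketch of screening total system cost across candidate mixes.
# Each mix is just capacity (GW) times an assumed all-in annualized cost
# ($/kW-yr). All inputs are invented; a real study models hourly dispatch,
# transmission and reliability before declaring a mix "efficient".

annualized_cost = {"nuclear": 600, "solar": 120, "wind": 180, "storage": 250}  # $/kW-yr

scenarios = {
    "high nuclear":    {"nuclear": 90, "solar": 40,  "wind": 30,  "storage": 10},
    "renewables only": {"nuclear": 0,  "solar": 250, "wind": 200, "storage": 80},
}

for name, mix_gw in scenarios.items():
    total = sum(gw * 1e6 * annualized_cost[tech] for tech, gw in mix_gw.items())  # GW -> kW
    print(f"{name}: ${total / 1e9:.0f} bn per year")
```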

Today’s energy markets are most often based on the assumption that all electricity generated is the same (to be discussed in a future post).  This is true at the moment of generation, when, yes, an electron is an electron.  Unfortunately, the ability of any given technology to actually be there and produce at the moment it is needed varies substantially.  Therefore, a direct comparison of the LCOE of one option versus another is only part of the story.

To fully understand the cost of the electricity generated, the cost of integrating any given technology into a reliable system must also be considered.  After all, what really matters is how much we pay as customers for our electricity, and the studies are clear: nuclear as part of a fully decarbonized system is always lower cost than a system based on renewables alone.


7 Comments

bruce macdonald · January 31, 2022 at 9:55 pm

A great blog. Always insightful. Keep it coming.
Thx Thx.
Bruce

Peter Farley · February 2, 2022 at 9:55 am

This is a good example of GIGO. The assumptions for nuclear availability are way above real world examples in Europe, while the improving capacity factors of wind and solar seem to be ignored.

France has similar annual demand to Texas but slightly higher peak demand, much more hydro, and a nuclear fleet whose output occasionally falls to 55% of capacity. Peak demand is in winter, when nuclear plant output is 8~10% above that of the same plant on a 42°C summer day in Texas.
Peak demand in Texas is around 75 GW on hot days, so going on the French experience Texas would need 155 GW of nuclear. At current nuclear costs that would be a $1,700 bn investment. Of course Texas would not build 155 GW of nuclear; it would probably build 95 GW of nuclear and 50~60 GW of storage.
If Texas builds 90 GW of wind, 70 GW of tracking solar and 120 GW of rooftop solar, then because peak demand occurs on summer afternoons it can count on a minimum of 15 GW of solar and 8 GW of wind to clip the peak, meaning it will need about 60 GW of storage to give some reserve to cover transmission outages. That amount of wind and solar would have an expected annual output of 680,000 GWh, well above Texas’s annual demand of 450,000 GWh. Based on figures for the Australian NEM grid, the worst renewable day would still provide 1,150 GWh vs the highest daily demand of 1,350 GWh.

As wind picks up at night, particularly on hot days, the supply from storage would quickly fall to 20~40 GW, so an overall storage capacity of around 250 GWh, supplemented by existing hydro, would be more than adequate. On current trends the storage would cost about $70 bn and the wind and solar about $350 bn, so let’s say $420 bn for a renewables-plus-storage system, about 1/4 the cost of the nuclear system. Even if the duration of the storage is tripled, total system cost would still be well under 1/3 of the nuclear system, and not much more than half the cost of the nuclear system with equivalent storage.

    mzconsultng · February 2, 2022 at 3:20 pm

    I did allow your comment as I prefer to not stifle any views. However, starting your comment with GIGO (Garbage in/Garbage out) is not helpful. You are suggesting that studies by reputable organizations such as MIT, the OECD NEA and RTE – France’s transmission system operator are a result of garbage work. This is not the case. We are showing that the studies we have seen are all consistent in their conclusions. This may or may not be the case for other jurisdictions. What we are promoting is that system costs matter and that in any jurisdiction a good study be done to optimize the system and then that be the technology mix that is implemented. If it includes nuclear as many studies are showing, great. If not, also OK. Each jurisdiction should do what is in their best interests.

      Paul · February 24, 2022 at 11:59 pm

      Very measured and calm response.

    Bjorn Toft Madsen · February 7, 2022 at 7:21 pm

    Please explain to us how the MIT study from 2018 is garbage. It’s assembled by a large study group of experts, advised by an even larger group of cross-industry experts. To dismiss it simply as “garbage” because you do not like the views surely is a folly.

Antoine de la Chevrotiere · February 9, 2022 at 11:07 am

Thank you very much for this insightful post on the importance of system costs. Those are especially relevant given that many countries are working/planning towards meeting their net-zero climate objectives. Of course, there is no unique solution and pathways to carbon-neutral energy systems will differ from one jurisdiction to another. In this regard, would it be possible to get a reference (or link) to the study mentioned in your post about the 20 different scenarios for full decarbonization in Sweden?
Thank you!

    mzconsultng · February 9, 2022 at 11:21 am

    This was in a presentation by Staffan Qvist at the WNA Annual Symposium 2019
