The California Duck Curve gets deeper – the challenges of high levels of intermittent variable renewable energy

A recent article caught our eye – “Stanford study warns against overnight charging of electric cars at home” in California. The study noted that most electric vehicle (EV) owners tend to charge their vehicles at home during the evening or overnight (which should come as no surprise to anyone), leading to significant costs for the electricity grid as California relies more and more on solar energy. It projects that the rapid growth of EVs and their reliance on nighttime charging could lead to a 25% increase in peak electricity demand within a little over a decade. The study’s proposed solution: get people to shift towards daytime charging at public charging stations or workplaces. It goes on to explain that “if more people charged their vehicles during the day at work or public charging stations, it could reduce greenhouse gas emissions (presumably by avoiding gas usage at night) and avoid the added costs of generating and storing electricity”.

Source: istockphoto.com

This is the beginning of an awareness of what happens when you rely too much on intermittent variable renewables for your electricity needs.  It forces you to use the electricity when the sun shines (in this case) or the wind blows, which is not necessarily when you actually need it.

California has had this issue for years. Due to a rapidly increasing amount of solar generation, the net load on the system (total load less renewables) drops rapidly in the morning as the sun comes up and solar power comes online, then climbs again as the sun goes down and solar drops off. This has come to be known as the “Duck Curve”, because the shape of the curve looks like a duck! As the chart below shows, the curve has continued to deepen over the last eight years as California adds more and more solar power.

Source: https://cleantechnica.com/2023/07/07/california-duck-curve-getting-deeper-with-solar-growth/
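
To see how the duck shape arises, here is a minimal sketch of the net load calculation using made-up hourly numbers (illustrative only, not CAISO data):

```python
# Illustrative sketch of the "duck curve": net load = total demand - solar output.
# Numbers are invented for illustration; they are not CAISO data.

hours = list(range(24))

# Assumed hourly system demand (GW): modest overnight, peaking in the evening.
demand = [22, 21, 20, 20, 21, 23, 26, 28, 29, 30, 30, 31,
          31, 31, 31, 32, 33, 35, 36, 35, 33, 30, 26, 24]

# Assumed solar output (GW): zero at night, peaking around midday.
solar = [0, 0, 0, 0, 0, 0, 1, 4, 8, 11, 13, 14,
         14, 13, 11, 8, 4, 1, 0, 0, 0, 0, 0, 0]

net_load = [d - s for d, s in zip(demand, solar)]

for h in (4, 12, 19):
    print(f"{h:02d}:00  demand {demand[h]} GW  solar {solar[h]} GW  net load {net_load[h]} GW")

# The midday "belly" and the steep evening ramp are what grid operators must manage.
evening_ramp = net_load[19] - net_load[12]
print(f"Ramp from noon to 19:00: {evening_ramp} GW of non-solar supply must come online")
```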

Don’t get us wrong, we like solar, especially in sunny locations like California. Generally, solar plants achieve capacity factors of about 15 to 20%, depending on location (based on the level of sunshine). In very sunny California, the average capacity factor for solar is just over 28% – excellent for this type of generation. Solar clearly has an important role to play in the generation mix.

But we also see that too much of a good thing can create new challenges. The cost to the system of accommodating this rapid change in net load when the sun comes up, and again when it goes down, is large. Storage and other dispatchable sources of electricity (likely gas) are required to meet demand for the roughly 70% of the time the sun is not shining. The duck curve also reduces the amount of time dispatchable conventional power plants operate, reducing their revenues and making them less economic to operate in the California market. If these plants are then retired without replacement, it becomes even harder to meet the needs of the system.

The other issue is grid stress. Grid operators need to drastically ramp up non-solar generation as the sun sets, a very difficult thing to do. In the past, when we considered how large a single generating plant a system could accommodate, we often used a simple rule of thumb that no unit should be larger than 10% of the entire system. Any larger, and the system's ability to manage a unit outage would be compromised, putting reliability at risk. That is what solar has become in California. While there may be many individual solar installations, due to their intermittency they operate on the system as one extremely large plant. They all come on at the same time when the sun comes up, and they all go off at the same time when the sun goes down. What is the system to do?

We had a wonderful vacation in southern California this past July. We spent some time in Palm Springs, where temperatures were on the order of 45 to 47 degrees Celsius (~115 degrees Fahrenheit). I can assure you that we needed air conditioning as much at night as during the day.

Now imagine what would happen without the back-up needed. Storage is part of the solution, but it requires a huge overbuild of daytime capacity to meet the day's energy needs while also filling storage for other times. And current storage technology is mostly good for hours, not days or weeks, creating issues when the weather simply does not cooperate (two weeks of continuous rain, for example) or when seasonal load changes must be met. The result is a growing consensus that firm dispatchable capacity also needs to be an essential part of any clean energy solution.

The Diablo Canyon nuclear plant in California produces energy about 90% of the time; in other words, each MW of California nuclear capacity produces more than three times as much energy in a year as the equivalent capacity of solar. That is what builds a resilient system.
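
To put that in numbers, a simple back-of-the-envelope comparison using the capacity factors cited above (assumed to be 90% for nuclear and 28% for California solar):

```python
# Annual energy produced per MW of capacity, using the capacity factors cited above.
HOURS_PER_YEAR = 8760

def annual_mwh_per_mw(capacity_factor: float) -> float:
    """Energy (MWh) produced in a year by 1 MW of capacity at a given capacity factor."""
    return capacity_factor * HOURS_PER_YEAR

nuclear = annual_mwh_per_mw(0.90)   # Diablo Canyon, ~90%
solar = annual_mwh_per_mw(0.28)     # California solar, ~28%

print(f"Nuclear: {nuclear:,.0f} MWh per MW per year")
print(f"Solar:   {solar:,.0f} MWh per MW per year")
print(f"Ratio:   {nuclear / solar:.1f}x")   # a bit over 3x
```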

I don’t have an electric vehicle yet, but when I do, I will definitely feel better knowing I can leave home in the morning with a full charge.




Nuclear project structures – it’s about managing risk

In our recent post on nuclear project financing, we noted the importance of reducing risk to investors to ensure projects can raise sufficient competitively priced capital needed to build them.  Today we will discuss project structures.  What are they and why are they important? 

The project structure is how the project is organized contractually to build the plant and then sell the electricity to the market.  Good structures help the project to succeed while poor ones end up with lawyers arguing where to lay blame rather than people delivering on their commitments. 

Source: pexels.com

There are four major categories of participants in a large energy project. 

  • The customer – who needs the energy and pays for it to be reliably delivered to their home or business;
  • The owner/operator (yes these can be separated, but we will keep them together for simplicity), who is responsible for building and operating a generating station to provide the energy to the customer;
  • The contractor(s), who have technology, design, and construction capabilities to build the plant; and
  • The investors, who provide the funding to support this construction and who will be repaid during plant operations when there are revenues from selling electricity.

When talking about contractual structures, the primary relationships are between the owner/operator and the customer (market structure); and between the owner/operator and the contractor (project structure). 

There are a whole range of contractual structures for both relationships. Some are simple and some are complex. None are perfect. Historically, electric utilities tended to be vertically integrated monopolistic companies, often owned by governments, that were charged with delivering electricity to customers at low cost. Utilities carried most project risks and passed them on to the customers. A government regulator was charged with setting rates for customers (while looking out for their best interests) based on the utility's costs and performance.

Poor project performance and a belief that competition would incent better results led to a shift to deregulated markets in many jurisdictions in the early 1990s whereby the utilities would be broken up and generators would have to compete to sell their electricity to the market.  (We wrote a previous post on why these deregulated markets do not work well for building new low carbon generation.)

Being forced to take on more risk by their customers, owners wanted more certainty of outcomes and believed contractors, as the experts in performing the work, were in the best position to take on these risks.   Wanting this work, contractors agreed to take on more project risk, for a price.  This provided a sense of security to the owners that their risk was limited, and that they could rest easy, knowing it would be up to others to ensure successful project delivery.

Unfortunately, this has proven to be nothing more than an illusion. In reality, the contractor's ability to take on additional risk is limited, and when project costs increase, the contractor will generally make a claim for a change in scope requiring additional funds. This often results in contractual disputes that slow project progress and damage relationships between the companies. In the end, there is no escaping the project risks for the owner, as it is their project and their money. After all, there is no scenario where the contractor fails and the project succeeds.

The lesson is that when developing project structures, the objective is to manage risk while incentivising the behaviours from project stakeholders that are necessary for project success; not to decide who suffers the most in the case of failure. Because for long-term commercial success, there is one truth. All costs must be borne by the customer. There is no one else (unless government provides a subsidy, in which case taxpayers are involved, which is a different discussion – we will talk about the potential role of government in mitigating risk in a future post). When investors state that they do not want to be exposed to excessive risk, what they mean is that they want a credit-worthy borrower who can reliably repay loans and deliver a return on equity. And while ensuring they are contractually protected from risk is important, the best way forward is to confidently deliver projects to cost and schedule.

This is changing the way that projects are structured to more collaborative models whereby all parties’ objectives are aligned, and everyone sinks or swims together.  Good project contracting is important in defining the project, but on its own is insufficient to ensure good project outcomes.  Successful project delivery results from good project planning, doing enough work upfront to set a realistic cost and schedule; and excellent project management, supported by a high level of transparency together with a strong set of project metrics to enable informed rapid decision making to keep the cost and schedule under control.   Continuously improving the ability to deliver successful projects to cost and schedule will ensure that nuclear power can meet its full potential on the road to a Net Zero future.




Closing perfectly good nuclear plants before their end of life – it’s a sin!

In March, Kuosheng Unit 2 became the latest nuclear unit to be retired following the expiry of its 40-year operating licence, in accordance with Taiwan's nuclear phase-out policy. This is the fourth unit to be shut down in Taiwan, leaving just two operating units at Maanshan. When their licences expire in 2024 and 2025, the island's phase-out will be complete, taking its once 20% nuclear share down to zero. And as has been the case with most other nuclear plant closures around the world, the output will be replaced with fossil fuels, adding carbon emissions at a time when we are all trying to reduce them. Taipower has reassured its customers that numerous new gas-fired power generation projects, and even new coal-powered units, are being brought online this year to make up for the energy lost as a result of this unnecessary nuclear phase-out.

Of course, Taiwan is not the first to go down this path. Over the last few years, a number of plants have been closed before their time. In the US, it was primarily due to competition from low-cost gas in deregulated markets. In Europe and Asia, it was simply a result of government anti-nuclear policies. Today, as we pass the 12th anniversary of the Great Tohoku earthquake and tsunami in Japan, which also triggered the Fukushima nuclear plant accident, things are changing rapidly.

Source: istockphoto.com

Why? There are two urgent drivers behind the revisiting of nuclear power. First and foremost is the energy crisis in Europe due to the war in Ukraine. When energy security is at risk, people respond, and respond quickly. And then there is climate change. With more and more countries setting net zero goals, it has become crystal clear that nuclear must be part of the mix. We have never been more optimistic about the future of nuclear power playing an essential role in a decarbonizing world.

As we have said many times before, deciding not to continue to use nuclear power is the right of every sovereign nation. However, if you believe you have better options, build them first, then shut down the old plants. What we have seen is the opposite. Close nuclear plants in Germany, and emissions go up; close Indian Point in New York, emissions go up; close San Onofre in California, emissions go up. Belgium plans to close its nuclear fleet and replace it with gas, and emissions will go up. And so on and so on.

It took an energy crisis in Europe for the penny to drop.  Closing perfectly good plants that emit zero carbon without having something better to replace them is folly. 

Progress has been made. After seeing about 10% of its operating units close, the US started saving units through state-legislated support, and is now ensuring nuclear remains an essential part of its carbon reduction strategy with provisions in the recent federal Inflation Reduction Act (IRA). Even when it was generally thought too late to save Diablo Canyon in California, common sense prevailed. Belgium has agreed to run its two newest plants another decade and is considering minor extensions for its older units. Korea has recovered from its period of anti-nuclear policies and is once again moving full steam ahead. Japan, a decade after the Fukushima accident, is recommitting to nuclear power. Even Germany is contemplating extending its final units' lifetimes, if only by a little.

We now have enough experience with the early movers who hoped to decarbonize with renewables alone. Germany has spent two decades and over $500 billion and made little progress on its emissions reduction goals. Its huge investment in renewables has not been sufficient to overcome the impact of shutting down most of its nuclear fleet. The chart above shows that in 2022, France, with its mostly nuclear fleet, emitted about one eighth the carbon of Germany. The evidence is in. Trying to decarbonize with renewables alone is simply not feasible.

But the worst offences remain shutting down perfectly good operating plants before their time. There are 437 nuclear units in operation around the world producing about 10% of the world's electricity, and together they represent the second largest source of global low carbon generation after hydro. Add to that, as stated in the IEA/NEA Projected Cost of Electricity 2020, life-extending existing nuclear plants is the single lowest-cost option of any type of electricity generation. No surprise. If something is capital intensive, as nuclear power is, then it makes sense to maximize use of the asset once you have the capital behind you.

So, for all those countries thinking about closing well-operating zero-emissions nuclear plants before their time, remember what the Pet Shop Boys said many years ago – It's a Sin!







Keeping the lights on is of critical importance for a prosperous future

We previously talked about energy security and the impact on global energy markets resulting from the crisis in Ukraine. In that post we discussed energy security from the traditional perspective of the risk of disruption in global energy flows as a result of geopolitical issues. Today we will expand upon the concept of energy security to go beyond the political and address the technical issues that impact our ability to deliver energy reliably to consumers. For society to truly prosper, we need strong, reliable, and resilient energy systems.

Source: pexels.com

System reliability – a system (or grid) where electricity can be counted on to be available when required – i.e., customers need confidence that when they flip the switch, the lights come on, and stay on. Given that electricity supply and demand must always be in balance, our very reliable electricity grids are nothing short of an engineering marvel. Expert planners design systems where supply adjusts to changes in demand as needed, and that can tolerate most supply disruptions (outages – both planned and unplanned) without impacting customers. Some simple rules of thumb (actual system design is quite complex) suggest that no single generating station should be larger than 10% of the capacity of the total system and that grids should carry 15% or more excess capacity to accommodate outages.
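
As a simple illustration of those rules of thumb (and of why solar that behaves as one super plant is a problem, as discussed below), a planner might run a quick check like the sketch that follows; the system and its numbers are invented for illustration:

```python
# Quick check of the two planning rules of thumb mentioned above, on an invented system.
# Rule 1: no single generating station larger than ~10% of total system capacity.
# Rule 2: total capacity should exceed peak demand by a reserve margin of ~15% or more.

stations_mw = {
    "Gas unit A": 700,
    "Gas unit B": 600,
    "Nuclear unit 1": 700,
    "Nuclear unit 2": 700,
    "Hydro": 600,
    "All solar (behaves as one plant)": 4000,
}
peak_demand_mw = 6200

total_capacity = sum(stations_mw.values())
reserve_margin = (total_capacity - peak_demand_mw) / peak_demand_mw

for name, mw in stations_mw.items():
    share = mw / total_capacity
    flag = "  <-- exceeds the 10% rule" if share > 0.10 else ""
    print(f"{name:34s} {mw:5d} MW ({share:.1%} of system){flag}")

status = "OK" if reserve_margin >= 0.15 else "below the 15% rule of thumb"
print(f"Reserve margin: {reserve_margin:.1%} ({status})")
```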

Somehow, over the past years, attention to this very important objective seems to have been diluted as the focus shifted to emissions reduction and market deregulation. In some jurisdictions, system reliability has suffered due to a too-rapid increase in intermittent variable renewable generation that needs dispatchable back-up, and to poorly designed real-time electricity markets that focus on cost above all else.

Renewables present two major challenges to system planners. First, their intermittency and reliance on weather complicate system design, which must ensure there is sufficient back-up supply for when the sun doesn't shine and the wind doesn't blow. As Robert Bryce has noted, an excessive focus on renewables just doesn't make sense in some places. For example, in hot climates like Texas, the times when you need the most energy are also the times when you have the least wind. That's just how the weather works.

And the other, less talked about issue is that even though there may be large numbers of solar panels or wind turbines in operation within a given jurisdiction, they actually behave on the system as one very large super plant.  Hence the famous “duck curve” in California where all solar panels come on at once when the sun rises in the morning and then all go off when the sun sets.  This causes additional stresses for reliability planning as the system tries to respond to these large sudden changes in supply.

We talked about the issues with deregulated market pricing in a previous post noting that least cost does not necessarily mean most reliable.  And now as we did then, we will recommend reading Meredith Angwin’s book, “Shorting the Grid.”

System resilience – how well the system can withstand external events that may bring it down, such as extreme weather or other man-made events. This concept took hold post-9/11, when the concern was how to harden power plants against potential terrorism. More recently the issue has been extreme weather such as hurricanes, tornadoes and wildfires that have forced systems down and damaged them to the point of disaster. The unfortunate thing is that the same jurisdictions we listed above, Texas and California, are also suffering from these kinds of extreme weather events, which are challenging the ability of their systems to operate reliably.

This is where nuclear power can play an important role. Nuclear power's high energy density, low carbon emissions, highly reliable operations and built-in resilience can provide the stable energy source we need. It is one of the reasons lawmakers in California provided overwhelming support for a bill to keep the Diablo Canyon nuclear plant operating at least another five years, something once thought impossible.

Having reliable, affordable access to abundant energy is one of the tenets of a prosperous society. Our lives are much better for it. A public threatened with losing this reliable access will not respond well. We have become so used to having a reliable grid that we now take it for granted. However, assuming it always will be there misunderstands how complex an electricity grid actually is. It's time to go back to basics and ensure that system reliability and resilience are the cornerstones of our energy systems. Given the need for stable baseload 24/7 supply, nuclear power has an important role to play.




Deregulated electricity markets don’t support a viable energy transition

In the early 1990s, deregulating electricity generation seemed like a good idea.  Led by the UK, many markets rushed to dismantle their vertically integrated electric utilities with the goal of creating competition to benefit their customers, the electricity using public.   The view was that utilities had become fat and lazy and since they were mostly able to pass on their costs through a regulated pricing system, they didn’t do their best to keep prices low.  Competition would remove the fat.

Fast forward 30 years or so and much of the world has followed this path. There is a large, relatively integrated European electricity market, the UK continues to operate its market, and multiple states in the United States operate this way. But is it working – and more importantly – is this the right path to support the transition to a low carbon energy system?

Source: iStockPhoto.com

Fully answering this question requires a much longer discussion than is possible in a blog post. We will address some of the issues and explain why we believe large-scale market redesign is required. For another excellent perspective we strongly recommend the book “Shorting the Grid” by Meredith Angwin, which clearly explains how the current US deregulated model is failing the customer while reducing the reliability of the electric grid. Read it – please.

The original concept was sensible. Create competition in the electricity market to force electricity generation companies to become more efficient (in most cases transmission and distribution were not deregulated). It seemed to work in telecom. Why wouldn't it work in electricity generation? And at the beginning it did work. Government-owned electricity companies were sold off and broken up. New generating companies competed with existing companies and yes, the result was improved operations of the existing generation fleet.

The markets were mostly created as energy markets, where generators competed on marginal cost of production (variable operating and fuel costs) in essentially real-time markets to sell electricity. All that mattered was the price of electricity at any given moment. This was happening at about the same time as gas was ascending to be a major player in electricity generation in both the US and the UK. Each generator would bid into the market at its marginal cost. The market would accept the lowest-cost bids first and continue accepting higher-priced bids until demand was met. The market price was the energy cost of the last generator whose bid was accepted, and all participants received this price (the clearing price). When demand was high, the last bid accepted was usually gas generation, which has the highest marginal cost of production, and this price seemed to be enough to keep the other players, with lower marginal costs but higher fixed costs, content.
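
A stylized sketch of that merit-order clearing mechanism, using made-up bids and a single trading period, looks like this:

```python
# Stylized single-period energy market clearing (merit order), with made-up bids.
# Each generator bids its marginal cost; bids are accepted cheapest-first until
# demand is met, and every accepted generator is paid the clearing price
# (the marginal cost of the last accepted bid).

bids = [  # (generator, capacity offered in MW, marginal cost in $/MWh)
    ("Wind",       800,  1),
    ("Solar",      500,  1),
    ("Nuclear",   2000, 10),
    ("Coal",      1500, 35),
    ("Gas CC",    2500, 50),
    ("Gas peaker", 800, 90),
]
demand_mw = 5500

accepted, remaining = [], demand_mw
for name, mw, cost in sorted(bids, key=lambda b: b[2]):
    if remaining <= 0:
        break
    dispatched = min(mw, remaining)
    accepted.append((name, dispatched, cost))
    remaining -= dispatched

clearing_price = accepted[-1][2]  # marginal cost of the last unit needed
for name, mw, cost in accepted:
    print(f"{name:10s} dispatched {mw:4d} MW (bid ${cost}/MWh)")
print(f"Clearing price paid to all dispatched generators: ${clearing_price}/MWh")
```

With these made-up bids, gas sets the clearing price even though cheaper resources supply most of the energy – which is exactly the dynamic described above.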

Then three things happened that started to change the equation.

First, at least in North America, the price of gas fell dramatically so that the only generators actually making money were gas plants. Their marginal cost had become very low given the low cost of gas, and other forms of generation could no longer survive at that price. Hence the current situation where nuclear plants are closing before their end of life as they struggle to compete with very low gas prices. The US government has just launched a $6 billion program to help save these plants. Market supporters may say – who cares? The market is the market. If gas plants are the lowest cost, then just run gas plants. And yes, that is certainly an option if a single-source electricity system based on 100% gas is deemed acceptable. But if the objectives of the system are broadened to include diversity of generation for security purposes, or to mitigate the risk of volatile fuel prices (yes, gas prices can and do go up), or to lower carbon emissions, then change is required.

Second, having only an energy market made it impossible to build new capacity. Since everyone was bidding at marginal cost, there was no way to recover full costs – which is needed to support investment in new plants. The solution was to create capacity markets. Payments would be made for capacity based on a bidding process so that low-cost capacity would be added to the system. Once again, in most jurisdictions, gas came to the rescue. The cost structure of a gas plant is just right for this type of market. The capital to build a plant is relatively low. Once the capacity is paid for, you only operate the plant when the energy is needed, at an energy price that covers the marginal costs (which are primarily driven by the cost of fuel).

The issue with this market structure is that gas generators were always price makers, and all other technologies were price takers.  In other words, the business of electricity generation for all other technologies became a competition with gas.  While these technologies made or lost money based on this competition, gas generators were always whole, no matter the price of gas.  In effect, gas generation is pretty much a risk-free business in this market structure.  Consumers are happy as long as gas prices are low – but will be very unhappy when prices rise.

Next, countries committed to decarbonization goals and started to support adding low carbon electricity, primarily intermittent variable solar and wind power, to the system. To make these work, subsidies were required, both to support the price and to ensure the market takes the output of these resources whenever they produce – when the sun is shining and the wind is blowing.

To keep this story short, this structure made it near impossible for any technology other than gas or subsidized renewables to be built. Other projects were just too risky, especially technologies like nuclear power where the bulk of the cost of energy is based on the capital investment. Even if a nuclear project is projected to be economic, once it is built the price of the alternatives may change so that the plant becomes unprofitable. Or in other words, no matter how successful and low cost the project, the risk of having to compete with daily changes in gas prices would be unmanageable. The solution was once again to contract outside of the market. Power purchase agreements, contracts for difference (Hinkley Point C) and other approaches were developed to support these types of projects. The result: more complexity, and complexity tends to increase costs. That is why we see the Sizewell C project in the UK moving to a Regulated Asset Base (RAB) model, to simplify the project structure and keep costs lower. (We will talk about this model in a future post.)

The reality is that data from the US DOE Energy Information Administration (EIA) show that customers do not benefit from these market structures. 2020 data show that customers in deregulated states pay on average about 23% more for electricity than those in regulated ones. And while most states remain regulated (roughly 32 to 19), when you consider the actual amount of generation under each regime, it is much closer to half of US generation deregulated and half regulated.

Back to the point of this post. If you want to ensure grid stability, the markets need to change. If you want to encourage diversity of generation, the markets need to change. But most of all, a completely new structure has to be developed, because the low carbon options (wind, solar, nuclear, hydro) have relatively high fixed costs and near-zero marginal costs, making a market based on energy cost alone unworkable. For these forms of generation, a market structure based on recovering fixed costs is required.

If we really want to work towards net zero carbon emissions, now is the time to re-imagine how we are going to generate electricity and pay for it. One thing is certain. The existing deregulated model in place in many jurisdictions will not take us where we need to go, and the longer we take to accept that, the longer it will take to reach our carbon goals.




Energy economics – why system costs matter

In our last post, we quoted from recent reports that clearly lay out the environmental benefits of nuclear power.  This month we want to start off the year by launching a short series addressing some of the issues that impact energy economics.  Today we will talk about the importance of system costs in understanding the relative costs of different generation technologies. 

Last year at this time we wrote about the IEA/NEA report, Projected Cost of Electricity 2020, which shows nuclear is competitive with the alternatives in most jurisdictions using the traditional Levelized Cost of Electricity (LCOE) approach. LCOE is a great way to compare the costs of electricity generated by two or more different options implemented at a single spot on the grid with similar system characteristics. With intermittent variable renewables on the system, however, LCOE alone no longer provides a sufficient basis for direct comparison. By their very nature, deploying these renewables adds costs to the system to deliver electricity as reliably as more traditional dispatchable resources like nuclear, hydro and fossil generation.
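
For readers unfamiliar with the metric, LCOE is essentially the discounted lifetime cost of building and running a plant divided by its discounted lifetime output. A minimal sketch with purely illustrative inputs (not figures from the report):

```python
# Minimal LCOE sketch: discounted lifetime costs divided by discounted lifetime
# generation. Inputs below are purely illustrative, not values from the IEA/NEA report.

def lcoe(capital_per_kw, fixed_om_per_kw_yr, variable_per_mwh, fuel_per_mwh,
         capacity_factor, lifetime_yrs, discount_rate):
    mwh_per_kw_yr = 8.76 * capacity_factor          # annual MWh per kW of capacity
    disc_energy = disc_cost = 0.0
    for yr in range(1, lifetime_yrs + 1):
        df = 1.0 / (1.0 + discount_rate) ** yr
        disc_energy += mwh_per_kw_yr * df
        disc_cost += (fixed_om_per_kw_yr
                      + (variable_per_mwh + fuel_per_mwh) * mwh_per_kw_yr) * df
    return (capital_per_kw + disc_cost) / disc_energy   # $/MWh

# Illustrative comparison at the same discount rate and site (made-up inputs)
print(f"Nuclear-like plant: ${lcoe(5000, 100, 2, 7, 0.90, 60, 0.07):.0f}/MWh")
print(f"Solar-like plant:   ${lcoe(1000, 15, 0, 0, 0.25, 25, 0.07):.0f}/MWh")
```

Note that a comparison like this says nothing about when the energy is delivered, which is exactly why system costs matter.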

Source: pexels.com

What are system costs?  In a report issued by the OECD Nuclear Energy Agency (NEA), system costs (see the report for a full definition) are basically the additional costs to maintain a reliable system as a result of intermittent variable renewables only producing electricity for a limited number of hours when the resource is available (e.g. daytime for solar), their uncertainty due to the potential for days with little resource (e.g. rainy or cloudy days), and the costs to the grid to be able to access them given their more distributed nature (e.g. good source of wind but far from demand).

A 2018 study undertaken by MIT, “The Future of Nuclear Energy in a Carbon-Constrained World”, considers the impact of nuclear power on the cost of electricity systems when deep decarbonization is desired. It looks at various jurisdictions around the world and the conclusion is always the same: the cost of electricity is lower with a larger nuclear share than when trying to decarbonize with intermittent variable renewables (and storage) alone.

The reason is fundamentally the relatively little time these resources produce electricity. Solar and wind only generate when the sun shines and the wind blows, meaning they produce only some of the time and not always when needed. The average capacity factors of these technologies vary by location, with a world average capacity factor of just below 20% for solar and about 30 – 35% for wind (capacity factor is the energy a resource actually produces compared to what it would produce running at full output 100% of the time). Contrast this with the 24/7 availability of nuclear power, which can operate at capacity factors of more than 90%.

The impact on electricity systems is clear. Given the limited duration of operation of intermittent variable renewables, there is a need to dramatically overbuild to capture all the electricity needed while the resource is available, in order to cover periods when the sun is not shining and the wind is not blowing (all assuming reasonably efficient storage is available, which is not yet the case). The result is a system with much larger capacity than a system that includes nuclear (or any other dispatchable resource). In the MIT study, for example, the system in Texas would be 148 GW including nuclear but would require 556 GW of capacity with renewables alone. In New England a system with nuclear would have a capacity of 47 GW but would require 286 GW with renewables alone. In the UK this would mean 77 GW with nuclear compared to 478 GW without. And so on. The cost of adjusting the system to accommodate these much larger capacities is significant.
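
A rough way to see why the overbuild is so large is to match the annual energy of a dispatchable plant using variable renewables plus storage. The sketch below uses assumed round numbers and is not the MIT study's methodology:

```python
# Rough sketch of why matching a dispatchable plant's annual energy with variable
# renewables requires a large capacity overbuild.  Assumed round numbers only;
# this is not the MIT study's methodology.

HOURS = 8760
annual_demand_twh = 100.0        # energy to be served each year
nuclear_cf = 0.90                # assumed nuclear capacity factor
renewable_cf = 0.25              # assumed blended wind/solar capacity factor
storage_efficiency = 0.85        # assumed round-trip efficiency of storage
share_via_storage = 0.40         # assumed fraction of demand served through storage

nuclear_gw = annual_demand_twh * 1000 / (nuclear_cf * HOURS)

# Renewables must generate enough for direct use plus extra to cover storage losses.
required_twh = (annual_demand_twh * (1 - share_via_storage)
                + annual_demand_twh * share_via_storage / storage_efficiency)
renewable_gw = required_twh * 1000 / (renewable_cf * HOURS)

print(f"Dispatchable (nuclear) capacity needed: {nuclear_gw:.0f} GW")
print(f"Renewable capacity needed:              {renewable_gw:.0f} GW")
print(f"Overbuild ratio:                        {renewable_gw / nuclear_gw:.1f}x")
```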

Since then, study after study has found the same result. This includes a study in Sweden in which 20 different scenarios for full decarbonization all come out the same way; in every scenario the most cost-effective system includes continued long-term operation of the existing nuclear fleet. And more recently, a study in France has shown that decarbonizing without nuclear means a system more than twice as large as one with nuclear, and that the more nuclear in the system, the lower the overall average cost of production.

So, what does this mean for planning?  The approach to implementing a reliable economic low carbon electricity grid must start with looking at the entire system.  A study should assess the total costs of deploying the system under a range of scenarios using different shares of available resources.  Different forms of generation have different capabilities and these need to be modelled.  Once an efficient mix is determined, a plan should be put in place to implement it (i.e., X% nuclear, Y% solar, Z% wind, A% storage, etc.).  When looking to deploy each technology, LCOE can be used to compare various options.  For example, when comparing one solar project to another or one nuclear project to another.  And of course, should the costs of any given technology vary too significantly from the assumptions in the system study that determined the efficient mix, then the system study should be updated.

Today’s energy markets are most often based on the assumption that all electricity generated is the same (to be discussed in a future post).  This is true at the moment of generation when yes, an electron is an electron.   Unfortunately, the ability of any given technology to actually be there to produce at the moment it is needed varies substantially.  Therefore, a direct comparison of the LCOE of one option vs another is only part of the story.

To fully understand the costs of electricity generated, the costs of integrating any given technology into a reliable system must also be considered.  After all, what really matters is how much we pay as customers for our electricity and the studies are clear, nuclear as part of a fully decarbonized system is always lower cost than a system based on renewables alone.




Welcome nuclear newcomer countries to the nuclear family

So far in 2021 two new countries have started producing nuclear energy for the first time.  The UAE has put the first unit of its 4-unit Barakah plant into service with the second one following close behind.  In Belarus, it is the same story, as the first unit of the Ostrovets station entered service and the second is going through its start up. 

We know that the countries with the lowest carbon emissions rely on either hydro or nuclear power (or both) as the backbone of their electricity systems. And these countries have achieved this low carbon footprint in reasonable time frames. So a country like the UAE, which has almost 100% fossil-fuelled electricity, will quickly decarbonize as the four-unit Barakah plant comes into service, at which point nuclear will be about 25% of its mix. Further investments in renewables will help it meet its carbon targets.

Often when considering the future of nuclear power, the case of Germany comes up. Here we have a high-tech industrialized country that has decided not only to meet its climate goals without nuclear power but has made phasing it out a higher priority than reducing emissions. This is often given as the example to demonstrate that nuclear has no future in a clean energy world.

Nothing could be more wrong. These decisions tend to be made purely for ideological reasons. Germany, which has invested heavily in renewables while at the same time phasing out nuclear power, has struggled to meet its carbon objectives. Belgium announced it would build new gas plants to replace its nuclear fleet given its commitment to a nuclear phase-out. Frankly, these countries have every right to meet their carbon targets as they see fit. But if they are so certain that renewables can do it alone, then they should just do it and remove nuclear when it is no longer needed. That is not what has happened. Each of these countries has had to rely more on fossil fuel when nuclear is removed from their systems, even as they invest heavily in new renewables.

Given the urgency of decarbonizing the world, the solution is clear.  Countries that rely on fossil fuel for their energy should pursue both hydro and nuclear for their baseload needs and supplement with renewables to fully decarbonize their systems.  Unfortunately, hydro is limited by geography but nuclear can be implemented almost anywhere.  This means nuclear is an important option and countries planning to decarbonize are taking note.

According to the IAEA there are up to 30 countries looking into nuclear power for the first time.

The World Nuclear Association (WNA) has just this month updated its biennial Nuclear Fuel Report. In this report the industry surveys companies around the globe to develop its scenarios. This year's update sees an expansion of the market with new countries embarking down the path of deploying nuclear power. The reference scenario includes nine new countries: Bangladesh, Egypt, Ghana, Indonesia, Kenya, Poland, Saudi Arabia, Turkey and Uzbekistan. Of these, Bangladesh, Egypt and Turkey have their first plants under construction. The Upper Scenario adds a further seven countries: Chile, Jordan, Kazakhstan, Nigeria, the Philippines, Thailand and Vietnam. And there are others starting to consider nuclear for their future.

None of these projections takes into consideration the increased demand on energy systems as the goal becomes net zero carbon emissions. Once those pledged to meet net zero by 2050 start to develop their plans, and with new nuclear options such as SMRs entering the market, we expect to see many more countries taking a hard look at implementing nuclear as part of their future energy systems.

So, for those countries that are truly committed to decarbonizing their energy systems and want to deploy nuclear as part of their solution – welcome to the nuclear family – you are on the path to abundant, reliable, and economic low carbon energy.




It’s time to rethink the South Korean nuclear phase out policy

President Moon Jae-in of South Korea followed through on his campaign pledge to reduce Korea’s reliance on nuclear power only a month after his inauguration in May 2017.  He quickly announced Korea would stop building new reactors and not life extend those in operation.  The objective was to replace nuclear with other clean energy options over time.  This policy was developed following the 2011 Fukushima accident in Japan and a 2016 movie (Pandora) which fictionalized a similar accident in Korea.  Now, with the next presidential election coming up in March of 2022, this policy is becoming an election issue – as it should.

We first wrote about Korea's current anti-nuclear policy three years ago, when the decision was made to shut down and decommission the Wolsong 1 reactor. So far Korea has closed only two reactors: Kori unit 1, the nation's oldest PWR, which was closed rather than life-extended in 2017; and Wolsong 1. The narrative is that Wolsong 1 was closed only 3 years before its end of life. Although that is when its licence would have expired, it was far from its end of life. Just a few years earlier, in 2011, Wolsong 1 had been refurbished, a life-extending process for pressurized heavy water (CANDU) plants in which the key nuclear components are all replaced, allowing another 30 years of operation. There is no doubt this unit was sacrificed to support the phase-out policy and should be operating today, together with Wolsong units 2, 3 and 4, providing clean carbon-free energy to the Korean grid.

The skyscrapers of Seoul light up as evening comes on in South Korea. Source: iStockphoto.com

In December 2020 Korea issued its Ninth Basic Plan for Electricity Supply and Demand for the years 2020-2034. This plan projects that supply will increase by just over 50% while reducing dependence upon coal and nuclear power. Thirty coal plants will reach their end of life by 2034, reducing the share of coal in the system from 40% to 15%. Unfortunately, 24 of these coal plants will be converted to gas. While we know that gas produces less carbon emissions than coal, entrenching fossil generation for the long term is not a path to net zero emissions. Today Korea's electricity sector emits over 500 g/kWh and has a long way to go to decarbonize.

The goal is to increase renewables from their current 6.5% to about 42% of capacity. Nuclear will be reduced from its current 25% to just over 10%. It is always important to remember that plant capacity is not the right metric for comparison, since renewable sources of energy such as solar and wind produce much less energy than equivalent-sized coal and nuclear plants due to the limited time the wind blows and the sun shines. This means more plants are needed to produce the same amount of electricity.

And these plants all require land, and lots of it. This creates further challenges, as Korea is a small mountainous country with limited space to implement large-scale renewable solutions. The most promising source of renewables is offshore wind. In February, plans were reported to invest $43.2 billion by 2030 in the world's largest single offshore wind project, with a capacity of 8.2 GW (today Korea has only 1.67 GW of wind capacity). This is a technically challenging project, and the claim that it would produce energy equivalent to the output of six 1.4 GW nuclear reactors is somewhat deceptive because, as stated above, a nuclear plant will produce more than double the energy of a similarly sized wind farm, i.e., 4 GW of nuclear would produce more energy in a year than 8 GW of wind.
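
The arithmetic behind that statement, using assumed capacity factors of roughly 35% for offshore wind and 90% for nuclear (illustrative values, not official projections):

```python
# Annual energy from the proposed 8.2 GW offshore wind project versus six 1.4 GW
# nuclear reactors, using assumed capacity factors (illustrative, not official figures).
HOURS = 8760

wind_gw, wind_cf = 8.2, 0.35            # assumed offshore wind capacity factor
nuclear_gw, nuclear_cf = 6 * 1.4, 0.90  # six 1.4 GW units at an assumed 90%

wind_twh = wind_gw * wind_cf * HOURS / 1000
nuclear_twh = nuclear_gw * nuclear_cf * HOURS / 1000

print(f"8.2 GW offshore wind:     ~{wind_twh:.0f} TWh/year")
print(f"6 x 1.4 GW nuclear units: ~{nuclear_twh:.0f} TWh/year")
```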

Korea is a global industrial powerhouse and as the world’s 9th largest energy consumer in 2019 needs access to economic reliable energy to fuel its dynamic economy.  This is not easy as South Korea has little to no domestic energy resources and is one of the world’s top five importers of liquefied natural gas (LNG), coal, and oil.

Trying to decarbonize without nuclear power means that Korea will lock in fossil fuel use (gas) for decades to come. In addition to increasing the risk to its energy security, recent reports suggest the era of cheap gas is coming to an end. Spurred by increasing global demand, LNG prices in Asia have increased about six-fold in the last year.

Korea once made a bold decision to implement nuclear power in a big way to reduce its dependence on foreign supplied fossil fuel and provide large amounts of low carbon economic and reliable energy to fuel its growing economy.  Through dedication and hard work, it went from an importer of nuclear technology to becoming self sufficient and then exporting the technology; its export to the UAE is a source of great pride.

This also resulted in a very high level of both technology and human development.  Nuclear power creates high quality jobs for thousands of Koreans.  This expertise is valued all over the world.  Unfortunately, it doesn’t take long for negative policies to start to degrade this expertise.   Young people will not choose nuclear as a career if government policy is to phase it out even if there are still years of operations that require trained experts.  And for those more experienced, there is a whole world out there that would value their excellent Korean qualifications. 

The International Energy Agency (IEA) has stated that net zero emissions cannot be reached without nuclear continuing to play a critical role.  Governments around the world are becoming more vocal in their agreement.  In Canada and the United States, both governments have stated unequivocally that nuclear is needed to reach these goals.   In Europe a group of 87 parliamentarians have signed a letter supporting nuclear to be included in the EU taxonomy as a sustainable clean generating option.  China and Russia are pursuing large nuclear expansions and Japan continues to declare that nuclear must be part of its energy mix.

Nuclear power in Korea has been an unqualified success and is the example to be used for other nations wisely choosing to deploy nuclear as part of their climate and energy infrastructure.  Korea needs nuclear to maintain its industrial base and meet its climate goals.  And the world needs Korean nuclear experience and expertise.  The time is right for a discussion with the Korean people on the nuclear phase out policy – and an election is a good time to have it.    




Yes – Nuclear power is an economically competitive low carbon energy source

When it comes to the economics of electricity, there is no report more important than Projected Cost of Electricity, issued every 5 years by the International Energy Agency (IEA) and the OECD Nuclear Energy Agency (NEA).  This report (now in its 9th edition) collects electricity costs of various technologies from a range of countries and reports on the competitiveness of each.  The 2020 version of this report was issued in December and its conclusion is clear – nuclear power is the dispatchable (meaning always available) low-carbon technology with the lowest expected costs.

Source: pexels.com

This is in stark contrast to what we often hear – that even though nuclear power may well be a low carbon solution, its costs are much too high to consider. Recent projects that have not gone well, primarily in the West due to a long absence from nuclear construction coupled with the challenges of building first-of-a-kind (FOAK) designs, are the evidence used to support this argument. The successful economic deployment of nuclear in countries like China, Korea and Russia is ignored. We even have a good example that newcomer countries can successfully build nuclear plants with the start-up of the Barakah nuclear power plant in the UAE.

This report sees through this bias.  This is not a nuclear report.  It is about electricity and its costs.  The conclusions are based on the results of the analysis, not on any preconceived biases. It concludes that all low carbon options have improved their costs since the 2015 version. 

Projected Cost of Electricity 2020 (IEA/NEA)

One change since the 2015 version of this report is the inclusion of nuclear life extension or Long-Term Operation (LTO) in addition to the traditional consideration of the economics of nuclear new build.  The results show that LTO provides the lowest cost electricity of all technologies considered.  This makes for a very simple message – for the best low carbon, low-cost option – invest in keeping the current nuclear fleet operating. 

Given the changing generating mix from traditional fossil-fuelled plants to more and more variable renewables, there is an acknowledgement that to truly understand their economics, the costs to the system of incorporating these variable resources must be considered. A model called the Value Adjusted Levelized Cost of Electricity (VALCOE) has been developed, but it adds considerable complexity given, as would be expected, that results are very sensitive to the actual system being analysed. This approach continues to be a work in progress. We should expect a more fulsome analysis in the next edition.
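
Very roughly, the idea is to start from LCOE and adjust it up or down according to how much energy, capacity and flexibility value a technology provides relative to the system average. The sketch below is purely notional, with invented numbers; it is not the IEA's actual VALCOE calculation:

```python
# Purely notional sketch of a value-adjusted cost metric: start from LCOE and
# adjust for how each technology's energy, capacity and flexibility value compare
# to the system average (all in $/MWh).  Numbers are invented; this is NOT the
# IEA's actual VALCOE calculation.

def value_adjusted_cost(lcoe, energy_value, capacity_value, flexibility_value,
                        avg_energy_value, avg_capacity_value, avg_flex_value):
    # A technology delivering more value than the system average gets a downward
    # cost adjustment; one delivering less value gets an upward adjustment.
    adjustment = ((avg_energy_value - energy_value)
                  + (avg_capacity_value - capacity_value)
                  + (avg_flex_value - flexibility_value))
    return lcoe + adjustment

# Invented illustrative inputs
print(f"Dispatchable plant: ${value_adjusted_cost(65, 40, 12, 5, 38, 8, 3):.0f}/MWh")
print(f"Variable renewable: ${value_adjusted_cost(45, 32, 2, 0, 38, 8, 3):.0f}/MWh")
```

The point of the sketch is simply that a technology with a lower LCOE can end up with a higher value-adjusted cost once its contribution to the system is taken into account.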

When it comes to nuclear, this report notes that countries willing to pursue the nuclear option have three main technology solutions to reduce cost at the system and plant level (interestingly consistent with our previous series on Saving the Planet):

  1. LTO or investing to keep the current fleet operating into the future.
  2. Building existing Generation III reactors. These designs have now passed their FOAK demonstrations and are ready to demonstrate improved economics going forward; and
  3. New designs being developed such as Small Modular Reactors (SMRs). These designs are poised to extend the value proposition of nuclear power.

The IEA/NEA, in its updated Projected Cost of Electricity report, has assessed the costs of the many low carbon options to meet electricity needs going forward. Based on this analysis, nuclear power is well positioned to continue and expand its role in providing reliable, economic, low carbon electricity to the world. 




Delivering reliable electricity – nuclear plants just keep on running

On October 22, 2020, Darlington Unit 1 achieved a milestone never before achieved by a nuclear power plant: running for 1,000 days continuously without an outage, either planned or unplanned [1]. And it is still running. This unit, operated by Ontario Power Generation (OPG), secured the world record for continuous operation last month when it hit 963 days, taking over from the Kaiga 2 unit in India, the previous record holder at 962 days achieved in 2018. Kaiga took the record from Heysham 2 in the UK, which reached 940 days in 2016, breaking the record set by the Canadian Pickering Unit 7 reactor 22 years earlier [2].

Why does this matter? 

Source: istockphoto.com

The world runs on energy. We need it to keep warm (or cool, depending upon the climate), cook our food, light our homes, communicate with one another and travel from place to place; and to enable pretty much everything that drives our economies. We need this energy to be affordable and, most of all, we need it to be reliable. For most people in the developed world, we fully expect that when we flip the switch, the lights will come on. Not sometimes, but each and every time. We also want this energy to not harm the environment (although, unfortunately, we will concede on the environment rather than do without).

And there is no more reliable low carbon source of energy than from nuclear plants.  Once in operation, they just run and run and run, like the energizer bunny.  These plants run in bad weather and good, during the day and during the night, providing 24 / 7 electricity to their customers. 

System reliability is not something we often think about until we experience an issue.  It came as a shock to many this year when California suffered ongoing blackouts and energy shortages.  There are many contributing factors to poor reliability as electricity grids are complex systems that require a never-ending balance between supply and demand, meaning a need for reliable generation and a robust transmission and distribution system.   In this case, the California Independent System Operator described the conditions that caused demand to exceed available supply: scorching temperatures and diminished output from renewable sources and fossil-fuelled power plants when electricity was needed most.

The president of the system operator blamed the California Public Utilities Commission for not ordering companies to make available sufficient supply.  A critical issue is the changing mix of generation with solar growing quickly without sufficient back up when the sun goes down and the air conditioning load remains high.  This demonstrates that solar power alone cannot meet the future energy needs of large energy intense systems like that of California, and that reliability must always be considered as we make structural changes to these systems. 

On the other hand, the US nuclear fleet continues to hum along providing 20% of the country’s electricity supply. 

Source: NEI.org

Once again in 2019, the US nuclear fleet operated at a very high capacity factor, achieving 93.4% (capacity factor is the energy actually produced as a percentage of what the plant would produce running at full power 100% of the time). The US fleet continues this stellar performance even as it ages. For the past 20 years the fleet has produced at a capacity factor of around 90% or more, demonstrating how robust a technology nuclear power really is.

This is not just true of the US.  It is true for the entire global nuclear fleet.  As shown in the WNA Nuclear Performance report 2020, more than a third of the world’s plants operate at 90% capacity factor or above and a full two thirds operate at capacity factors greater than 80%.

Nuclear technology is so robust that this excellent performance is not restricted to one specific type of plant.  Light water reactors, gas cooled reactors, heavy water reactors – they all operate great.  The distinguishing factor is more related to the expertise and excellence of the individual operator and to specific local market conditions, not to any specific technology.  International cooperation through organizations like INPO (Institute of Nuclear Power Operators) and WANO (World Association of Nuclear Operators) ensure best practices are shared and that all have access to the tools they need to achieve a high level of performance.  This is an industry that collaborates to ensure continuous improvement across the global fleet.

What really demonstrates the strength of nuclear technology is the continued strong performance even as the plants age. Heysham achieved its record run at 28 years of age, and Darlington Unit 1 is 30 years old with only a year or so left before going down for refurbishment and a life-extension outage. Many would expect the life cycle of a nuclear plant to look like an inverted bathtub, with below-average performance when the plant is new as the kinks are worked out and then declining performance with age as it nears its end of life. But this is not the case. Nuclear plants run well when they are new, when they are middle-aged, and actually tend to run their very best as they get old.

Need reliable electricity supply even when the sun is not shining and the wind is not blowing? When it comes to reliable low carbon electricity, nuclear plants set the bar very high. They just run and run and run some more…


[1] Every station in Canada had at least one unit set a station performance record this year.

[2] It should be noted that the AGR units in the UK and the PHWR units in Canada and India use on-power fuelling, so they are not limited by the need for refuelling outages.