
Misc. Debris

A list of debris posts can be found here.

EGU 2014

posted Apr 15, 2014, 3:07 AM by Aslak Grinsted   [ updated Apr 16, 2014, 5:23 AM ]

Me at EGU 2014:
  • Convening the sea level session (Friday afternoon – you are welcome!)
  • Surface velocities at Engabreen produced from time-lapse feature tracking (Alex)
  • Sea level projections from FAR to AR5
  • Sea level rise projection for Northern Europe
  • Trends in global and regional sea levels since 1807 (Sveta)
  • Haar Wavelet Analysis of Climatic Time Series (Zhang)
  • Trends in normalized hurricane damages in the US
I put some pdfs of my posters below (as I make them and as I decide whether they should be on the internet). The regional sea level projection will not be uploaded before I have submitted a manuscript.

I decided to make my posters in Inkscape after some painful experiences with PowerPoint-created pdfs that turned out crap regardless of which workarounds I tried. I have also used Adobe InDesign, which is really cool but has a bit of a learning curve when you only use it once per year. Inkscape is limited, but perhaps that is a good thing. At least it has been a very smooth ride thus far: three posters in 2 days (including the analysis for one of them). So I highly recommend it for posters.

Another possible workflow would be to use PowerPoint for all the text layout, export as pdf, and then import it into Inkscape for the final tweaking. (PowerPoint tip: avoid shadows/transparencies/gradients if you want nice prints.)

Comparison of sea level projections

posted Oct 2, 2013, 4:18 AM by Aslak Grinsted   [ updated Nov 23, 2013, 3:18 PM ]

I have been making a figure which compares the 21stC sea level projections from the AR5 with previous IPCC reports, semi-empirical models, and an expert elicitation ... I hope this may be useful in presentations for many people (feel free to use them wherever). See also this page for a similar figure for the ice sheet contribution only.

The IPCC FAR, SAR, TAR, and AR4 projections have all been converted to RCP scenarios using conversion factors (see below). All projections have been regularized to 100 years using plain scaling.

Figure: Evolution of sea level rise projections (RCP8.5)

  • Extrap: constant rate of sea level rise at the present-day trend (An absolute lower limit of plausibility IMO)
  • FAR: full range of SLR projections from FAR (taken from SAR table 7.8)
  • SAR: full range of SLR projections from SAR (taken from TAR table 11.14). (SARp369: "Excluding the possibility of collapse of the West Antarctic ice sheet").
  • TAR: full range of SLR projections from TAR table 11.14. (TAR p.642: "The range of projections given above makes no allowance for ice-dynamic instability of the WAIS".)
  • AR4: SLR projection excluding scaled-up ice sheet discharge. (AR4 WG1 Table 10.7). 
  • AR4+: SLR projection including scaled-up ice sheet discharge. (AR4 WG1 Table 10.7). Context for "larger values cannot be excluded" can be found in the AR4 SPM.
  • SEM: full range of semi-empirical projections in AR5 (from AR5 fig.13.12). 
  • AR5: "process based" ice sheet projections from AR5 table 13.5. These do not account for a potential collapse of Antarctic marine based sectors which may contribute up to several decimetres (indicated with thin shaded line).
  • Ice sheet experts*: refers to Bamber and Aspinall (2013) table S1 5-95%, plus non-ice-sheet contributions from AR5 table 13.5. Note: BA13 does not refer to a specific scenario (hence the asterisk).
  • SLR experts: refers to the expert elicitation of Horton et al. 2013 (table 1). They do not provide RCP45, only RCP85 and RCP3PD. However, both SEMs and AR5 agree that the projection for RCP45 lies about a third of the way between the two, so I have used this weighting.
Some assumptions on normality and covariance structure were necessary to derive 5-95% confidence intervals from the likely ranges reported in AR5 table 13.5. 
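The normality part of that derivation can be sketched in a few lines. This is only a minimal illustration of the assumption, not the exact calculation behind the figure (the real one also involves covariance assumptions):

```python
from statistics import NormalDist

def likely_to_5_95(lo, hi):
    """Widen an IPCC 'likely' (central 66%) range to a 5-95% interval,
    assuming the underlying distribution is normal."""
    z_likely = NormalDist().inv_cdf(0.83)  # 'likely' upper bound ~0.954 sigma
    z_90 = NormalDist().inv_cdf(0.95)      # 5-95% bound ~1.645 sigma
    mid = 0.5 * (lo + hi)
    sigma = (hi - lo) / (2 * z_likely)
    return mid - z_90 * sigma, mid + z_90 * sigma

# AR5 table 13.5 likely range for RCP8.5 GMSLR by 2100: 0.52-0.98 m
lo95, hi95 = likely_to_5_95(0.52, 0.98)
print(round(lo95, 2), round(hi95, 2))  # -> 0.35 1.15
```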

"Antarctic collapse" does not literally mean a full collapse, but refers to a marine ice sheet instability. Read AR5 text for more precise meaning.

Scenario conversion factors that I have used:
The aim of the conversion factors is to predict what the old models would give if forced with new scenarios.
RCP45/A1B=0.90 & RCP85/A1B=1.20 from AR5 fig 13.10
A1B/IS92A=1.20 calculated by forcing a Jevrejeva model with TAR table II.3.11 
IS92A/SA90=0.87 calculated by forcing a Jevrejeva model with SAR fig Ax.9.
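These factors chain by plain multiplication, so an old-scenario projection can be walked to an RCP scenario step by step. A small sketch (the `convert` helper and the 1.0 m example value are just for illustration):

```python
# Chained scenario conversion factors (values from the text above)
FACTORS = {
    ("A1B", "RCP45"): 0.90,   # AR5 fig 13.10
    ("A1B", "RCP85"): 1.20,   # AR5 fig 13.10
    ("IS92A", "A1B"): 1.20,   # Jevrejeva model forced with TAR table II.3.11
    ("SA90", "IS92A"): 0.87,  # Jevrejeva model forced with SAR fig Ax.9
}

def convert(value, chain):
    """Rescale a projection along a chain of scenarios,
    e.g. SA90 -> IS92A -> A1B -> RCP85."""
    for step in zip(chain, chain[1:]):
        value *= FACTORS[step]
    return value

# e.g. a hypothetical 1.0 m SA90-forced projection expressed as RCP8.5:
print(convert(1.0, ["SA90", "IS92A", "A1B", "RCP85"]))  # 0.87*1.20*1.20
```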

Compare TAR II.3.11 with SAR Ax.9. I'd greatly appreciate any comments on how to improve these conversion factors. 

My interpretation:
Interestingly, sea level projections were coming down (and narrowing) until we started getting worrying records from the ice sheets. By AR4 it became evident that the ice sheets had far more dynamic behavior than previously thought (Larsen-B, Jakobshavn, Helheim, Kangerdlugssuaq). It became clear that the representation of ice physics and marine ice sheet interaction needed to be improved (see SeaRISE & ice2sea). Since then the evidence for an important dynamic ice sheet contribution has only been strengthening, with e.g. Thwaites in the Antarctic and Petermann in Greenland.

Note: I might update figs with a better representation of the SEM uncertainties. 

If you have comments then please email or tweet me. 

Optimistic & over-confident ice sheet projections in AR5

posted Sep 30, 2013, 7:47 AM by Aslak Grinsted   [ updated Nov 23, 2013, 1:59 PM ]

Take away message: The graph on the right shows that the AR5 process based ice sheet projections are optimistic and over-confident when compared to the views of ice sheet experts. To be fair, they do mention a possible collapse scenario which could close the gap.


Arguably the most uncertain component of sea level rise projections is the rate of future ice sheet mass loss. In AR4, ice sheet models were unable to simulate key processes, and the AR4 sea level projections were hugely criticized for being too conservative.

Since the AR4 there has been great progress in ice sheet modelling, but ice sheet-ocean interaction is still a major challenge. E.g. IPCC AR5 is still unable to give scenario-dependent projections of the dynamic ice loss (see AR5 table 13.5), and it is unable to assess the probability of an Antarctic collapse. This led them to exclude this possibility from the process based sea level projections in table 13.5, and to only report the 'likely' range.

The "best" picture of the full uncertainty in ice sheet mass loss projections is from the expert elicitation by Bamber & Aspinall (2013). Figure 1 compares how the AR5 "process based" ice sheet projections (table 13.5) compare to the views from this ice sheet expert elicitation.

Conservative & Overconfident

AR5 process based model projections are much more conservative/optimistic and have much narrower uncertainties than the ice sheet experts (Fig.1). There can be no good reason why the AR5 authors should have much greater confidence in their ability to project ice sheet loss than the ice sheet experts themselves. Notably, the best-guess view of ice sheet experts nearly falls outside the AR5 process based range. The worst case scenario from ice sheet experts is more than 60 cm higher than the worst case from the AR5 process models.

Clearly the process based SLR projections from AR5 are over-confident and too conservative by themselves. You have to invoke a significant probability of a collapse of Antarctic marine based sectors before they can be reconciled with Bamber & Aspinall (2013). This is particularly important for the worst case, but it is also evident that even the central estimates from AR5 process based models are practically inconsistent with the views of ice sheet experts (fig.1).

Another way to put it: AR5 ice sheet projections are incompatible with the views held by about half of ice sheet experts.

Footnote: other comparisons

  • Uncertainties from semi-empirical models show much better correspondence with ice sheet experts. (fig.1)
  • I consider a constant mass loss at present day rates to be the absolute lower limit of plausibility in a warming world. This is shown as "Extrap" in figure 1. Notice how this lower limit excludes much of the lower tail of the AR5, AR4, and AR4+ sea level projections.
  • The AR5 projects that there is a 21% chance that the 21stC ice sheet mass loss will be slower than the present rate under RCP4.5. IMO this is simply implausible. (Assuming normality of the Extrap and AR5 numbers.)

Figure 1: Projections of ice sheet mass loss over the 21st century under RCP4.5. The AR5 process based projections appear optimistic and over-confident when compared with the views of ice sheet experts.


  • Extrap: Fixed mass loss rate based on Shepherd et al. (2012). (An absolute lower limit of plausibility IMO)
  • AR4: Ice sheet mass loss excluding scaled-up ice sheet discharge. (AR4 WG1 Table 10.7). 
  • AR4+: Ice sheet mass loss including scaled-up ice sheet discharge. (AR4 WG1 Table 10.7). Context for "larger values cannot be excluded" can be found in the AR4 SPM.
  • SEM*: full range of semi-empirical projections for RCP4.5 minus a central estimate of the non-ice-sheet contributions to SLR. (Calculated from AR5 table 13.6 minus central values from AR5 table 13.5.)
  • AR5: "process based" ice sheet projections from table 13.5. These do not account for a potential collapse of Antarctic marine based sectors which may contribute up to several decimetres (shown as thin shaded line).
  • Ice sheet experts: refers to Bamber and Aspinall (2013) table S1 5-95%. Notice: not specifically RCP4.5.
All projections have been scaled to 100 years. AR4 estimates are based on A1B but scaled with the RCP45/A1B ratio (=90%) from AR5 figure 13.10. Some assumptions on normality and covariance structure were necessary to derive 5-95% confidence intervals from the likely ranges reported in AR5 table 13.5. 


AR5 sea level rise uncertainty communication failure

posted Sep 27, 2013, 4:18 AM by Aslak Grinsted   [ updated Mar 19, 2014, 6:55 AM ]

I am disappointed in how the sea level rise projection uncertainties are presented in the IPCC AR5. The way the numbers are presented makes people believe 98 cm by 2100 is a worst-case scenario, which it clearly isn't. The AR5 does have caveats which explain why it could be more, but unfortunately these are buried in language that clearly goes over the heads of most people.

[UPDATE: The AR5 sea level chapter authors have written a letter to Science where they emphasize/clarify that "The upper boundary of the AR5 “likely” range should not be misconstrued as a worst-case upper limit." That is exactly my point with this page.]

This is how it is presented in the Summary for Policymakers:

Global mean sea level rise for 2081–2100 relative to 1986–2005 will likely be in the ranges of 0.26 to 0.55 m for RCP2.6, 0.32 to 0.63 m for RCP4.5, 0.33 to 0.63 m for RCP6.0, and 0.45 to 0.82 m for RCP8.5 (medium confidence). For RCP8.5, the rise by the year 2100 is 0.52 to 0.98 m, with a rate during 2081–2100 of 8 to 16 mm yr–1 (medium confidence).

The numbers of 0.26 to 0.98 m are what people pick up. To appreciate why 0.98 m is not an upper limit of SLR, you have to read on and understand the caveats stated in the AR5. The SPM also says:

The basis for higher projections of global mean sea level rise in the 21st century has been considered and it has been concluded that there is currently insufficient evidence to evaluate the probability of specific levels above the assessed likely range.

To parse this you need to understand the IPCC jargon. "Likely" means the central 66% confidence interval, i.e. slightly less than a one-sigma interval. So the full uncertainties are at least twice as large, but they are unwilling to say exactly by how much. They also say that there is an additional uncertainty that is unlikely to be anything but positive:

"Based on current understanding, only the collapse of marine-based sectors of the Antarctic ice sheet, if initiated, could cause global mean sea level to rise substantially above the likely range during the 21st century. However, there is medium confidence that this additional contribution would not exceed several tenths of a meter of sea level rise during the 21st century. {13.4, 13.5}"

It is unclear what they mean by "several tenths of a meter". I find it remarkable that they could not agree on a more quantitative statement considering they are only stating something with "medium confidence". In any case this excluded potential contribution is clearly positive. This uncertainty strongly affects the upper tail of the uncertainty range. It is effectively a bias. Ice sheet experts appear to judge this collapse scenario quite probable, and post-AR5 modelling indicates that Pine Island Glacier in Antarctica is already engaged in an unstable retreat (Favier et al., 2014).

The literal meaning of the AR5 likely range is that there is a 17% chance of exceeding 1 m SLR, assuming that there is no marine instability (under RCP8.5). If there is an instability then the probability is greater.

What is the worst case? 1.7 m?

I recommend looking at the semi-empirical sea level projections from AR5 figure 13.12, which have about 1.5 m as the worst case by 2100. But if we try to construct one from the three quotes above, we need a few assumptions. First I convert the RCP85 likely range to a sigma of 0.24 m (assuming a normal distribution). This gives a 95% confidence upper limit of 1.22 m (again assuming normality). To this we need to add "several tenths of a meter", which we are left to interpret as we see fit. In the worst case we might say something like 0.5 m, which would result in ~1.7 m, an entirely different story than what most people get from the AR5! I think that deserves the exclamation mark.
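This back-of-envelope arithmetic can be written out explicitly. A sketch of my own reasoning, with 0.5 m as just one possible reading of "several tenths of a meter":

```python
from statistics import NormalDist

likely_lo, likely_hi = 0.52, 0.98        # AR5 RCP8.5 likely range by 2100 (m)
z_likely = NormalDist().inv_cdf(0.83)    # likely = central 66% ~ 0.95 sigma
sigma = (likely_hi - likely_lo) / (2 * z_likely)
mid = 0.5 * (likely_lo + likely_hi)

upper95 = mid + 1.96 * sigma             # 95% confidence upper limit (normality assumed)
collapse = 0.5                           # one reading of "several tenths of a meter"
print(round(sigma, 2), round(upper95, 2), round(upper95 + collapse, 1))
# -> 0.24 1.22 1.7
```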

It is remarkable how this roughly agrees with the plausible upper limit derived from an ice sheet expert elicitation, and with semi-empirical models (see AR5 table 13.6). I cannot understand why the AR5 could not use the semi-empirical models plus the expert elicitation to derive a very likely upper limit. Even if there is disagreement about the reliability of these methods, surely everybody could agree to use them to derive a very likely upper range with some degree of confidence. It is essential for some types of adaptation planning. For comparison, here is a quote from Josh Willis a few days before the AR5 release:

"The 2 meters by 2100 is cited a lot, but if you ask scientists what they think of that number, they say it is probably a little high, maybe 1.5 meters [4.9 feet] is more like an upper bound" 

Since then we have had an expert elicitation, and this is a pretty accurate summary of the experts' view of RCP8.5. It is predictable that people will look at the numbers and the graph and look no further. I think it mirrors the problems with how the AR4 presented sea level projections, and I find it unbelievable that people did not learn from that experience. Instead this became a petty struggle between process based and semi-empirical modelling. I applaud that the caveats made it into the SPM, but unfortunately nobody will notice them.

I have heard from several authors that they were very much concerned with quantifying a worst case. Evidently they could not reach a consensus. But it seems counter-intuitive to report a narrower uncertainty interval because you are more uncertain. (AR5 used likely ranges whereas AR4 used very likely ranges.)

Further information:
On this companion page I argue that the AR5 ice sheet projections are both optimistic and over-confident. Go take a look. 

On this page I compare AR5 with other sea level projections. It shows the evolution of IPCC sea level projections.

Please send comments on twitter. This is just a quick first reaction/draft before I delve into more detail on other aspects of the AR5 sea level projections. 

Can process models close the sea level budget?

posted Sep 23, 2013, 7:41 AM by Aslak Grinsted   [ updated Nov 20, 2013, 1:33 AM ]

Before we can trust models that project sea level rise, it is reasonable to demand that these models can match the record of global mean sea level rise. Gregory et al. (2012) write: "Confidence in projections of global-mean sea level rise (GMSLR) depends on an ability to account for GMSLR during the twentieth century".

Gregory et al. (2012) investigate this systematically. They select a set of different models for each of the individual SLR contributors, and then try all the different combinations of the selected models to see how they compare to global mean sea level. This is shown in this graph:

Figure 1: Comparison of time series of annual-mean global mean sea level rise from four analyses of tide gauge data (lines) with the range of the 144 synthetic time series (gray shading). Each of the synthetic time series is the sum of a different combination of thermal expansion, glacier, Greenland ice sheet, groundwater, and reservoir time series.

This graph shows that it is only possible to close the sea level budget if you cherry-pick the most sensitive models. In general we see that the whole is greater than the sum of the parts. This is not how it is framed in the paper, where they instead argue that they can satisfactorily account for the GMSLR. Personally, this necessity to cherry-pick does not reassure me.

A severe problem with the Gregory et al. paper is that they claim that their analysis has implications for semi-empirical models. This assertion is unsupported by their study: there is no analysis of semi-empirical models whatsoever, and they show no evidence. This hints to me that there is an agenda. They want to convince us that we should have confidence in their preferred type of model but not in the alternatives (semi-empirical models). Frankly, I think they succeeded in demonstrating pretty much the opposite.

Another paper / same authors
Church et al. 2013 also try to argue that we can have confidence in process models of sea level rise. The first thing I notice is that the author team has a great deal of overlap with Gregory et al. (2012). In this new paper they repeat the exercise of testing whether they can close the 20th century sea level budget. The difference from the Gregory et al. study is that in this case they pick one particular set of models which roughly closes the budget. The problem is that they select models that disagree with data when you look at the details: e.g. they pick a version of the glacier model with a much greater historical sea level contribution than our best data-based estimate from Leclercq. Is that a successful validation of the models against data?

Figure 2: Blue is the modelled glacier contribution used to close the budget in Church et al. (2013); red is a data-based estimate. Clearly the chosen model has a much larger contribution than the data suggest.

Both papers also add a constant long term ice sheet equilibration term of 0-0.2 mm/yr. I do not want to discuss this term in detail here, but I just note that I think this estimate needs to be updated with more recent modelling. I am also concerned with whether this term is being counted twice in the budget. 

Gregory et al. 2012, Twentieth-Century Global-Mean Sea Level Rise: Is the Whole Greater than the Sum of the Parts?

Church et al. 2013, Evaluating the ability of process based models to project sea-level change, ERL.

Return period of Boulder 2013 extreme rain

posted Sep 20, 2013, 2:30 AM by Aslak Grinsted   [ updated Nov 20, 2013, 1:30 AM ]

Several media stories on the Colorado floods talk of it as a 1000-year flood. However, the original source of this claim was talking about a 1000-year precip event (not the same as a flood event). Pielke Jr discusses this on his blog. The source for the 1000-year return level estimates appears to be this NOAA page. We can argue about how solid these very rare return levels are, as they obviously must be based on an extrapolation beyond the largest recorded event. Nevertheless, out of curiosity, I looked up what return period NOAA estimated for the observed rainfalls.

I found this map of weekly rain totals and clicked on a gauge in Boulder. It received 12.91 inches over 1 week. Other neighboring gauges experienced similar amounts.

I then went to the NOAA return-period/return level page and plugged in the lat lon of the station and got this return level plot for 7-day totals:

From that return level graph the rainfall appears to be inconsistent with anything less than a 500-year event, and the best estimate is that it is rarer than a 1000-year event. But again, estimating the return period for this precipitation event involves extrapolating far beyond past records, and we should take it with a grain of salt. I would not go as far as calling it pure fantasy, however.

Once the return period estimates are updated with the newly observed precipitation extreme, the odds must shorten. Now that we have seen that it can happen, it will no longer seem quite as unlikely that it might happen again.

I have also downloaded Boulder precip data from here and calculated my own empirical return period plot (see below). From this we clearly see that 12.91 in is really off the scale. Prior to observing this extreme event you would probably have estimated a 1000-year return period (as indeed NOAA appears to have done). (Shading is 1 sigma.)

Discharge in Boulder creek
It would also be interesting to see how the weekly discharge in Boulder Creek compares to the past. At this station I get the weekly discharge (starting 11 Sept 2013) to be 1.42bn ft3 = 40M m3. A historical record can be found here (since 1986). Taking this record, stacking it into weekly bins, and looking at empirical return periods, I get the plot on the right (shading is 1 sigma / 17-83%). There the 2013 event comes out as a ~25-year event (unsurprisingly, since it has been observed once in a record starting in 1986). But notice that the 2013 event does not diverge from a straight-line extrapolation of the more common events. I.e. you would/should probably have expected a ~25-year return period prior to the 2013 floods.

Why do I look at weekly discharge rather than flood levels? The reason is that flood levels and instantaneous discharge are very sensitive to changes in infrastructure and adaptation/protective measures (e.g. flood plains). Weekly discharge is much less sensitive to such changes. Return period confidence intervals have been calculated using the approach described in the supplementary info to this paper.
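The basic empirical return period estimate (without the confidence intervals) is just a plotting-position calculation. A minimal sketch using Weibull plotting positions, which is one common choice and not necessarily the exact estimator behind the plots above:

```python
def empirical_return_periods(maxima):
    """Weibull plotting positions: the r-th largest of n annual (or weekly-
    binned) maxima gets an empirical return period of (n + 1) / r years."""
    x = sorted(maxima, reverse=True)
    n = len(x)
    return [(v, (n + 1) / r) for r, v in enumerate(x, start=1)]

# toy example: 9 years of maxima -> the largest is assigned a 10-year period
table = empirical_return_periods([3.1, 2.0, 4.5, 1.2, 2.8, 1.9, 2.2, 3.6, 1.5])
print(table[0])  # -> (4.5, 10.0)
```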

Trends in extreme hurricane damage

posted Sep 3, 2013, 4:46 AM by Aslak Grinsted   [ updated Nov 20, 2013, 1:34 AM ]

I have downloaded the ICAT damage estimates of normalized hurricane damages (/losses) and looked at the trends in the data. This series is often used to argue that there is no significant trend in hurricane damage.

I am convinced that it is simply impossible for any normalization procedure to remove all the non-climatic influences / societal changes that have taken place over the 20th century. I have previously argued that there are remaining biases in normalized hurricane damage records. Nevertheless, this is still a very useful record and we can extract lots of interesting info from it.

Note: the analysis below has caveats. Please do not over-interpret the plot, although I do find it rather suggestive.

Frequency of extreme damage events

The damage distribution is extremely skewed, with a few major events dominating the total damage. We can take advantage of this if we want to figure out whether there are any climatic trends in hurricane damage. The very skewed distribution means that the frequency of events above a certain damage threshold will not be strongly affected by a small remaining bias in damage. We can therefore plot the frequency of extreme damage events. This is what I do in the plot here.

A positive trend in extreme damage events of all magnitudes.   

The plot shows a positive trend in hurricane damage events of all magnitudes. Numbers show the damage ratio between the start and end of the plot. One issue is that small damage events may not have been recorded in the past, when coastal population density was much lower. So we may expect a trend-inducing bias in the frequency of the most common events (e.g. "top 241"). But this bias is unlikely to influence the trend in the most extreme damage events, which are unlikely to have been missed in the past. Detailed examination of the above figure also shows that the darker shading disappears from the plot prior to about the 1930s. I therefore recommend taking care when interpreting the trend in events with less than 389M$ damage. However, even when disregarding these, there is a clear trend towards more frequent damaging events.

- Pielke Jr (2005) argues that we can examine events >1bn$ if we want to avoid the above bias, i.e. the events with trends amounting to an increase of 1.7-2.6x in the above plot.

Note: if events of all magnitudes are increasing by a consistent ratio, then the expected value is also changing by the same ratio.

[The thresholds were chosen at the 0:10:90th percentiles. Frequencies are estimated in 5-year moving windows for the background. Trends are simply least squares fits to the unsmoothed daily data.]
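The bracketed procedure can be sketched roughly as follows. This is my own reconstruction for illustration, not the actual analysis code (the real version uses percentile thresholds and moving-window frequencies):

```python
# Count threshold exceedances per year, then fit a least squares trend.
def yearly_exceedances(years, damages, threshold):
    """Number of events per year whose damage exceeds `threshold`."""
    counts = {}
    for y, d in zip(years, damages):
        if d > threshold:
            counts[y] = counts.get(y, 0) + 1
    return counts

def ols_slope(xs, ys):
    """Ordinary least squares slope of y against x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# toy usage with made-up events
counts = yearly_exceedances([1900, 1900, 1950, 2000], [5, 1, 7, 9], threshold=4)
print(counts)                            # -> {1900: 1, 1950: 1, 2000: 1}
print(ols_slope([0, 1, 2], [1, 3, 5]))   # -> 2.0
```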

Data source: ICAT hurricane damage estimator. Damages are in units of 2013 USD.

A note about the visualization and color choices 
I wanted to plot the frequency for many different threshold choices. One issue is that the largest values on the y-axis correspond to the smallest thresholds and least severe events. The hot color scale helps because the rarest/most severe events are colored with the hottest colors.

WAIS collapse during the last interglacial?

posted Sep 2, 2013, 3:21 PM by Aslak Grinsted   [ updated Oct 17, 2013, 2:11 AM ]

There are several lines of evidence pointing to a WAIS collapse during the last interglacial (MIS 5 / Eemian).

Evidence for a WAIS collapse.

  1. Oldest ice in WAIS-divide ice core is younger than 70,000 yrs
    • Lack of interglacial ice is very suggestive.
    • Note however: bottom melting
    • Note however: Total air content from the ice core can inform on past ice sheet elevations and may help with interpretation, but there may be complications (see this issue w. total air content interpretation). Does anybody have a link to total air content data for the bottom of the WAIS core? What does it say?
  2. Octopus genetics.
  3. Sudden sea level rise during the last interglacial (WAIS best candidate). Multiple records show this around the world:
  4. High global sea level rise / Greenland not deglaciated implies Antarctic contrib.
    • Kopp et al: inversion of paleo sea level records. 
    • Rohling et al: High rates of SLR during LIG. review of evidence pointing to an instability.
    • Neem-members, (and refs therein): an upper limit on the Greenland contrib. 

Counter evidence for a complete collapse:

Partial collapse?

There have recently been several papers showing that the EAIS also reacts to climate change. So the last interglacial sea level highstand may have contributions from the EAIS, WAIS, and GrIS. It does not have to be a complete collapse to close the budget. But a partial collapse can still explain the octopus genetics.

If you have comments or additional evidence either way then please mail or tweet me. 

The units of ACE are simply wrong

posted Aug 29, 2013, 7:14 AM by Aslak Grinsted   [ updated Sep 11, 2013, 12:39 AM ]

Definition of Accumulated Cyclone Energy, as lifted from Wikipedia's entry:

"The ACE of a season is calculated by summing the squares of the estimated maximum sustained velocity of every active tropical storm (wind speed 35 knots (65 km/h) or higher), at six-hour intervals. If any storms of a season happen to cross years, the storm's ACE counts for the previous year. The numbers are usually divided by 10,000 to make them more manageable. The unit of ACE is 104kt2, and for use as an index the unit is assumed."

But those units don't match the definition and cannot be right, even if that is how everybody reports ACE. It is an integrated squared velocity over time, summed over all storms. The units are speed^2 * time. You can get rid of the "time" by reporting ACE per year rather than the raw ACE, and then you get speed^2 * time/time. But that only cancels if the numerator and denominator are in the same units of time. The number of 6-hour blocks per year is 1461.
An example:
The Atlantic 2005 season is commonly reported to have an ACE of ~250e4 kt^2. That is wrong! If we want to report in kt^2 units and take into account that there are 1461 timesteps per year, then we get ACE/yr = 1711 kt^2. I am not a fan of this way of reporting it either, but it illustrates that the definition of ACE is sloppy.
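The arithmetic is trivial to check:

```python
# ACE is summed in 6-hour steps, so its natural units are kt^2 * (6 h).
# Dividing by the number of 6-hour blocks per year gives kt^2 per year.
BLOCKS_PER_YEAR = 365.25 * 4            # = 1461 six-hour blocks per year

ace_2005 = 250e4                        # commonly reported Atlantic 2005 ACE
print(round(ace_2005 / BLOCKS_PER_YEAR))   # -> 1711 (kt^2 per year)
```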

Energy is a misnomer

The idea that cyclone energy is proportional to v^2 is also poorly justified. The energy of what, exactly, are we talking about?
The "Energy" in ACE is a misnomer. I am not alone in thinking this, but it should be testable in GCMs (scatterplot of some well defined measure of cyclone energy vs ACE). I believe that ACE is simply an index which is useful because it is easy to calculate and allows comparison with earlier work. So while it may be practical for some purposes, the theoretical justification is weak. I guess it is good that it is non-linear in v and that it takes duration into account. This is important for risk/potential damage. Potential damage would require a greater exponent though (perhaps something like v^6.5, see figure). In the figure to the right I plot log-log fits. Murnane et al. 2012 find that another type of law gives better fits [Dmg = k*exp(a*V)]. This type of law is able to capture the curvature evident from the lower slopes at low wind speeds.

The end... I cannot be bothered wasting more time with this rant.

This is a reply to @sdwx94 on twitter who says: "[ACE] still waaay better than named storm count". 

My answer: 
That depends... Big storms definitely pose a more severe threat, so if you want to rank the severity of different seasons, then ACE is better than storm count. But you could look at hurricane damage instead, or at something like wind^6 (see above). However, if you want to understand how climate change affects the risk, then it may be better to look at frequencies. At least if your mental model is one where frequencies are changing. E.g. if you are assuming some stochastic process where:
  • P(damage|maxwind,exposure) is stationary 
  • P(maxwind|storm) is stationary
  • P(storm) time-varying (due to AGW / ENSO /... )
Here expected hurricane losses per year scale linearly with frequency because all else is equal. In this case it is clearly better to look at frequencies if you want to estimate how expected annual damage changes with time. My point is that what is best depends on your expectations for how the world is put together. (I am not arguing for this particular model.) Frequency measures are obviously best at detecting changes in frequency.
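Under those three bullet-point assumptions, the linear scaling is easy to verify with a toy compound-Poisson simulation. All the distributions here are arbitrary illustrations, not fitted to anything:

```python
import math
import random

def poisson(lam, rng):
    """Poisson sample via Knuth's algorithm (fine for small rates)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        p *= rng.random()
        k += 1
    return k - 1

def mean_annual_damage(storm_rate, years=50000, seed=42):
    """Toy model: per-storm damage distribution is stationary,
    only the storm frequency varies."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(years):
        for _ in range(poisson(storm_rate, rng)):
            total += rng.expovariate(1.0)   # stationary P(damage|storm)
    return total / years

ratio = mean_annual_damage(2.0) / mean_annual_damage(1.0)
print(round(ratio, 2))   # ~2: doubling frequency doubles expected damage
```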

Finally, if you are building models to detect trends etc., I think you will get much more out of the data if you do not aggregate it into a single number per season. Aggregation amounts to discarding information. It is useful for presentation and simple analyses like straight-line fits, but you will get better results if you try to model the data in more detail.

A quick look at El Niño in CMIP5

posted Aug 20, 2013, 7:03 AM by Aslak Grinsted   [ updated Feb 27, 2014, 6:54 AM ]


El Niño is not well represented by many climate models.

In nature there is a very high anticorrelation between SOI and NINO3.4, on the order of -0.8. I have examined whether that also holds for CMIP5. It does not...

Note: Some runs may have been double counted. However, restricting to smaller subsets gives qualitatively similar results. The few models with a positive correlation appear to be due to a file mismatch between runs where KNMI Climate Explorer did not provide all metadata (I assumed r1i1p1 where this info was missing).

(The models with a correlation coefficient occasionally greater than -0.5 are: EC-EARTH, GISS-E2-H, GISS-E2-R, IPSL-CM5A-LR, bcc-csm1-1, FIO-ESM. Not all were counted in the above histogram.)

Data were downloaded from KNMI climate explorer. Intervals were 1880-present.

See also 
Coats et al. 2013: Stationarity of the tropical pacific teleconnection to North America in CMIP5/PMIP3 model simulations 

Power Spectra

The power spectrum of Nino3.4 (after removing the global warming trend) looks like this.

The model spectra have on average a reasonable shape, but there appears to be a bias towards less variability in the modelled Nino3.4. There are also models that seem completely off in their shapes.
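For anyone wanting to reproduce this kind of plot, a minimal detrended periodogram sketch. This is an assumption-laden simplification (linear detrend, raw periodogram), not the exact spectral estimator behind the figure:

```python
import numpy as np

def detrended_periodogram(x, dt=1 / 12):
    """Periodogram after removing a linear (warming) trend; dt in years,
    so frequencies come out in cycles per year."""
    t = np.arange(len(x))
    x = x - np.polyval(np.polyfit(t, x, 1), t)   # remove linear trend
    freq = np.fft.rfftfreq(len(x), d=dt)
    power = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return freq, power

# toy check: a 4-year (ENSO-like) oscillation should peak at 0.25 cyc/yr
months = np.arange(1200)                          # 100 years of monthly data
freq, power = detrended_periodogram(np.sin(2 * np.pi * months / 48))
print(freq[np.argmax(power)])                     # -> 0.25
```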

Similar for SOI:
