
Misc. Debris

A list of debris posts can be found here.

Plausible upper limit of WAIS ice loss

posted Nov 23, 2015, 6:57 AM by Aslak Grinsted   [ updated Dec 15, 2015, 3:09 PM ]

Ritz et al. (2015) have a recent model study of potential sea level rise from Antarctic marine ice sheet instability. In it they conclude:

"Our assessment suggests that upper-bound estimates from low-resolution models and physical arguments (up to a metre by 2100 and around one and a half by 2200) are implausible under current understanding of physical mechanisms and potential triggers". 

I like the study, yet I am not ready to accept their assessment that higher rates are implausible. There are several reasons for this, but the most important is that this is a single study, and while they do a commendable job of exploring uncertainties, history is littered with examples of overconfidence. See for example these estimates of the speed of light over time, where the stated confidence intervals were clearly too narrow (from the "Ubiquitous Overconfidence" section in Morgan et al., 2014):

The Ritz et al. modelling approach is to some extent conditional on our physical understanding, and new mechanisms have been proposed which make the ice sheet respond an order of magnitude faster (Pollard et al., 2015). For that reason alone I think it is too early to say "implausible". Ritz et al. do acknowledge the possibility of new physics in their final sentence:

But, given current understanding, our results indicate that plausible predictions of Antarctic ice-sheet instability leading to greater than around half a metre of sea-level rise by 2100 or twice that by 2200 would require new physical mechanisms23, new projections of MISI triggers, or both.

You can look at this problem from many perspectives (modelling, paleo evidence, semi-empirical), and people do. Different experts will assign different weights to different lines of evidence. I think the best (and probably most objective) way to quantify the broader uncertainty is through formal expert elicitations such as Bamber & Aspinall (2013). Maybe expert opinion has moved since BA13, and I hope that an updated formal elicitation will come out soon. There have been several new studies since BA13 suggesting that WAIS (in particular the ASE region) may already be committed to collapse. On the other hand, we have also had a range of model studies concluding that it probably would not happen that fast. But then we also have Pollard et al., which highlights a mechanism that could potentially make it go a lot faster. The case for a WAIS collapse during the last interglacial has, in my opinion, strengthened over the last few years. So, in my view it is not trivial to conclude how the full uncertainty has evolved.

Note: this was written in a rush. Please excuse any trailing thoughts and unfinished sentences.

Update: Greg Laden has a nice exchange on the subject between Richard Alley (of the Pollard et al. study) and Tamsin Edwards (of the Ritz et al. study). It is great to hear their own views. Go and read for yourself.

Update: Here's a tweet from @alevermann which compares the upper limit AIS contribution from recent studies.
Observe how the 83% interval of BA13 is reasonably similar to the 95% intervals of the other studies. I note that in AR5 they concluded that ensemble spread is smaller than the full uncertainty for several key quantities (e.g. steric expansion). They therefore inflated the model spread by a factor of 2, which essentially maps the 95% model spread to an 83% uncertainty interval.
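A quick back-of-envelope check of that mapping (my own, not from AR5), assuming normal distributions: if the full uncertainty is twice the model spread, the ensemble's 95th percentile sits at roughly the 80th percentile of the inflated distribution, close to the 83% (likely) bound.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# 95th percentile of the model ensemble, in units of the model spread sigma:
z95 = 1.645

# If the full uncertainty is assumed to be twice the model spread, the
# ensemble's 95th percentile lies at only z95/2 sigma of the full distribution:
p = norm_cdf(z95 / 2.0)
print(f"95% model bound maps to roughly the {p:.0%} bound")  # about 79%
```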

Update 2: I think the difficulty with defining an upper limit for the AIS contribution to SLR is similar to the difficulty with assigning an upper limit to climate sensitivity. Here's a figure from AR4 (chosen because it is plotted as a CDF). I think this figure makes it easy to appreciate why model spread alone does not capture the full uncertainty.

Edit: Fixed a few inverted statements. Thanks to @p_aulina.

Digitizing pdf figures using inkscape

posted Sep 28, 2015, 12:28 AM by Aslak Grinsted   [ updated Sep 30, 2015, 12:04 AM ]

I recommend using Inkscape if you want to digitize a vector figure from a pdf file. Here's how:
  1. Open Inkscape.
  2. In [inkscape preferences]-[svg output], disallow "relative coordinates".
  3. Import the pdf into Inkscape. Inkscape will let you select which page to import.
  4. Delete all the objects you are not interested in. That is, text and virtually everything else except the data points.
  5. Save as svg.
  6. Open the svg in a text editor and read the coordinates.

In the example shown here the path has a set of coordinates and some M and C commands. Usually you can just ignore these letters, but if you are curious you can read more on their meaning here. M means moveto, and C means curveto.

When you are done, you probably want to re-allow "relative coordinates", as it results in smaller files.
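The text-editor step can also be scripted. Here's a minimal sketch; the inline SVG string is a stand-in for a real Inkscape export, and only the absolute M/C path syntax from step 2 is assumed:

```python
import re
import xml.etree.ElementTree as ET

# A tiny stand-in for an Inkscape-exported SVG with absolute coordinates:
svg = '''<svg xmlns="http://www.w3.org/2000/svg">
  <path d="M 10.5,200.0 C 20.0,190.0 30.0,180.5 40.0,170.0" />
</svg>'''

root = ET.fromstring(svg)
for path in root.iter('{http://www.w3.org/2000/svg}path'):
    d = path.get('d')
    # Drop the M/C/L command letters and pull out the x,y number pairs:
    pairs = re.findall(r'(-?\d+\.?\d*),(-?\d+\.?\d*)', d)
    coords = [(float(x), float(y)) for x, y in pairs]
    print(coords)
```

Remember that SVG's y-axis points downward and the numbers are in page units, so you still need two known reference points on each axis to map these into data coordinates.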

Digitizing non-vector bitmap figures:
The svg approach only works with vector graphics. When I need to digitize a bitmap, I usually use digitize2 in matlab, or WebPlotDigitizer.

More and Stronger Hurricane surges

posted Sep 25, 2015, 3:22 AM by Aslak Grinsted   [ updated Sep 30, 2015, 12:05 AM ]

I thought I would share this quick illustration of how to interpret changes in return period / return magnitude plots.

Data on hurricane surges along the US Atlantic and Gulf coasts indicate more and stronger hurricane activity in globally warm years.

Our statistical model used for projection of hurricane surge threat also finds this relationship. 

Note that if you focus only on the rarest, most intense events, you will not be able to tell the difference between "more intense" and "more frequent".

There is a consensus on "stronger" hurricanes in a warmer world, but not on "more" (see e.g. the discussion here). It should be noted that fewer but stronger hurricanes are not necessarily incompatible with more and stronger hurricane surges.
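The note above about rare events can be made precise with a toy model (this is not the statistical model from our paper): for Poisson-occurring events with an exponential magnitude tail, the return level is x(T) = mu + sigma*ln(rate*T), so doubling the event rate is exactly equivalent to shifting every magnitude up by sigma*ln(2). The two return-level curves are identical.

```python
import math

mu, sigma = 2.0, 0.5   # illustrative exponential-tail parameters
lam = 1.0              # baseline event rate (events per year)

def return_level(T, rate, shift=0.0):
    """Magnitude exceeded on average once per T years in the toy model."""
    return mu + shift + sigma * math.log(rate * T)

for T in [10, 50, 100, 500]:
    more = return_level(T, 2 * lam)                             # "more frequent"
    stronger = return_level(T, lam, shift=sigma * math.log(2))  # "stronger"
    print(T, round(more, 6), round(stronger, 6))                # identical
```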

Sea level, the moon, and Frankenstein

posted Sep 22, 2015, 3:01 AM by Aslak Grinsted   [ updated Sep 25, 2015, 5:23 AM ]

Jens Morten Hansen (JMH) and co-authors have recently published a study where they use sine-regression to fit 5 oscillations plus a linear trend to a 160-year sea level record from waters near Denmark (a stack of local GIA-corrected tide gauge records). They then observe that the sines-plus-trend model correlates highly with the record it was fitted to, after applying a 19-year moving average. This should be unsurprising, as the procedure guarantees a high correlation regardless of input data (see figure). The result is clearly not significant in any meaningful sense of the word.

After having tuned their model to fit the data, they proceed to post-rationalize the fitted frequencies in terms of the nearest half-multiples of the lunar nodal cycle (18.6 years). I use the word "post-rationalize" to highlight that their new hypothesis was formulated in order to explain their first insignificant result. Again, this key result is clearly not significant: you can postulate any similar set of frequencies and still get a high correlation. This is illustrated in the figure on this page, where we show how sea level can be 'predicted' using the orbital periods of five exoplanets.

It is therefore not surprising that it has been hard to get this study published. After multiple rejections, it was eventually published in Journal of Coastal Research. JMH then began to publicize it, primarily in Danish media. Here are some of the headlines it generated:
This persistent push for publicity has prompted us to voice our critical concerns regarding this study. No, this is not a case of scientific censorship: the study has fundamental flaws and it is simply not convincing.

[Update: Our criticism is now out in Aktuel Naturvidenskab. Lise Brix followed up with a thorough piece interviewing outside experts.]

Our criticism - over-fitting
We wrote a critical comment highlighting the flaws in their analysis. It can be found here (in Danish). We demonstrate the non-significance by example, showing how you get high correlations regardless of the data set and periods used (see figure).

The plot on this page shows how we can fit pretty much any time series using the exact same lunar frequencies they used in their study. We illustrate this using data for the population of Denmark over time, and the frequency of the words "Frankenstein" and "Hamlet" in books over time. We can also predict sea level using the orbital periods of exoplanets. The point is that high correlations are guaranteed even when there is clearly no causal link.

From “Drawing an elephant with four complex parameters” by Jurgen Mayer, Khaled Khairy, and Jonathon Howard,  Am. J. Phys. 78, 648 (2010), DOI:10.1119/1.3254017.
The basic issue is over-fitting: their statistical model has 17 degrees of freedom (5 periods, 5 phases, 5 amplitudes, 1 trend, 1 constant). They test the quality of the resulting fit against 19-year moving averages of a 160-year series. There are at most 8-9 independent points in the smoothed series (and that is being generous, as the time series is auto-correlated). Of course it will fit with a high correlation coefficient.
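This guaranteed fit is easy to reproduce. A sketch, assuming nothing about their data: the "record" below is pure noise, the five periods are arbitrary, and once the periods are fixed the phases and amplitudes reduce to linear sine/cosine coefficients, so ordinary least squares suffices:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(1850, 2010)            # a 160-year annual "record"
y = rng.standard_normal(t.size)      # pure noise: no cycles, no trend

# Design matrix: constant, trend, and a sine/cosine pair per period.
periods = [74.4, 55.8, 27.9, 18.6, 9.3]   # any five periods will do
X = [np.ones_like(t, dtype=float), t - t.mean()]
for P in periods:
    X += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
X = np.column_stack(X)

fit = X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Compare 19-year moving averages of fit and data, as in the paper:
kernel = np.ones(19) / 19
ys = np.convolve(y, kernel, mode='valid')
fs = np.convolve(fit, kernel, mode='valid')
r = np.corrcoef(ys, fs)[0, 1]
print(f"correlation after smoothing: {r:.2f}")  # large, despite fitting noise
```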

"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk" - John von Neumann 

On the right I show a figure where some authors put von Neumann's statement to the test: they fit a cycle model to a drawing of an elephant. Their model is remarkably similar to JMH's, but with fewer free parameters and with periodicity constraints.

Post-hoc rationalization of an insignificant result

JMH and coauthors were struck by the high correlation and felt they needed a causal explanation. Our examples show that there is no need for one, as their procedure guarantees high correlations even when there is no causal link. They then come up with what I call the lunar nodal resonance hypothesis, in which all sea level variability is explained by the moon. I see this as a post-hoc rationalization of a (very) insignificant correlation, and I am not convinced.

Armed with their new nodal resonance hypothesis, they reformulate their model with the frequencies rounded to the nearest half-multiple of the nodal cycle. They then re-optimize all the remaining parameters and 'test' whether the nodal resonance model can also fit the data. We show that this is a poor test, as it too is guaranteed to give a positive result.

Personally, I also find the nodal resonance hypothesis physically far-fetched. I don't have a problem with the nodal cycle having a detectable effect on sea levels, but he also argues that multiples thereof have a large, significant effect on winds (and the AMO/NAO). That is quite a mental leap. In his own words:

"Additionally, we have [in Denmark] documented additional cyclic oscillation with a period length of 76 years. This oscillation implies, by all accounts , that we are entering a period of more easterly wind" - Jens Morten Hansen (translated by me, original here)

Other notes
Other notes
JMH presents the iterative hand-tuning of parameters as a strength, presumably because there is no incomprehensible mathematical 'magic' going on. I see it as a weakness: hand-tuning means it will probably be impossible to replicate the results exactly, and the iterative search is not guaranteed to find the globally best-fitting solution. (Furthermore, an automated procedure would facilitate Monte Carlo significance testing against a reasonable null.)

The paper makes no serious attempt at validation. They could have done a cross-calibration/validation exercise. They could also have compared their statistical hind-cast to GIA-adjusted versions of the longer tide gauge records from Northern Europe.

Their explanatory model is entirely tidal, and thus leaves no room for climate change, whether natural or anthropogenic, to have any impact on sea level. But we know that glaciers have retreated globally (Zemp et al. 2015) and therefore must have contributed to sea level rise over this period. Clearly, the global mountain glacier retreat cannot be forced by the lunar nodal oscillation. This inflexibility exposes a fundamental problem with their statistical model.

In a radio show from the Danish national broadcasting company (DR), JMH lets us know that he and a like-minded Swede tried to push the story to a major Swedish newspaper. It was rejected with a note that the newspaper never accepts letters opening with references to the Emperor's New Clothes. I find that particular reference ironic.
A Google search reveals that JMH is on the scientific board of Klimarealistene in Norway. Their motto is "it is the sun that drives climate", and their logo looks like a sun and a sine wave. Mörner is also on the board, and I wonder if he is the like-minded Swede.

There is a tendency to see cycles everywhere. JMH claims that the AMO has a clear period of 74 years and that the NAO has a clear period of 56 years (link in Danish). I strongly dispute that these two indices have 'clear' periodicities at those wavelengths. Here's a graph of the NAO: can you spot the "clear" 56-year cycle?

JMH believes that it is consensus thinking and scientific censorship in the name of climate which made it hard for him to publish his 'bomb-shell' study. Well, this type of cycle fitting is also unconvincing to climate change skeptics: here's Willis Eschenbach using the term "cyclomania" about a somewhat similar analysis by Scafetta. Nobody can argue that Willis is part of the climate change consensus (see exhibit a).

Data sources
The data in the figure come from these sources: sea level digitized from JMH's paper; word frequencies from Google Ngrams; plus the DK population over time and the exoplanet orbital periods.

The particular datasets were chosen based on these criteria: 1) the same temporal coverage as JMH's sea level series, and 2) no arguable causal link to the nodal cycle.

The new title of this page is credited to a tweet by Sou.

Ice sheet pessimists are not outliers

posted May 22, 2015, 4:50 AM by Aslak Grinsted   [ updated May 27, 2015, 7:03 AM ]

Bamber and Aspinall (2013) published a formal expert elicitation on the future of our two ice sheets and found a large spread between individual expert answers. It has been argued, by them and others, that there are outlier experts in the data, and that these drive the long tails of the pooled uncertainty distribution. I wonder if that is really true, or whether it could be a visual artifact of drawing samples from a long-tailed distribution. When you plot samples from a long-tailed distribution, the tail values will of course appear to stand out, but that does not necessarily make them outliers. The question is really whether there are two groups of experts (optimists vs. pessimists), or whether they have all been sampled from a continuum.

I focus on the expert assessment of the rate of future ice loss from the West Antarctic Ice Sheet (WAIS), as this is a key uncertainty. In particular, I use the 2012 estimates shown in red in their figure 1, reproduced on this page. I have digitized this figure to obtain the low and high values of each expert range, listed at the bottom of this page.

No evidence for outlier experts

When I rank these values from low to high and plot them, I see no evidence for a discontinuity in either the experts' upper or lower estimates. There is no split between ice sheet optimists and ice sheet pessimists. For comparison, I also plot surrogate data sampled from a log-normal distribution, so you can see how it looks when you draw a small sample from a continuous long-tailed distribution.

See also the final remarks at realclimate in relation to the Horton et al. paper: "There is no split into two groups that could be termed “alarmists” and “skeptics” – this idea can thus be regarded as empirically falsified"

Data plotted in graph:
s=[3.0 0.5;5.0 25.0;1.5 0.5;2.0 0.2;-0.5 2.0;4.0 30.0;-0.5 10.0;0.0 2.0;2.0 8.0;0.5 2.5;1.0 20.0;0.0 4.0;0.3 2.0];
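For convenience, here are the same numbers in Python (each row of the matlab array above is one expert's low and high value, as described earlier). Ranking each column shows a smooth rise toward the tail rather than a detached cluster:

```python
# One (low, high) pair per expert, transcribed from the matlab array above:
s = [(3.0, 0.5), (5.0, 25.0), (1.5, 0.5), (2.0, 0.2), (-0.5, 2.0),
     (4.0, 30.0), (-0.5, 10.0), (0.0, 2.0), (2.0, 8.0), (0.5, 2.5),
     (1.0, 20.0), (0.0, 4.0), (0.3, 2.0)]

lows = sorted(row[0] for row in s)
highs = sorted(row[1] for row in s)

# The ranked values increase smoothly; the largest values continue the
# spread rather than standing apart as a separate "pessimist" group.
print(lows)
print(highs)
```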

Technical detail: if you draw 13 samples from the surrogate distribution, there is a 51% chance that the highest value will be greater than 5. The maximum value in the surrogate plot is thus very close to the median of the sample maximum. It is not an outlier.
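That kind of figure follows from the identity P(max > x) = 1 - F(x)^n. Assuming, purely for illustration, a standard log-normal surrogate (I am not asserting these are the plot's exact parameters), the numbers work out to about 51%:

```python
import math

def lognorm_cdf(x, mu=0.0, sigma=1.0):
    """CDF of a log-normal distribution (standard parameters assumed here)."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

n, x = 13, 5.0
# Probability that the largest of n independent draws exceeds x:
p_max_exceeds = 1.0 - lognorm_cdf(x) ** n
print(f"P(max of {n} draws > {x}) = {p_max_exceeds:.2f}")  # about 0.51
```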

Loading MOLA PEDR files in matlab

posted May 19, 2015, 6:05 AM by Aslak Grinsted   [ updated May 19, 2015, 6:30 AM ]

I have written a little matlab function to read MOLA laser altimeter data from the Mars Global Surveyor, specifically the so-called Precision Experiment Data Records (PEDR). PEDRs are generated from the raw altimetry profile data with precision orbit corrections applied. They are time-ordered binary tables with attached PDS labels. You can download the MOLA PEDR data here.

Perhaps somebody will find this useful. The function is attached below. 

Thoughts on the Pachauri harassment case

posted Feb 23, 2015, 10:35 AM by Aslak Grinsted   [ updated Sep 9, 2015, 1:21 AM ]

Pachauri has been accused of sexual harassment. You can read about it here.
Pachauri's initial defense was to claim that he didn't do it, but that his phone was hacked. That is not entirely implausible considering the history of the CRU hack. Hence I initially withheld any commentary. The climate community I follow on twitter has also mostly been quiet on this. 

Since then more news has surfaced. We have seen the content of some of the messages in the case, and more women have spoken out against Pachauri.
I don't know if he is guilty or not, but these additional reports push me towards the women's accounts.

I don't usually comment on gender issues, but I think it is important to state publicly that sexual harassment is not OK, regardless of who you are or how much bad publicity it might bring to climate science.

A better windows experience

posted Jan 22, 2015, 9:21 AM by Aslak Grinsted   [ updated Mar 9, 2015, 4:11 AM ]


Chocolatey is a package manager, sort of like apt-get for Windows, or a command-line version of Ninite. You can do things like:

choco install <packagename> 
Usually you have to run choco as an administrator, which means starting the command prompt as an admin. That gets annoying pretty fast, so I recommend installing the sudo package as the next step.

choco install sudo

After this you can install a bunch of other stuff using "sudo choco install xxxx", without starting an administrative shell.

A better terminal, and putty
sudo cinst conemu 

I installed putty manually instead of using Chocolatey, because that makes it easier to get conemu to cooperate with putty. See also the comments here.

SSHFS adds the ability to mount drives over ssh

sudo cinst win-sshfs

I recommend installing FiSSH instead (a fork of win-sshfs) because it is compatible with pageant. I have not tested which fork is better.


sudo cinst SublimeText3
sudo cinst SublimeText3.powershellalias

Python2.xx (& essentials)

For scientific Python you might want to try Anaconda instead of installing Python yourself. It is not in Chocolatey.

Note: this Python section is from memory and may not be optimal.
sudo cinst python2
pip install --upgrade pip
pip install virtualenv
pip install virtualenvwrapper-win
If you already have python2, then you might want to remove it first using the control panel. 

If pip is not found, it is most likely a problem with your PATH.

Bayesian exercises for Inverse problems

posted Oct 22, 2014, 2:17 AM by Aslak Grinsted   [ updated Oct 23, 2014, 4:18 AM ]

Below is a pdf with some exercises that I use in my course on inverse problems.

Problem 0: Goal: understand why it is attractive to use log-likelihoods over likelihoods.
Problem 1: An exact model with a single model parameter. This makes it easy to plot.
Problem 2: A variant of problem 1 with an inexact model.
Problem 3: Goal: show that it is easier to calculate derived quantities, such as the expected value of the posterior, when you use MCMC.
Problem 4: A problem where it is easy to understand why the likelihood is called a likelihood.
Problem 5: A problem where you have to grasp many concepts.

Exercises 4 & 5 are probably the most fun. 
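The point of Problem 0 can be demonstrated in a few lines (a generic Gaussian example of my own, not taken from the exercise pdf): multiplying many small likelihoods underflows to zero in double precision, while summing log-likelihoods stays perfectly well-behaved.

```python
import math

# 1000 residuals, each with a modest Gaussian likelihood:
residuals = [0.5] * 1000

def gauss_pdf(r, sigma=1.0):
    """Likelihood of a single residual under a zero-mean Gaussian."""
    return math.exp(-0.5 * (r / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

likelihood = math.prod(gauss_pdf(r) for r in residuals)
loglik = sum(math.log(gauss_pdf(r)) for r in residuals)

print(likelihood)  # 0.0 -- underflows below the smallest double (~1e-308)
print(loglik)      # about -1044, perfectly representable
```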

Tracking ice motion in Landsat 8

posted Jun 11, 2014, 1:54 AM by Aslak Grinsted   [ updated Jun 11, 2014, 1:57 AM ]

I have been attempting to track ice motion in Landsat 8 files. I have only been able to find the L1T files, which are terrain corrected. Unfortunately, for my study region (the Renland ice cap, East Greenland) the DEM used in this processing is of too poor quality, which introduces large geodetic errors. DEM errors can be seen as residual perspective effects [click the image on the right to expand an animated GIF flipping between two images].

If you know where to get L1G, then please email me! 

I would therefore really like to have the L1G product and account for the terrain correction myself (using gimpdem).

Another advantage of L1G over L1T is that it has undergone less resampling; resampling degrades feature tracking performance.

(This work is partly motivated by testing a new open-source feature tracking & georeferencing toolbox for matlab that we have been working on. I am looking forward to putting it on GitHub.)

Descriptions of the levels of processing 

  • Level 1 Systematically Corrected (L1G)
    • The Level 1G (L1G) data product provides systematic radiometric and geometric accuracy, which is derived from data collected by the sensor and spacecraft. The scene will be rotated, aligned, and georeferenced to a user-specified map projection. Geometric accuracy of the systematically corrected product should be within 250 meters (1 sigma) for low-relief areas at sea level.
  • Level 1 Terrain Corrected (L1T)
    • The Level 1T (L1T) data product provides systematic radiometric accuracy, geometric accuracy by incorporating ground control points, while also employing a Digital Elevation Model (DEM) for topographic accuracy. Geodetic accuracy of the product depends on the accuracy of the ground control points and the resolution of the DEM used.
