One good thing about the BBC is that it still tries to do balanced and rational reporting, rather than joining most of the media in trying to out-hype each other.
Still, as most of the article says, it is very difficult to know how the ocean and its inhabitants will respond, and indeed what is really going on out there. Rather, scrap all those silly space missions and put the effort into understanding our oceans a bit better.
It is also very hard to accept the data. How do they get a pH measurement from 1750? How can they say that this was an average for the oceans? In 1750 they were not taking measurements all over the world and were definitely not taking deep water measurements. Perhaps they have an average for some measurements 5 miles off a few English ports.
Apparently there has been a 0.1 pH shift in 250 years. Now how do you measure that?
pH is very difficult to measure accurately with standard titration methods, litmus paper and the like. With those you are doing well to get anywhere near a 0.1 pH tolerance; litmus paper is only rated for a 0.5 to 1.0 pH tolerance. For anything more accurate than that you really need a calibrated pH meter, and pH meters were only invented in 1934.
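For context on what a 0.1 shift means: pH is a logarithmic scale, so a 0.1 drop corresponds to roughly a 26% rise in hydrogen-ion concentration. A quick sketch (the 8.2 starting value is an assumption for illustration, not a figure from the article):

```python
def h_concentration(ph):
    """Hydrogen-ion concentration (mol/L) from pH: [H+] = 10^-pH."""
    return 10 ** -ph

# Assumed illustrative values: pH falling from 8.2 to 8.1 (a 0.1 drop).
before = h_concentration(8.2)
after = h_concentration(8.1)
increase = (after / before - 1) * 100
print(f"[H+] increase: {increase:.1f}%")  # about 25.9%
```

A large relative change in concentration, yet still one that sits inside the tolerance of the pre-1934 measurement methods described above.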
So where do these numbers come from? From computer modelling. For example http://pangea.stanford.edu/research/Oceans/GES2...
They take a computer model (a guess at best) and then "force" it.
Forcing models is a suspect practice at best. Models are fragile enough in steady state, and all models have their limits. When you force a model you push it into its less reliable working region, so the results you get will be less accurate.
As an analogy, consider a simple car "model": a car can accelerate from zero to 60 mph in ten seconds. From that we can build the simple model that this car gains 6 mph per second. At 5 seconds it will be doing about 30 mph. So what happens if we keep accelerating for 50 seconds? If we use that simple model, the car will be going 300 mph! Clearly a model that was reasonably valid from zero to roughly 15 seconds breaks down badly when we push it to 50 seconds. That is the danger of forcing a model.
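The car analogy can be written out as a two-line toy model (the 6 mph per second figure comes straight from the zero-to-60-in-ten-seconds spec):

```python
def speed_mph(t_seconds, accel_mph_per_s=6.0):
    """Naive linear model: speed = acceleration * time, forever."""
    return accel_mph_per_s * t_seconds

print(speed_mph(5))   # 30.0 - plausible
print(speed_mph(10))  # 60.0 - matches the spec the model was built from
print(speed_mph(50))  # 300.0 - the same model, forced far outside its range
```

Nothing in the code warns you that the 50-second answer is nonsense; the model happily extrapolates.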
As another example, take the gravitational model. We know the Earth's gravitational acceleration to be approximately 9.8 m/s^2. That will predict speed pretty well if you ignore friction and wind resistance. In the real world, however, there is significant air resistance, and if you drop an object off a cliff it is very difficult to determine when it will actually hit the ground. You will still get an answer accurate to better than 20%.
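A numerical sketch of that point. The cliff height and the drag constant below are made up purely for illustration (the real drag depends on the object's shape, size and mass, none of which are specified here):

```python
import math

G = 9.8    # m/s^2, gravitational acceleration
H = 100.0  # m, assumed cliff height
K = 0.05   # 1/m, assumed drag constant (drag deceleration = K * v^2)

def fall_time_vacuum(h):
    """Frictionless model: h = 0.5 * g * t^2, so t = sqrt(2h/g)."""
    return math.sqrt(2 * h / G)

def fall_time_with_drag(h, k=K, dt=1e-4):
    """Crude Euler integration with quadratic air resistance."""
    t, v, dropped = 0.0, 0.0, 0.0
    while dropped < h:
        a = G - k * v * v   # gravity minus drag
        v += a * dt
        dropped += v * dt
        t += dt
    return t

print(f"vacuum:    {fall_time_vacuum(H):.2f} s")
print(f"with drag: {fall_time_with_drag(H):.2f} s")
```

With these assumed numbers the frictionless model predicts a fall of about 4.5 seconds while the drag version takes roughly 8, which is the point: ignoring a real-world input doesn't just blur the answer, it can nearly double it.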
Like all models, climate models make some assumptions and ignore some inputs because we just don't understand the process well enough and have no way to get some of the data. That's OK when we need a rough model but is insufficient when we need the precision to measure small output changes. As a result these models have huge errors in them and the outputs are frequently smaller than the tolerance of the model.
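To see why an output smaller than the model's tolerance is a problem, here is a deliberately artificial sketch: a toy model whose single parameter is only known to about ±20% is asked to resolve a 1% change in its output. Every number and the "model" itself are invented for illustration only:

```python
import random

random.seed(42)

def toy_model(forcing, sensitivity):
    """Invented linear toy model: output scales with forcing."""
    return sensitivity * forcing

TRUE_SENSITIVITY = 1.0
runs_baseline, runs_forced = [], []
for _ in range(10_000):
    # Sensitivity known only to about +/-20% (uniform, for simplicity).
    s = TRUE_SENSITIVITY * random.uniform(0.8, 1.2)
    runs_baseline.append(toy_model(1.00, s))
    runs_forced.append(toy_model(1.01, s))  # a 1% stronger forcing

spread = max(runs_baseline) - min(runs_baseline)
signal = (sum(runs_forced) - sum(runs_baseline)) / len(runs_baseline)
print(f"parameter-driven spread: {spread:.3f}")
print(f"signal to detect:        {signal:.3f}")
```

The spread caused by parameter uncertainty comes out dozens of times larger than the 1% signal we were trying to detect, so any single run tells you essentially nothing about it.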
All of physics and chemistry is based on building models, but the difference is that some models are highly accurate and repeatable (for example, gravitational models accurate to parts per billion) and some are not (climate models with errors of hundreds of percent).
Sure, we should be acting cautiously and should stop treating the ocean as an infinite garbage dump and resource, but it is wrong to "prove" that we should do this through bogus science.
Written in March 2009