28 February 2011

Science Impact

The Guardian has a blog post up by three neuroscientists decrying the state of hype in the media related to their field, which is fueled in part by their colleagues seeking "impact."  They write:
Anyone who has followed recent media reports that electrical brain stimulation "sparks bright ideas" or "unshackles the genius within" could be forgiven for believing that we stand on the frontier of a brave new world. As James Gallagher of the BBC put it, "Are we entering the era of the thinking cap – a device to supercharge our brains?"

The answer, we would suggest, is a categorical no. Such speculations begin and end in the colourful realm of science fiction. But we are also in danger of entering the era of the "neuro-myth", where neuroscientists sensationalise and distort their own findings in the name of publicity.

The tendency for scientists to over-egg the cake when dealing with the media is nothing new, but recent examples are striking in their disregard for accurate reporting to the public. We believe the media and academic community share a collective responsibility to prevent pseudoscience from masquerading as neuroscience.
Their analysis of why this happens has broad applicability.  They identify an . . .
. . . unacceptable gulf between, on the one hand, the evidence-bound conclusions reached in peer-reviewed scientific journals, and on the other, the heavy spin applied by scientists to achieve publicity in the media. Are we as neuroscientists so unskilled at communicating with the public, or so low in our estimation of the public's intelligence, that we see no alternative but to mislead and exaggerate?

Somewhere down the line, achieving an impact in the media seems to have become the goal in itself, rather than what it should be: a way to inform and engage the public with clarity and objectivity, without bias or prejudice.

Our obsession with impact is not one-sided. The craving of scientists for publicity is fuelled by a hurried and unquestioning media, an academic community that disproportionately rewards publication in "high impact" journals such as Nature, and by research councils that emphasise the importance of achieving "impact" while at the same time delivering funding cuts.

Academics are now pushed to attend media training courses, instructed about "pathways to impact", required to include detailed "impact summaries" when applying for grant funding, and constantly reminded about the importance of media engagement to further their careers.

Yet where in all of this strategising and careerism is it made clear why public engagement is important? Where is it emphasised that the most crucial consideration in our interactions with the media is that we are accurate, honest and open about the limitations of our research?
Neuroscience is not the only field where the cake is over-egged.

Oil Prices and Economic Growth

Last week I solicited perspectives on the relationship of oil prices and economic growth.  Thanks to all who emailed and commented.  This post shares some further thoughts.

First, there does appear to be a sense of conventional wisdom on this subject.  For instance, from last Friday's New York Times:
A sustained $10 increase in oil prices would shave about two-tenths of a percentage point off economic growth, according to Dean Maki, chief United States economist at Barclays Capital. The Federal Reserve had forecast last week that the United States economy would grow by 3.4 to 3.9 percent in 2011, up from 2.9 percent last year.
Similarly, from Friday's Financial Times:
According to published information on the Federal Reserve’s economic model, a sustained $10 rise in the oil price cuts growth by 0.2 percentage points and raises unemployment by 0.1 percentage points for each of the next two years.

Jan Hatzius, chief US economist for Goldman Sachs in New York, comes up with similar numbers ...
Today's New York Times says something very similar:
Nariman Behravesh, senior economist at IHS Global Insight, said that every $10 increase in the price of a barrel of oil reduces economic growth by two-tenths of a percentage point after one year and a full percentage point over two years.
And adds:
As a rule, every 1-cent increase takes more than $1 billion out of consumers’ pockets a year.
Reuters offers some different numbers:
In 2004, the International Energy Agency calculated that an increase of $10 per barrel would reduce GDP growth in developed countries by 0.4 percent a year over the following two years. It would also add 0.5 percent to annual inflation. The impact was more severe in the developing world: in Asia, growth would be 0.8 percent lower and inflation 1.4 percent higher.

But the IEA’s estimates were made when the oil price was just $25 per barrel. While a $10 price increase today is lower in percentage terms, the absolute level is much higher: at the current price, oil consumption accounts for more than 5 percent of global GDP.
That these numbers seem unsatisfying (at least they do to me) should not be surprising, as economists have devoted precious little attention to understanding the role of energy in economic growth.  David Stern of the Australian National University has a nice review paper titled "The Role of Energy in Economic Growth" that asserts:
The principal mainstream economic models used to explain the growth process (Aghion and Howitt, 2009) do not include energy as a factor that could constrain or enable economic growth, though significant attention is paid to the impact of oil prices on economic activity in the short run (Hamilton, 2009).
James Hamilton, cited above by Stern and a professor of economics at UCSD who blogs at Econbrowser, explains the math as follows:
Americans consume about 140 billion gallons of gasoline each year. I use the rough rule of thumb that a $10/barrel increase in the price of crude oil translates into a 25 cents per gallon increase in the price consumers will eventually pay for gasoline at the pump. Thus $10 more per barrel for crude will leave consumers with about $35 billion less to spend each year on other items, consistent with a decline in consumption spending on the order of 0.2% of GDP in a $15 trillion economy.
So much for that fancy Federal Reserve model, but I digress.  Hamilton has a new paper out on oil shocks here in PDF.
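Hamilton's arithmetic is simple enough to check for yourself. Here is a minimal sketch (Python), using only the figures from his quote -- nothing below goes beyond his own numbers:

```python
# Hamilton's back-of-the-envelope numbers, taken from the quote above.
US_GASOLINE_GALLONS_PER_YEAR = 140e9   # gallons of gasoline per year
PUMP_INCREASE_PER_10_DOLLARS = 0.25    # $/gallon at the pump per +$10/barrel
US_GDP = 15e12                         # dollars, his round number

extra_spending = US_GASOLINE_GALLONS_PER_YEAR * PUMP_INCREASE_PER_10_DOLLARS
print(f"Extra annual gasoline spending: ${extra_spending / 1e9:.0f} billion")
print(f"As a share of GDP: {extra_spending / US_GDP:.2%}")   # ~0.2%

# The NYT's companion rule of thumb -- "every 1-cent increase takes more
# than $1 billion out of consumers' pockets a year" -- follows directly:
print(f"A 1-cent/gallon rise: ${US_GASOLINE_GALLONS_PER_YEAR * 0.01 / 1e9:.1f} billion")
```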

From this cursory review, it seems that the details of the relationship of energy prices and economic outcomes remain fairly cloudy in the economic literature, with conclusions resting significantly on assumptions and the specification of relationships.  Even so, the big picture is clear enough to draw some general conclusions, such as this prescient assertion put forward by Professor Hamilton in 2009 testimony before the U.S. Senate:
Even if we see significant short-run gains in global oil production capabilities, if demand from China and elsewhere returns to its previous rate of growth, it will not be too long before the same calculus that produced the oil price spike of 2007-08 will be back to haunt us again.
The conclusion that I draw is that regardless of the best way to represent oil prices and GDP in economic models, we need to work harder to make energy supply more reliable, abundant, diverse and less expensive.

27 February 2011

Australia Carbon Tax Poll

The Flip Side of Extreme Event Attribution

I first noticed an interesting argument related to climate change in President Bill Clinton's 2000 State of the Union Address:
If we fail to reduce the emission of greenhouse gases, deadly heat waves and droughts will become more frequent, coastal areas will flood, and economies will be disrupted. That is going to happen, unless we act.
Taken literally, the sentences are not wrong. But they are misleading. Consider that the exact opposite of the sentences is also not wrong:
If we reduce the emission of greenhouse gases, deadly heat waves and droughts will become more frequent, coastal areas will flood, and economies will be disrupted. That is going to happen, if we act.
The reason why both the sentence and its opposite are not wrong is that deadly and economically disruptive disasters will occur and be more frequent independent of action on greenhouse gas emissions. If the sentences are read to imply a causal relationship between action on greenhouse gases and the effects on the impacts of extreme events -- and you believe that such a direct causal relationship exists -- the sentences are still misleading, because they include no sense of time perspective. Even if you believe in such a tight coupling between emissions and extremes, the effect of emissions reductions on extreme events won't be detectable in your lifetime, and probably for much longer than that.

Why do I bring up President Clinton's 2000 State of the Union in 2011?  Because I have seen this slippery and misleading formulation occur repeatedly in recent weeks as the issue of carbon dioxide and extreme events has hotted up.  Consider the following examples.

First John Holdren, science advisor to President Obama:
People are seeing the impact of climate change around them in extraordinary patterns of floods and droughts, wildfires, heatwaves and powerful storms.
He also says:
[T]he climate is changing and that humans are responsible for a substantial part of that - and that these changes are doing harm and will continue to do more harm unless we start to reduce our emissions
It would be easy to get the impression from such a sentence that if we "start to reduce our emissions" then climate changes will no longer be "doing harm and continue to do more harm" (or more generously, will be "doing less harm"). Such an argument is at best sloppy -- particularly for a science advisor -- but also pretty misleading.

Second, Ross Garnaut, a climate change advisor to the Australian government:
[T]he systematic, intellectual work of people who've spent their lifetimes studying these things shows that a warmer climate does lead to intensification of these sorts of extreme climatic events that we've seen in Queensland, and I think that people who are wishing to avoid those awful challenges in Queensland will be amongst the people supporting effective action on climate change.
It would be easy to get the impression that Garnaut is suggesting to current Queensland residents that future Queensland floods and/or tropical cyclones might be avoided by supporting "effective action on climate change."  If so, then the argument is highly misleading.

It is just logical that one cannot make the claim that action on climate change will influence future extreme events without first being able to claim that greenhouse gas emissions have a discernible influence on those extremes. This probably helps to explain why there is such a push to classify the attribution issue as settled. But this is just piling one bad argument on top of another.

Even if you believe that attribution has been achieved, these are bad arguments for the simple fact that detecting the effects on the global climate system of emissions reductions would take many, many (many!) decades.  For instance, even for an aggressive climate policy that would stabilize carbon dioxide at 450 ppm, detection of a change in average global temperatures would not occur until the second half of this century.  Detection of changes in extreme events would take even longer.
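To see why detection takes so long, consider a toy signal-to-noise calculation. The numbers below are my own illustrative assumptions, not output from any climate model: suppose mitigation opens a gap of about 0.004 C/yr in the global-mean warming rate relative to business as usual, against interannual variability of about 0.1 C in each world, treated as white noise (real-world autocorrelation would lengthen the wait further):

```python
import numpy as np

TREND_GAP = 0.004              # C/yr gap between the two worlds (assumed)
SIGMA = 0.10 * np.sqrt(2)      # std dev of the *difference* series (assumed)

def years_to_detect(trend, sigma, z=2.0, max_years=300):
    """Smallest record length (years) at which an OLS trend of this size
    exceeds z times its standard error, assuming white noise."""
    for n in range(3, max_years):
        sum_sq = n * (n**2 - 1) / 12.0   # sum of (t - t_bar)^2 for unit steps
        if trend > z * sigma / np.sqrt(sum_sq):
            return n
    return None

print(years_to_detect(TREND_GAP, SIGMA))   # ~40 years, i.e. around mid-century
```

Even under these generous assumptions the signal stays hidden behind natural variability for roughly four decades; noisier and rarer extreme events push detection out much further still.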

To suggest that action on greenhouse gas emissions is a mechanism for modulating the impacts of extreme events remains a highly misleading argument.  There are better justifications for action on carbon dioxide that do not depend on contorting the state of the science.

Climate Science Turf Wars and Carbon Dioxide Myopia

Over at Dot Earth Andy Revkin has posted up two illuminating comments from climate scientists -- one from NASA's Drew Shindell and a response to it from Stanford's Ken Caldeira.

Shindell's comment focuses on the impacts of action to mitigate the effects of black carbon, tropospheric ozone and other non-carbon dioxide human climate forcings, and comes from his perspective as lead author of an excellent UNEP report on the subject that is just out (here in PDF and the Economist has an excellent article here).  (Shindell's comment was apparently in response to an earlier Dot Earth comment by Raymond Pierrehumbert.)

In contrast, Caldeira invokes long-term climate change to defend the importance of focusing on carbon dioxide:
If carbon dioxide and other long-lived greenhouse gases were not building up in the atmosphere, we would not be particularly worried about the climate effect from the short-lived gases and aerosols. We are concerned about the effect of methane and black carbon primarily because they are exacerbating the threats posed by carbon dioxide.

If we eliminated emissions of methane and black carbon, but did nothing about carbon dioxide we would have delayed but not significantly reduced long-term threats posed by climate change. In contrast, if we eliminated carbon dioxide emissions but did nothing about methane and black carbon emissions, threats posed by long-term climate change would be markedly reduced.
Presumably by "climate effect" Caldeira means the long-term consequences of human actions on the global climate system -- that is, climate change. Going unmentioned by Caldeira is the fact that there are also short-term climate effects, and among those, the direct health effects of non-carbon dioxide emissions on human health and agriculture. For instance, the UNEP report estimates that:
[F]ull implementation of the measures identified in the Assessment would substantially improve air quality and reduce premature deaths globally due to significant reductions in indoor and outdoor air pollution. The reductions in PM2.5 concentrations resulting from the BC measures would, by 2030, avoid an estimated 0.7–4.6 million annual premature deaths due to outdoor air pollution.
There are a host of reasons to worry about the climatic effects of  non-CO2 forcings beyond long-term climate change.  Shindell explains this point:
There is also a value judgement inherent in any suggestion that CO2 is the only real forcer that matters or that steps to reduce soot and ozone are ‘almost meaningless’. Based on CO2’s long residence time in the atmosphere, it dominates long-term committed forcing. However, climate changes are already happening and those alive today are feeling the effects now and will continue to feel them during the next few decades, but they will not be around in the 22nd century. These climate changes have significant impacts. When rainfall patterns shift, livelihoods in developing countries can be especially hard hit. I suspect that virtually all farmers in Africa and Asia are more concerned with climate change over the next 40 years than with those after 2050. Of course they worry about the future of their children and their children’s children, but providing for their families now is a higher priority. . .

However, saying CO2 is the only thing that matters implies that the near-term climate impacts I’ve just outlined have no value at all, which I don’t agree with. What’s really meant in a comment like “if one’s goal is to limit climate change, one would always be better off spending the money on immediate reduction of CO2 emissions’ is ‘if one’s goal is limiting LONG-TERM climate change”. That’s a worthwhile goal, but not the only goal.
The UNEP report notes that action on carbon dioxide is not going to have a discernible influence on the climate system until perhaps mid-century (see the figure at the top of this post).  Consequently, action on non-carbon dioxide forcings is very much independent of action on carbon dioxide -- they address climatic causes and consequences on very different timescales, and thus probably should not even be conflated to begin with. UNEP writes:
In essence, the near-term CH4 and BC measures examined in this Assessment are effectively decoupled from the CO2 measures both in that they target different source sectors and in that their impacts on climate change take place over different timescales.
Advocates for action on carbon dioxide are quick to frame discussions narrowly in terms of long-term climate change and the primary role of carbon dioxide. Indeed, accumulating carbon dioxide is a very important issue (consider that my focus in The Climate Fix is carbon dioxide, but I also emphasize that the carbon dioxide issue is not the same thing as climate change), but it is not the only issue.

In the end, perhaps the difference in opinions on this subject expressed by Shindell and Caldeira is nothing more than an academic turf battle over what it means for policy makers to focus on "climate" -- with one wanting the term (and justifications for action invoking that term) to be reserved for long-term climate issues centered on carbon dioxide and the other focused on a broader definition of climate and its impacts.  If so, then it is important to realize that such turf battles have practical consequences.

Shindell's breath of fresh air gets the last word with his explanation of why it is that we must consider long- and short-term climate impacts at the same time, and how we balance them will reflect a host of non-scientific considerations:
So rather than set one against the other, I’d view this as analogous to research on childhood leukemia versus Alzheimer’s. If you’re an advocate for child’s health, you may care more about the former, and if you’re a retiree you might care more about the latter. One could argue about which is most worthy based on number of cases, years of life lost, etc., but in the end it’s clear that both diseases are worth combating and any ranking of one over the other is a value judgement. Similarly, there is no scientific basis on which to decide which impacts of climate change are most important, and we can only conclude that both controls are worthwhile. The UNEP/WMO Assessment provides clear information on the benefits of short-lived forcer reductions so that decision-makers, and society at large, can decide how best to use limited resources.

26 February 2011

Bringing it Home

Writing at MIT's Knight Science Journalism Tracker, Charles Petit breathlessly announces to journalists that the scientific community has now given a green light to blaming contemporary disasters on the emissions of greenhouse gases:
An official shift may just have occurred not only in news coverage of climate change, but the way that careful scientists talk about it. Till now blaming specific storms on climate change has been frowned upon. And it still is, if one is speaking of an isolated event. But something very much like blaming global warming for what is happening today, right now, outside the window has just gotten endorsement on the cover of Nature. Its photo of a flooded European village has splashed across it, “THE HUMAN FACTOR.” Extreme rains in many regions, it tells the scientific community, are not merely consistent with what to expect from global warming, but herald its arrival.

This is a good deal more immediate than saying, as people have for some time, that glaciers are shrinking and seas are rising due to the effects of greenhouse gases. This brings it home.
We recently published a paper showing that the media overall has done an excellent job in its reporting of scientific projections of sea level rise. I suspect that a similar analysis of the issue of disasters and climate change would not produce such favorable results. Of course, looking at the cover of Nature above, it might be understandable why this would be the case.

25 February 2011

Full Comments to the Guardian

The Guardian has a good article today on a threatened libel suit under UK law against Gavin Schmidt, a NASA researcher who blogs at Real Climate, by the publishers of the journal Energy and Environment.  While Gavin and I have had periodic professional disagreements, in this instance he has my full support. The E&E threat is absurd (details here).

Here are my full comments to the reporter for the Guardian, who was following up on Gavin's reference to comments I had made a while back about my experiences with E&E:
Here are some thoughts in response to your query ...

In 2000, we published a really excellent paper (in my opinion) in E&E that has stood the test of time:

Pielke, Jr., R. A., R. Klein, and D. Sarewitz (2000), Turning the big knob: An evaluation of the use of energy policy to modulate future climate impacts. Energy and Environment 2:255-276.
http://sciencepolicy.colorado.edu/admin/publication_files/resource-250-2000.07.pdf

You'll see that paper was in only the second year of the journal, and we were obviously invited to submit a year or so before that. It was our expectation at the time that the journal would soon be ISI listed and it would become like any other academic journal. So why not publish in E&E?

That paper, like a lot of research, required a lot of effort.  So it was very disappointing to see E&E, in the years that followed, identify itself as an outlet for alternative perspectives on the climate issue. It has published a number of low-quality papers and a high number of opinion pieces, and as far as I know it never did get ISI listed.

Boehmer-Christiansen's quote about following her political agenda in running the journal is one that I also have cited on numerous occasions as an example of the pathological politicization of science. In this case the editor's political agenda has clearly undermined the legitimacy of the outlet.  So if I had a time machine I'd go back and submit our paper elsewhere!

A consequence of the politicization of E&E is that any paper published there is subsequently ignored by the broader scientific community. In some cases perhaps that is justified, but I would argue that it provided a convenient excuse to ignore our paper on that basis alone, and not on the merits of its analysis. So the politicization of E&E enables a like response from its critics, which many have taken full advantage of. For outside observers of climate science this action and response together give the impression that scientific studies can be evaluated simply according to non-scientific criteria, which ironically undermines all of science, not just E&E.  The politicization of the peer review process is problematic regardless of who is doing the politicization because it more readily allows for political judgments to substitute for judgments of the scientific merit of specific arguments.  An irony here of course is that the East Anglia emails revealed a desire to (and some would say success in) politicize the peer review process, which I discuss in The Climate Fix.

For my part, in 2007 I published a follow-on paper to the 2000 E&E paper that applied and extended a similar methodology.  This paper passed peer review in the Philosophical Transactions of the Royal Society:

Pielke, Jr., R. A. (2007), Future economic damage from tropical cyclones: sensitivities to societal and climate changes. Philosophical Transactions of the Royal Society A 365 (1860) 2717-2729
http://sciencepolicy.colorado.edu/admin/publication_files/resource-2517-2007.14.pdf

So, in my case all's well that ends well. Over the long run I am confident that good ideas will win out over bad ideas, but without care to the legitimacy of our science institutions -- including journals and peer review -- that long run will be a little longer.

Please follow up if anything is unclear or if you have other questions ...

Be Careful What You Wish For

Two members of the US Congress, Representatives Henry Waxman and Bobby Rush, have called for a hearing on two recent papers in Nature.  In their letter to the Republican chairmen of the House Energy and Commerce Committee and its Energy and Power Subcommittee Waxman and Rush write:
We believe it would be irresponsible for the Committee to ignore the mounting scientific evidence linking strange and dangerous weather to rising carbon levels in the atmosphere.
Waxman and Rush explain what they think is implicated by the Nature papers:
The potential implications of these results are illustrated by multiple recent weather disasters. In the United States, severe flooding in Arkansas, Kentucky, Mississippi, and Tennessee killed dozens and caused widespread property damage last year. Some scientists see evidence that the bitterly cold storms that gripped our nation this winter could be tied to climate changes. Internationally, unprecedented floods in Pakistan last year submerged one-fifth of the country, killing thousands and devastating livelihoods. Similarly, floods following heavy rains displaced hundreds of thousands of people in northeastern Australia and damaged the agricultural and mining sectors. In Russia, yields of wheat and barley in 2010 fell by 30% following a summer of record-breaking heat and drought. This month, the United Nations warned that the worst drought in decades threatens the wheat crop in China.
The over-hyping of this issue has left Waxman and Rush exposed out on a thin, weak limb.  If they are lucky, their call for a hearing will be ignored.

24 February 2011

Wanted

Two tickets to Arsenal v. Man U on May 1, 2011.  Send me an email.

Economists: Riddle Me This

Writing in yesterday's Financial Times, James Mackintosh says that,
The rough rule of thumb economists use is that a 10 per cent rise in the oil price equals half a percentage point off global growth.
I'm no economist, so I'm hoping that one will show up and explain to me how to reconcile this statement with oft-made claims that increasing the costs of energy (e.g., through a high carbon tax) will boost GDP, or less optimistically, reduce it imperceptibly.  I am particularly interested in pointers to the academic literature.  Thanks all!
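To sharpen the riddle, here is a rough translation of the FT rule of thumb into carbon-tax terms. This is a sketch under my own assumptions -- oil near $100/barrel and roughly 0.43 tonnes of CO2 emitted per barrel burned -- not anything from Mackintosh's column:

```python
OIL_PRICE = 100.0        # $/barrel, assumed
CO2_PER_BARREL = 0.43    # tonnes CO2 per barrel burned, an EPA-style factor

def growth_hit(price_rise, pp_per_10pct=0.5):
    """FT rule of thumb: a 10% oil price rise -> -0.5 pp of global growth."""
    return pp_per_10pct * (price_rise / OIL_PRICE) / 0.10

tax = 25.0                                # $/tonne CO2, a modest carbon tax
equivalent_rise = tax * CO2_PER_BARREL    # what the tax adds to a barrel's cost
print(f"${tax:.0f}/tCO2 tax ~ ${equivalent_rise:.2f}/barrel on oil")
print(f"Implied growth effect by the FT rule: -{growth_hit(equivalent_rise):.2f} pp")
```

By that arithmetic a modest carbon tax looks, in oil-price terms, much like the price rise that economists say shaves half a point off global growth. Hence the riddle.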

And a note to the FT:  How about enabling video embed to your excellent videos like this one?

What's a Science Advisor For?

The Chief Scientist for Australia, Penny Sackett, resigned this week halfway into her five-year term, citing personal and professional reasons.  The Australian media has reported that during her tenure Professor Sackett met with Kevin Rudd once and has never briefed Julia Gillard. In a Senate hearing yesterday, Professor Sackett downplayed any conflict.

Even so, the distance from top level policy making is at distinct odds with how the position of Chief Scientist is officially described (PDF):
The Chief Scientist for Australia, Professor Penny D Sackett, provides high-level independent advice to the Prime Minister and other Ministers on matters relating to science, technology and innovation. . . While responsive to requests from Government for advice generated as a result of emerging issues, Professor Sackett also provides proactive advice to the Prime Minister on issues she deems important in securing Australia’s wellbeing into the future.
Nature reports the views of a few leading Australian scientists on the role of Chief Scientist:
“I don’t think the chief scientist’s role is very highly regarded by Australian governments,” said Peter Doherty, a Nobel prize-winning immunologist from the University of Melbourne. Doherty said Sackett was a victim of the new political landscape in Australia that evolved while she was in office, largely shaped by the fact that the government is now in a minority. “I think a new appointee would have to be pretty naïve going into this parliament if they thought they were going to make much of a difference, except on something the government is already looking to do, such as putting a price on carbon.”

“I suspect that Penny Sackett probably signed up for a job that was different to the one that she ended up having to do,” agreed materials scientist Cathy Foley, president of the Federation of Australian Scientific and Technological Societies, who served with Sackett on the Prime Minister’s Science, Engineering and Innovation Council. “I think when it comes to policy development, science has been the loser for the sake of political concerns.”
From 2005-2007 I conducted interviews of 7 former science advisors to the US president, who had served presidents from Lyndon Johnson to George W. Bush.  What we learned from them suggests that we should not be too surprised by what has happened in Australia. Our analysis of those interviews concludes as follows (PDF):
The position of science advisor has evolved and changed over the past half-century, as has both science and government. The experiences of the science advisors that we were fortunate to visit with chronicle those changes. Underneath the anecdotes and stories that describe presidents over the past half-century is a deeper story, one of the long-term decline of the influence of the president’s science advisor while at the same time, the importance of expertise to government has increased tremendously. The decline of the science advisor, juxtaposed against the rise of government expertise, provides ample reason to reconsider the future role of the presidential science advisor, and to set our expectations for that role accordingly.
Professor Sackett's departing advice is well worth heeding:
When quizzed about what improvements could be made to the role of chief scientist, Professor Sackett said it was the Government's responsibility to clarify what role the chief scientist should play.

"I think the responsibility rests firmly with the Government to make it, to decide how the role of chief scientist for Australia will fit into the variety of advice that it receives on matters of science"
For further reading:

R. A. Pielke, Jr. and R. Klein (2009). The Rise and Fall of the Science Advisor to the President of the United States. Minerva 47 (1) 7-29, doi: 10.1007/s11024-009-9117-3.

23 February 2011

"Biotech Crops" -- Here to Stay

The International Service for the Acquisition of Agri-Biotech Applications (ISAAA) -- a catchy organizational name -- has issued a report on the global status of genetically modified crops, or as the ISAAA likes to call them, biotech crops.  The numbers that it reports are pretty remarkable.  The two figures above come from the report and show the growth in areal extent (top) and the adoption rates by crop (bottom).

Here is an excerpt from the executive summary of the report (emphasis in original) which provides some aggregate numbers:
Remarkably, in 2010, the accumulated hectarage planted during the 15 years, 1996 to 2010, exceeded for the first time, 1 billion hectares, which is equivalent to more than 10% of the enormous total land area of the USA (937 million hectares) or China (956 million hectares). It took 10 years to reach the first 500 million hectares in 2005, but only half that time, 5 years, to plant the second 500 million hectares to reach a total of 1 billion hectares in 2010.

A record 87-fold increase in hectarage between 1996 and 2010, making biotech crops the fastest adopted crop technology in the history of modern agriculture

The growth from 1.7 million hectares of biotech crops in 1996 to 148 million hectares in 2010 is an unprecedented 87-fold increase, making biotech crops the fastest adopted crop technology in the history of modern agriculture.
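The headline arithmetic checks out, as a quick calculation shows:

```python
# ISAAA's figures: 1.7 million hectares (1996) to 148 million hectares (2010)
start, end, years = 1.7e6, 148e6, 2010 - 1996
print(f"fold increase: {end / start:.0f}x")                            # ~87
print(f"implied compound growth: {(end / start) ** (1 / years) - 1:.0%}/yr")  # ~38%
```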
The ISAAA, which clearly champions biotech crops, explains why they think these crops are so important in terms of innovation, and singles out the EU as a laggard:
It is evident that the world’s economic axis is shifting in favor of the emerging nations of the world, and this has implications for the development of all products, including biotech crops. Increased participation in innovative approaches in plant biotechnology is already evident in the lead developing countries of BRIC: Brazil in Latin America, and India and China in Asia. Emerging countries are no longer satisfied to have only low labor costs as their only comparative advantage, but operate dynamic incubators of innovation, producing new and competing products and employing innovation to redesign products for customers at significantly lower cost, to meet fast growing domestic and international demands. Thus, “frugal innovation” is not only an issue of cheap labor but increasingly will apply to the designing and redesigning of more affordable products and processes which will require both technological and business innovation.

All this implies that the western world may be losing out to the emerging countries, but this is not necessarily so. Of the Fortune 500 companies, 98 have R&D activities in China and 63 in India, and these include collaborative efforts on biotech crops with both public and private partners in their respective host countries. The philosophy underlying these investments by the multinationals in the developing country BRICs is that they will retain a comparative advantage in innovation, in addition to being well placed to participate in the new markets that will be developed to meet the needs of an increasingly wealthy population of more than 2.5 billion in their home countries. This compares with only 303 million in the US and 494 million in the 27 EU countries. Given that the nature of innovation is to feed upon itself, “innovation in the emerging world will encourage rather than undermine innovation in the western world” (The Economist, 15 April 2010).

The current unprecedented explosive growth and change occurring in the emerging countries will have enormous implications for the rest of the world, and will demand more innovative solutions from successful developers. The global share of the emerging world’s GDP increased from 36% in 1980 to 45% in 2008 and is predicted to reach 51% by 2014. In 2009, productivity in China grew by 8.2% compared with 1.0% in the US and a decline of 2.8% in the UK. Emerging country consumers have outspent the US since 2007 and are currently at 34% of global spending versus 27% in the USA. Thus, emerging country consumers are, and will continue to demand a better quality of life including a better diet, with significantly more meat, which in turn drives increased demand for the principal biotech feed stocks, maize and soybean.

Consistent with other lead countries of the world, the policy guidelines of the EU strongly promote innovation as a general policy in science but it has chosen not to practice what it preaches when it is applied to biotech crops – one of the most innovative approaches to crop technology. If innovation is the key to success with crop technology this could seriously disadvantage the EU. Some multinationals involved in crop biotechnology have already reduced  R&D activities in some EU countries and, where possible, are relocating activities to outside the EU because it does not provide a congenial environment for the development of biotech crops which are viewed in the EU as a threat and not as an opportunity.
Whatever you might think about GM crops, they appear to be here to stay.  It seems only a matter of time before the EU follows the US, Asia and South America in their adoption.  I'll venture that pretty soon we'll all be calling them "biotech crops."

22 February 2011

A Science Assessment as an Honest Broker of Policy Options

Last week, Science Express published an advance copy of a Policy Forum on the new Intergovernmental Platform on Biodiversity and Ecosystem Services by Charles Perrings and colleagues.  The article calls for an approach to science assessments far different from the one that has been employed by the IPCC.  The authors explain:
[R]ather than investigating consequences of specific policies indentified (sic) by a governing body, most previous assessments were constructed around scenarios devised by scientists
The alternative approach that they recommend has three components:
(i) The governing body of IPBES, the plenary, should ask for assessment of consequences of specific policies and programs at well defined geographical scales. (ii) Projections of changes in biodiversity and ecosystem services should take the form of conditional predictions of the consequences of these policies and programs. And (iii), capacity-building efforts should enhance skills needed for policy-oriented assessment within IPBES and should catalyze external funding for underpinning science and science-based policy development.
An approach to assessment focused on identifying and even evaluating policy options will not be without its difficulties.  However, it also has great promise to deliver far more policy-relevant information to decision makers than has been the case in other international assessments.  The authors conclude:
For IPBES to provide the policy support envisaged in the Busan outcome, it needs to answer questions that are meaningful to the nations that have brought it into being. This requires an approach that differs from those adopted in previous assessments—in the functions and membership of the plenary, in assessment methodology, and in decision support. The IPBES plenary should specify the policy options to be evaluated; assessment should include quantitative conditional prediction of the consequences of those options; and reports should enable policy-makers to evaluate the relative merits of mitigation, adaptation, and stabilization strategies.
 The lead author explains:
Discussions between decision makers and scientists should start with the question 'what do governments want and what options do they have?' Knowing the likely consequences of alternative policy options is critical to choosing the best strategy.
The new approach being taken by IPBES is well worth watching as a highly visible experiment in the "honest brokering of policy options."

Notes from the (Postcard) Underground

Many thanks, you guys are great!

18 February 2011

Linguistic Innovation: The Story of OK

The BBC has a great column up about a linguistic innovation -- the origins of the word "OK" -- by the author of a book telling this story.  Here is an excerpt from the column:
On 23 March 1839, OK was introduced to the world on the second page of the Boston Morning Post, in the midst of a long paragraph, as "o.k. (all correct)".

How this weak joke survived at all, instead of vanishing like its counterparts, is a matter of lucky coincidence involving the American presidential election of 1840.

One candidate was nicknamed Old Kinderhook, and there was a false tale that a previous American president couldn't spell properly and thus would approve documents with an "OK", thinking it was the abbreviation for "all correct".

Within a decade, people began actually marking OK on documents and using OK on the telegraph to signal that all was well. So OK had found its niche, being easy to say or write and also distinctive enough to be clear.

But there was still only restricted use of OK. The misspelled abbreviation may have implied illiteracy to some, and OK was generally avoided in anything but business contexts, or in fictional dialogue by characters deemed to be rustic or illiterate.

Indeed, by and large American writers of fiction avoided OK altogether, even those like Mark Twain who freely used slang.

But in the 20th Century OK moved from margin to mainstream, gradually becoming a staple of nearly everyone's conversation, no longer looked on as illiterate or slang.

Intolerance: Virtue or Anti-Science "Doublespeak"?

John Beddington, the Chief Scientific Advisor to the UK government, has identified a need to be "grossly intolerant" of certain views that get in the way of dealing with important policy problems:
We are grossly intolerant, and properly so, of racism. We are grossly intolerant, and properly so, of people who [are] anti-homosexuality... We are not—and I genuinely think we should think about how we do this—grossly intolerant of pseudo-science, the building up of what purports to be science by the cherry-picking of the facts and the failure to use scientific evidence and the failure to use scientific method.

One way is to be completely intolerant of this nonsense. That we don't kind of shrug it off. We don't say: ‘oh, it's the media’ or ‘oh they would say that wouldn’t they?’ I think we really need, as a scientific community—and this is a very important scientific community—to think about how we do it.
I really believe that. . . we need to recognise that that is a pernicious influence, it is an increasingly pernicious influence and we need to be thinking about how we can actually deal with it. I really would urge you to be grossly intolerant. We should not tolerate what is potentially something that can seriously undermine our ability to address important problems. . .

I'd urge you—and this is a kind of strange message to go out—but go out and be much more intolerant.
Just what the science community needs -- advice to be even more intolerant of wrongheaded, uninformed or politically unhelpful views.

Fortunately, Andrew Stirling, research director of the Science Policy Research Unit (which these days I think just goes by SPRU) at the University of Sussex, provides a much healthier perspective:
What is this 'pseudoscience'? For Beddington, this seems to include any kind of criticism from non-scientists of new technologies like genetically modified organisms, much advocacy of the 'precautionary principle' in environmental protection, or suggestions that science itself might also legitimately be subjected to moral considerations.

Who does Beddington hold to blame for this "politically or morally or religiously motivated nonsense"? For anyone who really values the central principles of science itself, the answer is quite shocking. He is targeting effectively anyone expressing "scepticism" over what he holds to be 'scientific' pronouncements—whether on GM, climate change or any other issue. Note, it is not irrational "denial" on which Beddington is calling for 'gross intolerance', but the eminently reasonable quality of "scepticism"!

The alarming contradiction here is that organised, reasoned, scepticism—accepting rational argument from any quarter without favour for social status, cultural affiliations  or institutional prestige—is arguably the most precious and fundamental quality that science itself has (imperfectly) to offer. Without this enlightening aspiration, history shows how society is otherwise all-too-easily shackled by the doctrinal intolerance, intellectual blinkers and authoritarian suppression of criticism so familiar in religious, political, cultural and media institutions.
Stirling concludes:
[T]he basic aspirational principles of science offer the best means to challenge the ubiquitously human distorting pressures of self-serving privilege, hubris, prejudice and power. Among these principles are exactly the scepticism and tolerance against which Beddington is railing (ironically) so emotionally! Of course, scientific practices like peer review, open publication and acknowledgement of uncertainty all help reinforce the positive impacts of these underlying qualities. But, in the real world, any rational observer has to note that these practices are themselves imperfect. Although rarely achieved, it is inspirational ideals of universal, communitarian scepticism—guided by progressive principles of reasoned argument, integrity, pluralism, openness and, of course, empirical experiment—that best embody the great civilising potential of science itself. As the motto of none other than the Royal Society loosely enjoins (also sometimes somewhat ironically) "take nothing on authority". In this colourful instance of straight talking then, John Beddington is himself coming uncomfortably close to a particularly unsettling form of unscientific—even (in a deep sense) anti-scientific—'double speak'.


Anyone who really values the progressive civilising potential of science should argue (in a qualified way as here) against Beddington's intemperate call for "complete intolerance" of scepticism. It is the social and human realities shared by politicians, non-government organisations, journalists and scientists themselves, that make tolerance of scepticism so important. The priorities pursued in scientific research and the directions taken by technology are all as fundamentally political as other areas of policy. No matter how uncomfortable and messy the resulting debates may sometimes become, we should never be cowed by any special interest—including that of scientific institutions—away from debating these issues in open, rational, democratic ways. To allow this to happen would be to undermine science itself in the most profound sense. It is the upholding of an often imperfect pursuit of scepticism and tolerance that offer the best way to respect and promote science. Such a position is, indeed, much more in keeping with the otherwise-exemplary work of John Beddington himself.
Stirling's eloquent response provides a nice tonic to Beddington's unsettling remarks. Nonetheless, Beddington's perspective should be taken as a clear warning as to the pathological state of highly politicized science these days.

(H/T Bishop Hill)

Predistortion in Action?

President Obama's science advisor John Holdren has this to say to the BBC today:
People are seeing the impact of climate change around them in extraordinary patterns of floods and droughts, wildfires, heatwaves and powerful storms.
It is my view of the literature that a defensible scientific position can be presented with respect to "extraordinary patterns" in maximum daily temperatures, and perhaps even drought and wildfires. But in floods and powerful storms?  No way, not even close. Then Dr. Holdren has this to say:
I think it is going to be very hard to persuade people that climate change is somehow a fraud.
By making claims that are scientifically without merit, he makes such persuasion that much easier.  But perhaps he is just engaging in a bit of innocent predistortion.

Predistortion

Yesterday in Washington, DC I participated on a panel on science and politics in a pre-AAAS workshop on "Responsible Research Practices in a Changing Research Environment."  Also on the panel was former Congressman Bill Foster, of Illinois, one of three PhD scientists in the last Congress. He lost his seat in 2010 to a "Sarah Palin-supported Tea Party candidate."  Foster is presently engaged in a worthwhile effort seeking to start up a political action committee -- Albert's List -- to get more scientists to run for office.  Here I focus on an interesting aspect of Foster's presentation at the AAAS workshop yesterday.

In his short presentation, he explained that politicians often look to experts to provide information that is useful in advancing their agenda, typically information that can be conveyed in soundbite fashion.  He explained that scientists should expect that the information they bring to the political process, such as through testimony before congressional committees, will inevitably be "distorted" along the way.

He then raised what he called "a difficult ethical question" -- if a scientist knows that their message will be distorted in the political process, to what degree should s/he predistort their message in hopes that what comes out the other end is a closer approximation to reality? Foster cited as an analogy cheap earphones that achieve high quality through software that counterbalances distortion. Foster warned that such predistortion might be "heading down a slippery slope" but he was fairly ambiguous about the tactic.
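Foster's earphone analogy is worth unpacking, because it contains the seed of the problem: in signal processing, predistortion works only when the distortion is known exactly and can be inverted. Here is a toy illustration (my own, with soft clipping standing in for the "distortion"):

```python
import numpy as np

def distort(x):
    # the channel: soft clipping, as in a cheap earphone driven too hard
    return np.tanh(x)

def predistort(x):
    # inverse of tanh; only defined for |x| < 1, i.e. for messages the
    # channel is actually capable of carrying
    return np.arctanh(x)

signal = np.linspace(-0.9, 0.9, 7)          # the intended "message"
naive = distort(signal)                     # sent as-is: comes out warped
corrected = distort(predistort(signal))     # predistorted first: comes out true

print(np.round(naive - signal, 3))          # visibly off
print(np.round(corrected - signal, 12))     # ~zero: distortion cancelled
```

The catch is obvious once stated: the political process is neither a known nor a stable transfer function, so there is no arctanh to apply.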

I am not so much interested in Foster's views on the subject as I am in the concept itself, which I think is very useful for helping to raise issues that often come up at the messy interface of science and politics. Longtime readers will probably guess that I am no fan of predistortion, and in fact, I think it is an enormously problematic practice for various parts of the science community today.

The way to deal with distortion in the political process is not through counterbalancing distortion, but through effective science institutions that can serve as trusted arbiters of knowledge and honest brokers of policy alternatives.  What do you think?  Is predistortion sometimes a justifiable practice from our experts?

Breakthrough Report on Rebound

Whatever one thinks about the so-called "rebound effect" or the role of efficiency in contributing to emissions reductions goals, the Breakthrough Institute (where I am a Senior Fellow) has done a great service to the discussion by publishing a new literature review on the subject.  You can find the review here in PDF and a PowerPoint overview here in PPT.  They discuss the new report on their blog here. This massive effort represents think tanks at their very best, and is likely to be the definitive literature review for years to come.  Whatever your views or level of expertise, if you want to dive into the subject, I can think of no better place to start.

17 February 2011

Outlier

Flood Disasters and Human-Caused Climate Change

[UPDATE: Gavin Schmidt at Real Climate has a post on this subject that  -- surprise, surprise -- is perfectly consonant with what I write below.]

[UPDATE 2: Andy Revkin has a great post on the representations of the precipitation paper discussed below by scientists and related coverage by the media.]  

Nature published two papers yesterday that discuss increasing precipitation trends and a 2000 flood in the UK.  I have been asked by many people whether these papers mean that we can now attribute some fraction of the global trend in disaster losses to greenhouse gas emissions, or even recent disasters such as in Pakistan and Australia.

I hate to pour cold water on a really good media frenzy, but the answer is "no."  Neither paper actually discusses global trends in disasters (one doesn't even discuss floods) or even individual events beyond a single flood event in the UK in 2000.  But still, can't we just connect the dots?  Isn't it just obvious?  And only deniers deny the obvious, right?

What seems obvious is sometimes just wrong.  This of course is why we actually do research.  So why is it that we shouldn't make what seems to be an obvious connection between these papers and recent disasters, as so many have already done?

Here are some things to consider.

First, the Min et al. paper seeks to identify a GHG signal in global precipitation over the period 1950-1999.  They focus on one-day and five-day measures of precipitation.  They do not discuss streamflow or damage.  For many years, an upwards trend in precipitation has been documented, and attributed to GHGs, even back to the 1990s (I co-authored a paper on precipitation and floods in 1999 that assumed a human influence on precipitation, PDF), so I am unsure what is actually new in this paper's conclusions.

However, even accepting that precipitation has increased and can be attributed in some part to GHG emissions, corresponding increases in streamflow (floods) or damage have not been shown. How can this be?  Think of it like this -- precipitation is to flood damage as wind is to windstorm damage.  It is not enough to say that it has become windier to make a connection to increased windstorm damage -- you need to show an increase in the specific wind events that actually cause damage. There are a lot of days that could be windier with no increase in damage; the same goes for precipitation.

My understanding of the literature on streamflow is that increases in peak streamflow commensurate with the increases in precipitation have not been shown, and this is a robust finding across the literature.  For instance, one recent review concludes:
Floods are of great concern in many areas of the world, with the last decade seeing major fluvial events in, for example, Asia, Europe and North America. This has focused attention on whether or not these are a result of a changing climate. River flows calculated from outputs from global models often suggest that high river flows will increase in a warmer, future climate. However, the future projections are not necessarily in tune with the records collected so far – the observational evidence is more ambiguous. A recent study of trends in long time series of annual maximum river flows at 195 gauging stations worldwide suggests that the majority of these flow records (70%) do not exhibit any statistically significant trends. Trends in the remaining records are almost evenly split between having a positive and a negative direction.
Absent an increase in peak streamflows, it is impossible to connect the dots between increasing precipitation and increasing floods.  There are of course good reasons why a linkage between increasing precipitation and peak streamflow would be difficult to make, such as the seasonality of the increase in rain or snow, the large variability of flooding and the human influence on river systems.  Those difficulties of course translate directly to a difficulty in connecting the effects of increasing GHGs to flood disasters.
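For readers curious what "statistically significant trends" means in the review quoted above, the standard tool is a Mann-Kendall-type trend test applied to each gauge's record of annual maximum flows. A sketch on a synthetic record -- my own illustration, not real gauge data:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(42)

# Invented 50-year record of annual maximum streamflow; the Gumbel
# distribution is the classic choice for annual maxima.
years = np.arange(1950, 2000)
annual_max_flow = rng.gumbel(loc=500, scale=120, size=years.size)  # m^3/s

# Kendall's tau of flow against time is the Mann-Kendall trend statistic.
tau, p_value = kendalltau(years, annual_max_flow)
verdict = "significant" if p_value < 0.05 else "no significant"
print(f"tau = {tau:.2f}, p = {p_value:.2f} -> {verdict} trend")
```

Run over 195 real gauge records, it is this kind of test that yields the 70% no-trend figure.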

Second, the Pall et al. paper seeks to quantify the increased risk of a specific flood event in the UK in 2000 due to greenhouse gas emissions.  It applies a methodology that was previously used with respect to the 2003 European heatwave.
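For those unfamiliar with that methodology, the basic bookkeeping of probabilistic event attribution is simple to state: run large ensembles of the season in question with and without anthropogenic forcing, count threshold exceedances in each, and report the change in risk. The counts below are invented purely to show the calculation:

```python
# Hypothetical ensemble counts -- not from Pall et al.
n_forced, hits_forced = 2000, 90      # ensemble with greenhouse gas forcing
n_natural, hits_natural = 2000, 45    # counterfactual, forcing removed

p1 = hits_forced / n_forced           # event probability, world as it is
p0 = hits_natural / n_natural         # event probability, counterfactual world
far = 1 - p0 / p1                     # "fraction of attributable risk"
print(f"P1 = {p1:.3f}, P0 = {p0:.3f}, FAR = {far:.2f}")
# FAR = 0.50 here would read: the forcing roughly doubled the risk
```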

Taking the paper at face value, it clearly states that in England and Wales, there has not been an increasing trend in precipitation or floods.  Thus, floods in this region are not a contributor to the global increase in disaster costs.  Further, there has been no increase in Europe in normalized flood losses (PDF).  Thus, the Pall et al. paper is focused on attribution of a single event, and not on trend detection in the region that it examines, much less any broader context.

More generally, the paper utilizes a seasonal forecast model to assess risk probabilities.  Given the performance of seasonal forecast models in actual prediction mode, I would expect many scientists to remain skeptical of this approach to attribution. Of course, if this group can show an improvement in the skill of actual seasonal forecasts by using greenhouse gas emissions as a predictor, they will have a very convincing case.  That is a high hurdle.

In short, the new studies are interesting and add to our knowledge.  But they do not change the state of knowledge related to trends in global disasters and how they might be related to greenhouse gases.  But even so, I expect that many will still want to connect the dots between greenhouse gas emissions and recent floods.  Connecting the dots is fun, but it is not science.

16 February 2011

What Does Climate Change Mean for Investment Portfolio Risk?

Mercer, a global financial services firm, has just issued a major new report on the risks of climate change to investment portfolios.  The report was produced with the help of the LSE Grantham Institute and the World Bank.

A Reuters news story on the report says the following:
Climate change could put trillions of investment dollars at risk over the next 20 years, a global study released on Wednesday said, calling for pension funds and other investors to overhaul how they allocate funds.
However, when you actually take a look at the report, you find that it does not say what Reuters (or others) says it does.  The report presents several top-line conclusions about portfolio risks over the next 20 years.

First, climate policies might have a large financial impact on portfolio risk:
[C]limate policy could contribute as much as 10% to overall portfolio risk: Uncertainty around climate policy is a significant source of portfolio risk for institutional investors to manage over the next 20 years. The economic cost of climate policy for the market to absorb is estimated to amount to as much as approximately $8 trillion cumulatively, by 2030. Additional investment in technology is estimated to increase portfolio risk for a representative portfolio by about 1%, although global investment could accumulate to $4 trillion by 2030, which is expected to be beneficial for many institutional portfolios.
Second, what about the risks caused by actual changes in the climate? (emphasis added)
The economic model used in this study excludes physical risks of climate change which are not consistently predicted by the range of scientific models, and primarily for this reason concludes that, over the next 20 years, the physical impact of changes to the climate are not likely to affect portfolio risk significantly. However, this does not imply the absence of significant (and growing) risk, as shown by recent climate-related disasters that investors need to monitor closely.
Thus, the risk to financial portfolios in the report is entirely due to climate policies and not the effects of "the physical impact of changes to the climate."  Of course, a news story that begins -- "Climate change policies could put trillions of investment dollars at risk" -- doesn't really have the same ring to it.

Why More Precipitation Does Not Necessarily Mean More Flood Damage

It is common to see studies that find an increase in precipitation (whatever the cause) quickly linked to claims of increasing floods.  Making such a link, however comfortable and intuitive, is not so straightforward in practice.

In a peer-reviewed essay in the Bulletin of the American Meteorological Society in 1999, we explained the "apparent paradox" between observations of increasing precipitation and, at the same time, a lack of trends in peak streamflow (floods) in the same regions.  Here is an excerpt from that piece:
Recently, Lins and Slack (1999) published a paper showing that in the United States in the twentieth century, there have not been significant trends up or down in the highest levels of streamflow. This follows a series of papers showing that over the same period "extreme" precipitation in the United States has increased (e.g., Karl and Knight 1998a; Karl et al. 1995). The differences in the two sets of findings have led some to suggest the existence of an apparent paradox: How can it be that on a national scale extreme rainfall is increasing while peak streamflow is not? Resolving the paradox is important for policy debate because the impacts of an enhanced hydrological cycle are an area of speculation under the Intergovernmental Panel on Climate Change (Houghton et al. 1996).

There does exist some question as to whether comparing the two sets of findings is appropriate. Karl and Knight (1998b) note that
As yet, there does not appear to be a good physical explanation as to how peak flows could show no change (other than a sampling bias), given that there has been an across-the-board increase in extreme precipitation for 1- to 7-day extreme and heavy precipitation events, mean streamflows, and total and annual precipitation.
Karl's reference to a sampling bias arises because of the differences in the areal coverage of the Lins and Slack study and those led by Karl. Lins and Slack focus on streamflow in basins that are "climate sensitive" (Slack and Landwehr 1992). Karl suggests that these basins are not uniformly distributed over the United States, leading to questions of the validity of the Lins and Slack findings on a national scale (T. Karl 1999, personal communication). While further research is clearly needed to understand the connections of precipitation and streamflow, in this letter we report the results of a recent study on the relationship of precipitation and flood damages. This letter seeks to address the apparent paradox from the perspective of societal impacts. We suggest that an analysis of the relationship of precipitation and flood damages provides information that is useful in developing relevant hypotheses and placing the precipitation/streamflow debate into a broader policy context (cf. Changnon 1998).

A recent study (Pielke and Downton 2000) offers an analysis that helps to address the apparent paradox. Pielke and Downton relate trends in various measures of precipitation with trends in flood damage in the United States. The study finds that the increase in precipitation (however measured) is insufficient to explain increasing flood damages or variability in flood damages. The study strongly suggests that societal factors – growth in population and wealth – are partly responsible for the observed trend in flood damages. The analysis shows that a relatively small fraction of the increase in damages can be associated with the small increasing trends in precipitation. Indeed, after adjusting damages for the change in national wealth, there is no significant trend in damages. This would tend to support the assertion by Lins and Slack (1999) that increasing precipitation is not inconsistent with an absence of upward trends in extreme streamflow. In other words, there is no paradox. As they write,
We suspect that our streamflow findings are consistent with the precipitation findings of Karl and his collaborators (1995, 1998). The reported increases in precipitation are modest, although concentrated in the higher quantiles. Moreover, the trends described for the extreme precipitation category (>50.4 mm per day) are not necessarily sufficient to generate an increase in flooding. It would be useful to know if there are trends in 24-hour precipitation in the >100 mm and larger categories. The term "extreme", in the context of these thresholds, may have more meaning with respect to changes in flood hydrology.
Karl et al. document that the increase in precipitation occurs mostly in spring, summer, and fall, but not in winter. H. Lins (1999, personal communication) notes that peak streamflow is closely connected to winter precipitation and that "precipitation increases in summer and autumn provide runoff to rivers and streams at the very time of year when they are most able to carry the water within their banks. Thus, we see increases in the lower half of the streamflow distribution."

Furthermore, McCabe and Wolock (1997) suggest that detection of trends in runoff, a determining factor in streamflow, is more difficult than detection of trends in precipitation: "the probability of detecting trends in measured runoff [i.e., streamflow] may be very low, even if there are real underlying trends in the data such as trends caused by climate change." McCabe and Wolock focus on detection of trends in mean runoff/streamflow, so there is some question as to the applicability of their findings to peak flows. If the findings do hold at the higher levels of runoff/streamflow, then this would provide another reason why the work of Lins and Slack is not inconsistent with that of Karl et al., as it would be physically possible that the two sets of analyses are complementary.

In any case, an analysis of the damage record shows that at a national level any trends in extreme hydrological floods are not large in comparison to the growth in societal vulnerability. Even so, there is a documented relationship between precipitation and flood damages, independent of growth in national population: as precipitation increases, so does flood damage. From these results it is possible to argue that interpretations in policy debate of the various recent studies of precipitation and streamflow have been misleading. On the one hand, increasing "extreme" precipitation has not been the most important factor in documented increase in flood damage. On the other hand, evidence of a lack of trends in peak flows does not mean that policy makers need not worry about increasing precipitation or future floods. Advocates pushing either line of argument in the policy arena risk misusing what the scientific record actually shows. What has thus far been largely missed in the debate is that the solutions to the nation's flood problems lie not only in a better understanding of the hydrological and climatological aspects of flooding, but also in a better understanding of the societal aspects of flood damage.
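As an aside, the McCabe and Wolock point about low detection probability is easy to illustrate with a quick Monte Carlo sketch in Python; all of the parameters below are hypothetical, chosen only to show how a real trend can hide in noisy data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical 50-year runoff series: a small real upward trend buried
    # in large year-to-year variability (illustrative parameters only).
    n_years, trend, noise_sd = 50, 0.01, 0.5
    t = np.arange(n_years)

    n_trials, detections = 1000, 0
    for _ in range(n_trials):
        series = trend * t + rng.normal(0.0, noise_sd, n_years)
        fit = stats.linregress(t, series)
        if fit.pvalue < 0.05 and fit.slope > 0:
            detections += 1

    # With a weak trend and noisy data, the detection rate falls well
    # short of 100% -- a real trend can easily go statistically unnoticed.
    print(f"Trend detected in {detections / n_trials:.0%} of trials")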
So if you want to connect increases in precipitation to flood damages, you need to ask at least two more questions:

What has been the influence of increased precipitation on peak streamflow?
What has been the influence of increased streamflow on damage?

Too often such questions are left unasked and unanswered.
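When the second question is asked, the wealth adjustment at the heart of the Pielke and Downton analysis is simple in outline.  Here is a minimal sketch in Python, using made-up damage and wealth figures rather than the actual data from that study:

    import numpy as np

    # Hypothetical annual flood damages (billions of current dollars) and a
    # national wealth index relative to the base year. Illustrative numbers
    # only, not the data from the studies discussed above.
    years = np.array([1970, 1980, 1990, 2000])
    damage = np.array([2.0, 3.1, 4.4, 6.5])   # nominal losses, rising
    wealth = np.array([1.0, 1.5, 2.2, 3.3])   # wealth exposed to flooding

    # Normalize: estimate what each year's floods would have cost had they
    # struck the base year's stock of population and wealth.
    normalized = damage / wealth
    print(normalized)   # roughly [2.0, 2.07, 2.0, 1.97] -- flat

In this toy series the nominal losses more than triple while the normalized losses show no trend: growth in what is exposed to flooding, not a change in flooding itself, accounts for the increase.  That is the shape of the finding described above.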

14 February 2011

Do Nations Compete for Jobs and Industry?

The image above comes from The Economist and shows the share of profits in the mobile phone industry, with the growing bright blue wedge representing Apple taking a big bite out of Nokia's profits.  The Economist writes:
UNTIL 2007 Europe appeared to have beaten Silicon Valley in mobile technology for good. Nokia, based in Finland, was the world's largest handset-maker—and raked in much of the profits. But everything changed when Apple introduced the iPhone in 2007, the first smartphone that deserved the name.
Obviously there are relative winners and losers in the marketplace, but apparently many economists don't think that countries are in competition for jobs or industry.  Writing in the NY Times yesterday, Greg Mankiw dismisses the notion, suggested by President Obama in the State of the Union, that countries are in competition with one another:
Achieving economic prosperity is not like winning a game, and guiding an economy is not like managing a sports team.

To see why, let’s start with a basic economic transaction. You have a driveway covered in snow and would be willing to pay $40 to have it shoveled. The boy next door can do it in two hours, or he can spend that time playing on his Xbox, an activity he values at $20. The solution is obvious: You offer him $30 to shovel your drive, and he happily agrees.

The key here is that everyone gains from trade. By buying something for $30 that you value at $40, you get $10 of what economists call “consumer surplus.” Similarly, your young neighbor gets $10 of “producer surplus,” because he earns $30 of income by incurring only $20 of cost. Unlike a sports contest, which by necessity has a winner and a loser, a voluntary economic transaction between consenting consumers and producers typically benefits both parties.

This example is not as special as it might seem. The gains from trade would be much the same if your neighbor were manufacturing a good — knitting you a scarf, for example — rather than performing a service. And it would be much the same if, instead of living next door, he was several thousand miles away, say, in Shanghai.

Listening to the president, you might think that competition from China and other rapidly growing nations was one of the larger threats facing the United States. But the essence of economic exchange belies that description. Other nations are best viewed not as our competitors but as our trading partners. Partners are to be welcomed, not feared. As a general matter, their prosperity does not come at our expense.
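Mankiw's arithmetic is easy to make explicit.  Here is a trivial sketch in Python using the numbers from his driveway example:

    # Numbers from Mankiw's driveway example.
    buyer_value = 40   # what a shoveled driveway is worth to you
    seller_cost = 20   # value the boy places on his forgone Xbox time
    price = 30         # the agreed payment

    consumer_surplus = buyer_value - price   # 40 - 30 = 10
    producer_surplus = price - seller_cost   # 30 - 20 = 10

    # Both surpluses are positive at any price between 20 and 40, which is
    # the sense in which voluntary exchange is positive-sum, not win-lose.
    print(consumer_surplus, producer_surplus)

The exchange creates $20 of total surplus wherever the price lands between the two reservation values; the price only divides the gains between the parties.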
Rob Atkinson of ITIF has a great post up that critiques the conventional wisdom among economists that nations are not in competition for jobs and industry.  Here is a lengthy excerpt from Rob's post:
When ITIF released our report Effective Corporate Tax Reform for the Global, Innovation Economy, I briefed a group of prominent tax economists on the report.  The group included leading tax economists for Congress, Treasury, OMB and other government agencies.  The report laid out six key principles to guide corporate tax reform efforts.  When I got to principle number 4 – “In a globally competitive economy nations need competitive corporate tax regimes” – several hands shot up.  One economist reflected the group’s consensus when he said “the corporate tax code doesn’t have to reflect this because while Boeing may compete with Airbus, for example, the United States is not in competition with Europe.”  At this point most heads were nodding in agreement.  I should reiterate that these were not academic economists, or junior GS 8’s.  These were senior USG economists charged with advising policy makers on tax policy.

When I share this story with colleagues, they look at me with jaws agape, in disbelief.  I actually think many don’t even believe me.  For such a view is so patently out of touch with the reality of the 21st century global economy that most people just don’t believe that top government advisors actually believe this.  But they really do.  Just look at a recent op-ed by Paul Krugman where he argued that countries don’t compete with one another, only companies, and that therefore the entire enterprise of trying to make the U.S. more competitive in international markets is a fool’s errand.

And when the top advisors to the USG fundamentally believe that we are not in competition with other nations, they are not likely to push for policies that would make us more competitive.  We see this in the current craze over corporate tax reform.  President Obama, presumably on the advice of these “we don’t really compete” economists, reflected this view when he proposed in his State of the Union address to “Get rid of the loopholes. Level the playing field. And use the savings to lower the corporate tax rate for the first time in 25 years — without adding to our deficit.”  Sounds good.  It actually is not.  This kind of corporate tax reform is exactly what the “we don’t really compete” economists want: a corporate tax code that taxes every industry the same because the government shouldn’t be picking winners.

This brings us back to the states.  State tax codes pick winners all the time.  By this I mean states try to lower taxes on companies that produce products and services, like manufacturing and software, which are “traded” outside the state.  They know that lower taxes on barber shops and dry cleaners don’t matter.  The barber shops and restaurants won’t expand if their taxes are lower.  But higher taxes on mobile establishments like a car factory or software firm can mean that the car factory or software firm will move to another state.  It’s also why from 1970 to 2008, corporate taxes as a share of overall state tax revenues fell from 8.3 percent to 6.2 percent.  States realize that competitive corporate tax rates, particularly on “traded” firms, are essential.

In Washington, corporate tax “reform” would actually make U.S. competitiveness worse for three key reasons.  First, as long as corporate tax reform has to be revenue neutral, it means that U.S. companies overall will still pay high taxes, unlike our international competitors (oops, I mean “other nations”), most of which have lowered effective corporate tax rates over the last two decades.  Second, corporate tax reform risks cutting, rather than expanding, tax incentives which are critical to growth and innovation, such as the R&D tax credit and expensing of capital equipment.

Third, reform would end up reducing taxes on industries that face virtually no international competition (e.g. electric utilities) and raising them on industries that are fighting every day for global market share (e.g. many technology-based industries).  The end result would be further movement of U.S. jobs offshore.  But again, if you believe that we are not in competition, who cares?  Indeed, some in power today appear to hold the view that since many U.S. firms in traded industries have moved some jobs offshore, we should instead lower taxes on “Main Street” barber shops and restaurants: after all, these kinds of industries have remained loyal to the United States and created jobs.  Amazing!  They remained “loyal” because they had no choice.

Supercuts and McDonalds aren’t going to move their barber shops and hamburger restaurants to India to take advantage of low-wage workers to serve American customers.  But “traded” industries will, because under the current U.S. business and innovation climate they often have no choice.

The sooner Washington starts thinking like a state, the better off we will be.  States don’t blame companies that move out of state.  They work to make their business climate more attractive.  States don’t think they are entitled to jobs.  They fight for jobs.  States don’t want their tax code to be neutral.  They want it to directly support their economic development goals.
Let's take another look at Nokia and Apple -- The Economist explains why the center of gravity for mobile phones moved west:
The first generations of modern mobile phones were purely devices for conversation and text messages. The money lay in designing desirable handsets, manufacturing them cheaply and distributing them widely. This played to European strengths. The necessary skills overlapped most of all in Finland, which explains why Nokia, a company that grew up producing rubber boots and paper, could become the world leader in handsets.


As microprocessors become more powerful, mobile phones are changing into hand-held computers. As a result, most of their value is now in software and data services. This is where America, in particular Silicon Valley, is hard to beat. Companies like Apple and Google know how to build overarching technology platforms. And the Valley boasts an unparalleled ecosystem of entrepreneurs, venture capitalists and software developers who regularly spawn innovative services.
Last week Nokia announced a new strategic partnership with Microsoft, with one implication being a loss of jobs in Finland.
Part of the announcement that Nokia has switched its primary smartphone platform to Windows Phone is that the Symbian mobile operating system is being slowly phased out. As a result, Nokia CEO Stephen Elop confirmed the company will be cutting jobs. In Finland, the numbers may be quite significant. "You're talking about 20,000 people, it's a big number," Mauri Pekkarinen, Minister for Economic Affairs, said in a statement. "We're talking about far and away the biggest process of structural change that Finland has ever seen in the new technology sector."
Google, which operates the rival Android mobile phone operating system, wasted no time reminding the world that they are hiring, tweeting to those displaced by Nokia.

The lesson to take from Nokia's experience is not at all unique -- so long as companies compete for the rewards of innovation, countries will have a stake in that competition, with the result being relative winners and losers.  Nations put forward many policies that influence the short- and long-term outcomes of innovation, and thus can work in the direction of their citizens' interests or against them; implementing successful innovation policies is no easy feat.  But make no mistake -- nations are in competition, despite what economists might say.

13 February 2011

Tall Tales in the New York Times

I've made peace with the fact that many people want to believe things utterly unsupported by data, such as what Elisabeth Rosenthal writes in today's New York Times: that intense storms and floods have become three times more common and that increasing damage from such events is evidence of human-caused climate change.  Of course, people believe a lot of silly things that data don't support -- like President Obama is a Muslim with a fake birth certificate, vaccines cause autism, and climate change is a hoax, just to name a few on a very long list.  While such misplaced beliefs are always disconcerting, especially so to academics who actually study these issues, such misjudgments need not necessarily stand in the way of effective action.  So it is not worth getting too worked up about tall tales.

But even so, it is still amazing to see the newspaper of record publish a statement like the following about Munich Re, one of the world's largest reinsurance companies:
Munich Re is already tailoring its offerings to a world of more extreme weather. It is a matter of financial survival: In 2008, heavy snows in China resulted in the collapse of 223,000 homes, according to Chinese government statistics, including $1 billion in insured losses
Munich Re's financial survival?  Here Rosenthal makes a leap well beyond the perhaps understandable following along with the delusions of crowds.  There are always risks to bringing data to bear on an enjoyable tall tale, but let's look anyway at what has actually been going on in Munich Re's business over the past several years.

Here is what Munich Re reported on its 2008 company performance, the year in which China suffered the heavy snows:
Notwithstanding the most severe financial crisis for generations, Munich Re recorded a clear profit for the financial year 2008, in line with previous announcements. According to preliminary calculations, the consolidated profit amounted to €1.5bn.
How about 2009 then?
Nikolaus von Bomhard, Chairman of the Board of Management: “We have brought the financial year 2009 to a successful close: with a profit of over €2.5bn, we were even able to surpass expectations and achieve our long-term return target despite the difficult environment.”
Surely 2010 must have seen some evidence of a threat to the company's financial survival?  Guess again:
On the basis of preliminary estimates, Munich Re achieved a consolidated result of €2.43bn for 2010 (previous year: €2.56bn), despite substantial major losses. The profit for the fourth quarter totalled €0.48bn (0.78bn). Shareholders are to participate in last year's success through another increase in the dividend: subject to approval by the Supervisory Board and the Annual General Meeting, the dividend will rise by 50 cents to €6.25 (5.75) per share. In addition, Munich Re has announced a further share buy-back programme: shares with a volume of up to €500m are to be repurchased before the Annual General Meeting in 2012
The NYT may be unaware of the fact that not only is Munich Re in the catastrophe reinsurance business, meaning that it pays out variable and large claims for disasters, but that its business actually depends upon those disasters -- Munich Re explains in the context of recent disasters (emphasis added):
Overall, pressure on prices in most lines of business and regions is persisting. Munich Re therefore consistently withdrew from under-rated business. It nevertheless proved possible to expand accounts with individual major clients, so that the business volume grew slightly on balance, despite the difficult environment. Munich Re owes this profitable growth especially to its ability to swiftly offer complex, tailor-made reinsurance solutions to its clients. Besides this, the many large losses resulting from natural hazards and also from man-made events, had a stabilising influence on the lines of business and regions affected. Thus, prices increased markedly for natural catastrophe covers in Australia/New Zealand (Oceania) and in offshore energy business. There were no major changes in conditions in this renewal season. The overall outcome of the reinsurance treaty renewals at 1 January 2011 was again very satisfactory for Munich Re.
Here is how to interpret these remarks -- There is downward pressure on prices in the reinsurance industry because there have not been enough disasters to keep up demand and thus premium prices. The following observation was made just three months ago:
Insurance and reinsurance prices have been falling across most business lines for two years, reflecting intense competition between well-capitalised insurers and a comparative dearth of major catastrophe-induced losses.
But, as Munich Re explains, it has been able to overcome the dearth of disasters because recent extreme events have allowed it to increase prices on coverage in a manner that not only counteracts recent losses to some degree, but even allows for "profitable growth."  As with most tall tales, the one about the financial plight of reinsurers dealing with a changed climate isn't going away any time soon.  It is just another bit of popular unreality that effective decision making will have to overcome.