
Retraction of flawed MPA study exposes larger problems in MPA science

After months of public criticism and findings of a conflict of interest, a prominent scientific paper (Cabral et al. 2020, A global network of marine protected areas for food) was recently retracted by The Proceedings of the National Academy of Sciences (PNAS).

A retraction is a Big Deal in science, especially from a prominent journal. What’s strange in this story is how the conflict of interest intersects with the science. The conflict of interest was apparent immediately upon publication, but it wasn’t until major problems in the underlying science were revealed that an investigation was launched, and the paper eventually retracted.

Cabral et al. 2020 claimed that closing an additional 5% of the ocean to fishing would increase fish catches by 20%. That snappy statistic made for a great headline—the paper was immediately covered by The Economist, Forbes, Anthropocene Magazine, and The Conversation when it was published in October 2020. It made its way through the popular press (the New York Times, Axios, National Geographic, and The Hill have all cited the paper), and eventually into the U.S. congressional record—it was submitted as supporting evidence for a bill by then-Representative Deb Haaland, now the Secretary of the Interior. Cabral et al. 2020’s Altmetric Attention Score, a measure of how widely a scientific paper is shared, is in the top 5% of all time.

But with increased press comes increased scrutiny. Several close collaborators of the Cabral et al. group wrote scientific critiques that PNAS published earlier this year. The critiques pointed out errors and impossible assumptions that strongly suggested the paper was inadequately peer reviewed.

PNAS later determined that the person responsible for assigning Cabral et al.’s peer reviewers, Dr. Jane Lubchenco, had a conflict of interest. She collaborated with the Cabral et al. group and was the senior author on a follow-up paper published in Nature in March 2021. That follow-up paper, Sala et al. 2021, included the authors of Cabral et al. and depended on the same MPA model meant to be reviewed in PNAS.

Shortly after the Nature paper was published, Dr. Magnus Johnson (of the University of Hull in the U.K.) wrote a letter to the editor-in-chief of PNAS reporting the conflict of interest; an investigation was launched, and PNAS decided to retract Cabral et al. 2020 on October 6th, 2021—nearly a year after its original publication.

According to the editor-in-chief of PNAS, Lubchenco’s frequent collaboration with the authors constituted a conflict of interest, as did her personal relationship with one of the authors, Dr. Steve Gaines, her brother-in-law. She should not have accepted the task of editing the paper. These conflicts of interest were clear and apparent from the time Cabral et al. 2020 was first submitted, but eyebrows were not raised until the follow-up paper, Sala et al. 2021, received more press than any other ocean science paper in recent memory.

Now the Sala et al. follow-up paper is being questioned—more potential inaccuracies have been found.

Timeline of events that led to the retraction of Cabral et al. 2020.

A highly flawed computer model with poor assumptions

Cabral et al. 2020 assembled a computer model out of several kinds of fishery data to predict where marine protected areas (MPAs) should be placed to maximize global sustainable seafood production. The model produced the map below, where the areas in green are high priority for MPAs and the orange areas are low priority.

Figure 2a from the now-retracted Cabral et al. 2020, A global network of marine protected areas for food.

MPAs meant to increase food production do so by reducing fishing pressure in places where it is too high (overfishing). Asia and Southeast Asia have some of the highest overfishing rates in the world—reducing fishing pressure there is a no-brainer, but the model determined many of those areas to be low priority for protection.

The map above should have been a big red flag for the peer reviewers of Cabral et al. 2020. Why were MPAs prioritized all around the U.S., where overfishing has been practically eliminated, but not prioritized around India, Thailand, Indonesia, Malaysia, Vietnam, and China?

Clearly, something was wrong with the model.

Several researchers with a long history of collaboration with the Cabral et al. authors noticed the oddity in the MPA prioritization and pointed out a fundamental issue: the model contained biologically impossible assumptions. It assumed that unassessed fish populations were globally linked—in the model, their geographic ranges could stretch across multiple oceans and their growth rates were based on global data rather than more-precise local data.

An “unassessed” fish population is one with no consistent scientific assessment of its status. Data on those fisheries are sparse. Unassessed fisheries comprise about half of the world’s catch; the other half is monitored and assessed, with all kinds of data consistently collected and stored in the RAM Legacy Database.

With little data, assumptions must be made about the future of unassessed fish stocks. But the need for assumptions doesn’t excuse impossible ones. The model in Cabral et al. assumed unassessed fish could travel and mate across the species’ entire range rather than just within their own population. This is akin to assuming North Sea Atlantic cod could interact with Gulf of Maine Atlantic cod that live over 3,000 miles away. There were cases in the model that assumed MPAs in the Atlantic would benefit fish in the Pacific.

Cabral et al. also assumed density dependence was global rather than local or regional, meaning recruitment of new fish to a population (basically a birthrate) depended on global abundance rather than local abundance. In reality, density-dependent effects only operate within a specific population of a particular species, e.g. North Sea cod versus all Atlantic cod; the abundance of North Sea cod has no relation to the abundance of Gulf of Maine cod despite being the same species.
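
To make the distinction concrete, here is a minimal sketch of the two approaches. It is not the authors’ actual code; the recruitment function and every parameter value are illustrative placeholders.

```python
# Toy sketch: local vs. global density dependence in fish recruitment.
# All numbers are illustrative placeholders, not values from the papers.

def recruits(spawning_biomass, alpha=0.8, beta=1e-6):
    # Beverton-Holt recruitment: saturating in spawning biomass.
    return alpha * spawning_biomass / (1 + beta * spawning_biomass)

populations = {"North Sea cod": 150_000.0, "Gulf of Maine cod": 20_000.0}

def recruitment_local(pops):
    # Costello et al. 2016 style: each population's recruitment depends
    # only on its own abundance.
    return {name: recruits(b) for name, b in pops.items()}

def recruitment_global(pops):
    # Cabral et al. 2020 style: recruitment depends on pooled species-wide
    # abundance, apportioned to populations by their share of the total.
    total = sum(pops.values())
    return {name: recruits(total) * b / total for name, b in pops.items()}

before_local = recruitment_local(populations)
before_global = recruitment_global(populations)

# Rebuild one population, e.g. via an MPA in the North Sea.
populations["North Sea cod"] *= 1.5

# Under the local assumption, Gulf of Maine recruitment is unchanged;
# under the global assumption, an MPA an ocean away changes it.
print(recruitment_local(populations)["Gulf of Maine cod"]
      == before_local["Gulf of Maine cod"])    # True
print(recruitment_global(populations)["Gulf of Maine cod"]
      == before_global["Gulf of Maine cod"])   # False
```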

The first critique pointing out issues with the model was published in April by Ray Hilborn (founder of this site). Another critique by Dan Ovando, Owen Liu, Renato Molina, and Cody Szuwalski (all of whom did their Ph.D.s or postdocs with members of the Cabral et al. group) expanded on Hilborn’s critique by digging into the math. They found that, due to the assumption that species were connected globally, Cabral et al.’s model overestimated the geographic range of unassessed fish by a factor of seventeen compared to scientifically assessed stocks.

Perhaps because it is biologically impossible, there is little precedent for modeling the dynamics of a species as one globally connected population. However, there is precedent for modeling unassessed fish populations at regional scales. Hilborn, Ovando, Szuwalski, Cabral, and many other authors of Cabral et al. 2020 were all authors on Costello et al. 2016, Global fishery prospects under contrasting management regimes, a seminal paper that modeled the range of unassessed fisheries on a regional scale. The authors of Cabral et al. 2020 had a path to follow from Costello et al. 2016, but changed the assumptions instead.

Data errors

Since the authors of the Ovando et al. critique had been intimately involved in the Costello et al. 2016 paper, they were uniquely capable of looking at and interpreting the code for Cabral et al. They found two major errors:

  1. Cabral et al. inadvertently created and used incorrect estimates of fishing mortality for the world’s assessed fisheries. This resulted in an overestimation of the amount of food benefits that MPAs could produce, and the size of MPAs that would produce those benefits. This error also contributed to the map that incorrectly prioritized areas with good fisheries management for MPA implementation.
  2. They mistakenly included a large (~3 million metric tons) and nonexistent stock from an outdated version of the RAM Legacy Database. They also placed this stock in the wrong ocean for their analysis. (A sketch of the kind of sanity check that could catch such errors follows this list.)
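
Errors like these are often catchable with routine data validation. Below is a hypothetical sketch of such a check; the file names and column names are invented for illustration, not taken from the Cabral et al. code.

```python
import pandas as pd

# Hypothetical inputs: the stocks used in the model, and the latest
# RAM Legacy Database release. File and column names are invented.
analysis = pd.read_csv("analysis_stocks.csv")
current = pd.read_csv("ram_current_release.csv")

# 1. Flag stocks in the analysis that don't exist in the current release
#    (e.g., a stock carried over from an outdated database version).
phantom = analysis[~analysis["stock_id"].isin(current["stock_id"])]
if not phantom.empty:
    print("Stocks missing from the current RAM release:")
    print(phantom[["stock_id", "biomass_mt", "ocean"]])

# 2. Flag stocks whose assigned ocean disagrees with the source data.
merged = analysis.merge(current, on="stock_id", suffixes=("_model", "_ram"))
misplaced = merged[merged["ocean_model"] != merged["ocean_ram"]]
if not misplaced.empty:
    print("Stocks placed in the wrong ocean:")
    print(misplaced[["stock_id", "ocean_model", "ocean_ram"]])

# 3. Eyeball the largest stocks: a single ~3 million t phantom stock
#    would stand out at the top of this list.
print(analysis.nlargest(5, "biomass_mt")[["stock_id", "biomass_mt"]])
```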

Ovando et al. corrected the coding errors and reran the analysis. They found that the proposed food benefits of MPAs decreased by 50%, and the model still produced strange results.

Ovando et al. note (emphasis added):

Using the corrected [model], Cabral et al.’s food-maximizing MPA network would close 22% of the United States’ exclusive economic zone (EEZ) to fishing, yet places only 2.5% of India’s, 10% of Indonesia’s, and 12% of China’s EEZs in MPAs… the median F/FMSY (fishing mortality rate F relative to the fishing mortality rate producing maximum sustainable yield FMSY) of fisheries in India, Indonesia, and China is nearly twice that of the United States, creating almost 5 times as much potential food upside from fishery reforms in those regions relative to the United States.

In their response to Ovando et al., the authors of Cabral et al. acknowledge the model is not particularly realistic:

The key assumption we made—that populations are well mixed throughout their geographic range—is indeed a heroic one

However, in their retraction note, the authors maintain that their conclusions are valid and intend to resubmit the paper.

Connection to Sala et al. 2021

Their persistence may be tied to Sala et al. 2021, Protecting the global ocean for biodiversity, food, and climate, the prominent follow-up paper published this past March in Nature. It presents several computer models predicting that an increase in MPAs to reduce fishing would have benefits for biodiversity, food production, and carbon emissions. The food provisioning MPA model used by Sala et al. 2021 is the same as the one in Cabral et al. 2020 and was justified based on the results of the now-retracted paper.

Indeed, all the Cabral et al. 2020 authors were authors on the Sala et al. paper, including the first four authors of the Sala paper (authors are generally listed in order of contribution, except for the “senior author,” who is listed last). The Sala et al. paper was the most prominent ocean science paper of the year, with an Altmetric score 4x higher than Cabral et al. 2020’s—it was covered in nearly every major newspaper in North America and Europe.

The outright errors acknowledged from Cabral et al. 2020 were corrected in the Sala et al. paper, but the biologically impossible assumptions, that unassessed fish can travel across oceans and that density dependence is global rather than local, remain.

The same authors from the Ovando et al. critique of the Cabral paper have responded to the Sala et al. paper, demonstrating that Sala et al.’s estimates of the effects of a global MPA network on food production were unreliable.

In the original Cabral et al. critique, the Ovando et al. authors argue that “omitting distance from MPA models produces results that are not credible.” Before it was retracted, the Cabral et al. authors responded saying their results were “a useful starting point.”

However, the Ovando et al. critique of Sala et al. shows why that isn’t true:

Instead of just arguing that the assumptions were poorly chosen, the recent Ovando et al. critique re-ran Sala et al.’s analysis with the assumptions that fish stay in their region (as defined by the U.N. FAO) and depend on local factors: the same, more realistic assumptions from Costello et al. 2016 that the authors had all worked on together, and on which both Cabral et al. 2020 and Sala et al. 2021 were based.

By changing only two assumptions made by Sala et al. 2021 to different and equally if not more plausible assumptions, we produced a starkly different picture of the magnitude of potential food benefits from MPAs, and the location of priority areas for MPAs designed around food security.

Costello et al. 2016 set a reasonable standard for evaluating unassessed fish stocks. That paper assumed fish live in their FAO region and are dependent on local abundance for population growth rates—about the best assumptions you can make about unmonitored fish populations given available data.

Sala et al. and Cabral et al. modified those assumptions to say that unassessed fish stocks are interconnected around the world and depend on global ecology for population growth rates. Why do this when more realistic assumptions were available and had previously been used by the authors? Both the Cabral and Sala papers used values from the Costello et al. paper as the basis for their models, then changed the assumptions to less plausible ones.

Peer review was flawed – how much was due to the conflict of interest?

Cabral et al. clearly suffered from an inadequate peer review. An appropriately thorough reviewer would have seen the map of proposed MPAs, wondered why MPAs were prioritized in the U.S. but not in overfished regions of Asia, and pushed the authors to explain why the map seemed “off.” Catching the coding errors would be a difficult task; perhaps only those who contributed to the original code in the earlier Costello et al. paper could have found them. But scrutinizing the map and questioning the assumptions are basic, first-principles steps of peer review that should have led to the discovery of the errors.

So how did Cabral et al. end up in PNAS, one of the most prestigious journals in the field, then get reproduced in Nature in the most covered paper of the year? The first decision was made by the editors at PNAS, who read the paper, thought it was worthy of consideration, then assigned an individual PNAS editor to dive deeper and find peer reviewers for it. In this case, the editor assigned to Cabral et al. was Dr. Jane Lubchenco, the former NOAA administrator and a notable MPA scientist and advocate. She would make perfect sense as a choice to edit and find reviewers for MPA models, but she had a conflict of interest:

Cabral et al. was submitted to PNAS on January 6th, 2020. Notably, the Sala et al. paper had been submitted to Nature two weeks prior, on December 19th, 2019, with Jane Lubchenco as its senior author. Having just submitted the Sala paper with the same group of authors, she should not have then assigned reviewers for a paper underpinning a fundamental part of it. Her brother-in-law, Dr. Steve Gaines, was also an author on both papers—familial relationships are another conflict of interest.

The editor in chief of PNAS told Retraction Watch both conflicts of interest would have been enough for retraction, even “absent the data errors.”

It will be interesting to see where the Cabral paper is resubmitted and how it is reviewed.

More scrutiny of the other models presented in Sala et al. 2021

You probably saw a headline covering Sala et al. 2021. Most of the press focused on its carbon model, which produced headlines like “Bottom Trawling Releases As Much Carbon as Air Travel.”

Headlines from Vox, Time, and the New York Times covering Sala et al. 2021.
These headlines are almost certainly not true.

The carbon model was the first attempt to quantify the global climate change impact of bottom trawling, a type of fishing in which nets are dragged along the seafloor. Bottom trawling kicks up sediment; the researchers tried to figure out how much carbon stored in sediment is redissolved into seawater due to trawling disturbances. More carbon dissolved in seawater means less atmospheric carbon can be absorbed by the ocean, contributing to climate change. Carbon dissolved in seawater also causes ocean acidification.

Sala et al. claimed their carbon model is a “best estimate,” but other scientists disagree and have pointed out issues in the model that echo the same problems with the Cabral et al. model: impossible assumptions.

A response from Hiddink et al. noted one of the carbon model’s untrue assumptions: that sediment is inert until disturbed by trawling. According to Hiddink et al., this ignores “decades of geochemical research on natural processing of [carbon] in marine sediments.” Many sea creatures burrow in the seafloor, and nearly all of them cycle carbon back into seawater (most organisms, like humans, respire carbon).

Hiddink et al. also claim that the Sala et al. model greatly overestimated the amount of sediment that is disturbed: The model assumed all the sediment in the penetration depth is resuspended in the water column, whereas “field observations show that trawling resuspends only [~10%].”
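
The scale of that disagreement is easy to see with a back-of-the-envelope calculation. The sketch below uses invented placeholder values, not figures from either paper, purely to show how the resuspension fraction linearly scales the final estimate.

```python
# Toy back-of-the-envelope: how the resuspension fraction scales a
# trawling-carbon estimate. Every value is an invented placeholder;
# none comes from Sala et al. or Hiddink et al.

AREA_TRAWLED_M2 = 5.0e12       # hypothetical seafloor area trawled per year
PENETRATION_DEPTH_M = 0.02     # hypothetical average gear penetration depth
CARBON_DENSITY_KG_M3 = 10.0    # hypothetical organic carbon per m^3 sediment
LABILE_FRACTION = 0.7          # hypothetical share that can remineralize

def carbon_remineralized(resuspension_fraction):
    """Carbon (kg/yr) resuspended and available to remineralize."""
    sediment_volume = AREA_TRAWLED_M2 * PENETRATION_DEPTH_M
    return (sediment_volume * CARBON_DENSITY_KG_M3
            * LABILE_FRACTION * resuspension_fraction)

# Sala et al.-style assumption: all sediment within the penetration depth
# is resuspended. Hiddink et al.: field observations suggest only ~10% is.
print(carbon_remineralized(1.0) / carbon_remineralized(0.1))  # -> 10.0
```

Because the estimate is linear in that fraction, assuming 100% resuspension instead of ~10% inflates the result tenfold all by itself.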

Hiddink et al. say the Sala et al. model overestimates carbon impacts by an order of magnitude or more.

Was this another case of inadequate peer review? An order of magnitude or more is a substantial error.

The carbon and food models weren’t the only ones with questionable assumptions. The biodiversity model in Sala et al. claimed that with increased MPAs, ocean biodiversity would increase. This is undoubtedly true inside an MPA, but the model assumed fishing rates remain constant outside the proposed MPAs, meaning effort that was inside the MPA disappears rather than moving elsewhere. This directly conflicts with the food provision model presented in their primary results, which assumed the effort from inside the MPA moved elsewhere.

Not only is this picking and choosing which MPA assumptions to present; it is also rarely what happens in real life. When fishermen are told they can’t fish in a particular area, they generally fish harder in other areas. Assuming fishing rates remain the same outside of MPAs probably exaggerates the practical benefits of MPAs for biodiversity.
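
A minimal two-patch sketch makes the difference between the two assumptions concrete. The numbers are placeholders, not values from either paper.

```python
# Toy two-patch model: what happens to fishing effort when patch A
# becomes an MPA. All values are illustrative placeholders.

effort = {"A": 100.0, "B": 100.0}  # fishing effort units per patch

def close_patch_effort_disappears(effort, closed):
    # The Sala et al. biodiversity-model assumption: effort inside
    # the MPA vanishes; effort outside is unchanged.
    return {p: (0.0 if p == closed else e) for p, e in effort.items()}

def close_patch_effort_displaced(effort, closed):
    # The food-model assumption (and the more common real-world outcome):
    # displaced effort moves to the patches that remain open.
    displaced = effort[closed]
    open_patches = [p for p in effort if p != closed]
    return {p: (0.0 if p == closed
                else effort[p] + displaced / len(open_patches))
            for p in effort}

print(close_patch_effort_disappears(effort, "A"))  # {'A': 0.0, 'B': 100.0}
print(close_patch_effort_displaced(effort, "A"))   # {'A': 0.0, 'B': 200.0}
```

Fish in patch B fare very differently under the two assumptions, which is why mixing them across models in the same paper paints an inconsistent picture.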

The picking and choosing of model assumptions in Sala et al. has drawn yet another critique, by Hilborn and Kaiser (not yet published on a preprint server). Sala et al. 2021 did report results under consistent fishing pressure assumptions in secondary results and supplementary materials; however, those were not part of the main paper.

When asked about the status of the three known responses to Sala et al. (Ovando et al., Hiddink et al., and Hilborn & Kaiser), Nature had no comment, as the review process is confidential.

Predictions need more scrutiny and less press

Regardless of any conflict of interest, the science in both Cabral et al. and Sala et al. is critically flawed, yet it is being used to advocate for public policy. Both follow a recent trend: publish predictions that use a limited set of assumptions (in a very uncertain world) to produce global maps, place them in high-profile journals, and garner considerable media and political attention.

Computer models are essential tools for science and management, but the accuracy of their predictions depends on both the quality of the data and the assumptions they are based on. Often, a problem is so complex that several assumptions may be equally plausible; readers need to be made aware when different assumptions lead to vastly different outcomes.

The Cabral et al. and Sala et al. papers disregard the enormous uncertainty in their model parameters in favor of set values, and they don’t provide strong evidence that their chosen values are correct. Those assumptions and parameters produce big headlines, but are fundamentally unhelpful for the future of ocean governance and sustainability. We expect policy-makers and resource managers to make decisions based on the best available science. Inconsistent and unrealistic assumptions are not that.


Max Mossler

Max is the managing editor at Sustainable Fisheries UW.


3 Responses

  1. Thanks for the detailed critique. I scanned the Sala et al paper when it came out and something felt deeply “off”. Some of this was due to the obvious points you made, such as the assumptions about the carbon model (in shallower seas, where most trawling occurs, sediments are also periodically reworked by storms), but much of it was the apparent implausibility of some of the results and a “gut feeling” that something about the model was off. Unfortunately, with the increasing complexity of these types of global models, a typical reader, even one well versed in fisheries science, will find it almost impossible to fully evaluate such analyses. That likely includes many reviewers of such papers. The current peer review system is not well equipped to catch even critical assumptions and flaws, as long as an analysis produces somewhat reasonable results. I’m not sure if there is an easy solution, but one could argue that in the case of Cabral et al, the system worked and ultimately led to a retraction.
