There has already been enough study on the fish consumption rate

Jerry Cornfield at the Everett Herald reported earlier today on one particular snag in the ongoing budget process: the fight over an additional fish consumption rate study.

Catherine O’Neill’s recent article in the American Indian Law Journal chronicles industry’s argument for “more study” of the fish consumption rate (FCR), when the rate has already been studied extensively. As she writes:

Throughout the process of updating the FCR in Washington, there have been broadsides against the science that supports increased rates. In the Pacific Northwest, the bulk of this scientific data has been produced by tribes and tribal consortia. As noted above, the CWA [Clean Water Act] anticipates that scientific advances will trigger updates to states’ and tribes’ WQS [water quality standards], and EPA’s WQS regulation makes clear that the latest scientific knowledge is the touchstone for EPA review of state and tribal standards’ compliance with the Act. Although the relevant surveys of tribal fish consumption were carefully conducted to ensure their scientific defensibility, and have consistently been found to meet EPA’s (and sister states’) standards in this regard, their validity has nonetheless continued to be challenged by industry and individuals.

Ecology’s initial Fish Consumption Rate Technical Support Document (FCR TSD) considered three studies of tribal fish consumption and one study of Asian and Pacific Islanders in King County, finding each of these four studies to be scientifically defensible. In its FCR TSD, Ecology developed a set of criteria to determine the technical defensibility of fish consumption survey data, to be used in assessing the data’s relevance and appropriateness to the regulatory context in Washington, i.e., for use in standards for water quality, surface water cleanup, and sediment cleanup.  …

As documented at length in the FCR TSD, each of the tribal studies considered – that is, the CRITFC survey, the Tulalip and Squaxin Island survey, and the Suquamish survey – was found to have “satisfied” Ecology’s measures of technical defensibility.

Moreover, the scientific defensibility of each of the tribal studies had previously been considered and affirmed in various assessments by EPA and by sister states.

After an evaluation of the surveys according to five criteria, including the study’s “soundness,” “applicability and utility,” “clarity and completeness,” its handling of “uncertainty and variability,” and whether the study’s methods and information were “independently verified, validated, and peer reviewed,” EPA selected each of the tribal studies for inclusion in its general guidance document for conducting exposure assessments, the Exposure Factors Handbook.

EPA Region X, moreover, recommends the Tulalip/Squaxin Island and Suquamish studies in its guidance for cleanups in Puget Sound, giving “highest preference” to these “well-designed consumption surveys.”

Oregon’s independent Human Health Focus Group conducted an extensive year-long review and found each of these studies to be scientifically defensible, deeming them both “reliable” and “relevant.”

Still, the scientific defensibility of the tribal studies has been questioned, repeatedly, by individuals and industry as part of the Washington process. Some commenters asked that the tribal survey data be “verified” or sought additional “peer-reviewed studies generated through traditional means.”

Some commenters called for the raw data (as opposed to the studies summarizing the survey results) to be “turned over” for “independent review” – a highly unusual request in general, given the ethical protocols that govern studies with human subjects, and a request in this context that is at the very least insensitive, given tribal populations’ understandable mistrust of handing over their raw “data” to outsiders.

Although the Suquamish study explicitly considered the appropriate treatment of high-end responses (so-called “outliers”), and its analysis and conclusions underwent external technical review, one commenter claimed that, “[a]pparently, the study authors never questioned whether these respondents were truthful and whether their responses should be included.”

This commenter criticized the study authors’ deliberate determination that these values were not in fact recorded in error, and so ought not to be excluded from the dataset, as one that “presses the limits of credibility,” even though that determination comports with best practices and operates here to reduce bias in reporting survey results.
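To see why that practice matters, here is a minimal sketch, using entirely hypothetical numbers rather than data from any of the surveys discussed above, of how excluding verified high-end responses biases a mean consumption estimate downward:

```python
import statistics

# Hypothetical daily fish consumption responses (grams/day), invented
# purely for illustration; these are NOT data from the CRITFC,
# Tulalip/Squaxin Island, or Suquamish surveys. The last two entries
# stand in for verified high-end consumers whose responses were checked
# and found not to be recording errors.
responses = [15, 20, 25, 30, 45, 60, 250, 380]

mean_all = statistics.mean(responses)           # high-end responses retained
mean_trimmed = statistics.mean(responses[:-2])  # high-end responses dropped

print(f"Mean, all verified responses:    {mean_all:.1f} g/day")      # ~103.1
print(f"Mean, high-end values excluded:  {mean_trimmed:.1f} g/day")  # ~32.5
```

Fish consumption data are strongly right-skewed, so the high-end tail carries real exposure information; discarding verified responses from that tail understates consumption for precisely the populations the standards are meant to protect.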
