Lewis College hosts Kevin Elliott to speak on the importance of values in science

Date: Fri, 2016/09/30
By: Reno Waswil
Many people hold science to be an objective practice, or at least among the most objective practices in which we can engage. For this reason, it is often seen as advantageous, or even necessary, to remove personal, social, and commercial values from the process.

Kevin Elliott, an associate professor at Lyman Briggs College at Michigan State University and author of the forthcoming book "A Tapestry of Values" (to be released in January 2017), argues against this notion. He presented his arguments to the Illinois Tech populace on Friday, September 23, in a talk entitled "Values in Science: How to Throw Out the Bathwater and Keep the Baby," hosted by Lewis College in the Rettaliata Engineering Center (Crawford) Auditorium. Ample refreshments were provided, drawing in students and faculty alike.

Elliott began the presentation by proposing two projects for the talk: defending the place of values in science and arguing that values should not be excluded from the process of doing science. Values, as he defined them, are desirable qualities or states of a personal or societal nature, such as good health or environmental protection. The problem, as well as much of the criticism, comes when these values start to influence scientific reasoning.

He listed some well-known examples of how values have harmed science, citing studies showing that more than half of published research fails to replicate, along with instances of design bias, publication bias, falsification of data, and misleading rhetoric. He also spoke to the all-too-familiar cases of corporate biasing of science, such as pharmaceutical and pesticide companies misrepresenting the effectiveness and harms of their products, and the tobacco industry downplaying the dangers of its product.

However, Elliott claimed that values, in addition to being unavoidable in some sense, are highly relevant to scientific work given the role of science in society. By their nature, values often steer research toward social priorities, and it may be said that scientists have something of an obligation regarding which questions deserve their attention. What is studied can have a huge social impact, and choosing what to study (maximizing the yield of a crop, for example), how to study it (how long to run an experiment, and at what scale), and how to interpret the available data (whether it is better in a given situation to overestimate or underestimate effects) are all questions that involve value judgments.

Thus, Elliott proposed instead a focus on three criteria for the evaluation of science that will, ideally, help counter the negative effects of values within it: transparency, reproducibility, and critical review. He then proceeded to comment on each one.

On transparency, he concluded that attempts so far have had mixed success. The Food and Drug Administration (FDA), for example, has required since 2007 that pharmaceutical companies register their clinical trials and report their results. The effectiveness of this requirement has suffered from spotty compliance, exceptions to the rule, the limited scope of the requirements themselves, and the fact that it was not retroactive. According to Elliott, transparency efforts have been even less effective in chemical safety and other sectors where they are needed.

As for reproducibility, there is an interesting dynamic in which industry laboratories actually produce more reproducible data than independent and academic ones, owing to greater access to funding. However, Elliott said there are problems even here, arguing that "results can be reproducible without addressing the questions being asked." Inadequate endpoints, inadequate doses, and a lack of population variability in experiments severely limit the real-world applicability of much of this research. As an example, he noted that although several academic studies, including one published in Andrology, have shown endocrine-disrupting effects associated with bisphenol A (BPA), very little is being done because the vast majority of accepted results come from industry studies that do not show these effects. These, he argued, are signs of a systematic problem with the way experimental results are verified and accepted.

In response to these issues, Elliott proposed critical review as a necessary practice for quality-controlling research, with an emphasis on the word critical. This is best done through oversight from boards of qualified individuals who ensure that the research and results being approved are up to snuff and, more importantly, relevant to the questions they claim to address.

Strengthening critical review relies on strengthening the involvement of regulatory agencies, advisory boards and panels, funding bodies, and governments. These organizations can and should focus more on making informed judgments about which studies are good studies, and on ensuring that the criteria they use do not discount academic and independent researchers, or studies that are genuinely relevant, in favor of ones that merely seem more reproducible at face value.

Elliott concluded by reaffirming his belief that science should not try to exclude values, but should instead scrutinize them through effective critical review. There are numerous limitations, he admitted, as nearly two-thirds of scientific funding comes from industry, which makes legitimate review much more difficult to impose. Nevertheless, he sees this approach as the best way to ensure that science can thrive given its significant role in society.