I recently came across a couple of items responding to reactions to research findings that were considered objectionable.
First, Noah Smith responds to calls to suppress findings that parental incarceration is correlated with benefits to the children of incarcerated parents, “reducing their likelihood of incarceration by 4.9 percentage points and improving their adult neighborhood quality.”
Smith has argued against mass incarceration and shares others’ concerns about how these findings may be used, but he pushes back against any suggestion that findings like these ought to be suppressed.
So what should scientists do? Should they become a caste of pure truth-seekers, removing themselves utterly from the world of politics and public affairs in order to pursue the facts in as objective a way as possible? Certainly not. Scientists are humans like everybody else, and they have a right — and perhaps even a duty — to try to make the world a better place through politics.
But in order to follow both the scientific ideology and their own political ideology at the same time, scientists have to manage a special sort of internal bifurcation. When seeking the facts they have to be as objective as they can, and faithfully publish what they find, but then immediately forget about that objectivity as soon as they’re doing anything other than research!
He argues that this may tie researchers up in knots, but it’s the best way to proceed.
Next, on Twitter, an addiction medicine doc and advocate responds to reactions to an unpublished paper that suggests naloxone distribution creates a moral hazard. (I posted about this a few years ago.)
He cautions against trashing the researchers, on pragmatic grounds: one of them is doing work that supports progressive criminal justice agendas; trashing from adversaries can actually increase her influence; and a group of men engaging in personal attacks on a woman creates bad optics. He also points out the frequent hypocrisy of “it’s science” arguments and, instead, asks for help picking apart the methods used in the paper.
Around the same time, I also came across this article in The New Social Worker about integrating evidence into practice. The author urges readers to consider 4 things: “what intervention/s was/were tested; who was sampled in the study; what measures were used in the study (if a quantitative study was conducted); and what the results of the study were.”
Good advice, but that isn’t what caught my attention. After considering those 4 elements, the reader should synthesize the findings and “tell the story” of what the studies say.
It seems to me that this is often the crux of the problem: the story we tell based on a set of findings. Despite all of the calls for us to “follow the science,” science can’t determine our priorities, values, goals, etc. We have to decide what endpoint we want to pursue, and science can tell us something about how realistic it is and what pathway might be most effective at getting us there.
I’m not a scientist or researcher, but many of these arguments strike me as missing what science is capable of, particularly in the social sciences, and what we should do with science.
First, the notion that science will be free of bias strikes me as wishful thinking. Science is biased by the profession and orientation of those conducting research, by what can get funded, by what can be easily measured, by what can be accomplished in the time-frames acceptable to funders and researchers, by what can get published, by the questions that interest the researchers, etc.
Second, I tend to think that what research can tell us is fairly limited. Science, generally, cannot tell us what to do. Research can provide information that can help us decide what to do. The information it provides is often narrow and context-specific. It can’t tell us The Truth, but it sometimes can tell us a truth. Even that narrow truth is subject to review and change as new information becomes available.
Third, it’s up to us to decide our goals, priorities, what’s an acceptable outcome, what’s intolerable, how we define our context, what information is most applicable to the context we’re working within, and what story the available pieces of information tell us. For example, consider a patient who is receiving end-of-life care. Science may be able to tell us which treatments might extend her life for 6 or 12 months, but it can’t tell us whether that’s good. She and her family have to make judgments about her quality of life and priorities. Maybe her quality of life is so poor that extending her life is not desirable. Or maybe her priority is quality time with her family, including a vacation, and the treatments to extend her life would not permit that.
Fourth, many of these arguments fail to acknowledge, value, and integrate experiential knowledge. Empirical knowledge is important, but it’s only one way of knowing and is incomplete. Borkman explained the importance of experiential knowledge this way: “As sources of truth, professional and experiential knowledge are not mutually exclusive. First, even professionals use some experiential knowledge in their practice. Newly credentialed professionals are not regarded as seasoned until they have had personal experience in the real world of work.” Borkman, T. (1976). Experiential Knowledge: A New Concept for the Analysis of Self-Help Groups. Social Service Review, 50, 445–456.
So… when we encounter findings that trouble us, how should we respond? We can question the methods and whether the findings are likely to be replicated. We can question whether they’re applicable to the contexts we work within. We can question the real-life significance of the findings. We can ask whether the researchers asked the right questions–the questions and answers that are most relevant to the people involved and the practitioners serving them.
In many cases, the objection is less about the actual data and more about the story being told with the information. For example, imagine we found no methodological problems in the study that found a 4.9 percentage point reduction in likelihood of incarceration and improvements in adult neighborhood quality for children of incarcerated parents. I’d have questions about the significance of those findings and the definition of adult neighborhood quality, but let’s imagine all those questions were satisfied.
What bothers me is the support for a story that parental incarceration benefits the kids involved. If the findings are solid, they can, and should, provoke questions, including:
- If it’s clear that these kids experience improvements in these domains, how important are they?
- What are the other important domains that influence the present and future quality of life for these kids and their families? How are these affected by parental incarceration?
- Are these improvements evenly distributed?
- For example, do children of violent parents experience significant improvements while other kids experience adverse effects in these areas?
- What are the mechanisms of the improvements these kids experience?
- For example, is it a function of kids moving in with a stable family member?
- If the wrong story is being told, what’s a better story? Something like, “system fails to support kids until a parent is incarcerated”? (I have no idea if that’s accurate, but it’s just an example of a narrative constructed around a finding.)
Most importantly, it should provoke questions about how we might have facilitated these improvements without the incarceration of a parent. (Maybe before the incarceration and even preventing the incarceration.)
Again, I don’t know anything about the motives, methods, and rigor of this research, but some of these arguments turn into battles about who gets to produce and own knowledge. This seems especially strange in the social sciences where we are dealing with so much complexity that isolating causal factors often seems hopeless.
Instead, we might be better off using these moments to explore the story being told and the values involved. If we wish to reduce incarceration and improve the lives of at-risk kids, can we use this information and discussion as an opportunity to ask better questions, to commit to pursuing more helpful information, to clarify our values, to better live our values, to improve the lives of these kids and their parents?
These disputes come up all of the time in all sorts of areas. Unfortunately, these disputes too often resemble power struggles seeking professional and ideological domination and invalidation of “adversaries” rather than the expansion of knowledge and interventions. I’ve participated in these dynamics and I regret it. Today, I try to engage troubling findings in good faith to see what I can learn and add to the discussion.
One thought on “What should we do with objectionable findings?”
As a social scientist I really appreciate this. Knowledge production is certainly influenced by many forces that go well beyond whatever findings one may elucidate. Funding alone has a tremendous impact on what gets studied, and how it is studied. The addiction and recovery space is vastly under-theorized, primarily because those without lived experience have trouble grasping what drives seemingly irrational behaviors to the point of social decline and death. The relentless pursuit of a “smoking gun” of causation has led the field down some strange alleys, and probably lumped together people whose only commonality is substance use.
Science can give you a strand of limited truth. In order to weave a tapestry you need multiple strands. This is one reason why getting proof-of-concept, small-N, exploratory, and pioneering work out into the field as quickly as possible is so important. We often mistake rigor for meaningful research.
For example, a study of 150 people and how they utilize something like a recovery-friendly employer can illuminate a meaningful conversation and spark programming and endeavors that have immediate impacts on people’s lives, and such a small study is easily done without funding. Compare this to a large-scale, rigorous, double-blind study of medication formulations: the latter is useful to a specific field and a specific population and would be generously funded, but its results, while generalizable, do not necessarily benefit the population in question beyond a narrow slice (those requiring medication), whereas the former gives communities, people in recovery, and businesses something to work with immediately.