My weekly email from Science Daily arrived a few days ago. (Science Daily is daily but I get a weekly roll-up of the research posted). I always find it fascinating and this week's was no exception. What caught my eye was a piece titled "Long Term Use of Oral Bisphosphonates May Double Risk of Esophageal Cancer, Study Finds". It caught my eye because only six lines further down was another piece, this one titled "Drugs Used to Treat Osteoporosis [oral bisphosphonates] Not Linked With Higher Risk of Esophageal Cancer". I looked at both carefully in case they were different cuts on the same piece of research. The answer to that is no they weren't – at least as far as I could tell.
What this reinforced for me was that research presented as fact must always be carefully scrutinized. These two pieces of research seemed to reach very different conclusions. Note that I'm guessing here because I don't know how the research was conducted in either case. I'm just going by the headlines and the synopsis given in the two reports. What if both pieces were run in mainstream media – what are readers supposed to think? A likely question would be "Should I stop taking oral bisphosphonates?" The answer could be yes or no depending on which research report you were reading.
I felt a similar uneasiness when I was invited to complete a McKinsey survey on women in business. On completion I received one of their articles on the topic: "A business case for women". The survey asked me for my views on a number of issues, for example this question:
Over the past five years, which specific measures, if any, has your company undertaken to recruit, retain, promote, and develop women?
(Select all that apply)
Programs to encourage female networking and role models
Visible monitoring by the CEO and the executive team of the progress in gender diversity programs
Support programs and facilities to help reconcile work and family life (e.g., childcare, spouse relocation)
Inclusion of gender diversity indicators in executives' performance reviews
Assessing indicators of the company's performance in hiring, retaining, promoting, and developing women
Systematic requirement that at least one female candidate be in each promotion pool
Gender-specific hiring goals and programs
Options for flexible working conditions (e.g., part-time programs) and/or locations (e.g., telecommuting)
Encouragement or mandates for senior executives to mentor junior women
Gender quotas in hiring, retaining, promoting, or developing women
Skill-building programs aimed specifically at women
Programs to smooth transitions before, during, and after parental leaves
Performance evaluation systems that neutralize the impact of parental leaves and/or flexible work arrangements
Other, please specify:
No specific measures
Now, I answered this in good faith, but suppose I hadn't and had just randomly checked boxes? Who would know? What difference would it make to the solemn research report compiled from the completed surveys? Do the researchers build in a certain amount of flex for rogue answerers? What proportion of rogue answerers would it take to make the research report invalid? Does a write-up of a survey like this have any believability whatsoever?
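Out of curiosity, the dilution effect can be sketched with a toy simulation (a hypothetical illustration only, not anything I know about McKinsey's actual methodology). If some fraction of respondents tick a yes/no box at random, the observed "yes" rate drifts toward 50%, by roughly rogue_rate × (0.5 − true rate):

```python
import random

def observed_proportion(true_p, rogue_rate, n=100_000, seed=1):
    """Simulate one yes/no survey question.

    Honest respondents answer "yes" with probability true_p;
    rogue respondents tick the box at random (50/50).
    Returns the observed "yes" proportion across n respondents.
    """
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        if rng.random() < rogue_rate:
            yes += rng.random() < 0.5      # rogue: coin-flip answer
        else:
            yes += rng.random() < true_p   # honest answer
    return yes / n

# A question whose true "yes" rate is 70%: as the rogue share grows,
# the measured rate slides toward 50%.
for rogue in (0.0, 0.1, 0.3):
    print(f"rogue rate {rogue:.0%}: observed {observed_proportion(0.7, rogue):.3f}")
```

On this toy model, 30% random answerers would shift a true 70% result to about 64% — a visible bias, but not one a reader of the final report could ever detect, which is rather the point.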
I don't know the answers to any of these questions. But it does make me skeptical about the process. I'd like, at the very least, to see a note that a certain proportion of respondents will be contacted for qualitative input to supplement the quantitative survey. The survey doesn't require, or even invite, respondents to give contact information. But I'm guessing that some tracking mechanism enables McKinsey to see who in their database has responded.
Like the oral bisphosphonate reports, research can find, or be interpreted to find, very different results. Making readers aware of the potential limitations, or lack of validity, of the claims would be helpful. Such caveats usually appear in academic research reports but often fail to make their way into popular write-ups of them.