For reasons that still make me too cross to mention (sorry if you’ve been on the receiving end of one of my rants over the past week…), I’ve been thinking a lot about the academic training programme for the foundation years.
And then I read Daniel Lumsden’s piece in the RCPCH bulletin this month.
As I read it, his point is that all trainees need an understanding of research in order to deliver the best care to their patients, and to ensure the successful delivery of research projects.
I’m really not going to argue with that.
What bothers me most is what this concept of “understanding research” actually means.
A few months ago, I was asked by a colleague for some help in critiquing a paper for journal club. Her problem? She didn’t understand the statistical tests being used, and thought I might be able to help. Of course I couldn’t: I’ve never really understood statistics (I’m one of those people who keeps a notebook with the definitions of sensitivity and specificity written down…); I certainly wasn’t going to understand some convoluted test involving curves and words I’d never heard of before.
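(If, like me, you need the notebook version: sensitivity and specificity drop straight out of a two-by-two table. Here’s a minimal sketch, with entirely made-up numbers for a hypothetical diagnostic test:)

```python
# Hypothetical counts from a 2x2 diagnostic table (made-up numbers):
tp = 80   # test positive, disease present (true positives)
fn = 20   # test negative, disease present (false negatives)
fp = 30   # test positive, disease absent (false positives)
tn = 870  # test negative, disease absent (true negatives)

# Sensitivity: of the patients who have the disease, what fraction does the test catch?
sensitivity = tp / (tp + fn)

# Specificity: of the patients who don't have the disease, what fraction does the test clear?
specificity = tn / (tn + fp)

print(sensitivity)  # 0.8
print(specificity)  # ~0.967
```

Nothing deeper than a fraction of a column total; which is rather the point of the notebook.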
What’s more, I didn’t need to know. It didn’t take long to work out that this paper (no naming and shaming, I’m afraid!) was reporting a study that was badly designed, and that had chosen an inappropriate methodology to answer its research question. None of that required an in-depth knowledge of statistics. But it did require a basic understanding of research methods and purpose.
Now, I don’t think that most “grown-up” researchers can argue about Popper or post-modernism in research. But maybe that’s part of the problem. Our research training is organised and structured by scientists who structure their research in a particular way. We understand randomised controlled trials, but less about good epidemiology. We understand chasing a single “truth” more than we appreciate what that truth means. We’re more au fait with numbers than with people.
Health services research, in all its unwieldy complexity, is a closed book to most. Restructuring a service, a clinic, or an outreach group is going to have a more immediate impact on our patient care than the outcomes of a phase 1 trial. But how many of us know how to appraise this, or can critique a paper on this kind of topic?
Understanding the research methodology is essential to understanding research.
I’d go further. I think that we all need an understanding of the history of research, and of why our standards have developed in the way they have. We need to accept that there are different schools of thought; that not all research is hunting for a single truth; that generalisability isn’t the goal of all studies; that population statistics are applicable only to that population; that opinions and beliefs have an impact on which research questions are asked (and which are funded).
Only if we appreciate these things (and I think it would take a lifetime to understand them properly) can we spend time thinking about the fine detail of statistical tests. Without understanding why a study has been designed in a certain way, how can we possibly hope to critique the fine detail? There is simply no point looking at the p-value of a study where the question is irrelevant. Using the wrong statistics might get you the wrong answer from the data you have available. Asking the wrong question renders that data meaningless, regardless of the tests you apply to it.
If the Keogh report is right, and junior doctors are “the clinical leaders of today”, then we need to understand the systems that we work with. Research training needs to move on from the days of RCTs, and embrace the complexity of structural change. It’s a big ask: this work is complex, involving people and systems. There aren’t simple designs or answers.
But the principles are the same regardless of the research approach: what you’re asking is important; how you answer it is important. Realising that the answer you get at the end depends on the question you ask is something that applies to all forms of research.
Ultimately, understanding this makes us better doctors. I think it goes a long way to making us better researchers too.