Friday, 7 January 2011

Reviewing medical literature, part 2a: Study design

It is true that the study question should inform the study design. I am sure you are aware of the broadest categorization of study designs -- observational vs. interventional. When I read a study, after identifying the research question I go through a simple 4-step exercise:
1. I look for what the authors say their study design is. This should be easy to find early in the Methods section of the paper, though that is not always the case. If it is available,
2. I judge whether it is feasible to answer the posed question with the stated design. For example, I spend a lot of time thinking about issues of therapeutic effectiveness and cost-effectiveness, and a randomized controlled trial exploring the efficacy of a therapy (how it performs under the idealized conditions of a trial) cannot adequately answer an effectiveness question (how it performs in everyday practice).
If the design of the study appears appropriate,
3. I structure my reading of the paper so as to verify that the stated design is, in fact, the actual design. If it is, I move on to evaluate the other components of the paper. If it is not what the authors say,
4. I assign my own understanding to the actual design at hand and go through the same mental list as above with that revised understanding in mind. (A sketch of this flow in code appears just below.)
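
For the programmatically inclined, here is a minimal sketch of that reading loop as a decision procedure. The question categories and the design-to-question mapping are simplified illustrations of my own, not a complete taxonomy:

```python
# A minimal sketch of the 4-step reading exercise as a decision procedure.
# The categories and the mapping below are illustrative assumptions only.

# Which designs can, in principle, answer which kinds of questions
# (e.g., an efficacy RCT cannot answer an effectiveness question):
CAN_ANSWER = {
    "efficacy": {"rct"},
    "effectiveness": {"pragmatic trial", "cohort"},
    "hypothesis generation": {"ecologic", "cross-sectional"},
}

def appraise(stated_design, actual_design, question_type):
    # Step 1: find the design the authors claim; if they never state one,
    # fall back on your own reading of the Methods.
    design = stated_design or actual_design

    # Step 2: can this design answer this kind of question at all?
    if design not in CAN_ANSWER.get(question_type, set()):
        return "design cannot answer the question -- stop here"

    # Step 3: verify that the stated design is the actual design.
    if design == actual_design:
        return "design checks out -- evaluate the rest of the paper"

    # Step 4: substitute your own reading of the design and repeat.
    return appraise(actual_design, actual_design, question_type)

# An efficacy RCT read against an effectiveness question fails at step 2:
print(appraise("rct", "rct", "effectiveness"))
```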

Here is a scheme that I often use to categorize study designs:
As already mentioned, the first broad division is between observational studies and interventional trials. An anecdote from my course this past semester illustrates that this is not always a straightforward distinction to make. In my class we were looking at this sub-study of the Women's Health Initiative (WHI), that pesky undertaking that sank the post-menopausal hormone replacement enterprise. The data for the study were derived from the 3 randomized controlled trials (RCTs) of HRT, diet, and calcium and vitamin D, as well as from the observational component of the WHI. So, is it observational or interventional? The answer is confusing enough to trip up even experienced clinicians, as became obvious in my class.

To answer the question, we need to go back to the definitions of "interventional" and "observational". To qualify as interventional, a study must make the intervention a deliberate part of the study design. The most common example of this type of study is the randomized controlled trial, the sine qua non of the drug evaluation and approval process. Here the drug is administered as a part of the study, not as part of the background of regular treatment. In contradistinction, an observational study is just that: an opportunistic observation of what happens to a group of people under ordinary circumstances, with no specific treatment predetermined by the study design.

Given that the above study looked at multivitamin supplementation as the main exposure -- something no participant was assigned to take -- the study was observational despite its use of data from RCTs. So, the moral of this tale is to be vigilant and examine the design carefully and thoroughly.
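
To make the definitional test concrete, here it is in miniature; the field name is my own, but the logic is exactly the rule above -- what matters is whether the study protocol assigned the exposure, not where the data came from:

```python
# The definitional test in miniature: a study is interventional only if the
# exposure under study is assigned by the study protocol itself. Where the
# data came from (even an RCT database) is irrelevant.

def classify_study(exposure_assigned_by_protocol: bool) -> str:
    return "interventional" if exposure_assigned_by_protocol else "observational"

# The WHI multivitamin sub-study: the data came from randomized trials, but
# multivitamin use was background behavior, not a protocol-assigned treatment.
print(classify_study(exposure_assigned_by_protocol=False))  # -> observational
```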

We often hear that observational designs are suited only to hypothesis generation. Well, this is both true and false: some observational studies can test hypotheses, while others are indeed relegated to generating them. Cross-sectional and ecologic studies, for example, are well suited to generating hypotheses to be tested by another design. To take a recent controversy, the debunked link between vaccinations and autism initially gained steam from the observation that as vaccination rates rose, so did the incidence of autism. A study that shows two events changing at the group/population level, either in the same or in opposite directions, is called "ecologic". Similar studies gave rise to the vitamin D and cancer association hypothesis, showing geographic variation in cancer rates according to the availability of sun exposure. But, as the vaccine-autism debacle demonstrates, running with the links from ecologic studies is dangerous, as they are prone to the so-called "ecological fallacy": despite a linked change of the two factors at the group level, there may be absolutely no connection between them at the individual level. So, don't let anyone tell you that they tested a hypothesis in an ecologic study!
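
To see the fallacy in numbers, here is a small deterministic sketch (all figures invented): within every region, exposed and unexposed people carry exactly the same risk, yet the regional averages trend together and a naive pooled comparison manufactures an association out of nothing:

```python
# The ecological fallacy in miniature. All numbers are hypothetical. In each
# region, exposed and unexposed people have IDENTICAL outcome risk, so the
# individual-level risk ratio is exactly 1.0 -- yet regions with more
# exposure also have more of the outcome.

regions = [
    # (people, fraction exposed, outcome risk shared by everyone in region)
    (1000, 0.10, 0.02),
    (1000, 0.40, 0.05),
    (1000, 0.70, 0.08),
]

exposed = unexposed = exposed_cases = unexposed_cases = 0.0
for n, frac, risk in regions:
    print(f"ecologic view: {frac:.0%} exposed -> {risk:.0%} outcome risk")
    e, u = n * frac, n * (1 - frac)
    exposed, unexposed = exposed + e, unexposed + u
    exposed_cases += e * risk    # the SAME risk applies to both groups,
    unexposed_cases += u * risk  # so the within-region risk ratio is 1.0

crude_rr = (exposed_cases / exposed) / (unexposed_cases / unexposed)
print("within-region risk ratio: 1.0 (exposure changes no one's risk)")
print(f"naive pooled risk ratio: {crude_rr:.2f}  <- the group-level mirage")
```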

Similarly, a hypothesis cannot be tested in a cross-sectional study, and, therefore, causation cannot be "proven". This is due to the "snapshot in time" property that defines a cross-sectional study: since the exposure and the outcome are (with few minor exceptions) measured at the same time, it is not possible to tell which came first, and thus no causal direction can be assigned to the exposure-outcome pair. These studies can merely help us think of further questions to test.
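
For concreteness, here is a tiny sketch with made-up counts showing why the snapshot cannot speak to causation: the arithmetic yields a perfectly real association, but the very same table arises no matter which way the causal arrow points:

```python
# A cross-sectional survey measures exposure and outcome at the same moment.
# All counts below are hypothetical.

#                  outcome present   outcome absent
a, b = 40, 160   # exposure present
c, d = 20, 280   # exposure absent

prevalence_odds_ratio = (a * d) / (b * c)
print(f"prevalence odds ratio: {prevalence_odds_ratio:.2f}")  # 3.50

# The association is real in the snapshot, but nothing in these four numbers
# says whether the exposure preceded the outcome -- the identical table is
# produced if X causes Y, if Y causes X, or if a third factor drives both.
# At best this flags a hypothesis to hand to a longitudinal design.
```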

So, to connect the design back to the question: if a study purports to "explore a link between exposure X and outcome Y", either an ecologic or a cross-sectional design is OK. On the other hand, if you see one of these designs used to "test the hypothesis that exposure X causes outcome Y", run the other way screaming.

We will stop here for now, and in the next post will continue our discussion of study designs. Not sure yet if we can finish it in one more post, or if it will require multiple postings. Start praying to the goddess of conciseness now!
