How Does Clinical Research Work? A Two-part Primer. Part 1: How to Ask a Research Question and Design a Study

Keeping Current © Peter Rosenbaum, 2015

Developed by CP-NET, an Integrated Discovery Program carried out in partnership with the Ontario Brain Institute.

General Overview

Clinical and health services research in childhood disability is essential if we are to move the field forward and have confidence that what we believe we ‘know’ is in fact based on credible studies. Much of today’s clinical practice in healthcare is based on the developments and research findings that have emerged in the modern era of developmental disability, especially in the past 30 years. But there is still so much more to learn!

Research is both rewarding and challenging in so many ways. It is usually difficult to get research funds to pay the people who work with researchers to collect and analyse the data. Much of the research we need in the field of developmental disability takes time and cannot be rushed.

Encouraging busy people (parents and children with disabilities) to participate in research is another challenge. Their lives are already full and hectic, and unless the studies to which they are being invited seem important to them, they may choose not to join. We are always excited that so many people are still keen to be involved in health services research; many of them understand how important it is for the whole field to be more effective, even if they themselves and their children may not benefit directly from what we are trying to learn. Of course, in today’s world of rapid advances in research and knowledge transfer activities, it is more and more realistic to hope that research participants may indeed benefit from what we learn in their lifetime.

We hope that this brief ‘25,000 foot’ overview of some issues in research in the field of developmental disabilities will provide readers with some insights and ideas the next time they hear about new research. We encourage people to ask critical, analytical questions and reflect on whether the findings are credible, relevant and important to them.

Introduction 

Health services research is popular, relevant to the community, and constantly in the news. Health-related research is of course important to everyone for many reasons. We are all concerned about our health and want to be well informed. We want to know about the latest breakthroughs and discoveries. In addition, in most western countries a considerable amount of health research is funded by public agencies such as the Canadian Institutes of Health Research (CIHR) in Canada, the National Institutes of Health (NIH) in the US and the Medical Research Council (MRC) in the UK. This means that we, the public, are paying for the research we all read about, and thus we should all have an additional personal stake in it!

We regularly read that “they” have discovered some new finding, after which the report discusses the possible importance of these findings for the community at large. What is less well understood is that the questions asked by the researchers, and the methods they use in their studies, will have a significant impact on the kind of study that is done, the findings that arise from it, the overall credibility of the research, and what the results might mean for people. 

This document has been written to offer readers some general ideas about how all of us can become critical readers of a research study. This may help people to decide first whether the study was well done and second whether the findings might be important to them.

The Research Question 

The first consideration in any study is to understand what specific question was asked. There is often considerable tension between the researchers’ wish to ask – and try to answer – big and important questions about health issues, and the necessity to be focused and specific in any individual study. This tension arises because the bigger the issue, the more challenging it can be for researchers to study it. On the other hand, when a question is very focused on a specific aspect of a specific condition, even the best and most credible findings are likely to be applicable only to a subgroup of the whole population. In other words, there is often tension between wanting to tackle the big issues and the reality that individual studies, and their answers, must be focused!

Thus, when one is listening to or reading an account of research findings it is important always to be clear what questions were asked, in order to decide whether that study and those findings are applicable to ourselves or our patients. Even if the topic is broadly relevant and of interest to people, this specific study might not apply to the issues important to us.

How was the Study Designed and Executed? 

In addition to the specificity of the questions being addressed in any research, the way in which studies are designed is very important. A brief outline of some of the common designs for developmental disability research may illustrate this point. 

(i) Many studies are “cross-sectional”. This means that we may look at a number of issues at one point in time, and try to identify and then understand the relationship or association between “this” and “that”. It is often the interpretation of the findings (the explanation) that causes the problem! A classic example from the field of childhood disability was the frequent observation of an association between childhood autism and parental distress and uncertainty about how to parent an apparently ‘disturbed’ child. In the 1950s and 60s, there was a belief (now completely discredited) that ‘refrigerator mothers’ (cold, distant and unengaged with their child) CAUSED the child to develop autism. The idea was that the mothers’ emotional state created the child’s difficulties. (This is a powerful example of the idea “If I hadn’t believed it I wouldn’t have seen it!” – in other words the interpretation of the observations was based upon a pre-existing belief.) 

In the 1970s, people began to understand that autism is a neurodevelopmental condition associated with differences in brain structure and function. It then became possible to see that the association between ‘this’ (i.e., the apparently strange parental behaviour) and ‘that’ (i.e., autistic development) was ‘the other way around’ – it was related to parental challenges in raising a developmentally and behaviourally challenging child. In other words, a different interpretation of the finding of these cross-sectional observations became possible with new information about autism, and new perspectives on the transactional nature of child development and parenting. 

The value of cross-sectional studies is that they enable researchers to see how things might be related. The limitation – often not recognized – is that one must never assume a causal connection between the things that have been measured and found to be ‘associated’. This is because, without knowing how things change and evolve over time, the fact that “this” and “that” are related does not tell us whether one caused the other, whether both were caused by an external factor, or whether the association arose simply by chance. Thus, cross-sectional observational studies only allow researchers to find possible associations; from these ideas they can then explore possible ‘explanations’ in other kinds of research studies (as described below).
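For readers who like a concrete illustration, the short Python sketch below uses entirely invented numbers (no real study data) to show how a hidden ‘third factor’ can make two measures look strongly associated in a single cross-sectional snapshot, even though neither one causes the other.

```python
import random

random.seed(42)

# A hypothetical hidden "third factor" (say, the severity of an underlying
# condition) independently drives both of the things we happen to measure.
n = 1000
hidden_factor = [random.gauss(0, 1) for _ in range(n)]

# "this" and "that" each depend on the hidden factor plus independent noise;
# neither one has any causal effect on the other.
this = [h + random.gauss(0, 0.5) for h in hidden_factor]
that = [h + random.gauss(0, 0.5) for h in hidden_factor]

def correlation(x, y):
    """Pearson correlation, computed by hand to keep the sketch dependency-free."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Prints a strong correlation (about 0.8) despite there being no causal link.
print(f"correlation(this, that) = {correlation(this, that):.2f}")
```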

(ii) A study may be “retrospective”. This means that we have ‘looked back in time’ at the reports in the medical charts, or at other information, and have tried to extract the information in which we are now interested. There are some advantages to this kind of study, including the fact that the information of interest has already been collected and might be readily available. It is also relatively fast compared with some other research approaches.

On the other hand, the major limitation is that the specific things in which we are now interested have often not been collected, or at least not collected systematically or recalled accurately; and other things we might want to know may never have been recorded in the first place. This would be equivalent to someone saving a particular currency (for example, lira) because they are planning a trip to Italy, and then deciding to go to another country where the lira is not the right currency – or even going to Italy, where people now use the euro and not the lira!
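The small Python sketch below makes this limitation concrete with made-up chart records: the field we now care about was never collected systematically, so only some of the charts can be used.

```python
# Hypothetical chart records extracted retrospectively. The field we now care
# about ("feeding_difficulties") was never recorded systematically, so many
# charts simply lack it -- a common frustration in retrospective designs.
charts = [
    {"id": 1, "birth_weight_g": 3100, "feeding_difficulties": True},
    {"id": 2, "birth_weight_g": 2450},               # field never recorded
    {"id": 3, "feeding_difficulties": False},        # weight not charted
    {"id": 4, "birth_weight_g": 3300},
]

usable = [c for c in charts if "feeding_difficulties" in c]
print(f"{len(usable)} of {len(charts)} charts contain the field we now need")
```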

A clinical illustration of the limitations of retrospective studies from developmental disability might be the finding that many children with cerebral palsy seem to have had a ‘difficult’ delivery. Before the modern use of ultrasound during pregnancy it was assumed that the difficult delivery associated with (later) cerebral palsy was the ‘cause’ of the disability. Retrospective studies of children with impairments might indeed have revealed a relatively high frequency of ‘difficult’ deliveries or infant problems in the newborn period, and led people to conclude that the delivery (and the medical staff) were at fault for the later impairment.

What is now much better understood is that a large majority of children who develop cerebral palsy almost certainly had factors before birth that predisposed them to have a ‘difficult’ birth or problems adjusting to the outside world. Without knowing about the possibility that these ‘problem’ deliveries were associated with earlier problems – and in the absence of information such as abnormalities in the ultrasounds of babies while still in the womb – it would be easy to draw a false conclusion from this kind of retrospective study. 

(iii) A strong research design is provided by a “prospective study”. This is one in which we ‘look forward in time’, knowing specifically what we want to look at, and then collect those data systematically over a number of time points. We usually try to target a ‘cohort’ of people who are at the same point in their experience with a condition. We might, for example, want to explore the journey of parents who have recently found out that their child has a developmental impairment. For such a study we would want to assemble a ‘cohort’ of parents just starting that journey, and travel forward with them over time to learn from them about the nature, challenges and joys of the journey. Unlike a retrospective study in which we ask people to recall feelings and experiences (often years later), there is a ‘real-time’ element to such a study, and we are not relying on memories that might be faulty and are certainly likely to be influenced by later events.

The Ontario Motor Growth study (http://motorgrowth.canchild.ca/en/Research/omg.asp; Rosenbaum et al., 2002) is an example of a Canadian study that used a prospective longitudinal design. The purpose of the research was to look at the gross motor development of a large and varied population (cohort) of children with cerebral palsy (657 in total), randomly selected to be in this study. Over a period of more than four years, these children were assessed every six or 12 months (depending on their ages) by trained therapists using a standardized assessment tool. This allowed the researchers to put together several observations from each child into ‘mini growth curves’ of that child’s motor progress. By combining all the findings for the children in each of several functional levels it was possible to create curves of motor development that are now widely used around the world.

(Note that had the researchers used a retrospective study design to collect these motor development data from the children’s charts they almost certainly would have been frustrated by the missing data, the uncertainty about the data quality, and the probability that the children with more available data were those with more complex conditions, who are usually seen more often. Such a study might then have been biased by these clinical realities.) 
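As a concrete (and deliberately simplified) illustration of the bookkeeping behind a prospective design, the Python sketch below groups invented repeated assessments into ordered per-child ‘mini trajectories’. The real Ontario Motor Growth analyses used far more sophisticated statistical curve-fitting; this is only a sketch of the idea.

```python
from collections import defaultdict

# Hypothetical prospective records: (child_id, age_in_years, motor_score).
# In a real study these would come from standardized assessments collected
# at planned time points; every number here is invented for illustration.
observations = [
    ("child_A", 2.0, 41), ("child_A", 2.5, 47), ("child_A", 3.0, 52),
    ("child_B", 2.0, 30), ("child_B", 3.0, 38), ("child_B", 4.0, 43),
]

# Group each child's repeated assessments into an ordered mini-trajectory.
trajectories = defaultdict(list)
for child, age, score in observations:
    trajectories[child].append((age, score))

for child, points in sorted(trajectories.items()):
    points.sort()  # order each child's assessments by age
    ages = [a for a, _ in points]
    scores = [s for _, s in points]
    # A crude per-child summary: total score gain per year of follow-up.
    rate = (scores[-1] - scores[0]) / (ages[-1] - ages[0])
    print(f"{child}: {len(points)} assessments, about {rate:.1f} points/year")
```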

The value of this prospective study design is that the questions are asked before the data are collected; the data are collected systematically for that purpose; and one can look repeatedly at the same issues and people in order to have a good idea of the changes or stability that are associated with that particular aspect of people’s lives.

The limitations of prospective studies include the fact that they take “real” time and as a result are complex and expensive to undertake. They do, however, provide a tremendously useful perspective on a situation, which is very difficult to gain in other ways.

(iv) “Randomized clinical trials” (RCTs) are the strongest design available for ‘human experiments’. RCTs involve assembling a ‘cohort’ of people, allocating them, at random, to receive one or another intervention (or perhaps none at all), and then looking at the effect of that particular intervention over time. The intervention may be chemical (e.g., a drug treatment), physical (e.g., physical or occupational therapy), psychological (e.g., counseling or group therapy), technical (e.g., an insulin pump or a brace for an impaired limb), or combinations of these kinds of treatments. The design is meant to assess the effects of the ‘active ingredient’ (the intervention being tested) compared either to a different intervention or to no added treatment.

By using random assignment of people to various kinds of treatment, we hope that we can be reasonably sure that all the rest of the factors that might influence the outcome of interest are similar between the groups. In that case we are in a relatively strong position to assume that any differences we observe between the groups are a result of (caused by) the treatment that one group received. (Before the study starts we will, of course, have asked for and received permission from a Research Ethics Review Committee to do the study, and to guard against harm.)
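To make the idea of ‘balancing by chance’ concrete, the Python sketch below (all numbers invented) allocates 200 hypothetical children to two groups at random. A background characteristic such as age, which plays no role in the allocation, ends up similar in the two groups on average – which is exactly what lets us attribute later differences to the treatment.

```python
import random

random.seed(7)

# Hypothetical participants, each with a background characteristic (age)
# that might influence the outcome but plays no part in the allocation.
participants = [{"id": i, "age": random.uniform(2, 12)} for i in range(200)]

# Random allocation: shuffle the list, then split it in half.
random.shuffle(participants)
treatment = participants[: len(participants) // 2]
control = participants[len(participants) // 2 :]

def mean_age(group):
    return sum(p["age"] for p in group) / len(group)

# With enough participants, chance alone tends to balance the groups,
# so a later difference in outcomes is unlikely to be explained by age.
print(f"mean age, treatment group: {mean_age(treatment):.1f}")
print(f"mean age, control group:   {mean_age(control):.1f}")
```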

People talk about ‘blinding’ or ‘masking’ within an RCT. This means that, whenever possible, the people providing the treatment, the people receiving it, and the people measuring the results are all unaware of the specific details of which kind of treatment anyone got. This is always a good idea, and is meant to prevent or at least minimize the possibility of bias (meaning a systematic distortion of the facts based on expectations or problems in the way the study is done). For example, if patients ‘know’ (or even believe) that they are receiving a new and hopeful treatment, they may alter both their beliefs about their well-being and their behaviour (e.g., perhaps doing their therapies more regularly and effectively to speed up the expected effects of the new treatment). If the researchers offering treatment believe that the new approach is superior to the old they may offer additional support, encouragement and even other treatments. They may also ‘see’ and report benefits because they believe these are happening. The people who measure the outcomes of the experimental and conventional treatments (often trained research and clinical staff) also need to be masked to who is receiving what treatment – and perhaps even be unaware of the specific questions being studied – so that they too can be as objective as possible in their assessment of the people who are in the study. The tools used to measure the outcomes should be as free as possible of subjective interpretation.
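One practical device that supports masking is worth sketching: each participant is randomly assigned to an arm, but the assignment is stored in a sealed key held by someone outside the study team, and everyone else works only with neutral kit codes. The Python sketch below is a hypothetical illustration of that idea, not a description of any particular trial’s procedures.

```python
import random

random.seed(11)

participants = [f"P{i:03d}" for i in range(1, 9)]

# Randomly assign each participant to 'real' or 'sham' treatment, storing
# the assignment only in a sealed key held outside the study team.
sealed_key = {pid: random.choice(["real", "sham"]) for pid in participants}

# Families, treating staff and outcome assessors see only neutral kit codes
# that carry no hint of which arm anyone is in.
kit_numbers = random.sample(range(100, 1000), len(participants))
kit_codes = {pid: f"KIT-{n}" for pid, n in zip(participants, kit_numbers)}

for pid in participants:
    print(f"{pid} -> {kit_codes[pid]}")  # what the study team sees day to day

# Only after every outcome has been measured is the sealed key opened
# and the arms compared, e.g.: print(sealed_key)
```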

Randomized clinical trials are an excellent approach to particular kinds of research questions, but are not applicable everywhere. This design is best used when the intervention is specific and relatively discrete (e.g., a new medication or Botox injections) and where the outcomes of interest are both measurable and likely to appear relatively quickly. For this reason certain clinical trials in the field of childhood disability have been very successful and others are almost impossible to carry out. The challenges of RCTs include the fact that many treatments are complex, the effect of the treatment may take a very long time to appear, and the outcomes may be difficult to measure.

A notable Canadian RCT (Collet et al., 2001; Hardy et al., 2002) explored whether the benefits claimed for hyperbaric (high-pressure) oxygen therapy (HBOT) for children with cerebral palsy were actually caused by the HBOT. The study compared the value of ‘true’ treatments with ‘pretend’ (sham) treatments, in which there was a bit of added pressure but not the full treatment as recommended by the proponents of HBOT. The children, parents, researchers and assessors were all masked regarding which children were receiving ‘real’ HBOT and which were in the ‘sham’ group. Even outside researchers who were invited to look at the results were masked until all the analyses had been done. With such a strong and well-controlled study design and execution, people can be confident in the validity of the findings (in this case, that HBOT does not work better than no HBOT).

(v) The four kinds of research described briefly here are all ‘quantitative’ studies: we count things and report number values, averages, and so on. There are also a variety of “qualitative” research methods. These involve interviewing key individuals, or holding focus groups with people whose perspectives are important. Such data collection is usually audio-recorded (with permission, of course), then transcribed and read many times by the researchers. What is learned from qualitative studies gives us considerable insight into and understanding of the issues, and is in many ways complementary to the numbers-based research that is usually better known.

The second part of this primer looks at the issues involved in measuring the things we want to assess when we do a study.

Want to know more?

For questions about this Keeping Current, please contact Dr. Peter Rosenbaum at rosenbau@mcmaster.ca. This Keeping Current was developed as part of the Ontario Brain Institute initiative CP-NET.

Interested in Reading More? 

1. Streiner, DL, & Norman, GR. (2009). PDQ Epidemiology, 3rd Edition. Shelton, CT: People’s Medical Publishing House.
2. Peninsula Cerebra Research Unit (PenCRU). (2014). http://www.pencru.org/ – See the “What is Research?” section.

Acknowledgements 

The author gratefully acknowledges and thanks his colleagues Dayle McCauley and Dianne Russell, and parents Francine Buchanan and Oksana Hlyva, for their time and thoughtful feedback on this Reflection.

References 

Collet JP, Vanasse M, Marois P, et al. (2001) Hyperbaric oxygen for children with cerebral palsy: a randomised multicentre trial. HBOT-CP Research Group. Lancet 357: 582–6.

Hardy P, Collet JP, Goldberg J et al. (2002) Neuropsychological effects of hyperbaric oxygen therapy in cerebral palsy. Dev Med Child Neurol 44: 436–46. 

Rosenbaum PL, Walter SD, Hanna SE, Palisano RJ, Russell DJ, Raina P, Wood E, Bartlett D, Galuppi B. (2002) Prognosis for gross motor function in cerebral palsy: creation of motor development curves. JAMA 288(11): 1357–63.