Blog, due 19/2/12
Comments, due 22/2/12
“Everything we think we know may be wrong. The correct results could be sitting in people’s file drawers because they can’t get them published” (David Lehrer).
Whether negative results, otherwise known as non-significant results, should be published is a controversial issue. The All Results Journal has suggested that approximately 60% of experiments fail to produce the desired significant findings, yet the proportion of these results that is published is extremely low. In some parts of the USA, 95-100% of published studies report positive results, indicating that very few negative results make it into print (Morgan, 2010).
Many of you may be wondering why I am implying this is bad. We conduct experiments to try to find a difference between groups, and these are the results being published, so what’s the problem? Well, as this is a stats blog I’d better include a bit of maths. If the alpha level is set at .05, then there is a 1 in 20 chance that a “significant” difference will appear purely by chance, even when no real effect exists. So if 20 such experiments are run, we would expect, on average, about one of them to produce a significant finding by chance alone. A study with positive results therefore looks convincing when it stands alone, but if it were presented alongside the other 19 studies with negative results (which are unlikely to be published), we would view it very differently, as explained by Steffer (2011). This demonstrates how the publication bias towards positive results distorts our perspective on positive findings and creates a biased, not a balanced, outlook.
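The arithmetic above is easy to check with a short simulation. This is an illustrative sketch only: the 20 experiments and the alpha level of .05 are the assumptions from the paragraph above, and the experiments are assumed to be testing true null hypotheses (so their p-values are uniformly distributed).

```python
import random

random.seed(1)
ALPHA = 0.05
N_EXPERIMENTS = 20
N_SIMULATIONS = 10_000

# On average, ALPHA * N_EXPERIMENTS null studies come out "significant"
# by chance: 0.05 * 20 = 1, i.e. about one false positive per 20 studies.
expected_false_positives = ALPHA * N_EXPERIMENTS

# Analytic chance that AT LEAST one of the 20 null studies is significant:
p_at_least_one = 1 - (1 - ALPHA) ** N_EXPERIMENTS  # about 0.64

# Confirm by simulation: under a true null, p-values are uniform on (0, 1),
# so each study is "significant" with probability ALPHA.
runs_with_a_hit = 0
for _ in range(N_SIMULATIONS):
    p_values = [random.random() for _ in range(N_EXPERIMENTS)]
    if any(p < ALPHA for p in p_values):
        runs_with_a_hit += 1

print(f"Expected false positives per 20 studies: {expected_false_positives}")
print(f"Analytic P(at least one significant):    {p_at_least_one:.3f}")
print(f"Simulated P(at least one significant):   {runs_with_a_hit / N_SIMULATIONS:.3f}")
```

In other words, if only the one “significant” study out of the 20 reaches publication, readers see a positive finding that had roughly a 64% chance of occurring somewhere in the batch purely by luck.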
Journal publishers say that negative and positive results are considered equally when deciding what to publish; however, research has shown that negative results take significantly longer to be published than positive ones, if they are published at all (Stern & Simes, 1997). The graph below, taken from their research, shows that the proportion of results remaining unpublished stays higher for negative results than for positive results over increasing periods of time.
Awareness of publication bias is increasing, and there are now journals dedicated to publishing negative results, such as the Journal of Articles in Support of the Null Hypothesis in psychology. However, as a researcher planning a study, are you likely to look for literature in a journal of negative results rather than in a more reputable journal such as the Journal of Experimental Psychology, which is more likely to publish positive results? I doubt it!
So what are the major implications of not publishing negative results? For starters, it can hinder scientific research. If studies are not published, different researchers will continue to run almost identical studies and keep finding the same negative results, which remain unpublished, wasting a great deal of time, effort and money. It can be just as important to know that one variable does not affect another as to know that it does, or to show that something is wrong with the paradigm being investigated. Even if these results do not appear relevant to current research, future research can stem from them and may lead to other findings, whether positive or negative (Rice, 2011). Wider publication of negative results might also take the pressure off researchers to find positive results in their data, reducing the likelihood of exaggerated findings and of data manipulation such as reporting only certain participants or parts of the methodology, and thereby creating a more realistic picture of the findings. No matter what, the lack of publication of negative results creates a biased view of the literature and allows chance Type I errors to stand unchallenged.
But let’s try to view this from another perspective, because, other than positive results being far more exciting, there must be other reasons why this bias exists. Firstly, can you imagine the size of journals if they published the 60% of negative results as well as the 40% of positive results they already publish (assuming all studies are reported)? It would be a nightmare for researchers, or students, to sift through all this literature to find the material that is relevant and useful. Writing up a report for publication takes a lot of time, so would it be more productive to leave these studies as they are and begin a new investigation, instead of spending time writing up a failed one? Negative results can also have psychological effects. The reputation of a discipline can be greatly damaged if the majority of its published results are negative. People may stop giving money to a research charity if that money appears to be wasted on studies that find nothing significant. This can reduce motivation and faith in research, which matters especially in areas such as treatments for cancer, where the hope that research will find a cure is what can spur patients on.
So what can be done to reduce this problem? It has been suggested that researchers should enter their hypothesis and methodology into a database before conducting a study, and must then insert the data found afterwards regardless of the outcome, even if the study is never written up as a report (Schooler, 2011). This method has already been used successfully in clinical and educational research in the US.
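To make the registration idea concrete, here is a minimal sketch of what such a registry entry might look like. The field names and the `RegisteredStudy` class are illustrative assumptions for this post, not the schema of any real registry; the point is simply that the hypothesis and method are locked in before data collection, and the outcome is recorded whether or not it is significant.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RegisteredStudy:
    """Hypothetical pre-registration record: the hypothesis and method
    are entered before the study runs; results are appended afterwards,
    significant or not, so negative outcomes stay visible."""
    study_id: str
    hypothesis: str
    methodology: str
    results: Optional[str] = None       # filled in after the study runs
    significant: Optional[bool] = None  # recorded either way

    def report_outcome(self, results: str, significant: bool) -> None:
        self.results = results
        self.significant = significant

# Usage: the entry exists, and remains findable, even for a null result.
study = RegisteredStudy(
    study_id="S-001",
    hypothesis="Treatment X reduces symptom scores",
    methodology="Randomised two-group comparison",
)
study.report_outcome("No reliable difference between groups", significant=False)
print(study.significant)  # False, but the record is still on file
```

The design choice that matters here is that `results` starts empty: a registry of this shape makes the “file drawer” visible, because registered studies with no reported outcome stand out.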
After the information presented above, I hope that, as fellow students, you can now appreciate how all findings are important and should have an equal opportunity to be published, in order to create a more balanced and realistic view of research. Negative results make up a large proportion of the data obtained, yet they are not adequately represented in the literature, which has a major impact on our perceptions and can influence future research.
This article shows many varying views on whether negative results should be published.
Blog, due Sunday 5th Feb
Comments, due Wednesday 8th Feb
A single-subject design is an experimental design in which one participant serves as both the control and the treatment condition. It is important to distinguish a single-subject design from a case study. Case studies also analyse a single participant, but they are descriptive, often qualitative, methods used mainly for forming hypotheses, whereas a single-subject design is experimental and quantitative, so cause-and-effect can be established.
A series of observations of one participant is made over time during various phases. In most cases an ABAB design is used, where phases alternate between a baseline phase and a treatment phase. An alternating-treatments design can also be used to compare different treatments as well as the recurrence of one treatment, as discussed by Barlow and Hayes (1979). In the baseline phase, observations are made with no treatment in place; this serves as the participant’s control condition. Once a treatment has been administered, the treatment phase begins, and many observations are made to establish the effect of the treatment relative to the baseline phase. The treatment is then removed, returning to a second baseline phase. The aim of this phase is to establish whether the effects in the treatment condition were due to the intervention or to some confounding variable: if the effect was due to the intervention, the second baseline phase should be equivalent to the first once the treatment is withdrawn. This is not always seen when a treatment has long-lasting effects on a patient. After the second baseline phase, the treatment is administered again and compared to the first treatment phase, to confirm that it was the treatment causing the effects in the first interval. The image below, taken from Horner, Carr, Halle, McGee, Odom and Wolery (2005), shows an example of an ABAB design.
Single-subject designs are not usually analysed with the traditional statistical methods used in other types of research. The data are frequently presented graphically for visual inspection. The graph is assessed on two main features, level and trend. The level is the magnitude of the participant’s responses within a phase, which should approximate a horizontal line. A trend occurs when the differences from one measurement to the next follow the same direction and magnitude, shown on a graph by points clustering along a sloping line. These two features are described in terms of stability, based on how consistent the levels and trends are.
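Level and trend can also be quantified very simply: the level of a phase is just the mean of its measurements, and the trend is the slope of a least-squares line fitted through them. Here is a minimal sketch using invented ABAB-style scores (the numbers are made up purely for illustration):

```python
import statistics

def level(scores):
    """Level: the central magnitude of responses within a phase."""
    return statistics.mean(scores)

def trend(scores):
    """Trend: least-squares slope of score against observation number.
    Near zero means a stable (flat) phase; a large slope means the
    measurements are drifting in one direction."""
    n = len(scores)
    mean_x = (n - 1) / 2
    mean_y = statistics.mean(scores)
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(scores))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Invented example: a flat baseline (A) phase and an improving treatment (B) phase.
baseline_1 = [8, 9, 8, 9, 8]
treatment_1 = [5, 4, 3, 3, 2]

print(level(baseline_1), level(treatment_1))  # 8.4 vs 3.4: a clear change in level
print(trend(baseline_1))                      # 0.0: stable baseline
print(trend(treatment_1))                     # -0.7: downward trend under treatment
```

A clear difference in level between adjacent phases, with a stable (near-zero) trend in each baseline, is the pattern a visual inspection of an ABAB graph is looking for.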
A major practical use of the single-subject design is the N-of-1 trial. This is a clinical trial in which the participant serves as both the control and the treated patient. It can be run in the same way as the ABAB design described above, or adapted to establish the effects of different forms of a drug, or to test a drug against a placebo. These trials are flexible towards the individual, and the rate of success for each individual is much higher than with traditional group methods of testing (Kravitz et al., 2008). However, the N-of-1 trial is not used very often today, even though it has the potential to be much more effective than other methods of treatment. Long-term, detailed examination of each individual’s reactions to particular drugs is costly and time-consuming. On the other hand, drugs are often prescribed on the basis of group research when an individual might benefit more from a different, more cheaply produced drug, or might not need the drug for as long as the average patient, so money could also be saved. Kravitz et al. (2008) argue that it is unjust that methods such as X-rays are readily used for diagnostic precision, while clinicians will not use the N-of-1 trial to increase therapeutic precision and to facilitate modern clinical care.
Single-subject designs have several advantages, the main one being the potential to increase the success of treatments for individuals. Cause-and-effect can be established with only one participant, reducing the need for the standardized treatments required in group research. The design is also well suited to observing long-lasting effects that group research would not usually track. However, there are also several limitations. With only one participant, it is difficult to generalise, as individual differences could have a major impact on the effectiveness of the intervention. Multiple observations can also lead to sensitization and carry-over effects that alter the measurements of behaviour. Reliance on a graph for interpretation can limit the research too, since visual inspection is subjective and may require large, immediate changes before any effect is perceived; however, statistical analysis can be carried out alongside the visual representation.
So, to conclude, single-subject designs can be very useful, especially in the clinical field, for finding cause-and-effect relationships with few participants and for establishing longer-term results than traditional group methods usually investigate. However, it is hard to generalise these results so that the research has an impact on a larger population.
For more information, see Gravetter & Forzano (2009), Research Methods for the Behavioural Sciences, Chapter 14.