What is educational research? It is a cyclical process of steps that typically begins with identifying a research problem or issue of study. It then involves reviewing the literature, specifying a purpose for the study, collecting and analyzing data, and forming an interpretation of the information. This process culminates in a report, disseminated to audiences, that is evaluated and used in the educational community (Creswell, 2002).
Experimental research is commonly used in the sciences, including sociology, psychology, physics, chemistry, biology and medicine. It is a collection of research designs that use manipulation and controlled testing to understand causal processes. Generally, one or more independent variables are manipulated to determine their effect on a dependent variable. The experimental method is a systematic and scientific approach to research in which the researcher manipulates one or more variables and controls and measures any change in other variables.
Experimental Research is often used where:
1. There is time priority in a causal relationship (cause precedes effect)
2. There is consistency in a causal relationship (a cause will always lead to the same effect)
3. The magnitude of the correlation is large.
The term experimental research has a range of definitions. In the strict sense, experimental research is what we call a true experiment. This is an experiment in which the researcher manipulates one variable and controls/randomizes the rest. It has a control group, the subjects are randomly assigned between the groups, and the researcher tests only one effect at a time. It is also important to know which variable(s) you want to test and measure.
A very wide definition of experimental research, or a quasi-experiment, is research in which the scientist actively influences something to observe the consequences. Most experiments fall somewhere between the strict and the wide definition.
A rule of thumb is that physical sciences, such as physics, chemistry and geology tend to define experiments more narrowly than social sciences, such as sociology and psychology, which conduct experiments closer to the wider definition.
AIMS OF EXPERIMENTAL RESEARCH
Experiments are conducted in order to predict phenomena. Typically, an experiment is constructed to explain some kind of causation. Experimental research is important to society: it helps us improve our everyday lives.
Identifying the Research Problem
After deciding on a topic of interest, the researcher tries to define the research problem. This helps the researcher focus on a narrower research area and study it appropriately. Defining the research problem also helps you formulate a research hypothesis, which is tested against the null hypothesis.
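As a sketch of how a research hypothesis might be tested against the null hypothesis, the following Python snippet computes Welch's t statistic for two small groups of test scores. The scores are invented purely for illustration, not drawn from any real study:

```python
import math

def two_sample_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    se = math.sqrt(var_a / n_a + var_b / n_b)
    return (mean_a - mean_b) / se

# Hypothetical scores: experimental group vs control group.
experimental = [78, 85, 82, 88, 90, 84]
control = [72, 75, 70, 74, 78, 73]
t = two_sample_t(experimental, control)
print(f"t = {t:.2f}")  # a large |t| is evidence against the null hypothesis
```

A large absolute t value suggests the observed difference between the groups is unlikely under the null hypothesis of no difference; in practice the statistic would be compared against a t distribution to obtain a p-value.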
The research problem is often operationalized, to define how to measure it. The results will depend on the exact measurements the researcher chooses, and the problem may be operationalized differently in another study to test the main conclusions. An ad hoc analysis is a hypothesis invented after testing is done, to try to explain contrary evidence. A poor ad hoc analysis may be seen as the researcher’s inability to accept that his/her hypothesis is wrong, while a good ad hoc analysis may lead to more testing and possibly a significant discovery.
Constructing the Experiment
There are various aspects to remember when constructing an experiment. Planning ahead ensures that the experiment is carried out properly and that the results reflect the real world as closely as possible.
Sampling Groups to Study
Sampling groups correctly is especially important when we have more than one condition in the experiment. One sample group often serves as a control group, whilst others are tested under the experimental conditions.
Deciding on the sample groups can be done using many different sampling techniques. Population samples may be chosen by a number of methods, such as randomization, “quasi-randomization” and pairing.
Reducing sampling error is vital for obtaining valid results from experiments. Researchers often adjust the sample size to minimize the chance of random errors.
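The effect of sample size on random error can be illustrated with the standard error of the mean, sigma / sqrt(n). The sketch below assumes a population standard deviation of 15 purely for illustration:

```python
import math

def standard_error(sigma, n):
    """Standard error of the mean: shrinks as sample size grows."""
    return sigma / math.sqrt(n)

# Assumed population standard deviation of 15 (an illustrative value).
for n in (25, 100, 400):
    print(n, standard_error(15, n))  # 25 -> 3.0, 100 -> 1.5, 400 -> 0.75
```

Quadrupling the sample size halves the standard error, which is why larger samples give more stable estimates, though with diminishing returns.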
Here are some common sampling techniques:
• probability sampling
• non-probability sampling
• simple random sampling
• convenience sampling
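As a brief illustration of the contrast between simple random sampling and convenience sampling from the list above, the Python sketch below draws both kinds of sample from a hypothetical population of 100 numbered subjects:

```python
import random

population = list(range(1, 101))  # a hypothetical population of 100 subjects

# Simple random sampling: every subject has an equal chance of selection.
random.seed(42)  # fixed seed so the sketch is reproducible
random_sample = random.sample(population, 10)

# Convenience sampling: take whoever is easiest to reach (here, the first 10).
convenience_sample = population[:10]

print(random_sample)
print(convenience_sample)
```

The convenience sample is cheaper to collect but systematically over-represents whoever happens to be "first in line", which is exactly the kind of sampling bias randomization avoids.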
TYPICAL DESIGNS AND FEATURES IN EXPERIMENTAL DESIGN
• Pretest-Posttest Design
Measures the groups before the manipulation begins (pretest) and again afterwards (posttest), so that both any pre-existing differences between the groups and the effect of the manipulation can be assessed. Note that pretests themselves sometimes influence the effect.
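A minimal sketch of how pretest-posttest data for one group might be summarised as gain scores, using invented scores:

```python
# Hypothetical pretest and posttest scores for one group of five subjects.
pretest = [55, 60, 52, 58, 61]
posttest = [63, 66, 59, 64, 70]

# Gain scores: the change tentatively attributed to the manipulation.
gains = [post - pre for pre, post in zip(pretest, posttest)]
mean_gain = sum(gains) / len(gains)
print(gains, mean_gain)
```

In a design with a control group, the experimental group's mean gain would be compared against the control group's mean gain rather than interpreted on its own.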
• Control Group
Control groups are designed to measure research bias and measurement effects, such as the Hawthorne Effect or the Placebo Effect. A control group is a group not receiving the same manipulation as the experimental group. Experiments frequently have two conditions, but rarely more than three at the same time.
• Randomized Controlled Trials
Combine randomized sampling, comparison between an experimental group and a control group, and strict control/randomization of all other variables.
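A minimal sketch of the random-assignment step in a randomized controlled trial, using 20 hypothetical volunteers:

```python
import random

subjects = [f"S{i}" for i in range(1, 21)]  # 20 hypothetical volunteers

random.seed(7)  # fixed seed so the sketch is reproducible
random.shuffle(subjects)

# Split the shuffled list evenly into the two conditions.
experimental_group = subjects[:10]
control_group = subjects[10:]

print(experimental_group)
print(control_group)
```

Because the split is applied to a randomly shuffled list, every subject has an equal chance of landing in either condition, which is what licenses the causal comparison between the groups.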
Experiments can be conducted either in the field or in a laboratory setting. When operating within a laboratory environment, the researcher has direct control over most, if not all, of the variables that could affect the outcome of the experiment. For example, an agricultural research station may wish to compare the acceptability of a new variety of maize. Since the taste characteristics are likely to have a major influence on the level of acceptance, a blind taste panel might be set up in which volunteers are given small portions of maize porridge in unmarked bowls. The participants would perhaps be given two porridge samples, and the researcher would observe whether they were able to distinguish between the maize varieties and which they preferred. In addition to taste testing, laboratory experiments are widely used by marketing researchers in concept testing, package testing, advertising research and test marketing.
Experimentation offers the possibility of establishing a cause-and-effect relationship between variables, and this makes it an attractive methodology to marketing researchers. An experiment is a contrived situation that allows a researcher to manipulate one or more variables whilst controlling all of the others and measuring the resultant effects on some dependent variable.
The evidence for drawing inferences about causal relationships takes three forms: associative variation, consistent ordering of events and the absence of alternative causes. There are a number of potential impediments to obtaining valid results from experiments. These may be categorised according to whether a given confounding factor threatens internal validity, external validity, or both. Internal validity is called into question when there is doubt that the experimental treatment is actually responsible for changes in the value of the dependent variable. External validity becomes an issue when there is uncertainty as to whether experimental findings can be generalised to a defined population. The impediments to internal validity are history, pre-testing, maturation, instrumentation, sampling bias and mortality. Impediments to external validity are the interactive effects of testing, the interactive effects of sampling bias and errors arising from making use of contrived situations.
The main forms of experimental design differ according to whether or not a measure is taken both before and after the introduction of the experimental variable or treatment, and whether or not a control group is used alongside the experimental group. The designs are: after-only, before-after, before-after with control group, after-only with control group and ex post facto designs.