
Chapter 10: Design of Experimental Research in Communication

I. The Notion of an Experiment
experiment: the study of the effects of variables manipulated by the researcher in a situation in which all other variables are controlled, conducted for the purpose of establishing causal relationships
confounding: when variation from one source is mixed (or confused) with variation from another source so that it is impossible to know whether effects are due to the impact of either variable separately or some combination of them
--deception: an ethical problem created when research participants have been either (1) uninformed that an experiment was being conducted or (2) intentionally misled about the nature of the study in which they are participating
   A. Questions and Hypotheses in Experimental Designs
        --hypotheses in experiments are phrased to explore cause-and-effect relationships
        --experiments require that variables be capable of manipulation
experimental independent variables: independent variables that are manipulated by the researcher
   B. The Concept of Control
control: methods researchers use to remove or hold constant the effects of nuisance variables
methods of control:
elimination and removal: removing a nuisance variable from the experimental setting
holding constant: limiting the range of intervening variables so that they are equal across conditions (by 1. limiting the population, 2. using subjects as their own controls, 3. counterbalancing [rotating the sequence in which experimental treatments are introduced to subjects in an effort to control for extraneous variables, such as fatigue or cumulative learning effects])
matching: pairing subjects on some variable on which they share equal levels and then assigning them to experimental or control conditions
blocking: adding a nuisance variable into the design as another independent variable of interest
randomization: assigning subjects so that each event is equally likely to belong to any experimental or control condition
--respondents may be selected at random from the population; they also may be assigned at random to experimental or control conditions
statistical control: use of statistical tools such as analysis of covariance and partial correlation to hold a nuisance variable constant
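The control methods above are procedural, and the most important of them, randomization, is easy to see in miniature. The following is a minimal sketch of simple random assignment; the participant labels, condition names, and helper function are invented for illustration.

```python
# Hypothetical sketch of random assignment: after shuffling, each participant
# is equally likely to land in any experimental or control condition.
import random

def randomly_assign(participants, conditions=("treatment", "control"), seed=None):
    """Shuffle the participants, then deal them out to the conditions in turn,
    so group sizes differ by at most one."""
    rng = random.Random(seed)        # seeded only so the sketch is repeatable
    shuffled = participants[:]
    rng.shuffle(shuffled)
    groups = {condition: [] for condition in conditions}
    for i, person in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

groups = randomly_assign([f"S{n}" for n in range(1, 21)], seed=42)
print({c: len(members) for c, members in groups.items()})
# → {'treatment': 10, 'control': 10}
```

Because assignment depends only on the shuffle, any nuisance variable is, on average, spread evenly across conditions, which is what makes randomization the workhorse of experimental control.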
additional sources of error:
halo effect: influences from strong positive or negative impressions of a source of communication that affect other ratings that follow
placebo effect: an occurrence in which subjects show change even though there is no experimental treatment
John Henry effect: subjects' inconsistency with their normal activity because they try extra hard when participating in experiments
"do nothing" control groups: since experimental and control groups should share all things except the experimental variable, control groups that "do nothing" are apt to differ from the treatment groups in additional ways
II. Experimental Validity and Invalidity
experimental invalidity: errors that prevent researchers from drawing unequivocal conclusions
    A. Internal Invalidity
         --unless all sources of internal invalidity are controlled, it is not possible to claim that any observed effects were caused by the independent variable in the experiment
internal invalidity: the presence of contamination that prevents the experimenter from concluding that a study's experimental variable is responsible for the observed effects
history: events not controlled by the researcher that occur during the experiment between any pretest and posttest;
selection: sampling biases in selecting or assigning subjects to experimental or control conditions;
maturation: changes that naturally occur over time;
testing: alterations that occur when subjects are tested and made testwise or anxious in ways that affect them when they are given a second test;
instrumentation: changes in the use of measuring instruments from the pretest to the posttest, including changes in raters or interviewers who collect the data in different conditions;
statistical regression: shifts produced when subjects are selected because of very high or very low scores on some test and then changes on that measurement are tracked in the experiment;
experimental mortality: biases introduced when subjects differentially (nonrandomly) drop out of the experiment;
interaction of elements: effects created by the interaction of selection biases with differential levels of maturation, history, or any other source of variation
    B. External Invalidity
external invalidity: the degree to which experimental results may not be generalized to other similar circumstances
interaction of testing and the experimental variable: (pretest sensitization) a defect created when the pretesting makes subjects either more or less sensitive to the experimental variable
interaction of selection and the experimental variable: effects created by sampling groups in such a way that they are not representative of the population since they are more or less sensitive to the experimental variable than other subsamples from the same population
reactive arrangements: elements in the experimental setting that make subjects react differentially to the experimental arrangements rather than to the experimental variable alone
multiple treatment interference: if subjects are exposed to repeated additional experimental treatments, they may react in ways that are not generalizable to subjects who are uncontaminated by such additional independent variables
III. Specific Experimental Designs
    A. Notation for Experimental Designs

O: an observation of the study's dependent variable
X: the experimental variable
R: randomization
--e.g., the pretest-posttest control group design is diagrammed:
      R  O  X  O
      R  O     O

     B. Pre-Experimental
          Designs
          1. One-Shot Case Study
one-shot case study: an experimental treatment is introduced and researchers look at effects on some output (dependent) variable without benefit of a control group
          2. One-Group Pretest-Posttest Study
one group pretest-posttest: a case study with an additional pretest so that subjects can serve as their own controls
          3. Static Group Comparisons
static group comparisons: a design adding a control group, but the two groups are not known to be comparable
    C. True Experimental Designs
        1. Pretest-Posttest Control Group Design
            --researchers avoid using "change scores" since (1) change scores may not have distributions that make them easy to interpret with standard statistical tools, and (2) change scores have lower reliability than the original measures
pretest-posttest control group design: a design including pretesting and posttesting individuals in a randomly assigned experimental group and control group
        2. Solomon Four Group Design
Solomon four group design: a design that adds control groups to examine pretesting effects directly
        3. Posttest Only Control Group Design
posttest only control group design: a design that controls for pretest sensitization by deleting the pretest entirely
    D. Factorial Designs
factorial designs: experimental designs that include more than one independent variable
         1. Uses in Research
factors: variables that are broken down into levels; a.k.a. "variable factors"
levels: categories of each factor
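Since a factorial design crosses every level of every factor, the number of experimental conditions is the product of the numbers of levels. A minimal sketch of enumerating the cells of a hypothetical 2 x 3 design (the factor names and levels are invented):

```python
# Hypothetical 2 x 3 factorial design: two factors, with 2 and 3 levels,
# crossed to produce 2 * 3 = 6 experimental cells.
from itertools import product

factors = {
    "message style": ["narrative", "statistical"],        # 2 levels
    "source credibility": ["low", "moderate", "high"],    # 3 levels
}

cells = list(product(*factors.values()))
print(len(cells))  # → 6
for cell in cells:
    print(cell)    # each tuple is one condition, e.g. ('narrative', 'low')
```

Each tuple names one cell of the design, which is why factorial studies let researchers test main effects of each factor and their interaction within a single experiment.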
single subject experiments: experiments in which experimental conditions are presented to or deleted from the same subjects over time
--limitations: (1) most applicable to questions of individual difference, rather than patterns across people; (2) limited generalizability prevents examining hypotheses of social significance to most communication researchers; (3) limits the sorts of statistical tools that might be used to help analyze results
         2. Interpreting Factorial Results
             a. Main Effects
main effects: dependent variable effects from independent variables separately
             b. Interaction Effects
                 --interactions are indicated by lines that are not parallel to each other when relationships are graphed
                 --unlike ordinal interactions, when crossed interactions are found, researchers are forbidden from interpreting the main effects for the variables involved since such interpretations would be misleading
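The graphical test for interactions can be stated numerically: the lines are parallel when the simple effect of one factor is the same at every level of the other, and crossed when the ordering of conditions reverses. A small sketch for a 2 x 2 design, with invented cell means:

```python
# Hypothetical 2x2 cell means: rows are levels of factor A, columns are
# levels of factor B, so each row is one line in the usual interaction plot.

def classify_interaction(cell_means, tol=1e-9):
    """Classify a 2x2 pattern of cell means as 'none', 'ordinal', or 'disordinal'."""
    (a1b1, a1b2), (a2b1, a2b2) = cell_means
    slope1 = a1b2 - a1b1          # simple effect of B at level A1
    slope2 = a2b2 - a2b1          # simple effect of B at level A2
    if abs(slope1 - slope2) <= tol:
        return "none"             # parallel lines: no interaction
    # the lines cross when the ordering of A levels reverses across B levels
    crossed = (a1b1 - a2b1) * (a1b2 - a2b2) < 0
    return "disordinal" if crossed else "ordinal"

print(classify_interaction([(2, 4), (3, 5)]))  # → none (parallel lines)
print(classify_interaction([(2, 4), (3, 7)]))  # → ordinal (nonparallel, no crossing)
print(classify_interaction([(2, 6), (5, 1)]))  # → disordinal (the lines cross)
```

The "disordinal" case is exactly the crossed pattern that bars interpretation of the main effects for the variables involved.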
interaction effects: dependent variable effects from independent variables taken together
ordinal interaction: a pattern in which lines drawn in graphic displays of the effect are not parallel but do not cross each other
disordinal interaction: a pattern in which lines drawn for each independent variable cross each other (a.k.a. "crossed interaction")
IV. Some Elements Found in Good Experiments
      A. The Pilot Test
pilot test: inquiries that usually involve small samples of people (sometimes as small as ten or twenty people) who take part in an experiment to determine any difficulties with experimental materials
      B. Manipulation Checks
manipulation check: a researcher's measurement of a secondary variable to determine that an experimental variable actually operated in a study
quasi experiments: experimental work where random assignment and control are not possible
--time series designs: measurement of subjects across different times
--separate sample posttest designs: posttests completed from another group of subjects that are close but not assured to be equivalent to the experimental group
--counterbalanced designs: designs that introduce several different experimental treatments presented to subjects in different orders or sequences
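Counterbalanced orders are usually built from a Latin square: each treatment appears in each serial position equally often across subjects. A minimal sketch using simple rotation, with invented treatment labels:

```python
# Hypothetical counterbalancing sketch: rotate the treatment order (a simple
# Latin square) so every treatment occupies every serial position exactly once.

def latin_square_orders(treatments):
    """Return one rotated presentation order per treatment."""
    k = len(treatments)
    return [[treatments[(start + i) % k] for i in range(k)]
            for start in range(k)]

for order in latin_square_orders(["A", "B", "C", "D"]):
    print(order)
# → ['A', 'B', 'C', 'D']
#   ['B', 'C', 'D', 'A']
#   ['C', 'D', 'A', 'B']
#   ['D', 'A', 'B', 'C']
```

A simple rotation like this controls for position effects (fatigue, cumulative learning) but not for immediate carryover from one specific treatment to the next; balanced Latin squares address that further.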
      C. Care in Interpretation
           1. Resisting the Tendency to Infer Long-Term Effects from Short-Term Experiments
           2. Searching for Nonlinear Relationships
           3. Desirability of Multiple Dependent Variables