
Chapter 8

Measurement in Communication Research

 

Outline

Concepts

 

I.  The Role of Sound Measurement in Communication Research
    A.  Measurement as a Foundation for Research
        --sound measurement is required:
            1.  to isolate variables
            2.  to prevent attenuation
attenuation of results: a reduction of the size of observed effects because of errors in measurement
    B.  Levels of Measurement
measurement: assigning numbers to variables according to some system
nominal level measurement: use of numbers as simple identification of variables
ordinal level measurement: use of rank order to determine differences
interval level measurement: distances between measured items are assessed as matters of degree
ratio level measurement: extension of interval measurement to include an "absolute zero."
--an "absolute zero" means that a score of 0 indicates that the property measured is completely missing
II.  Characteristics of Operational Definitions and Measures
     A.  The Requirement of Reliability
 

the fallacy of false precision: a tendency for researchers to claim precision in their measurements that is not founded in the data

          1.  Defining Reliability
              --factors contributing to the unreliability of a test are:
                  1.  familiarity with the particular test form
                  2.  fatigue
                  3.  emotional strain
                  4.  physical conditions
                  5.  respondent health
                  6.  differences in memory
                  7.  respondent experience
                  8.  knowledge
          2.  Methods to Assess Reliability
reliability: the internal consistency of a measure
reliability coefficient: a correlation that measures the consistency of a measure (ranging from 0 to 1)
              a.  test-retest reliability
test-retest reliability: giving the measure twice and looking at the consistency between scores
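The test-retest idea reduces to correlating scores from the two administrations. A minimal Python sketch, with invented scores and a hypothetical `pearson_r` helper (not from the chapter):

```python
# Test-retest reliability: correlate scores from two administrations
# of the same measure. Scores below are invented for illustration.

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [12, 15, 9, 20, 14, 17, 11, 16]   # first administration
time2 = [13, 14, 10, 19, 15, 16, 12, 17]  # second administration
print(round(pearson_r(time1, time2), 3))  # high values indicate consistency
```

A coefficient near 1 indicates consistent scores across the two sittings; values near 0 indicate unreliability.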
              b.  alternate forms reliability
alternate forms reliability: constructing different forms of the same test from a common pool of measurement items, giving the different forms to the same group of people, and assessing consistency
              c.  split-half reliability
split-half reliability: examining the consistency between two halves of a test scored separately
              d.  item-to-total reliability
item-to-total reliability: computing the correlation of each item with the total test
              e.  intercoder reliability
intercoder reliability: assessing the consistency with which different raters observe behavior and categorize it using some sort of check sheet (Scott's pi is used in content analysis)
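Scott's pi corrects the coders' observed agreement for the agreement expected by chance from the pooled category proportions. A sketch with invented codings:

```python
# Scott's pi for two coders: (observed - expected) / (1 - expected),
# where expected agreement comes from pooled category proportions.
# Codings below are invented for illustration.

def scotts_pi(coder1, coder2):
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    categories = set(coder1) | set(coder2)
    expected = 0.0
    for c in categories:
        pooled = (coder1.count(c) + coder2.count(c)) / (2 * n)
        expected += pooled ** 2
    return (observed - expected) / (1 - expected)

coder1 = ["praise", "praise", "blame", "neutral", "praise", "blame"]
coder2 = ["praise", "blame",  "blame", "neutral", "praise", "blame"]
print(round(scotts_pi(coder1, coder2), 3))  # prints 0.733
```

Perfect agreement yields 1.0; agreement no better than chance yields 0.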
              f.  statistical shortcuts
statistical shortcuts: methods to obtain reliability coefficients rapidly
--K-R 20 (Kuder-Richardson formula 20): used when researchers want to determine the reliability of a measure that has items scored as "correct" or "incorrect" answers
--Cronbach's coefficient alpha: used to reveal reliability for test items that are on interval scales and for which no "correct" answers are identified
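Cronbach's coefficient alpha has a standard computing formula: alpha = (k / (k - 1)) x (1 - sum of item variances / variance of total scores). A sketch with invented five-point ratings:

```python
# Cronbach's coefficient alpha for interval-scale items with no
# "correct" answers. Ratings below are invented for illustration.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding all respondents' ratings."""
    k = len(item_scores)
    item_vars = sum(pvariance(item) for item in item_scores)
    totals = [sum(t) for t in zip(*item_scores)]  # each respondent's total
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

items = [
    [4, 5, 3, 4, 2, 5],  # item 1 ratings across six respondents
    [4, 4, 3, 5, 2, 4],  # item 2
    [5, 5, 2, 4, 1, 5],  # item 3
]
print(round(cronbach_alpha(items), 3))
```

Alpha rises as the items covary with one another, which is why it serves as an index of internal consistency.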


     B.  The Requirement of Validity
          1.  Validity Defined
 

validity: the consistency of a measure with a criterion (the degree to which a measure actually measures what is claimed)

          2.  The Relationship Between Reliability and Validity
              --though a reliable test may not be valid, one cannot have a valid measure without its first being reliable
          3.  Methods to Assess Validity
              a.  face validity
face validity: researchers' looking at the content of the measurement items and advancing an argument that, on its face, the measure identifies what is claimed
              b.  expert jury validity
expert jury validity: having a group of experts in the subject matter of the measurement judge its merit
              c.  concurrent validity
concurrent validity: correlating a new measure with a previously validated measure of the construct
              d.  predictive validity
predictive validity: the degree to which a measure predicts known groups in which the construct must exist
              e.  construct validity
construct validity: administering a new measure to subjects along with at least two other measures (one should be a valid measure of a construct known conceptually to be directly related to the new measure, and the other should be known conceptually to be inversely related to it)
III.  Popular Tools in Communication Studies
     A.  Using Existing Measures (sources of measures in communication)
          --selecting measures depends on: suitability to the sample; reliability and validity; length of the measure; format of the measure
          use of self-report measures:

          --concerns with direct account reports: subjects may not know what they are being asked; subjects may exaggerate the frequency of socially desirable behavior; researchers must add control checks
          --concerns with "recall" studies: ability to recall accurately may be limited (control checks often must be added)
     B.  Composing Measures
          steps:
          1.  examine conceptual definitions and search scholarly discussions
          2.  decide on a format
          3.  secure feedback from a small group of people from the population in which you plan to conduct the final study
          4.  attempt to get evidence of reliability and validity of the measure
     C.  Popular Methods for Measurement
          1.  Methods to Measure Judgments
              a.  Thurstone Equal Appearing Interval Scales
Thurstone equal appearing interval scales: composed of statements, each of which has an associated point value
              b.  Likert Scales
Likert scales: statements expressing a point of view on an issue; subjects indicate their responses on scales ranging from Strongly Agree, Agree, Neither Agree nor Disagree, Disagree, to Strongly Disagree
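Scoring a Likert scale is usually a matter of assigning 5 through 1 to the response categories, reverse-coding negatively worded statements, and summing. A sketch with invented responses and a hypothetical `likert_score` helper:

```python
# Scoring a five-point Likert scale: Strongly Agree = 5 ... Strongly
# Disagree = 1, with negatively worded items reverse-coded before
# summing. Responses below are invented for illustration.

SCALE = {"SA": 5, "A": 4, "N": 3, "D": 2, "SD": 1}

def likert_score(responses, reverse_items=()):
    """responses: list of answers; reverse_items: 0-based indexes to reverse-code."""
    total = 0
    for i, answer in enumerate(responses):
        value = SCALE[answer]
        if i in reverse_items:
            value = 6 - value  # flips 5<->1 and 4<->2, leaves 3 alone
        total += value
    return total

# the second item (index 1) is negatively worded, so it is reverse-coded
print(likert_score(["SA", "SD", "A", "N"], reverse_items={1}))  # prints 17
```

Reverse coding keeps a high total meaning a consistently favorable attitude regardless of how each statement is worded.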
              c.  Guttman Scalogram
Guttman scalogram: a series of statements on a topic arranged according to their level of intensity (since statements are arranged on a single continuum, the number of statements with which a person agrees also reveals which statements the person accepted)
              d.  Semantic Differential-Type Scales
semantic differential-type scales: a type of scale using pairs of adjectives (often separated by seven points)
              e.  Methods for Measurement of Achievement
forced choice format: a method in which researchers give respondents questions that ask them to choose between two alternatives
paired comparison method: subjects are given all alternatives in combinations taken two at a time
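The paired comparison method presents every pair of alternatives, two at a time; for n alternatives that is n(n - 1)/2 pairs. A sketch with invented alternatives:

```python
# Paired comparison: generate every pairing of the alternatives,
# two at a time. Alternatives below are invented for illustration.
from itertools import combinations

alternatives = ["TV news", "newspapers", "radio", "web sites"]
pairs = list(combinations(alternatives, 2))
for a, b in pairs:
    print(f"Which do you prefer: {a} or {b}?")
print(len(pairs))  # prints 6: four alternatives yield 4 * 3 / 2 pairs
```

Because every alternative meets every other, the tallies of choices can be ranked even when respondents cannot rate items on a scale directly.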
categories of measurement in communication:
--cognitive assessments: measures of things that people know or believe, including aptitude and achievement measures
--affective assessments: measures of sentiments and feelings people have toward things, including preferences, attitudes, and socio-emotional characteristics
--perceptual motor assessments: measures that deal with one's aptitude to perform specific tasks, including skills involving manual activity
--personality assessments: measures that isolate elements of an individual's character
--behavior assessments: observation measures that identify the activities of people
--demographic assessments: measures that identify environmental or physical conditions