Chapter 4:  Measurement in Communication Research

Chapter Outline

I.        The Role of Sound Measurement in Communication Research

      A.  Measurement as a Foundation for Research

            --sound measurement required:

           1.  to isolate variables;

 

           2.  to prevent attenuation

attenuation of results:  a reduction of the size of observed effects because of errors in measurement

      B.   Levels of Measurement                                      

Measurement:  assigning numbers to variables according to some system

nominal level measurement:  use of numbers as simple identification of variables

ordinal level measurement:  use of rank order to determine differences

interval level measurement:  distances between measured items are assessed as matters of degree

ratio level measurement:  extension of interval measurement to include an "absolute zero."

--an "absolute zero" means that a score of 0 indicates that the
   property measured is completely missing

II.     Characteristics of Operational Definitions and Measures

      A.  The Requirement of Reliability

 

           1.   Defining Reliability

                 --factors contributing to the unreliability of a
                   test are:  1. familiarity with the particular
                   test form; 2. fatigue; 3. emotional strain; 4.
                   physical conditions; 5. respondent health; 6.
                   differences in memory; 7. respondent experience;
                   and 8. knowledge

reliability:  the internal consistency of a measure

reliability coefficient:  a correlation that measures the consistency of a measure (ranging from 0 to 1)

            2.  Methods to Assess Reliability
                 a.   test-retest reliability

 

 

test-retest reliability:  giving the measure twice and looking at the consistency between scores
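The definition above can be sketched numerically. A minimal illustration in Python, assuming a Pearson correlation between the two administrations (the scores below are invented for the example, not from the text):

```python
# Test-retest reliability: correlate scores from two administrations
# of the same measure to the same respondents (illustrative data).
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [12, 15, 11, 18, 14, 16]   # scores at the first administration
time2 = [13, 14, 10, 19, 15, 17]   # same respondents, second administration

r = pearson_r(time1, time2)
print(round(r, 3))  # values near 1.0 indicate a stable (reliable) measure
```

The resulting coefficient is the reliability coefficient described above, ranging from 0 to 1 for a well-behaved measure.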

                 b.  alternative forms reliability

alternate forms:  constructing different forms of the same test from a common pool of measurement items and then giving different forms to the same group of people and assessing consistency

                 c.  split half reliability

split half:  examining the consistency between two halves of a test scored separately
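A numeric sketch of the split-half idea, assuming the common odd/even split and the standard Spearman-Brown correction (the correction and the scores are illustrative additions, not named in the outline):

```python
# Split-half reliability: score the odd and even items separately,
# correlate the halves, then apply the Spearman-Brown correction to
# estimate the reliability of the full-length test (illustrative data).
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

scores = [            # rows = respondents, columns = item scores
    [4, 3, 5, 4, 3, 4],
    [2, 2, 1, 3, 2, 2],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 2],
    [4, 5, 4, 4, 5, 5],
]

odd_half  = [sum(row[0::2]) for row in scores]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in scores]   # items 2, 4, 6

r_half = pearson_r(odd_half, even_half)
full_test = (2 * r_half) / (1 + r_half)          # Spearman-Brown correction
```

The correction is needed because the half-test correlation understates the reliability of the full-length test.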

                 d.  item to total reliability

item to total: computing the correlation of items with the total test

                 e.  intercoder reliability

intercoder reliability:  assessing consistency with which different raters look at behavior and categorize it by using some sort of check sheet

--Scott's pi used in content analysis
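Scott's pi corrects raw agreement for the agreement expected by chance. A minimal sketch, with invented category judgments from two coders:

```python
# Scott's pi for two coders who placed the same 10 messages into
# nominal categories (illustrative data).
coder1 = ["A", "A", "B", "B", "A", "C", "B", "A", "C", "A"]
coder2 = ["A", "B", "B", "B", "A", "C", "B", "A", "C", "A"]

n = len(coder1)
# observed agreement: proportion of messages coded identically
observed = sum(c1 == c2 for c1, c2 in zip(coder1, coder2)) / n

# expected agreement: squared joint proportions, where each category's
# joint proportion averages the two coders' usage of that category
categories = set(coder1) | set(coder2)
expected = 0.0
for cat in categories:
    p = (coder1.count(cat) + coder2.count(cat)) / (2 * n)
    expected += p ** 2

pi = (observed - expected) / (1 - expected)
```

Because chance agreement is subtracted out, pi is lower than the raw percentage of agreement.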

                 f.   statistical shortcuts

Statistical shortcuts:  methods to obtain reliability coefficients rapidly

--K-R 20 (Kuder-Richardson formula 20) used when researchers want to determine the reliability of a measure that has items that are scored as "correct" or "incorrect" answers

--Cronbach's coefficient alpha:  used to reveal reliability for test items that are on the interval scales and for which no "correct" answers are identified
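Coefficient alpha compares the summed item variances with the variance of the total scores. A minimal sketch with invented interval-scaled data; with items scored 0/1 ("correct"/"incorrect"), the same computation reduces to K-R 20:

```python
# Cronbach's coefficient alpha for a k-item measure (illustrative data).
from statistics import pvariance

scores = [        # rows = respondents, columns = item scores
    [4, 3, 5, 4],
    [2, 2, 1, 3],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 5, 4, 4],
]

k = len(scores[0])                          # number of items
items = list(zip(*scores))                  # column-wise item scores
item_var = sum(pvariance(col) for col in items)
total_var = pvariance([sum(row) for row in scores])

alpha = (k / (k - 1)) * (1 - item_var / total_var)
```

When items hang together (respondents who score high on one item score high on the others), the total-score variance dwarfs the summed item variances and alpha approaches 1.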

the fallacy of false precision:  a tendency for researchers to claim precision in their measurements that is not founded in the data

      B.   The Requirement of Validity

            1.   Validity Defined

Validity:  the consistency of a measure with a criterion (the degree to which a measure actually measures what is claimed)

            2.   The Relationship Between Reliability and Validity

                  --though a reliable test may not be valid,

                     one cannot have a valid measure without

                     its first being reliable
            3.   Methods to Assess Validity

 
                  a.  face validity

face validity:  researchers' looking at the content of the measurement items and advancing an argument that, on its face, the measure identifies what is claimed

                  b.   expert jury validity

expert jury validity:  having a group of experts in the subject matter of the measurement judge its merit

                  c.   concurrent validity

concurrent validity:  correlating a new measure with a previously validated measure of the construct

                  d.   predictive validity

predictive validity:  the degree to which a measure predicts known groups in which the construct must exist

                  e.  construct validity

construct validity:  administering a new measure to subjects along with at least two other measures (one of these measures should be a valid measure of a construct that is known conceptually to be directly related to the new measure, and another measure should be known conceptually to be inversely related to the construct).

C.  Reliability and Validity in Qualitative Research

Reliability and validity issues also influence
researchers who attempt to draw conclusions
from studies using qualitative research methods.

·        Whereas reliability in quantitative studies
involves exploring whether different
individuals use instruments with consistency
when looking at the same things, in most
qualitative research, the researcher is the
instrument. So, the issues of reliability and
validity involve basic issues about
dependability and trustworthiness.

·        Some qualitative researchers dismiss issues
of reliability and validity because they believe
they are tied up with philosophical
assumptions about reality that they reject.

--Other qualitative researchers have
  observed that reliability and validity
  involve questions about the researcher as
  interpreter. In qualitative research “the key
  reliability question is: would any qualitative
  [researcher] . . . examining the texts or
  images that constitute the data develop
  (roughly) the same analytic description?”
  (Warren & Karner, 2005, p. 217)

1.   “To demonstrate what may be taken as a
substitute criterion for reliability—

 
               dependability—the naturalist seeks means for
               taking into account both factors of instability
               and factors of phenomenal or design induced 
               change” (Lincoln & Guba, 1985, p. 299).

To assure such dependability, researchers

may use:

♦ overlap methods in which multiple
methods are used to triangulate or converge
on a set of dependable interpretations;

♦ stepwise replication, in which at least
two qualitative researchers who
conduct fieldwork separately compare
their interpretations at different times
(sometimes daily or at critical points
where previous qualitative research plans
need to be reconsidered); and

♦ dependability audits in which experts are
called in to examine the process and the
interpretations involved in the qualitative
research.

dependability:  in qualitative research, the counterpart of measurement reliability, in which efforts are made to assure stability in identifications and interpretations
          2.   For scholars of qualitative work, the question
                of validity is transformed into a question about
                the trustworthiness of the data and the
findings.
                To assure such trustworthiness, researchers
                may use:

♦ extended participation in the field
  experience;

♦ persistent observation so that key
  phenomena are not likely to be
  overlooked;

♦ negative case analysis, which requires
  researchers to draw conclusions only
  after accounting for any negative cases
  that disprove the general relationship;

♦ referential adequacy, which includes
  recording qualitative interviews and
  observations, and carefully keeping
  records of actual encounters under
  analysis;

♦ member checks, in which the
  researchers check with some
  members of the group studied to
  assess whether the concepts,
  reconstructions, categories, and
  interpretations are accurate and make
  sense.

III.  Popular Tools in Communication Studies

 A.        Using Existing Measures (sources of

            measures in communication)

            --selecting measures depends on:

               suitability to the sample;  reliability and

               validity;  length of the measure;  format

               of the measure

            --use of self-report measures:

·        concerns with direct account reports:

                 participants may not know what they are
                 being asked;  subjects may exaggerate

                 the frequency of socially desirable

                 behavior;  researchers must add control

                 checks

·        concerns with "recall" studies:  ability

to recall accurately may be limited

(control checks often must be added)

B.     Composing Measures

steps:

1.     examine conceptual definitions and

search scholarly discussions

2.     decide on a format

3.     secure feedback from a small group of

people from the population in which you

plan to conduct the final study

4.     attempt to get evidence of reliability and

validity of the measure

C.     Popular Methods for Measurement

1.   Methods to Measure Judgments

trustworthiness:  in qualitative research, the counterpart of measurement validity, in which efforts are made to persuade audiences that study findings are credible and deserving of attention
               a.   Thurstone Equal Appearing Interval Scales

Thurstone equal appearing interval scales:  composed of statements with which point values are associated
               b.   Likert Scales

Likert scales:  statements expressing a point of view on an issue by having subjects indicate their responses on scales ranging from Strongly Agree, Agree, Neither Agree nor Disagree, Disagree, Disagree Strongly
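Scoring a Likert measure is simple numeric coding; the only wrinkle is that negatively worded items are conventionally reverse-scored before summing (a standard practice, though not stated in the outline; the data below are illustrative):

```python
# Scoring one respondent on a four-item Likert scale, responses coded
# 1-5 from Strongly Disagree to Strongly Agree (illustrative data).
responses = [5, 4, 2, 5]      # one respondent, four items
reverse_items = {2}           # index of a hypothetical negatively worded item

# on a 1-5 scale, reverse-scoring maps a response r to 6 - r
scored = [(6 - r) if i in reverse_items else r
          for i, r in enumerate(responses)]
total = sum(scored)
print(total)  # -> 18
```

The summed total (here 18 of a possible 20) treats the item responses as at least interval-level data, which is the usual working assumption for Likert-type measures.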

               c.   Guttman Scalogram

Guttman scalogram:  a series of statements on a topic arranged

according to their level of intensity (since statements are arranged on a single continuum, the number of statements    with which a person agrees also reveals which statements the person accepted)

               d.   Semantic Differential-Type Scales

semantic differential-type scales: a type of scale using pairs of adjectives (often separated by seven points)

            2.   Methods for Measurement of Achievement

forced choice format:  a method in which researchers give respondents questions that request them to choose between two alternatives

paired comparison method:  subjects are given all alternatives in combinations taken two at a time

categories of measurement in communication:
--cognitive assessments:  measures of things that people know
  or believe including aptitude and achievement measures
--affective assessments:  measures of sentiments
  and feelings people have toward things including preferences,
  attitudes, and socio-emotional characteristics
--perceptual motor assessments:  measures that deal with one's
  aptitude to perform specific tasks, including skills involving
  manual activity
--personality assessments: measures that isolate elements of an
    individual's character
--behavior assessments:  observation measures that identify
  the activities of people
--demographic assessments: measures that identify
   environmental or physical conditions