Child Development 133 (03, 04, 72, 73 & 74)
Research Methods in Human Development

Hembree            Spring, 2013

 

Exam Guide #1

Check out the exam handout for more information about the first exam (9/24). You do NOT need to bring a scantron sheet, but you should bring a BLUE BOOK, a CALCULATOR, and the variance handout with you to the exam.

Terms:
Science and the Scientific Method (Ch. 1)
ways of knowing (tenacity, authority, logic, common sense, science)
steps in the scientific method
goals of behavioral research (describe, explain, predict...)
importance of objectivity, replication in science
basic/applied/evaluation research
developmental research
what science can and cannot address
empiricism
parts of a research article (intro, method, etc.)
theory/functions of theory & models in research
sources for research ideas/questions (e.g., gaps in knowledge, contradictory results, the need to explain a finding)
hypothesis/characteristics of a good hypothesis
deduction and induction (and how related to theory building & theory testing)
basic goals of and differences among research designs (descriptive, experimental, correlational, quasi-experimental, developmental)
 

Variance (Ch. 2)
connection between behavioral variability and the research process
mean, variance (s²), and standard deviation (s) (conceptually and statistically; you may bring the variance handout to the exam; formulas are sketched after this list)
systematic vs. error variance (examples of each, and which is more problematic)
variables in experimental research: independent, dependent
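A minimal sketch of the Ch. 2 formulas for reference (the variance handout you bring to the exam is the authoritative version, and it may use n rather than n − 1 in the denominator):

mean:                x̄ = Σx / n
variance:            s² = Σ(x − x̄)² / (n − 1)
standard deviation:  s = √s²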
 

Measurement (Ch. 3 and 4)
constructs, variables
conceptual and operational definitions

validity and reliability
measurement error
scales of measurement (nominal, ordinal, etc.)
validity of measures (face, construct, criterion, predictive) and how to evaluate
reliability of measures and how to evaluate it (test-retest, split-half, observer agreement; see the worked example at the end of this list)
ways to increase reliability
methods for collecting data (report, observation, performance, physiological…)
advantages and disadvantages of using different measures
bias in measurement (especially report measures)
issues related to choosing a setting for research (naturalness vs. control)
observational methods (narrative record, time sampling, checklists, event sampling, ratings)
participant observation (advantages and disadvantages)
field notes
reactivity
measurement error and sources of measurement error
advantages/problems with report (and self-report) measures (e.g., bias)
social desirability/nay-saying response biases
performance measures
questionnaires vs. interview (advantages, disadvantages)
good practices for developing and conducting interviews
physiological measures
archival data
content analysis
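A minimal worked example for the reliability items above (a sketch only; the course materials may use a different agreement index, such as Cohen's kappa):

percent observer agreement = (number of agreements / total number of observations) × 100
e.g., two observers who agree on 18 of 20 coded intervals have 18/20 × 100 = 90% agreement
test-retest and split-half reliability are usually evaluated with a correlation coefficient (r) between the two sets of scores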

Descriptive Research (Ch. 5; Ch. 6 (pp. 117-123); Supplementary Reading #2)
types of descriptive research (survey, demographic, epidemiological, qualitative)
sample vs. population
sampling error
probability sampling (simple random, stratified random, cluster sampling)/advantages of probability sampling
problem of nonresponse
nonprobability sampling (convenience sampling, quota sampling, purposive sampling)
representative sample
qualitative versus quantitative approaches (advantages and disadvantages)
when qualitative methods are appropriate
field observation
ethnography
grounded theory
action research
strengths and weaknesses of qualitative methods
methods for qualitative research (e.g., observation, focus groups/interviews)

 

Short Essay

One of the following questions will be selected for the essay portion of the exam.

1)   What does it mean to take a scientific approach to the study of human behavior and development?  Specifically, what is necessary for an inquiry to be scientific? How is science different from “common sense” as a way of knowing about the world, and what are some limits of the scientific method?

2)   How do scientists go about measuring a construct? What procedures do scientists use to establish that a measure is reliable and valid? Imagine that you want to develop a questionnaire to assess “happiness”. How might you go about doing this, and what steps would you undertake to ensure that your measure was both reliable and valid?

3)  How does one decide which method to use to collect data (i.e., what should be taken into consideration in choosing a technique)? As an example, discuss the advantages and disadvantages of using observational vs. report (interview and questionnaire) methods to collect data. When is each of these methods appropriate?

  

 

Send problems, comments or suggestions to: hembrees@csus.edu

California State University, Sacramento

College of Education

Department of Child Development

Updated: January 25, 2013
