Seven-Step Model of Assessment
The language of the “Seven Steps” below is still evolving. More important than the terms themselves, however, is a divisional consensus on, and thorough understanding of, what actually occurs during each step: a real understanding of the method and purpose of assessment. Detailed definitions and examples for each step are therefore provided.
- Mission
- Goals
- Research Questions
- Program Objectives and Learning Outcomes
- Methods and Measures
- Findings
- Conclusions / Status
Step 1: Mission
The departmental mission should give a clear idea of the purpose and function of the department, and should align directly with the missions of the University and the Division. The mission identifies:
- The name of the department
- Primary functions
- Primary activities / modes of delivery
- Target audience
- Description of the department (if not readily discernible to an unfamiliar audience)
Mission statements should be no more than three to four sentences.
The departmental mission should guide the departmental goals in step 2.
The example below shows how the University Strategic Plan guides the Academic Advising Center’s Mission statement. One goal of the University’s Strategic Plan is to “Implement a strategically focused, campus-wide effort to improve recruitment, retention, and graduation rates.”
Example Mission: Academic Advising Center
The Academic Advising Center offers new student orientation, mandatory freshman advising, and advising on General Education and graduation requirements for all students. The Center engages students in a developmental process that helps clarify and implement individual educational plans consistent with their skills, interests, and values. Through individual appointments, group advising sessions, and presentations the professional staff, faculty advisors, and student interns help students understand the university’s academic requirements as well as its policies and procedures. As a result, students are better prepared to take responsibility for their education and persist towards a timely graduation.
Note that in the mission statement above, the department name, the primary functions of the department, the target audience, and the primary activities of the department are all clearly outlined, and speak to the University Strategic Plan statement on “retention and graduation.”
Step 2: Goals
Goals are:
- Statements that describe the overarching, long-range intentions of an administrative unit
- Usually too broad to be directly measurable
- Primarily used for general planning as the starting point for the development and refinement of program objectives or student learning outcomes
Tips for Writing Goals
In assessment reports, goals should:
- Be presented in a bulleted list
- Begin with a verb
- Be balanced, flow well, and read easily; all bullets should be grammatically parallel
Example Goals: Academic Advising Center
- Help students gain an understanding of the university’s academic requirements as well as its policies and procedures
- Help students clarify and implement individual educational plans which are consistent with their skills, interests, and values
- Prepare students to take responsibility for their education and persist towards a timely graduation
Step 3: Research Questions
Assessment is a tool for evaluating our success in meeting the important objectives of our work. According to managers and administrators, one of the first challenges they face in performing assessments in their units is deciding what objectives to assess.
One of the best ways to overcome this challenge is to start with an important question to be answered, a skill to be developed, or a problem to be solved. Once the question, skill or problem is clearly identified, managers will find it much easier to formulate an assessment research question. Additionally, assessment that grows naturally out of such questions or identified needs is usually assessment that units really value, and which yields truly valuable results.
Formulating such a question is one of the first steps the investigator must take. Questions must be accurately and clearly defined, and can be the central element of both quantitative and qualitative research. These questions often stem from problems that investigators notice, e.g.:
- “Even though filing applications on time is critical to receiving financial aid, why do so many students file late?”
- “How are no-shows at the Counseling Center handled? How should they be?”
- “Do our first year students make appropriate decisions regarding important academic deadlines?”
The important thing to remember is that assessments, whatever form they take, assess the effectiveness of some measure or intervention. In basic terms, an assessment answers the questions: “Did ___ work? How well did it work? How do we know how well it worked? What can we do to make it better?”
The Relationship between Research Questions and Assessment (Reports)
Depending on the research question, assessment can take place at different points of the inquiry/examination process; that is, multiple Learning Outcomes or Program Objectives could grow out of a single question, any or all of which can in turn lead to valuable improvement in those areas. Take, for example, the first bullet above.
Question: “Even though filing applications on time is critical to receiving financial aid, why do so many students file late?”
- The Financial Aid Director noted that one of the main reasons students do not receive their financial aid disbursements at the beginning of the semester is that they have filed their financial aid application after the stated deadline. He and his staff discussed the importance of students learning the relationship between missing important deadlines and problems with their financial aid.
- To increase the number of students who file financial aid applications on time, the financial aid staff created an intervention: Financial Aid Awareness Month.
- Although the staff’s intent is to increase student understanding of the relationship between financial aid deadlines and timely award disbursement, they realized that measuring the effect of Financial Aid Awareness Month on the number of students who filed on time would only be an indirect measure of learning. The staff could directly measure some variables involved with FA Awareness Month, but there was no way to establish a direct cause-and-effect relationship between those activities and an increase in on-time filing, even though there would likely be a correlation: many factors beyond the University’s control affect whether a student files on time. So, they decided to create a Program Objective.
- The Program Objective they developed is: After implementing Financial Aid Awareness Month, the number of continuing students who file their FAFSA by March 2nd will increase by at least 10%.
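The arithmetic behind a percent-increase objective like this one can be sketched in Python. The function name and filing counts below are invented for illustration; they are not real Financial Aid data.

```python
# Hypothetical sketch of the Financial Aid objective: did on-time FAFSA
# filings increase by at least 10%? Counts are invented for illustration.

def objective_met(baseline: int, current: int, target_pct: float = 10.0) -> bool:
    """Return True if on-time filings grew by at least target_pct percent."""
    increase_pct = (current - baseline) / baseline * 100
    return increase_pct >= target_pct

print(objective_met(1500, 1680))  # 12% increase -> True
print(objective_met(1500, 1600))  # ~6.7% increase -> False
```

The same check works for any percent-increase objective; only the baseline count, current count, and target percentage change.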
Depending on at what point in the assessment cycle the reporting period falls, similar program objectives stemming from the same fundamental question could be formulated, such as the following:
Investigate why a larger-than-predicted number of students file their FA applications late, and design improved measures to intervene and/or mitigate these factors.
Or, more simply:
Design measures to increase the number of students filing their FA applications prior to the deadline.
Beginning with a question like this can also lead to learning outcomes:
Students who receive FA information through Financial Aid Awareness Month will demonstrate increased knowledge of the FA process and important related deadlines; furthermore, more of those students receiving such information will file on time.
Step 4: Program Objectives and Student Learning Outcomes
- Both program objectives and learning outcomes are measurable statements that provide evidence as to how well the unit is reaching its goals.
- Both relate directly to the University mission and departmental goals.
- Learning outcomes and program objectives should be worded more specifically than goals.
- Typically, program objectives can help managers determine if a program is working, whether it reached the right target audience, was offered in a timely fashion or at a convenient time, how satisfied participants are with the program, if the costs are viewed as appropriate, etc.
- e.g. “New Student Orientation will increase* the number of new transfer and incoming freshmen participating in program X during spring semester 2011.”
- Learning outcomes address what a student learns or how they change through program participation or utilization of the service.
- e.g. “Student-athletes who complete the X, Y, Z program in spring semester 2011 will demonstrate increased understanding* of NCAA compliance issues.”
*Note: It may not always be necessary to include the exact degree of change in the learning outcomes or program objectives when first presenting them (e.g. 10% more, 90% of students responding to survey X); it is not as important to include how the outcome will be measured as it is to be sure that the outcome is measurable. Technically, the manner in which the goal will be measured falls under Step 5: Methods and Measures.
Program objectives typically help managers determine if a program is working by focusing on improving efficiency, processes, or procedures. As such, POs address whether a program:
- was offered in a timely fashion
- was scheduled at a convenient time
- reached the appropriate target group members
- satisfied participants
- was cost-effective, etc.
Example: Student Health and Counseling Services
Student Health and Counseling Services gathers many types of data to “measure what matters” and formulates program objectives based on these measurements:
- Number of Patient Visits
- Number of Patient Visits/Provider
- Appointment “No Show” Rate
- Wait Times
- Patient Satisfaction
- Patient Utilization rates
- Cost of Care
- Number of Students Left Without Being Seen (LWBS)
Example Program Objectives: Student Health and Counseling Services
Program Objective 1
Counselors will spend an average of 60% of their time providing direct client service.
Program Objective 2
Medical providers will screen 90% of patients seen in the primary and urgent care clinics for depression using the Patient Health Questionnaire (PHQ 2 and/or PHQ 9 version) over the next academic year 2011-12 and make appropriate referral to CAPS for follow-up as necessary.
Example Program Objective: Student Orientation
Program Objective 1
New Student Orientation will increase by 200 attendees or by 10% the number of 2012 new transfer and incoming freshmen participating in the “Responsible Decision-Making Workshop” offered as a part of New Student Orientation. This increase will be measured against Summer 2011 participation (~2000 attendees).
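The Orientation target can be sketched as a simple check. The function name is invented, and the attendance figures below are only illustrative; note that with the ~2,000-attendee baseline the two thresholds coincide, since 10% of 2,000 is 200.

```python
# Hedged sketch of the Orientation objective: attendance must grow by
# 200 attendees or by 10% over the Summer 2011 baseline (~2,000).
# Figures are illustrative, not real attendance data.

def orientation_target_met(baseline: int, current: int) -> bool:
    growth = current - baseline
    return growth >= 200 or growth >= 0.10 * baseline

print(orientation_target_met(2000, 2210))  # growth of 210 -> True
print(orientation_target_met(2000, 2150))  # growth of 150 -> False
```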
Learning outcomes address what a student learns or how he or she changes through participating in a program or utilizing a service.
Learning Outcomes: Where We Started
Early in the process of developing a culture of assessment, our learning outcomes were rather rote: they measured short-term or memorized “learning.” Additionally, they only measured “small picture” student learning, such as:
- Whether students could retain a few key ideas from a workshop
- Whether students learned processes such as how to apply for financial aid or apply to graduate
Furthermore, many early assessment efforts were “satisfaction” based, or otherwise relied heavily on self-reported or “indirect” data from students.
Example Learning Outcome: Student Athlete Resource Center
Learning Outcome 1
Student-athletes who complete the Intercollegiate Eligibility Workshop in Spring 2011 will demonstrate increased understanding of NCAA compliance issues. (This outcome was measured by a short pre-test and post-test.)
Example Learning Outcome: University Union
Learning Outcome 1
80% of student employees at the University Union will report that their leadership skills were improved through their employment.
Example Learning Outcome: Office of Global Education
Learning Outcome 1
On a short online survey (usually administered in late spring), 90% of the Office of Global Education’s clients will agree or strongly agree that their experience with OGE was positive.
Learning Outcomes: Where We Are Now
Now, directors have been asked to tailor their learning outcomes more to “big picture” learning—that which ties into Division and University priorities such as academic success (GPA), retention, and graduation. Some directors have even begun attempting to measure behavior change.
Learning Outcome Examples: Housing & Residential Life
Learning Outcome 1
Sac State students living on-campus will have a higher average grade point average at the end of their freshman year than students who live off-campus.
Learning Outcome 2
First time freshmen who live on-campus will be more likely to persist to their second year of college than first time freshmen who live off-campus.
Note: These statements do not assume a direct cause and effect relationship between the variables; however, there may be a significant correlation between them. Such correlations warrant further research.
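The comparison behind these outcomes can be sketched as a difference in group means. The GPAs below are invented for illustration; a real analysis would use institutional data and an appropriate significance test.

```python
# Hedged sketch with invented GPAs: comparing mean GPA of on-campus and
# off-campus freshmen. A difference in means is a correlation worth
# further study, not proof that housing causes academic success.

def mean(values: list[float]) -> float:
    return sum(values) / len(values)

on_campus_gpas = [3.0, 3.2, 2.8, 3.0]
off_campus_gpas = [2.8, 2.6, 3.0, 2.8]

difference = mean(on_campus_gpas) - mean(off_campus_gpas)
print(round(difference, 2))  # 0.2 (on-campus mean is higher)
```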
Learning Outcome Example: Student Health and Counseling Services
Learning Outcome 1
By Fall 2009, residents participating in the Choices Level One Alcohol Education Class will demonstrate the following:
- 50% of referred residents will be able to cite at least one thing they learned from the class & have incorporated into their drinking behavior that has reduced the risk associated with their drinking (reduction in how much or how often the student drinks, increase in use of protective behaviors).
- 50% of referred residents will report a reduction in the number of occasions in which they consume alcohol.
- By Fall 2012, 65% of all first-year students will successfully view and complete the student success online alcohol tutorial program, “Zombies, Alcohol, and You” with a passing grade of 75% or better on the post test.
Step 5: Methods and Measures
This step describes exactly how the objective is being assessed, and what indicators are being used as supporting data.
- Methods are procedures, techniques, or ways of doing something, especially in accordance with a definite plan (e.g. pre-/post-test of first year orientation participation, scored role-plays of RAs who complete a 2 week summer training program, etc.).
- Measures can refer to both the assessment instrument itself and the extent to which a program objective or student learning outcome was achieved.
- Together, the Methods and Measures section should include the following components:
- An explicit statement of how the methods/measurements inform the objective
- A short description of the assessment instrument being used
- Identification of the population being served and the conditions under which they are served (e.g. “First Year Students who complete X and Y Online Training”)
- Any benchmarks against which to measure change, behavior modification, or learning
Direct Versus Indirect Measurement
- Direct Measurement - Demonstrated outcomes that students achieve—such as an increase in abilities, information, knowledge, attitudinal or behavioral changes—after participating in a program or utilizing a service.
- Indirect Measurement - Self-reported statements, comments, or satisfaction levels that reveal a perceived increase in understanding or appreciation. The perception is not verified through any demonstration of knowledge acquisition or observed behavioral/attitudinal changes.
Measures: Where We Started and Where We Are Now
Early in Student Affairs assessment, many directors used indirect methods to assess student learning. Many assessment instruments asked students to report on their perceived increase in understanding or their appreciation of/satisfaction with a workshop, training, cultural event, etc.
Now, many directors are trying to use more direct methods to assess student learning.
Traditionally, the most common direct measurement was a pre-/post-test combination, which creates a baseline for student knowledge before some kind of activity, and then measures their knowledge after taking part in the activity.
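The pre-/post-test pattern reduces to a simple computation. The scores below are invented; the function name is an illustrative assumption, not an instrument the document prescribes.

```python
# Minimal sketch (scores invented) of the pre-/post-test pattern: the
# pre-test sets the baseline, and the mean per-student gain after the
# activity is the direct measure of learning.

def average_gain(pre_scores: list[float], post_scores: list[float]) -> float:
    """Mean per-student improvement from pre-test to post-test."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

pre = [62, 70, 55, 80]
post = [75, 82, 70, 88]
print(average_gain(pre, post))  # 12.0
```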
Examples: Direct Measurement of Student Learning
- To what extent are the GPAs, retention rates, and graduation rates higher for those students who have been targeted for interventions (compared to those who did not receive the intervention)?
Note: These statistics alone cannot confirm a cause-and-effect relationship.
- To what degree do student employees (observed by their supervisors in a mock situation) adequately perform the tasks they have been trained in?
- To what extent do print, electronic, or multi-media collections of students’ work over a period of time demonstrate what they have learned in a particular co-curricular experience? Portfolios may be scored by a rubric and used to measure longitudinal change.
- How do students score (usually via a predesigned rubric) on an essay, whose response is meant to reveal what they have learned in a certain activity?
Example Methods and Measures: PRIDE (LGBTQ) Center
Methods and Measures
At the end of Spring 2011, all PRIDE Center student assistants will complete a survey (containing both Likert-style and open-ended questions) which asks them to reflect on the ways in which their employment has affected their leadership skills compared to what they felt their leadership skills were prior to their employment.
Note that in this example, the timeline, the population measured, the instrument used, and the baseline compared against are all clearly presented.
Step 6: Findings
- Findings are the actual data gathered through the assessment tool employed (e.g. test scores, student writing scores determined via rubrics, data gathered from workload estimators, numbers of students served by particular programs, events, and services, etc.)
- Findings should describe clearly and concisely the degree to which the program objective or student learning outcome was met; avoid analysis here, which comes in Step 7: Conclusions / Status
- Includes macro-level analysis of the data gathered
- Findings should be pointed and succinct: a summary of only the most relevant data (e.g. “students scored above 80% on the majority of the test questions,” rather than “students scored 78% on question 1, 89% on question 2, 80% on question 3, 81% on question 4, etc.”).
- Example: “Student-athletes who completed all three segments of workshop X, Y, and Z scored an average of 10% higher on the post-test than they did on the pre-test”
- Example: “After implementing new protocols X, Y, and Z, Financial Aid saw a two-week decrease in new application review turnaround time”
- In the interest of brevity and readability, charts, examples of participant responses, student work, etc. should be relegated to the “Supporting Documents” section
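Condensing raw results into a summary statement of this kind can be sketched as follows; the per-question scores are invented for illustration.

```python
# Hedged sketch (scores invented) of condensing per-question results
# into the one-line summary Step 6 calls for, rather than listing
# every question's percentage.

question_scores = {1: 78, 2: 89, 3: 80, 4: 81, 5: 92}  # percent correct

above_80 = sum(1 for score in question_scores.values() if score >= 80)
summary = (f"Students scored 80% or above on {above_80} of "
           f"{len(question_scores)} test questions.")
print(summary)  # Students scored 80% or above on 4 of 5 test questions.
```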
Example Findings: Student-Athlete Resource Center (What to do)
Student-athletes who completed all three segments of the Division I Eligibility and Recruiting Workshop scored an average of 20% higher (the instructor’s desired minimum) on the post-test than they did on the pre-test.
Example Findings: Financial Aid (What to do)
After streamlining key processes and protocols, Financial Aid was able to decrease its application review time from 5 days to 3 days, which is an acceptable period.
Example Findings: Student Athlete Resource Center (What not to do)
Scores recorded on the post-test were better than the pre-test in general, and can be broken down as follows. 85% of student-athletes who completed all three segments of the workshop answered question 1 correctly. 78% of students correctly answered question 2 regarding academic probation GPA, but the staff feel this lower percentage reflects the imprecise wording of the question. Conversely, 95% of student athletes answered question 3 correctly, etc.
The above version of the SARC findings section is too detailed and wordy. Findings should be condensed and succinct where possible.
Step 7: Conclusions / Status
- The Conclusions / Status section is important in reporting the progress of ongoing and long-term assessment projects as well as “completed” assessments
- Could also be called “closing the loop”
- The conclusion section tells the story:
- “Here is where we started (any hypothesis that shaped early work)”
- “Here is what really happened”
- “Here is what we learned”
- “Here is what we will do next”
- Reiterate whether the outcomes or objectives were achieved
- Do not be defensive or “beat around the bush” if outcomes were not met;
- Rather, try to explain why they were not met if there is a plausible reason; if not, do not “push it”—sometimes, it is acceptable to just present the data
- Explain how the findings will shape what you will do next; if anything in the findings suggests you should modify any of the components involved in the assessment, explain how
- Possible “next steps” when outcomes or objectives were not achieved:
- Modify program objective or learning outcome
- Modify measurement tools
- Modify methodology
- Modify program
- Modify policies or procedures
- Improve collaboration
- Improve communication
- Possible “next steps” when objectives or outcomes were achieved:
- Develop new objectives or outcomes
- Conduct a longitudinal study with current objectives or outcomes
- Raise the criteria for achievement
- Develop more stringent measures
- Assessment efforts can also yield results that help:
- Improve programs or services that are aligned with the university’s priorities
- Understand and eventually increase student learning
- Make better planning or budgeting decisions
Example Conclusion: Student Athlete Resource Center (What to do)
Based on an aggregate score, over 80% of student-athletes rated athletic advising services as good or excellent; thus, the outcome was achieved. Some specific areas scored lower than 80%, most of which corresponded to areas less-emphasized in orientation. Athletic Advisors will review how they currently provide information in those low-scoring areas, which include information about summer school, handbooks, and other orientations (eligibility, etc.). Advisors may consider further assessment and feedback measures with student-athletes regarding these topics to improve student-athlete satisfaction, and to create an even more comprehensive program.
Example Conclusion: Women’s Resource Center (What not to do)
In the example below, the Women’s Resource Center (WRC) presented a series of movies. After viewing each movie, students wrote brief response papers which WRC staff scored based on rubrics. The assessment activities and methods are not bad, but the write-up and analysis need improvement. (This example is offered only in the spirit of constructive criticism.)
There are several issues that could have impacted results. We used different raters for the different movie presentations, who also were different from the person who developed the rubrics. It is a very subjective evaluation as to whether or not the students actually “hit” the points in their post papers. Some attention to “rater training” might be helpful in the future.
Problems with this conclusion:
- Doesn’t “tell the story”
- Not enough context for the reader; section doesn’t stand alone
- Doesn’t reiterate whether the objective was or was not met (was it met, or was it not?)
- “Beats around the bush;” sounds a bit defensive
- Doesn’t thoroughly explain or analyze shortcomings of the assessment
The issues that could have impacted the results should be explained and presented clearly:
- Women’s Resource Center staff used different “raters” for the different movie presentations.
- These “raters” were different from the staff who developed the rubrics.
- The evaluation itself was very subjective as to whether or not the students actually “hit” the points in their post papers.
Providing some type of “inter-rater reliability training” might be helpful in the future.
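The simplest inter-rater reliability check is percent agreement between two raters scoring the same papers. The rubric categories and ratings below are invented for illustration.

```python
# Hypothetical sketch of a basic inter-rater reliability check: percent
# agreement between two raters scoring the same response papers with a
# shared rubric. Ratings are invented for illustration.

def percent_agreement(rater_a: list[str], rater_b: list[str]) -> float:
    """Share of papers (as a percentage) given the same score by both raters."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a) * 100

rater_a = ["met", "met", "not met", "met", "not met"]
rater_b = ["met", "not met", "not met", "met", "met"]
print(percent_agreement(rater_a, rater_b))  # 60.0
```

Low agreement signals that the rubric or rater norming needs work; more robust statistics such as Cohen’s kappa also correct for chance agreement.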
Example Conclusion: Women’s Resource Center (Improved)
When first introduced in Fall 2006 only 63% of student participants reported learning key points from the movie presentations they attended; in Fall 2007 that number increased to 73%. Thus, the objective was met, though just barely. Though the scores seem to indicate student learning, WRC staff acknowledges some potential problems with the assessment instrument itself: the rubric proved highly subjective; different raters were used at different points in the assessment; and there was no “norming” of raters to establish a common method of interpreting the rubric. These initial results indicate the WRC movie series shows promise for demonstrable student learning; however, the WRC will continue to refine both this assessment and the program itself in upcoming semesters.