Assessment of student learning can be defined as the systematic collection of information about student learning, using the time, knowledge, expertise, and resources available, in order to inform decisions about how to improve learning (Walvoord, 2004). Assessment can enhance a program's curriculum, pedagogy, structure, advising, and resources.
Assessments are tracked by goals, objectives, outcomes, and/or competencies.
- Goals – broad statements aligned with university and/or school mission statements.
- Objectives – statements of accomplishments necessary to achieve the goals.
- Outcomes – statements of the desired result. Student learning outcomes are the particular level of knowledge, skills, and abilities a student has attained at the completion of an academic program, course, or experience. Outcomes focus on one or a combination of three areas: content (cognitive learning), skill acquisition (behavioral learning), and attitudes (affective learning).
- Competencies – statements indicating adequate demonstration of outlined tasks, skill sets, or knowledge.
Choosing the assessment timing and methods can be guided by considering the following:
Formative vs. Summative
- Formative Evaluation – provides feedback, with the aim of improving teaching, learning, and the curricula; identifying individual students' academic strengths and weaknesses; or assisting institutions with appropriate placement of individual students based on their particular learning needs.
- Summative Evaluation – facilitates decision making at the program level and determines resource allocations.
Indirect vs. Direct
- Indirect Assessment Methods – questionnaires, interviews, focus groups, satisfaction studies, advisory boards, retention rates, and job and graduate school placement data.
- Direct Assessment Methods – exams, performance assessments, standardized tests, licensure exams, oral presentations, projects, demonstrations, case studies, simulations, portfolios, research papers, and juried activities.
Reliability & Validity
- Reliability – an estimate of the consistency of test takers' performance internally, across time, test forms, and raters. Generally, reliability estimates above .70 indicate an acceptable level, although values of .80 and above are more widely accepted.
- Validity – involves "building a case" that a test measures the construct it is intended to measure. There are three types of validity: content, criterion, and construct. The most important type is construct validity, because it encompasses both content and criterion validity.
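One common way to estimate the internal-consistency reliability referenced above is Cronbach's alpha. A minimal sketch in Python, assuming item-level scores are available as a test-takers-by-items table (the function name and sample data are hypothetical):

```python
import numpy as np

def cronbach_alpha(scores):
    """Estimate internal-consistency reliability (Cronbach's alpha).
    scores: 2-D array-like, rows = test takers, columns = test items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five test takers, four items (hypothetical scores on a 1-5 scale)
data = [[3, 4, 3, 4],
        [5, 5, 4, 5],
        [2, 3, 2, 2],
        [4, 4, 5, 4],
        [3, 3, 3, 4]]
alpha = cronbach_alpha(data)
print(round(alpha, 2))  # compare against the .70 / .80 benchmarks above
```

An estimate above .70 would meet the acceptable threshold described above; this sketch illustrates only one of several reliability estimates (others include test-retest and inter-rater approaches).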