The evaluation of a program’s compliance with each of the various features of the Program Educational Objectives, Student Outcomes, and Continuous Improvement Criteria (Criteria 2, 3, and 4) is an important element of ABET’s outcomes-based accreditation criteria and the program’s continuous improvement processes.
Although you as a Program Evaluator (PEV) will be reviewing many aspects of the program visited, your review of the appropriateness of the program educational objectives and of the program’s processes for assessing and evaluating student outcomes and implementing identified improvements will be an important part of your work. This module provides information that will help you evaluate these processes.
A. Terms And Definitions Used By ABET
Unfortunately, there is no universally accepted set of terms used in the assessment field. Below are terms used in the ABET Criteria and defined in the Accreditation Policy and Procedure Manual.
Program Educational Objectives: Broad statements that describe what graduates are expected to attain within a few years of graduation. Program educational objectives are based on the needs of the program’s constituencies.
Student Outcomes: Statements that describe what students are expected to know and be able to do by the time of graduation. These relate to skills, knowledge, and behaviors that students acquire as they progress through the program.
Assessment: One or more processes that identify, collect, and prepare data to evaluate the attainment of student outcomes. Effective assessment uses relevant direct, indirect, quantitative, and qualitative measures as appropriate to the outcome being measured. Appropriate sampling methods may be used as part of an assessment process.
Evaluation: One or more processes for interpreting the data and evidence accumulated through assessment practices. Evaluation determines the extent to which student outcomes are being attained. Evaluation results in decisions and actions regarding program improvement.
Note: Programs may have adopted a specific language of assessment, which varies from the terms above. Terminology might also vary from one program to another within an institution. If a program is using different terms, it is important that it defines its terms in its self-study and uses them consistently in its documentation for ABET. If the Self-Study Report does not clearly indicate how terms are being used, this should be clarified before the visit.
B. Review Of Program Educational Objectives
Program educational objectives focus on what graduates are expected to attain within a few years after graduation. Due to changes in the Criteria, it is no longer necessary to assess the attainment of the program educational objectives, but “There must be a documented, systematically utilized, and effective process, involving program constituencies, for the periodic review of these program educational objectives that ensures they remain consistent with the institutional mission, the program’s constituents’ needs, and these criteria.”
The review of program educational objectives requires periodic monitoring of the currency of the objectives themselves. The review interval will depend on the changing needs of the constituents and the mission of the program. Programs in disciplines that are dynamic and rapidly changing will need more frequent review cycles to ensure the program educational objectives remain current and the student outcomes will enable attainment of the objectives.
Information on the needs of constituents for the development and revision of the program educational objectives should be gathered in meaningful ways. Determining compliance with this aspect of Criterion 2 will take informed judgment on the part of the evaluator.
C. Assessment And Evaluation Of Student Outcomes
For student outcomes, the focus of data collection is to answer the question, “To what level have students attained the stated student outcomes?” The evidence of student learning is then used to identify student strengths and weaknesses related to each of the student outcomes and to inform decisions about how to improve the program’s teaching/learning processes.
This evidence should be the product of faculty reviewing and/or observing student work related to the program requirements. In preparation for reviewing a program’s processes related to Criterion 4, Continuous Improvement, for student outcomes, it is important to understand several principles of a well-constructed process to enable continuous improvement related to program-level student learning.
- The focus of Criterion 4 (continuous improvement) is on the assessment of the program, not on the assessment of individual students. Assessment of the attainment of student outcomes at the program level focuses on the performances of selected student and graduate cohorts. Program faculty gain insight into how well the program is developing its outcomes through the evaluation of student outcome assessment results for the selected student cohort. In general, results are reported in terms of the percentage of students in the student cohort who meet the program’s student outcomes targets. The program’s interpretation of the results informs decision making for continuous improvement purposes.
- The focus of Criterion 4 (continuous improvement related to student outcomes) is on the learning of students and not the assessment or evaluation of individual courses. At the program level, assessment and evaluation should be focused on the learning that has resulted from the experiences in the program by the time of graduation. The purpose is to provide information on the program’s efficacy (its ability to achieve what it was designed to achieve).
- Student outcomes should be defined to provide faculty with a common understanding of the expectations for student learning and to achieve consistency across the curriculum. Well-defined student outcomes also communicate to students what learning will be expected as they progress through the program. Without agreed-upon definitions of the student outcomes, faculty may have widely varying understandings of what constitutes performance of a given outcome. When faculty have variable definitions of the student outcomes, it is almost impossible to determine the extent to which a student cohort has attained the outcomes. One way to establish a common and consistent understanding of what constitutes measurable performance of a student outcome is for the faculty involved to develop a few performance indicators for each student outcome. For further information on writing performance indicators, see Student Outcomes and Performance Indicators.
- A program does not have to collect data on every student in every course to know how well it is doing toward attaining student outcomes. In fact, a program does not have to collect evidence of performance from every student. Because the focus of the assessment activity is on the program and not on individual students, it is important that the cohort used for data collection be representative of the range of students in the program. If a sample is drawn from the cohort, it must reflect the same proportions of student characteristics (grade averages, gender, diversity, etc.) that describe the program’s student population. In programs with a small graduating class, sampling may not be appropriate. However, if data on a specific student outcome are collected only every three years (see the rotating assessment-cycle principle below), a program is in fact sampling regardless of cohort size, as it is not collecting data on every student who leaves the program.
- To provide evidence of attainment of student outcomes by the time of graduation for program reporting purposes, programs may choose to evaluate and report only data collected in core upper-level courses. Although not required by the accreditation criteria, a best practice is to sample from strategically selected core courses toward the end of the curricular cycle (those where the most representative sample of student attainment of outcomes can be gathered). There are many reasons why programs should collect data (baseline or other) in the lower-level courses over which they have control for their own continuous improvement, but it is sufficient to choose upper-level courses for ABET reporting purposes. In general, knowledge, skills, or behaviors that students demonstrate in lower-level courses are less likely to be a result of the program’s discipline-specific curriculum.
- A program does not have to assess every outcome every year to know how well it is doing toward attaining student outcomes. One approach that often leads to difficulty is to collect too much data on individual students. This is certainly true if a program requires that faculty collect data in every course where student outcomes are being “covered.” Not only does this make the data collection process cumbersome, but it also makes it almost impossible to turn the data into useful information. A viable alternative is to use assessment cycles in which, on a rotating basis, performance-indicator data for a portion of the student outcomes are sampled from two, or preferably three, core upper-level courses supporting each outcome. This approach produces evidence that can be used for evaluation and for decisions about actions to be taken, and it relieves faculty of unnecessary data collection. Staggering the data collection over the six-year accreditation cycle produces a process that is continuous and systematic.
- The focus is continuous improvement based on information for decision making, not just data collection (i.e., data ≠ information). ABET accreditation criteria mandate that program faculty focus on continuous improvement using documented processes for assessing and evaluating attainment of student outcomes. The faculty time and data-collection requirements of these assessment processes should be consistent with the day-to-day operations of the program, and faculty should maintain these assessment and evaluation processes across the interval between successive accreditation visits. Assessment processes that focus on the continuous improvement of the program produce results that faculty and administration can use systematically in meaningful ways.
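The staggered, rotating data-collection approach described above can be sketched in code. The following is a minimal illustration only, not an ABET-prescribed procedure; the outcome labels, the six-year cycle, and the choice of two assessments per outcome are all assumptions made for the example:

```python
# Hypothetical sketch: stagger assessment of student outcomes across a
# six-year accreditation cycle so each outcome is assessed a few times,
# on a rotating basis, rather than every outcome every year.

def rotating_schedule(outcomes, cycle_years=6, assessments_per_outcome=2):
    """Map each year of the cycle to the subset of outcomes assessed that year."""
    schedule = {year: [] for year in range(1, cycle_years + 1)}
    gap = cycle_years // assessments_per_outcome  # years between repeat assessments
    for i, outcome in enumerate(outcomes):
        first_year = (i % gap) + 1
        for k in range(assessments_per_outcome):
            schedule[first_year + k * gap].append(outcome)
    return schedule

# Example with made-up outcome labels (not ABET's numbering):
outcomes = ["Outcome 1", "Outcome 2", "Outcome 3",
            "Outcome 4", "Outcome 5", "Outcome 6"]
for year, assessed in rotating_schedule(outcomes).items():
    print(f"Year {year}: {assessed}")
```

With six outcomes and two assessments each, every year covers two outcomes, and each outcome recurs three years after its first assessment, so the process remains continuous and systematic across the cycle.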
The following are the underlying principles of continuous quality improvement of student learning at the program level:
- The focus of Criterion 4 (continuous improvement related to student outcomes) is on the learning of students and not the assessment or evaluation of individual students.
- The focus of Criterion 4 (continuous improvement related to student outcomes) is on the learning of students and not the assessment or evaluation of individual courses.
- Student outcomes should be defined to provide program faculty with a common understanding of the expectations for student learning and to achieve consistency across the curriculum.
- A program does not have to collect data on every student in every course to know how well it is doing toward the attainment of student outcomes.
- To provide evidence of attainment of student outcomes by the time of graduation for program reporting purposes, programs may choose to evaluate and report only data collected in courses towards the end of the curricular cycle.
- A program does not have to assess every outcome every year to know how well it is doing toward the attainment of student outcomes.
- The focus is on continuous improvement based on information for decision-making.
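The proportional-sampling principle above (a sample should mirror the characteristics of the program’s student population) can likewise be sketched in code. This is a minimal illustration under assumed data, not a prescribed method; the grouping characteristic (a hypothetical GPA band), the cohort, and the sample size are all made up for the example:

```python
import random

# Hypothetical sketch of proportional (stratified) sampling: draw a sample
# whose composition mirrors the cohort's composition on one characteristic.

def proportional_sample(cohort, key, sample_size, seed=0):
    """cohort: list of dicts; key: the characteristic whose proportions to preserve."""
    rng = random.Random(seed)
    strata = {}
    for student in cohort:
        strata.setdefault(student[key], []).append(student)
    sample = []
    for members in strata.values():
        # Give each stratum its proportional share of the sample.
        n = round(sample_size * len(members) / len(cohort))
        sample.extend(rng.sample(members, min(n, len(members))))
    return sample

# Illustrative cohort: 60% in a "high" GPA band, 40% in "mid" (made-up data).
cohort = [{"id": i, "gpa_band": "high" if i < 60 else "mid"} for i in range(100)]
sample = proportional_sample(cohort, "gpa_band", sample_size=20)
bands = [s["gpa_band"] for s in sample]
print(bands.count("high"), bands.count("mid"))  # → 12 8
```

A real sample would need to balance several characteristics at once (grade averages, gender, diversity, etc.), but the idea is the same: the sample’s proportions should match the population’s.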
D. How Do I Know If A Program Has An Adequate Continuous Quality Improvement Process For Student Learning?
Evidence of a Continuous Quality Improvement (CQI) process would contain the following:
- A timeline of repeated activities related to assessment and evaluation. Possible question: “What is your data collection and evaluation timeline?”
- Agreed upon definitions of student outcomes. (Identifying a few performance indicators per outcome is an effective way to develop measurable definitions.) Possible question for faculty: “How does your program define its student outcomes to ensure consistent assessment across the curriculum?”
- Systematic data collection that focuses on performance related to the student outcomes. Possible question: “Where do you collect the data that serve as evidence of student learning?”
- Systematic data collection that ensures coverage of each student outcome for the given student cohort. Possible request: “Describe how the data being presented were collected.”
- Data collection and analysis that provide information that enables faculty to identify superior performance and opportunities for improvement related to the outcomes. Possible question: “I see that X% of your students have attained outcome Y. Were there any notable positive or negative aspects of the students’ performance?”
- An evaluation process that clearly communicates to program faculty opportunities for improvement in student learning. Possible request: “Describe how the proposed actions improved student learning (or are anticipated to improve student learning) related to the enhancement opportunities that were identified.”