Refresher Training

Module 4: Continuous Quality Improvement of Student Learning

The evaluation of a program's compliance with each of the various features of the Program Educational Objectives, Student Outcomes, and Continuous Improvement Criteria (Criteria 2, 3, and 4) is an important element of ABET's outcomes-based accreditation criteria and the program's continuous improvement processes. Although you will be reviewing many aspects of the program visited, your evaluation of the program's process for the assessment, evaluation, and implementation of identified needed improvements relative to its stated program educational objectives and student outcomes will be an important part of your work. This module will provide you with information that will help you in the evaluation of these processes.


A. Terms and Definitions Used by ABET

Unfortunately, there is no universally accepted set of terms in the assessment field. The terms below are those used in the ABET Criteria, as defined in the Accreditation Policy and Procedure Manual.

Program Educational Objectives: Program educational objectives are broad statements that describe what graduates are expected to attain within a few years after graduation. Program educational objectives are based on the needs of the program’s constituencies.

Student Outcomes: Student outcomes describe what students are expected to know and be able to do by the time of graduation. These relate to the knowledge, skills, and behaviors that students acquire as they progress through the program.

Assessment: Assessment is one or more processes that identify, collect, and prepare data to evaluate the attainment of student outcomes and program educational objectives. Effective assessment uses relevant direct, indirect, quantitative, and qualitative measures as appropriate to the objective or outcome being measured. Appropriate sampling methods may be used as part of an assessment process.

Evaluation: Evaluation is one or more processes for interpreting the data and evidence accumulated through assessment processes. Evaluation determines the extent to which student outcomes and program educational objectives are being attained. Evaluation results in decisions and actions regarding program improvement.

Programs may have adopted a specific language of assessment that varies from the terms above, and terminology may also vary from one program to another within an institution. If a program uses different terms, it is important that it define them in its Self-Study and use them consistently in its documentation for ABET. If the Self-Study does not clearly indicate how terms are being used, this should be clarified before the visit.

B.  Review of Program Educational Objectives

Program educational objectives focus on what graduates are expected to attain within a few years after graduation. Due to recent changes in the Criteria, it is no longer necessary to assess the attainment of the program educational objectives; however, “There must be a documented, systematically utilized, and effective process, involving program constituencies, for the periodic review of these program educational objectives that ensures they remain consistent with the institutional mission, the program's constituents' needs, and these criteria.”

The review of program educational objectives requires monitoring the currency of the objectives themselves; how often depends on the program's mission and the changing needs of its constituents. Programs in dynamic, rapidly changing disciplines will need more frequent review cycles to ensure the program educational objectives remain current and the student outcomes enable their attainment.

Information on the needs of constituents for the development and revision of the program educational objectives should be gathered in meaningful ways. Determining compliance with this aspect of Criterion 2 will take informed judgment on the part of the evaluator.

C. Assessment and Evaluation of Student Outcomes

For student outcomes, the focus of the data collection is to answer the question, “Can the program demonstrate the level to which students have attained the anticipated student outcomes?” The evidence of student learning is then used to identify student strengths and weaknesses related to each of the student outcomes for the purpose of making decisions about how to improve the program teaching/learning processes.

This evidence should be the product of faculty reviewing and/or observing student work related to the program requirements. In preparation for reviewing a program’s processes related to Criterion 4, Continuous Improvement, for student outcomes, it is important to understand several principles of a well-constructed process to enable continuous improvement related to program-level student learning.

  1.  The focus of Criterion 4 (continuous improvement) is on the assessment of the program, not on the assessment of individual students.

    Assessment of the attainment of student outcomes at the program level focuses on the performance of selected student and graduate cohorts. The program faculty gains insight into how well the program is developing its outcomes by evaluating the results of student outcome assessment for the selected cohort.

    In general, results are reported in terms of the percentage of students in the student cohort who meet the program’s student outcomes targets. The program’s interpretation of the results informs decision making for continuous improvement purposes.

     
  2. The focus of Criterion 4 (continuous improvement related to student outcomes) is on the learning of students and not the assessment or evaluation of individual courses. At the program level, assessment and evaluation should be focused on the learning that has resulted from the experiences in the program by the time of graduation. The purpose is to provide information on the program’s efficacy (its ability to achieve what it was designed to achieve).

     
  3. Student outcomes should be defined in order for faculty to have a common understanding of the expectations for student learning and to achieve consistency across the curriculum.

    Well-defined student outcomes also communicate to students what learning will be expected as they progress through the program. Without agreed upon definitions of the student outcomes, faculty may have widely varying understandings of what constitutes performance of a given outcome. When faculty have variable definitions of the student outcomes, it is almost impossible to determine the extent to which a student cohort has attained the outcomes.

    One way to establish a common and consistent understanding of what constitutes measurable performance of a student outcome is for those faculty involved to develop a few performance indicators for each student outcome. For further information on writing performance indicators, see the BONUS reading Student Outcomes and Performance Indicators (PDF).

     
  4. A program does not have to collect data on every student in every course to know how well it is doing toward attaining student outcomes.

    In fact, a program does not have to collect evidence of performance on every student. Because the focus of the assessment activity is on the program and not on individual students, it is important that the cohort used for data collection be representative of the range of students in the program.

    If a sample is drawn from the cohort, it must reflect the characteristics of the program’s student population (e.g., grade averages, gender, and other demographics) in the same proportions. In programs with a small graduating class, sampling may not be appropriate.

    However, if data on a specific student outcome are collected only every three years (see #6 below), a program is in fact sampling regardless of cohort size, since it is not collecting data on every student who leaves the program.

     
  5. To provide evidence of attainment of student outcomes by the time of graduation for program reporting purposes, programs may choose to evaluate and report only data collected in core upper-level courses. Although not required by the accreditation criteria, a best practice is to sample from strategically selected core courses toward the end of the curricular cycle (meaning those where the most representative sample of student attainment of outcomes can be gathered).

    There are many reasons why programs should collect data (baseline or other) in the lower-level courses over which they have control for their continuous improvement, but for the most part it is sufficient to choose from upper-level courses for ABET reporting purposes. In general, knowledge, skills, or behaviors students demonstrate in lower-level courses are not as likely a result of the program’s discipline-specific curriculum.

     
  6. A program does not have to assess every outcome every year to know how well it is doing toward attaining student outcomes.

    Collecting too much data on individual students often leads to difficulty. This is certainly true if a program requires faculty to collect data in every course where student outcomes are being “covered.” Not only does this make the data-collection process cumbersome, it also makes it almost impossible to turn the data into useful information.

    A viable alternative data-collection approach is to use assessment cycles in which, on a rotating basis, performance-indicator data for a portion of the student outcomes are sampled from two, or preferably three, core upper-level courses where the outcomes are “covered.” This approach produces evidence that can be used for evaluation and for decisions about actions to be taken, and it relieves faculty of unnecessary data collection. Staggering the data collection over the six-year accreditation cycle produces a continuous and systematic process.

     
  7. The focus is continuous improvement based on information for decision-making, not just data collection (i.e., data ≠ information).

    ABET accreditation criteria mandate that program faculty focus on continuous improvement using documented processes for assessing and evaluating the attainment of student outcomes. The faculty time and data-collection demands of these assessment processes should be consistent with the day-to-day operation of the program, and faculty should maintain the processes of assessment and subsequent evaluation across the interval between successive accreditation visits. Assessment processes focused on the continuous improvement of the program produce results that faculty and administration can use systematically and in meaningful ways.
     
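The cohort-percentage reporting described in principle 1 and the rotating assessment cycles described in principle 6 can be sketched in code. The sketch below is illustrative only: the outcome names, the 1–4 rubric scale, the attainment threshold, and the two-outcomes-per-year rotation are all hypothetical assumptions, not ABET requirements.

```python
# Illustrative sketch only. Outcome names, rubric scale, threshold,
# and rotation pace are hypothetical assumptions, not ABET policy.

def percent_attaining(scores, threshold):
    """Percentage of a cohort's rubric scores at or above a target threshold."""
    if not scores:
        return 0.0
    met = sum(1 for s in scores if s >= threshold)
    return 100.0 * met / len(scores)

def rotation_schedule(outcomes, cycle_years=6, per_year=2):
    """Assign student outcomes to years of the accreditation cycle on a
    rotating basis, so each outcome is assessed at least once per cycle."""
    schedule = {year: [] for year in range(1, cycle_years + 1)}
    for i, outcome in enumerate(outcomes):
        year = (i // per_year) % cycle_years + 1
        schedule[year].append(outcome)
    return schedule

# Hypothetical rubric scores (1-4 scale) for one outcome's sampled cohort:
cohort_scores = [3, 4, 2, 3, 3, 4, 1, 3]
print(percent_attaining(cohort_scores, threshold=3))  # → 75.0

# Seven hypothetical student outcomes staggered over a six-year cycle:
outcomes = ["SO-%d" % n for n in range(1, 8)]
print(rotation_schedule(outcomes))
```

A program would, of course, pair each percentage with the faculty's interpretation of the result; as principle 7 notes, the numbers alone are data, not information.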

The following are the underlying principles of continuous quality improvement of student learning at the program level.

  1. The focus of Criterion 4 (continuous improvement related to student outcomes) is on the learning of students and not the assessment or evaluation of individual students.
     
  2. The focus of Criterion 4 (continuous improvement related to student outcomes) is on the learning of students and not the assessment or evaluation of individual courses.
     
  3. Student outcomes should be defined in order for faculty to have a common understanding of the expectations for student learning and to achieve consistency across the curriculum.
     
  4. A program does not have to collect data on every student in every course to know how well it is doing toward the attainment of student outcomes.
     
  5. To provide evidence of attainment of student outcomes by the time of graduation for program reporting purposes, programs may choose to evaluate and report only data collected in courses towards the end of the curricular cycle.
     
  6. A program does not have to assess every outcome every year to know how well it is doing toward the attainment of student outcomes.
     
  7. The focus is on continuous improvement based on information for decision-making.
     

D. How do I know if a program has an adequate continuous quality improvement process for student learning?

Evidence of a CQI process would contain the following:

  1. A timeline of repeated activities related to assessment and evaluation. Possible question: “What is your data collection and evaluation timeline?”
  2. Agreed-upon definitions of student outcomes. (Identifying a few performance indicators per outcome is an effective way to develop measurable definitions.) Possible question for faculty: “How does your program define its student outcomes to ensure consistent assessment across the curriculum?”
  3. Systematic data collection focusing on performance related to the student outcomes. Possible question: “Where do you collect the data that serve as evidence of student learning?”
  4. Systematic data collection ensuring coverage of each student outcome for the given student cohort. Possible request: “Describe how the data being presented were collected.”
  5. Data collection and analysis providing information that enables faculty to identify superior performance and opportunities for improvement related to the outcomes. Possible question: “I see X% of your students have attained outcome Y. Were there any notable positive or negative aspects of the students’ performance?”
  6. An evaluation process clearly communicating to program faculty opportunities for improvement in student learning. Possible request: “Describe how the proposed actions improved student learning (or are anticipated to improve student learning) related to the enhancement opportunities identified.”

 

Go to Proficiency Assessment #2 
