Module 5: Applying the Criteria

You have reviewed many of the fundamentals of the ABET Accreditation Process. Now, you will apply what you have learned in a series of exercises and in a check for understanding.

A. Criteria Application Basics

  • ABET accredits educational programs leading to degrees. (ABET does NOT accredit institutions, departments, or degrees.)
  • ABET defines an educational program as an integrated, organized experience that culminates in the awarding of a degree. The program will have program educational objectives, student outcomes, a curriculum, faculty, and facilities.

You can find a thorough explanation of the above in the Accreditation Policy and Procedure Manual, Section I.C.

B. Understanding the Criteria

Using the Criteria to evaluate a program’s compliance begins with understanding the Criteria. ABET criteria are based on the principles of continuous quality improvement. General Criteria cover the following areas of an educational program:

  • Students
  • Program Educational Objectives
  • Student Outcomes
  • Continuous Improvement
  • Curriculum
  • Faculty
  • Facilities
  • Institutional Support

Your member society may also have additional criteria covering minimum standards for the specific program discipline you will be evaluating. These are called “Program Criteria.” Note that there is no longer a Criterion 9, Program Criteria, in the General Criteria.

You will find the current criteria, listed by accreditation commission, on the ABET website.

C. Common Issues Associated with Each Criterion

The ABET Criteria are minimum standards you apply with judgment. Over the years, ABET has identified common issues that may surface as you review a program’s Self-Study and gather evidence of compliance during the visit.

The issues listed below for each criterion area are not exhaustive. You may identify additional issues as you review the Self-Study. Additionally, some issues listed here may not by themselves represent a shortcoming relative to the criteria, but rather may indicate a need to seek additional information in order to determine whether there is a shortcoming. Remember that each shortcoming must refer to specific requirements in the criteria that are not fully met or that potentially may not be met in the future.

Students

  • Problems with student advising, including ineffective or inconsistent advising and a lack of understanding of curricular requirements, especially if many options are available.
  • Ineffective monitoring, including no documentation of course substitutions or missing prerequisites.
  • Problems with the transfer process, including no documentation on the acceptability of transfer credits.

Program Educational Objectives

  • Program educational objectives are not published or readily accessible to the public.
  • Program educational objectives are not related to institutional mission or are inconsistent with the mission.
  • Program educational objectives are not consistent with the needs of the program’s various constituencies.
  • Program educational objectives do not describe what graduates are expected to attain within a few years after graduation.
  • There is no indication as to who the program’s constituents are.
  • There is no evidence the needs of the program’s constituents have been considered in the formulation of the program’s educational objectives.
  • There is no process to periodically review and revise the program educational objectives.
  • There is no evidence of constituency involvement in the periodic review and revision of program educational objectives.

Student Outcomes

  • There is no process to periodically review and revise the student outcomes (ANSAC, CAC, ETAC only).
  • Student outcomes and any supporting performance indicators are stated such that attainment is not measurable. (Note: Having student outcomes whose attainment is not measurable is not by itself a violation of any criterion, but if attainment of an outcome is not measurable then the extent to which it is attained may not be appropriately evaluated, as required in Criterion 4.)
  • There is missing or incomplete justification as to how the student outcomes prepare graduates to attain the program educational objectives.
  • The student outcomes do not reflect what the students should know and be able to do at the time of graduation.

Continuous Improvement

Processes

  • The assessment and evaluation processes are not documented.
  • The program cannot demonstrate that its assessment and evaluation processes accomplish what it claims.
  • The assessment, evaluation, and improvement cycle is not complete.

Assessment

  • Indicators of student performance have not been defined and/or no a priori level of student performance has been established. (Although no criterion requires performance indicators or a priori performance levels, without these or something equivalent it may be difficult to appropriately evaluate the extent to which student outcomes are attained, and additional information may be needed to determine the appropriateness of the evaluation process for outcomes attainment.)
  • The program uses only anecdotal results (versus measured results).
  • The program relies only on course grades as assessment for one or more student outcomes. There are many factors, rarely all relating to a single student outcome for the program, used to determine a course grade. Thus the level of granularity of course grades relative to student outcomes is almost always too coarse for course grades to be used as reliable indicators for attainment of specific student outcomes.
  • There is an over-reliance on student self-assessment (e.g., surveys) as opposed to assessment methods based on actual student performance. As a rule, student self-assessment of outcomes attainment is considered much less reliable than attainment data from actual student performance relative to each outcome.
  • Assessment data are being collected for only some outcomes.

Evaluation

  • The data collected are not analyzed and used as input to a program improvement process.
  • The continuous improvement process appears to ignore evidence students are not attaining the student outcomes at the expected level of student performance.
  • The evaluation of data does not provide the information needed to make program improvements.

Improvement

  • Program improvement plans are developed but not implemented.
  • There is no documentation of how the results of assessment and evaluation processes are used to determine needed program improvements.
  • Results of the evaluation of student outcomes are not used to make needed improvements to the student outcomes.
  • There is no evidence improvement efforts are being assessed and evaluated.

Keep in Mind: You do not have to be an expert on assessment. The program must provide evidence that it has a working and effective system in place. Note: student outcomes and continuous improvement are closely linked.

Curriculum

A program’s curriculum provides the foundation for entry into the profession. The curriculum criterion varies among the commissions. The following issues related to this criterion may or may not be applicable:

  • Curriculum fails to meet credit hour requirements (if specified by criterion)
  • Quality of the culminating or integrating experience, comprehensive project, capstone or major design experience (if required by the criterion)
  • No culminating experience
  • Several courses with elements of a comprehensive project but not identified as the culminating experience
  • Multiple culminating courses or courses taught by different instructors that do not all satisfy the requirements of the criteria
  • Culminating design experience not addressing multiple constraints and appropriate standards (EAC only).

Faculty

  • Insufficient number to support concentrations, electives, etc. or to maintain continuity and stability
  • Poor faculty morale affecting the program
  • Lack of professional development
  • Excessive workloads
  • Retention / turnover rate
  • Heavy reliance on temporary faculty appointments or adjuncts, potentially jeopardizing program stability
  • Insufficient responsibility and authority to improve the program

Facilities

  • Insufficient space
  • Overcrowded laboratories and classrooms
  • Laboratories – unsafe conditions, essential equipment inoperable, or lack of modern instrumentation
  • Lack of software / hardware needed to support the curriculum

Institutional Support

  • Unstable leadership affecting programs
  • Dean and/or program head positions open or filled by interim appointments for an extended period
  • Frequent turnover of university administration and unit leadership
  • Inadequate operating budget affecting acquisition and maintenance of laboratories and appropriate equipment, faculty salaries, promotions, and professional development, or hiring and retention of faculty and staff
  • Insufficient support staff, including teaching assistants, technicians for instructional laboratories, machine shops, and laboratory services or administrative / clerical staff

D. The Decision-Making Process

Before the Visit

Using the Program Evaluator Worksheet and Program Evaluator Visit Report specific to your commission, you should be able to make a preliminary evaluation of the program based on your review of the program’s Self-Study. You should make a list of those issues requiring further investigation on site and discuss these with your team chair.

Please Note: Your draft visit plan should detail whom you will meet with on site to resolve any issues of program compliance with the criteria that are not answered to your satisfaction in the Self-Study Report.

On Site

Once on site, you may revise your evaluation after conducting interviews with faculty members, students, and administrators; reviewing documentation; and visiting facilities. You will share your findings with your team members at team meetings on Sunday and Monday nights. This will assist you in refining your recommended action. At the conclusion of the visit, you will provide your team chair with the recommended action for your program and an Exit Statement to support that action.

After the Visit

The Team Chair develops the Draft Statement to the institution by combining and editing the program exit statements from the Program Evaluators and adding material that applies to the institution as a whole. Two editors and ABET’s headquarters’ staff review the Draft Statement for adherence to standards and consistency with other statements. It is then sent to the institution, which has 30 days to respond. The Team Chair uses the response from the institution to prepare the Final Statement, which is edited again and then provided to the full commission for action. In preparing the Final Statement, the Team Chair may consult with the Program Evaluators as needed to determine whether there are any changes to the recommended accreditation action because of the institution’s actions since the visit. Final accreditation decisions are made at the Commission Meeting in July.

E. Evaluating a Program’s Compliance with the Criteria

To decide if a program complies with each criterion and to recommend an accreditation action, follow these steps:

  • Identify issues by criterion. Remember you may find issues not listed in the Common Issues Associated with Each Criterion section above.
  • Determine the appropriate finding by citing evidence that supports the finding level.
  • Select the key term that applies for the finding. Base your decisions on the criteria and evidence, NOT on your opinion. Consider the resulting recommended action. Is it consistent with the nature of the shortcoming?
  • Justify each concern, weakness, and deficiency in relation to the specific criterion, using wording consistent with the definition of the shortcoming; for example, “may” is appropriate for a Concern but not a Weakness.
  • Recommend the accreditation action. Prior to the site visit, your Team Chair will ask you where the program stands in overall compliance with the ABET Criteria. Based on your preliminary review, you will select one of the following potential actions as described in the Accreditation Policy and Procedure Manual, Section I.E.12. This preliminary judgment may be revisited after you gather more information during the campus site visit. Your available accreditation actions include:
    • Next General Review (NGR)
    • Interim Report (IR)
    • Interim Visit (IV)
    • Report Extended (RE)
    • Visit Extended (VE)
    • Show Cause Report (SCR)
    • Show Cause Visit (SCV)
    • Show Cause Extended (SE)
    • Not to Accredit (NA)
    • Terminate (T)

In The Accreditation Process, you read about Levels of Compliance, statements of compliance, concern, weakness, and deficiency, as well as observations with regard to your findings when evaluating a program. You can also find these in the Accreditation Policy and Procedure Manual, Section II.E.8.a.(2).

General Review Terminology vs. Action

If the evidence supports a program Weakness for a given criterion, you must recommend either an Interim Report or an Interim Visit action (if there is no Deficiency). Note, there is no difference in severity for the IR and IV actions. The only difference is whether the adequacy of the corrective action(s) can be determined based on a written report (with appropriate supporting documentation), or whether a visit is required in order to assess the adequacy of the action(s).

If the evidence supports a program Deficiency for a given criterion, you must recommend a Show Cause action if this is a re-review, or a Not-to-Accredit action if this is an initial review. Note also that a Not-to-Accredit action can result from a Show Cause Visit.
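The finding-to-action rule described in the two paragraphs above can be sketched as a small decision function. This is purely an illustrative sketch, not ABET tooling: the function name and inputs are hypothetical, and for clarity it omits the extended and terminate actions (RE, VE, SE, T) and the report-versus-visit judgment that distinguishes IR from IV (and SCR from SCV).

```python
def candidate_actions(worst_finding: str, initial_review: bool = False) -> list[str]:
    """Candidate recommended actions given the worst finding level for a program.

    worst_finding: "compliance", "concern", "weakness", or "deficiency".
    initial_review: True for an initial review, False for a re-review.
    Hypothetical helper for illustration; RE, VE, SE, and T are not modeled.
    """
    if worst_finding in ("compliance", "concern"):
        # No Weakness or Deficiency: the program can proceed to the next cycle.
        return ["NGR"]  # Next General Review
    if worst_finding == "weakness":
        # IR and IV carry the same severity; they differ only in whether the
        # corrective action can be judged from a written report or needs a visit.
        return ["IR", "IV"]
    if worst_finding == "deficiency":
        # Show Cause applies only on a re-review; on an initial review a
        # Deficiency leads to a Not-to-Accredit recommendation.
        return ["NA"] if initial_review else ["SCR", "SCV"]
    raise ValueError(f"unknown finding level: {worst_finding}")
```

For example, under these assumptions a program whose worst finding is a Weakness yields the candidate actions `["IR", "IV"]`, and the choice between them turns only on how the corrective action will be verified.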

Please refer to the chart below:

AC Actions Table

F. Applying the Criteria: Consistency Counts

Accreditation actions must be consistent across all programs with similar shortcomings (Weaknesses, Deficiencies) and across all institutions. As a PEV, it is essential that you compare your findings with those for the other programs being evaluated at the institution and ensure that different findings do not result from similar observations. The visit team should work together to resolve potential differences in findings for situations that are the same or very similar. Consistency is checked throughout the ABET Accreditation Process.

G. Sample Situations

What would you decide? Try out these sample situations, determining the best response given the information provided.

Situation #1

You are part of a team evaluating five programs at Bay State University. Four of the five programs do not have their educational objectives published anywhere. As a result, the team determines these four programs do not comply with the program educational objectives criterion, and the team agrees to give those programs a deficiency with respect to this criterion. However, the program you are evaluating has its program educational objectives published on its website, but not in the university catalogue or in any of the departmental promotional materials handed out to both enrolled and prospective students.

Select the best program evaluator response:

  1. The program you are evaluating is not in compliance since the objectives are not published in each location where prospective students and the public might look for them, and there needs to be consistency among the actions taken for different programs at the same university. Therefore, the team should agree to give the program a deficiency.
  2. The team agrees you will discuss this with the program head to try to resolve the issues before the team leaves the site. If the response of the program satisfactorily resolves the problem, you will not report it in the Exit Statement or exit meeting.
  3. The program is in partial compliance with the criterion but lacks the strength of full compliance and the team agrees to give the program a weakness.
  4. Since the objectives are published someplace and the criterion is silent on where the program educational objectives need to be published, the Program Evaluator should consider this as a concern at most and perhaps not mention it at all to the team.

Answer Key

  1. This would not be appropriate because the criterion does not require the objectives be published in all documents related to the program. Further, consistency among the program actions is not an issue; different actions for different programs are appropriate if the characteristics that are the basis for a decision are different.
  2. This is not appropriate because all the relevant facts are known, and it is not appropriate to leave without making decisions on all issues relative to the criteria.
  3. This may be the best alternative, although if there is evidence the program’s website is the most-referenced source of information about the program, then a Concern could be appropriate.
  4. This could be appropriate if the website is clearly the primary source of information about the program for prospective and current students. However, it should be mentioned to the team in any case, especially given the other programs do not publish their objectives anywhere.

Note: It is not necessary that the program educational objectives be published in every document about the program, but they should be included in all documents that are readily used by the public and by current and prospective students to obtain information about the program.

Situation #2

You are evaluating a program whose Self-Study indicates a significant portion of the student outcomes assessment data will be made available during the site visit. Once on site, you find these materials are neatly compiled statistical results of several assessment instruments: recent teaching evaluations, an alumni survey, employer interviews, and a project portfolio with feedback from multiple raters on panels. These are claimed to support the Continuous Improvement Criterion as evidence of an assessment process with documented results.

Select the best program evaluator response:

  1. Review the materials and identify trends in the responses that would suggest appropriate ways to improve the program you are visiting; report these to the program chair.
  2. Request the program provide an executive summary of the results before you depart from the site.
  3. Request additional documentation of how the results of the surveys and other instruments are applied to program improvement. If such evidence is not available, discuss with the team chair and other team members the option of citing the program as deficient with respect to the Continuous Improvement Criterion, since assessment results are not being utilized for the continuous improvement of the program.
  4. Tell the team chair the program will be cited for a Deficiency with respect to the criterion and they face a probable Show Cause recommended action.

Answer Key

  1. The Program Evaluator is not expected to engage in data analysis. It is up to the program faculty members to make the case for their claims.
  2. This approach may seem appropriate on the surface, but the most important issue in this case is how the data are used. (See next option.)
  3. This is the best first step. The Program Evaluator may find the program has used the data in some way, but has not documented it.
  4. This would be an appropriate action if the Program Evaluator determines this is all that was done, there was no serious attempt to use the data, and the faculty and program chair did not seem to be committed to using the data in any manner. However, the recommended action should not be mentioned explicitly.

Situation #3

The program you are visiting includes among its Educational Objectives preparing its undergraduates for successful careers both in the traditional chemical process industry and in the novel field of microelectronics. The faculty members’ CVs in the Self-Study indicate that none of the faculty has expertise in the microelectronics area. After the faculty interviews, you confirm this is the case and that the faculty member in charge of teaching the advanced senior-level course in microelectronics processing has been learning the course material on the fly from a new textbook in the area. Interviews with the students provide evidence that the faculty member teaching the microelectronics processing course is a favorite instructor. Students like the course and find the material to be very easy. Consultation with the program head suggests the course content has very little rigor for a senior-level course. You conclude the evidence indicates coverage of microelectronics is weak at best.

Select the best program evaluator response:

  1. As the Program Evaluator, you encourage the chairperson either to remove the objective from the published information or to provide the faculty with development opportunities to enhance their expertise in the microelectronics field. In the meantime, the program is cited with a weakness because of the lack of strong compliance with the Faculty Criterion: The faculty . . . must have the competencies to cover all of the curricular areas of the program. The faculty . . . must ensure the proper guidance of the program and its evaluation and development.
  2. In your Exit Statement, cite a deficiency for the Faculty Criterion and suggest that the administration provide funds for hiring additional faculty members to provide expertise in the microelectronics field.
  3. You commend the chairperson for the program’s attempts at providing innovation in its Educational Objectives. No weaknesses, concerns, or deficiencies need be cited.
  4. Report, with respect to the Curriculum Criterion, a concern about the microelectronics curriculum.

Answer Key 

  1. This may be the best response if the Program Evaluator believes the faculty member who is teaching the course is committed and capable of developing the needed expertise quickly, with the proper support. However, care should be taken to avoid appearing to prescribe any solution to the problem.
  2. This is an inappropriate response unless the Program Evaluator determines there are other factors, such as local industry and/or the school administration, demanding the microelectronics option be provided. However, care should be taken to avoid appearing to prescribe a solution to the problem.
  3. This is an unacceptable option in view of the findings of the Program Evaluator.
  4. This is an unacceptable response in two ways. First, there is no explanation of the basis for this concern. Second, it ignores the issue of faculty expertise as required by the Faculty Criterion and the need for this emphasis in the program (constituency support indicated through Program Educational Objectives Criterion).