Assessment of CAC Self-Study Report


Curtis Cook 1, Pankaj Mathur 2, and Marcello Visconti 3

Abstract - In 2000, the Accreditation Board for Engineering and Technology, Inc. (ABET) changed the way computer science (and engineering) programs are accredited from a checklist approach to an outcomes-based approach. While this approach gives more freedom to the program to establish its own set of objectives, it has also created considerable anxiety among the people responsible for preparing their programs for accreditation. The Self-Study Report, which plays an important role in the ABET accreditation process, describes how the computer science program satisfies the statement of intent and standards of the accreditation criteria. However, preparation of the Report has become more difficult with the change to an outcomes-based approach. We have developed a model of an ideal program based on CAC guidelines and standards, and a tool that assesses the thoroughness and completeness of the Report compared to the model. Programs seeking accreditation can use the tool to get information on any deficiencies prior to submitting the Report to the evaluation team.

Index Terms - Accreditation, Assessment, CAC Self-Study Report.

INTRODUCTION

Accreditation is becoming more common for computer science programs as the computer science and computer engineering professional societies have joined forces under ABET in this effort. The benefits are an objective evaluation of the program, recognition of the quality of a program by an outside agency, and attractiveness to students and prospective employers. Prior to 2001 the Computer Science Accreditation Commission (CSAC), operating under the auspices of the Computing Sciences Accreditation Board (CSAB), carried out computer science accreditation. With the change, CSAB was integrated into ABET (the Accreditation Board for Engineering and Technology) and CSAC became the CAC (Computing Accreditation Commission), operating under the ABET umbrella. Now programs are ABET-accredited.

Besides the organizational change, the accreditation criteria have changed from a checklist approach to an outcomes-based approach. In a checklist approach the evaluation is based on a set of criteria that prescribed most of the courses and activities that a program had to offer to be accredited. In an outcomes-based approach, the program specifies outcomes (skills, knowledge, behavior) that its students should acquire as they progress through the program [2], along with some type of assessment mechanism to determine how well the outcomes are being achieved. There are seven Criteria, and for each Criterion a statement of Intent and a list of Standards. The statement of Intent provides the underlying principles for that criterion, while the Standards describe how a program can minimally meet the statement of Intent.

This change to an outcomes-based approach, with its stress on outcomes, objectives and assessments, has created considerable anxiety among computer science programs preparing for the accreditation process ([2], [5], [9]). The primary cause of this concern is Criterion I (Objectives and Assessments) and the statement of Intent (underlying principles) for Criterion I [3]: "The program has documented measurable objectives, including expected outcomes for graduates. The program regularly assesses its progress against its objectives and uses the results of the assessments to identify program improvements and to modify the program's objectives." The Standards for Criterion I are given in Figure 1.
Thus for a program to meet the statement of intent, it must satisfy all Standards for that criterion or demonstrate an alternative approach to achieving the Intent of the Criterion [5]. However, satisfying all Standards is not as easy as it seems.

Criterion I. Objectives and Assessments

Standards
I-1. The program must have documented, measurable objectives.
I-2. The program's objectives must include expected outcomes for graduating students.
I-3. Data relative to the objectives must be routinely collected and documented, and used in program assessments.
I-4. The extent to which each program objective is being met must be periodically assessed.
I-5. The results of the program's periodic assessments must be used to help identify opportunities for program improvement.
I-6. The results of the program's assessments and the actions taken based on the results must be documented.

FIGURE 1
STANDARDS FOR CRITERION I.

1 Curtis Cook, Oregon State University, Corvallis, OR, cook@eecs.orst.edu
2 Pankaj Mathur, Oregon State University, Corvallis, OR, mathurpa@eecs.orst.edu
3 Marcello Visconti, Departamento de Informática, Universidad Técnica Federico Santa María, Valparaíso, CHILE, visconti@inf.utfsm.cl

A Guidance document [4] that provides statements of generally acknowledged ways to satisfy the Standards is not comprehensive and does not address all of the Standards [5]. This is especially true for the Criterion I Standards. While Criterion I is essentially a new criterion, the other six criteria are similar to the previous checklist accreditation criteria.

The evaluation of a program is based on the Self-Study Report supplemented by an on-site visit by the evaluation team. The Self-Study Report describes the program and how it satisfies the standards and intent statements of all seven criteria. The purposes of the on-site visit are to assess factors that cannot be adequately described in the Self-Study Report, to help the institution in its assessment, and to examine in more detail some of the information provided by the institution. Since the Self-Study Report plays the key role in the evaluation, it is crucial that it be thorough and complete. Furthermore, the new outcomes-based approach has made preparation of the Report more difficult, as it is important that none of the criteria intent statements or standards be skipped or inadequately addressed.

We have developed a tool to assess the thoroughness and completeness of the Report prior to its submission to the evaluation team. The tool consists of a questionnaire and an assessment report generated from the responses to the questionnaire. The person or persons mainly responsible for the Self-Study Report complete the questionnaire. The assessment report identifies deficiencies (items not satisfied) and possible deficiencies (items where it is not clear from the information provided that they have been adequately addressed).

OUTCOMES-BASED APPROACH

There are two parts to an outcomes-based approach. The first is the development of objectives that describe outcomes, or what the student will be able to do after completing the program. The second is an assessment of whether the objectives are being met. Developing measurable objectives and assessing them for the Criteria II-VII Standards [3] is reasonably direct; however, the Criterion I Standards and the Guidelines [4] for them are not as specific, and there is considerable freedom in defining the program objectives and assessing them. The latter has created concern among computer science programs seeking accreditation under the new ABET/CAC approach of continuous improvement via an objectives and assessment feedback cycle.

Several papers [1, 7, 9] have described the effectiveness of assessment methods in terms of faculty time and effort. However, this focus on outcomes assessment diverts attention away from the real purpose of the assessment as a means of feedback for improvement. That is, besides collecting data from assessments, recommendations for program improvement are generated from the analysis of the assessment data, and the actions taken on these recommendations are reported. Furthermore, the results of the program assessment analysis, and the recommendations and actions taken, must be documented. The net result is a continuous process of improving the computer science program, which is the ultimate goal of Criterion I and, to a lesser extent, Criteria II-VII.
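As a concrete illustration of the documentation this feedback cycle calls for, the sketch below shows one minimal way a program might record a single pass through the cycle so that the collected data, the recommendations, and the actions taken are all documented. The record structure and field names are purely illustrative; they are not prescribed by the CAC criteria and are not part of the tool described later.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class AssessmentCycleRecord:
    """Illustrative record of one pass through the objectives/assessment feedback cycle."""
    objective: str                                              # a documented, measurable program objective
    data_sources: List[str]                                     # where the assessment data came from
    findings: str                                               # summary of the analysis of the collected data
    recommendations: List[str] = field(default_factory=list)   # suggested program improvements
    actions_taken: List[str] = field(default_factory=list)     # changes actually implemented
    assessed_on: date = field(default_factory=date.today)      # when the assessment was performed

# Example entry a department might keep in its documentation repository
record = AssessmentCycleRecord(
    objective="Graduates can design and implement a moderately sized software system",
    data_sources=["senior exit survey", "capstone course evaluation"],
    findings="Outcome largely met; testing skills rated weakest",
    recommendations=["add a testing module to the software engineering course"],
    actions_taken=["testing module approved for the next academic year"],
)
```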
META-MODEL FRAMEWORK

Software process improvement (SPI) methods were developed as a solution to the software crisis: the problem of software being late, over budget, unreliable and of reduced functionality. The idea behind these methods is that improving existing software development processes will improve the quality of the software produced [10]. In assessment-based SPI the existing process is assessed to identify key practices that need to be improved, a plan to improve a selected set of these practices is developed, the plan is implemented, the results are evaluated, and the entire improvement process is repeated starting with the assessment. The result is a process of continual improvement. The assessment or diagnosis part of these methods compares the existing process with a benchmark or model, either the practices of an organization acknowledged to be a leader or what are considered best practices. The Software Engineering Institute Capability Maturity Model (SEI CMM) [8] is by far the most common assessment-based SPI model.

The Meta-Model Framework was designed to aid organizations in identifying key practices and sub-practices to initiate and sustain a software process improvement effort focused on a single process area. Table 1 gives an overview of the framework. From Table 1 we see that the structure of the meta-model reflects not only the process itself, but also the quality and usability/customer satisfaction dimensions of the products or services produced by the process. It defines four major action phases (Identify, Monitor, Measure, Feedback) for the three dimensions: Core Process, Quality Assurance and Usability/Customer Satisfaction. The Identify phase defines the product or service, its importance to the organization, the practices in the process and the components of the process. The Monitor phase checks for evidence of organizational support and that the practices and components of the Identify phase are being done. The Measure phase defines measures for each of the three dimensions and activities to collect and analyze these measures. Finally, the Feedback phase generates recommendations for improvements from the measurements, evaluates and prioritizes them, and generates plans to incorporate them into the process.

Once the key practices and sub-practices have been identified, the next step is to develop an assessment mechanism for determining the degree to which these practices are satisfied. A questionnaire is commonly used in this step. A final step in the assessment process is to report the results of an analysis of the data gathered by the assessment mechanism.
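Because the framework is a small, fixed structure of phases and dimensions, it is easy to represent in code, for example to generate a checklist of meta-practices for a chosen dimension. The sketch below is one such representation of Table 1; the names and the helper function are our own illustration, not part of the framework's definition in [10].

```python
from enum import Enum
from typing import Dict, List, Tuple

class Phase(Enum):
    IDENTIFY = "Identify"
    MONITOR = "Monitor"
    MEASURE = "Measure"
    FEEDBACK = "Feedback"

class Dimension(Enum):
    CORE_PROCESS = "Core Process"
    QUALITY_ASSURANCE = "Quality Assurance"
    USABILITY_SATISFACTION = "Usability/Customer Satisfaction"

# Meta-practices from Table 1 for the Identify and Monitor phases.
META_PRACTICES: Dict[Tuple[Phase, Dimension], str] = {
    (Phase.IDENTIFY, Dimension.CORE_PROCESS): "Define important practices of the process for generating the product or providing the service",
    (Phase.IDENTIFY, Dimension.QUALITY_ASSURANCE): "Define important quality assurance practices for the product or service",
    (Phase.IDENTIFY, Dimension.USABILITY_SATISFACTION): "Define important practices for product usability or customer satisfaction",
    (Phase.MONITOR, Dimension.CORE_PROCESS): "Monitor adherence to the process",
    (Phase.MONITOR, Dimension.QUALITY_ASSURANCE): "Monitor quality assurance activities",
    (Phase.MONITOR, Dimension.USABILITY_SATISFACTION): "Monitor usability/customer satisfaction activities",
}

# The Measure and Feedback meta-practices apply across all three dimensions.
ALL_DIMENSION_PRACTICES: Dict[Phase, str] = {
    Phase.MEASURE: "Define, collect and analyze measures",
    Phase.FEEDBACK: "Generate, evaluate, prioritize and incorporate recommendations",
}

def checklist(dimension: Dimension) -> List[str]:
    """List the meta-practices that apply to one dimension, in phase order."""
    items = [META_PRACTICES[(p, dimension)] for p in (Phase.IDENTIFY, Phase.MONITOR)]
    items += [ALL_DIMENSION_PRACTICES[p] for p in (Phase.MEASURE, Phase.FEEDBACK)]
    return items

print(checklist(Dimension.QUALITY_ASSURANCE))
```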

ASSESSMENT TOOL

It was felt that the Meta-Model Framework was general enough to be applied outside of the software arena, and the Self-Study Report seemed to be a good product to test it on. The first step was to develop the key practices and sub-practices. Since the Self-Study Report is based on the Criteria for Accrediting Computing Programs [3], we used the Criteria [3] and the Guidance for Interpreting the Criteria for Accrediting Computing Programs [4] as the basis for constructing these practices. There are seven Criteria categories: Objectives and Assessments, Student Support, Faculty, Curriculum, Laboratories and Computing Facilities, Institutional Support and Financial Resources, and Institutional Facilities.

Criterion I (Objectives and Assessments) is new and represents the biggest challenge in the switch to an outcomes-based approach. Two major challenges are that the objectives and outcomes are different for each program, and that it is the responsibility of the program to demonstrate, by some suitable assessment mechanism, that the objectives are being met and the outcomes are being achieved. We felt that the key to meeting these challenges is to notice the similarity between the intent of this criterion and assessment-based SPI. This observation is in line with the process-oriented framework for satisfying Criterion I recommended by Jones [5]. Jones recommended first establishing a process for creating objectives and assessing results, including defining the sequence of steps, the activities, and the responsibilities in each step; then documenting the assessment results and the changes to the program considered and taken; and finally establishing a repository for the documentation. Therefore we feel that if a program is to satisfy Criterion I, the program must first develop objectives and expected outcomes, then assess the program against these, and finally make changes to the program and objectives based on the assessment results. Clearly, for this to happen the objectives and expected outcomes must be documented and measurable, and the assessments and changes must be done on a regular basis. Furthermore, this define, assess, analyze, change cycle must be done for both program objectives and course learning objectives.

The Key Practice and sub-practices for the Criterion I category constructed using the Meta-Model Framework are given in Figure 2. The Key Practice corresponds to the overall objectives of Criterion I. Comparing Figures 1 and 2, we see that for the most part the Key Practice and sub-practices in Figure 2 extend, and are more explicit than, the Standards in Figure 1. For example, sub-practice e extends Standard I-4 to include not only program objectives but also program outcomes and course learning objectives. Sub-practices f and g extend I-5 by explicitly requiring recommendations based on the assessment results and the selection and implementation of these recommendations. The corresponding mapping between Figures 1 and 2 is I-1: a; I-2: b, c; I-3: d; I-4: e; I-5: f, g; I-6: h. A complete list of the Key practices and sub-practices for all seven Criteria categories will be available shortly in a technical report [6].

Standards for the other six Criteria are similar to the earlier checklist version. For the most part, they are specific and easily translate into Key Practices or sub-practices. For example, in the Curriculum category the Standards list the minimum number of semester hours for fundamental and advanced computer science material, mathematics, and science courses.

Key Practice I. Definition and assessment of program objectives and expected outcomes

Sub-Practices
a. The program objectives include documented, measurable outcomes.
b. Each course has documented course learning objectives.
c. Course learning objectives cover the program outcomes for graduating students.
d. Data relative to the program objectives is collected and documented on a regular basis.
e. Program objectives, program outcomes, and course learning objectives are assessed on a regular basis.
f. Evaluate assessment data for program objectives, program outcomes and/or course learning objectives and generate recommended changes for program improvement.
g. A process or mechanism exists to select a set of recommended changes and implement these changes to program objectives, program outcomes and/or course learning objectives.
h. Results of assessments, recommended changes, and actions taken are documented.

FIGURE 2
KEY PRACTICE AND SUB-PRACTICES FOR THE CRITERION I CATEGORY.

QUESTIONNAIRE

Once the Key practices and sub-practices were created, the next step was to develop a questionnaire that aids in determining how the computer science program's practices compare with the model's practices. Developing questions for very specific sub-practices was not difficult. For example, the sub-practices for Criterion IV (Curriculum) specify a minimum number of credits in fundamental and advanced computer science, mathematics, and science courses. However, for other sub-practices developing questions was not as direct, especially for Criterion I. Figure 3 gives a sample of questions for sub-practices of Criterion I. Questions 7-14 are related to Criterion I sub-practices d-g for program objectives (Figure 2). The questions for sub-practice II-a, "There are established standards and procedures to ensure that graduates meet the requirements of the program," are given in Figure 4. The questionnaire has about 100 questions, with at least one question for each sub-practice. A complete list of the questions is given in [6].

SCORING

The main purpose of the assessment report is to identify Standards that are deficient (missing or not fully addressed) or possibly deficient (cannot be determined from the information provided). The practices and sub-practices were designed so that there is a simple mapping between them and the corresponding Statements of Intent and Standards for each of the criteria. Hence the degree of satisfaction of each Key practice and sub-practice can be used to determine the degree of satisfaction of the corresponding Criteria category and Standard.
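As an illustration of how this mapping can be exploited, the sketch below rolls question scores up into sub-practice and Standard scores using the rules described in the remainder of this section (all Satisfied; at least one Not Satisfied; all Cannot Decide; otherwise Possibly Satisfied). The function names and the mapping fragment are ours alone and are not part of the tool.

```python
from typing import Dict, List

# Possible scores for a question, sub-practice, Key practice, or Standard.
SATISFIED, NOT_SATISFIED, POSSIBLY_SATISFIED, CANNOT_DECIDE = (
    "Satisfied", "Not Satisfied", "Possibly Satisfied", "Cannot Decide")

def roll_up(scores: List[str]) -> str:
    """Combine the scores of the items beneath a sub-practice, Key practice, or Standard."""
    if all(s == SATISFIED for s in scores):
        return SATISFIED
    if any(s == NOT_SATISFIED for s in scores):
        return NOT_SATISFIED
    if all(s == CANNOT_DECIDE for s in scores):
        return CANNOT_DECIDE
    return POSSIBLY_SATISFIED

def score_standards(question_scores: Dict[str, str],
                    questions_per_subpractice: Dict[str, List[str]],
                    subpractices_per_standard: Dict[str, List[str]]) -> Dict[str, str]:
    """Score each Standard from individual question scores via the sub-practice mapping."""
    sub_scores = {sp: roll_up([question_scores[q] for q in qs])
                  for sp, qs in questions_per_subpractice.items()}
    return {std: roll_up([sub_scores[sp] for sp in sps])
            for std, sps in subpractices_per_standard.items()}

# Illustrative fragment of the Criterion I mapping (I-2 maps to sub-practices b and c).
subpractices_per_standard = {"I-1": ["a"], "I-2": ["b", "c"]}
questions_per_subpractice = {"a": ["Q1"], "b": ["Q2"], "c": ["Q3", "Q4"]}
question_scores = {"Q1": "Satisfied", "Q2": "Satisfied",
                   "Q3": "Possibly Satisfied", "Q4": "Satisfied"}
print(score_standards(question_scores, questions_per_subpractice, subpractices_per_standard))
# -> {'I-1': 'Satisfied', 'I-2': 'Possibly Satisfied'}
```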

We decided that each question would be graded as Satisfied, Not Satisfied, Possibly Satisfied (insufficient information to make a decision), or Cannot Decide (the response is "Don't know"). A sub-practice is scored Satisfied if all the questions related to the sub-practice are scored Satisfied, is scored Not Satisfied if at least one question related to the sub-practice is scored Not Satisfied, is scored Cannot Decide if all questions related to the sub-practice are scored Cannot Decide, and is scored Possibly Satisfied otherwise. A Key practice is scored Satisfied if all of its sub-practices are scored Satisfied, is scored Not Satisfied if at least one of its sub-practices is scored Not Satisfied, is scored Cannot Decide if all of its sub-practices are scored Cannot Decide, and is scored Possibly Satisfied otherwise. Because of the simple mapping between the sub-practices and the Standards, the Standards can be scored using the same scoring scheme as the Key practices. The scoring for Key practices and sub-practices is done automatically by a spreadsheet that scores each question and then uses these scores to score each sub-practice and Key practice.

ASSESSMENT REPORT

The assessment report gives an analysis of the assessment results. The person or persons most directly involved in the preparation of the Self-Study Report complete the questionnaire because they are most knowledgeable about the contents of the Report and are likely the most interested in the deficiencies or possible deficiencies in their Self-Study Report, that is, which Standards are not, or may not be, satisfied. The Standard scores conveniently provide this information. Standards scored as Not Satisfied translate into deficient standards, and Standards scored as Possibly Satisfied or Cannot Decide translate into possibly deficient standards. This scoring can also be extended to whether or not individual Criteria are satisfied.

CONCLUSIONS

We have described an assessment tool for the CAC Self-Study Report. The next step is to validate the tool. Feedback from CAC reviewers and several faculty responsible for preparing Self-Study Reports has been positive and has provided suggestions for improvement. We have run two pilot validations of the tool for computer science departments that were recently accredited. The person in each department most responsible for the accreditation Self-Study Report completed the questionnaire. The tool's assessment report pointed out several possible minor deficiencies with Program 1 and one major deficiency with Program 2. In particular, Program 1 had oversized upper-division classes and a weak science requirement, and Program 2 had not done much to address Standard I-5, which deals with using the results of the program's periodic assessments to identify opportunities for program improvement. Although both departments were eventually accredited, the evaluation team commented on several of these deficiencies, and in the case of the major deficiency the department expended considerable effort in addressing it.

Our validation plan is to assess several computer science programs that have recently undergone accreditation review and several computer science programs applying for review. An on-line version of the questionnaire is available ( SurveyID=531&cmd=survey). It produces a spreadsheet file from the responses. Our eventual goal is to automate the scoring and assessment report generation process.
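As a sketch of the kind of automation we have in mind, the fragment below turns Standard scores (such as those produced by the scoring sketch in the SCORING section) into the lists of deficient and possibly deficient Standards that make up the assessment report. The function and its output format are illustrative only and do not describe the current tool.

```python
from typing import Dict, List, Tuple

def assessment_report(standard_scores: Dict[str, str]) -> Tuple[List[str], List[str]]:
    """Split scored Standards into deficient and possibly deficient lists."""
    deficient = [std for std, score in standard_scores.items()
                 if score == "Not Satisfied"]
    possibly_deficient = [std for std, score in standard_scores.items()
                          if score in ("Possibly Satisfied", "Cannot Decide")]
    return deficient, possibly_deficient

# Example: scores like those returned by score_standards() in the earlier sketch
deficient, possibly = assessment_report(
    {"I-1": "Satisfied", "I-2": "Possibly Satisfied", "I-5": "Not Satisfied"})
print("Deficient standards:", deficient)            # ['I-5']
print("Possibly deficient standards:", possibly)    # ['I-2']
```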
Finally, we believe that a similar tool could be developed for assessing the ABET Self-Study Report for other engineering programs, using much the same process as we used.

REFERENCES

[1] Blandford, D. and D. Hwang, "Five Easy but Effective Assessment Methods", Proceedings SIGCSE 03, Reno, Nevada, February 2003.
[2] Couch, D. and L. Schwartzman, "Computer Science Accreditation: The Advantages of Being Different", Proceedings SIGCSE 03, Reno, Nevada, February 2003.
[3] Criteria for Accrediting Computing Programs.
[4] Guidance for Interpreting the Criteria for Accrediting Computing Programs.
[5] Jones, L. G. and A. L. Price, "Changes in Computer Science Accreditation", CACM 45 (8), August 2002.
[6] Mathur, P. and C. Cook, "Assessment Tool for CAC Self Study Report", Technical Report , School of Electrical Engineering and Computer Science, Oregon State University.
[7] Sanders, K. and R. McCartney, "Program Assessment Tools in Computer Science: A Report from the Trenches", Proceedings SIGCSE 03, Reno, Nevada, February 2003.
[8] Sommerville, I., Software Engineering, Addison-Wesley, Reading, MA.
[9] Soundarajan, N., "Objectives, Outcomes and Assessment Mechanisms for CS Programs", 31st ASEE/IEEE Frontiers in Education Conference, October 2001, Reno, NV, pp. T2A-17-T2A-21.
[10] Visconti, M. and C. Cook, "A Meta-model Framework for Software Process Modeling", Proceedings 4th International Conference, PROFES 2002 (LNCS 2559), Rovaniemi, Finland, December 2002.

7. What methods are used to collect data that may be used to measure degree of student achievement for each program outcome? (Circle all that apply)
   Senior exit survey   Senior exit interview   Alumni survey   Employer survey   Local written exam   National written exam   Oral exam   Industrial advisory panel   Capstone course(s)   Other   Don't know
8. How often is this data in Question 7 collected?
   Every year   Every 2 years   Seldom   Never   Don't know
9. How often are program objectives assessed?
   Every year   Every 2 years   Seldom   Never   Don't know
10. Who is involved in assessing program objectives? (Circle all that apply)
   Department head   Faculty member   Faculty committee   Industry representatives   Student representatives   Alumni   Other   Don't know
11. Are the assessment results evaluated and recommendations made for changes to documented program objectives?
12. Is there a process or mechanism to select which recommended changes to program objectives will be implemented?
13. How many of these recommended changes have led to changes in program objectives?
   None   Some   Most   All   Don't know
14. How often is the impact of changes to program objectives evaluated?
   Every year   Every 2 years   Seldom   Never   Don't know

FIGURE 3
SAMPLE OF THE QUESTIONS FOR SEVERAL SUB-PRACTICES OF CRITERION I.

50. Is there a mechanism to check that the graduates fulfill the program requirements?
    If Yes, is the mechanism documented?
    If Yes, how are results made available to the students?
    If Yes, how long before graduation are these results made available to students?
       One term   One year   Other   Don't know
    If Yes, does the mechanism provide for handling exceptions (e.g., transfer students, recent changes in program requirements)?
51. Is there documentation that describes graduation requirements?
    If Yes, does the documentation cover exceptions such as transfer courses or who will be impacted by changes in graduation requirements?
52. Is the documentation describing graduation requirements available to students?
    If Yes, how do students access it? (Circle all that apply)
       Web page   Main office   Student advisor   Other

FIGURE 4
SAMPLE QUESTIONS FOR SUB-PRACTICE II-A: THERE ARE ESTABLISHED STANDARDS AND PROCEDURES TO ENSURE THAT GRADUATES MEET THE REQUIREMENTS OF THE PROGRAM.

TABLE 1
THE META-MODEL FRAMEWORK FOR KEY PRACTICES

Meta-Practices for each Dimension

Identify phase:
  Core Process: Define important practices of process for generating product or providing service
  Quality Assurance: Define important quality assurance practices for product or service
  Usability/Customer Satisfaction: Define important practices for product usability or customer satisfaction

Monitor phase:
  Core Process: Monitor adherence to process
  Quality Assurance: Monitor quality assurance activities
  Usability/Customer Satisfaction: Monitor usability/customer satisfaction activities

Measure phase (all dimensions): Define, collect and analyze measures

Feedback phase (all dimensions): Generate, evaluate, prioritize and incorporate recommendations