
Int'l Conf. Frontiers in Education: CS and CE FECS'16

Evaluating Program Assessment Report

Nasser Tadayon
Department of Computer Science and Information Systems, Southern Utah University, Cedar City, Utah, USA

Abstract - The term meta-assessment refers to the evaluation of an assessment process with the purpose of weighing the quality of assessment. The objective of meta-assessment is to investigate the factors that lead to the improvement of the assessment process in order to make better-informed decisions about student learning outcomes. The meta-assessment process should use both quantitative and qualitative methods, together with all available resources, such as external reviewers. For an external reviewer, one of the main methods for evaluating a program's assessment is to examine the program's final report. The Joint Committee on Standards for Educational Evaluation (JCSEE) introduces a set of standards classified into the categories of Utility, Feasibility, Propriety, Accuracy, and Evaluation Accountability. In this paper, the internal/external review of the assessment report, one of the most common meta-assessment processes used at many institutions, is checked against the JCSEE standards. In conclusion, the strengths, the weaknesses, and some suggestions for improvement of the process are pointed out.

Keywords: Evaluation, Program Report, Assessment, Meta-assessment, Standards

1 Introduction

The concepts of assessment and evaluation of processes have been in use for many decades, and they have evolved and been shaped by their application and intent. There is a distinction between the words assessment and evaluation, even though in many cases they have been used interchangeably as synonyms.
Based on the MIT Teaching and Learning Lab [1], assessment is defined as the systematic collection of data to monitor the success of a program or course in achieving intended learning outcomes for students, while evaluation is a judgment by the instructor or educational researcher about whether the program or instruction has met its intended learning outcomes. In Wikipedia [2], assessment is characterized by its purpose, with educational assessment defined as the process of documenting knowledge, skills, attitudes, and beliefs. Evaluation, on the other hand, is defined as a systematic determination of a subject's merit, worth, and significance, using criteria governed by a set of standards. There are many other interpretations of these words among the educational entities involved in assessment.

There is a variety of assessment processes used in academic programs, both within and across institutions, mainly because of differences in goals and objectives. Within academia, however, the assessment processes used have two conflicting objectives: quality improvement and accountability [3]. Universities mostly emphasize quality improvement, which has been their main tool in competing against other institutions, while the government is mostly concerned with accountability, which guarantees a certain standard for the quality of the services provided to society by higher education institutions [4]. An educational institution may have many levels of assessment used for different accreditations at the program or university level. However, most institutions involved in an assessment process agree that the main objective of their assessment or evaluation is continuous improvement. In general, evaluation involves both quantitative and qualitative analysis, whereas assessment places more emphasis on the broader aspects of collecting, analyzing, and interpreting information associated with a particular outcome or objective [5].
Assessment processes are usually adjusted based on the outcome or objective of the assessment. Assessment tasks can be classified as diagnostic, formative, integrative, and summative [6]. There are two aspects of evaluating an assessment process. One is verification, which involves identifying a set of activities to ensure that the assessment process is correctly implemented. The other is validation, which refers to a set of tasks that ensure the assessment process is traceable to its objective. Evaluators' concern for a consumer-oriented, professionally developed set of guidelines or evaluation criteria led to the development of standards for judging an evaluation. A profession-wide Joint Committee on Standards for Educational Evaluation (JCSEE) published a set of thirty standards in 1981, refined in 2011, called the Standards for Evaluation of Educational Programs. The standards were approved as an American National Standard and were classified into different categories.

2 Program Evaluation Standard Statements

The evaluation standards have helped institutions explain the quality of assessment at every level of a university, from program to department to college to the university as a whole. Meta-assessment helps to identify a university's shortcomings or areas for improvement within its assessment practices and weighs the assessment process against the objective of assessment. Explicitly or implicitly, a primary role of an assessment office, and of the people involved, is to improve assessment practice. However, many assessment processes cannot provide data about the aggregate quality of assessment.

The program evaluation standards are classified into Utility, Feasibility, Propriety, Accuracy, and Evaluation Accountability as follows:

Utility Standards: The utility standards are intended to increase the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs.

Feasibility Standards: The feasibility standards are intended to increase evaluation effectiveness and efficiency.

Propriety Standards: The propriety standards support what is proper, fair, legal, right, and just in evaluations.

Accuracy Standards: The accuracy standards are intended to increase the dependability and truthfulness of evaluation representations, propositions, and findings, especially those that support interpretations and judgments about quality.

Evaluation Accountability Standards: The evaluation accountability standards encourage adequate documentation of evaluations and a meta-evaluative perspective focused on improvement and accountability for evaluation processes and products.

3 Meta-Assessment

Meta-assessment goes beyond assessment, as it examines the elements and tools of assessment as well as the necessary and sufficient conditions and the need for assessment. Formative, ongoing evaluation can be used to improve assessment by analyzing the quality of the process used and its relevance to the objective. In contrast, summative evaluation is used to rank the overall assessment results. There are many articles on formative versus summative assessment in general [7] and on the use and structure of a formative meta-assessment [8]. Meta-assessment relates to the question of process evaluation versus outcome evaluation.
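The five standard categories above can be sketched as a small checklist that an internal meta-assessment review might tag its findings against. This is an illustrative sketch only: the category names and one-line intents come from the standards listed above, while the `coverage` helper, the sample findings, and the idea of tagging findings by category are hypothetical, not part of the JCSEE standards or any institutional process.

```python
# Illustrative sketch: the five JCSEE standard categories as a checklist
# for tagging meta-assessment findings. Category names follow the
# standards; the findings and the coverage check are hypothetical.
JCSEE_CATEGORIES = {
    "Utility": "stakeholders find evaluation processes and products valuable",
    "Feasibility": "evaluation effectiveness and efficiency",
    "Propriety": "what is proper, fair, legal, right, and just",
    "Accuracy": "dependability and truthfulness of representations and findings",
    "Evaluation Accountability": "documentation and a meta-evaluative perspective",
}

def coverage(findings):
    """Map each category to whether any review finding was tagged with it.

    `findings` is a list of (category, note) pairs from a hypothetical
    internal review of an assessment report.
    """
    tagged = {cat: False for cat in JCSEE_CATEGORIES}
    for cat, _note in findings:
        if cat not in tagged:
            raise ValueError(f"unknown JCSEE category: {cat}")
        tagged[cat] = True
    return tagged

# Hypothetical findings from reviewing one program report.
review = [
    ("Utility", "report maps review questions to program items"),
    ("Accuracy", "conclusions are justified by the R411 data"),
]
gaps = [cat for cat, hit in coverage(review).items() if not hit]
print(gaps)  # categories the review left unaddressed
```

A review that produces findings only under Utility and Accuracy would report Feasibility, Propriety, and Evaluation Accountability as gaps, mirroring the observation later in this paper that the program report maps mainly to the Utility standards.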
In meta-assessment of programs, the distinction between ipsative and normative evaluation [11] arises: whether achievement of the objective (in most cases, improvement) is judged by whether the assessment results improve from one year to the next (ipsative) or by comparison against another program (normative).

There is a sequence of activities that should be followed in order to implement an assessment. The TLL (Teaching and Learning Lab) at MIT identified the general steps and summarized them as follows:

Identify Intended Learning Outcomes
Identify Research Questions
Develop Research Design
Select Sampling Frame
Select Appropriate Data Collection Methods
Construct Measurement Instruments
Select Appropriate Data Analysis Techniques
Consider Communication and Dissemination of Findings

Like any assessment process, meta-assessment must also follow a sequence of steps, with the main objective of advancing and improving the assessment process, which means closing the loop in the process. The results of the evaluation must be used to improve the assessment process; otherwise it would be a redundant activity. As an example, a rubric of this kind is used in meta-assessment for closing the loop at Ohio State University [12].

Meta-assessment involves evaluating the quality of the overall assessment process itself [9]. Ory suggested considering the following tips for conducting meta-assessment: At the outset, decide why your university should conduct meta-assessment. If it is to help with accreditation, then make sure that your evaluation criteria, which are usually articulated through a rubric, are synced with the accreditor's standards. If you would like the process to help faculty and staff learn about assessment, then make sure that this group participates in evaluating assessment reports. This participation should include training from an assessment expert. Last, help administration make decisions based on meta-assessment results.
For example, if the results indicate that a few programs are struggling with assessment, then the administrators could allocate money to the respective program coordinators for training [10]. However, the steps that can be used to assess most institutional and departmental assessment plans and programs, with the goal of improving students' academic achievement, should exemplify the following five principles [13]:

Mission and educational goals are reflected in the assessment process.
The conceptual framework is effective.
Institutional personnel are involved in the design of the assessment process.
Data are collected through multiple measures.

An assessment of assessment activities is established.

Furthermore, the evaluation of the assessment process in a university should be established by means of an office, a director, and a committee that review the entire assessment process, not just the results. The assessment office is responsible for evaluating and providing feedback on the assessment plan in terms of its design, utility, external reviews, standards, and the recommendations suggested by the data.

4 Assessment Report

The Academic Program Review Committee (APRC) is the university-level committee, based on Policy 6.41, that oversees the process of academic program reviews and monitors and improves the quality of degree programs offered at the institution. As such, the committee reviews programs based on the reports compiled by the college housing the programs. Each program report is provided by the department offering the program. This process is part of the cyclical academic program review and reporting. The review process also goes through external reviewers before presentation to the commissioner's office. As part of the review process for the programs, the academic program review committee assesses the report, as well as all related data, based on several categories, which are defined as:

Section 1: Purpose and R411 Data
1.A. Mission Statement (including program goals and objectives)
1.B. R411 Data Form

Section 2: Operations
2.A. Faculty Characteristics
2.B. Administrative Support
2.C. Program Resources
2.D. Student Development
2.E. Program Climate

Section 3: Instructional Programs (address each academic unit)
3.A. Curriculum
3.B. Student Learning Outcomes
3.C. Assessment
3.D. Special Considerations or Issues

The general report for a program at the institutional level should contain statements addressing the following items:

1.A.1 Mission Statement: A clear, published mission statement and goals reflect the program's purpose, characteristics, and expectations, give direction for its efforts, and derive from, and are generally understood by, its community.

1.A.2 Definition of Mission Fulfillment: The program defines mission fulfillment in the context of its purpose, characteristics, and expectations. Guided by that definition, it articulates program accomplishments or outcomes that represent an acceptable threshold or extent of mission fulfillment.

2.A.1 Qualification and Sufficiency of Faculty: Consistent with its mission, intended outcomes, services, and characteristics, the program employs a sufficient number of qualified faculty to achieve its educational objectives and to assure the integrity and continuity of its programs and services, wherever offered and however delivered.

2.A.2 Faculty Evaluation: Faculty are evaluated in a regular, systematic, substantive, and collegial manner based on clearly established criteria that reflect the duties, responsibilities, and authority of their position.

2.A.3 Professional Development for Faculty: The program provides faculty with appropriate opportunities and support for professional growth and development to enhance their effectiveness in fulfilling their roles, duties, and responsibilities.

2.B.1 Qualification and Sufficiency of Staff: Consistent with its mission, intended outcomes, services, and characteristics, the program employs a sufficient number of qualified administrative leaders and other personnel to achieve its educational objectives, assure the integrity and continuity of its programs and services, wherever offered and however delivered, and maintain its support and operations functions.
2.B.2 Staff Evaluation: Administrative leadership and other personnel are evaluated in a regular, systematic, substantive, and collegial manner based on clearly established criteria that reflect the duties, responsibilities, and authority of the position.

2.B.3 Professional Development for Staff: The program provides administrative leadership and other personnel with appropriate opportunities and support for professional growth and development to enhance their effectiveness in fulfilling their roles, duties, and responsibilities.

2.C.1 Financial Stability: The program demonstrates financial stability.

2.C.2 Resource Planning and Development: Resource planning and development include realistic budgeting, enrollment management, and responsible projections of grants, donations, and other non-tuition revenue sources.

2.C.3 Physical Infrastructure: Consistent with its mission, intended outcomes, and characteristics, the program's physical facilities and equipment are accessible, safe, secure, and sufficient in quantity and quality to ensure healthful learning and working environments.

2.C.4 Technological Infrastructure: Consistent with its mission and characteristics, the program has appropriate and adequate technology systems and infrastructure to support its management and operational functions, and its academic and support services, wherever offered and however delivered.

2.D.1 Student Development: Students receive effective and sufficient support and opportunities beyond the classroom in an effort to facilitate their academic success and to enhance their overall development.

2.E.1 Program Work Environment: The program has a positive and stimulating work environment in which mutual respect, shared responsibility, and equitable problem solving are demonstrated, and differences are utilized as strengths for advancing the program.

2.E.2 Program Contribution and Reputation: The program shares responsibility at the university level, is engaged with the community outside the institution, and is reputed to be functional, contributing, and talented.

3.A.1 Admissions and Graduation Requirements: Admission and graduation requirements are clearly defined and widely published.

3.A.2 Curriculum Content: The program provides a curriculum with appropriate content and rigor, consistent with its learning outcomes.

3.A.3 Curriculum Coherence: The curriculum demonstrates a coherent design with appropriate breadth, depth, sequencing of courses, and synthesis of learning.

3.B.1 Course and Program Learning Outcomes: Academic programs identify and publish expected course and program student learning outcomes that are clearly stated.

3.B.2 Alignment with Institutional Learning Outcomes: The course and program learning outcomes are aligned with the institutional student learning outcomes.

3.C.1 Assessment of Outcomes: The program documents, through an effective, regular, and comprehensive system of assessment, the achievement of its intended outcomes, and that students who complete its educational courses, programs, and degrees, wherever offered and however delivered, achieve the identified course and program learning outcomes.

3.C.2 Assessment of Internal and External Environment: The program regularly monitors its internal and external environments to determine how and to what degree changing circumstances may impact its mission and its ability to fulfill that mission.
3.C.3 Assessment of Assessment Processes: The program regularly reviews its assessment processes to ensure they appraise authentic achievements and yield meaningful results that lead to improvement.

3.C.4 Dissemination of Assessment Results: The program disseminates assessment results and conclusions concerning mission fulfillment to appropriate constituencies.

3.C.5 Use of Assessment Results: The program uses the results of its assessment to inform its planning and practices, leading to enhancement of the achievement of intended outcomes, including student learning achievements.

The feedback from the APRC on the above items is to identify and define a list of strengths, weaknesses, and recommendations for the programs.

5 Report Standards Meeting the JCSEE Standards

The items within section 3.C of the program review relate to the assessment of the program. The question, with respect to the JCSEE standards, is: does the evaluation of the assessment process in the programs by the APRC meet the JCSEE standards? We investigate each of the JCSEE Utility standards [14] against the policy standards.

Utility Standards: The utility standards focus primarily on the qualities that prepare stakeholders to use the processes, descriptions, findings, judgments, and recommendations in ways that best serve their needs. Within the Utility standards, the questions to answer, and the report items that address them, would be:

Does our program make a unique contribution to the community/agency/organization?
2.E.2 - Program Contribution and Reputation

To what extent is the program meeting its stated goals?
1.A.2 - Definition of Mission Fulfillment

How do we think our program should be working?
2.D.1 - Student Development
2.E.1 - Program Work Environment

What are the discrepancies between the intended program and the program-in-action?
3.C.1 - Assessment of Outcomes

Are there better ways to do what we're doing?
2.A.3 - Professional Development for Faculty
2.B.3 - Professional Development for Staff
2.D.1 - Student Development
2.E.1 - Program Work Environment
3.C.5 - Use of Assessment Results

What should be the future direction of our program?
1.A.2 - Definition of Mission Fulfillment

Are we doing a good job of reaching and servicing our potential user groups?
2.A.2 - Faculty Evaluation
2.B.2 - Staff Evaluation
2.D.1 - Student Development
3.C.1 - Assessment of Outcomes
3.C.2 - Assessment of Internal and External Environment
3.C.3 - Assessment of Assessment Processes

How can we adapt this program in light of budget cuts?
2.C.1 - Financial Stability
2.C.2 - Resource Planning and Development
2.C.3 - Physical Infrastructure
2.C.4 - Technological Infrastructure

Are we optimizing use of our human and fiscal resources?
2.A.1 - Qualification and Sufficiency of Faculty
2.B.1 - Qualification and Sufficiency of Staff
2.C.3 - Physical Infrastructure
2.C.4 - Technological Infrastructure

What does it mean to have a program that is adaptive and responsive?
2.C.3 - Physical Infrastructure
2.C.4 - Technological Infrastructure
2.E.1 - Program Work Environment

How can we do a better job of advocating for our program?
3.A.1 - Admissions and Graduation Requirements
3.A.2 - Curriculum Content
3.A.3 - Curriculum Coherence
3.B.1 - Course and Program Learning Outcomes
3.B.2 - Alignment with Institutional Learning Outcomes
2.A.3 - Professional Development for Faculty
2.B.3 - Professional Development for Staff
2.D.1 - Student Development
3.C.4 - Dissemination of Assessment Results
3.C.5 - Use of Assessment Results

The other standards are based mostly on the evaluation/assessment process itself. They may not be directly or indirectly associated with, or mentioned in, the program report document. The following are the Feasibility, Propriety, Accuracy, and Evaluation Accountability standards:

Feasibility Standards: The feasibility standards are intended to increase evaluation effectiveness and efficiency. They are based mostly on the way the evaluation is done and may not be directly or indirectly mentioned or documented.

F1 Project Management: Evaluations should use effective project management strategies.
F2 Practical Procedures: Evaluation procedures should be practical and responsive to the way the program operates.
F3 Contextual Viability: Evaluations should recognize, monitor, and balance the cultural and political interests and needs of individuals and groups.
F4 Resource Use: Evaluations should use resources effectively and efficiently.

Propriety Standards: The propriety standards support what is proper, fair, legal, right, and just in evaluations.
P1 Responsive and Inclusive Orientation: Evaluations should be responsive to stakeholders and their communities.
P2 Formal Agreements: Evaluation agreements should be negotiated to make obligations explicit and take into account the needs, expectations, and cultural contexts of clients and other stakeholders.
P3 Human Rights and Respect: Evaluations should be designed and conducted to protect human and legal rights and maintain the dignity of participants and other stakeholders.
P4 Clarity and Fairness: Evaluations should be understandable and fair in addressing stakeholder needs and purposes.
P5 Transparency and Disclosure: Evaluations should provide complete descriptions of findings, limitations, and conclusions to all stakeholders, unless doing so would violate legal and propriety obligations.
P6 Conflicts of Interests: Evaluations should openly and honestly identify and address real or perceived conflicts of interests that may compromise the evaluation.
P7 Fiscal Responsibility: Evaluations should account for all expended resources and comply with sound fiscal procedures and processes.

Accuracy Standards: The accuracy standards are intended to increase the dependability and truthfulness of evaluation representations, propositions, and findings, especially those that support interpretations and judgments about quality.

A1 Justified Conclusions and Decisions: Evaluation conclusions and decisions should be explicitly justified in the cultures and contexts where they have consequences.
A2 Valid Information: Evaluation information should serve the intended purposes and support valid interpretations.
A3 Reliable Information: Evaluation procedures should yield sufficiently dependable and consistent information for the intended uses.
A4 Explicit Program and Context Descriptions: Evaluations should document programs and their contexts with appropriate detail and scope for the evaluation purposes.
A5 Information Management: Evaluations should employ systematic information collection, review, verification, and storage methods.
A6 Sound Designs and Analyses: Evaluations should employ technically adequate designs and analyses that are appropriate for the evaluation purposes.
A7 Explicit Evaluation Reasoning: Evaluation reasoning leading from information and analyses to findings, interpretations, conclusions, and judgments should be clearly and completely documented.
A8 Communication and Reporting: Evaluation communications should have adequate scope and guard against misconceptions, biases, distortions, and errors.

Evaluation Accountability Standards: The evaluation accountability standards encourage adequate documentation of evaluations and a meta-evaluative perspective focused on improvement and accountability for evaluation processes and products.

E1 Evaluation Documentation: Evaluations should fully document their negotiated purposes and implemented designs, procedures, data, and outcomes.
E2 Internal Meta-evaluation: Evaluators should use these and other applicable standards to examine the accountability of the evaluation design, the procedures employed, the information collected, and the outcomes.
E3 External Meta-evaluation: Program evaluation sponsors, clients, evaluators, and other stakeholders should encourage the conduct of external meta-evaluations using these and other applicable standards.

6 Conclusions

The cyclic program evaluation based on the program report contains the data as well as the general concepts and details of the program assessment. The program report is the only basis of review, and therefore of evaluation of the program, by the APRC. The program report content was checked against a recognized standard for evaluation, that of the JCSEE. The program report maps to the JCSEE standards for evaluation with respect to utility, but in order to meet the other standards, the evaluators would need to be part of the program.

7 References

[1] "TLL," [Online].
[2] "Wikipedia," [Online].
[3] L. Weber, The Legitimacy of Quality Assurance in Higher Education, Council of Europe Publishing.
[4] B. Howard, [Online].
[5] L. Suskie, "Assessing student learning," [Online].
[6] G. T. Crisp, "Integrative Assessment: Reframing Assessment Practice for Current and Future Learning," Assessment & Evaluation in Higher Education, v37, 2012.
[7] "Carnegie Mellon University," [Online].
[8] M. Rodgers, "Improving Academic Program Assessment: A Mixed," Springer.
[9] J. Ory, "Meta-assessment: Evaluating assessment activities," Research in Higher Education.
[10] K. H. F. & M. R. Good, "The Surprisingly Useful Practice of Meta-Assessment," [Online].
[11] "Normative vs Ipsative," [Online].
[12] "Meta-assessment rubric for closing the loop," [Online].
[13] D. A. Walker, "A Model for Assessing Assessment Activities," College Student Journal.
[14] "TLL Teaching," [Online].