ARTICLES FOR UTM SENATE MEMBERS

Key Performance Indicators in Higher Education

TITLE AND SOURCE
1. Application Of The Balanced Scorecard In Higher Education Opportunities And Challenges. (ProQuest Database)
2. Qualitative Indicators For The Evaluation Of Universities Performance. (Science Direct Database)
3. Assessing Space Utilisation Relative To Key Performance Indicators How Well, Not How Much, Space Is Used. (EbscoHost Database)
4. Developing An Educational Performance Indicator For New Millennium Learners (Academic One File)

7th May 2014
SOURCE: PERPUSTAKAAN UTM

TITLE: Application Of The Balanced Scorecard In Higher Education Opportunities And Challenges.
SOURCE: ProQuest Database

Application of the Balanced Scorecard in Higher Education: Opportunities and Challenges
An evaluation of balanced scorecard implementation at the College of St. Scholastica, by Cindy Brown

Introduction

Cindy Brown, DNP, MPH, RD, RN is an assistant professor in the School of Nursing at the College of St. Scholastica in Duluth, Minnesota. Her professional expertise is in public health, nutrition, and chemical dependency. She also provides nursing services at a housing facility for residents with chronic alcoholism. She has an interest in performance improvement evaluation in higher education; this article is a culmination of her review and application conducted as part of her graduate course work for her doctorate of nursing practice degree.

In the 1990s a new way of evaluating performance improvement in the business industry was introduced. The balanced scorecard (BSC) emerged as a conceptual framework for organizations to use in translating their strategic objectives into a set of performance indicators. Rather than focusing on operational performance and the use of quantitative financial measures, the BSC approach links the organization's strategy to measurable goals and objectives in four perspectives: financial, customer, internal process, and learning and growth (Niven 2003). The purpose of this article is to evaluate the use of the BSC in the nonprofit sector, specifically at an institution of higher education. Case studies in higher education and personal perspectives are presented, and the opportunities for and challenges of implementing the BSC framework in higher education are discussed.

Balanced Scorecard Principles

Achievement of equilibrium is at the core of the BSC system. Balance must be attained among factors in three areas of performance measurement: financial and nonfinancial indicators, internal and external constituents, and lag and lead indicators.

Equilibrium must also be attained between financial and nonfinancial measures; nonfinancial measures drive the future performance of an organization and are therefore integral to its success. Further, the use of nonfinancial measures allows problems to be identified and resolved early, while they are still manageable (Gumbus 2005). The sometimes contradictory needs of internal constituents (employees and internal processes) and external stakeholders (funders, legislators, and customers) should be equally represented in the scorecard system (Niven 2003).

A key function of the BSC is its use as a performance measurement system. The scorecard enables organizations to measure performance through a variety of lead and lag indicators relating to finances, customers, internal processes, and growth and development (Niven 2003). According to Niven (2003), lag indicators are past performance indicators such as revenue or customer satisfaction, whereas lead indicators are the performance drivers that lead to the achievement of the lag indicators (p. 23).

The BSC's cascading process results in a consistent focus at all levels of the organization. The BSC framework provides tools to assist business organizations in mapping their performance improvement strategies and establishing connections throughout the various levels of the organization. Additionally, the framework identifies cause-and-effect relationships. The strategy map component of the BSC provides a graphical description of the organization's strategy, including the interrelationships of its elements. This map is considered the blueprint for the organizational plan (Lichtenberg 2008). Further, the BSC's cascading process gives the organization a tool for taking the scorecard down to departmental, unit, divisional, or individual measures of performance, resulting in a consistent focus at all levels of the organization. Ideally, these measures of performance at the various levels directly relate to the organizational strategy; if not, the organization is just benchmarking its metrics. The cascading of the scorecard also presents employees with a clear image of how their individual actions make a difference in relation to the organization's strategic objectives. The cascaded scorecard creates alignment among the performance measurement outcomes throughout the various levels of the organization (Lichtenberg 2009).

The BSC has evolved into a powerful communications tool and strategic management system for profit-based organizations. Harvard Business Review has recognized the framework as one of the 75 most influential ideas in the 21st century (Niven 2003). Its successful use in the for-profit arena has been clearly demonstrated, but does it have applicability in the nonprofit sector, specifically in institutions of higher education (IHEs)?
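To make the cascading idea concrete, here is a minimal sketch in Python of how an institution-level scorecard might cascade to unit scorecards while distinguishing lead from lag indicators. The class names, perspectives, and alignment check are illustrative assumptions, not constructs from the BSC literature cited above.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    kind: str          # "lead" drives future results; "lag" reports past results
    target: float

@dataclass
class Scorecard:
    owner: str
    indicators: dict = field(default_factory=dict)   # perspective -> [Indicator]
    children: list = field(default_factory=list)     # cascaded unit scorecards

    def add(self, perspective, indicator):
        self.indicators.setdefault(perspective, []).append(indicator)

    def unaligned_children(self):
        # Cascading expects each unit to carry measures under every
        # institutional perspective; otherwise it is "just benchmarking".
        return [c.owner for c in self.children
                if set(c.indicators) != set(self.indicators)]

college = Scorecard("College")
college.add("financial", Indicator("enrollment growth", "lag", 0.10))
college.add("learning and growth", Indicator("faculty development funding", "lead", 0.05))

nursing = Scorecard("Undergraduate Nursing")
nursing.add("financial", Indicator("online RN-to-BS enrollment growth", "lag", 0.10))
college.children.append(nursing)

print(college.unaligned_children())   # ['Undergraduate Nursing']: missing a perspective
```

The point of the check is the one Niven and Lichtenberg make in prose: a unit scorecard that does not cover every parent perspective is measuring in isolation rather than cascading.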
Use of Performance Indicators in Higher Education

Like other nonprofit organizations, IHEs are increasingly under pressure to provide external stakeholders such as communities, alumni, and prospective students with performance indicators that reflect the overall value and excellence of the institution. Historically, however, performance indicators in higher education have emphasized academic measures (Ruben 1999). Driven by external accountability and comparability issues, IHEs often focus on quantitative academic variables such as faculty demographics, enrollment, grade point average, retention rates, faculty-student ratios, standardized test scores, graduation rates, faculty teaching loads, and faculty scholarly activity (Ruben 1999). IHEs often assume that measuring external accountability through one-dimensional parameters such as college rankings or accrediting agency mandates will influence internally driven parameters related to institutional effectiveness; yet, unless these indicators are linked in a meaningful way to the drivers of institutional effectiveness, desired improvements in service, productivity, and impact are unlikely to occur (Stewart and Carpenter-Hubin). Additionally, some of these academic variables do not reflect the value that the IHE adds through the teaching and learning process but instead reflect students' existing capabilities (Ruben 1999). Another challenge in using traditional measures of excellence in higher education is their failure to capture a comprehensive image of the institution's current status (Ruben 1999; Stewart and Carpenter-Hubin). Further, the tendency for IHE performance indicators to focus on external accountability fails to account for the importance of internal assessment. Inclusion of internal assessment indicators broadens perspectives and, if done correctly, provides a connection between the institution's values and goals (Stewart and Carpenter-Hubin). Indicators used in traditional higher education performance measurement frameworks cannot be adequately translated into meaningful applications for the purpose of monitoring, planning strategically, or conducting comparative evaluations against standards of excellence among IHEs (Johnson and Seymour 1996, as cited in Ruben 1999).

These traditional performance indicators also lack the predictive power necessary to adequately alert IHEs to needed changes in a timely manner. In addition, traditional models for measuring higher education performance are constrained by departmental boundaries and are limited in their ability to link individual performance objectives and performance evaluation processes with institutional performance (Hafner 1998).

Traditional models are limited in their ability to link individual performance objectives with institutional performance.

Not as much emphasis is placed on other, less tangible indicators in higher education such as relevance, need, accessibility, value added, appreciation of diversity, student satisfaction levels, and motivation for lifelong learning; yet a common mission of IHEs is to foster lifelong learning. Many of these indicators, especially those related to student and faculty expectations and satisfaction levels, deserve greater attention; recruiting, retaining, and nurturing the best and brightest individuals is the primary goal of IHEs (Ruben 1999). Despite this, the five most common performance-based measures used in higher education are retention and graduation rates, faculty teaching load, licensure test scores, two- to four-year transfers, and use of technology/distance learning (Burke 1997). Absent from these common performance-based indicators are the measurement categories and specific metrics suggested by a BSC approach. IHEs need measurable indicators that reflect value and excellence achieved through investments in technology, innovation, students, faculty, and staff (Nefstead and Gillard 2006). Current ranking systems in higher education consider the multiple facets of higher education but do not offer guidance on the selection and organization of performance measures in terms of performance drivers or diagnostic indicators. Moreover, these ranking systems often do not relate performance indicators to the institution's mission or provide guidance toward continuous quality improvement (Beard 2009).

The Balanced Scorecard and Higher Education

While implementation of the BSC cannot guarantee a formula for accurate decision making, it does provide higher education with "an integrated perspective on goals, targets, and measures of progress" (Stewart and Carpenter-Hubin, p. 40). Some IHEs have taken the step of measuring performance indicators through the implementation of a BSC approach. These IHEs have identified the important characteristics of the scorecard: inclusion of a strategic plan; establishment of lag and lead performance indicators; improvement of efficiency, effectiveness, and overall quality; and inclusion of faculty and staff in the process (Rice and Taylor 2003). Successful implementation of the BSC framework in higher education relies on progression through a series of steps. The first step is clear delineation of the mission and vision, including translating this vision into specific strategies with a set of performance measures. The next step is establishing communication and linkage among schools, departments, student support services, institutional advancement, and other offices such as physical plant and maintenance services.
This step is important in establishing direct connections between individual unit goals and objectives and macro-level institutional goals. To increase the potential for success, it is imperative that administrators develop specific strategies to achieve goals and allocate sufficient resources for these strategies. Credible measures of progress toward these goals must also be instituted. The final step involves creating a feedback mechanism whereby the IHE can evaluate its overall performance using updated indicators and revise its strategies when needed (Stewart and Carpenter-Hubin).

Application of the Balanced Scorecard Framework in IHEs

There is a dearth of published literature regarding BSC applications in IHEs. Beard (2009) believes that this may be attributed to a lack of knowledge and awareness of the opportunities for BSC application rather than to incongruence between the BSC approach and higher education strategic planning. Scholey and Armitage (2006) suggest that as IHEs are expected to develop more innovative programs and also demonstrate greater fiscal and customer accountability, more will adopt the BSC framework.

Others contend that the lack of a detailed, systematic process for executing the BSC model has hindered its widespread use in IHEs; as a result, they have developed models for its application in higher education (Asan and Tanyas 2007; Karpagam and Suganthi 2010). Asan and Tanyas (2007) presented a methodology that integrates the BSC (a performance-based approach) with Hoshin Kanri (a process-based approach). Karpagam and Suganthi (2010) created a generic BSC framework to assist IHEs in assessing overall institutional performance through the use of identified higher education measurement criteria that lead to the establishment of benchmarks and quality improvement goals.

Despite the reluctance of IHEs to adopt standard innovations (Pineno 2008), there are some documented case studies in which the BSC approach has been successfully implemented in IHEs both nationally and internationally. From an international perspective, authors in both Australia and Canada have published case-study data on the use of the BSC approach (Cribb and Hogan 2003; Mikhail 2004). Additionally, colleges and university systems in the United States that have documented their use of the BSC include the University of California System, Fairfield University, University of Wisconsin-Stout, and the University of Minnesota College of Food, Agricultural and Natural Resource Sciences (Nefstead and Gillard 2006).

Bond University in Australia initiated a BSC approach for performance improvement. The library unit at Bond University used the university's vision, mission, strategies, and performance goals to develop and implement its own BSC. As part of the process, the library's senior and middle managers provided input on strategic objectives and proposed metrics. This process also included the linkage of measures through cause-and-effect relationships. An identified challenge involved narrowing the list of possible measures to the select few that would best capture the core of the desired strategy (Cribb and Hogan 2003). The library's objective for each of its perspectives closely aligned with the university's objectives. For example, under the customer perspective, the university defined customer satisfaction as an objective. The library then identified its own objectives focused on the assurance of customer satisfaction through a variety of strategies, including an emphasis on available resources and services as well as effective collaboration and communication with academic staff. In developing financial measures, Bond University initially decided to use library resources in relation to student numbers to measure the library's role in achieving cost-effectiveness. However, since the university had lower student enrollment and smaller economies of scale in comparison to other universities in Australia, this financial measure did not adequately reflect the relationships among library expenditures, usage, student educational achievement, and customer satisfaction. Therefore, additional measures were identified to more accurately support both the library's and the university's objectives. A key factor that contributed to the successful implementation of the BSC at Bond University was the involvement of staff in the process; staff involvement created alignment between the library's and the university's strategic objectives (Cribb and Hogan 2003).
Ontario Community College in Canada also shared its application of the BSC. The college substituted a strategic goals perspective for the financial perspective typically used in the BSC framework. This perspective was intended to identify "how we should appear to our shareholders in order to succeed" (Mikhail 2004, p. 9). The college identified the following strategic goals: (1) achieve academic/service excellence, (2) manage enrollment growth, (3) develop strategic partnerships, (4) achieve organizational success, and (5) manage cost-effectiveness and achieve a balanced budget (Mikhail 2004).

In the mid-1990s, the University of California System initiated a Partnership for Performance, a collaborative effort involving the development and implementation of a BSC framework throughout the nine distinctly different campuses. The system executed specific approaches that contributed to the overall success of this initiative. Senior administrative managers from each campus participated in the development of the overall vision and goals for business administration and operations. This administrative group also served as a steering committee over the life of the initiative by providing direction, prioritizing, solving problems, and encouraging and motivating their staff to participate. The five business areas on each campus (human resources, facilities management, environmental health and safety, information technology, and financial operations) piloted the development of common BSC measures. Creating a performance measurement culture was challenging, but part of the system's success in achieving this culture resulted from the creation of performance champions groups that met quarterly to exchange dialogue and information related to organizational performance measurement and management.

As a result of the initiative, two of the campuses adopted the BSC as a strategic planning tool for business administration at the university level (Hafner 1998).

The Fairfield University School of Business designed a phased approach for the implementation of the BSC framework at the academic unit level. The phases of the strategy revitalization process included building a foundation, developing the scorecard, compiling measures, analyzing results, recommending changes, revising measures, and implementing initiatives (McDevitt, Giapponi, and Solomon 2008). The university also defined its own perspectives that it felt were more appropriate to academics, including growth and development, scholarship and research, teaching and learning, service and outreach, and financial resources. In some instances, the university needed to adopt a benchmarking program; in others, it changed its metrics because information was not available or easily accessible. During the analysis phase, metrics were reevaluated and faculty members were assessed on their ability to meet goals. Faculty metrics included numbers and types of intellectual contributions, measured through "refereed publications and attendance at or sponsorship of pedagogical seminars" (McDevitt, Giapponi, and Solomon 2008, p. 45). Fairfield University's School of Business had difficulty in maintaining momentum throughout the implementation of the program. The institution found it challenging to develop effective measures to meet long-term qualitative goals and to create effective communication strategies across work groups, which led to delays in establishing consensus within and among the various groups. Key outcomes of this revitalization program included creating a communications network between faculty and staff, increasing faculty awareness of the institution's goals and objectives, and identifying and documenting needs for the purpose of determining budget and funding (McDevitt, Giapponi, and Solomon 2008).

The University of Wisconsin-Stout, another BSC implementer, was one of the first three organizations to receive the Baldrige education award (Karathanos and Karathanos 2005). The Malcolm Baldrige Education Criteria for Performance Excellence was designed to recognize integrated performance measurement in IHEs that includes (1) the delivery of ongoing value to stakeholders, (2) the improvement of the institution's overall effectiveness and capability, and (3) the promotion of organizational and individual learning. The Baldrige National Quality Program criteria focus on results and the creation of value. Its requirement of an institutional report with comprehensive measures comprising both leading and lagging performance indicators is consistent with the basic premise of the BSC framework (Beard 2009; Karathanos and Karathanos 2005).

Balanced Scorecard Application at a Select IHE

A BSC, a strategy map, and a departmental improvement plan were developed for a select IHE (figures 1, 2, and 3, respectively). This IHE is a small liberal arts college located in northern Minnesota, rich in its Benedictine heritage and Catholic tradition. Applying Stewart and Carpenter-Hubin's process, the IHE first identified strategies/objectives and performance measures that fit with the distinct mission and vision of the college.
The strategy map (figure 2) was an invaluable resource in expressing the cause-and-effect relationships among the various perspectives. The strategy map provided useful visual connections that illustrated the college's overall calculated planning process, which helped generate faculty and staff buy-in to the BSC approach. For example, faculty could see how their work in strengthening and creating new academic programs and program delivery systems affected other performance indicators such as improving student satisfaction and increasing enrollment growth in extended studies programs. Similarly, staff could gain an understanding of how their commitment to strengthening student support services and enhancing service-learning experiences affected the student experience and community partnerships. Once the overall strategies were identified in each of the four perspectives (financial, internal processes, students and community, and learning and growth), it was relatively easy to develop School of Nursing (SoN) and undergraduate nursing department-based objectives that fit with the institution's overall objectives/goals through the process of cascading, as illustrated by the BSC performance improvement plan (figure 3). As a nursing faculty member, it was beneficial to see how the undergraduate nursing department's objectives were linked to the overall college objectives. For example, strategies from the internal process dimension at the college level included strengthening the Benedictine Liberal Arts (BLA) program and enhancing student service-learning experiences. Measures related to achieving these strategies at the undergraduate nursing department level included expectations that five percent of nursing faculty would teach in the BLA program and that service-learning experiences would be offered each semester at all three levels of the nursing program: sophomore, junior, and senior. The cascading tool proved useful throughout the college, especially when used as a basis for developing and justifying departmental budgets. Budget allocation could be directly linked to the college's BSC strategic plan and subsequent SoN and departmental performance improvement plans.

Figure 1. Balanced Scorecard

Strategy A. Manage enrollment growth
Measure: Increase student enrollment in Adult Day and Evening Programs (ADEP), extended sites, and the three online initiatives.
Target: 1) Ten percent increase in student enrollment at each ADEP extended site: Brainerd, St. Cloud, St. Paul, and Rochester. 2) Twenty percent increase in the three online initiatives: RN to BS, HIIM Master's, and DPT programs.
Frequency: Monthly

Strategy B. Secure capital funds
Measure: Seek private donor funding through capital campaign.
Target: Obtain 10 percent of estimated $15 million for Science building expansion from private donations.
Frequency: Quarterly

Strategy C. Increase student satisfaction
Measure: Increase students' overall satisfaction with their college experience.
Target: 1) One hundred percent of students will report being satisfied or very satisfied with their overall experience at the college. 2) One hundred percent of students will report being satisfied or very satisfied with their preparation for future work.
Frequency: Annual graduation satisfaction survey

Strategy D. Strengthen the Benedictine Liberal Arts program
Measure: Develop a Benedictine Liberal Arts program that aligns itself with the mission and values of the college.
Target: 1) Implementation of the new Benedictine Liberal Arts program. 2) By Fall of 2011, 25 percent of the Benedictine Liberal Arts program will be available in an online format.
Frequency: Annual

Strategy E. Enhance service-learning experiences
Measure: Increase service-learning opportunities and student participation.
Target: 1) Each school (Education, Management, Business & Technology, Health Science, Sciences, Nursing, and Arts & Letters) will add at least two new service-learning experience options each semester. 2) Prior to graduation, 100 percent of students will participate in a service-learning experience.
Frequency: Semi-annually

Strategy F. Support faculty professional practice and research
Measure: Expand faculty development funding to support faculty advanced practice and research.
Target: 1) Five percent of the entire faculty each year will become eligible for associate professor status through achievement of a terminal degree and advanced research. 2) These faculty receive 50 percent funding, up to $10,000/year, for advanced education/research.
Frequency: Semi-annually

Strategy G. Strengthen information technology infrastructure
Measure: Provide a competitive technology infrastructure that supports the needs of students, faculty, and staff.
Target: 1) Each school within the college has its own designated academic IT development/support staff in proportion to its number of programs and departments. 2) IT help desk support is available 7 days per week. 3) One hundred percent of college classrooms and labs are evaluated for supportive technology needs.
Frequency: Quarterly

(The scorecard template also provides Findings and Trending columns for recording results.)
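Scorecard rows like those in figure 1 lend themselves to simple programmatic tracking. The sketch below, in Python, shows one way to flag measures that fall short of target; the finding values and the record layout are hypothetical, not data from the college.

```python
# Hypothetical scorecard rows modeled on figure 1; "finding" values are invented
# for illustration, since the Findings column is blank in the source.
records = [
    {"strategy": "A. Manage enrollment growth",
     "measure": "ADEP extended-site enrollment increase", "target": 0.10, "finding": 0.07},
    {"strategy": "B. Secure capital funds",
     "measure": "share of $15M raised from private donations", "target": 0.10, "finding": 0.12},
    {"strategy": "C. Increase student satisfaction",
     "measure": "students satisfied or very satisfied", "target": 1.00, "finding": 0.91},
]

for r in records:
    status = "on target" if r["finding"] >= r["target"] else "below target"
    print(f"{r['strategy']}: {r['measure']} = {r['finding']:.0%} "
          f"(target {r['target']:.0%}) -- {status}")
```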

Figure 2. Balanced Scorecard Strategy Map

Finance. As financial stakeholders, how do we intend to meet the mission and vision and foster the Benedictine values?
Objectives: Manage enrollment growth (A); Achieve financial stability with reserves; Increase financial resources; Improve operating efficiency; Secure capital funds (B).

Students and Community. What do the community and students expect, want, and need from the college?
Objectives: Advance student success and graduation rates; Improve student satisfaction (C). Students: Optimize the student learning experience. Community: Create community partnerships; Develop community leaders.

Internal Processes. As members of the staff, what do we need to do to meet the needs of our students and our community?
Objectives: Create distinctive programs; Strengthen the Benedictine liberal arts program (D); Increase learning delivery formats; Strengthen the student support network; Enhance service-learning experiences (E).

Learning and Growth. As an organization, what type of culture, skills, training, and technology are we going to develop to support our processes?
Objectives: Retain qualified faculty and staff; Support faculty professional practice and research (F); Strengthen information technology (IT) infrastructure (G); Enhance faculty and staff development resources; Build service-learning awareness and training.

Figure 3. Balanced Scorecard Performance Improvement Plan: Undergraduate Department of Nursing

FINANCE
A. Manage enrollment growth
Scorecard: Increase enrollment in ADEP extended sites and the three college online initiatives.
Department level (Undergraduate Nursing): Increase enrollment in the online RN to BS program by 10 percent.
Action plan (department initiatives): 1) Develop and revise the RN to BS program for rolling admission in an online format. 2) Provide the nursing faculty training necessary for successful implementation of online courses.

B. Secure capital funds
Scorecard: Private donor funding for the capital campaign for the Science building.
School of Nursing: Identify community benefactors in the health care field.
Action plan (School of Nursing initiative): The School of Nursing solicits identified benefactors for capital funds.

STUDENTS AND COMMUNITY
C. Improve student satisfaction
Scorecard: Increase students' overall satisfaction with their college experience.
Department level (Undergraduate Nursing): One hundred percent of nursing students report being satisfied with the availability and variety of course offerings in the program.
Action plan (department initiatives): 1) Evaluate nursing elective courses for the purpose of aligning offerings with students' needs. 2) Identify additional ways of meeting program requirements through a variety of course or service-learning opportunities.

INTERNAL PROCESSES
D. Strengthen the Benedictine Liberal Arts (BLA) program
Scorecard: Alignment of the BLA program with the mission, vision, and values of the college.
Department level (Undergraduate Nursing): Five percent of nursing faculty teach BLA courses.
Action plan (department initiatives): 1) Adjust nursing faculty workload to accommodate teaching of BLA courses. 2) Ensure nursing faculty representation and participation in the BLA program planning initiative.

E. Enhance student service-learning experiences
Scorecard: Increase service-learning opportunities and student participation.
Department level (Undergraduate Nursing): One hundred percent of nursing students participate in service-learning opportunities.
Action plan: 1) Embed service-learning opportunities in the undergraduate nursing program curriculum. 2) Make service-learning opportunities available each semester at each program level: sophomore, junior, and senior.

LEARNING AND GROWTH
F. Support faculty professional practice and research
Scorecard: Expand faculty development to support advanced practice and research.
Department level (Undergraduate Nursing): Nursing faculty funding sources are available for advancing education and research experience.
Action plan (department initiatives): 1) Obtain grant funding to support nursing faculty education and research. 2) Offer nursing faculty and student collaboration experiences to advance evidence-based nursing practice.

G. Strengthen information technology infrastructure
Scorecard: Provide a competitive technology infrastructure.
Department level: Integrate nursing informatics into the curriculum.
Action plan: 1) Advance the use of simulation and the academic electronic health record in the curriculum. 2) Increase student didactic and clinical experiences with nursing informatics.
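One practical use of a cascaded plan like figure 3 is an automated check that every department-level row traces back to a college strategy. A minimal sketch, assuming the strategy letters A-G from figures 1-3; the orphan row is invented to show the failure case.

```python
# College strategies A-G from figure 1; each department row references a parent letter.
college_strategies = set("ABCDEFG")

department_rows = [
    ("A", "Increase enrollment in the online RN to BS program by 10 percent"),
    ("D", "Five percent of nursing faculty teach BLA courses"),
    ("H", "A hypothetical objective with no parent strategy"),  # invented orphan
]

orphans = [row for letter, row in department_rows if letter not in college_strategies]
print(orphans)   # rows that would signal a break in the cascade
```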

This writer agrees with several authors' assessments that modification of the BSC is necessary for successful application in IHEs. As a nursing faculty member, it was difficult to identify objectives and develop specific performance measures from a financial perspective. As Mikhail (2004) suggests, it would have been useful to replace the financial perspective with a strategic goals perspective. These strategic goals could be established to support the college's financial priorities: to contain costs and to increase enrollment and revenue in extended campuses and online programs. Further, future IHE BSC implementations should consider including service and outreach perspectives (McDevitt, Giapponi, and Solomon 2008), especially since these are congruent with this college's mission and vision. This perspective could also be reasonably addressed by splitting the customer perspective into two parts, with students as one customer and the community as another, as demonstrated in the strategy map (figure 2).

Recommendations

It would be advantageous for this select liberal arts college in northern Minnesota to adopt the BSC framework as a communication tool and strategic management system. Prior to implementation, it is imperative to name organizational champions to lead the process, garner support, and gain the momentum necessary to execute the BSC framework. These champions should include not only administrators but also faculty and staff representatives from the various schools and administrative departments that support the college's academic programs. A valuable resource already exists in the college's strategic plan, which directly links to the mission and vision of this IHE. The champions could take the strategic goals found in the plan and articulate appropriate measures for their attainment through the development of a BSC that considers all four perspectives: financial, internal processes, students and community, and learning and growth.

The SoN could serve as the pilot for implementing the BSC approach; the SoN is in a position to greatly benefit from such an approach. Having grown in recent years to become one of the largest nursing programs in Minnesota, the school faces challenges in organizing its complex structure, which is composed of undergraduate programs taught in traditional, accelerated, and online formats and graduate programs that include baccalaureate and master's degree tracks to doctoral degrees and master's degree tracks to five different advanced nursing practice options. Historically, the SoN's goals were established without measurable outcomes and without direct linkages to departmental budgets. When these goals were revisited at the end of the academic year, faculty questioned how their achievement was being measured. While the SoN's goals do connect to the college's mission and vision, nursing faculty have requested that a long-term strategic plan be developed to manage the school's growth and assist in identifying priorities. Adopting the BSC would enable the nursing faculty to participate in the identification of SoN priorities and then, through the BSC improvement plan, develop school- and department-specific objectives with performance measure outcomes. The BSC improvement plan would also establish connections and improve communication among the four nursing departments and the school.
Clear alignment of performance measurement indicators with the institution's mission, values, and strategies is an imperative in the BSC approach. Further, nursing education accreditation standards, which have the purpose of ensuring the quality of baccalaureate and graduate nursing programs (Commission on Collegiate Nursing Education 2009), mandate that the SoN's mission, goals, and outcomes fit with the college's mission and vision. The BSC improvement plan can serve as the working document that illustrates the achievement of this important quality standard. After successful implementation of the BSC framework in the SoN, momentum could be maintained by disseminating the approach to the other schools until the entire college has adopted the system. Using the college's strategy map (figure 2), improvement plans could be developed by the various schools and departments, starting with school plans and cascading down to designated department-level plans. This systematic approach would help to minimize any difficulty in obtaining consensus in setting performance measures and would enhance communication within each school. Moreover, this process would delineate how each school supports the college's mission and values. IHE accrediting organizations require an institution to demonstrate the fulfillment of its mission through organizational structure and system processes. This quality indicator can be validated through the use of the BSC approach, which links the college's mission and values with specific performance measures in each of the four perspectives that then cascade down to school and departmental improvement plans.

A common issue in IHEs is a disconnect between faculty and administrators. Communication in IHEs often flows in a top-down, vertical way. Feeling some of these same sentiments, the faculty at this liberal arts college have asked for a shared-governance structure. In whole system shared governance (WSSG), the organizational structure is decentralized and accountability-based. WSSG "operates from its core where its mission, vision, and values should be most visible" (Crow and DeBourgh 2010, p. 216). Implementation of the BSC framework at this college would help build relationships among faculty, staff, and administrators, a start in the process of shared governance. The BSC framework serves to build alignment around key performance indicators.

This writer believes that this college has historically functioned in a reactive manner. With its emphasis on continuous improvement processes, the BSC would better position the college to operate in a proactive mode, since the scorecard's lead indicators link college strategies and mission with measurable outcomes that then drive future endeavors and initiatives. An efficient and effective way to gauge and/or predict upcoming trends and issues is through active engagement and alignment with a variety of stakeholders; this alignment and engagement is encouraged by the BSC approach. The college is also challenged by growth related to distant campuses and online formats, which may contribute to isolation and inconsistency in measuring and achieving quality performance standards; the BSC framework serves to foster connections and build alignment around key performance indicators.

Conclusion

The BSC framework is an excellent strategy-based management system that can be used in IHEs to assist them in clarifying their mission and vision and translating their vision into strategies. These strategies, in turn, can serve as the basis for developing operational objectives or actions with measurable indicators for the purpose of evaluating performance improvement and achieving success. In these tumultuous economic times, the use of the scorecard, with its inclusion of nonfinancial measures, paradoxically provides IHEs with a way to develop strategic priorities for resource allocation. Monitoring nonfinancial measures also affords IHEs the opportunity to consider student and stakeholder feedback, faculty and staff satisfaction, and the internal efficiency of the institution's processes.

The scorecard can serve as an effective communication tool for IHEs. The BSC approach enhances communication with internal and external stakeholders; it also provides a venue for identifying what really matters to these stakeholders. Improved communication flow builds trust within and outside the IHE. Since successful execution of the BSC requires engagement and cooperation among all levels in the institution, it promotes collaboration and alignment, which are key motivators in pursuing continuous quality improvement strategies (Rice and Taylor 2003). Further, the cascading of the BSC also creates alignment of performance measures. With the proliferation of IHE learning formats to include virtual sites and extended campuses, decentralization, isolation, and quality control can be problematic. The collaboration and alignment that drives the development of BSC performance measures fosters consistency and motivates action and change at the institutional level.

References
Asan, S. S., and M. Tanyas. 2007. Integrating Hoshin Kanri and the Balanced Scorecard for Strategic Management: The Case of Higher Education. Total Quality Management and Business Excellence 18 (9).

Beard, D. F. 2009. Successful Applications of the Balanced Scorecard in Higher Education. Journal of Education for Business 84 (5).

Burke, J. C. 1997. Performance-Funding Indicators: Concerns, Values, and Models for Two- and Four-Year Colleges and Universities. Albany, NY: Nelson A. Rockefeller Institute for Government.

Commission on Collegiate Nursing Education. 2009. Standards for Accreditation of Baccalaureate and Graduate Degree Nursing Programs. Washington, DC: Commission on Collegiate Nursing Education. Retrieved May 2, 2012, from the World Wide Web.

Cribb, G., and C. Hogan. 2003. Balanced Scorecard: Linking Strategic Planning to Measurement and Communication. Paper delivered at the 24th Annual IATUL Conference, Ankara, Turkey, June 2-5. Retrieved May 2, 2012, from the World Wide Web.

Crow, G., and G. A. DeBourgh. 2010. Combining Diffusion of Innovation, Complex Adaptive Healthcare Organizations, and Whole Systems Shared Governance: 21st Century Alchemy. In Innovation Leadership: Creating the Landscape of Health Care Learning, ed. T. Porter-O'Grady and K. Malloch. Boston: Jones & Bartlett Learning.

Gumbus, A. 2005. Introducing the Balanced Scorecard: Creating Metrics to Measure Performance. Journal of Management Education 29 (4).

Hafner, K. A. 1998. Partnership for Performance: The Balanced Scorecard Put to the Test at the University of California. Retrieved May 2, 2012, from the World Wide Web: /10-98-bal-scor-chapter2.pdf.

Karathanos, D., and P. Karathanos. 2005. Applying the Balanced Scorecard to Education. Journal of Education for Business 80 (4). Retrieved May 2, 2012, from the World Wide Web: /applying-bsc-in-education.pdf.

Karpagam, U., and L. Suganthi. 2010. A Strategic Framework for Managing Higher Educational Institutions. Advances in Management 3 (10).

Lichtenberg, T. 2008. Strategic Alignment: Using the Balanced Scorecard to Drive Performance. Presentation retrieved from course lecture notes.

Lichtenberg, T. 2009. Aligning Performance Through Cascading. Podcast recorded as part of the Oregon Office of Rural Health's Flex Webinar Learning Series, March 4. Retrieved May 2, 2012, from the World Wide Web.

McDevitt, R., C. Giapponi, and N. Solomon. 2008. Strategy Revitalization in Academe: A Balanced Scorecard Approach. International Journal of Educational Management 22 (1).

Mikhail, S. 2004. The Application of the Balanced Scorecard Framework to Institutions of Higher Education: Case Study of an Ontario Community College. Presentation given as part of a workshop. Retrieved May 2, 2012, from the World Wide Web.

Nefstead, W. E., and S. A. Gillard. 2006. Creating an Excel-Based Balanced Scorecard to Measure the Performance of Colleges of Agriculture. Paper presented at the American Agricultural Economics Association Annual Meeting, Long Beach, CA, July. Retrieved May 2, 2012, from the World Wide Web.

Niven, P. R. 2003. Balanced Scorecard Step-by-Step for Government and Nonprofit Agencies. Hoboken, NJ: John Wiley & Sons.

Pineno, C. J. 2008. Should Activity-Based Costing or the Balanced Scorecard Drive the University Strategy for Continuous Improvement? Proceedings of ASBBS 15 (1). Retrieved February 24, 2010, from the World Wide Web.

Rice, G. K., and D. C. Taylor. 2003. Continuous-Improvement Strategies in Higher Education: A Progress Report. Educause Center for Applied Research Research Bulletin, vol. 2003, no. 20. Retrieved May 2, 2012, from the World Wide Web.

Ruben, B. D. 1999. Toward a Balanced Scorecard for Higher Education: Rethinking the College and University Excellence Indicators Framework. Higher Education Forum 99 (2).

Scholey, C., and H. Armitage. 2006. Hands-on Scorecarding in the Higher Education Sector. Planning for Higher Education 35 (1).

Stewart, A. C., and J. Carpenter-Hubin. The Balanced Scorecard: Beyond Reports and Rankings. Planning for Higher Education 29 (2).

TITLE: Qualitative Indicators For The Evaluation Of Universities Performance.
SOURCE: Science Direct Database

Procedia - Social and Behavioral Sciences 2 (2010), WCES-2010

Qualitative indicators for the evaluation of universities' performance

Fereydoon Azma* (Islamic Azad University, Aliabad Katoul Branch, Aliabad Katoul, Golestan, Iran)

Received November 11, 2009; revised December 1, 2009; accepted January 22, 2010

Abstract

Since the world's universities are facing new anxieties in the present era and higher education has lost its former stability (Clark, 1998), it is important to recognize all the issues concerned and to evaluate university performance on the basis of carefully studied conceptual frameworks. The main purpose of this study is to find the key performance indicators (KPIs) and to present a conceptual framework for evaluating the performance of universities according to those KPIs. This pilot study was based on a combination of research methods (descriptive and deductive) together with a survey. Factor analysis and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy were used to analyze the data. Internal consistency was measured using SPSS and Cronbach's alpha. Based on the findings, the researcher presents 151 indicators and 3 conceptual frameworks. The first framework comprises 10 factors, including accommodation, research and scientific journals, processes, ICT, social and cultural services, faculty members, students, university staff other than faculty members, and financial affairs. For the frameworks, 9 and 10 factors were introduced respectively. The second framework was designed with 9 factors: area and ICT, communications, graduates, social and cultural services, periodicals and journals, employees, student affairs, financial affairs and processes, and faculty members. © 2010 Elsevier Ltd. All rights reserved.

Keywords: Key indicators; higher education; performance evaluation.

1. Introduction

Key performance indicators (KPIs) are the most comprehensive objectives in any organization, directing managers' activities toward making those objectives attainable. They are so important that the literature considers them significant for quality improvement and the attainment of objectives. There is a good deal of research on the role of key performance indicators, some of which follows. Hubert (1984) postulated that "without a general understanding of past events, there will be no permanent change and improvement." Hence, without the evaluation of performance based on key factors and indicators, there will be no permanent change and improvement in the quality of universities. Since a main function of management is to evaluate performance in order to apply and attain the organization's main strategies, performance evaluation is one of the indispensable needs of universities. The recognition of key performance indicators is one of the principal steps in performance evaluation.

*Corresponding author. E-mail address: teflsh@yahoo.com

Fiksel (2002) states that to choose the key indicators, one should first consider the needs of the organization and its beneficiaries. Then the key indicators and objectives are established and recognized. Finally, they should be used in a convenient model of performance evaluation. Key performance indicators are a guide for decision making in universities. For instance, an attempt has been made to rank universities by the US Educational Council. An attempt was also made in the early 1970s to design national indicators to compare universities, colleges and their programs; these include the "Ranking of Ph.D. programs", the "Carnegie Classification" and the "Gorman Classification of MA and Ph.D. programs." Since then, the term "performance indicators" has been introduced in the governmental sector of European higher education, which marked the advent of performance indicators worldwide (Borden et al., 1994). Answering the question "Are there any suitable quantitative indicators of performance?", Guy and Chris (2005) stated that "regarding the classification of the potential indicators, it is necessary to offer an introductory framework to investigate the rate and limit of internationalization". Studies carried out by the Department of Education, Science and Technology (2001) revealed that interest in performance indicators in the higher education system has been on the rise in some countries, Australia in particular. Peiro (2003) introduces two kinds of managerial qualities for managers: "technical skills" and "general skills". Sarmad (2004) states that area and facilities are also significant factors in the evaluation of university performance. Another factor related to universities is curriculum planning, which includes organizing a series of teaching and learning activities to create desired changes in learners' behavior and evaluating the rate of attainment of these objectives (Hsieh, 2003). In an investigation of some of the performance indicators in the UK higher education system, Birch (1977), analyzing data from Loughborough University, showed that KPIs play a very important role in the regular collection of data on education and on patterns in the use of the organization's internal resources. Concerning performance indicators and distance education providers, Shale and Comes (1998) concluded that "higher education systems around the globe are under the careful supervision of people and government, and distance education has been of great concern." Some other studies identify research and announcing systems as other important indicators in evaluating universities (Sydman, 2003). For instance, Fang (2004) proposed a suitable model of performance indicators for university electronic libraries to evaluate the electronic libraries in Taiwan. Hignez (1989) classified the key indicators for the evaluation of universities into three main categories: internal, external and applied. Cave et al. (1992) elaborated on the expansion of the indicators in practice using a critical analysis of higher education systems. There are also other studies on KPIs in universities. For example, applying KPIs in strategic decision making, Dolence (1994) suggests a 9-step method to define and follow KPIs in light of the strategic design process and describes its application and results at Benedictine University.
In an attempt to interpret KPIs, Collin (1990) also states that the current economic and political situation indicates that current policies in higher education are formed on the basis of limited data, which consequently leads to unsuitable decisions. Therefore, in order to make sound decisions, one requires other KPIs and variables. Thus, data is a very important factor in evaluating the performance of universities.

2. Research Method

The research method is correlational. After reviewing the related literature, KPIs and the variables regarding universities were identified. A researcher-made questionnaire was used to collect the data. SPSS software was used to calculate the internal consistency, and an alpha coefficient of 0.938 was measured. The research subjects are all managers of the Islamic Azad universities of the region (78 managers) and some of the faculty members (242). The KMO test and factor analysis were used to analyze the data.
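The internal-consistency check reported above is straightforward to reproduce. Below is a minimal sketch in Python using simulated questionnaire data rather than the study's actual responses; the sample size (320 = 78 + 242) and the 53 retained variables mirror the paper, but the generated scores are invented. (For the KMO test and rotated factor analysis, packages such as factor_analyzer offer ready-made routines.)

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Simulated data: 320 respondents (78 managers + 242 faculty), 53 variables.
# A shared latent trait plus noise yields high internal consistency.
rng = np.random.default_rng(0)
latent = rng.normal(size=(320, 1))
responses = latent + rng.normal(scale=0.6, size=(320, 53))

print(round(cronbach_alpha(responses), 3))  # high alpha, comparable to the reported 0.938
```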

3. Research Findings

The first research question was: "What are suitable KPIs for evaluating the performance of the Islamic Azad universities of the region?" From the related literature the factors were identified and then rated by the sample. After collecting and analyzing the data using statistical tests, 10 factors with 53 variables were validated for evaluating the universities. The data were calculated using the rotated component matrix, and 10 out of 15 factors were recognized and confirmed. Regarding the second question, "What conceptual framework can be proposed for evaluating the performance of the Islamic Azad universities of the region?", a conceptual framework was produced. A second framework with 9 factors, comprising "area and facilities, ICT, communication, graduates, social and cultural services, journal publication, non-faculty-member employees, financial affairs, faculty members", was also designed.

4. Discussion

The first factor, "area and facilities", validated with a 0.795 correlation coefficient, is, as Sarmad (2004) states, one of the factors influencing students' satisfaction. The area is of different types: cultural area, research area, lab area, office area, education area, sport area. The second factor, "research and scientific journals", validated at 0.826, reflects the related literature of several countries, such as the Netherlands (early 1980s), England (1979), Australia (1986), France and the USA (since 1910) and Germany (since 1976), covering items such as "holding scientific lectures, holding conferences, faculty members' attendance at conferences, faculty members' publications, expanding library resources and access to data banks." The third factor, "processes", validated at 0.879, has also been the focus of some research; Busin (2003), for instance, emphasized how managers can improve processes using KPIs. The fourth factor, "education and technology", validated at a 0.717 correlation coefficient, is in accord with research findings such as those of Birch (1977), Snow (1990) and Feng (2003). The fifth factor, "cultural and social services", validated at 0.885, concurs with the findings of others, for instance Vandan (2002), who believes that KPIs are helpful measurements for the evaluation of performance and planning for the development of cultural and social services. The sixth factor, "faculty members", and the eighth, "employees", are accentuated by Draker (2000), who asserted that the most valuable capital of any organization is its knowledge employees and their productivity, a very important factor in the evaluation of performance. The report of the Project Management Institute (2004) and the research carried out by Samadzade (2005) are likewise in accord with this assertion. The seventh and ninth factors, "students" and "graduates" respectively, concur with the studies of Hallahan & Cofman (1989) and Hilder (1990). The tenth factor, "financial affairs", is also a very significant factor in the evaluation of the performance of universities, as stated by Robinson (2005) and James (2005).

5. Limitations

Like any other research, this study faces some limitations, such as the shortage of higher education experts and the impracticality of regular management schemes in the university setting. The findings of this study, however, propose an up-to-date approach to the evaluation of the performance of universities.

6. Conclusion and Implications

As Clark (1998) stated, universities worldwide have been entering a period of limitless chaos that has been on the rise in recent years; thus, higher education has lost its stability. Knowing the universities' problems and evaluating their performance according to the proposed conceptual framework are therefore of significance.
The country's new economic, social and cultural planning has given universities a leading role in this effort.

References

Agut, S., Grau, R., & Peiro, J. M. (2003). Individual and conceptual influences on managerial competency needs. Journal of Management Development, 22.

Alemanni, M. (2007). Key performance indicators for PLM benefits evaluation: The Alcatel Alenia Space case study.

Borden, B. (1994). Using performance indicators to guide strategic decision making. San Francisco: Jossey-Bass.

Birch, D. W. (1977). A case study of some performance indicators in higher education in the United Kingdom.

Busin, J. (2003). Effective measurement & use of key performance indicators.

Carrin, G., & James, C. (2005). Key performance indicators for the implementation of social health insurance. Available at: www.ideas.repec.org

Cave, M. (1997). The use of performance indicators in higher education: A critical analysis of developing practice (2nd ed.). London: Jessica Kingsley.

Clark, B. R. (1998). Creating entrepreneurial universities. Oxford: Pergamon.

Dolence, M. G., & Norris, D. M. (1994). Using key performance indicators to drive strategic decision making.

Eynde, J. Vandan (2002). A case study of global performance indicators in crime prevention.

Green, K. (2001). Campus Computing 2000. Encino, CA: Campus Computing.

Hsieh, L. F. (2004). The performance indicator of university e-library in Taiwan.

Human Resources Development Working Group (2007). Developing KPIs & productivity, performance benchmarks for performance-based remuneration systems report.

Kells, H. R. (1990). The inadequacy of performance indicators for higher education: The need for a more comprehensive and development construct. Jossey-Bass.

Lee, Fang C. (2004). Environmental performance indicator environment.

McGuigan, Brendan (2003). What are key performance indicators.

Project Management Institute (2004). What else the PM knowledge competencies the individual and organization need to learn? Available at: www.google.com (2006/8/23)

Power, Collin (1990). Higher education indicators: An exercise in interpretation.

Rowe, Ken, & Lievesley, Denis (2007). Constructing and using educational performance indicators.

Rubinson, Joel, & Pfeiffer, Markus (2005). Brand key performance indicators as a force for brand equity management.

Sydman, Stuart K. (2003). Technology. In James J. Forest & Kevin Kinser (Eds.), Higher education in the United States: An encyclopedia. Santa Barbara: ABC-CLIO.

Strand, Steve (2004). Key performance indicators for primary school improvement. Available at: www.ema.sagepub.com

Shale, Doug, & Comes, Jean (1998). Performance indicators & university distance providers. Available at: www.cade.athabascau.ca

Ugwu, O., & Haupt, T. C. (2007). Key performance indicators & assessment methods for structure sustainability.

19 TITLE SOURCE Assessing Space Utilisation Relative To Key Performance Indicators How Well, Not How Much, Space Is Used. EbscoHost Database

20 Journal of Higher Education Policy and Management, Vol. 34, No. 5, October 2012. Assessing space utilisation relative to key performance indicators: how well, not how much, space is used. Simon Fleming (a)*, Nathan Apps (a), Paul Harbon (a) and Clive Baldock (a,b). (a) School of Physics, University of Sydney, Sydney, Australia; (b) Faculty of Science, Macquarie University, Sydney, Australia. (*Corresponding author: simon.fleming@sydney.edu.au.) © 2012 Association for Tertiary Education Management and the LH Martin Institute for Tertiary Education Leadership and Management.

Efficient use of resources, including space, is critical in academic departments. Traditional space auditing simply assesses occupancy levels. We present a novel approach which assesses not just the extent to which space is used, but also how well it is used. We link space use quantitatively to key performance indicators in a research-intensive university department. A clear picture of how well each room is being used to meet a range of performance goals is obtained. Performance criteria are developed based on the survey data distributions and benchmarked against performance expectations. These criteria are applied to analyse the survey data, readily identifying underperforming space. This provides management, through a transparent process, with evidence-based information with which to facilitate change management to improve performance. Keywords: KPIs; space usage; space utilisation

Introduction

Space is a valuable resource, and one that is often challenging to manage in an academic environment. There is a reasonable body of literature, going back to at least the 1970s, on academic space utilisation (Sharma, 1991), including substantial guidelines on space planning in tertiary education (Tertiary Education Facilities Management Association (TEFMA), 2009). However, these focus very largely on teaching space allocation and the challenges of improving utilisation of this space (Billing, 1995; TEFMA, 2009). The literature provides only basic information on staff space: indicative office areas (in m²) for different grades of staff and a recommendation of 16 m² of lab space for any grade of staff who needs a lab. A more sophisticated, and perhaps nuanced, approach is required, especially in a research-intensive university department, to optimise the utilisation of the majority of space, which is not used for direct teaching purposes. Pressure on space has a tendency to bring to the fore the intrinsic tension between departmental management, with ultimate responsibility for the allocation of resources, and the researchers who win major competitive grants, fellowships and research centres, all of which have substantial space implications. Where space utilisation is not close to capacity, a simple survey of occupancy will reveal unused space that can be assigned to new activities. However, where space utilisation is at, or very near, capacity, a more targeted approach may be appropriate. It is necessary to

21 assess not only how heavily space is used, but how well space is used. This requires linking space use to key performance indicators (KPIs). Whilst space use rationalisation in such a situation is always going to be challenging for School management, problems can potentially be mitigated by a transparent process that generates quantitative metrics derived from widely accepted and publicly available performance data. This paper describes the novel methodology of a strategic space audit and its application to a university school of physics with very strong research performance from both theoreticians and experimentalists; the latter with wide-ranging laboratory requirements.

Method

Typically, space audits take a snapshot of how space is utilised on a particular day, or over a relatively short period. Often this snapshot is generated by visiting all space and assessing its occupancy. This has several shortcomings:
- It only indicates how space is used, not how well it is used. Space may be in use, but that use may not be productive or may not be aligned to a school's KPIs.
- It assumes that occupancy and utilisation are equivalent. In research laboratories there can be important, but largely unattended, experiments which generate research outcomes and bring in funding.
- A short-duration snapshot could be very misleading in a school where activity, and hence space use, varies greatly depending on whether it is taken in or out of a teaching semester.
- It can also be time-consuming.

We have developed a sophisticated analysis to generate an annualised picture of how well space is used to meet a department's KPIs. Several sets of inputs were used for the analysis:
- A standardised list of rooms.
- A list of all members of the School: staff, students (postgraduate and honours only), honorary and visitors.
- A self-reported survey of space usage.
- A survey that links KPIs (primarily research publications) to space usage.
- A report of publications by members of the School.
- Online databases of journal impact factors and quality ratings.
- A report of research and teaching income generated by members of the School.

The two surveys were required specifically for this exercise, whilst the remaining inputs were from existing data.

Self-reported survey

The detailed analysis discussed later needed data on how every member of the School uses space. All members of the School were required to complete a web-based survey (SurveyMonkey, n.d.). Respondents reported which rooms they used and for what fraction of the time, on an annualised basis, i.e. averaging over the whole year to remove semester fluctuations, etc.
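As a concrete illustration of the survey output described below (a sparse person-by-room matrix of annualised usage fractions), the following minimal sketch assembles invented responses into that structure; all names, rooms and fractions are assumptions for illustration, not the School's actual data.

```python
# A sketch of how annualised survey responses might be assembled into the
# person-by-room usage matrix: rows = people, columns = rooms, values =
# fraction of the year each room was used.
import pandas as pd

# Each response: who, which room, and what fraction of the year it was used.
responses = pd.DataFrame(
    [("a.smith", "Rm 101", 0.60),
     ("a.smith", "Lab 3",  0.30),   # remaining 0.10 spent off-site
     ("b.jones", "Rm 101", 0.40),
     ("b.jones", "Rm 214", 0.55)],
    columns=["person", "room", "fraction"],
)

# Sparse usage matrix, zero where a person never reported using a room.
usage = responses.pivot_table(index="person", columns="room",
                              values="fraction", fill_value=0.0)
print(usage)
```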

22 Data on space with a small fractional usage were also requested, because such space, used by many people all for small time fractions, was likely to be important. Whilst members of the School work outside ordinary hours, in both School space and elsewhere, only data about where work was performed during normal working hours were gathered, as this is the resource that School management controls. Where a recent change of space use had occurred that was likely to be ongoing (such as relocation to another office space), the respondent was asked to project the current situation back over the whole reporting period. It was considered that, in the situation of a change of office, it was more useful to know that the current office was (or strictly, would have been) used Z per cent, rather than that the current office was used X per cent and some other office used Y per cent (where Z = X + Y). The survey gathered information on everyone with allocated space in the School:
- Academic and professional staff, including continuing, research-only, fixed-term and casual.
- Postgraduate students: PhD and Masters by research and coursework.
- Honours students: students with projects are assigned desks and frequently use laboratory space.
- Honorary staff: typically recent retirees who are based in the School and actively contribute to teaching and research. Those with no space allocated in the School were not surveyed.
- Visitors: given the transient nature of visitors, it was not possible to gather complete data.

The completion rate of 98 per cent was very satisfactory. The missing responses were mainly a few postgraduate coursework students whose space use was small and known. The principal output from the data was a matrix of 389 people by 231 rooms, sparsely populated with the percentage use of each room by each person.

KPI survey

The important and novel feature of this audit was the linking of space usage to KPIs. The core business of the University is teaching and research (Spence et al., 2010), thus the related KPIs were the primary consideration. Space for teaching was assessed directly from School management data. The KPI survey thus focussed primarily on quantifiable research performance. It also gathered data on other activities such as administration and community engagement. The Australian government collects information on the research performance of universities through the Higher Education Research Data Collection process (Higher Education Research Data Collection, n.d.). This provides a detailed data set which relates directly to the KPIs by which the performance of the University, the School and individuals is measured. The KPI survey was developed as a mechanism to link this research performance to the use of space. Publications were a major part of these data. Based on the lead author for the School (i.e. the first-named author from the School), the KPI survey determined where the work for each paper was performed. The information requested was the relative importance of (not necessarily the amount of time spent by the researchers in) different offices, laboratories, etc. in performing the research that resulted in the production of each paper. Contribution,

23 rather than time, was used since some rooms potentially housed equipment that required little human intervention but which was critical to research outcomes. Again, data were requested even on small fractional use, and intelligent adjustments were made where there had been recent changes in space use, especially to accommodate the delay between the work being performed and the paper being published, or where new staff had joined the School or students had left. On multiple-author papers where some authors were not members of the School, information was only sought on where the School authors performed that work.

Income

The data on the income to the School from the University contained useful performance information. From teaching income, data on the teaching performance of individuals were readily derived. These data included the number of undergraduate and fee-paying students taught, student load and postgraduate completions. Research income data provided information on money earned for the School by individuals in relation to grants, publications and contracts: not the research funds themselves, but associated support and operational funds. Whilst generating income in its own right is not a KPI, it assists the School to provide an environment that supports delivery of the KPIs. Thus aspects of income generation were also considered in the survey. The details of these income streams are beyond the scope of this paper; however, the critical point is that the contribution to each item could be readily calculated for each member of the School. With this information it was then possible to determine, from their use of space, how that space contributed to the generation of income. Different approaches were taken for teaching and research income (see the sketch after this paragraph). For example, teaching is largely performed in centrally timetabled teaching spaces. Whilst these are mainly within the School's buildings, they are not under the School's control, are not a cost to the School and are not directly linked to generation of this income (in principle, if they were unavailable the University would allocate space elsewhere on campus). Thus teaching income was associated with the office space used by the staff who generated the income, on the basis that this is where they prepare their lectures, etc. and perform marking. Thus an office associated with a teaching-intensive member of staff with little research output will be seen to be making a valuable contribution to the School. Whilst this appears in the form of income, it is being used as a proxy for one of the areas of core business of the School, i.e. teaching. Teaching income associated with each member of staff was distributed across the office space they stated they used in the self-reported survey, in the proportions they reported. If they reported that they spent A per cent of time in Office 1, B per cent of time in Office 2, C per cent of time in a research laboratory and D per cent of time outside the School (A + B + C + D = 1), then the allocation of teaching income would be A/(A + B) to Office 1 and B/(A + B) to Office 2. That is, all the teaching income was distributed across only that office space which was used for teaching-related activities in the School. Research income was allocated differently, because the research is clearly performed across a wide variety of space types: offices, laboratories, computing facilities, workshops, teleconference rooms, meeting rooms.
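The following minimal sketch makes the two normalisation rules concrete: teaching income is spread over office space only, while research income (whose allocation the text goes on to detail) is spread over all School space the person reported using. The fractions follow the A-D example above; the income figures and room names are invented.

```python
# A sketch of the two pro-rata allocation rules, under invented data.
def allocate(income, usage, eligible):
    """Split `income` across the eligible rooms, pro rata to reported usage."""
    total = sum(usage[r] for r in eligible)
    return {r: income * usage[r] / total for r in eligible}

usage = {"Office 1": 0.40, "Office 2": 0.20, "Lab": 0.30}  # D = 0.10 off-site
offices = ["Office 1", "Office 2"]

teaching = allocate(50_000, usage, offices)       # office space only
research = allocate(80_000, usage, list(usage))   # all School space used
print(teaching)  # {'Office 1': 33333.33..., 'Office 2': 16666.66...}
print(research)  # {'Office 1': 35555.55..., 'Office 2': 17777.77..., 'Lab': 26666.66...}
```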
Thus the research income was distributed across the total School space that the member of staff stated they used in the self-reported survey, in the proportions they reported. If they reported the same distribution as above, then the allocation of research income would be A/(A + B + C) to Office 1, B/(A + B + C)

24 to Office 2 and C/(A + B + C) to the laboratory. That is, all the research income was distributed across the space used for research-related activities in the School. The research income was also treated differently from the teaching income in one further, very important, manner. Whilst teaching income is almost invariably directly associated with the person who actually performs the related teaching work, and hence the space they use, this is often not the case for research income. The research income, as reported in the School management data, is associated with the Chief Investigators of the relevant grant or contract. Two issues arise from this when mapping this income onto the space that is used to perform the related work. The first issue is that some of the grants have multiple Chief Investigators, some of whom are associate members of staff, or past employees who no longer use space in the School. The work associated with this income is still performed in the School, so the income cannot be disregarded. In every such case there was at least one other Chief Investigator who remained a space-using member of the School. Thus the research income for a Chief Investigator who does not use space in the School was reallocated to their co-Chief Investigators who do use space in the School. The second, and more significant, issue is that in many cases the Chief Investigators employ research staff to perform the work. Thus the space required to perform the work that generates this income is not solely, or in many cases even largely, that used by the Chief Investigator. The Chief Investigators' research income was thus distributed pro rata across the Chief Investigators and their research staff, taking into account several factors, such as whether the Chief Investigator held a regular academic position or a research fellowship. Thus the distribution of research income across space was based not solely on the person who earned it, but on the staff performing the work. Postgraduate research students were considered in a different manner from undergraduate students, as they are already assessed in other aspects of the audit: research performance and occupancy. It is important to recognise that the School management relationship with research students is different from that with research staff. With research staff the goal is for them to generate the best research outcomes and maximise income from the resources (including space) they are assigned. Whilst research outcomes are also of great importance to research students, the primary obligation is to provide them with excellent research training, and hence the resources, including space, required for that. This obligation is already accounted for in the analysis of occupancy. Additionally, data on their space usage performance are included in research income and research outputs.

Results

This section presents the data in each category and discusses their interpretation. Analysis in terms of performance is presented in the next section.

Occupancy

Data on occupancy represent the most basic information provided, and are similar to that given by a traditional space audit. These data provide occupancy in terms of full-time-equivalent individuals averaged over the year and in terms of total users (i.e. a head count of people who have used the room).
We can compare this to the capacity (how many people the room is intended to accommodate) and the nominal allocation (how many people are actually allocated to the room).
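A short sketch of these basic occupancy metrics may help: from the person-by-room usage matrix one can derive annualised full-time-equivalent occupancy (column sums), head counts, and the comparisons against capacity and nominal allocation discussed next. All figures are invented for illustration.

```python
# A sketch of occupancy metrics from an invented usage matrix.
import pandas as pd

usage = pd.DataFrame({"Rm 101": [0.6, 0.4, 0.0],
                      "Rm 214": [0.0, 0.55, 0.8]},
                     index=["a.smith", "b.jones", "c.wu"])
capacity = pd.Series({"Rm 101": 2, "Rm 214": 3})    # intended occupants
allocated = pd.Series({"Rm 101": 2, "Rm 214": 2})   # nominal allocation

fte = usage.sum()              # full-time-equivalent occupancy per room
heads = (usage > 0).sum()      # head count of users per room
report = pd.DataFrame({"FTE": fte, "users": heads,
                       "nominal fill": allocated / capacity,
                       "actual fill": fte / capacity})
print(report)
```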

25 The ratio of total nominal allocation to total capacity gives the nominal filling of all space in the School. This is 84 per cent. The majority of this apparent under-filling is several multi-occupancy student offices. Note that 100 per cent is not the target, and 84 per cent is arguably close to the maximum practical capacity. The ratio of the total self-reported space usage to total capacity gives an actual filling fraction of 72 per cent, reflecting off-site work (home, library, overseas, etc.).

Teaching usage

Recall that teaching income was used as a proxy for undergraduate teaching, and postgraduate usage was included in other categories. There were thus no direct teaching data, although in the analysis section teaching space usage performance is assessed from these related metrics.

Research usage

The KPI survey provided a matrix of publications and rooms, showing the contribution of each room to each publication. Weightings were applied to the papers, in accordance with accepted metrics for impact and quality. The Australian government currently uses publication data to assess the impact of university research performance, and has also used it recently to assess quality. Impact is based on the well-established Impact Factor of the journal (Journal Citation Reports, n.d.) (e.g. Physical Review Letters = 7.328, Nature = 34.48). The contribution of each room to the creation of this Impact Factor was calculated as follows. School members and affiliates publish $n$ papers in total; each paper, $x$, has an Impact Factor, $IF_x$, and a total number of authors, $A_T$, of which $A_S$ are School authors. There are $m$ rooms that may contribute to each paper.

(1) In line with the Higher Education Research Data Collection assessment of impact, we took into account the number of authors by multiplying the journal Impact Factor by the ratio of School authors to total authors, giving the School contribution to the Impact Factor, $SIF_x$:

$$SIF_x = IF_x \frac{A_S}{A_T}$$

(2) The contribution from room $y$ to paper $x$ is (from the KPI survey data) $C_{yx}$, where:

$$\sum_{y=1}^{m} C_{yx} = 1$$

(3) The contribution of each room, $y$, to the School Impact Factor of each paper, $x$, can be calculated by multiplying $C_{yx}$ and $SIF_x$. Thus the total contribution of each room, $RIF_y$, to the total School publications Impact Factor is:

$$RIF_y = \sum_{x=1}^{n} SIF_x \, C_{yx}$$
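A minimal sketch of this calculation follows, with invented papers, rooms, areas and contribution fractions (the real inputs come from the KPI survey and journal databases). Dividing each room's total by its floor area yields the per-square-metre intensity measures introduced below.

```python
# A sketch of steps (1)-(3): scale each paper's Impact Factor by the
# School-author share, then credit rooms by their contribution fractions.
papers = [
    # (impact factor, School authors, total authors, {room: C_yx})
    (7.328, 2, 4, {"Lab 3": 0.7, "Rm 101": 0.3}),
    (3.500, 1, 1, {"Rm 101": 1.0}),
]

rif = {}                                  # room -> total Impact Factor credit
for if_x, a_s, a_t, contrib in papers:
    assert abs(sum(contrib.values()) - 1.0) < 1e-9   # fractions sum to 1
    sif_x = if_x * a_s / a_t              # School contribution to the IF
    for room, c_yx in contrib.items():
        rif[room] = rif.get(room, 0.0) + sif_x * c_yx

print(rif)  # {'Lab 3': 2.5648, 'Rm 101': 4.5992}

area = {"Lab 3": 40.0, "Rm 101": 18.0}                  # m², illustrative
intensity = {r: v / area[r] for r, v in rif.items()}    # per-m² measure, see below
print(intensity)
```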

26 This is then a useful metric of how well each room is being used to contribute to the research impact KPI. Quality is the other research performance dimension; whilst difficult to assess quantitatively, metrics are proposed from time to time. During 2010 and 2011 a quality metric was used as part of the Excellence in Research for Australia initiative (The Excellence in Research for Australia (ERA) Initiative, n.d.). We took a very similar approach to the analysis of space contribution to research quality. Part of the ERA initiative was that scholarly journals were ranked according to an assessment of whether they fell into one of four percentile groupings. Numerical values had to be assigned to the ratings: A* = 10 (the top five per cent of journals), A = 4 (the next 15 per cent), B = 1 (the next 30 per cent) and C = 0.5 (the remaining 50 per cent) were used, and analysis showed that the results were fairly insensitive to the actual values. Also, as the Excellence in Research for Australia assessment of quality did not take into account the number of authors, the scaling (step 1 above) was not performed. The contribution of each room to the School's research impact and quality was now known. However, it is more meaningful to make comparisons based on area (for example, a two-person office with two equally talented researchers should be twice as productive, but use approximately twice as much space). Scaling to the area of each room gives two novel but useful quantities, for which we coin the terms (Research) Impact Intensity and (Research) Quality Intensity. Figure 1 shows histograms of the Impact Intensity and Quality Intensity for the trial data set.

Figure 1. Histograms of the distribution of rooms' contributions to Impact Intensity and Quality Intensity (per m²).

To put Impact Intensity and Quality Intensity in context, consider the following. A paper published in a previously top-graded journal (A* in the Excellence in Research for Australia assessment) was ascribed a value of 10. From the Quality Intensity distribution it was clear that the activities in the highest-performing space result in one such paper for approximately every 2 m². A typical professorial (Level E) office is 18 m², thus this top level of Quality Intensity corresponds to such a person being an author on around nine such papers each year, which is a high level of research performance. It is worth also considering the Impact Intensity in this manner. The Impact Factor of the top journals starts at 3.5. The highest Impact Intensity is around 1 per m². Using the previous office size, that corresponds to a top researcher making an Impact Factor contribution of 18. That would require about five sole-authored, or ten jointly authored, papers in top journals. Again, this is a high level of research performance. These metrics thus appear to be useful for making comparisons. Other research output was reported in the survey, including refereed conference publications, books, book chapters, patents, public talks and media coverage. No reliable

27 approach was apparent for comparing or quantifying this output, so a simple sum of the number of instances was calculated and then distributed, using the above methodology, across the rooms that contributed. This information was treated with caution, as it aggregates disparate data. If a decision regarding space performance relied solely on this performance dimension, reference was made back to the source data to determine the actual quality of these outputs.

Income-generating usage

In the trial data there was a large spread of income generation, from 43 per cent of rooms not generating any income, to one room alone generating as much as $750,000. Again the data were scaled to room area, and are presented in a histogram in Figure 2, with a logarithmic scale because of the large spread.

Figure 2. Histogram of the distribution of income generation across rooms, scaled to area (room income in $ per m², logarithmic scale).

Analysis

The survey data permitted the performance of space to be analysed in several ways. Here we concentrated on identifying those rooms that were underperforming. To do this it was necessary to set criteria, or thresholds, which represent the level below which the space is considered to be clearly underperforming. The choice of these thresholds was not straightforward. We were not aware of any established benchmarks for performance. There is also an inherent prioritisation introduced by the relative levels that are chosen between the different performance dimensions. (In an ongoing implementation of this methodology for the purpose of improving the productivity of the use of space, it would be critical for School management to choose these performance criteria carefully, so that they drive behaviour in a desired manner.) For this survey we chose a set of criteria, and the basis of those choices is explained below.

Occupancy

Rooms were considered not to be underperforming in terms of occupancy if they met any of the following criteria:
- The ratio of the number of people allocated to the room to the nominal capacity of the room exceeded 75 per cent.
- The room was centrally allocated teaching space.

28 - The actual usage, in terms of self-reported occupation of the room, resulted in a filling density of less than 5 m² per (full-time-equivalent) individual. (Very few rooms met this criterion. Those that do warrant further investigation, as it may be that people who are assigned one room have chosen to occupy a different room.)
- The room was specially designated administration space. (Seventeen rooms were so designated. Of these, six relied solely on this criterion to avoid being deemed underperforming.)

A total of 57 per cent of rooms met one or more of the above criteria; that is, they were not underperforming on occupancy.

Research

To apply research performance criteria it is necessary to consider the different space usage of different types of research, particularly the significant laboratory needs of some types of experimental research. A direct comparison of experimental and theoretical research is problematic because, whilst theoretical research rarely uses laboratories and is typically performed exclusively in offices, nominally experimental research uses both laboratories and offices. We expected that the space performance of offices and laboratories would be substantially different: that laboratory-based research would need substantially more space and would thus result in a lower Impact or Quality Intensity than theoretical research. This bimodality was not apparent from the smooth distributions in Figure 1. However, the individual distributions may be sufficiently broad as to render any difference in the means indistinguishable. Figure 3 shows the histograms for Impact and Quality Intensity replotted to break out office and laboratory space.

Figure 3. Histograms of the distribution of contributions by rooms to Impact Intensity and Quality Intensity, comparing office and laboratory space.

There are clear differences between the distributions for office and laboratory space in both cases. This is also apparent in the means of these two metrics for these two types of space (Table 1). Perhaps coincidentally, the average office performance is 2.3 times better than the average laboratory performance for both the Quality and the Impact metrics (Table 1).

Table 1. Mean and relative performance of space by type of space (Quality and Impact): mean intensities for office and laboratory space, with an Office/Laboratory ratio of 2.3 on both metrics.

This confirms that, rather than attempting to separate theoretical and experimental research, a better approach is to apply different performance criteria to offices and laboratories. This is not a perfect solution since, for instance, one would expect a predominantly theoretical researcher's office to perform significantly better than the office of a predominantly experimental researcher whose performance was otherwise equivalent. The histograms in Figure 3 have features (especially the peak at 0.3 in office Impact Intensity) that appear

29 to support this. Nevertheless, this appears to be a better approach than applying the same criterion to both classes of space. A consistent and simple approach to selecting these criteria for research usage of these spaces was chosen: a threshold of half the mean. Applying these criteria to the data, the rooms that met each criterion could be simply determined; a summary, along with the threshold values, is presented in Table 2.

Table 2. Threshold performance criteria (half the mean) for Impact Intensity and Quality Intensity in office and laboratory space, and the percentage of space exceeding each threshold.

For other research-related output (assessed by simply counting the number of events or publications), again the threshold was set at half the mean. Of the 10 per cent of rooms that exceeded this threshold, for only three was this the sole criterion by which the room was not underperforming. That the majority also had adequate performance in some other respect is not surprising: space that results in good journal publications is also likely to result in good conference publications. The three rooms that relied solely on this criterion to avoid classification as underperforming were considered in more detail.

Income

In 2010 the University introduced a new economic model which included a levy on space within the institution. The implementation of this so-called space charge resulted in all university units, departments and schools being charged for the space that they occupied, room by room. For the purpose of the School space audit, rooms were considered not to be underperforming in relation to income if they generated at least ten times the designated space charge for that room. This factor of ten was chosen because space is not provided for the purpose of generating revenue; however, if it generates sufficient revenue (through appropriate and relevant activities), then it can assist in supporting core activities. Only 26 per cent of rooms met this demanding criterion.

Other

Six rooms, which met no other criteria of good performance, were manually assigned the status of acceptable use. These included toilets, plant rooms and a laboratory used to provide external services as part of a strategic national collaboration.
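The threshold tests just described can be sketched as follows. The intensity figures, incomes and space charges are invented; the half-mean rule is applied separately to office and laboratory space, and the income test uses the factor of ten, as in the text above.

```python
# A sketch of per-category underperformance tests, under invented data.
rooms = [
    {"id": "Rm 101", "type": "office", "impact": 0.45, "income": 9_000,  "charge": 700},
    {"id": "Rm 102", "type": "office", "impact": 0.05, "income": 1_200,  "charge": 700},
    {"id": "Lab 3",  "type": "lab",    "impact": 0.20, "income": 30_000, "charge": 2_000},
]

def half_mean(values):
    return 0.5 * sum(values) / len(values)

# Separate thresholds for office and laboratory space, as in the paper.
thresholds = {t: half_mean([r["impact"] for r in rooms if r["type"] == t])
              for t in {"office", "lab"}}

for r in rooms:
    r["impact_ok"] = r["impact"] >= thresholds[r["type"]]
    r["income_ok"] = r["income"] >= 10 * r["charge"]   # ten-times space charge
    print(r["id"], "impact_ok:", r["impact_ok"], "income_ok:", r["income_ok"])
```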

30 Aggregate performance

A simple means of aggregating performance was used: if a room exceeded the threshold in any category, it was deemed not to be underperforming. A more complex combined assessment might be more appropriate, but it would require a quantitative means of combining dissimilar items. The potential disadvantage of this simple aggregate appears at the threshold: a room that just exceeds the threshold in only one category and makes no contribution in any other category would be deemed adequate use of space, whereas a more valuable use of space by a room that was just sub-threshold in every category would be deemed underperforming. Nevertheless, a 'pass on one, pass on all' aggregation is straightforward and permits a much clearer understanding of the different ways in which space contributes. Applying this aggregated performance criterion, 81 per cent of rooms were performing at least adequately in at least one category. Some satisfy the criteria of multiple categories (e.g. good research output and good income), as shown in Figure 4.

Figure 4. Number of rooms meeting different numbers of performance criteria.

The analysis thus provided a list of 45 rooms that apparently make no, or very little, useful contribution to the School (noting that fairly demanding criteria were applied): 1290 m², or 20 per cent, of the School space. This appeared to represent a worthwhile opportunity for improvement. This list provided, by a transparent process, valuable quantitative data that could be used by School management as the basis for improving the productive and strategic use of space. Whilst a substantial fraction of space appeared to be underused, deeper analysis revealed that about a quarter of these rooms were anomalies, such as corridor space within a now-subdivided larger room, which should simply be excluded. It is important to recognise that space identified as underperforming is generally not making zero contribution. Typically it is making a contribution, but not a sufficient contribution in any category to meet the performance criteria. Nevertheless, the survey revealed the scope for freeing space by more efficient use. Whilst 15 per cent was genuinely under-used, the scope for improvement was in the region of 5-7 per cent, because the target should actually not be 100 per cent. Completely filling space reduces flexibility to zero and creates significant School management and administration problems.

Discussion

This novel survey has been valuable in several ways. In particular, it provided management data supported by evidence, which could be used both internally and externally to
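The 'pass on one, pass on all' rule reduces to a single any() over the per-category flags. In the sketch below the flags and areas are invented (in practice they would come from the threshold tests above); it also totals the area of rooms that fail every test.

```python
# A sketch of the aggregation: underperforming = clears no category at all.
rooms = {
    "Rm 101": {"area": 18.0, "flags": {"occupancy": True,  "impact": True,  "income": False}},
    "Rm 102": {"area": 12.0, "flags": {"occupancy": False, "impact": False, "income": False}},
    "Lab 3":  {"area": 40.0, "flags": {"occupancy": True,  "impact": False, "income": True}},
}

underperforming = {rid: r for rid, r in rooms.items()
                   if not any(r["flags"].values())}
freed = sum(r["area"] for r in underperforming.values())
print(sorted(underperforming), f"{freed:.0f} m2 potentially recoverable")
```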

31 the School. Furthermore, it raised awareness across the School of the need for efficient use of space. From a management perspective focused on internal issues, it has been important. The School's specific circumstances are those of an immediate space crisis, with a need to accommodate an imminent, significant influx of personnel due to recent growth. Whilst most research areas are experiencing significant growth, there is a perception that some areas are in decline. Making more efficient use of space clearly entails rebalancing allocations. This sort of change is very difficult, especially in a collegial university environment. This survey provides solid evidence of how well space is used, based on data largely supplied by the users. It is worth noting that the survey generally accorded with School management perceptions of use. However, in a few cases there were important surprises regarding the value of space to the School. This information allowed management to avoid changes that would have had a significant hidden deleterious impact on performance. The survey permitted a selective approach to optimisation, based on actual performance. Turning to the management benefit from a perspective external to the School, this survey provided evidence for making the case to senior University management that additional space is genuinely needed: not that current space is just full, but that it is full and being used well. This survey provides solid evidence, based on a transparent process and on information derived from central management data, as to how much and how well the space is being used. As well as providing the data and evidence, it also demonstrates a professional and impartial approach by School management to space allocation. This process took 400 person-hours, of which perhaps half was the initial effort of developing the process. The remaining time was split between running the process and the distributed effort of those completing the two surveys. The distributed effort across the School had two main components. There was a small effort required from everyone in completing the online survey. This was designed to take only a few minutes and, for the large majority with simple work patterns, this was probably correct. Nevertheless, with several hundred people surveyed, the aggregate effort was non-trivial. The publications/outputs survey took much more effort from each of a much smaller number of people. Regarding the use of space in the specific example used in this study, there were a number of clear conclusions. The School's space was very heavily used. The basic level of occupancy, at 84 per cent, approaches the maximum practical level. However, maximising output, or even output per square metre, does not correspond simply to maximising occupancy. Importantly, this survey also assessed how well, or productively, space is being used. The survey provided strong evidence that, to a very great extent, space was also used very effectively. The survey provided a list of rooms, comprising 20 per cent of usable space, that appear to be underperforming. School management is currently exploring the better use of these rooms with the people to whom they are allocated. It is anticipated that this will result in about half being identified as genuinely poor use of space, and that perhaps in total 5 per cent of space can be freed up.
Conclusions This study has demonstrated that it is entirely practical, and not particularly onerous, to assess not just how much space is used, but how well it is used. The evidence-based, quantitative metrics for space performance provide a valuable tool for School management to improve the use of space in delivering against KPIs.

32 New metrics for space use, particularly in an academic environment, have been developed. Of particular note are the research Impact Intensity and Quality Intensity, which permit ready comparison of the research performance of academic space. Additionally, a set of performance criteria, or thresholds, for the analysis of these novel metrics has been developed. This was necessary because this audit broke new ground and there were no appropriate existing benchmarks. These metrics and associated criteria allowed assessment of relative performance within the scope of this survey. Importantly, however, if this approach and these metrics are applied by others, it will be possible to make comparisons of relative performance between similar organisations.

Acknowledgements

We are grateful to Ms Marlyn Horgan, Ms Chindy Praseuthsouk and all other members of the School of Physics; to Professor Trevor Hambley for useful discussions; and to the Research Office, University of Sydney.

References

Billing, D. (1995). Accommodation planning and learning environments in a large, dispersed university. Higher Education Quarterly, 49. doi: /j tb01663.x
Higher Education Research Data Collection (n.d.). Retrieved from Research/ResearchBlockGrants/Pages/HigherEducationResearchDataCollection.aspx
Journal Citation Reports (n.d.). Retrieved from resources.do?highlighted_tab=additional_resources&product=ua
Sharma, R. (1991). Space planning and utilisation in tertiary education. Retrieved from aair.org.au/app/webroot/media/pdf/aair%20fora/forum1991/sharma.pdf
Spence, M., Garton, S., Armstrong, D., Brewer, A., Hearn, J., & Trewhella, J. (2010). The University of Sydney Green Paper. Sydney, Australia: The University of Sydney.
SurveyMonkey (n.d.). Retrieved from
TEFMA (2009). Space Planning Guidelines (3rd ed.). Tertiary Education Facilities Management Association (TEFMA) Incorporated.
The Excellence in Research for Australia (ERA) Initiative (n.d.). Retrieved from au/era

33 TITLE SOURCE Developing An Educational Performance Indicator For New Millennium Learners Academic One File
