Managing better: Measuring institutional health and effectiveness in vocational education and training

By Graham Maxwell, Peter Noonan, Ian Hardy and Mark Bahr
Research report, 7 December 2004
ISBN 1 920896 08 2

Description

Vocational education and training (VET) policy is increasingly focused on the importance of quality in each VET institution's capacity to deliver effective programs. This report addresses institution-level monitoring and evaluation of performance and provides a comprehensive model which institutes can use for this purpose. The model draws on background theory and practice and identifies a range of relevant indices across three dimensions: inputs, processes, and outputs/outcomes. The results are an important first step towards an improved, empirically based understanding of the factors that contribute to successful outcomes from VET providers.

Summary

About the research

  • Current national system-level performance measures are insufficient for producing and maintaining quality vocational education and training (VET) institutions.

  • The model of performance measures suggested for VET institutions identifies relevant indices across the three dimensions of inputs (institutional resources, staff and student characteristics), processes (for example, quality of decision-making, institutional climate and culture) and outputs/outcomes (for example, student and employer satisfaction). Because processes link and mediate inputs and outcomes, the model gives priority to measures relating to processes.

  • Building institutional capacity for self-monitoring and self-evaluation is seen as the next significant challenge in improving the effectiveness of the VET system.

Executive summary

This report addresses institution-level monitoring and evaluation, which is seen as the next challenge for improving the effectiveness of the vocational education and training (VET) system. The report argues that monitoring and evaluation at the institution level are best implemented through self-monitoring and self-evaluation. Such a process requires attention to appropriate indices or measured indicators of trends in institutional performance. However, these indices are not necessarily the same as those needed for system-level monitoring and evaluation.

Current accountability requirements appear insufficient for producing and maintaining quality institutions, since they are designed to serve the needs of national system reporting and accountability rather than those of individual institutions. They are not especially appropriate, and certainly not sufficient, for serving institutions' need to manage better.

A new system is needed whereby institutions assume responsibility for their own improvement on the basis of empirical evidence of their own health and effectiveness. It is important that institutions adopt their own methods of data collection tailored to their own goals, context, characteristics and planning strategies. This requires attention to both the constraints on institutional decision-making and the corresponding opportunities for action. Technical and further education (TAFE) institutions do not have the same level of autonomy as universities, and the level of autonomy varies between jurisdictions. Nevertheless, in all cases, there are many institutional practices that are controlled locally.

The basis for developing institutional capacity for self-monitoring and self-evaluation already exists in the Australian Quality Training Framework. The framework establishes standards for judging the quality of an institution's delivery and assessment systems, as well as its client services and administrative systems, and institutions must comply with these standards. One aspect of the standards is the requirement to conduct an annual audit. Institutions should use this requirement to implement comprehensive monitoring and evaluation of institutional components and programs.

The existing national data collection is too narrow and too cumbersome to be of much benefit in institutional self-monitoring and self-evaluation. Some of the data may be relevant and potentially useful, particularly in the form of benchmarks, but issues of accessibility and immediacy need to be overcome before these data can be readily usable for institution-level monitoring and evaluation. Data need to be collected and used within short timeframes to ensure their currency and relevance. Further, much of the data relevant to state-wide or nation-wide systems are not directly applicable to individual institutions, or are too sparse for reliable conclusions to be drawn internally. Institution-level monitoring and evaluation require reliable data on institutional sub-systems, departments, units and programs. In addition, monitoring and evaluation at the institution level need to attend to issues relevant to local communities and industries, and this requires tailoring questions and methods to fit the context.

Institutions need to build their own capacity for data collection and data analysis. This report offers a model of performance measures which could be used for this purpose. The model draws on background theory and practice reviewed in the report and identifies a range of relevant indices across three dimensions: inputs, processes, and outputs/outcomes. These indices will need to be supported by relevant data drawn from existing collections (for example, student, staffing and finance data), supplemented by additional measures, particularly those relating to processes (for example, quality of decision-making, institutional climate and culture) and perceptions of outcomes (for example, student and employer satisfaction).

The model gives greater priority to processes (that is, program and procedural characteristics) because they link inputs to outcomes (student achievements), and it identifies areas where prioritised and targeted actions can be taken to improve the effectiveness of the VET institution. The proposed indices can be selectively used and supplemented to examine institutional and program performance at the local level. Investing resources in institutional self-monitoring and self-evaluation systems and processes increases the capacity of the institution to base planning decisions on deliberate and relevant empirical information about local institutional performance.

National bodies such as the Australian National Training Authority (ANTA) and the National Centre for Vocational Education Research (NCVER) should play a key role in assisting institutions to build their capacity for self-monitoring and self-evaluation. These organisations are ideally placed to develop and provide guidelines, procedures, techniques, resources and advice, and to disseminate examples of good practice. Enhancing institutional capacity will require investment in development and implementation at the institutional level. Institutions will also need some assistance in determining how to develop and implement appropriate strategies; new strategies should ideally build on existing practice and emerge through the encouragement of new ideas. To build on current practice, consideration should be given to funding case studies of interesting institutional practice. To promote new ideas, consideration should be given to funding innovative projects with the potential to provide exemplary models for other institutions and a basis for benchmarking between institutions.

Download

TITLE     FORMAT    SIZE
nr2011    .pdf      674.1 KB