Description
This study considers the status of validity in the context of the assessment of VET in Australia. The project has involved reviewing the literature, reporting the outcomes of case studies, presenting the key findings and developing a diagnostic tool to guide assessors.
Summary
Executive summary
This study considers the status of validity in the context of vocational education and training (VET) in Australia. This has involved reviewing the literature, reporting on case studies, presenting key findings and recommending a tool to guide assessors.
The study reports that while validity is an issue that has been considered in previous Australian studies, the approach used has been based upon early models developed in the United States that are now believed to be inadequate. The study therefore turns to more recent approaches to validity to examine their suitability for the Australian environment of competency-based assessment.
One of these approaches takes validity as a unitary entity which can nevertheless be viewed in a variety of ways: it presents various aspects or facets to which users of assessment outcomes might appeal in seeking to establish the soundness (or otherwise) of the interpretations they make. The focus thus shifts from the validity of an individual assessment instrument to the broader issue of the validity of the interpretation and use of an assessment outcome. The eight facets of validity proposed by Nitko (1996) are the focus of the study.
This approach to validity is examined in the context of the following questions:
- Are these proposed facets of validity meaningful for the competency-based assessment approach now used in Australia's vocational education and training system?
- To what extent are these expanded notions of validity already familiar to a sample of practitioners?
- If the notions are familiar, are there clear benefits for assessment to be derived from accepting and using this approach to validity?
- Can a 'diagnostic' tool be devised to facilitate use of this approach to validity?
Two groups were identified to be included in the study: organisations that both assess and train assessors at certificate and diploma level, and companies within the retail industry. Six case studies were carried out.
Each case study involved:
- matching the assessment procedures used against industry standards
- studying the organisations' assessment guidelines and instruments and judging them against published national guidelines and assessment component requirements
- interviewing representatives of participating organisations
- preparing confidential evaluations of assessment procedures (returned to participants and not included in the report)
- preparing final summaries of each organisation's input (approved by the participating organisations)
The eight-facet approach to validity leads to the identification of eight different kinds of evidence. The report then shows how each of the following kinds of evidence may, in principle, be appropriate within a competency-based approach to assessment:
- representativeness of content
- relationships among the assessment tasks (internal coherence)
- relationships of assessment results to other variables (external coherence)
- reliability of assessors and of the assessment over time (stability)
- coverage of thinking skills and processes (substantive evidence)
- cost, efficiency and practicality features
- generalisability for different types of applicant and under different conditions
- value of intended and unintended consequences (consequential evidence)
The next step was to develop an interview schedule to explore perceptions of assessment as they presently existed within the two groups being studied. The interview schedule was designed to be carried out in stages, and in association with a consideration of the actual assessment procedures and instruments used in each organisation. The interview schedule focussed on gathering information related to the current use of evidence of the eight types identified by Nitko.
For example, in dealing with the first kind of evidence (representativeness of content), the following questions were asked:
- Do the assessments used in your company/organisation cover all or only some of the content of your training program?
- Can you say with confidence that your assessments consistently reflect work practice?
- Do you think any additional assessment is necessary? (If so, what would this be?)
- Could any assessments or parts of assessments be omitted?
- Does the emphasis or balance in the assessment match the emphasis on the job?
- Is the assessment 'up to date'?
- Are the assessment tasks worthwhile in themselves? (For example, do they contribute to learning?)
That these questions are not Dorothy Dixers (which would make the exercise useless) is demonstrated by the fact that only the last of these questions led to the same answer in all six case studies.
A table sets out the details of each case study in summary, organised in terms of the eight types of validity evidence.
The study showed that the eight types of evidence suggested by Nitko were, with a single exception, regarded as important by the participants. The exception, consequential evidence, relates, in part, to the impact of assessment on 'third parties' and would therefore be expected to be less apparent to the organisations being studied.
Based on the information gathered in the case studies, a short self-administered questionnaire with supporting advice has been prepared for assessors, and appears as appendix A in the report. This diagnostic tool includes practical illustrations that emphasise the importance of the various kinds of validity evidence.
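The idea of a self-administered check across the eight evidence types can be sketched in code. The sketch below is purely illustrative: the facet names come from the report's list, but the function, the yes/no response format, and the flagging logic are assumptions for illustration, not the actual tool in appendix A.

```python
# Illustrative sketch only: the facet names are taken from the report's
# eight kinds of validity evidence; the questionnaire format (simple
# True/False responses) and the helper function are hypothetical.

FACETS = [
    "representativeness of content",
    "internal coherence",
    "external coherence",
    "stability",
    "substantive evidence",
    "cost, efficiency and practicality",
    "generalisability",
    "consequential evidence",
]

def unaddressed_facets(responses):
    """Return the facets for which the assessor could not cite evidence.

    `responses` maps a facet name to True (evidence gathered) or
    False (no evidence yet); missing facets count as unaddressed.
    """
    return [facet for facet in FACETS if not responses.get(facet, False)]

# Example: an assessor who has so far checked only content coverage
# and assessor reliability would be prompted on the remaining six facets.
gaps = unaddressed_facets({
    "representativeness of content": True,
    "stability": True,
})
```

A tool of this kind does not validate an assessment by itself; it simply surfaces the kinds of evidence an assessor has not yet considered, which mirrors the diagnostic intent described above.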
The study presents firm evidence that the approach to validity indicated by Nitko can be fruitful in improving competency-based assessment. The report shows that competency-based assessment is not self-validating, and that this is already recognised by industry.
Several other findings are reported:
- There was evidence that some practitioners were reluctant to allow any scrutiny of their assessment practices. Although this was limited in extent, it was nevertheless a cause for concern.
- Because this study required examination of the assessment records kept by participants, it was notable that in some cases the storage of, and access to, records were so poorly developed that the auditability of the records was in doubt.
- Participants used ideas of 'recognition of prior learning' and 'recognition of current competencies' in various ways, indicating that there was no uniform understanding of this area.
- The use of 'integrated competency assessment' (holistic assessment), while valuable, appeared to raise some issues regarding validity that have yet to be resolved.
- While the issue of grading arose in the course of the study, its role is complex, and the validity issues it raises would vary with the use made of the grades.
One influence on employment and on-job success that also has an impact on validity could not be considered in the study because it appears to be an unstated factor. This is the matter of attitude. In some cases this may be a major factor, and its absence from consideration in training packages is a cause for concern. This was beyond the scope of the present research, but it did place some limitations upon the results produced by this study.
The study has been able to demonstrate that, within the industrial areas studied, the proposed expanded approaches to validity are already generally regarded as important. The use of the broader notion of validity would allow a clearer understanding of practical issues. A diagnostic tool has been developed to help build such understanding.
Finally, as a consequence of this broader approach, a revised definition of validity is proposed that sees it as the extent to which the interpretation and use of an assessment outcome can be supported by evidence.
Download
TITLE | FORMAT | SIZE
---|---|---
Improving-the-validity-621 | | 383.2 KB