Description: This strand of analysis examines fundamental questions of assessment quality. It is delivered as a workshop to discuss the results of the analysis we have undertaken.
Description: This strand carries out data analysis to answer questions about marker performance, and provides best practice advice on marker training and standardisation processes. The intention is to maximise the quality of standardisation while marking is taking place, and then to measure that quality through analysis, providing intelligence on which to base future training and standardisation. It is delivered as a workshop to discuss the results of the analysis we have undertaken.
Providing best practice advice and guidance in areas such as:
Description: Reliability is a key indicator of assessment quality. It concerns the consistency of results: would test takers get a different result if they attempted a different version of the same test, took the test on a different day, or took it in a different test window? Reliability analysis also quantifies the extent of error in results: how much of the variation in test scores reflects genuine differences in the attribute being assessed, and how much is random error?
This is provided as a training course. We customise the content to match the particular assessment scenarios that the awarding organisation (AO) is using, and review the analysis that we have undertaken.
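To make the idea of quantifying error concrete, the sketch below computes one widely used reliability estimate, Cronbach's alpha, from per-item scores. The data is entirely illustrative (hypothetical marks, not real candidate data), and this is just one of several possible reliability statistics:

```python
# Minimal sketch of a common reliability estimate, Cronbach's alpha.
# Item scores below are made up for illustration only.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding one score per test taker.
    Alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(item_scores)
    # Variance of each item's scores across test takers
    item_vars = [pvariance(scores) for scores in item_scores]
    # Total test score for each test taker (sum across items)
    totals = [sum(per_taker) for per_taker in zip(*item_scores)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical scores: 4 items, 6 test takers
items = [
    [2, 3, 3, 4, 4, 5],
    [1, 2, 3, 3, 4, 5],
    [2, 2, 3, 4, 5, 5],
    [1, 3, 3, 4, 4, 4],
]
print(round(cronbach_alpha(items), 3))  # high value: items rank takers consistently
```

Values close to 1 indicate that the items are measuring the same underlying attribute consistently; low or negative values suggest a large random-error component.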
Analysis and outputs:
Description: This consultancy service helps assessment organisations design qualifications that maximise validity and reliability. It is typically provided as a workshop to discuss the results of the sample development work we have undertaken.
It focuses on the principles of writing meaningful learning outcomes and assessment criteria, designing appropriate grading schemes, and creating meaningful combinations of units.
We also provide advice on how to gather and organise underpinning evidence for successful regulatory applications.
Description: We assist your team in planning, running and documenting the results of grading or standard-setting meetings. This is typically provided as a workshop to discuss options for grading design and standard-setting activity.
Description: Using statistical techniques such as Classical Test Theory (CTT) and Item Response Theory (IRT) to understand the extent to which tests and questions are comparable in difficulty. This can be provided either as a training programme or as a workshop following analysis by our statistics experts.
Topics covered may include the following areas of comparability:
Using effective qualitative analysis techniques to establish the demands (the intellectual content) of assessments. For example:
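One simple Classical Test Theory statistic used in this kind of comparability work is the facility index (the mean mark on an item as a proportion of its maximum mark), which lets the difficulty of the same or equivalent items be compared across test versions. The sketch below uses hypothetical marks, not real response data:

```python
# Minimal sketch of a Classical Test Theory comparability check:
# facility (difficulty) indices for items on two test versions.
# All marks below are illustrative, not real candidate data.

def facility(scores, max_mark):
    """Facility index: mean mark as a proportion of the maximum mark.
    Lower values indicate a harder item."""
    return sum(scores) / (len(scores) * max_mark)

# Hypothetical marks per item (one list per item), maximum mark 4
version_a = {"Q1": [4, 3, 4, 2, 3], "Q2": [1, 2, 0, 1, 2]}
version_b = {"Q1": [3, 4, 4, 3, 4], "Q2": [2, 3, 1, 2, 2]}

for item in version_a:
    fa = facility(version_a[item], max_mark=4)
    fb = facility(version_b[item], max_mark=4)
    print(f"{item}: version A {fa:.2f}, version B {fb:.2f}")
```

A large gap in facility between versions (as with Q2 here) flags items, or whole versions, whose difficulty may not be comparable; IRT methods extend this by placing items and test takers on a common scale.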
Description: This service allows assessment organisations to balance assessment requirements such as security (preventing cheating and other maladministration), reliability and comparability against practical drivers such as producing a feasible number of questions and keeping test versions live for economically viable periods (i.e. not requiring excessive numbers of tests and items to be produced). This can be provided either as a training programme or as a workshop following analysis.
The consultancy would look at issues such as: