By Zafar Iqbal
An assessment is reliable when, using the same marking scheme and criteria, different assessors reach the same judgements about learners' work. The Institute of Education (2004) defines reliability "in terms of measurement how accurate the assessment is and if repeated how far the second result from first outcome." In other words, the scores or results should remain the same irrespective of variations among assessors. To make assessments reliable, it is vital that learners are explicitly aware of the learning outcomes and assessment criteria, and that assessors remain consistent throughout the assessment process.
An assessment is 'valid' when it fulfils its core objective: it measures what it is supposed to measure. For instance, if the objective of an assessment is to test the ability 'to analyse', it is not acceptable to let additional considerations, such as competency in the language, influence decisions about the learners' performance. Validity takes different forms, such as concurrent, content, criterion-related, construct and consequential validity. For Messick (1989), criterion-related validity compares students' performance in a test with another interrelated measure; for example, a student's higher grades in a college diagnostic test being associated with better subsequent performance in the college. Concurrent validity demonstrates the correlation between the results of one instrument and those of another assumed to assess the same capability, knowledge or skill (ibid.). Content validity concerns whether the answers provided match the subject areas the assessment intended to cover (The College Board, 2015). For instance, Edexcel/Pearson Education provides a list of indicative content in each unit for every task, which assessors consider while assessing student work. Validity and reliability are interconnected: low reliability may cause low validity in an assessment (Institute of Education, 2004).
An assessment is 'fair' when all learners are given an equal opportunity to demonstrate their knowledge and abilities, notwithstanding differences in their experiences, and when assessment instruments and processes are equally available to them. For instance, granting additional time in a written examination or presentation to one particular candidate is against the notion of fairness, as it gives that candidate an unfair advantage. In light of the above principles, it is the responsibility of tutors, teachers, lecturers and assessors to ensure that the planning, delivery and marking of assessments are fair, reliable, valid and consistent. For example, in an international learning environment some learners may be unable to pronounce corporate business terminology correctly because of their non-English accent; this aspect of their performance should not be treated as a major weakness, because pronunciation is not a core objective of the assessment. In such situations, learners should instead be given additional support in their study skills classes.
Institute of Education (2004) A systematic review of the evidence of reliability and validity of assessment by teachers used for summative purposes. London: University of London.
Messick, S. (1989) Validity. In R. Linn (ed.), Educational measurement (3rd ed., pp. 13–103). New York: Macmillan.
The College Board (2015) The types of validity [online]. Available at: Accessed on 05/01/2015.
The Journey to Excellence (2015) Assessment for learning [online]. Available at: http://www.journeytoexcellence.org.uk/resourcesandcpd/research/summaries/rsassessment.asp. Accessed on 05/01/2015.