Scoring the Exam
Development of Scoring Items
Item Writing: For each item, item writers determine the best answer (on the written exam) or the weighting of the scoring items (on the OSCE), supporting their decisions with appropriate references, including a link to the relevant section of the competency document.
The Written and OSCE examinations are criterion-referenced assessments that focus on whether candidates meet the performance level required of entry-level practitioners. Candidates are not compared to each other, and there are no predetermined pass rates. It is possible for all candidates to be successful on the examination.
During each OSCE administration, examiners score each candidate’s station performance on a global rating scale ranging from “Outright Fail” to “Excellent”. This global rating, together with the other scoring data for the station, is used to calculate the borderline pass performance level for that station. The station-level results are then added together to determine the pass score for the OSCE.
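For illustration only, the sketch below shows how a station-level borderline pass score could be derived with the borderline regression approach described here. The data, the numeric global-rating scale, and the location of the “borderline” point are hypothetical assumptions, not OEBC’s actual parameters.

```python
import numpy as np

# Hypothetical data for one OSCE station: each candidate's checklist score
# and the examiner's global rating (assume 1 = Outright Fail ... 5 = Excellent).
checklist_scores = np.array([12, 15, 18, 9, 20, 16, 14, 11, 19, 17], dtype=float)
global_ratings = np.array([2, 3, 4, 1, 5, 3, 3, 2, 4, 4], dtype=float)

BORDERLINE_RATING = 2.5  # assumed position of the "borderline pass" point on the scale

# Regress checklist scores on global ratings, then read off the predicted
# checklist score at the borderline rating: this is the station's pass score.
slope, intercept = np.polyfit(global_ratings, checklist_scores, deg=1)
station_pass_score = slope * BORDERLINE_RATING + intercept
print(f"Station pass score: {station_pass_score:.2f}")

# The OSCE pass score is the sum of these station pass scores across all stations.
```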
Scoring the Written and OSCE Exams
The foundation of setting a fair and defensible pass score for each exam component is an item validation process that ensures the relevance and accuracy of the selected correct response. Optometrists develop the content for OEBC exam components based on the Blueprint, set the standards, and review items that did not perform as expected. Under the guidance of psychometricians, these processes ensure each test item’s validity.
To set standards for its Written Exam, OEBC uses the Angoff method, a best practice for static assessments. For its OSCE, OEBC uses borderline regression, a best-practice method for setting standards on dynamic exams. In both cases, the standard used for scoring is determined by optometrists under the guidance of a psychometrician.
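As a simplified illustration of the Angoff method, the sketch below shows how panellists’ judgements could be combined into a raw pass score for a written exam. The ratings and the numbers of panellists and items are invented for the example.

```python
# Hypothetical Angoff ratings: each row holds one panellist's estimates of the
# probability that a borderline (just-passing) candidate answers each item correctly.
angoff_ratings = [
    [0.70, 0.55, 0.80, 0.60],  # panellist 1
    [0.65, 0.60, 0.75, 0.55],  # panellist 2
    [0.75, 0.50, 0.85, 0.65],  # panellist 3
]

# Average the estimates per item, then sum across items to obtain the raw
# pass score (each written item is worth 1 mark).
n_panellists = len(angoff_ratings)
item_means = [sum(item) / n_panellists for item in zip(*angoff_ratings)]
raw_pass_score = sum(item_means)
print(f"Raw pass score: {raw_pass_score:.2f} out of {len(item_means)}")
```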
OEBC normalizes the pass score to 1.000 to facilitate the comparison of exams over time. Each candidate’s result is reported as Pass or Fail based on their total score relative to the pass score of 1.000, the score required to pass the exam. Fail results undergo additional review.
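One common way to normalize a pass score to 1.000 is to divide each candidate’s raw score by the raw pass score. The sketch below assumes that approach and uses made-up numbers; OEBC’s exact rescaling is not described here.

```python
def scaled_score(candidate_raw: float, raw_pass_score: float) -> float:
    """Rescale a raw score so that the pass score maps to exactly 1.000
    (assumed linear rescaling; illustrative only)."""
    return candidate_raw / raw_pass_score

# Hypothetical raw pass score of 96.4 marks:
print(round(scaled_score(101.0, 96.4), 3))  # 1.048 -> Pass
print(round(scaled_score(92.0, 96.4), 3))   # 0.954 -> Fail
```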
After an examination, all results are verified before being sent to candidates.
Written:
The written exam is case-based, with four questions per case. Each item on the written exam is worth 1 mark, and the candidate’s total score is the number of items answered correctly.
The computer-based exam is marked electronically based on the candidate’s entries. The candidate’s ID number is linked to their access code. This process virtually eliminates coding errors.
A panel of optometrists, under the guidance of a psychometrician, completes a review of test items. Items that do not meet appropriate psychometric criteria, for example an item that fails to discriminate positively, may be deleted from scoring for all candidates, thereby ensuring that reported results are valid and fair.
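To make “fails to discriminate positively” concrete: a common check is the correlation between performance on an item and candidates’ overall scores. The sketch below uses a point-biserial-style correlation with hypothetical responses; it is one plausible check, not necessarily the statistic OEBC applies.

```python
import numpy as np

def item_discrimination(item_scores: np.ndarray, total_scores: np.ndarray) -> float:
    """Correlation between an item (1 = correct, 0 = incorrect) and the rest of
    the test. A value near zero or negative suggests poor or negative discrimination."""
    rest = total_scores - item_scores  # remove the item's own contribution
    return float(np.corrcoef(item_scores, rest)[0, 1])

# Hypothetical responses for eight candidates on one item, plus their total scores.
item = np.array([1, 0, 1, 1, 0, 1, 0, 1], dtype=float)
totals = np.array([88, 52, 91, 77, 60, 85, 58, 80], dtype=float)
print(round(item_discrimination(item, totals), 2))  # positive -> item retained
```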
OSCE:
The OSCE component has 12 interactive stations, each simulating a clinical situation. The scenarios are presented through standardized patients and the presentation of clinical data. In addition, three of these stations assess technical skills, patient interaction, and the resolution of specific issues.
Optometrists rate candidates’ performance. These examiners are trained to use the standardized checklist criteria for the stations and the patient interaction scale.
First, a candidate’s total raw score is calculated by adding all ratings across all stations. This raw score is then converted to a scaled score for reporting.
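Putting these pieces together for the OSCE, a minimal sketch (with invented numbers, and assuming the same division-based rescaling as above) might look like this:

```python
# Hypothetical totals for one candidate across 12 stations.
station_raw_scores = [14.0, 17.5, 12.0, 16.0, 15.5, 13.0,
                      18.0, 14.5, 16.5, 12.5, 15.0, 17.0]
# Hypothetical station pass scores from the borderline regression step.
station_pass_scores = [13.2, 16.0, 12.8, 15.1, 14.9, 13.5,
                       16.8, 13.9, 15.6, 12.9, 14.4, 16.2]

total_raw = sum(station_raw_scores)
osce_pass_score = sum(station_pass_scores)
scaled = total_raw / osce_pass_score  # pass score rescaled to 1.000

print(f"Scaled score: {scaled:.3f} -> {'Pass' if scaled >= 1.0 else 'Fail'}")
```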
The results of all failed candidates are reviewed by a panel of at least three optometrists to ensure that the score resulted from the candidate’s performance and not from any extraneous factor the panel determines to be relevant. If an extraneous factor during the exam (e.g., one documented in writing at the exam site in the form of an Incident Report) impacted an individual candidate’s results, the scoring for that candidate may be adjusted after review to establish a fair and valid result.