How Does Score Dependability Change?
Table 2 presents a series of dependability indices with a varying number of raters and tasks for the NS and NNS groups. The two groups showed similar patterns of score dependability: as the number of raters increased in both groups, the dependability index (Φ) increased. NS and NNS raters also achieved almost the same score dependability when the number of tasks given for students to complete was held constant. For example, when students were assumed to have completed three tasks, dependable scores could be obtained with the same number of NS and NNS raters: three raters from each group. Similarly, when students were assumed to have completed five tasks, dependable scores could be obtained with two raters from each group.

This study utilized G-theory to examine whether NS and NNS raters assess ESL students' speaking performance differently, with particular attention paid to (i) the relative effects of NS and NNS raters on score dependability and (ii) changes in score dependability when the number of NS and NNS raters is varied. The results indicated that most score variability was attributable to students' speaking ability, with a very small rater effect. The NS and NNS rater groups also rarely differed: raters in each group exhibited almost the same severity patterns across all students and contributed little variance to total score variability. When the number of raters involved in the assessment was varied, the two groups likewise showed similar patterns of score dependability. There was, however, a noticeable difference between the two groups with regard to the interaction between students and raters, with the NS group exhibiting a greater interaction effect than the NNS group. This indicates that NS raters were more biased toward particular students than NNS raters, and suggests that NS raters might exhibit more severe or more lenient rating patterns towards certain students.
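The pattern described above, where the dependability index rises as raters or tasks are added, follows directly from the standard G-theory decision-study formula for Φ in a fully crossed persons × raters × tasks design. The sketch below illustrates the computation; the variance components used here are hypothetical placeholders for illustration, not the estimates reported in this study's G-study.

```python
def phi(var, n_raters, n_tasks):
    """Dependability (Phi) coefficient for a fully crossed
    persons x raters x tasks (p x r x t) D-study.

    `var` maps each variance component to its estimated value.
    All non-person components contribute to absolute error,
    each divided by the number of conditions it is averaged over.
    """
    universe_score = var["p"]  # persons (true-score) variance
    absolute_error = (
        var["r"] / n_raters
        + var["t"] / n_tasks
        + var["pr"] / n_raters
        + var["pt"] / n_tasks
        + var["rt"] / (n_raters * n_tasks)
        + var["prt_e"] / (n_raters * n_tasks)
    )
    return universe_score / (universe_score + absolute_error)


# Hypothetical variance components, for illustration only.
components = {"p": 0.60, "r": 0.01, "t": 0.05,
              "pr": 0.04, "pt": 0.10, "rt": 0.01, "prt_e": 0.19}

# Phi increases as raters are added while tasks are held at three.
for n_r in (1, 2, 3):
    print(n_r, round(phi(components, n_raters=n_r, n_tasks=3), 3))
```

With components like these, adding a second or third rater yields diminishing but consistent gains in Φ, which mirrors the trend the two rater groups showed in Table 2.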
Further research is recommended to examine why this might be the case. The results of this study suggest that NS and NNS raters contributed similarly to score variability in their ESL speaking performance assessments, and that NNS raters might be as reliable as NS raters. This echoes Brown's (1995) and Kim's (2009a, 2009b) findings, which suggested that NS and NNS raters were similar in that they exhibited little difference in severity and internal consistency in an English speaking test. This is particularly encouraging in light of the increasing influence of English in expanding circle countries, where people speak it as a foreign language. In a specific, local EFL context, NNS raters might arguably be more suitable language assessors because their knowledge of local English varieties and variations, including so-called 'nativized' English (Taylor 2006), would be useful in determining acceptability criteria in assessments. Nonetheless, care must be taken with this interpretation because it is also possible that NS raters respond differently to varieties that differ from the Englishes spoken in inner circle countries.

The results of this study should be considered preliminary due to several limitations. First, the NS and NNS rater groups consisted of only Canadian and Korean teachers of English, respectively, limiting the possibility of exploring how raters from other native language backgrounds might affect score variability. Indeed, in a well-known study in ESL writing, Santos (1988) found that NNS professors teaching at an American university were more severe than NS professors in assessing NNS students' academic writing. Further, the small number of student participants resulted in large standard errors (i.e. SE = 0.41 and 0.44 for NS and NNS raters, respectively). Future research that takes a larger number of students into account will provide more reliable findings.
It is also recommended that further research examine the relationship between the distinctive features of Kachru's (1982) three concentric circles and assessors from such contexts. The findings of such research will certainly enrich our limited knowledge of the NS/NNS factor in language assessment.