Rater Facets Models with Item/Rater Intercepts and Slopes
rm.facets.Rd
This function estimates the unidimensional rater facets model (Linacre, 1994) and an extension involving item and rater slopes (see Details; Robitzsch & Steinfeld, 2018). The estimation is conducted by an EM algorithm employing marginal maximum likelihood.
Usage
rm.facets(dat, pid=NULL, rater=NULL, Qmatrix=NULL, theta.k=seq(-9, 9, len=30),
    est.b.rater=TRUE, est.a.item=FALSE, est.a.rater=FALSE, rater_item_int=FALSE,
    est.mean=FALSE, tau.item.fixed=NULL, a.item.fixed=NULL, b.rater.fixed=NULL,
    a.rater.fixed=NULL, b.rater.center=2, a.rater.center=2, a.item.center=2, a_lower=.05,
    a_upper=10, reference_rater=NULL, max.b.increment=1, numdiff.parm=0.00001,
    maxdevchange=0.1, globconv=0.001, maxiter=1000, msteps=4, mstepconv=0.001,
    PEM=FALSE, PEM_itermax=maxiter)
# S3 method for rm.facets
summary(object, file=NULL, ...)
# S3 method for rm.facets
anova(object,...)
# S3 method for rm.facets
logLik(object,...)
# S3 method for rm.facets
IRT.irfprob(object,...)
# S3 method for rm.facets
IRT.factor.scores(object, type="EAP", ...)
# S3 method for rm.facets
IRT.likelihood(object,...)
# S3 method for rm.facets
IRT.posterior(object,...)
# S3 method for rm.facets
IRT.modelfit(object,...)
# S3 method for IRT.modelfit.rm.facets
summary(object, ...)
## function for processing data
rm_proc_data( dat, pid, rater, rater_item_int=FALSE, reference_rater=NULL )
Arguments
- dat
- Original data frame with the item ratings. Each row corresponds to one person-rater combination. 
- pid
- Person identifier. 
- rater
- Rater identifier 
- Qmatrix
- An optional Q-matrix. If this matrix is not provided, then by default the ordinary scoring of categories (from 0 to the maximum score of \(K\)) is used; see the illustrative sketch after this argument list. 
- theta.k
- A grid of theta values for the ability distribution. 
- est.b.rater
- Should the rater severities \(b_r\) be estimated? 
- est.a.item
- Should the item slopes \(a_i\) be estimated? 
- est.a.rater
- Should the rater slopes \(a_r\) be estimated? 
- rater_item_int
- Logical indicating whether rater-item-interactions should be modeled. 
- est.mean
- Optional logical indicating whether the mean of the trait distribution should be estimated. 
- tau.item.fixed
- Matrix with fixed \(\tau\) parameters. Non-fixed parameters must be declared by NA values.
- a.item.fixed
- Vector with fixed item discriminations 
- b.rater.fixed
- Vector with fixed rater intercept parameters 
- a.rater.fixed
- Vector with fixed rater discrimination parameters 
- b.rater.center
- Centering method for rater intercept parameters. The value 0 corresponds to no centering; the values 1 and 2 correspond to different methods that ensure the parameters sum to zero.
- a.rater.center
- Centering method for rater discrimination parameters. The value 0 corresponds to no centering; the values 1 and 2 correspond to different methods that ensure their product equals one.
- a.item.center
- Centering method for item discrimination parameters. The value 0 corresponds to no centering; the values 1 and 2 correspond to different methods that ensure their product equals one.
- a_lower
- Lower bound for \(a\) parameters 
- a_upper
- Upper bound for \(a\) parameters 
- reference_rater
- Identifier of a reference rater for which a fixed rater intercept of 0 and a fixed rater slope of 1 are assumed. 
- max.b.increment
- Maximum increment of item parameters during estimation 
- numdiff.parm
- Numerical differentiation step width 
- maxdevchange
- Maximum relative deviance change as a convergence criterion 
- globconv
- Maximum parameter change 
- maxiter
- Maximum number of iterations 
- msteps
- Maximum number of iterations during an M step 
- mstepconv
- Convergence criterion in an M step 
- PEM
- Logical indicating whether the P-EM acceleration should be applied (Berlinet & Roland, 2012). 
- PEM_itermax
- Number of iterations in which the P-EM method should be applied. 
- object
- Object of class rm.facets
- file
- Optional file name in which summary should be written. 
- type
- Factor score estimation method. Factor score types "EAP", "MLE" and "WLE" are supported.
- ...
- Further arguments to be passed 
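The default scoring mentioned for Qmatrix can be written out explicitly. The following sketch constructs such a matrix for 5 items with maximum score 3; the assumed layout (one row per item, one column per score category 1 to \(K\)) is an illustration only and is not specified on this page.
# illustrative sketch of a scoring matrix; the layout (items in rows,
# categories 1,...,K in columns) is an assumption for illustration
I <- 5 ; K <- 3
Qmatrix <- matrix( rep( 1:K, each=I ), nrow=I, ncol=K )   # default scoring q_{ik}=k
Qmatrix
# a hypothetical alternative: give the highest category a half-point score
Qmatrix2 <- Qmatrix
Qmatrix2[, K] <- 2.5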
Details
This function models ratings \(X_{pri}\) for person \(p\), rater \(r\), item \(i\) and category \(k\) (see also Robitzsch & Steinfeld, 2018; Uto & Ueno, 2016; Wu, 2017) $$P( X_{pri}=k | \theta_p ) \propto \exp( a_i a_r q_{ik} \theta_p - q_{ik} b_r - \tau_{ik} ) \quad, \quad \theta_p \sim N( 0, \sigma^2 )$$ By default, the scores in the \(Q\) matrix are \(q_{ik}=k\). Item slopes \(a_i\) and rater slopes \(a_r\) are standardized such that their product equals one, i.e. \( \prod_i a_i=\prod_r a_r=1\).
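As a purely illustrative sketch (following the displayed formula, not necessarily the internal parameterization of rm.facets), the category probabilities for one item-rater combination can be computed directly; all parameter values below are made up.
# sketch: category probabilities P(X_{pri}=k | theta_p) implied by the formula
a_i   <- 1.2                    # item slope a_i (made-up value)
a_r   <- 0.9                    # rater slope a_r
b_r   <- 0.3                    # rater severity b_r
tau   <- c(0, -1.0, 0.2, 1.1)   # tau_{ik} for k=0,...,3 with tau_{i0}=0
q     <- 0:3                    # default scoring q_{ik}=k
theta <- 0.5                    # person ability theta_p
num   <- exp( a_i * a_r * q * theta - q * b_r - tau )
round( num / sum(num), 3 )      # normalize over categories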
Value
A list with the following entries (a short access sketch follows the list):
- deviance
- Deviance 
- ic
- Information criteria and number of parameters 
- item
- Data frame with item parameters 
- rater
- Data frame with rater parameters 
- person
- Data frame with person parameters: EAP and corresponding standard errors 
- EAP.rel
- EAP reliability 
- mu
- Mean of the trait distribution 
- sigma
- Standard deviation of the trait distribution 
- theta.k
- Grid of theta values 
- pi.k
- Fitted distribution at theta.k values
- tau.item
- Item parameters \(\tau_{ik}\) 
- se.tau.item
- Standard error of item parameters \(\tau_{ik}\) 
- a.item
- Item slopes \(a_i\) 
- se.a.item
- Standard error of item slopes \(a_i\) 
- delta.item
- Delta item parameter. See pcm.conversion.
- b.rater
- Rater severity parameter \(b_r\) 
- se.b.rater
- Standard error of rater severity parameter \(b_r\) 
- a.rater
- Rater slope parameter \(a_r\) 
- se.a.rater
- Standard error of rater slope parameter \(a_r\) 
- f.yi.qk
- Individual likelihood 
- f.qk.yi
- Individual posterior distribution 
- probs
- Item probabilities at grid theta.k
- n.ik
- Expected counts 
- maxK
- Maximum number of categories 
- procdata
- Processed data 
- iter
- Number of iterations 
- ipars.dat2
- Item parameters for expanded dataset dat2
- ...
- Further values 
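As a sketch, the entries listed above can be inspected directly from a fitted object; the snippet assumes an object like mod5 from Example 2 below and only uses entries documented in this section.
# sketch: accessing entries of a fitted rm.facets object (e.g., mod5 below)
mod5$b.rater                 # rater severities b_r
mod5$a.rater                 # rater slopes a_r
mod5$a.item                  # item slopes a_i
c( mod5$mu, mod5$sigma )     # trait mean and standard deviation
head( mod5$person )          # EAPs and standard errors
mod5$EAP.rel                 # EAP reliability
# default centering constraints (small numerical deviations possible)
sum( mod5$b.rater )          # should be approximately 0
prod( mod5$a.item ) ; prod( mod5$a.rater )   # should be approximately 1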
References
Berlinet, A. F., & Roland, C. (2012). Acceleration of the EM algorithm: P-EM versus epsilon algorithm. Computational Statistics & Data Analysis, 56(12), 4122-4137.
Linacre, J. M. (1994). Many-Facet Rasch Measurement. Chicago: MESA Press.
Robitzsch, A., & Steinfeld, J. (2018). Item response models for human ratings: Overview, estimation methods, and implementation in R. Psychological Test and Assessment Modeling, 60(1), 101-139.
Uto, M., & Ueno, M. (2016). Item response theory for peer assessment. IEEE Transactions on Learning Technologies, 9(2), 157-170.
Wu, M. (2017). Some IRT-based analyses for interpreting rater effects. Psychological Test and Assessment Modeling, 59(4), 453-470.
Note
If the trait standard deviation sigma strongly differs from 1, then the user should investigate the sensitivity of results using different theta integration points theta.k.
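A minimal sketch of such a sensitivity check, assuming an object like mod2 from Example 2 below; mod2c is a hypothetical name for the refitted model.
# sketch: refit with a wider theta grid and compare key parameters
mod2c <- sirt::rm.facets( dat[, paste0("k",1:5) ], rater=dat$rater,
             pid=dat$idstud, theta.k=seq(-12, 12, len=41) )
cbind( mod2$b.rater, mod2c$b.rater )    # rater severities under both grids
c( mod2$sigma, mod2c$sigma )            # trait SD under both grids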
See also
See also the TAM package for the estimation of more complicated facet models.
See rm.sdt for estimating a hierarchical rater model.
Examples
#############################################################################
# EXAMPLE 1: Partial Credit Model and Generalized Partial Credit Model
#                   5 items and 1 rater
#############################################################################
data(data.ratings1)
dat <- data.ratings1
# select rater db01
dat <- dat[ paste(dat$rater)=="db01", ]
#****  Model 1: Partial Credit Model
mod1 <- sirt::rm.facets( dat[, paste0( "k",1:5) ], pid=dat$idstud )
#****  Model 2: Generalized Partial Credit Model
mod2 <- sirt::rm.facets( dat[, paste0( "k",1:5) ],  pid=dat$idstud, est.a.item=TRUE)
summary(mod1)
summary(mod2)
if (FALSE) {
#############################################################################
# EXAMPLE 2: Facets Model: 5 items, 7 raters
#############################################################################
data(data.ratings1)
dat <- data.ratings1
#****  Model 1: Partial Credit Model: no rater effects
mod1 <- sirt::rm.facets( dat[, paste0( "k",1:5) ], rater=dat$rater,
             pid=dat$idstud, est.b.rater=FALSE )
#****  Model 2: Partial Credit Model: intercept rater effects
mod2 <- sirt::rm.facets( dat[, paste0( "k",1:5) ], rater=dat$rater, pid=dat$idstud)
# extract individual likelihood
lmod1 <- IRT.likelihood(mod1)
str(lmod1)
# likelihood value
logLik(mod1)
# extract item response functions
pmod1 <- IRT.irfprob(mod1)
str(pmod1)
# model comparison
anova(mod1,mod2)
# absolute and relative model fit
smod1 <- IRT.modelfit(mod1)
summary(smod1)
smod2 <- IRT.modelfit(mod2)
summary(smod2)
IRT.compareModels( smod1, smod2 )
# extract factor scores (EAP is the default)
IRT.factor.scores(mod2)
# extract WLEs
IRT.factor.scores(mod2, type="WLE")
#****  Model 2a: compare results with TAM package
#   Results should be similar to Model 2
library(TAM)
mod2a <- TAM::tam.mml.mfr( resp=dat[, paste0( "k",1:5) ],
             facets=dat[, "rater", drop=FALSE],
             pid=dat$idstud, formulaA=~ item*step + rater )
#****  Model 2b: Partial Credit Model: some fixed parameters
# fix rater parameters for raters 1, 4 and 5
b.rater.fixed <- rep(NA,7)
b.rater.fixed[ c(1,4,5) ] <- c(1,-.8,0)  # fixed parameters
# fix item parameters of first and second item
tau.item.fixed <- round( mod2$tau.item, 1 )    # use parameters from mod2
tau.item.fixed[ 3:5, ] <- NA    # free item parameters of items 3, 4 and 5
mod2b <- sirt::rm.facets( dat[, paste0( "k",1:5) ], rater=dat$rater,
             b.rater.fixed=b.rater.fixed, tau.item.fixed=tau.item.fixed,
             est.mean=TRUE, pid=dat$idstud)
summary(mod2b)
#****  Model 3: estimated rater slopes
mod3 <- sirt::rm.facets( dat[, paste0( "k",1:5) ], rater=dat$rater,
             pid=dat$idstud, est.a.rater=TRUE)
#****  Model 4: estimated item slopes
mod4 <- sirt::rm.facets( dat[, paste0( "k",1:5) ], rater=dat$rater,
             pid=dat$idstud, est.a.item=TRUE)
#****  Model 5: estimated rater and item slopes
mod5 <- sirt::rm.facets( dat[, paste0( "k",1:5) ], rater=dat$rater,
             pid=dat$idstud, est.a.rater=TRUE, est.a.item=TRUE)
summary(mod1)
summary(mod2)
summary(mod2a)
summary(mod3)
summary(mod4)
summary(mod5)
#****  Model 5a: Some fixed parameters in Model 5
# fix rater b parameters for raters 1, 4 and 5
b.rater.fixed <- rep(NA,7)
b.rater.fixed[ c(1,4,5) ] <- c(1,-.8,0)
# fix rater a parameters for first four raters
a.rater.fixed <- rep(NA,7)
a.rater.fixed[ c(1,2,3,4) ] <- c(1.1,0.9,.85,1)
# fix item b parameters of first item
tau.item.fixed <- matrix( NA, nrow=5, ncol=3 )
tau.item.fixed[ 1, ] <- c(-2,-1.5, 1 )
# fix item a parameters
a.item.fixed <- rep(NA,5)
a.item.fixed[ 1:4 ] <- 1
# estimate model
mod5a <- sirt::rm.facets( dat[, paste0( "k",1:5) ], rater=dat$rater,
             pid=dat$idstud, est.a.rater=TRUE, est.a.item=TRUE,
             tau.item.fixed=tau.item.fixed, b.rater.fixed=b.rater.fixed,
             a.rater.fixed=a.rater.fixed, a.item.fixed=a.item.fixed,
             est.mean=TRUE)
summary(mod5a)
#****  Model 6: Estimate rater model with reference rater 'db03'
mod6 <- sirt::rm.facets( dat[, paste0( "k",1:5) ], rater=dat$rater, est.a.item=TRUE,
             est.a.rater=TRUE, pid=dat$idstud, reference_rater="db03" )
summary(mod6)
#**** Model 7: Modelling rater-item-interactions
mod7 <- sirt::rm.facets( dat[, paste0( "k",1:5) ], rater=dat$rater, est.a.item=FALSE,
             est.a.rater=TRUE, pid=dat$idstud, reference_rater="db03",
             rater_item_int=TRUE)
summary(mod7)
}