Kappa on a single item, kappa(si), is proposed as a measure of interrater agreement when a single item or object is rated by multiple raters. A statistical test and Monte Carlo simulations are provided for testing whether kappa(si) exceeds chance agreement.
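The idea can be illustrated with a minimal sketch, not the paper's actual formulation: here kappa(si) is taken as chance-corrected pairwise agreement among the raters of one item, with chance agreement derived from assumed category probabilities, and its significance is assessed by a Monte Carlo simulation of raters who assign categories at random. The function names, the pairwise-agreement definition, and the use of fixed category probabilities are all assumptions for illustration.

```python
import random
from itertools import combinations

def kappa_si(ratings, category_probs):
    """Chance-corrected pairwise agreement among raters on a single item.

    ratings: list of category labels, one per rater (assumed form).
    category_probs: dict mapping category -> assumed chance probability.
    """
    pairs = list(combinations(ratings, 2))
    # Observed proportion of rater pairs that agree on this item.
    p_obs = sum(a == b for a, b in pairs) / len(pairs)
    # Expected agreement if both raters draw categories at random.
    p_exp = sum(p * p for p in category_probs.values())
    return (p_obs - p_exp) / (1 - p_exp)

def monte_carlo_p(ratings, category_probs, n_sim=10_000, seed=0):
    """Estimate P(kappa_si >= observed) under purely random rating."""
    rng = random.Random(seed)
    cats = list(category_probs)
    weights = [category_probs[c] for c in cats]
    observed = kappa_si(ratings, category_probs)
    exceed = 0
    for _ in range(n_sim):
        sim = rng.choices(cats, weights=weights, k=len(ratings))
        if kappa_si(sim, category_probs) >= observed:
            exceed += 1
    return exceed / n_sim
```

For example, five raters who all assign category "A" under equal chance probabilities give kappa(si) = 1, and the simulated p-value reflects how rarely unanimous agreement arises by chance.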