As many on this forum know, I often favor an R solution. In this case, however, it would be reinventing the wheel in a much less robust way. There is an excellent piece of free software, the Map Comparison Kit (MCK), that implements many published and novel validation statistics for rasters. Of particular interest here are the Kappa, fuzzy Kappa, and weighted Kappa statistics.
Now, if you want to implement something in R, there are several approaches you can take, depending on the complexity of the validation statistic. In a univariate case you can easily pass a function to "focal" to calculate uncertainty within a defined neighborhood. Moving to a bivariate case, you would want to vectorize the problem and define a function that takes two independent sets of data into account. I do not believe that "movingFun" or "focal" will accept two rasters. You can, however, use "overlay", "getValuesBlock", or ideally "getValuesFocal", all of which operate on stack/brick objects.
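To illustrate the univariate case, here is a minimal base-R sketch of what a focal (moving-window) calculation does: compute a statistic over each cell's neighborhood. The function name focal.sd, the matrix, and the choice of sd as the statistic are just for illustration; on a real raster you would call "focal" from the raster package directly.

```r
# Minimal moving-window sketch: neighborhood standard deviation on a matrix.
# Illustrative only; raster::focal() does this (and handles edges) for real rasters.
focal.sd <- function(m, ws = 3) {
  half <- (ws - 1) / 2
  out <- matrix(NA, nrow(m), ncol(m))
  # Interior cells only; border cells stay NA (no padding in this sketch)
  for (i in (1 + half):(nrow(m) - half)) {
    for (j in (1 + half):(ncol(m) - half)) {
      # Extract the ws x ws neighborhood centered on cell (i, j)
      nb <- m[(i - half):(i + half), (j - half):(j + half)]
      out[i, j] <- sd(as.vector(nb))
    }
  }
  out
}

set.seed(42)
m <- matrix(runif(100), 10, 10)
k <- focal.sd(m, ws = 3)
```

The same pattern generalizes to any univariate neighborhood statistic by swapping sd for another function.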
Here is a worked example of calculating Kappa in a 3x3 window using "getValuesFocal". In the for loop, the lapply function reclassifies the simulated probabilities (1 if p >= t, else 0). The parameter "p" adjusts the sensitivity threshold and "ws" adjusts the size of the extracted focal window. I wrote this to be memory safe, so it writes a file ("Kappa.img") to disk in the defined working directory.
require(raster)
require(asbio)

setwd("D:/TEST")
ws <- 3    # focal window size (3x3)
p <- 0.65  # probability threshold

# Create example data
pred <- raster(ncol=100, nrow=100)
pred[] <- runif(ncell(pred), 0, 1)
obs <- pred
obs[] <- runif(ncell(obs), 0, 1)
obs.pred <- stack(obs, pred)
names(obs.pred) <- c("obs", "pred")

# Create new on-disk raster
s <- writeStart(obs.pred[[1]], "Kappa.img", overwrite=TRUE)
tr <- blockSize(obs.pred)
options(warn=-1)

# Loop to read the raster in blocks using getValuesFocal
for (i in 1:tr$n) {
  # Get focal values as a list of matrix objects, one per layer
  v <- getValuesFocal(obs.pred, row=tr$row[i], nrows=tr$nrows[i],
                      ngb=ws, array=FALSE)
  # Reclassify data to [0,1] using lapply
  v <- lapply(v, FUN=function(x) {
    if (length(x[is.na(x)]) == length(x)) {
      return(NA)
    } else {
      return(ifelse(x >= p, 1, 0))
    }
  })
  # Loop to calculate Kappa and assign to the new raster using writeValues
  r <- vector()
  for (j in 1:dim(v[[1]])[1]) {
    Obs <- v[[1]][j, ]
    Obs <- Obs[!is.na(Obs)]
    Pred <- v[[2]][j, ]
    Pred <- Pred[!is.na(Pred)]
    if (length(Obs) >= 2 && length(Obs) == length(Pred)) {
      r <- append(r, Kappa(Pred, Obs)$khat)
    } else {
      r <- append(r, NA)
    }
  }
  writeValues(s, r, tr$row[i])
}
s <- writeStop(s)

k <- raster("Kappa.img")
plot(k)
There is a confusionMatrix() function implemented in the caret package for R:
library(caret) #required for confusionMatrix()
#example values
a <- c(1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1) #values from classification
b <- c(1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0) #reference values (observed/checked) for validation
table(a,b) #shows confusion matrix
confusionMatrix(table(a,b)) #confusion matrix with Accuracy, kappa ....
Note: confusionMatrix() only works if there are no empty classes in a or b.
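As a cross-check, the accuracy and kappa that confusionMatrix() reports can be derived from the contingency table in base R. This is a minimal sketch of the standard Cohen's kappa formula, not caret's internal implementation:

```r
a <- c(1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1)  # values from classification
b <- c(1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0)  # reference (observed) values
ct <- table(a, b)                        # confusion matrix
n  <- sum(ct)
po <- sum(diag(ct)) / n                     # observed agreement (overall accuracy)
pe <- sum(rowSums(ct) * colSums(ct)) / n^2  # chance agreement from the marginals
kappa <- (po - pe) / (1 - pe)               # Cohen's kappa
```

This makes explicit that kappa is simply the overall accuracy corrected for the agreement expected by chance.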
Best Answer
You can use the Semi-Automatic Classification Plugin for QGIS. It includes an accuracy assessment calculation, although I have not tried it yet.
There is also help documentation for the plugin, and a YouTube video about accuracy assessment.