Solved – Does KNN have a loss function

machine learning

I couldn't find a definition of "loss function" on Wikipedia in the context of machine learning.

The one below is less formal, but it is clear enough:

At its core, a loss function is incredibly simple: it’s a method of evaluating how well your algorithm models your dataset. If your predictions are totally off, your loss function will output a higher number. If they’re pretty good, it’ll output a lower number. As you change pieces of your algorithm to try and improve your model, your loss function will tell you if you’re getting anywhere.

It seems that the error rate of KNN is not a function that can guide the model to optimize itself, for example via gradient descent.

So, does KNN have a loss function?

Best Answer

$k$-NN does not have a loss function that can be minimized during training. In fact, this algorithm is not trained at all. The only "training" that happens for $k$-NN is memorising the data (creating a local copy), so that during prediction you can do a search and a majority vote. Technically, no function is fitted to the data, and so no optimization is done (it cannot be trained using gradient descent).
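To make this concrete, here is a minimal sketch of a $k$-NN classifier in Python with NumPy (the class name, parameter names, and toy data are just for illustration): the `fit` step only stores the data, and all the work, a distance search plus a majority vote, happens at prediction time.

    import numpy as np
    from collections import Counter

    class KNNClassifier:
        """Minimal k-NN classifier: "training" only memorises the data."""

        def __init__(self, k=3):
            self.k = k

        def fit(self, X, y):
            # No loss is minimized here -- we simply keep a copy of the training set.
            self.X_train = np.asarray(X, dtype=float)
            self.y_train = np.asarray(y)
            return self

        def predict(self, X):
            X = np.asarray(X, dtype=float)
            predictions = []
            for x in X:
                # Euclidean distance from the query point to every stored point.
                distances = np.linalg.norm(self.X_train - x, axis=1)
                # Indices of the k nearest neighbours.
                nearest = np.argsort(distances)[:self.k]
                # Majority vote among their labels.
                labels = self.y_train[nearest]
                predictions.append(Counter(labels).most_common(1)[0][0])
            return np.array(predictions)

    if __name__ == "__main__":
        X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
        y = [0, 0, 0, 1, 1, 1]
        model = KNNClassifier(k=3).fit(X, y)
        print(model.predict([[0.5, 0.5], [5.5, 5.5]]))  # expected: [0 1]

Note that nothing in `fit` could be driven by gradient descent: there are no parameters being adjusted, only data being stored.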