Solved – why do we calculate risk when we already have loss functions

loss-functions, machine-learning, risk

If we already have, say, mean squared error as a loss function that can tell us how good our algorithm is, then why do we calculate the expectation of the loss function as the risk?

Apologies if this is a naive question; I am new to machine learning.

Best Answer

The risk, in a machine learning context, is the expectation of the loss function over the random variables that generate the data. The loss measures the error on a single example; the risk measures the average error over the whole data distribution.
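In standard notation (using the squared error from the question), for a predictor $f$ and data drawn from a joint distribution over $(x, y)$, the risk is

$$R(f) = E_{(x,y)}\big[L(y, f(x))\big] = E\big[(y - f(x))^2\big].$$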

Let's say you have a random variable $x$. If you apply the loss function $L(\cdot)$ to this random variable, you get another random variable $y = L(x)$. The expectation is not a function but an operator $E[\cdot]$; when you apply it to the random variable $y$, you get the constant value $\mu = E[y]$.

So, applying a loss function transforms the random input into another random variable. Taking the expectation of that new (transformed) random variable collapses it to a single number -- the risk.
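A small numerical sketch of this idea: the per-sample squared error is itself a random variable (it changes with every draw), while its expectation, approximated here by the sample mean, is one fixed number. The toy model $y = 2x + \text{noise}$ and all names below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)  # true relationship plus noise

y_hat = 2.0 * x                              # predictions from a fixed model

per_sample_loss = (y - y_hat) ** 2           # a random variable: one loss per draw
empirical_risk = per_sample_loss.mean()      # a single number: estimate of E[loss]

print(per_sample_loss[:3])  # varies from draw to draw
print(empirical_risk)       # close to 0.25, the noise variance
```

Here the individual losses fluctuate, but the empirical risk concentrates around the true risk (the noise variance, $0.5^2 = 0.25$) as $n$ grows -- which is exactly why we average the loss rather than look at it on single examples.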
