Solved – MLE of a function of a parameter

Tags: estimation, likelihood, maximum-likelihood, self-study

I am working on a problem where we are interested in finding the MLE for a function of two parameters.

I am having trouble getting started. Intuitively, the idea makes sense; I am just unsure of the definition of the MLE of a function of two parameters (Google isn't turning up much). The question is as follows:

Question: Suppose that $X_1,\ldots,\,X_n$ are iid $N(\mu, \sigma^2)$ with unknown $\mu,\sigma^2$. Find the MLE for $\frac{\mu}{\sigma}$.

Note that this is not just a homework problem, but part of a take-home final. I am not really looking for a full answer, just the general idea for such problems.

Edit

Apparently MLEs are invariant under transformations. Thanks!

Best Answer

The question was answered with a link in a comment, so let me just give here the argument from the link, for future completeness.

We assume a statistical model for data $X$ parameterized by a parameter $\theta$ (which can be scalar, vector, or more general). Let the likelihood function be $L(\theta)$, and let the value of $\theta$ maximizing it be the maximum likelihood estimator (MLE) $\hat{\theta}$. We assume that this estimator exists and is unique. We want the MLE of $g(\theta)$, a function of $\theta$.

First, assume that $g$ is one-to-one. Then we can write
$$ L(\theta) = L(g^{-1}(g(\theta))) $$
and both sides are clearly maximized by $\hat{\theta}$, so
$$ \hat{\theta} = g^{-1}(\widehat{g(\theta)}) \quad\text{or}\quad g(\hat{\theta}) = \widehat{g(\theta)}. $$
If $g$ is many-to-one, the $\hat{\theta}$ maximizing $L(\theta)$ still maps to $g(\hat{\theta})$, so $g(\hat{\theta})$ still corresponds to the maximum of $L(\theta)$. (Argument paraphrased from the link in the comment above.)
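Applied to the question above, the invariance property says we only need the MLEs of $\mu$ and $\sigma$ and can then plug them into $g(\mu,\sigma)=\mu/\sigma$. A minimal numerical sketch (not from the original post; the sample size, seed, and true parameters are arbitrary choices for illustration):

```python
import numpy as np

# Simulated data; loc=2.0 and scale=1.5 are arbitrary "true" values.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=10_000)

# MLEs for a normal sample: the sample mean, and the square root of the
# 1/n variance (note: the MLE divides by n, not the unbiased n-1).
mu_hat = x.mean()
sigma_hat = np.sqrt(np.mean((x - mu_hat) ** 2))

# By invariance, the MLE of mu/sigma is just the ratio of the MLEs.
mle_ratio = mu_hat / sigma_hat

print(mle_ratio)  # should be near the true ratio 2.0 / 1.5 for large n
```

The point is that no separate maximization over the ratio is needed: maximizing $L(\mu,\sigma)$ and transforming gives the same answer as reparameterizing in terms of $\mu/\sigma$ and maximizing directly.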