# [Math] Maximum Likelihood Estimator (MLE) of $\theta$ for the PDF $f( x; \theta) = \frac{1}{2}(1+\theta x)$


I need to find the maximum likelihood estimator of $\theta$ for
$f(x;\theta)=\frac{1}{2}(1+\theta x)$, $-1 \leq x \leq 1$

I start with: $L(\theta)=f(x_1,\theta)f(x_2,\theta)\cdots f(x_n,\theta)$

$$L(\theta)=\frac{1}{2}(1+\theta x_1)\frac{1}{2}(1+\theta x_2)\cdots \frac{1}{2}(1+\theta x_n)$$

$$\ln L(\theta)=n \ln\frac{1}{2}+\sum_{i=1}^n \ln(1+\theta x_i)$$
$$\frac{\partial\ln L(\theta)}{\partial \theta}=\sum_{i=1}^n \frac{x_i}{1+\theta x_i}=0$$

And here I get stuck and don't know how to proceed. Any suggestion on how to solve (maximize) this for $\theta$?

You got the correct objective function.
As Michael Hardy noted, you need to pay attention to the valid domain of $\theta$, which is $\theta \in \left[ -1, 1 \right]$.
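Since the score $\sum_i x_i/(1+\theta x_i)$ is strictly decreasing in $\theta$, one way to solve the first-order condition numerically while respecting $\theta \in [-1, 1]$ is a bisection that falls back to the boundary when the score does not change sign. A minimal pure-Python sketch (the helper names `score` and `mle_theta` are my own, not from the linked code):

```python
def score(theta, xs):
    # Derivative of the log-likelihood: sum of x_i / (1 + theta * x_i).
    return sum(x / (1.0 + theta * x) for x in xs)

def mle_theta(xs, tol=1e-10):
    # The score is strictly decreasing in theta on (-1, 1), so bisect for
    # its root; if the score has no sign change, the maximum sits on the
    # boundary of the valid domain [-1, 1].
    lo, hi = -1.0 + 1e-9, 1.0 - 1e-9
    if score(lo, xs) <= 0.0:   # log-likelihood decreasing everywhere
        return -1.0
    if score(hi, xs) >= 0.0:   # log-likelihood increasing everywhere
        return 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if score(mid, xs) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a symmetric sample such as `[-0.5, 0.5]` the score vanishes at $\theta = 0$, and the bisection returns a value very close to zero.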

In order to verify the solution I created a small simulation.
The first step is to create a sampler for the given distribution. One way to do so is Inverse Transform Sampling (a nice example is given at Generate a Random Variable from a Given Probability Density Function (PDF)).

In the above case we have:

$$\int_{-1}^{x} 0.5 \left(1 + \theta s \right) ds = 0.25 \theta {x}^{2} + 0.5 x - 0.25 \theta + 0.5$$
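For reference, the antiderivative step behind that CDF:

$$\int_{-1}^{x} 0.5 \left(1 + \theta s \right) ds = \left[ 0.5 s + 0.25 \theta {s}^{2} \right]_{-1}^{x} = 0.25 \theta {x}^{2} + 0.5 x - 0.25 \theta + 0.5$$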

Setting $u = 0.25 \theta {x}^{2} + 0.5 x - 0.25 \theta + 0.5$ yields 2 solutions for $x$:

$${x}_{1, 2} = \mp \frac{\sqrt{ {\theta}^{2} + \theta \left( 4 u - 2 \right) + 1 } \pm 1}{\theta}, \; \theta \neq 0$$

As can be seen, the above holds only for $\theta \neq 0$ (for $\theta = 0$ the model is the uniform distribution on $\left[ -1, 1 \right]$, which can be sampled directly).

A simple validity check shows that the valid root is given by:

$$x = \frac{\sqrt{ {\theta}^{2} + \theta \left( 4 u - 2 \right) + 1 } - 1}{\theta}, \; \theta \neq 0$$

So, all that is needed is to generate random samples $u \sim U \left[ 0, 1 \right]$ and apply the transformation to them to get $x \sim f \left( x ; \theta \right) = 0.5 \left( 1 + \theta x \right)$.
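The inverse-transform step can be sketched as follows (a pure-Python illustration, not the linked MATLAB code; the helper name `sample_x` is mine). For this density $\mathbb{E}[X] = \theta / 3$, so the sample mean gives a quick sanity check:

```python
import math
import random

def sample_x(theta, u):
    # Inverse CDF: x = (sqrt(theta^2 + theta*(4u - 2) + 1) - 1) / theta,
    # with the theta = 0 case handled directly (uniform on [-1, 1]).
    if abs(theta) < 1e-12:
        return 2.0 * u - 1.0
    return (math.sqrt(theta * theta + theta * (4.0 * u - 2.0) + 1.0) - 1.0) / theta

rng = random.Random(0)
theta = 0.3
xs = [sample_x(theta, rng.random()) for _ in range(200000)]
mean = sum(xs) / len(xs)   # should be close to theta / 3 = 0.1
```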

Once this is done it is easy to check and verify the optimization problem (in this example using $\theta = 0.3$). The code is available at my StackExchange Mathematics Q2821115 GitHub Repository.

Remark
As @Did noted, the simulation fails for $\theta = \pm 1$.
The reason is that I used MATLAB's fzero(), which assumes the function is unconstrained. There are two solutions to this: use a grid search, which works, or maximize the Log Likelihood Function (by minimizing its negation) using fminbnd(), which supports bounds on the solution.
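The grid-search alternative can be sketched as follows (pure Python rather than MATLAB; the names `neg_log_lik` and `mle_grid` are hypothetical). By construction it only evaluates candidates inside $[-1, 1]$, so it also handles estimates on the boundary $\theta = \pm 1$:

```python
import math

def neg_log_lik(theta, xs):
    # Negative log-likelihood; guard against 1 + theta*x <= 0, which can
    # only happen on the boundary of the parameter range.
    total = 0.0
    for x in xs:
        p = 1.0 + theta * x
        if p <= 0.0:
            return math.inf
        total -= math.log(0.5 * p)
    return total

def mle_grid(xs, n_grid=20001):
    # Evaluate the negative log-likelihood on an even grid over [-1, 1]
    # and return the best candidate; unlike an unconstrained root finder,
    # this can never step outside the valid parameter range.
    thetas = [-1.0 + 2.0 * k / (n_grid - 1) for k in range(n_grid)]
    return min(thetas, key=lambda t: neg_log_lik(t, xs))
```

For a sample where every observation is positive, the log-likelihood is increasing over the whole range and the grid search correctly returns the boundary value $\theta = 1$.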