Your random number generator will produce (pseudo) random numbers uniformly distributed on $(0,1)$.
Suppose that we want to simulate a random variable $X$ which takes on value $3$ with probability $0.25$, value $7$ with probability $0.35$, and value $45.6$ with probability $0.40$ (of course, this is not your problem, just a similar one).
Use the following idea. If the random number generator produces a number between $0$ and $0.25$, report that $X$ has taken on value $3$.
If the random number generator produces a number between $0.25$ and $0.25+0.35$, that is, between $0.25$ and $0.60$, report that $X$ has taken on the value $7$.
Finally, if the random number generator produces a number between $0.25+0.35$ and $1$, report that $X$ has taken on the value $45.6$.
Note that the probability that the random number generator produces a number between $0$ and $0.25$ is $0.25$. So with probability $0.25$ we will be reporting that $X$ has taken on value $3$. The probability that the random number generator produces a number between $0.25$ and $0.25+0.35$ is $0.35$, so with probability $0.35$ we will be reporting that $X$ has taken on value $7$. And so on.
Remarks: 1) In principle, the random number generator produces "random" reals between $0$ and $1$. So the probability it produces a particular number $u$, like $1/\pi$, is $0$. In practice, the numbers produced are say decimals, to $10$ decimal places. So our program needs to deal with the highly improbable but possible situations where the random number produced is exactly at a boundary, like $0.25$. It doesn't really matter what we do.
2) The procedure we use can also be described in terms of the cumulative distribution function of the random variable we are trying to simulate. Perhaps you are expected to do it in that style, in preparation for more complex problems when we are simulating a continuously distributed random variable $X$. If a description in terms of the cdf is required, please indicate.
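The three cases above can be sketched in Python. The values and cumulative cutoffs come straight from the example, and, following remark 1, it is immaterial which side of a boundary like $0.25$ we assign to which value:

```python
import random

# Cumulative cutoffs 0.25, 0.25+0.35, 1.0 partition (0, 1);
# report the value whose subinterval the uniform draw lands in.
VALUES = [3, 7, 45.6]
CUTOFFS = [0.25, 0.60, 1.0]

def simulate_x(rng=random.random):
    u = rng()  # uniform draw; landing exactly on a cutoff has (essentially) probability 0
    for value, cutoff in zip(VALUES, CUTOFFS):
        if u < cutoff:
            return value
    return VALUES[-1]  # unreachable for u < 1, kept as a safe fallback
```

Over many draws the empirical frequencies of $3$, $7$, and $45.6$ settle near $0.25$, $0.35$, and $0.40$.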
You will want to use the following theorem.
For a rv $X$ with continuous cdf $F_{X}(t)$, $Y=F_{X}(X)\sim Uni(0,1)$.
Proof: We will treat the case where $F_{X}(t)$ is invertible, which you will see holds in your example. To find the distribution of $Y$, we compute its cdf:
$$F_{Y}(t)=P(Y<t)=P(F_{X}(X)<t)=P\left(F^{-1}_{X}(F_{X}(X))<F^{-1}_{X}(t)\right)=P(X<F^{-1}_{X}(t))=F_{X}(F^{-1}_{X}(t))=t$$ for $0<t<1$, where applying the increasing function $F^{-1}_{X}$ to both sides preserves the inequality. Thus $Y\sim Uni(0,1)$.
With this in hand, note that $F^{-1}_{X}(Y)=F^{-1}_{X}(F_{X}(X))=X$. The useful consequence is that we can simulate $X$ by drawing a $Uni(0,1)$ variable and plugging it into the inverse cdf $F^{-1}_{X}(t)$.
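As a concrete illustration of this recipe (with a stand-in distribution, not the one in your problem): for $X\sim\mathrm{Exp}(\lambda)$ the cdf is $F_X(t)=1-e^{-\lambda t}$, so $F^{-1}_X(u)=-\ln(1-u)/\lambda$, and the recipe becomes:

```python
import math
import random

def sample_exponential(rate, rng=random.random):
    """Inverse-cdf sampling for Exp(rate): F_X^{-1}(u) = -ln(1-u)/rate."""
    u = rng()                         # Y ~ Uni(0, 1)
    return -math.log(1.0 - u) / rate  # F_X^{-1}(Y) has the Exp(rate) distribution
```

Averaging many draws from `sample_exponential(2.0)` gives a value near the exponential mean $1/\lambda = 0.5$, as the theorem predicts.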
What I leave to you is to find the inverse cdf.
Best Answer
Suppose $X$ is a random variable with that distribution on $[-1,1]$ and $Y$ is uniformly distributed on $[0,1]$. Note that for $x \in [-1,1]$ $$\mathbb{P}(X < x) = \int_{-1}^x\frac{1+s}{2}ds = \frac{(x+1)^2}{4} = \mathbb{P}(Y < \frac{(x+1)^2}{4}) = \mathbb{P}(2 \sqrt{Y} - 1 < x).$$
So $2\sqrt{Y} - 1$ has exactly the given distribution. This procedure works in principle for other distributions. However, it works so smoothly here because the cdf $(x+1)^2/4$ can be inverted in closed form.
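A quick empirical check of this answer, using only the computation above: draws of $2\sqrt{Y}-1$ should have mean $\int_{-1}^{1} x\,\frac{1+s}{2}\big|_{s=x}\,dx = 1/3$, and about a quarter of them should fall below $0$, since $P(X<0)=(0+1)^2/4=0.25$:

```python
import math
import random

def sample_x(rng=random.random):
    # X = 2*sqrt(Y) - 1 with Y ~ Uni(0,1) has density (1+x)/2 on [-1, 1]
    return 2.0 * math.sqrt(rng()) - 1.0
```

With a few hundred thousand draws, the sample mean lands near $1/3$ and the fraction of negative draws near $0.25$.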
You will not always be so lucky. For example, to generate a normally distributed random variable from a uniform one requires another smart trick.
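One standard such trick (one choice among several) is the Box-Muller transform, which turns two independent $Uni(0,1)$ draws into two independent standard normal draws:

```python
import math
import random

def box_muller(rng=random.random):
    """Box-Muller: two independent Uni(0,1) draws -> two independent N(0,1) draws."""
    u1 = 1.0 - rng()  # shift into (0, 1] so log(u1) is defined
    u2 = rng()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)
```

Note the inverse-cdf recipe is not used here: the normal cdf has no closed-form inverse, which is exactly why a trick like this is needed.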