You are asking about the nonnegative least squares problem. Matlab has a built-in function, `lsqnonneg`, for solving this type of problem. Here is a paper about an algorithm for nonnegative least squares problems:
Arthur Szlam, Zhaohui Guo, and Stanley Osher, "A Split Bregman Method for Non-Negative Sparsity Penalized Least Squares with Applications to Hyperspectral Demixing," February 2010.
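As an aside, if you work in Python rather than Matlab, SciPy ships a comparable routine, `scipy.optimize.nnls`. A minimal usage sketch, with made-up toy data:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))   # toy data
b = rng.standard_normal(20)

# Solves min ||Ax - b||_2 subject to x >= 0.
x, residual_norm = nnls(A, b)
print(x.min() >= 0)   # True: every entry of x is nonnegative
```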
I also think the proximal gradient method can be used to solve it pretty efficiently.
Here are some details about how to solve this problem using the proximal gradient method. The proximal gradient method solves optimization problems of the form
\begin{equation}
\text{minimize} \quad f(x) + g(x)
\end{equation}
where $f:\mathbb R^n \to \mathbb R$ is convex and differentiable (with a Lipschitz continuous gradient) and $g:\mathbb R^n \to \mathbb R \cup \{\infty\}$ is convex. (We also require that $g$ is lower semi-continuous, which is a mild assumption that is usually satisfied in practice.) The optimization variable is $x \in \mathbb R^n$. The nonnegative least squares problem has this form where
$$
f(x) = \frac12\| Ax - b \|_2^2
$$
and
$g$ is the convex indicator function for the nonnegative orthant:
$$
g(x) =
\begin{cases}
0 & \quad \text{if } x \geq 0, \\
\infty & \quad \text{otherwise.}
\end{cases}
$$
The proximal gradient iteration is
$$
x^{k+1} = \text{prox}_{tg}(x^k-t \nabla f(x^k)).
$$
Here $t > 0$ is a fixed step size, chosen in advance, which should satisfy $t \leq 1/L$, where $L$ is a Lipschitz constant for $\nabla f$. For our $f$, we can take $L = \|A\|_2^2$, the largest eigenvalue of $A^T A$. The prox-operator of $g$ is defined by
$$
\text{prox}_{tg}(x) = \text{arg}\,\min_u \, g(u) + \frac{1}{2t} \| u - x \|_2^2.
$$
For our specific choice of $g$, we have $\text{prox}_{tg}(x) = \max(x,0)$ (componentwise), the projection of $x$ onto the nonnegative orthant. (In general, the prox-operator of the indicator function of a closed convex set is the Euclidean projection onto that set, regardless of $t$.)
The gradient of $f$ is given by
$$
\nabla f(x) = A^T (Ax - b).
$$
So, it's as simple as that. There is also a line search version of the proximal gradient method, which I recommend using: it adjusts the step size $t$ adaptively at each iteration and often converges much faster than the fixed-step-size version.
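To make this concrete, here is a minimal sketch of the fixed-step iteration in Python/NumPy. The step size is $t = 1/L$ with $L = \|A\|_2^2$ as above; the function name, tolerance, and iteration cap are my own arbitrary choices:

```python
import numpy as np

def nnls_proximal_gradient(A, b, max_iter=5000, tol=1e-8):
    """Proximal gradient sketch for min 0.5*||Ax - b||_2^2 subject to x >= 0."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad f:
    t = 1.0 / L                                # largest eigenvalue of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)               # gradient of f
        x_new = np.maximum(x - t * grad, 0.0)  # prox step = projection onto x >= 0
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x
```

A line search variant would replace the fixed $t$ with a backtracking search at each iteration, at the cost of a few extra objective evaluations per step.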
Your problem can be formulated as a convex quadratic programming problem with a linear equality constraint and nonnegativity constraints on $x$:
$$
\begin{aligned}
\text{minimize} \quad & \frac{1}{2}\left( x^{T}(A^{T}A)x - 2(b^{T}A)x + b^{T}b \right) \\
\text{subject to} \quad & \sum_{i=1}^{n} x_{i}=1, \\
& x \geq 0.
\end{aligned}
$$
As a practical matter, you're probably best off using a well-written library routine for linearly constrained quadratic programming; many are available in a variety of programming languages. If you need to implement this yourself, an active set QP solver is relatively straightforward to write.
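For illustration, here is one way to hand this QP to a general-purpose solver in Python. SciPy's SLSQP method is not a specialized QP code, so treat this as a sketch rather than a recommendation; the toy data are made up:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))   # toy data
b = rng.standard_normal(20)
n = A.shape[1]

# Objective 0.5*||Ax - b||_2^2 and its gradient A^T(Ax - b).
fun = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
jac = lambda x: A.T @ (A @ x - b)

res = minimize(
    fun, x0=np.full(n, 1.0 / n), jac=jac, method="SLSQP",
    bounds=[(0, None)] * n,                                         # x >= 0
    constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - 1.0}], # sum(x) = 1
)
print(res.x, res.fun)
```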
Best Answer
"Straightforward" least squares cannot include constraints. It has a simple function to optimize, the sum of squared residuals, and that's it.
Exactly because of that, the solution $x$ can be found by means of linear algebra: it is the result of applying a linear operator to $b$. This linear operator is described by the matrix $(A^T A)^{-1} A^T$ if the system has a unique solution; if not, the pseudo-inverse $A^{+}$ defines a linear operator that gives the minimum-norm solution. (In Matlab it is recommended to use the backslash operator instead, for numerical reasons, but that doesn't change this basic idea.) As soon as you introduce constraints, what you are doing is no longer "least squares", and the solution is no longer given by a linear operator applied to $b$.
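A quick NumPy illustration of this point, with made-up data (`lstsq` plays the role of Matlab's backslash):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 5))   # full column rank with probability 1
b = rng.standard_normal(20)

# Unconstrained least squares is a linear operator applied to b:
x_normal = np.linalg.solve(A.T @ A, A.T @ b)     # (A^T A)^{-1} A^T b
x_pinv   = np.linalg.pinv(A) @ b                 # pseudo-inverse: A^+ b
x_lstsq  = np.linalg.lstsq(A, b, rcond=None)[0]  # preferred numerically

print(np.allclose(x_normal, x_pinv), np.allclose(x_pinv, x_lstsq))  # True True
```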
To my knowledge, the task of optimizing a quadratic function under linear (equality or inequality) constraints is the subject of quadratic programming.