In many ways I am atypical in how I approach a problem, but it works for me. Specifically, I try to understand an example in as much detail as I possibly can. If the example is too complicated, then I make a simpler one. I bring as much intricate detail to bear on the example as I possibly can.
For example, instead of trying to understand Lie groups and Lie algebras in general, start with the circle and the line that is tangent at the point (1,0). What is the exponential map? Oh, OK. Now how about $SU(2)$ and $su(2)$? Can you understand that the Lie group is the $3$-dimensional sphere? Can you understand the coordinates? Can you understand the equators? How do $i,j$ and $k$ really work?
What is the difference between the multiplication rule $i\times i =0$ and $i^2=-1$?
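The distinction can even be checked by machine; a small sketch in Python (the helper names `cross` and `qmul` are my own, using the $(w,x,y,z)$ quaternion convention):

```python
def cross(a, b):
    # 3D cross product of vectors a and b
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def qmul(p, q):
    # Hamilton quaternion product, tuples ordered (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i_vec = (1.0, 0.0, 0.0)        # i as a vector in R^3
i_quat = (0.0, 1.0, 0.0, 0.0)  # i as a quaternion

cross(i_vec, i_vec)   # (0.0, 0.0, 0.0): i x i = 0
qmul(i_quat, i_quat)  # (-1.0, 0.0, 0.0, 0.0): i^2 = -1
```

Same symbol $i$, two different products, two different answers.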
I spend time pondering. And often my notebooks will contain tangential problems or specific computations. I will keep doing the computation until I get it right! If necessary, I will write a program to complete the computation. When I understand the example completely, it is usually easy to abstract.
Then I follow up, usually writing in a notebook or several notebooks before I begin writing on the computer. I have an advantage in that I have long-distance collaborators, so it becomes necessary to explain the idea to the collaborator(s). That is the first writing stage: write for someone who knows your shorthand and your metaphors. The second stage is to write for someone who does not. Then I write with a set of colleagues in mind, but I assume the colleagues do not remember anything from the previous work. I also try to explicate the notation, writing for example "the function $f$," "the knot $k$," or "the tubular neighborhood $N$."
A colleague in complex analysis uses $z$ only for a complex number, $x$ for a real variable, and $n$ for an integer. These variable choices are culturally determined, and so one keeps with the culture of the discipline unless there is good reason to deviate. As a final example, the variable $A$ in the bracket polynomial is known to everyone in the field. The variables $q$, $t$, $X$, etc. are less well known and involve different normalizations, so it is the burden of the author to relate these to the better-known choices.
If you really want to use a "formal" and "succinct" mathematical expression, you can use the Hadamard product, where the Hadamard product of two matrices is obtained by multiplying the corresponding entries.
So if the input is treated as a vector: $\text{inp}$, then the output vector $\text{out}$ is given by
$$ \text{out} = \text{inp} \circ \text{inp}$$
where $\circ$ is the Hadamard product.
Note that you can treat a vector as a $1\times n$ matrix.
Of course, this will probably force your readers to figure out what a Hadamard product is, and you might as well just write
$$ \text{out}[i] = \text{inp}[i]^2 \quad \forall i \in \{0, 1, \dots , n-1\} $$
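Either way, the computation itself is tiny; a minimal sketch in Python (the helper name `hadamard` is my own):

```python
def hadamard(u, v):
    """Entrywise (Hadamard) product of two equal-length vectors."""
    assert len(u) == len(v)
    return [a * b for a, b in zip(u, v)]

inp = [1.0, 2.0, 3.0]
out = hadamard(inp, inp)  # [1.0, 4.0, 9.0]: out[i] == inp[i]**2
```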
Best Answer
Well yeah, sort of. But not really, IMO.
What most programming languages do is awkward, mathematically speaking. An imperative for-loop increments a variable. That shouldn't really be possible: maths doesn't know time. If you define something once, then it conceptually holds always; i.e., if you start out with $i = 0$, it's not possible to later have $i = 3$.
What's really going on, mathematically, is that you write an abstraction, an expression for all $i$. Then you apply to that expression the higher-order operation “sum over all such $i$ in some range”. The variable $i$ never actually has any particular value, it's just a placeholder for “consider value here”.
There is a proper mathematical framework that can nicely express computations like a sum. It's called lambda calculus. In this, such an abstraction over some variable is called a lambda expression, written like* $$ \lambda i \mapsto \frac1{2^i} $$ More commonly, this kind of abstraction would be called a function and written $$ f : \mathbb{N}\to\mathbb{Q}, \quad f(i) = \frac{1}{2^i} $$ or, in programming languages, something like
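For instance, a sketch in Python (the name `f` is just for illustration):

```python
# the abstraction "given i, yield 1/2^i" as a first-class value
f = lambda i: 1 / 2**i
```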
Now, what a summation operator does is take such a lambda function and yield a number equal to the function evaluated at all possible arguments in a range and summed up. For instance, $$ S = \sum_{i=0}^{5}\frac1{2^i} $$ is in a sense shorthand notation for something like $$ S = \Sigma(0,5,f). $$ Note that I didn't write $f(i)$ but just $f$: this is not the result of applying $f$ to any particular $i$, but the function object $f$ itself. It is not a number but, well, a function.
It's not necessary to first give $f$ a name and then use it in the sum; I can also just put in the lambda expression: $$ S = \Sigma(0,5,\lambda i\mapsto 1/2^{i}). $$ I understand sums as a special way of writing this expression.
Now, as for how the summation operator can actually be implemented in a computer: well, in an imperative language, a for-loop would certainly be a reasonable way to do it. Like†,
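A sketch in Python (the signature `sumFromTo(i0, iEnd, f)` is my own choice, mirroring $\Sigma(i_0, i_{\text{end}}, f)$ above; per the footnote, the upper bound is excluded):

```python
def sumFromTo(i0, iEnd, f):
    """Imperative summation: loop i over [i0, iEnd), accumulating f(i)."""
    s = 0
    for i in range(i0, iEnd):  # i is mutated on each pass -- fine for a computer,
        s += f(i)              # awkward from a mathematical standpoint
    return s

sumFromTo(0, 6, lambda i: 1 / 2**i)  # 1 + 1/2 + ... + 1/32 = 1.96875
```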
But as I said, mathematically it doesn't really make sense to change the value of a variable. However, it is possible to define the equivalent of such a loop directly in lambda calculus: through recursion. This requires somewhat weird definitions to use in the original maths/logic setting, but it can also be written in programming languages:
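A recursive sketch in Python (the name `sumFromTo_rec` matches the one used in the discussion; same exclusive upper bound as the loop version):

```python
def sumFromTo_rec(i, iEnd, f):
    """Recursive summation: no variable is ever mutated."""
    if i < iEnd:
        # the recursive call binds a *new* variable i, with value old-i + 1
        return f(i) + sumFromTo_rec(i + 1, iEnd, f)
    else:
        return 0

sumFromTo_rec(0, 6, lambda i: 1 / 2**i)  # 1.96875, same as the loop version
```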
Now, in the recursive call, `i` will have the value of `i+1` from the outer call. So, am I doing mathematically unsound mutation here again? No, actually! The `i` function argument is only an abstraction. In the inner function call, it's simply a different variable, which just happens to also be called `i` (but that's an irrelevant implementation detail).

You may well think this silly. If the computer can just re-use the same variable `i` by incrementing it, why should we bother with defining a new one? You'd be right, in a way. `sumFromTo_rec` would in fact be inefficient in a language like C, because the computer would have to allocate an actual new integer on the stack in each function call.

However, quite practically speaking, mutating variables also opens up numerous possibilities for programming bugs. Functional programming languages therefore eschew (or entirely forbid) this. These languages have come up with various optimisations like tail calls to avoid the overhead of allocating new variables in every recursive call (so basically, you write clean mathematical semantics, but under the hood the same memory is actually re-used, as it would be in an imperative language).
If you're a mathematically-interested programmer (or a computation-interested mathematician) I recommend you check out Haskell. It's a modern functional programming language with very clean semantics, yet great performance and good compatibility to “real world applications”. The above recursive summation would in Haskell look as simple as
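A sketch (assuming the same exclusive-upper-bound convention as above):

```haskell
-- Recursive summation of f over the range [i0, iEnd)
sumFromTo :: Int -> Int -> (Int -> Double) -> Double
sumFromTo i0 iEnd f
  | i0 < iEnd = f i0 + sumFromTo (i0 + 1) iEnd f
  | otherwise = 0
```

For example, `sumFromTo 0 6 (\i -> 1 / 2^i)` evaluates to $63/32 = 1.96875$.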
That's similar enough to how the rigorous recursive definition of $\sum$ in maths notation would look: $$ \sum_{i=i_0}^{i_{\text{end}}} f(i) = \begin{cases} f(i_0) + \sum_{i=i_0+1}^{i_{\text{end}}}f(i) & \text{if $i_0 < i_{\text{end}}$} \\ 0 & \text{otherwise} \end{cases} $$ Observe that $f(i)$ as such is never actually evaluated: it's always $f(i_0)$ (however, that variable has a different value at each recursion level). This is better reflected in a programming language than in the maths notation: the former really makes a point of not applying $f$ to anything when you're talking about the function itself, rather than the result of applying that function to any particular argument.
*The classical standard notation in lambda calculus is actually $\lambda i.\ 1/{2^i}$, but I reckon the arrow makes it clearer what's going on than a dot. Most programming languages use an arrow symbol if they offer lambda functions, e.g. in Haskell you'd write
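For the example above, that would be (a sketch; the name `f` is just for illustration):

```haskell
-- a lambda function bound to a name, Haskell's arrow syntax
f :: Int -> Double
f = \i -> 1 / 2^i
```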
†Unlike in maths, it's generally preferred in programming (except in Fortran and Matlab, which I don't consider very well-designed) to express a range with the upper bound excluded. This turns out to be the most useful convention in practice; see what Dijkstra famously wrote on the matter.