It is well known that some statements about partial sums of multiplicative functions are extremely hard. For example, the Riemann hypothesis is equivalent to the assertion that $|\mu(1)+\mu(2)+\dots+\mu(n)| = O(n^{1/2+\epsilon})$ for every $\epsilon > 0$. However, I want to know about lower bounds for such partial sums, valid for infinitely many $n$. For instance, is it known that the partial sums of the Möbius function must infinitely often be somewhere near $n^{1/2}$ in magnitude? Can one at least prove that they are unbounded? And what about the Liouville function $\lambda$? Are its partial sums unbounded? If so, can anyone give me a good reference or a quick proof? And how about general completely multiplicative functions that take values of modulus $1$? Is anything known about those?
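For experimentation with these partial sums, here is a minimal sketch (helper names `mobius_sieve`, `liouville_sieve` are mine, not standard library functions) that sieves $\mu$ and $\lambda$ up to $n$ and prints $M(n)=\sum_{k\le n}\mu(k)$ and $L(n)=\sum_{k\le n}\lambda(k)$:

```python
# Hedged sketch: sieve mu(k) and lambda(k) for k <= N and compare the
# partial sums against sqrt(N). Names and layout are my own choices.
def mobius_sieve(n):
    """Return a list mu with mu[k] = Moebius function of k, via a linear sieve."""
    mu = [1] * (n + 1)
    primes = []
    is_comp = [False] * (n + 1)
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0  # squared prime factor kills mu
                break
            mu[i * p] = -mu[i]
    return mu

def liouville_sieve(n):
    """Return lam with lam[k] = (-1)^Omega(k), counting prime factors with multiplicity."""
    omega = [0] * (n + 1)
    for p in range(2, n + 1):
        if omega[p] == 0:  # p is prime: no smaller prime has touched it
            pk = p
            while pk <= n:
                for m in range(pk, n + 1, pk):
                    omega[m] += 1  # each prime-power divisor adds one to Omega
                pk *= p
    return [0] + [(-1) ** omega[k] for k in range(1, n + 1)]

N = 10000
mu = mobius_sieve(N)
lam = liouville_sieve(N)
M = sum(mu[1:])
L = sum(lam[1:])
print(M, L)  # compare against sqrt(N) = 100
```

At this small range both sums are well below $\sqrt{N}$ in magnitude, which of course says nothing about the asymptotic questions asked above.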
[Math] Partial sums of multiplicative functions
analytic-number-theory, nt.number-theory, polymath5, reference-request
Related Solutions
As a matter of fact, it isn't hard to construct a multiplicative sequence $a_n$ such that $f(z)$ is an entire function without zeroes. Unfortunately, it is completely useless for the questions that you brought up as "motivation".
Here is the construction.
Claim 1: Let $\lambda_j\in [0,1]$ ($j=0,\dots,M$). Assume that $|a_j|\le 1$ and $\sum_ja_j\lambda_j^p=0$ for all $0\le p\le P$. Then, if $P>2eT$, we have $\left|\sum_j a_je^{\lambda_j z}\right|\le (M+1)(eT/P)^P$ for all $|z|\le T$.
Proof: Taylor decomposition and a straightforward tail estimate.
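Spelled out (a sketch under the claim's assumptions): since the first $P+1$ moments vanish, only the tail of the Taylor series survives,
$$\sum_j a_je^{\lambda_j z}=\sum_{p>P}\frac{z^p}{p!}\sum_j a_j\lambda_j^p,$$
so for $|z|\le T$, using $|a_j|\le 1$, $\lambda_j\in[0,1]$, and $p!\ge (p/e)^p$,
$$\Big|\sum_j a_je^{\lambda_j z}\Big|\le (M+1)\sum_{p>P}\frac{T^p}{p!}\le (M+1)\sum_{p>P}\Big(\frac{eT}{p}\Big)^p\le (M+1)\,\frac{(eT/P)^{P+1}}{1-eT/P}\le (M+1)\Big(\frac{eT}{P}\Big)^P,$$
where the last step uses $P>2eT$, so the geometric ratio is at most $1/2$ and $2(eT/P)^{P+1}\le (eT/P)^P$.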
Claim 2: Let $P$ be large enough. Let $\Delta>0$, $M>P^3$ and $(M+1)\Delta<1$. Let $I_j$ ($0\le j\le M$) be $M+1$ adjacent intervals of length about $\Delta$ each arranged in the increasing order such that $I_0$ contains $0=\lambda_0$. Suppose that we choose one $\lambda_j$ in each interval $I_j$ with $j\ge 1$. Then for every $|a_0|\le 1$, there exist $a_j\in\mathbb C$ such that $|a_j|\le 1$ and $\sum_{j\ge 0} a_j\lambda_j^p=0$ for $0\le p\le P$.
Proof: By duality, we can restate it as the claim that $\sum_{j\ge 1}|Q(\lambda_j)|\ge |Q(0)|$ for every polynomial $Q$ of degree $P$. Now, let $I$ be the union of $I_j$. It is an interval of length about $M\Delta$, so, by Markov's inequality, $|Q'|\le CP^2(M\Delta)^{-1} A\le CP^{-1}\Delta^{-1}A$ where $A=\max_I |Q|\ge |Q(0)|$. But then on the 5 intervals $I_j$ closest to the point where the maximum is attained, we have $|Q|\ge A-5\Delta CP^{-1}\Delta^{-1}A\ge A/2$. The rest is obvious.
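For reference, the inequality being invoked is the Markov brothers' bound: for a polynomial $Q$ of degree $P$ on an interval $I$ of length $\ell$,
$$\max_I|Q'|\le\frac{2P^2}{\ell}\max_I|Q|.$$
With $\ell\approx M\Delta$ and $M>P^3$, this is exactly the $CP^2(M\Delta)^{-1}A\le CP^{-1}\Delta^{-1}A$ estimate used above.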
Claim 3: Suppose that $a_0$ is fixed and $a_j$ ($j\ge 1$) satisfy $\sum_{j\ge 0} a_j\lambda_j^p=0$ for $0\le p\le P$. Then we can change $a_j$ with $j\ge 1$ so that all but $P+1$ of them are exactly $1$ in absolute value and the identities still hold.
Proof: As long as we have more than $P+1$ small $a_j$, we have a line of solutions of our set of $P+1$ linear equations. Moving along this line we can make one of small $a_j$ large. Repeating this as long as possible, we get the claim.
Now it is time to recall that the logarithm of the function $f(z)$ is given by $$ L(z)=\sum_{n\in\Lambda}a_ne^{-z\log n} $$ where $\Lambda$ is the set of primes and prime powers and $a_n=m^{-1}a_p^m$ if $n=p^m$. We are free to choose $a_p$ for each prime $p$ in any way we want, but then the rest of the $a_n$ are uniquely determined. The key point is that there are many more primes than prime powers per unit length.
So, split the big positive numbers into intervals from $u$ to $e^\Delta u$, where $\Delta$ is a slowly decaying function of $u$ (we'll specify it later). Formally, we define the sequence $u_k$ by $u_0=$ something large and $u_{k+1}=e^{\Delta(u_k)}u_k$, but carrying the indices through every formula is too much of a headache, so I'll drop them. Choose also some slowly growing functions $M=M(u)$ and $P=P(u)$, to be specified later as well.
We need a few things:
1) Each interval should contain many primes. Since the classical prime number theorem has the error term $ue^{-c\sqrt{\log u}}$, this calls for $\Delta=\exp\{-\log^{\frac 13} u\}$. Then we still have at least $u^{4/5}$ primes in each interval (all we need is to beat $u^{1/2}$ with some margin).
2) We should have $M\Delta\ll 1$, $M>P^3$, and $u\left(\frac{eT}P\right)^P\le (2u)^{-T-3}$ for any fixed $T>0$ and all sufficiently large $u$. This can be easily achieved by choosing $P=\log^2 u$ and $M=\log^6 u$.
3) At last, we'll need $M(P+\sqrt u)\ll u^{4/5}$, which is true for our choice.
Now it is time to run the main inductive construction. Suppose that $a_n$ are already chosen for all $n$ in the intervals up to $(u,e^{\Delta}u)$ and we still have almost all primes free in the intervals following the current one (we'll check this condition in the end). We want to assign $a_p$ for all $p$ in our interval for which the choice hasn't been made yet or was made badly.

We start by looking at all $a_p$ that are not assigned yet or assigned in a lame way, i.e., less than one in absolute value. Claim 3 (actually a small modification of it) allows us to upgrade all but $P+1$ of them to good ones (having absolute value $1$) at the expense of adding to $L(z)$ an entire function that is bounded in the disk of radius $T$ by $(2u)^{T} u\left(\frac{eT}P\right)^P\le u^{-3}$.

Now we are left with at most $\sqrt u$ powers of primes and $P+1$ lame primes to take care of. We need the prime powers to participate in small sums as they are, and we need the small coefficients to be complemented by something participating in small sums too. For each of them, we choose $M$ still-free primes in the next $M$ intervals (one in each) and apply Claim 2 to make a (lame) assignment so that the corresponding sum is again bounded by $u^{-3}$ in the disk of radius $T$. We have at most $u$ such sums, so the total addition will be at most $u^{-2}$. This finishes the interval off.

Now it remains to notice that we used only about $\sqrt u+P$ free primes in each of the next intervals and went only $M$ intervals ahead. This means that in each interval only about $M(\sqrt u+P)$ free primes will ever be used for compensating the previous intervals, so we'll never run out of free primes. Also, the sum of the blocks we constructed converges to an entire function. At last, when $\Re z>1$, we can change the order of summation and exponentiate, finally getting the Dirichlet series representation that we need.
The end.
The answer is "yes": the order of $2$ mod $p$ is almost always as large as the square root of $p$ (actually you get epsilon less than this in the exponent). If you take $r$ multiplicatively independent numbers and ask for the size of the group they generate mod $p$, the exponent is $r/(r + 1)$. This is in a paper of mine, then in a paper of the Murtys, and I think it is referenced in some form by Heath-Brown (it is the less deep part of his technique; to get something serious out of it you need something like Chen's method for Goldbach).
Best Answer
This reference contains the best result of this kind currently known for $\mu(n)$:
Tadej Kotnik and Herman te Riele The Mertens Conjecture revisited. Algorithmic number theory, 156--167, Lecture Notes in Comput. Sci., 4076, Springer, Berlin, 2006.
They prove that $\limsup_{x \rightarrow +\infty}M(x)/\sqrt{x} \geq 1.218$ and that $\liminf_{x \rightarrow +\infty}M(x)/\sqrt{x} \leq -1.229$. Here
$ M(x) = \sum_{n \leq x}\mu(n) $
is the conventional notation for the summatory function of the Möbius function. Their proof is a mixture of analytic number theory and large scale computations. They also have a survey of what is known and what is conjectured about the size of $M(x)$.
Now on to the Liouville function $\lambda(n)$ and its summatory function $L(x)$. The latter is very closely connected with $M(x)$, for
$ \sum_{n \leq \sqrt{x}}\mu(n)L\left(\frac{x}{n^2}\right) = \sum_{n \leq \sqrt{x}}\mu(n)\sum_{m \leq x/n^2}\lambda(m) = \sum_{N \leq x}\sum_{mn^2 = N}\mu(n)\lambda(m) = \sum_{N \leq x}\mu(N) = M(x). $
Thus
$ |M(x)| \leq \sum_{n \leq \sqrt{x}}|L\left(\frac{x}{n^2}\right)| $
and so the assumption
$ L(x) = o\left(\frac{\sqrt{x}}{\log^{1 + \epsilon}(x)}\right) $
for example, leads to a contradiction with the Kotnik-te Riele result (or earlier results) for any $\epsilon > 0$.
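The convolution identity above is straightforward to confirm numerically. A minimal sketch, with naive trial-division implementations of $\mu$ and $\lambda$ (the helper names `mu`, `lam`, `M`, `L` are mine):

```python
import math

def mu(n):
    """Naive Moebius function via trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # square factor kills mu
            result = -result
        d += 1
    if n > 1:
        result = -result  # one remaining prime factor
    return result

def lam(n):
    """Naive Liouville function (-1)^Omega(n)."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return (-1) ** count

def M(x):  # summatory Moebius function
    return sum(mu(n) for n in range(1, x + 1))

def L(x):  # summatory Liouville function
    return sum(lam(n) for n in range(1, x + 1))

# Check sum_{n <= sqrt(x)} mu(n) * L(floor(x / n^2)) == M(x) at a few points.
for x in (50, 200, 1000):
    lhs = sum(mu(n) * L(x // (n * n)) for n in range(1, math.isqrt(x) + 1))
    assert lhs == M(x), (x, lhs, M(x))
print("identity verified")
```

Here $L(x/n^2)$ is $L(\lfloor x/n^2\rfloor)$, since $L$ only changes at integers, which is why integer division suffices.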
My guess is that if one looks up the old (pre-computer) results on $|M(x)|$ from below, one might prove that $\limsup_{x \rightarrow +\infty}|L(x)|/\sqrt{x} > 0$. This may even be in the literature somewhere.
Alternatively
$ \sum_{n = 1}^{\infty}\lambda(n)n^{-s} = \prod_{p}\sum_{k=0}^{\infty}\lambda(p^k)p^{-ks} = \prod_{p}\sum_{k=0}^{\infty}(-1)^kp^{-ks} = \prod_{p}(1 + p^{-s})^{-1} = \frac{\zeta(2s)}{\zeta(s)} $
by the Euler product formula, and from here on it is more elementary than it was with $M(x)$ in the argument that David Speyer gave, because we don't need the zeros on the critical line. For $\zeta(2s)$ has a pole at $s = 1/2$ which is not canceled by a pole of $\zeta(s)$ at the same point. Thus $L(x) = O(x^{\alpha})$ is impossible with $\alpha < 1/2$ by partial summation.
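One can sanity-check the identity $\sum_n \lambda(n)n^{-s} = \zeta(2s)/\zeta(s)$ numerically at a point of absolute convergence, say $s=2$, where the right-hand side is $\zeta(4)/\zeta(2) = \pi^2/15$. A hedged sketch (the `lam` helper is my own naive implementation):

```python
import math

def lam(n):
    """Naive Liouville function (-1)^Omega(n)."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return (-1) ** count

# At s = 2: zeta(4)/zeta(2) = (pi^4/90) / (pi^2/6) = pi^2/15.
N = 20000
partial = sum(lam(n) / n**2 for n in range(1, N + 1))
target = math.pi**2 / 15
print(partial, target)  # the tail is at most sum_{n>N} n^(-2) ~ 1/N
```

The truncation error is bounded in absolute value by $\sum_{n>N} n^{-2} \approx 1/N$, so the agreement is already good at this modest $N$.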
For multiplicative functions of modulus $1$, the situation is much less clear. For simplicity, assume that $f(n)$ is a totally multiplicative function (multiplicative and $f(p^k) = f(p)^k$) with $|c_p| = 1$ where $c_p = f(p)$. The Liouville function is the case $c_p \equiv -1$. Then
$ \sum_{n = 1}^{\infty}f(n)n^{-s} = \prod_{p}\frac{1}{1 - c_pp^{-s}}, \qquad A(x) = \sum_{n \leq x}f(n) $
by the Euler product formula. The basic principle is that if $A(x) = O(x^{\alpha})$, then the Dirichlet series on the left hand side is convergent in the half plane $\sigma > \alpha$, so the sum is holomorphic in that half plane. If we can find a singularity $s_0$ of the product on the right hand side with $\mathrm{Re}(s_0) = \sigma_0$, that tells us that $A(x) = O(x^{\alpha})$ with $\alpha < \sigma_0$ is impossible. The bad thing now is that the product may diverge at a point without having a singularity there, because the product may diverge to zero, and a holomorphic function may be zero at a point without being singular there.
But it is straightforward to show that if $\mathrm{Re}(c_p) \geq \delta > 0$ for all $p$, then $A(x) = O(x^{\alpha})$ is impossible for any $\alpha < 1$, by showing that $\mathrm{Re}\sum_{p}c_pp^{-\sigma} \geq \delta\sum_{p}p^{-\sigma}$ tends to infinity as $\sigma \rightarrow 1^{+}$.
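A hedged numerical illustration of that divergence, taking $c_p \equiv 1$ (so $\delta = 1$): the truncated prime sum $\sum_{p \le 10^6} p^{-\sigma}$ grows as $\sigma$ decreases toward $1$, mirroring $\sum_p p^{-\sigma} = \log\frac{1}{\sigma - 1} + O(1)$. The sieve helper is my own.

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

primes = primes_up_to(1_000_000)

# The truncated sums increase termwise as sigma decreases toward 1.
vals = {}
for sigma in (1.5, 1.2, 1.05):
    vals[sigma] = sum(p ** -sigma for p in primes)
    print(sigma, round(vals[sigma], 4))
```

For general $c_p$ with $\mathrm{Re}(c_p) \geq \delta$, the real part of $\sum_p c_p p^{-\sigma}$ is at least $\delta$ times this sum, so it inherits the same divergence.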