If $f(f(x)) = x+1$ and $f(x+1) = f(x) + 1$, where $f: \Bbb R \rightarrow \Bbb R$ is real-analytic, bijective, and monotonically increasing, is it true that $f(x) = x + 1/2$?
I have tried to represent $f(x)$ as a power series in a neighborhood of an arbitrary $x_{0}$ and of $y_{0} = f(x_{0})$. It's clear that we can choose a neighborhood $U_{0}(x_{0})$ whose image under $f$ is $U_{1}(y_{0})$. Then we can try to find the power series of $f(f(x))$, which is equal to $x+1$. But I cannot see how such an equation shows that $f(x) = x + 1/2$.
Related Solutions
FYI this was problem B5 in the 2010 Putnam exam (which asks whether there exists a strictly increasing $f: \Bbb R \rightarrow \Bbb R$ with $f'(x) = f(f(x))$), so you can find it here: http://amc.maa.org/a-activities/a7-problems/putnamindex.shtml
They had a pretty succinct solution. Suppose $f$ is strictly increasing. Then for any $y_0$ you can define an inverse function $g(y)$ for $y>y_0$ such that $x=g(f(x))$. Differentiating, we get $1=g'(f(x))f'(x)=g'(f(x))f(f(x))$, so that $g'(y)=\dfrac{1}{f(y)}$. We know that $g$ attains arbitrarily large values since it is the inverse function of $f$ and $f$ is defined for all $x$, which means $g(z) - g(y_0) = \displaystyle \int_{y_0}^zg'(y)dy = \int_{y_0}^z\frac{dy}{f(y)}$ must diverge as $z\rightarrow\infty$.
Now all we have to do is show that $f$ is bounded below by a function that causes the integral to converge. For $x>g(y_0)\equiv g_0$, we have $f'(x)>g_0$, so we can assume that for some $\beta$ and $x$ large enough, $f(x)>\beta x$. Iterating this argument, we get that $f(x)>\alpha x^2$ for some $\alpha$ and $x$ large enough. So we can assume that $f(x)$ is asymptotically greater than $\alpha x^2$. But then the integral above converges (indeed $\int_c^{\infty}\frac{dy}{\alpha y^{2}} = \frac{1}{\alpha c} < \infty$), contradicting that $g(z)$ is unbounded as $z\rightarrow\infty$. Thus, we conclude that $f$ cannot be strictly increasing.
It's not that your understanding is wrong; it's more that the wording of the question suggests that the definitions, properties, applications, and "cultural context" of power series are a bit tangled.
For concreteness and definiteness, let's work over the real numbers. If $x_{0}$ is a real number, and if $(a_{k})_{k=0}^{\infty}$ is an arbitrary sequence of real numbers, the associated power series with coefficients $(a_{k})$ and center $x_{0}$ is the expression $$ \sum_{k=0}^{\infty} a_{k} (x - x_{0})^{k}. \tag{1} $$
Power series make sense as purely algebraic entities. For example, two power series (with the same center) can be added termwise and multiplied using the "Cauchy product": \begin{gather*} \sum_{k=0}^{\infty} a_{k} (x - x_{0})^{k} + \sum_{k=0}^{\infty} b_{k} (x - x_{0})^{k} = \sum_{k=0}^{\infty} (a_{k} + b_{k}) (x - x_{0})^{k}, \\ \left(\sum_{k=0}^{\infty} a_{k} (x - x_{0})^{k}\right) \left(\sum_{k=0}^{\infty} b_{k} (x - x_{0})^{k}\right) = \sum_{k=0}^{\infty} \left(\sum_{j=0}^{k} a_{j} b_{k - j}\right) (x - x_{0})^{k}. \end{gather*} These definitions make sense whether or not the individual series converge (next item). The crucial point is, each coefficient of the sum or product is a finite algebraic expression in the coefficients of the "operands".
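If it helps to see this concretely, here is a minimal Python sketch (the truncated-coefficient-list representation and the function names are ad hoc, not from any library) showing that each output coefficient is a finite algebraic expression in the input coefficients:

```python
# Power series as truncated coefficient lists [a_0, a_1, ..., a_n],
# all with the same center x_0 (which never enters the arithmetic).

def series_sum(a, b):
    """Termwise sum: c_k = a_k + b_k."""
    return [x + y for x, y in zip(a, b)]

def cauchy_product(a, b):
    """Cauchy product: c_k = sum_{j=0}^{k} a_j * b_{k-j}."""
    n = min(len(a), len(b))
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(n)]

# Example: squaring the geometric series 1 + x + x^2 + ... gives
# 1/(1-x)^2, whose k-th coefficient is k + 1.
geom = [1.0] * 6
print(cauchy_product(geom, geom))  # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```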
For each real number $x$, the power series (1) is an infinite series of real numbers, which may converge (the sequence of partial sums $$ s_{n}(x) = \sum_{k=0}^{n} a_{k} (x - x_{0})^{k} $$ converges to a finite limit) or diverge (otherwise). Clearly, (1) converges for $x = x_{0}$. It's not difficult to show that:

a. If (1) converges for some $x$ with $|x - x_{0}| = r$, then (1) converges for every $x$ with $|x - x_{0}| < r$;
b. If (1) diverges for some $x$ with $|x - x_{0}| = r$, then (1) diverges for every $x$ with $|x - x_{0}| > r$.
It follows that for every power series (1), there exists an extended non-negative real number $R$ (i.e., $0 \leq R \leq \infty$) such that (1) converges for $|x - x_{0}| < R$ and diverges for $|x - x_{0}| > R$. (If $R = 0$, the former condition is empty; if $R = \infty$, the latter is empty.) This $R$ is called the radius of convergence of the power series (1). The power series $$ \sum_{k=0}^{\infty} k!\, x^{k},\qquad \sum_{k=0}^{\infty} \frac{x^{k}}{R^{k}},\qquad \sum_{k=0}^{\infty} \frac{x^{k}}{k!} $$ have radii $0$, $R$, and $\infty$, respectively.
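As a quick sanity check (a sketch only; it uses the ratio test $R=\lim_{k\to\infty}|a_{k}/a_{k+1}|$, whose limit exists for all three examples):

```python
import math

def ratio_radius(a, k):
    """Ratio-test estimate of the radius at index k: |a_k / a_{k+1}|."""
    return abs(a(k) / a(k + 1))

k, R = 50, 3.0
print(ratio_radius(lambda j: math.factorial(j), k))      # 1/(k+1): tends to 0
print(ratio_radius(lambda j: R ** -j, k))                # exactly R = 3.0
print(ratio_radius(lambda j: 1 / math.factorial(j), k))  # k+1: tends to infinity
```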
If (1) has positive radius (i.e., $0 < R$), the sum of the series defines a function $$ p(x) = \sum_{k=0}^{\infty} a_{k} (x - x_{0})^{k} $$ whose domain is the set of $x$ with $|x - x_{0}| < R$. In this region, $p$ is infinitely differentiable, and its derivatives are found by termwise differentiation, e.g., $$ p'(x) = \sum_{k=1}^{\infty} ka_{k} (x - x_{0})^{k-1} = \sum_{k=0}^{\infty} (k + 1) a_{k+1} (x - x_{0})^{k}. $$ Each derivative is itself a power series, convergent in the same region.
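Termwise differentiation is just a reindexing of the coefficient list; continuing the ad hoc sketch from above:

```python
import math

def derivative_coeffs(a):
    """Coefficients of p': the k-th coefficient is (k+1) * a_{k+1}."""
    return [(k + 1) * a[k + 1] for k in range(len(a) - 1)]

# exp has a_k = 1/k!, and differentiating reproduces the same coefficients.
exp_coeffs = [1 / math.factorial(k) for k in range(8)]
print(derivative_coeffs(exp_coeffs)[:4])  # equals exp_coeffs[:4]
```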
A function $f$ is analytic at $x_{0}$ if $f$ is represented by a convergent power series in some open interval about $x_{0}$, and is analytic if $f$ is analytic at $x_{0}$ for every interior point $x_{0}$ of the domain of $f$.
If $f$ is infinitely differentiable at some point $x_{0}$, the Taylor series of $f$ at $x_{0}$ is the power series $$ \sum_{k=0}^{\infty} \frac{f^{(k)}(x_{0})}{k!} (x - x_{0})^{k}. $$ As is well known, a function can be infinitely differentiable at $x_{0}$ without the Taylor series converging to $f$ in any open interval about $x_{0}$. The standard example is $f(x) = e^{-1/x^{2}}$ if $x \neq 0$, and $f(0) = 0$, whose Taylor series at $0$ is identically zero.
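A small numerical illustration of this flatness (illustrative only): near $0$ the function is smaller than any power of $x$, which is exactly why every Taylor coefficient at $0$ vanishes:

```python
import math

def flat(x):
    """e^{-1/x^2} for x != 0, and 0 at x = 0: smooth, Taylor series 0 at 0."""
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# flat(x) / x^n -> 0 as x -> 0 for every fixed n (shown here for n = 10),
# so all derivatives at 0, hence all Taylor coefficients, are 0.
for x in (0.5, 0.2, 0.1):
    print(x, flat(x), flat(x) / x**10)
```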
A transcendental function is, under the usual convention, real-analytic: the term is applied to analytic functions that are not algebraic. As Paramanand Singh notes, the defining property of a transcendental function is not "requires an infinite series" (i.e., "not a polynomial"), but "does not satisfy a polynomial equation in two variables" (i.e., "is not an algebraic function").
Power series are important for many reasons. The most common rationale for introducing them in calculus is to obtain algebraic/analytic expressions for the exponential function (and the closely related circular and hyperbolic functions, power functions with non-integer exponents, and so on). For example, from the exponential power series, one obtains $$ e = \exp(1) = \sum_{k=0}^{\infty} \frac{1}{k!}, $$ from which one can easily show $e$ is irrational.
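(The partial sums converge to $e$ very rapidly, with error comparable to the first omitted term, which is what drives the irrationality argument; a two-line check, illustrative only:)

```python
import math

# Partial sums of sum_k 1/k! approach e with error about 1/(n+1)!.
s = sum(1 / math.factorial(k) for k in range(12))
print(s, math.e, abs(s - math.e))  # error ~ 1/12! ~ 2e-9
```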
Another application is to obtain power series solutions of linear ordinary differential equations with non-constant coefficients. Power series are by no means the only infinite series useful for studying functions; Fourier series and wavelets come immediately to mind.
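As a toy sketch of the power-series method (using the constant-coefficient equation $y' = y$, $y(0) = 1$ for brevity; the same coefficient bookkeeping handles analytic non-constant coefficients): substituting $\sum a_{k}x^{k}$ and matching coefficients gives $(k+1)a_{k+1} = a_{k}$, hence $a_{k} = 1/k!$:

```python
import math

# Solve y' = y, y(0) = 1 by coefficient matching: a_{k+1} = a_k / (k+1).
a = [1.0]  # a_0 = y(0)
for k in range(10):
    a.append(a[k] / (k + 1))

x = 0.5
print(sum(c * x**k for k, c in enumerate(a)))  # ~ exp(0.5)
print(math.exp(x))                             # 1.6487...
```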
As for generalizations: Power series with complex coefficients (and complex centers) make perfect sense; the wordings and notation above are chosen to minimize the modifications required to consider complex power series. There are useful notions of matrix-valued power series, operator-valued power series, etc.
Complex power series exhibit interesting behavior (such as monodromy) not encountered over the reals, because a connected open set in the plane need not be simply-connected. Further, every holomorphic (i.e., complex-differentiable) function of one complex variable is automatically analytic. The logical strength of being "once complex-differentiable" gives complex analysis a completely different flavor from real analysis.
Best Answer
The answer is no. All that can be said is that $f$ is conjugate to the shift $x\mapsto x+1/2$. More precisely:
Theorem A real analytic function $\newcommand{\R}{{\mathbb R}}f:\R\to\R$ satisfies $f(f(x))=x+1$ for all real $x$ if and only if it can be written $$\tag{1}f(x)=h\left(h^{-1}(x)+\tfrac12\right)$$ where $h:\R\to\R$ is real analytic and satisfies $h(x+1)=h(x)+1$ and $h'(x)>0$ for all real $x$.
Remarks: 1. (1) is equivalent to $h^{-1}\circ f\circ h(x)=x+\frac12$, that is $h$ conjugates $f$ to the shift $x\mapsto x+\frac12$.
2. The Theorem is also valid for continuous functions instead of real analytic functions. The statement "$h'(x)>0$" has to be replaced by "$h$ strictly increasing".
3. An analogous theorem holds for real analytic $f$ satisfying $f^{\circ n}(x)=x+1$ where $f^{\circ n}$ denotes the repeated composition of $n$ copies of $f$.
Proof: The sufficiency of (1) has already been checked in previous answers or comments: $$f(f(x))=h\left(h^{-1}\left(h(h^{-1}(x)+\tfrac12)\right)+\tfrac12\right)=h(h^{-1}(x)+1)=h(h^{-1}(x))+1=x+1.$$ So let us prove the necessity as well. Recall from the comments to the question that $f(x+1)=f(x)+1$ for all $x$ because both expressions are equal to $f(f(f(x)))$, and that $f$ is necessarily bijective with inverse $f^{-1}(x)=f(x-1)=f(x)-1$. Since $f$ is continuous and bijective, it is also strictly monotonic. As $f(x+1)=f(x)+1$, it must be strictly increasing.
In order to motivate the construction, let us discuss the uniqueness of $h$ in (1), if it exists. We claim that $\tilde h=h\circ \phi$ with a strictly increasing real analytic $\phi:\R\to\R$ satisfying $\phi(x+\frac12)=\phi(x)+\frac12$ works as well as $h$. Indeed $$\tilde h(\tilde h^{-1}(x)+\tfrac12)=h(\phi(\tilde h^{-1}(x)+\tfrac12))=h(\phi(\tilde h^{-1}(x))+\tfrac12)=h(h^{-1}(x)+\tfrac12).$$ As $g(x)=h(x)-x$ is 1-periodic, it can be written $g(x)=u(x)+v(x)$ where $u(x)=(g(x+\frac12)+g(x))/2$ is $1/2$-periodic and $v(x)=(g(x)-g(x+\frac12))/2$ satisfies $v(x+\frac12)=-v(x)$. Hence $h=\psi+v$, where $\psi(x)=x+u(x)$ satisfies $\psi(x+\frac12)=\psi(x)+\frac12$. Observe that $\psi(x)=(h(x)+h(x+\frac12))/2-\frac14$ also has a positive derivative and hence an inverse function $\psi^{-1}$. Going over to $\tilde h=h\circ \psi^{-1}$, we arrive at some $\tilde h$ also working in (1) but such that $\tilde v(x)=\tilde h(x)-x=v\circ \psi^{-1}(x)$ satisfies $\tilde v(x+\frac12)=-\tilde v(x)$.
In summary: If some $h$ with (1) exists then there also exists an $h$ satisfying additionally $h(x)=x+v(x)$ where $v(x+\frac12)=-v(x)$.
Now let a real analytic function $f:\R\to\R$ satisfying $f(f(x))=x+1$ be given. We are looking for some $h(x)=x+v(x)$ with $v(x+\frac12)=-v(x)$ satisfying $f(h(x))=h(x+\frac12)$ or equivalently $f(x+v(x))=x+\frac12-v(x)$ or, with $F(x)=x+f(x)$, $$F(x+v(x))=2x+\tfrac12.$$ This motivates our definition $$h(x)=x+v(x)=F^{-1}\left(2x+\tfrac12\right).$$ This is well defined because $F'(x)\geq1$ and it is well known that it defines a real analytic function. We find $h'(x)=2/F'(h(x))>0$ and, via $F(x+v(x))=2x+\frac12$, we obtain that $$\tag{2}f(x+v(x))=x+\tfrac12-v(x)\mbox{ for all }x.$$ We claim that $v(x+\frac12)=-v(x)$ which shows that $f(h(x))=h(x+\frac12)$ and $v$ is 1-periodic and thus completes the proof.
For a proof of the claim, (2) implies that $x+v(x)+1=f(f(x+v(x)))=f(x+\frac12-v(x))$ and therefore, replacing $x$ by $x+\frac12$, we find $x+\tfrac32+v(x+\tfrac12)=f(x+1-v(x+\tfrac12))$ and thus, using $f(x+1)=f(x)+1$, $$x+\tfrac12+v(x+\tfrac12)=f(x-v(x+\tfrac12)).$$ Hence $F(x-v(x+\frac12))=2x+\frac12=F(x+v(x))$, which implies $-v(x+\frac12)=v(x)$ as wanted, since $F$ is injective.
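To make the "no" of the answer concrete, here is a hedged numerical sketch (the particular choice $h(x)=x+\frac{1}{10}\sin 2\pi x$ and the bisection inverse are mine, purely for illustration): this $h$ is real analytic, satisfies $h(x+1)=h(x)+1$, and has $h'(x)=1+\frac{\pi}{5}\cos 2\pi x>0$, so formula (1) of the Theorem yields a solution of $f(f(x))=x+1$ that is not $x+\frac12$:

```python
import math

def h(x):
    """Strictly increasing analytic h with h(x+1) = h(x) + 1."""
    return x + 0.1 * math.sin(2 * math.pi * x)

def h_inv(y, tol=1e-12):
    """Invert h by bisection; h(y-1) < y < h(y+1) brackets the root."""
    lo, hi = y - 1.0, y + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def f(x):
    """f = h o (shift by 1/2) o h^{-1}, as in formula (1)."""
    return h(h_inv(x) + 0.5)

for x in (0.0, 0.3, 1.7):
    print(x, f(x), f(f(x)) - (x + 1))  # last column ~ 0, i.e. f(f(x)) = x+1
```

For instance $f(0.3)\approx0.61$ rather than $0.8$, while the last column stays at roughly the bisection tolerance, confirming $f(f(x))=x+1$.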