Preconditions of the definition of a multivariate real-valued function being real-analytic

Tags: analytic-functions, multivariable-calculus, partial-derivative, real-analysis, taylor-expansion

Recently I have been reading a text and got confused about the preconditions in the definition of a multivariate real-valued function being real-analytic. Write the coordinates on $\Bbb R^n$ as $x=(x^1,\dots,x^n)$ and let $p=(p^1,\dots,p^n)$ be a point in an open set $U\subseteq\Bbb R^n$. The text defines real-analyticity of a function $f:U\to\Bbb R$ as follows:

The function $f$ is real-analytic at $p$ if in some neighborhood of $p$ it is equal to its Taylor series
at $p$: $$f(x)=f(p)+\sum\limits_i\frac{\partial f}{\partial x^i}(p)(x^i-p^i)+\frac{1}{2!}\sum\limits_{i,j}\frac{\partial^2f}{\partial x^i\partial x^j}(p)(x^i-p^i)(x^j-p^j)\\+\cdots+\frac{1}{k!}\sum\limits_{i_1,\dots,i_k}\frac{\partial^k f}{\partial x^{i_1}\cdots\partial x^{i_k}}(p)(x^{i_1}-p^{i_1})\cdots(x^{i_k}-p^{i_k})+\cdots,$$ where the general term is summed over all $1\le i_1,\dots,i_k\le n$.
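To make the definition concrete, here is a small numerical sketch (my own example, not from the text): for $f(x,y)=e^{x+2y}$ at $p=(0,0)$, every mixed partial $\partial^{a+b}f/\partial x^a\partial y^b$ equals $2^b e^{x+2y}$, so the Taylor coefficient of $x^a y^b$ is $2^b/(a!\,b!)$, and the partial sums of the series converge to $f$ near the origin.

```python
import math

# Hypothetical example: f(x, y) = exp(x + 2y) expanded at p = (0, 0).
# Its Taylor coefficient of x^a y^b is 2^b / (a! * b!), since every mixed
# partial d^(a+b) f / dx^a dy^b at the origin equals 2^b.

def taylor_partial_sum(x, y, degree):
    """Partial sum of the Taylor series of exp(x + 2y) at (0,0), up to total degree `degree`."""
    total = 0.0
    for a in range(degree + 1):
        for b in range(degree + 1 - a):
            total += (2.0 ** b) / (math.factorial(a) * math.factorial(b)) * x**a * y**b
    return total

x, y = 0.3, -0.2
exact = math.exp(x + 2 * y)
approx = taylor_partial_sum(x, y, 12)
assert abs(exact - approx) < 1e-10  # the partial sums agree with f near p
```

The absolute values of the omitted terms are dominated by the tail of $e^{|x|+2|y|}$, which is why a degree-$12$ truncation already matches to more than ten digits here.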

My question is: do we need to assume, as preconditions in the definition of real-analyticity, that 1) all partial derivatives of all orders, taken in any order, exist, and 2) the mixed partial derivatives are equal regardless of the order in which the variables are taken? As I see it, if 1) does not hold, the Taylor series is not even well-defined, let alone equal to $f(x)$. If 2) is missing, I would be totally confused about what the summands in the general term look like. Continuity of the mixed partial derivatives would guarantee 2), but I choose not to assume continuity so as to keep the list of preconditions to a minimum.

However, preconditions 1) and 2) are not mentioned explicitly in the text. I also checked the definition in some other textbooks and on Wikipedia, but no source states these preconditions explicitly. So I'd like to ask here for confirmation: should we assume conditions 1) and 2) in the definition of real-analyticity?

Two follow-up questions: do the text and other sources just assume these preconditions implicitly, and is it a convention in the math community that, when we write something down, all the preconditions that make sense of what is written are assumed to hold in advance, with the reader responsible for figuring them out?

I'm self-studying, so I don't have a math professor or TA to ask; something obvious to math students may be a big obstacle for me, so I would be very grateful for any help.

Best Answer

At this point I'm essentially just spouting out material which is readily available in Henri Cartan's text on complex analysis. Let us first consider a formal power series in $n$ variables with complex coefficients, \begin{align} S(X_1,\dots, X_n)=\sum_{I\in \Bbb{Z}_{\geq 0}^n} a_I X^I \equiv \sum_{i_1,\dots, i_n\geq 0} a_{i_1,\dots, i_n}X_1^{i_1}\cdots X_n^{i_n}. \end{align} So far this is just a formal series, in the sense that it is just a bunch of symbols, with no meaning attached to the sum notation. Let us now denote by $\Gamma_S$ the set of $(r_1,\dots, r_n)$ with each $r_i\geq 0$ such that the following sum is finite: \begin{align} \sum_{i_1,\dots, i_n\geq 0}|a_{i_1,\dots, i_n}|(r_1)^{i_1}\cdots (r_n)^{i_n}<\infty. \end{align} The set $\Gamma_S$ is of course non-empty, since it contains the origin $(0,\dots, 0)$. We define the domain of convergence of $S$ to be $\Delta_S := \operatorname{int}(\Gamma_S)$. Note that $\Delta_S$ is always an open set, but it may or may not be empty. If the domain of convergence is non-empty, this means there exist $r_1,\dots, r_n>0$ such that the above sum is finite; furthermore, it is easy to show (via the Weierstrass $M$-test) that if $r_1,\dots, r_n>0$ are such that the above sum is finite, then the series \begin{align} S(z_1,\dots, z_n)&=\sum_{i_1,\dots, i_n\geq 0}a_{i_1,\dots, i_n}(z_1)^{i_1}\cdots (z_n)^{i_n} \end{align}

converges absolutely and uniformly whenever $|z_i|\leq r_i$ for each $i\in\{1,\dots, n\}$ (one can also show that if $(|z_1|,\dots, |z_n|)$ does not lie in the closure of $\Gamma_S$, then the series $S(z_1,\dots, z_n)$ diverges).
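As a toy illustration of $\Gamma_S$ (my own example, not from Cartan): take $a_{i,j}=1$ for all $i,j\geq 0$, so $S(z_1,z_2)=\sum_{i,j} z_1^i z_2^j$. Then $\Gamma_S=[0,1)\times[0,1)$, since $\sum_{i,j} r_1^i r_2^j = \frac{1}{(1-r_1)(1-r_2)}$ is finite exactly when both $r_1<1$ and $r_2<1$.

```python
# Toy series with a_{i,j} = 1 for all i, j >= 0: S(z1, z2) = sum z1^i z2^j.
# Here Gamma_S = [0,1) x [0,1), and inside it the series converges absolutely
# to the product of two geometric series, 1 / ((1 - z1)(1 - z2)).

def partial_sum(z1, z2, N):
    """Sum the terms with 0 <= i, j < N."""
    return sum(z1**i * z2**j for i in range(N) for j in range(N))

z1, z2 = 0.5, 0.25                        # a point inside the domain of convergence
limit = 1.0 / ((1 - z1) * (1 - z2))
assert abs(partial_sum(z1, z2, 60) - limit) < 1e-12
```

At a point with some $|z_i|\geq 1$, the same partial sums grow without bound, matching the divergence statement above.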


That was a brief introduction to the basic definitions we need to talk about analyticity. Given an open set $U\subset \Bbb{R}^n$, a function $f:U\to \Bbb{R}$ is real-analytic at $p$ if there exists a formal series $S$ with real coefficients, having non-empty domain of convergence $\Delta_S$, such that for $x\in U$ close to $p$ (i.e., $(|x_1-p_1|,\dots, |x_n-p_n|)\in \Delta_S$), we have $f(x)= S(x_1-p_1,\dots, x_n-p_n)$.
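To see this definition in action, here is a sketch under my own choice of example (not from the answer): $f(x_1,x_2)=\frac{1}{1-x_1-x_2}$ is real-analytic at $p=(0,0)$, witnessed by the formal series $S=\sum_{i,j}\binom{i+j}{i}X_1^i X_2^j$, which converges to $f$ whenever $|x_1|+|x_2|<1$.

```python
import math

# Sketch: f(x1, x2) = 1 / (1 - x1 - x2) is real-analytic at (0, 0).
# Expanding sum_k (x1 + x2)^k by the binomial theorem gives the formal series
# S = sum_{i,j} C(i+j, i) X1^i X2^j, absolutely convergent when |x1| + |x2| < 1.

def series_value(x1, x2, N):
    """Partial sum of S over 0 <= i, j < N."""
    return sum(math.comb(i + j, i) * x1**i * x2**j
               for i in range(N) for j in range(N))

x1, x2 = 0.2, 0.3                          # |x1| + |x2| = 0.5 < 1
assert abs(series_value(x1, x2, 80) - 1.0 / (1 - x1 - x2)) < 1e-9
```

Note that the domain of convergence here is the triangle $r_1+r_2<1$, not a product of intervals, which is why the general definition works with the set $\Delta_S$ rather than a single radius.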

With this, you can try to extend the usual $1$-D argument to show that if $f$ is analytic at $p$, then in some small neighbourhood of $p$, $f$ has continuous partial derivatives of all orders (so the order of differentiation doesn't matter), and that the coefficients are uniquely determined as \begin{align} a_{i_1,\dots, i_n}&=\frac{1}{i_1!\cdots i_n!}\frac{\partial^{i_1+\dots+ i_n}f}{\partial x_1^{i_1}\cdots \partial x_n^{i_n}}(p). \end{align} Of course, since we have absolute convergence, we can rearrange the sums however we wish, so in a small neighbourhood of $p$, $f$ will equal its Taylor series about $p$.
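The coefficient formula can be checked exactly on a polynomial (a hypothetical example of mine, chosen so everything is finite): for $f(x,y)=(1+x)^2(1+y)^3$, the expansion at the origin has $a_{i,j}=\binom{2}{i}\binom{3}{j}$, and applying $\frac{1}{i!\,j!}\partial_x^i\partial_y^j$ at $(0,0)$ recovers exactly those numbers.

```python
import math

# Check a_{i,j} = (1/(i! j!)) * d^{i+j} f / dx^i dy^j (0, 0) on the polynomial
# f(x, y) = (1 + x)^2 (1 + y)^3, stored as a dict {(i, j): coefficient of x^i y^j}.

f = {(i, j): math.comb(2, i) * math.comb(3, j) for i in range(3) for j in range(4)}

def d_dx(p):  # formal partial derivative in x
    return {(i - 1, j): c * i for (i, j), c in p.items() if i > 0}

def d_dy(p):  # formal partial derivative in y
    return {(i, j - 1): c * j for (i, j), c in p.items() if j > 0}

def taylor_coeff(p, i, j):
    """(1 / (i! j!)) * d^{i+j} p / dx^i dy^j evaluated at the origin."""
    for _ in range(i):
        p = d_dx(p)
    for _ in range(j):
        p = d_dy(p)
    value_at_origin = p.get((0, 0), 0)     # only the constant term survives at (0, 0)
    return value_at_origin / (math.factorial(i) * math.factorial(j))

# The recovered coefficients match the known expansion C(2, i) * C(3, j).
assert all(taylor_coeff(f, i, j) == math.comb(2, i) * math.comb(3, j)
           for i in range(3) for j in range(4))
```

The mechanism is visible in the code: differentiating $x^i y^j$ exactly $i$ times in $x$ and $j$ times in $y$ leaves $i!\,j!$ at the origin, and the $\frac{1}{i!\,j!}$ in the formula cancels it, so the formula inverts the expansion.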

Anyway, I would highly suggest you review the relevant definitions of analyticity in $1$ dimension and see what they imply about differentiability, the relation to Taylor series, etc., because the $n$-dimensional case simply amounts to rewording the definitions slightly and carefully reusing the $1$-variable theorems (so I hope I haven't made any typos lol).