A nicer notion is that of the differential:
$$ \text{If} \qquad z = 5x + 3y \qquad \text{then} \qquad dz = 5\, dx + 3\,dy $$
Then if you decide to hold $y$ constant, that makes $dy = 0$, and you have $dz = 5 \, dx$.
Another notation that works well with function notation is that if we define
$$ f(x,y) = 5x + 3y$$
then $f_i$ means the derivative of $f$ with respect to its $i$-th argument; that is
$$ f_1(x,y) = 5 \qquad \qquad f_2(x,y) = 3 $$
This doesn't work well with a common abuse of notation, though: sometimes people write $f(r,\theta)$ when they really mean "evaluate $f$ at the $(x,y)$ pair whose polar coordinates are $(r, \theta)$", rather than the "correct" meaning of that expression, "evaluate $f$ at $(r, \theta)$". So if you're in the habit of doing that, don't try to indicate derivatives by their position.
I confess I really dislike partial derivative notation; when one writes $\partial/\partial x$, one "secretly" intends to hold $y$ constant. Passing that through the differential gives
$$ \frac{\partial z}{\partial x} = 5 \frac{\partial x}{\partial x} + 3 \frac{\partial y}{\partial x} = 5 \cdot 1 + 3 \cdot 0 = 5$$
However, the suggestive form of Leibniz notation starts becoming very misleading at this point; for example, let's compute other partial derivatives.
- $\partial z / \partial x = 5$, holding $y$ constant as the notation suggests
- $\partial x / \partial y = -3/5$, holding $z$ constant as the notation suggests
- $\partial y / \partial z = 1/3$, holding $x$ constant as the notation suggests
Then putting it together,
$$ \frac{\partial z}{\partial x} \frac{\partial x}{\partial y} \frac{\partial y}{\partial z} = 5 \cdot \left(-\frac{3}{5}\right) \cdot \frac{1}{3} = -1 $$
This is a big surprise if you expect partial derivatives to behave like fractions, as their notation suggests!
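The $-1$ identity above can be sanity-checked numerically with central finite differences (the function names below are mine, not part of the original discussion):

```python
# Numerical check that the three partial derivatives of z = 5x + 3y
# multiply to -1, not +1.
def z_of(x, y):          # z as a function of (x, y)
    return 5 * x + 3 * y

def x_of(y, z):          # solve z = 5x + 3y for x
    return (z - 3 * y) / 5

def y_of(x, z):          # solve z = 5x + 3y for y
    return (z - 5 * x) / 3

h = 1e-6                 # step for a central finite difference

def d(f, arg, point):    # partial of f in slot `arg` at `point`
    lo, hi = list(point), list(point)
    lo[arg] -= h
    hi[arg] += h
    return (f(*hi) - f(*lo)) / (2 * h)

dz_dx = d(z_of, 0, (1.0, 1.0))    # hold y constant -> 5
dx_dy = d(x_of, 0, (1.0, 8.0))    # hold z constant -> -3/5
dy_dz = d(y_of, 1, (1.0, 8.0))    # hold x constant -> 1/3

print(dz_dx * dx_dy * dy_dz)      # close to -1.0
```

Each derivative is taken with a *different* variable held constant, which is exactly why the naive fraction-cancellation intuition fails.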
Some key things to remember about partial derivatives are:
- You need to have a function of one or more variables.
- You need to be very clear about what that function is.
- You can only take partial derivatives of that function with respect to each of the variables it is a function of.
So for your Example 1, $z = xa + x$: if what you mean by this is to define $z$
as a function of two variables,
$$z = f(x, a) = xa + x,$$
then $\frac{\partial z}{\partial x} = a + 1$ and
$\frac{dz}{dx} = a + 1 + x\frac{da}{dx},$ as you surmised,
though you could also have gotten that last result by considering $a$ as a
function of $x$ and applying the Chain Rule.
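Written out, that Chain Rule computation goes like this: since $\frac{\partial f}{\partial x} = a + 1$ and $\frac{\partial f}{\partial a} = x$,

```latex
\frac{dz}{dx}
  = \frac{\partial f}{\partial x}\,\frac{dx}{dx}
    + \frac{\partial f}{\partial a}\,\frac{da}{dx}
  = (a + 1)\cdot 1 + x\,\frac{da}{dx}
  = a + 1 + x\,\frac{da}{dx}.
```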
But when we write something like
$y = ax^2 + bx + c,$ and we say explicitly that $a$, $b$, and $c$ are
(possibly arbitrary) constants, $y$ is really only a function of one variable:
$$y = g(x) = ax^2 + bx + c.$$
Sure, you can say that $\frac{\partial y}{\partial x}$ is what happens
when you vary $x$ while holding $a$, $b$, and $c$ constant, but that's
about as meaningful as saying you vary $x$ while holding the number $3$ constant.
I suppose technically $\frac{\partial y}{\partial x}$
is defined even if $y$ is a single-variable function of $x$,
but it would then just be $\frac{dy}{dx}$ (the ordinary derivative),
and I can't remember seeing such a thing ever written as a partial derivative.
It would not make it possible to do anything you cannot do with
the ordinary derivative, and it might confuse people (who might try to
guess what other variables $y$ is a function of).
The previous paragraph implies that the answer to your Example 3 is "yes."
It also hints at why I almost wrote "a function of two or more variables"
as part of the first requirement for using partial derivatives.
Technically I think you only need a function of one or more variables,
but you should want a function of at least two variables before you
think about taking partial derivatives.
For Example 2, where we have $x^2 + y^2 = 1$, it is not obvious
what the function is that we would be taking partial derivatives of.
Either $x$ or $y$ could be a function of the other.
(The function would be defined only over a limited domain,
and would produce only some of the points that satisfy the equation, but
it can still be useful to do some analysis under those conditions.)
If you write something besides the equation to make it clear that
(say) $y$ is a function of $x$, giving a sufficiently clear idea which
of the possible functions of $x$ you mean, then I think technically you
could write $\frac{\partial y}{\partial x}$, and implicit differentiation of
$x^2 + y^2 = 1$ would give $\frac{\partial y}{\partial x} = -x/y$, but again this is a lot of trouble
and confusion to get a result you could get simply by using
ordinary derivatives.
On the other hand, suppose we say that
$$h(x,y) = x^2 + y^2 - 1,$$
and we are interested in the points that satisfy $x^2 + y^2 = 1$,
that is, where $h(x,y) = 0$.
Now we have a function of multiple variables, so we can do interesting
things with partial derivatives,
such as compute $\frac{\partial h}{\partial x}$ and
$\frac{\partial h}{\partial y}$ and perhaps use these to look for trajectories
in the $x,y$ plane along which $h$ is constant.
OK, we don't really need partial derivatives to figure out that
those trajectories will run along circular arcs, but we could have
some other two-variable function where the answer is not so obvious.
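As a sketch of that idea (the code and names are mine, not the answer's): numerically, $\partial h/\partial x = 2x$ and $\partial h/\partial y = 2y$, and stepping perpendicular to the gradient keeps $h$ approximately constant:

```python
# The gradient of h(x, y) = x^2 + y^2 - 1 is perpendicular to the level
# set h = 0, so a small step along (-dh/dy, dh/dx) stays on the circle
# to first order.
import math

def h(x, y):
    return x * x + y * y - 1.0

eps = 1e-6

def grad_h(x, y):
    dh_dx = (h(x + eps, y) - h(x - eps, y)) / (2 * eps)   # -> 2x
    dh_dy = (h(x, y + eps) - h(x, y - eps)) / (2 * eps)   # -> 2y
    return dh_dx, dh_dy

# Start on the unit circle and take a small step along the level set.
x, y = math.cos(0.3), math.sin(0.3)      # h(x, y) = 0 here
gx, gy = grad_h(x, y)
step = 1e-3
x2, y2 = x - step * gy, y + step * gx    # perpendicular to gradient

print(h(x, y), h(x2, y2))                # both close to 0
```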
Best Answer
This is taught very poorly in calculus courses, and you're confused because the notation is sloppy.
The insight you seek is the following$^1$:
- You take total derivatives of an expression with respect to a variable.
- You take partial derivatives of a function with respect to its parameters.$^2$
Before I go on, it's critical that you understand the following terminology:
A ("formal") parameter is a property of the function description itself.
For example, the $a$ and $b$ in the function definition $f(a,b) = a + b$ are parameters.
An argument, or "actual" parameter, is a property of an expression that is a call to a function.
For example, the $x$ and $y$ in the expression $g(x) + g(y)$ are arguments to $g$.
Now here's the kicker: if $h(x) = x^2$ then partial and total derivatives can be different:
\begin{align*} \frac{\partial}{\partial x} f(x, h)\ =\ 1\ \color{red}{\neq}\ 2x+1\ =\ \frac{d}{dx}f(x,h) \end{align*}
Makes sense? :-)
I hope it doesn't, because it was sloppy.
The notation above is extremely common, but not really correct.$^3$
Remember I just told you partial derivatives are with respect to parameters whereas total derivatives are with respect to variables. This means that, if we've defined $$f(a,b) = a + b$$ as above, then it's actually incorrect (although very common) to write $$\frac{\partial}{\partial x}f(x,h)$$ for three reasons:
- $x$ is not a parameter to $f$, but an argument to it. The parameter is $a$.
- The second argument to $f$ should be a number (like $h(x)$), not a function like $h$.
- $f(x, h)$ is not a function, but a call to a function. It evaluates to a number.
So, to really write the above derivatives correctly, I should have written:
\begin{align*} \left.\frac{\partial f}{\partial a}\right|_{\substack{a=x\phantom{(h)}\\b=h(x)}}\ =\ \left.1\right|_{\substack{a=x\phantom{(h)}\\b=h(x)}}\ =\ 1\ \color{red}{\neq}\ 2x + 1\ =\ \frac{d}{d x} f(x, h(x)) \end{align*}
at which point it should be obvious the two aren't the same.
Makes sense? :)
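A numerical version of this comparison (my illustration, using the same $f$ and $h$ as above):

```python
# With f(a, b) = a + b and h(x) = x^2, the partial derivative of f in
# its first slot, evaluated at (x, h(x)), is 1, while the total
# derivative of x -> f(x, h(x)) is 2x + 1.
def f(a, b):
    return a + b

def h(x):
    return x * x

eps = 1e-6
x = 3.0

# Partial derivative: wiggle only the first argument of f.
partial = (f(x + eps, h(x)) - f(x - eps, h(x))) / (2 * eps)

# Total derivative: wiggle x everywhere it appears.
total = (f(x + eps, h(x + eps)) - f(x - eps, h(x - eps))) / (2 * eps)

print(partial, total)    # 1 versus 2x + 1 = 7 at x = 3
```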
$^1$ This should be easier to understand if you know a statically typed programming language (like C# or Java).
$^2$ You can define partial derivatives for expressions as well, but it'd just be implicitly assuming you have a function in terms of that variable, which you are differentiating, and then evaluating at a point whose value is also denoted by that variable.
$^3$ Notice the expression wouldn't "type-check" in a statically typed programming language.