Note that correlation conditional on $Z$ is a variable that depends on $Z$, whereas partial correlation is a single number.
Furthermore, partial correlation is defined in terms of the residuals from linear regression. Thus, if the actual relationship is nonlinear, the partial correlation may differ from the conditional correlation even when the correlation conditional on $Z$ is a constant that does not depend on $Z$. On the other hand, if $X,Y,Z$ are jointly multivariate Gaussian, the partial correlation equals the conditional correlation.
For an example where a constant conditional correlation $\neq$ partial correlation, take $$Z\sim U(-1,1),\quad X=Z^2+e,\quad Y=Z^2-e,\quad e\sim N(0,1),\ e\perp Z.$$ No matter which value $Z$ takes, the conditional correlation is $-1$. However, the linear regressions of $X$ on $Z$ and of $Y$ on $Z$ have slope $0$ (since $\mathrm{Cov}(Z^2,Z)=0$), so the fitted values are constants and the residuals are just $X$ and $Y$ centered at their means. Thus the partial correlation equals the plain correlation between $X$ and $Y$, which is not $-1$: clearly the two variables are not perfectly correlated when $Z$ is unknown.
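A minimal R sketch of this example (the simulation and the binning of $Z$ are my own illustration, not part of the original argument): it approximates the conditional correlation within narrow bins of $Z$ and computes the partial correlation from the regression residuals.

```r
set.seed(1)
n <- 1e5
z <- runif(n, -1, 1)
e <- rnorm(n)
x <- z^2 + e
y <- z^2 - e

## Conditional correlation: within narrow bins of z it is essentially -1
bin <- cut(z, breaks = 20)
sapply(split(data.frame(x, y), bin), function(d) cor(d$x, d$y))

## Partial correlation via residuals from the linear regressions on z
rx <- resid(lm(x ~ z))
ry <- resid(lm(y ~ z))
cor(rx, ry)   # essentially equal to cor(x, y), far from -1
cor(x, y)
```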
Apparently, Baba and Sibuya (2005) show the equivalence of partial correlation and conditional correlation for some other distributions besides the multivariate Gaussian, but I have not read it.
The answer to your question 2 seems to be in the Wikipedia article on partial correlation, in the second equation under "Using recursive formula".
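For reference, the recursive formula in question is, as far as I recall it (check the article for the exact notation):
$$\rho_{XY\cdot\mathbf{Z}}=\frac{\rho_{XY\cdot\mathbf{Z}\setminus\{Z_0\}}-\rho_{XZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}\,\rho_{YZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}}{\sqrt{1-\rho_{XZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}^2}\,\sqrt{1-\rho_{YZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}^2}},$$
where $Z_0\in\mathbf{Z}$ is any one controlling variable and $\mathbf{Z}\setminus\{Z_0\}$ is the controlling set with $Z_0$ removed. With a single controlling variable this reduces to $\rho_{XY\cdot Z}=(\rho_{XY}-\rho_{XZ}\rho_{YZ})/\sqrt{(1-\rho_{XZ}^2)(1-\rho_{YZ}^2)}$.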
For time series, some version of the Pearson correlation is the most used, in the form of the autocorrelation function (one series correlated with itself at various lags) and the cross-correlation function (two series, likewise at various lags). These are appropriate when all the conditional expectations are linear.
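In base R, for example (a minimal sketch; the simulated AR(1) series is just a placeholder for your own data):

```r
set.seed(2)
n <- 500
x <- arima.sim(model = list(ar = 0.6), n = n)   # a simulated AR(1) series
y <- 0.5 * x + rnorm(n)                         # a second series related to x

acf(x)      # autocorrelation of x at various lags
ccf(x, y)   # cross-correlation between x and y at various lags
```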
If you suspect the conditional expectations may not be linear, you should start with some visualization of the two series! I have not seen any detailed descriptive analysis of two time series; that would be rather interesting ... In R you could play with the function coplot, and you could make scatterplot matrices, replacing what would be a single number in each of the two functions above (autocorrelation, cross-correlation) with a scatterplot; see the sketch below. You could also look into copulas used with time series.
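A base-R sketch of that idea (continuing with the x and y simulated above; the number of lags and the conditioning choice are arbitrary):

```r
## Lagged scatterplot matrix: each panel replaces one acf/ccf number
k <- 2                                   # how many lags to display
lagged <- embed(cbind(x = x, y = y), k + 1)
colnames(lagged) <- as.vector(outer(c("x", "y"), 0:k,
                                    function(v, l) paste0(v, "_lag", l)))
pairs(lagged)

## coplot: y against x, conditioned on the previous value of x
d <- data.frame(y0 = y[-1], x0 = x[-1], x1 = x[-length(x)])
coplot(y0 ~ x0 | x1, data = d)
```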
Best Answer
The partial correlation coefficient inhabits the domain of linear relationships/regression. You admitted this yourself when giving the definition of partial r in your question. Partial r is just another way of standardizing the linear regression coefficient, the other way being the standardized coefficient beta. So partial r cannot exist in a context other than the one where the usual (zero-order) Pearson r exists; it is itself a Pearson correlation, only refined after washing out some "irrelevant" information from it by means of linear algebra.
Spearman rho - as you might be aware - is just Pearson r computed on ranked data rather than raw data. So, as long as you agree to treat the ranks as the "raw data" (that is, to treat ranking as just preprocessing), you may carry the concept of partial r, including its interpretation, over to Spearman rho.
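A minimal R sketch of that, with made-up data (the simulated z, x, y are mine, only to show that ranking comes first and the usual partial-r machinery is then applied to the ranks):

```r
set.seed(3)
z <- rnorm(100)
x <- z + rnorm(100)
y <- z + rnorm(100)

## Partial Spearman rho of x and y given z: rank first, then partial out z
rx <- rank(x); ry <- rank(y); rz <- rank(z)
cor(resid(lm(rx ~ rz)), resid(lm(ry ~ rz)))

## Equivalently, plug the pairwise Spearman rho's into the usual partial-r formula
r <- cor(cbind(x, y, z), method = "spearman")
(r["x", "y"] - r["x", "z"] * r["y", "z"]) /
  sqrt((1 - r["x", "z"]^2) * (1 - r["y", "z"]^2))
```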
The situation with Kendall tau is different. This coefficient, unlike Spearman's, is not based on linear correlation/regression. It has its own ideology, maths, and interpretation - those of the Goodman-Kruskal gamma. Therefore, the notion of partial r is inapplicable to it, and if you apply the recursive formula you mention to a matrix of tau's, that will mean that you believe its entries are Pearson r's! If a proper analogue of "partial correlation" for tau is possible at all, it must be computed by a very different formula, one exploiting the concept of conditional probability of co-occurrence rather than linear regression residuals. (See e.g. Ebuh GU and Oyeka ICA, "A Nonparametric Method for Estimating Partial Correlation Coefficient".)