I'm struggling to understand your question. In the true wiki-spirit, let me try this publicly in the hope that you or someone else can clarify it for me.
You start with a finite-dimensional definition. We have a linear map A: V ⊗ V → ℝ. From this you produce a map A: ⋀ⁿV ⊗ ⋀ⁿV → ℝ. I don't see this map. I presume that you are supposed to pair up elements in the two factors and apply A, so you get A(vᵢ ⊗ uⱼ) for each i and j. That sort of gives a matrix, though not quite a well-defined one, since I can swap rows and columns at the expense of a sign change. Of course, taking the determinant of this matrix gives me something sensible. That means I can define det(A) as a function to be det[A(vᵢ ⊗ uⱼ)]. Is this what you mean?
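If I've read you correctly, here's a tiny numerical illustration of that candidate definition (entirely my own sketch; the form G and the bases V, U are random stand-ins): the determinant of the matrix A(vᵢ ⊗ uⱼ) is well-defined up to the sign ambiguity from reordering the bases, and it factors as det(V) · det(G) · det(U).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
G = rng.standard_normal((n, n))   # matrix of the pairing: A(x, y) = x^T G y
V = rng.standard_normal((n, n))   # columns are the basis vectors v_i
U = rng.standard_normal((n, n))   # columns are the basis vectors u_j

M = V.T @ G @ U                   # M[i, j] = A(v_i, u_j)
det_pairing = np.linalg.det(M)

# Swapping two of the v_i permutes two rows of M: the sign flips,
# but the magnitude of the determinant is unchanged.
M_swapped = V[:, [1, 0, 2, 3]].T @ G @ U
```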
Let's see if that makes sense for your second paragraph, where I do think I know what's going on. We start with A: V → V and from it get A: ⋀ⁿV → ⋀ⁿV. That I'm happy with. Then, as ⋀ⁿV is one-dimensional, the induced map is multiplication by a canonical scalar, which I call det(A).
Now imagine we have an identification Ψ: V ≅ V*. Now you say that we can use this to convert an operator into a pairing, presumably via A ↦ (u ⊗ v ↦ Ψ(u)(Av)). Ψ also gives us a volume form for ⋀ⁿV, and I'm quite prepared to believe that the two notions coincide.
Now you want to do this in infinite dimensions.
Firstly, there is a notion of the top exterior power of an infinite-dimensional vector space. There's absolutely no problem with that. However, if I'm correct about what you're trying to do above, then that wouldn't help you, since you would need to take the determinant of an infinite matrix in order to define the determinant of a pairing. Also, it doesn't define the determinant of an arbitrary (even continuous) linear operator, so it wouldn't help with the other definition either.
Secondly, the "pairing" between pairings and operators in infinite dimensions isn't so tight as in finite dimensions. You talk blithely about picking an identification V ≅ V*. That suggests that you are thinking about Hilbert spaces. I'm not quite clear what a "smooth derichlet function" is (and I don't mean just the spelling of Dirichlet), so I can't be sure, but your example doesn't feel like a Hilbert space.
There is a notion of an operator with a determinant in infinite dimensions, but I'm not sure that this is going to help either.
A pairing is really a map V → V* in disguise. Of course, there are spaces other than Hilbert spaces for which V is isomorphic to V*, but generally one thinks about Hilbert spaces in this context. If not, and you only have a non-degenerate "standard" pairing on your space, then you get an injection V → V*, and there's no guarantee that the image of the map derived from the pairing will end up inside the image of V. To ensure that, you have to impose strong conditions on the pairing, which probably preclude it from being an operator with a determinant.
However, you bring in zeta renormalisation, at which point I'm afraid I can't help. But I'm not sure that your question is too concerned with zeta renormalisation since that is, for you, a known quantity: you know what the zeta renormalisation of the determinant of your operator is, but you want to get it straight from a definition of the determinant of the pairing. Thus the point is the definition, not the renormalisation. At least, that's my reading of it.
I apologise that this isn't anywhere remotely close to an answer, but it's a bit long for a comment and might help someone else give you an actual answer. I thought it worth sharing my attempts to understand the question, though. If it doesn't help, I'll happily delete it.
Not a complete answer. First, here is an alternate derivation of the result in the finite-dimensional case which might be more enlightening. If $A$ is positive self-adjoint, we can write $A = \exp(L)$ for some self-adjoint $L$. This lets us define
$$A^s = \exp(sL)$$
for all real $s$. The trace
$$\text{tr}(A^s) = \sum_{i=1}^n \lambda_i^s = \zeta_A(s)$$
is then the zeta function associated to $A$ (I am getting rid of all of the minus signs). Now, for small $\epsilon$ we can write
$$A^{s+\epsilon} = A^s A^{\epsilon} = A^s (1 + \epsilon L + O(\epsilon^2))$$
so, taking traces and setting $s = 0$, it follows that
$$\zeta_A'(0) = \text{tr}(L).$$
But Jacobi's identity $\det \exp M = \exp \text{tr } M$ gives
$$\det A = \det \exp L = \exp \text{tr } L = \exp \zeta_A'(0)$$
and we conclude.
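In finite dimensions the whole derivation can be checked numerically. Here is a sketch (my own, keeping the sign convention above: $\zeta_A(s) = \sum_i \lambda_i^s$, so $\det A = \exp \zeta_A'(0)$):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)       # positive self-adjoint, so A = exp(L) for some L
lam = np.linalg.eigvalsh(A)

def zeta_A(s):
    # zeta_A(s) = sum_i lambda_i^s  (minus signs dropped, as in the text)
    return np.sum(lam ** s)

h = 1e-6
zeta_prime_0 = (zeta_A(h) - zeta_A(-h)) / (2 * h)   # zeta_A'(0) = sum_i log lambda_i = tr L
det_via_zeta = np.exp(zeta_prime_0)
```

The central difference at $s = 0$ recovers $\sum_i \log \lambda_i = \text{tr } L$, and exponentiating gives the ordinary determinant.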
So what conceptual significance can we attach to the above? Well, it seems to me like we should think of the map $s \mapsto A^s$ as a representation of the Lie group $\mathbb{R}$, so the zeta function is the character of the corresponding representation. The derivative of the zeta function at zero gives the trace of the infinitesimal generator of this representation, $L$, which generates the abelian Lie algebra $\mathbb{R}$. And this is connected to the determinant of $A$ by Jacobi's identity.
So I think most of what needs explanation is Jacobi's identity. I freely admit that I do not have a good conceptual explanation of Jacobi's identity (beyond the fact that it's obvious for diagonalizable matrices). In these two blog posts I attempted to meander towards a combinatorial proof of Jacobi's identity in the form
$$\det (I - At)^{-1} = \exp \text{tr } \log (I - At)^{-1}$$
(where $A$ was the adjacency matrix of a graph) but didn't quite succeed. There is a combinatorial proof of Jacobi's identity in the literature due to Foata but I haven't gone through it in detail.
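For what it's worth, Jacobi's identity itself is easy to confirm numerically. A self-contained sketch (I build $\exp M$ from its Taylor series to keep the dependencies minimal; the matrix is scaled small so the series converges fast):

```python
import numpy as np

def expm_series(M, terms=40):
    # exp(M) via its Taylor series sum_k M^k / k!; fine for small ||M||
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(2)
M = 0.3 * rng.standard_normal((4, 4))

lhs = np.linalg.det(expm_series(M))   # det exp M
rhs = np.exp(np.trace(M))             # exp tr M
```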
Best Answer
I will answer some of my questions in the negative.
3. First consider the case of rescaling an operator $A$ by some (positive) number $\lambda$. Then $\zeta_{\lambda A}(s) = \lambda^{-s}\zeta_A(s)$, and so $\operatorname{TR} \lambda A = \lambda \operatorname{TR} A$. This is all well and good. How does the determinant behave? Define the "perceived dimension" $\operatorname{DIM} A$ to be $\log_\lambda\bigl[\operatorname{DET}(\lambda A)/\operatorname{DET} A\bigr]$. Then it's easy to see that $\operatorname{DIM} A = \zeta_A(0)$. What this means is that $\operatorname{DET}(\lambda A) = \lambda^{\zeta_A(0)} \operatorname{DET} A$.
This is all well and good if the perceived dimension of a vector space does not depend on $A$. Unfortunately, it does. For example, the Hurwitz zeta functions $\zeta(s,\mu) = \sum_{n=0}^\infty (n+\mu)^{-s}$ (for $-\mu \notin \mathbb{N}$) naturally arise as the zeta functions of differential operators — e.g. of the operator $x\frac{d}{dx} + \mu$ on the space of (nice) functions on $\mathbb{R}$. One can look up the values of this function, e.g. in Elizalde, et al. In particular, $\zeta(0,\mu) = \frac12 - \mu$. Thus, let $A$ and $B$ be two such operators, with $\zeta_A(s) = \zeta(s,\alpha)$ and $\zeta_B(s) = \zeta(s,\beta)$. For generic $\alpha$ and $\beta$, and provided $A$ and $B$ commute (e.g. for the suggested differential operators), $\operatorname{DET} AB$ exists. But if $\operatorname{DET}$ were multiplicative, then:
$$\operatorname{DET}(\lambda AB) = \operatorname{DET}(\lambda A)\operatorname{DET}(B) = \lambda^{1/2 - \alpha} \operatorname{DET} A \operatorname{DET} B$$
but a similar calculation, grouping $\lambda$ with $B$ instead, would yield $\lambda^{1/2 - \beta} \operatorname{DET} A \operatorname{DET} B$.
This proves that DET is not multiplicative.
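The scaling anomaly $\operatorname{DET}(\lambda A) = \lambda^{\zeta_A(0)} \operatorname{DET} A$ driving this argument can be confirmed numerically for $\zeta_A(s) = \zeta(s,\alpha)$. A sketch of my own, using $\operatorname{DET} A = \exp(-\zeta_A'(0))$ and a hand-rolled Euler–Maclaurin continuation of the Hurwitz zeta function (needed because $s = 0$ lies outside the region of convergence of the series):

```python
import math

def hurwitz_zeta(s, a, N=20, K=6):
    # Euler-Maclaurin evaluation of zeta(s, a) = sum_{n>=0} (n+a)^(-s),
    # valid on the analytic continuation (in particular near s = 0).
    B2k = [1/6, -1/30, 1/42, -1/30, 5/66, -691/2730]   # Bernoulli numbers B_2..B_12
    x = N + a
    total = sum((n + a) ** (-s) for n in range(N))
    total += x ** (1 - s) / (s - 1) + 0.5 * x ** (-s)
    poch = s                                           # rising factorial s(s+1)...(s+2k-2)
    for k in range(1, K + 1):
        total += B2k[k - 1] / math.factorial(2 * k) * poch * x ** (-s - 2 * k + 1)
        poch *= (s + 2 * k - 1) * (s + 2 * k)
    return total

def DET(scale, a, h=1e-5):
    # DET(scale * A) = exp(-zeta'(0)) where zeta(s) = scale^(-s) * zeta(s, a)
    z = lambda s: scale ** (-s) * hurwitz_zeta(s, a)
    return math.exp(-(z(h) - z(-h)) / (2 * h))

alpha, lam = 0.7, 3.0
ratio = DET(lam, alpha) / DET(1.0, alpha)
predicted = lam ** (0.5 - alpha)       # lambda^{zeta(0, alpha)}
```

The ratio of the two regularized determinants matches $\lambda^{1/2-\alpha}$, not $\lambda$ raised to any $\alpha$-independent power.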
1. My negative answer to 1. is not quite as satisfying, but it's OK. Consider an operator $A$ (e.g. $x\frac{d}{dx}+1$) with eigenvalues $1, 2, \dots$, so that its zeta function is the Riemann zeta function $\zeta(s)$. Then $\operatorname{TR} A = \zeta(-1) = -\frac{1}{12}$. On the other hand, $\exp A$ has eigenvalues $e, e^2, \dots$, and so zeta function $\zeta_{\exp A}(s) = \sum_{n=1}^\infty e^{-ns} = \frac{e^{-s}}{1 - e^{-s}} = \frac{1}{e^s - 1}$. This has a pole at $s=0$: indeed $-\zeta_{\exp A}'(s) = \frac{e^s}{(e^s-1)^2} \to \infty$ as $s \to 0$, and so $\operatorname{DET} \exp A = \infty$. So question 1. is hopeless in the sense that $A$ might be zeta-function regularizable but $\exp A$ not. I don't have a counterexample where all the zeta functions take finite values.
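The closed form for $\zeta_{\exp A}$ is just a geometric series, easily sanity-checked (the sample point $s = 2$ is arbitrary; the blow-up as $s \to 0^+$ is visible in the closed form):

```python
import math

s = 2.0
partial = sum(math.exp(-n * s) for n in range(1, 200))   # sum_{n>=1} e^{-ns}, truncated
closed_form = 1.0 / (math.exp(s) - 1.0)                  # = e^{-s} / (1 - e^{-s})

# As s -> 0+ the closed form blows up, reflecting the pole at s = 0:
near_pole = 1.0 / (math.exp(1e-6) - 1.0)
```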
5. As in my answer to 3. above, I will continue to consider the Hurwitz zeta function $\zeta(s,a) = \sum_{n=0}^\infty (n+a)^{-s}$, which is the zeta function corresponding, for example, to the operator $x\frac{d}{dx}+a$; we consider the case when $a$ is not a nonpositive integer. One can look up various special values of (the analytic continuation of) the Hurwitz zeta function, e.g. $\zeta(-m,a) = -\frac{B_{m+1}(a)}{m+1}$, where $B_r$ is the $r$th Bernoulli polynomial.
In particular,
$$\operatorname{TR}\left(x\tfrac{d}{dx}+a\right) = \zeta(-1,a) = -\frac{B_2(a)}{2} = -\frac{a^2}{2} + \frac{a}{2} - \frac{1}{12}$$
since, for example (from Wikipedia),
$$B_2(a) = \sum_{n=0}^{2} \frac{1}{n+1} \sum_{k=0}^{n} (-1)^k \binom{n}{k} (a+k)^2 = a^2 - a + \frac16.$$
Thus, consider the operator $2x\frac{d}{dx}+a+b$. On the one hand:
$$\operatorname{TR}\left(x\tfrac{d}{dx}+a\right) + \operatorname{TR}\left(x\tfrac{d}{dx}+b\right) = -\frac{a^2+b^2}{2} + \frac{a+b}{2} - \frac16.$$
On the other hand, $\operatorname{TR}$ is "linear" when it comes to multiplication by positive reals, and so:
$$\operatorname{TR}\left(2x\tfrac{d}{dx}+a+b\right) = 2\operatorname{TR}\left(x\tfrac{d}{dx} + \tfrac{a+b}{2}\right) = -\frac{a^2+2ab+b^2}{4} + \frac{a+b}{2} - \frac16.$$
In particular, we have $\operatorname{TR}(x\frac{d}{dx}+a) + \operatorname{TR}(x\frac{d}{dx}+b) = \operatorname{TR}\bigl((x\frac{d}{dx}+a) + (x\frac{d}{dx}+b)\bigr)$ if and only if $a=b$; otherwise $2ab < a^2+b^2$ is a strict inequality.
So the zeta-function regularized trace TR is not linear.
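The non-linearity can be confirmed numerically. Here is my own sketch, again using an Euler–Maclaurin continuation of the Hurwitz zeta function to evaluate $\operatorname{TR}(x\frac{d}{dx}+a) = \zeta(-1, a)$:

```python
import math

def hurwitz_zeta(s, a, N=20, K=6):
    # Euler-Maclaurin continuation of zeta(s, a); valid in particular at s = -1.
    B2k = [1/6, -1/30, 1/42, -1/30, 5/66, -691/2730]   # Bernoulli numbers B_2..B_12
    x = N + a
    total = sum((n + a) ** (-s) for n in range(N))
    total += x ** (1 - s) / (s - 1) + 0.5 * x ** (-s)
    poch = s                                           # rising factorial s(s+1)...(s+2k-2)
    for k in range(1, K + 1):
        total += B2k[k - 1] / math.factorial(2 * k) * poch * x ** (-s - 2 * k + 1)
        poch *= (s + 2 * k - 1) * (s + 2 * k)
    return total

def TR(a):
    # TR(x d/dx + a) = zeta(-1, a) = -B_2(a)/2
    return hurwitz_zeta(-1.0, a)

a, b = 0.3, 1.4
lhs = TR(a) + TR(b)             # TR(x d/dx + a) + TR(x d/dx + b)
rhs = 2 * TR((a + b) / 2)       # TR(2x d/dx + a + b), by TR(lambda A) = lambda TR(A)
```

The two sides differ by exactly $-\frac{(a-b)^2}{4}$, vanishing only when $a = b$.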
0./2. My last comment is not so much to break number 2. above, but to suggest that it is limited in scope. In particular, for an operator $A$ on an infinite-dimensional vector space, it is impossible for $A^{-s}$ to be trace class for $s$ in an open neighborhood of $0$, and so if the zeta-function regularized $\operatorname{DET}$ makes sense, then $\det$ doesn't. (I.e. it's hopeless to say that $\det A = \operatorname{DET} A$.) Indeed, if the series converges at $s=0$, then it must be a finite sum.
Similarly, it is impossible both for $A$ to be trace class and for $A^{-s}$ to be trace class for large $s$. If $A$ is trace class, then its eigenvalues have finite sum, and in particular cluster near $0$ (by the "divergence test" from freshman calculus). But then the eigenvalues of $A^{-s}$ tend to $\infty$ for positive $s$. I.e. it's hopeless to say that $\operatorname{tr} A = \operatorname{TR} A$.
My proof for 2. says the following. Suppose that $\frac{dA}{dt}A^{-1}$ is trace class, and suppose that $\operatorname{DET} A$ makes sense as above. Then
$$\frac{d}{dt}\bigl[\operatorname{DET} A\bigr] = (\operatorname{DET} A)\,\operatorname{tr}\Bigl(\frac{dA}{dt}A^{-1}\Bigr).$$
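In finite dimensions this is just Jacobi's formula for the derivative of a determinant, which is easy to verify numerically (my own sketch; $A(t) = A_0 + tB$ is an arbitrary smooth path of matrices):

```python
import numpy as np

rng = np.random.default_rng(3)
A0 = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # keep A(t) invertible near t = 0
B = rng.standard_normal((4, 4))

def A(t):
    return A0 + t * B              # dA/dt = B

h = 1e-6
numeric = (np.linalg.det(A(h)) - np.linalg.det(A(-h))) / (2 * h)
formula = np.linalg.det(A(0.0)) * np.trace(B @ np.linalg.inv(A(0.0)))
```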
I have no idea what happens, or even how to attack the problem, when $\frac{dA}{dt}A^{-1}$ has only a zeta-function-regularized trace.