Infinite-dimensional Bregman divergence

convex-analysis, locally-convex-spaces, real-analysis, reference-request, topological-vector-spaces

Let $C$ be a convex subset of $\mathbb R^n$ with nonempty interior. Let $f: C \to \mathbb R$ be a strictly convex function, differentiable in the interior of $C$, whose gradient $\nabla f$ extends to a bounded, continuous function on $C$. The Bregman divergence $d_f$ for $f$ is defined for all $x,y \in C$ by
$$d_f(x,y) = f(x) - f(y) - \nabla f(y) \cdot (x-y).$$
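
As a quick sanity check of the definition: taking $f(x) = \|x\|^2$ on $C = \mathbb R^n$ gives $\nabla f(y) = 2y$, so
$$d_f(x,y) = \|x\|^2 - \|y\|^2 - 2y\cdot(x-y) = \|x-y\|^2,$$
the squared Euclidean distance; taking the negative entropy $f(x) = \sum_i x_i \log x_i$ on the probability simplex recovers the Kullback–Leibler divergence.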

I am wondering if this definition has been extended to infinite-dimensional spaces. In particular, how would one define a Bregman divergence in an arbitrary (locally convex) topological vector space? I would appreciate references but welcome all ideas.

Best Answer

The following is an excellent paper on the subject; I've been told it is "the" reference for modern Bregman divergence theory, at least from an optimization/monotone-operator viewpoint. Note that the authors work in a Banach space for most of their results, so infinite dimensions are fair game. They sometimes specialize to a Hilbert space, which can also be infinite-dimensional. They do a good job of noting when the finite-dimensional setting yields simpler results.
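
To sketch the shape the definition takes in that setting (my paraphrase of the standard Banach-space formulation, not a quotation from the paper): for a convex function $f: X \to (-\infty,+\infty]$ on a Banach space $X$ that is Gâteaux differentiable on the interior of its domain, the gradient $\nabla f(y)$ is an element of the dual space $X^*$, and one defines
$$d_f(x,y) = f(x) - f(y) - \langle \nabla f(y),\, x - y \rangle,$$
where $\langle\cdot,\cdot\rangle$ denotes the duality pairing between $X^*$ and $X$. When $f$ is not differentiable, variants of this definition replace $\nabla f(y)$ by a subgradient $y^* \in \partial f(y)$.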

H. H. Bauschke, J. M. Borwein, and P. L. Combettes, Bregman monotone optimization algorithms, SIAM Journal on Control and Optimization, vol. 42, no. 2, pp. 596-636, June 2003.

A preprint is also available online.
