When you say "non-convex problems", do you mean specifically non-convex optimization problems? That is what the tags and the context suggest, but I just want to double-check.
Assuming that is the case, I also have an issue with your use of the phrase "exact solution". It is not going to be possible, even in exact arithmetic, to get the exact solution to most convex programs: the exact optimal value, and/or an exact optimal point. You can, in theory, do this for most linear programs by using the simplex method to identify an optimal basis---but the simplex method has worst-case exponential complexity! In fact, the polynomial-time algorithms for linear and convex programming are iterative, and approach an exact solution asymptotically. Even in exact arithmetic, you must trade computation time for precision. I am sure what you are referring to here is simply the "global" solution, not the "exact" solution, but I think the distinction was worth making.
So, let me answer the following revised question: are there broad classes of non-convex optimization problems for which tractable algorithms exist to obtain the global solution to within predefined numerical accuracy, under modest conditions, and assuming exact arithmetic?
I would say the answer is no. The class of convex programs is certainly the most general class of tractable optimization problems that we know of. Yes, there are exceptions; you pointed to (the minimization of) submodular functions, for instance. But you then suggested that's not broad enough to count, and I agree. No such exceptions are even going to approach the breadth of convex programs.
Geometric programs form a class of tractable, non-convex optimization problems. When you consider the larger class of so-called generalized geometric programs (and you should; see the link), it's a relatively extensive class. But in fact, every GP (and GGP) can be mechanically converted to a convex program via a simple change of variables ($x\rightarrow e^y$). So I would argue this is really not a distinct class, but a subset of convex programs "in disguise".
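To make the change of variables concrete, here is a toy sketch (the problem and names are mine, purely for illustration): a one-variable posynomial objective that is non-convex in $x$ but becomes convex after the standard GP substitution $x = e^y$, so any local method finds the global optimum.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy GP-style objective (illustrative, not from the discussion above):
#   f(x) = x**0.5 + 1/x,  x > 0,
# which is NOT convex in x (f'' changes sign for large x).
# The GP substitution x = e**y turns it into
#   g(y) = exp(0.5*y) + exp(-y),
# a sum of exponentials of affine functions of y, hence convex.
g = lambda y: np.exp(0.5 * y) + np.exp(-y)

res = minimize_scalar(g)    # convex in y, so this finds the global optimum
x_star = np.exp(res.x)      # map back to the original variable

# Analytic check: 0.5*x**-0.5 = x**-2  =>  x* = 2**(2/3)
print(x_star, res.fun)
```

The same mechanical trick (log-transform the variables, take logs of posynomial constraints) is what turns every GP into a convex program.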
Even the SVD, the other exception you mention, has ties to convex optimization. The maximum singular value is convex in the elements of the matrix, as is the sum of the largest $k$ singular values. So to obtain, say, the $k$th singular value, you could in theory solve two convex programs, one for $\sum_{i=1}^k \sigma_i$ and one for $\sum_{i=1}^{k-1} \sigma_i$, and take the difference. Of course, you'd never do that, just as you'd never use a general-purpose convex programming engine to solve a least-squares problem. (And indeed, accuracy would be an issue.) But the fact that you could speaks to the universality of convex optimization.
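A quick numerical sanity check of that observation: each partial sum of singular values (a Ky Fan norm) is a convex function of the matrix entries, and their difference recovers $\sigma_k$. Here we just evaluate the two sums with an SVD rather than actually solving the two convex programs.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
s = np.linalg.svd(A, compute_uv=False)   # singular values, sorted descending

k = 3
# Difference of the two "convex program" optimal values: sum of the top k
# singular values minus the sum of the top k-1.
sigma_k = s[:k].sum() - s[:k - 1].sum()
assert np.isclose(sigma_k, s[k - 1])
```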
Best Answer
A problem with integer variables is non-convex pretty much by definition of convexity (a set consisting of isolated points like $\{0,1\}$ is not convex). However, by a slight abuse of language, "mixed-integer convex problem" is used to describe mixed-integer problems whose continuous relaxation is convex. These are already hard, because they contain the NP-hard class of mixed-integer linear problems. As you say, a solver such as MOSEK and a modeling tool such as CVX can often be used to model and solve such problems.
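A minimal sketch of that relationship, on a toy problem of my own (not the one in the question): fixing the binary variable leaves a convex subproblem, and the continuous relaxation gives a lower bound on the mixed-integer optimum. Real solvers branch and bound rather than enumerating, of course.

```python
from scipy.optimize import minimize

# Toy mixed-integer convex problem (illustrative only):
#   minimize (x - 0.3)**2 + (z - 0.7)**2,  x in [0, 1],  z in {0, 1}.

# Continuous relaxation (z in [0, 1]): a convex problem whose optimal
# value lower-bounds the mixed-integer optimum.
relax = minimize(lambda v: (v[0] - 0.3) ** 2 + (v[1] - 0.7) ** 2,
                 [0.5, 0.5], bounds=[(0, 1), (0, 1)])

# Brute-force the single binary variable: one convex subproblem per fixing.
best = min(
    minimize(lambda v, z=z: (v[0] - 0.3) ** 2 + (z - 0.7) ** 2,
             [0.5], bounds=[(0, 1)]).fun
    for z in (0, 1)
)

print(relax.fun, best)   # relaxation value <= mixed-integer value
```

With many binaries this enumeration is exponential, which is exactly why these problems are hard even when the relaxation is convex.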
Your problem is not of that form; it is even harder, because the nonlinear constraints 1 and 2 are non-convex. (Constraint 3 is just a roundabout way of saying that $d$ is binary, which you would communicate to the solver more directly anyway.) Unless there is some special structure to exploit, you may need a general MINLP solver (see, for example, the list of MINLP solvers at https://en.wikipedia.org/wiki/List_of_optimization_software).