Step-sizes are the crucial and difficult point when using subgradient methods. Roughly speaking, you need the step-sizes to tend to zero, but not too fast; the classical conditions are $\sum_k \alpha_k = \infty$ and $\sum_k \alpha_k^2 < \infty$. If one uses a-priori step-sizes (e.g. of the form $1/k$), then the method provably converges, but in practice it slows down so much that you will not observe convergence.
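To see why $1/k$ step-sizes are so painfully slow in practice, here is a minimal toy sketch (the objective, function names, and iteration counts are my own illustration, not from the paper):

```python
def subgradient_method(grad, x0, steps):
    """Subgradient method with a-priori diminishing step-sizes 1/k."""
    x = x0
    for k in range(1, steps + 1):
        x = x - (1.0 / k) * grad(x)  # sum(1/k) diverges, sum(1/k^2) converges
    return x

# Toy problem: minimize f(x) = |x - 3|; sign(x - 3) is a subgradient.
sg = lambda x: 1.0 if x > 3.0 else (-1.0 if x < 3.0 else 0.0)

x_short = subgradient_method(sg, x0=10.0, steps=50)    # still far from the minimizer
x_long = subgradient_method(sg, x0=10.0, steps=2000)   # finally close to 3
```

After 50 iterations the iterate has only moved by the partial harmonic sum $\approx 4.5$, so it is still far from $x^\ast = 3$; it takes on the order of $e^7$-scale step budgets for the accumulated step-sizes to cover larger distances. That is the "provably convergent but practically stalled" behavior described above.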
The dynamic rule they suggest in the paper (in equation (40)) looks like a so-called Polyak step-size with an estimate of the true optimal value (obtained from values of the dual problem). One can prove convergence with these step-sizes under suitable conditions. I do not know a good reference off the top of my head, but many books on (nonsmooth) convex optimization should treat this.
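For concreteness, here is a sketch of the Polyak rule $\alpha_k = (f(x_k) - \hat f^\ast)/\|g_k\|^2$ on the same kind of scalar toy problem. This is my own illustration, with the estimate $\hat f^\ast$ taken to be the exact optimal value; in the paper's setting the estimate would come from dual values instead:

```python
def polyak_subgradient(f, grad, f_opt_est, x0, steps):
    """Subgradient method with Polyak step-sizes alpha_k = (f(x_k) - f_est) / |g_k|^2."""
    x, best = x0, x0
    for _ in range(steps):
        g = grad(x)
        if g == 0.0:
            break  # zero subgradient: x is optimal
        x = x - ((f(x) - f_opt_est) / (g * g)) * g
        if f(x) < f(best):
            best = x  # track the best iterate, since f need not decrease monotonically
    return best

# Toy problem: minimize f(x) = |x - 3| with the exact optimal value f* = 0.
f = lambda x: abs(x - 3.0)
sg = lambda x: 1.0 if x >= 3.0 else -1.0
x_polyak = polyak_subgradient(f, sg, f_opt_est=0.0, x0=10.0, steps=50)
```

On this one-dimensional example the rule lands on the minimizer in a single step (because $|g| = 1$ everywhere), which is atypical; the point is only that the step-size adapts to the current optimality gap rather than following a fixed schedule. With an inexact estimate of $f^\ast$, convergence still holds under the conditions mentioned above.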
Make no mistake, the book Convex Optimization is a fantastic resource for learning about convex optimization. I am very biased; Prof. Boyd was my Ph.D. advisor, and we still work together. I also know Prof. Vandenberghe personally and respect him greatly as well. But I've also taught courses in convex optimization using the book. I stand by my opinion!
That said, that book does not attempt to teach you how to build a state-of-the-art convex optimization algorithm. Instead, it gives you the basics, and in the process it makes some common simplifying assumptions, like strict primal and dual feasibility. Whereas the book teaches you about barrier methods, most state-of-the-art algorithms today employ what is called a symmetric primal-dual method. There are very good reasons to do this, including the ability to start from an infeasible point, and even the ability to handle problems that are not strictly feasible in the first place. And yet, the computations involved in barrier methods are actually extremely similar to the ones employed in symmetric primal-dual methods. So the book does a reasonably good job as an introduction to the way things are actually done.
You are not going to get an answer here on Math.SE on how to implement your own custom solver that can handle infeasible starting points. That's simply beyond the scope of the forum. I recommend Google searches for infeasible interior-point methods, symmetric primal-dual methods, and homogeneous self-dual embedding. This last term refers to a technique that allows some of the best solvers out there (e.g., MOSEK, Gurobi [I think], ECOS, CVXOPT, SeDuMi, SDPT3) to handle infeasible and unbounded problems. Be prepared for some thick reading material.
Perhaps you should reconsider whether or not it is wise to even try to implement your own algorithm. There are a lot of good convex optimization engines out there; why would you try to reinvent the wheel? Yes, I know you want to do sequential convex optimization. But that just means you have a lot of things to worry about besides whether or not your internal convex optimization loop is as fast as it could be. Get something working with an existing solver first before you spend time rolling your own.
If you must build your own solver: again, do whatever works first. If the numerical results you get out of it are good, then think about what it takes to speed things up; at least you know the effort will be worth it. If the numerical results don't look good, then its speed is irrelevant---unless you like getting bad results faster.
To the contrary, warm-starting primal-dual interior point methods can actually cause the method to converge more slowly than starting from a well-centered initial solution. The reason is that your near-optimal solution is likely to be very close to the edge of the feasible region and not well centered.
Conventional active set methods for QP don't have this problem and could be used to refine your ADMM solution.