In regression, it is often recommended to center the variables so that the predictors have mean $0$. Centering makes the intercept interpretable as the expected value of $Y_i$ when the predictors are set to their means. Otherwise, the intercept is the expected value of $Y_i$ when the predictors are set to $0$, which may not be a realistic or interpretable situation (e.g., what if the predictors were height and weight?). Another practical reason for scaling arises when one variable has a very large scale, e.g., if you were using the population size of a country as a predictor. In that case, the regression coefficients may be on a very small order of magnitude (e.g., $10^{-6}$), which can be annoying when you're reading computer output, so you may convert the variable to, say, population size in millions. The convention of standardizing the predictors primarily exists so that the units of the regression coefficients are the same.
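A quick numpy sketch of the intercept point (toy data; the variable names and coefficients are made up for illustration). With centered predictors, the fitted intercept is exactly the sample mean of $y$, while the slopes are unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
height = rng.normal(170, 10, n)   # hypothetical predictor
weight = rng.normal(70, 12, n)    # hypothetical predictor
y = 2.0 + 0.03 * height + 0.05 * weight + rng.normal(0, 1, n)

# OLS with raw predictors: intercept = E[Y | height=0, weight=0] (not meaningful)
X_raw = np.column_stack([np.ones(n), height, weight])
b_raw, *_ = np.linalg.lstsq(X_raw, y, rcond=None)

# OLS with centered predictors: intercept = predicted Y at the predictor means
X_c = np.column_stack([np.ones(n), height - height.mean(), weight - weight.mean()])
b_c, *_ = np.linalg.lstsq(X_c, y, rcond=None)

print(b_c[0], y.mean())   # centered intercept equals the sample mean of y
print(b_raw[1:], b_c[1:])  # slopes are unchanged by centering
```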
As @gung alludes to and @MånsT shows explicitly (+1 to both, btw), centering/scaling does not affect your statistical inference in regression models - the estimates are adjusted appropriately and the $p$-values will be the same.
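You can check the invariance of inference directly. A minimal simple-regression sketch (toy data): standardizing the predictor rescales the slope by the predictor's SD, but the $t$-statistic, and hence the $p$-value, is identical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.normal(0, 3, n)
y = 1.0 + 0.5 * x + rng.normal(0, 1, n)

def ols_t(x, y):
    """Slope estimate and its t-statistic for simple linear regression."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], beta[1] / se

b, t = ols_t(x, y)
b_s, t_s = ols_t((x - x.mean()) / x.std(ddof=1), y)

print(t, t_s)                   # identical t-statistics -> identical p-values
print(b * x.std(ddof=1), b_s)   # slope simply rescales by the SD of x
```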
Other situations where centering and/or scaling may be useful:
- When you're trying to sum or average variables that are on different scales, perhaps to create a composite score of some kind. Without scaling, one variable may have a larger impact on the sum purely because of its scale, which may be undesirable.
- To simplify calculations and notation. For example, the sample covariance matrix of a data matrix whose columns have been centered by their sample means is proportional to $X'X$ (it equals $X'X/(n-1)$). Similarly, if a univariate random variable $X$ has been mean centered, then ${\rm var}(X) = E(X^2)$, and the variance can be estimated from a sample by looking at the sample mean of the squares of the observed values.
- Related to the above, PCA can be interpreted as the singular value decomposition of a data matrix only when the columns have first been centered by their means.
Note that scaling is not necessary in the last two bullets, and centering may not be necessary in the first, so the two do not need to go hand in hand at all times.
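The last two bullets can be verified numerically. A small sketch (random toy matrix): after column-centering, $X'X/(n-1)$ reproduces the sample covariance matrix, and the singular values of the centered matrix recover the eigenvalues of that covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
Xc = X - X.mean(axis=0)          # center each column

# Sample covariance matrix from the centered data (with the 1/(n-1) factor)
S = Xc.T @ Xc / (len(X) - 1)
print(np.allclose(S, np.cov(X, rowvar=False)))

# PCA via SVD of the *centered* matrix: the right singular vectors are the
# principal axes, and squared singular values give the component variances
U, d, Vt = np.linalg.svd(Xc, full_matrices=False)
evals = np.linalg.eigvalsh(S)
print(np.allclose(np.sort(d**2 / (len(X) - 1)), np.sort(evals)))
```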
First you'd have to figure out what change in one variable is "equal" to what change in another. The usual standardization uses the standard deviation, but that may or may not be ideal. It may not even be possible to figure this out - particularly if the IVs are related to each other, in which case a change in one would go with a change in another.
Once you've figured that out, you can get the predicted values from various combinations of the IVs, varying each by the amount you thought was "equal" in the first step.
Another thing to do is to graph the predicted results as the independent variables change in value.
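The two steps above can be sketched as follows (toy data; using one standard deviation as the "equal" change, which, as noted, is only one possible choice). Each predictor is moved by its chosen step from a baseline at the means, and the resulting change in the predicted value is compared:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
x1 = rng.normal(0, 1, n)
x2 = rng.normal(0, 5, n)
y = 1 + 2 * x1 + 0.3 * x2 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Baseline prediction at the predictor means; then vary each IV by one SD
base = np.array([1.0, x1.mean(), x2.mean()])
changes = {}
for j, (name, sd) in enumerate([("x1", x1.std(ddof=1)),
                                ("x2", x2.std(ddof=1))], start=1):
    step = base.copy()
    step[j] += sd                     # move this IV by its "equal" amount
    changes[name] = step @ beta - base @ beta
print(changes)                        # predicted change for a one-SD move in each IV
```

These per-step predicted changes are also what you would plot when graphing predictions against each independent variable.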
Best Answer
No, standardizing the predictors $X$ does not force you to standardize the response $y$. Ask yourself "Why do I standardize?" and see what the standardization is doing. Some answers to that can be found at: What algorithms need feature scaling, beside from SVM? As to the additional question in the comments: the arguments in my answer linked above also apply to ridge and lasso. The arguments for standardizing $X$ in those cases do not apply to $y$ (though if you want, you can standardize $y$ too; it does no harm, but it can complicate interpretations). The same principles apply to SVR, but I do not know the answer in that case.
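A minimal ridge sketch of this point (toy data; the closed-form solve is for illustration, not any particular library's implementation). $X$ is standardized so the penalty treats all coefficients comparably, while $y$ stays on its original scale; the intercept is left unpenalized and, with centered standardized predictors, equals $\bar{y}$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 80, 3
X = rng.normal(0, [1, 10, 100], size=(n, p))   # predictors on very different scales
y = 5 + X @ np.array([1.0, 0.1, 0.01]) + rng.normal(0, 1, n)

# Standardize X so the ridge penalty shrinks coefficients comparably;
# leave y on its original scale and keep the intercept unpenalized
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
yc = y - y.mean()

lam = 1.0
beta = np.linalg.solve(Xs.T @ Xs + lam * np.eye(p), Xs.T @ yc)
intercept = y.mean()   # with centered standardized X, the intercept is ybar

yhat = intercept + Xs @ beta
print(beta)            # coefficients are now per-SD effects, in units of y
```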