If I understand you correctly, you have mean-centered your independent variable by subtracting the mean of that variable from every observation.
If so, then the coefficient reflects the effect of a 1-unit increase in your independent variable, just as it would if the variable were uncentered. What centering changes is the interpretation of the intercept/constant, as well as of any interaction terms involving the centered variable. Many seem to think that centering resolves collinearity problems; it certainly does not, but merely masks them by shifting the collinearity onto the intercept.
Try running the model with the uncentered term and the centered term. Do you see any difference in the effect for that variable?
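A quick way to see this is to fit the same model with the raw and the mean-centered predictor. Below is a minimal sketch in Python on simulated data; all names and values are made up purely for illustration, not taken from your data. The slope is identical in the two fits, while the intercept changes from the fitted value at $x = 0$ to the fitted value at the sample mean of $x$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: one continuous predictor and a linear outcome with noise
# (all values here are illustrative assumptions).
x = rng.normal(loc=50, scale=10, size=200)
y = 3.0 + 0.5 * x + rng.normal(scale=2.0, size=200)

def ols(x, y):
    """Return (intercept, slope) from a simple OLS fit."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

b_raw = ols(x, y)             # uncentered predictor
b_ctr = ols(x - x.mean(), y)  # mean-centered predictor

print("slopes     (raw, centered):", b_raw[1], b_ctr[1])  # identical
print("intercepts (raw, centered):", b_raw[0], b_ctr[0])  # differ: fitted y at x = 0 vs. at x = mean(x)
```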
I assume that you are using the OLS estimator on this linear regression model. You can use the inequality constrained least-squares estimator, which will be the solution to a minimization problem under inequality constraints. Using standard matrix notation (vectors are column vectors) the minimization problem is stated as
$$\min_{\beta}\; (\mathbf y-\mathbf X\beta)'(\mathbf y-\mathbf X\beta) \quad \text{s.t.}\quad -\mathbf Z\beta \le \mathbf 0 $$
...where $\mathbf y$ is $n \times 1$ , $\mathbf X$ is $n\times k$, $\beta$ is $k\times 1$ and $\mathbf Z$ is the $m \times k$ matrix containing the out-of-sample regressor series of length $m$ that are used for prediction. We have $m$ linear inequality constraints (and the objective function is convex, so the first order conditions are sufficient for a minimum).
The Lagrangean of this problem is
$$L = (\mathbf y-\mathbf X\beta)'(\mathbf y-\mathbf X\beta) -\lambda'\mathbf Z\beta = \mathbf y'\mathbf y-\mathbf y'\mathbf X\beta - \beta'\mathbf X'\mathbf y+ \beta'\mathbf X'\mathbf X\beta-\lambda'\mathbf Z\beta$$
$$= \mathbf y'\mathbf y - 2\beta'\mathbf X'\mathbf y+ \beta'\mathbf X'\mathbf X\beta-\lambda'\mathbf Z\beta $$
where $\lambda$ is an $m \times 1$ column vector of non-negative Karush-Kuhn-Tucker multipliers. The first order conditions are (you may want to review rules for matrix and vector differentiation)
$$\frac {\partial L}{\partial \beta}= \mathbf 0\Rightarrow - 2\mathbf X'\mathbf y +2\mathbf X'\mathbf X\beta - \mathbf Z'\lambda = \mathbf 0 $$
$$\Rightarrow \hat \beta_R = \left(\mathbf X'\mathbf X\right)^{-1}\mathbf X'\mathbf y + \frac 12\left(\mathbf X'\mathbf X\right)^{-1}\mathbf Z'\lambda = \hat \beta_{OLS}+ \left(\mathbf X'\mathbf X\right)^{-1}\mathbf Z'\xi \qquad [1]$$
...where $\xi = \frac 12 \lambda$, for convenience, and $\hat \beta_{OLS}$ is the estimator we would obtain from ordinary least squares estimation.
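For completeness, $[1]$ comes from the stationarity condition alone; it is coupled with the remaining Karush-Kuhn-Tucker conditions (primal feasibility, dual feasibility and complementary slackness), which determine which constraints bind and hence the value of $\xi$:

$$\mathbf Z\hat \beta_R \ge \mathbf 0, \qquad \xi \ge \mathbf 0, \qquad \xi_j\left(\mathbf Z\hat \beta_R\right)_j = 0, \quad j=1,\dots,m $$

If no constraint binds, $\xi = \mathbf 0$ and $\hat \beta_R$ coincides with $\hat \beta_{OLS}$.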
The method is fully elaborated in Liew (1976).
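In practice the minimization can also be handed directly to a numerical optimizer rather than working through the multipliers by hand. Below is a minimal sketch using scipy.optimize.minimize with the SLSQP solver on simulated data; the dimensions, variable names and data-generating process are assumptions made only for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Illustrative dimensions and data (all names and values are assumptions for the sketch).
n, k, m = 100, 3, 5
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(size=n)
Z = rng.normal(size=(m, k))  # out-of-sample regressor rows used for prediction

def ssr(b):
    """Least-squares criterion (y - Xb)'(y - Xb)."""
    r = y - X @ b
    return r @ r

# SLSQP expects inequality constraints of the form g(b) >= 0;
# g(b) = Zb >= 0 is equivalent to -Zb <= 0 in the problem above.
cons = {"type": "ineq", "fun": lambda b: Z @ b}

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # unconstrained OLS estimate
res = minimize(ssr, x0=beta_ols, constraints=[cons], method="SLSQP")
beta_r = res.x

print("OLS estimate:          ", beta_ols)
print("Constrained estimate:  ", beta_r)
print("Predictions Z @ beta_r:", Z @ beta_r)  # all non-negative up to solver tolerance
```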
A predictor variable in a regression analysis can be of any type, numerical or categorical, and positive or negative values do not matter. Likewise, the raw correlation between the DV and an IV is not what the modelling rests on; what we check is the distribution of the residuals of the fitted regression model. Sharing some sample data would make it easier to give you more specific insights.
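As an illustration of checking the residual distribution rather than the raw DV-IV correlation, here is a minimal sketch on simulated data with one numeric predictor (taking negative values) and one categorical predictor; all names and values are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical data: one numeric predictor (with negative values) and one
# categorical predictor with three levels.
n = 150
x_num = rng.normal(loc=-5.0, scale=3.0, size=n)
group = rng.integers(0, 3, size=n)
group_dummies = np.eye(3)[group][:, 1:]  # dummy-code the categorical, dropping the reference level
y = 2.0 + 0.8 * x_num + np.array([0.0, 1.5, -1.0])[group] + rng.normal(scale=1.0, size=n)

# Fit OLS with an intercept, the numeric predictor, and the categorical dummies.
X = np.column_stack([np.ones(n), x_num, group_dummies])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ coef

# What matters is the residual distribution, not the raw correlation between DV and IV.
stat, pval = stats.shapiro(residuals)
print("Shapiro-Wilk test on residuals: W = %.3f, p = %.3f" % (stat, pval))
```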