As @gung said, it would help if you gave your full equation and DV, but here, if the interaction between sex (female) and mobility is -10.1, it means that the effect of high mobility on the dependent variable is 10.1 units less for women than for men. Similarly, the effect of being female on the DV is 10.1 units less for high-mobility people than for low-mobility people.
For continuous variables it is much the same, except that the change is per unit of the other IV. So a coefficient of 1.3 for the interaction between weight and IQ means that the effect of IQ on the DV is 1.3 units higher for each one-unit increase (pound? kilogram?) in weight, and the effect of weight is 1.3 units higher for each one-point increase in IQ. In other words, the effect is more positive for people who are both smart and heavy than for people who are only one or the other.
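If it helps to see this concretely, here is a minimal sketch in Python with simulated data; the variable names, the 1.3 interaction, and the other coefficients are invented for illustration, not taken from your model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
weight = rng.normal(70, 10, n)   # say, kilograms
iq = rng.normal(100, 15, n)
# Simulated "truth": the IQ slope grows by 1.3 per extra unit of weight (and vice versa).
y = 2 + 0.5 * weight + 1.0 * iq + 1.3 * weight * iq + rng.normal(0, 5, n)

df = pd.DataFrame({"y": y, "weight": weight, "iq": iq})
fit = smf.ols("y ~ weight * iq", data=df).fit()
print(fit.params["weight:iq"])   # close to 1.3
```

The `weight:iq` coefficient is read exactly as described above: the change in one variable's slope per one-unit change in the other.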
Both statements are true.
For the part about omitting relevant variables, I suggest the Wikipedia article on omitted-variable bias for both the intuition and the algebra behind it.
As for adding what you called an irrelevant variable (one whose true coefficient is 0): suppose your relevant variables are in a matrix $X$ and you consider adding another variable $u$, so the new design matrix is $X_{+} = \begin{bmatrix}X &u\end{bmatrix}$. Skipping some steps, if you use a few linear algebra properties you get
$$ (X_+^TX_+) = \begin{bmatrix}
X^TX & X^Tu\\
u^TX & u^Tu
\end{bmatrix}, $$
$$(X_+^TX_+)^{-1} = \begin{bmatrix}
A_{11} & A_{12}^T\\
A_{12} & A_{22}
\end{bmatrix}. $$
In particular, we are interested in how the variances of the important variables (the ones from $X$) behave, which means we want to look at the diagonal entries of $A_{11}$. Using the block matrix inversion formula, we get
$$A_{11} = (X^TX)^{-1} + (X^TX)^{-1}X^Tu\left(u^Tu - u^TX(X^TX)^{-1}X^Tu\right)^{-1}u^TX(X^TX)^{-1},$$
where the second term has non-negative diagonal entries (it is exactly zero when $u$ is orthogonal to the columns of $X$, i.e. $X^Tu = 0$).
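A quick numerical check makes the diagonal comparison concrete. This is just a sketch with simulated data in numpy (the dimensions and the extra column $u$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
u = rng.normal(size=(n, 1))             # the "irrelevant" extra column
X_plus = np.hstack([X, u])

A = np.linalg.inv(X.T @ X)
A_plus = np.linalg.inv(X_plus.T @ X_plus)

print(np.diag(A))            # variances (up to sigma^2) without u
print(np.diag(A_plus)[:p])   # same entries after adding u: each one is >= the above
```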
To get some intuition on how this works, consider the simple linear regression case
$$Y_i = \beta_0 + \beta_1x_i + \epsilon_i, $$
with $\beta_1 = 0$ ($x$ has no effect on the expected value of $Y$).
We know that $\hat{\beta}_0 = \bar{Y} - \hat\beta_1 \bar x$, while in the "true" model we would simply have $\hat\beta_0 = \bar Y$. Analogously to the algebra above, with $x$ in the model we have
$$ Var(\hat \beta_0) = Var(\bar Y) + \bar x^2 Var(\hat \beta_1) \geq Var(\bar Y), $$
with equality if $\bar x = 0$ (the mentioned orthogonality condition).
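You can also see the variance inflation by simulation. The sketch below uses made-up values (true $\beta_1 = 0$, a fixed design with nonzero $\bar x$) and compares the intercept estimate from the intercept-only model with the one from the model that includes $x$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 50, 5000
x = rng.normal(3.0, 1.0, n)             # fixed design with nonzero mean
b0_with_x, b0_without_x = [], []
for _ in range(reps):
    y = 1.0 + rng.normal(0, 1, n)       # true beta_1 = 0
    b0_without_x.append(y.mean())
    X = np.column_stack([np.ones(n), x])
    b0_with_x.append(np.linalg.lstsq(X, y, rcond=None)[0][0])

print(np.var(b0_without_x), np.var(b0_with_x))  # the second is clearly larger
```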
In summary, adding a variable to a linear regression model (provided it is not orthogonal to the existing ones) can reduce the bias of the coefficient estimates but will increase their variances. Since you never know which variables are truly relevant, you have to balance this bias-variance trade-off.
Many methods have been proposed for variable selection; one example is the LASSO.
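For completeness, here is a tiny LASSO illustration using scikit-learn's `Lasso`; the data, the two "relevant" coefficients, and the penalty `alpha=0.1` are arbitrary choices for the sketch:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n = 200
X = rng.normal(size=(n, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 1, n)   # only columns 0 and 1 matter

print(Lasso(alpha=0.1).fit(X, y).coef_)
# The irrelevant coefficients typically shrink to exactly zero.
```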
Best Answer
A parameter estimate in a regression model (e.g., $\hat\beta_i$) will change if a variable, $X_j$, is added to the model that is:

1. correlated with that variable, $X_i$ (which was already in the model), and
2. correlated with the response variable, $Y$.

An estimated beta will not change when a new variable is added if either of the above correlations is zero. Note that whether they are uncorrelated in the population (i.e., $\rho_{(X_i, X_j)}=0$, or $\rho_{(X_j, Y)}=0$) is irrelevant. What matters is that both sample correlations are exactly $0$. This will essentially never be the case in practice unless you are working with experimental data where the variables were manipulated such that they are uncorrelated by design.
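A small simulation can make the "exactly zero sample correlation" point concrete. In the sketch below (my own construction, not from the original answer) the new column is built to be exactly uncorrelated in-sample with $X_i$, so the estimate of $\beta_i$ does not move:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x_i = rng.normal(size=n)
y = 1.0 + 2.0 * x_i + rng.normal(0, 1, n)

# Build x_j with exactly zero sample correlation with x_i by residualizing
# a random vector on [1, x_i].
Q = np.column_stack([np.ones(n), x_i])
z = rng.normal(size=n)
x_j = z - Q @ np.linalg.lstsq(Q, z, rcond=None)[0]

b_small = np.linalg.lstsq(Q, y, rcond=None)[0]
b_big = np.linalg.lstsq(np.column_stack([Q, x_j]), y, rcond=None)[0]
print(b_small[1], b_big[1])   # identical slope for x_i (up to floating-point error)
```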
Note also that the amount the parameters change may not be terribly meaningful (that depends, at least in part, on your theory). Moreover, the amount they can change is a function of the magnitudes of the two correlations above.
On a different note, it is not really correct to think of this phenomenon as "the coefficient of a given variable [being] influenced by the coefficient of another variable". It isn't the betas that are influencing each other. This phenomenon is a natural result of the algorithm that statistical software uses to estimate the slope parameters. Imagine a situation where $Y$ is caused by both $X_i$ and $X_j$, which in turn are correlated with each other. If only $X_i$ is in the model, some of the variation in $Y$ that is due to $X_j$ will be inappropriately attributed to $X_i$. This means that the estimated coefficient of $X_i$ is biased; this is called omitted-variable bias.
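Here is a brief simulation of that omitted-variable scenario; the coefficients and the strength of the correlation between $X_i$ and $X_j$ are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
x_j = rng.normal(size=n)
x_i = 0.7 * x_j + rng.normal(0, 1, n)        # X_i and X_j are correlated
y = 1.0 * x_i + 2.0 * x_j + rng.normal(0, 1, n)

ones = np.ones(n)
short = np.linalg.lstsq(np.column_stack([ones, x_i]), y, rcond=None)[0]
long = np.linalg.lstsq(np.column_stack([ones, x_i, x_j]), y, rcond=None)[0]
print(short[1])   # noticeably above the true value 1.0: omitted-variable bias
print(long[1])    # close to 1.0 once X_j is included
```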