What you have is a mixture model setup. So to start, introduce the mixture-identifying variable, which you don't have yet: an indicator variable saying whether a case comes from one regression (say Z=0) or the other (say Z=1). It will probably enter the full model as an interaction with a slope and/or intercept, allowing these to change depending on which regression generates the point (although other, more complex arrangements are possible). Formulate that model carefully to ensure the mixture dependencies are what you want; there are a lot of possibilities.
Now, if Z were observed you'd know how to fit the complete model and get the betas from it, because there would be nothing unobserved on its right-hand side. But assuming you see only the data and the covariates, you don't observe Z. However, you have assumed a complete model for how the data are generated for each value of Z. So (E-step) use that to get a posterior distribution over the possible values of Z for each data point, using the model with its parameters as they stand and some prior assumption about the distribution of Z (or you could estimate that too). Recall that the posterior probability of Z=1 is just the expectation of Z. Now (M-step) use that expected Z as if it were a real observation of Z to refit the whole model. The complete-data likelihood will, in normal circumstances, not go down.
Alternate these two steps until the likelihood of the data under the model stops rising, retrieve the final set of betas, hope you're not stuck in a local maximum, and declare that you've estimated them.
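The loop above can be sketched in a few lines. Here is a minimal version for a two-component mixture of simple linear regressions with Gaussian noise; the function name, the initialisation scheme, and the simulated numbers are all illustrative, not part of any particular library.

```python
import numpy as np

def em_mixture_regression(x, y, n_iter=300):
    """EM for a two-component mixture of simple linear regressions.

    Model: y = a_z + b_z * x + Normal(0, s_z) noise, with z in {0, 1}.
    Returns intercepts a, slopes b, noise sds s, mixing weight pi,
    and the posterior P(Z=1 | data) for each point.
    """
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    # Initialise responsibilities from the sign of the pooled-OLS
    # residuals, softened so neither component starts empty.
    pooled, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = np.where(y - X @ pooled > 0, 0.95, 0.05)
    a, b, s = np.zeros(2), np.zeros(2), np.ones(2)
    pi = 0.5
    for _ in range(n_iter):
        # M-step: weighted least squares per component, treating the
        # expected Z (the responsibilities) as if it were observed.
        for k, w in enumerate([1 - r, r]):
            W = w.sum()
            xb, yb = (w * x).sum() / W, (w * y).sum() / W
            b[k] = (w * (x - xb) * (y - yb)).sum() / (w * (x - xb) ** 2).sum()
            a[k] = yb - b[k] * xb
            res = y - a[k] - b[k] * x
            s[k] = max(np.sqrt((w * res ** 2).sum() / W), 1e-6)
        pi = r.mean()
        # E-step: posterior P(Z=1) from the two Gaussian likelihoods
        # (the sqrt(2*pi) normalising constant cancels in the ratio).
        def dens(k):
            res = y - a[k] - b[k] * x
            return np.exp(-0.5 * (res / s[k]) ** 2) / s[k]
        num = pi * dens(1)
        r = num / (num + (1 - pi) * dens(0))
    return a, b, s, pi, r

# Demo on simulated data from two known lines (made-up values).
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 300)
z = rng.random(300) < 0.5
y = np.where(z, 1 + 2 * x, 5 - x) + rng.normal(0, 0.3, 300)
a, b, s, pi, post = em_mixture_regression(x, y)
```

The M-step here is exactly "refit the model using the expected Z as if it were observed": each component gets a weighted least-squares fit with the responsibilities as weights.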
Contrary to @whuber's claim, the means of x and y are recoverable from the information given.
Okay, so you have the line equation
$$y_i=\alpha +x_i\beta + e_i$$
with estimates $\hat{\beta}=r\frac{s_y}{s_x}$ and $\hat{\alpha}=\overline{y}-\hat{\beta}\overline{x}$,
where $r$ is the correlation. The question doesn't state whether the standard deviation (0.482) is $s_y$ or $s_x$ (the MLE standard deviation, with divisor $n$). Either way, you can work out the other from the information given, for their ratio must satisfy:
$$\frac{\hat{\beta}}{r}=\frac{s_y}{s_x}$$
The slope can't be negative if the correlation is positive, so I assume something has gone wrong somewhere (you report a correlation of 0.117 and a slope of -0.00024, which is impossible). This will affect the numbers, but not the general method. So I will assume the standard deviations are both known, but not write in the specific values; the same goes for the rest of the actual numbers.
Now the variance of $\hat{\beta}$ is given by:
$$var(\hat{\beta})=s_e^2(X^TX)^{-1}_{22}=\frac{s_e^2 (X^TX)_{11}}{|X^TX|}$$
Note that $(X^TX)_{11}=n$ and $s_e^2$ is the "mean square error". The variance of $\hat{\alpha}$ is given by:
$$var(\hat{\alpha})=s_e^2(X^TX)^{-1}_{11}=\frac{s_e^2 (X^TX)_{22}}{|X^TX|}$$
Now $(X^TX)_{22}=\sum_i x_i^2 = n(s_x^2+\overline{x}^2)$
And dividing these two variances gives:
$$\frac{var(\hat{\alpha})}{var(\hat{\beta})}=\frac{(X^TX)_{22}}{(X^TX)_{11}}=\frac{n(s_x^2+\overline{x}^2)}{n}=s_x^2+\overline{x}^2$$
Now all quantities in the equation are known, except for the mean $\overline{x}$. So we can re-arrange this equation and solve for the mean:
$$\overline{x}=\pm\sqrt{\frac{var(\hat{\alpha})}{var(\hat{\beta})}-s_x^2}$$
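As a quick numerical check that the ratio of the coefficient variances pins down $\overline{x}$, here is a sketch on simulated data (all sample values are invented, not the question's):

```python
import numpy as np

# Simulate a regression with strictly positive x (like mileage).
rng = np.random.default_rng(0)
n = 102
x = rng.uniform(1, 20, n)
y = 3.0 + 0.5 * x + rng.normal(0, 2, n)

# Ordinary least squares and the variance-covariance matrix of
# (alpha_hat, beta_hat).
X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
s_e2 = (resid ** 2).sum() / (n - 2)        # mean square error
C = s_e2 * np.linalg.inv(X.T @ X)

# var(alpha_hat)/var(beta_hat) = s_x^2 + xbar^2, so xbar falls out.
s_x2 = np.var(x)                           # MLE variance, divisor n
xbar_recovered = np.sqrt(C[0, 0] / C[1, 1] - s_x2)
```

The ratio `C[0, 0] / C[1, 1]` equals $\sum_i x_i^2/n$ exactly, whatever the error variance is, so the recovered value matches `x.mean()` to floating-point precision.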
But we know from the start that $x_i>0$: you can't drive "negative miles". So only the positive square root is taken. The rest is straightforward CI stuff. The estimate of the mean $\hat{\overline{y}}$ is given by:
$$\hat{\overline{y}}=\hat{\alpha}+\hat{\beta}\overline{x}=\hat{\alpha}+\hat{\beta}\sqrt{\frac{var(\hat{\alpha})}{var(\hat{\beta})}-s_x^2}=\overline{y}$$
And the variance is given by:
$$var(\hat{\overline{y}})=var(\hat{\alpha})+\overline{x}^2 var(\hat{\beta})+2\overline{x}cov(\hat{\alpha},\hat{\beta})$$
Now the covariance is equal to:
$$cov(\hat{\alpha},\hat{\beta})=s_e^2(X^TX)^{-1}_{21}=-\frac{s_e^2 (X^TX)_{21}}{|X^TX|}=-\frac{s_e^2 n\overline{x}}{n^2s_x^2}=-\frac{s_e^2 \overline{x}}{ns_x^2}$$
And so the variance is given by:
$$var(\hat{\overline{y}})=var(\hat{\alpha})+\overline{x}^2 var(\hat{\beta})-\frac{2s_e^2 \overline{x}^2}{ns_x^2}=var(\hat{\alpha})+\left(\frac{var(\hat{\alpha})}{var(\hat{\beta})}-s_x^2\right)\left(var(\hat{\beta})-\frac{2s_e^2}{ns_x^2}\right)$$
So you construct your $100(1-P)\%$ confidence interval by taking $T_{1-P/2}^{(n-2)}$, the $1-P/2$ quantile of the Student's $t$ distribution with $n-2$ degrees of freedom (effectively equal to the standard normal here, since $n-2$ is about 100), and you have:
$$CI=\overline{y}\pm T_{1-P/2}^{(n-2)}\sqrt{var(\hat{\overline{y}})}$$
And all quantities are calculable, given the information.
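For concreteness, the whole recipe can be run end to end on simulated data. This is a sketch: the sample values below are invented, and with roughly 100 degrees of freedom the $t$ quantile is approximated by the normal one.

```python
import numpy as np
from statistics import NormalDist

# Simulated data standing in for the question's (unknown) sample.
rng = np.random.default_rng(1)
n = 102
x = rng.uniform(1, 20, n)
y = 3.0 + 0.5 * x + rng.normal(0, 2, n)

# OLS fit and the variance-covariance matrix of (alpha_hat, beta_hat).
X = np.column_stack([np.ones(n), x])
alpha_hat, beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - alpha_hat - beta_hat * x
s_e2 = (resid ** 2).sum() / (n - 2)
C = s_e2 * np.linalg.inv(X.T @ X)

# Recover the mean of x, then the estimate and variance of mean y.
xbar = np.sqrt(C[0, 0] / C[1, 1] - np.var(x))
ybar_hat = alpha_hat + beta_hat * xbar      # equals the sample mean of y
var_ybar = C[0, 0] + xbar ** 2 * C[1, 1] + 2 * xbar * C[0, 1]

# 95% CI; with ~100 df the t quantile is close to the normal quantile.
q = NormalDist().inv_cdf(0.975)
ci = (ybar_hat - q * np.sqrt(var_ybar), ybar_hat + q * np.sqrt(var_ybar))
```

As a sanity check, the variance assembled from the three covariance terms collapses algebraically to $s_e^2/n$, the usual variance of a sample mean of regression residual noise.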
Best Answer
The intercept and slope are as stated in the R output. R is not trying to trick you! The fitted model is
log(y) = 0.186 + 0.0424 * log(x)
On the unlogged scale, the fitted model is
y = exp(0.186) * x^0.0424
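A minimal sketch of the same idea in Python on made-up data: the data below are generated from a power law using the question's fitted coefficients, the model is fit on the log scale, and the fit is back-transformed.

```python
import numpy as np

# Simulate y = exp(0.186) * x^0.0424 with multiplicative lognormal noise.
rng = np.random.default_rng(0)
x = rng.uniform(1, 50, 200)
y = np.exp(0.186) * x ** 0.0424 * np.exp(rng.normal(0, 0.05, 200))

# OLS on the log-log scale: log(y) = a + b * log(x).
lx, ly = np.log(x), np.log(y)
X = np.column_stack([np.ones_like(lx), lx])
a, b = np.linalg.solve(X.T @ X, X.T @ ly)

# On the unlogged scale the fit is a power curve, not a straight line.
y_fit = np.exp(a) * x ** b
```

The estimated intercept and slope land near 0.186 and 0.0424, and exponentiating the log-scale fit gives exactly the power-law form quoted above.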