Have you looked at the Wikipedia entry I linked in your question?
You don't plot "the mean of the data", but for each data point measured in two ways, you plot the difference in the two measurements ($y$) against the average of the two measurements ($x$). Using R and some toy data:
> set.seed(1)
> measurements <- matrix(rnorm(20), ncol=2)
> measurements
            [,1]        [,2]
 [1,] -0.6264538  1.51178117
 [2,]  0.1836433  0.38984324
 [3,] -0.8356286 -0.62124058
 [4,]  1.5952808 -2.21469989
 [5,]  0.3295078  1.12493092
 [6,] -0.8204684 -0.04493361
 [7,]  0.4874291 -0.01619026
 [8,]  0.7383247  0.94383621
 [9,]  0.5757814  0.82122120
[10,] -0.3053884  0.59390132
> xx <- rowMeans(measurements) # x coordinate: row-wise average
> yy <- apply(measurements, 1, diff) # y coordinate: row-wise difference
> xx
 [1]  0.4426637  0.2867433 -0.7284346 -0.3097095  0.7272193 -0.4327010  0.2356194
 [8]  0.8410805  0.6985013  0.1442565
> yy
 [1]  2.1382350  0.2061999  0.2143880 -3.8099807  0.7954231  0.7755348 -0.5036193
 [8]  0.2055115  0.2454398  0.8992897
> plot(xx, yy, pch=19, xlab="Average", ylab="Difference")
To get the limits of agreement (see under "Application" in the Wikipedia page), you calculate the mean and the standard deviation of the differences, i.e., the $y$ values, and plot horizontal lines at the mean $\pm 1.96$ standard deviations.
> upper <- mean(yy) + 1.96*sd(yy)
> lower <- mean(yy) - 1.96*sd(yy)
> upper
[1] 3.141753
> lower
[1] -2.908468
> abline(h=c(upper,lower), lty=2)
(You can't see the upper limit of agreement because the plot only goes up to $y\approx 2.1$.)
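For convenience, the console steps above can be collected into a small helper function. This is just a sketch; the name `bland_altman()` is my own, not a standard R function, and I widen `ylim` so that both limits of agreement stay visible:

```r
# Sketch of a helper collecting the steps above; bland_altman() is my
# own name for it, not a standard R function.
bland_altman <- function(m1, m2) {
  avg <- (m1 + m2) / 2                           # x: per-pair average
  dif <- m2 - m1                                 # y: per-pair difference
  loa <- mean(dif) + c(-1.96, 1.96) * sd(dif)    # limits of agreement
  plot(avg, dif, pch = 19, xlab = "Average", ylab = "Difference",
       ylim = range(c(dif, loa)))                # widen y-axis to show both limits
  abline(h = mean(dif), lty = 1)                 # mean difference
  abline(h = loa, lty = 2)                       # limits of agreement
  invisible(list(mean = mean(dif), lower = loa[1], upper = loa[2]))
}

set.seed(1)
measurements <- matrix(rnorm(20), ncol = 2)
res <- bland_altman(measurements[, 1], measurements[, 2])
```

With the same seed and data as above, this reproduces the limits 3.141753 and -2.908468 in one call.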
As to the interpretation of the plot and the limits of agreement, again look to Wikipedia:
If the differences within mean ± 1.96 SD are not clinically important, the two methods may be used interchangeably.
The problem with using correlations as a measure of agreement is that what they really assess is the ordering of the $X_i$ and $Y_i$ values, and their relative spacing, but not whether the numbers themselves agree (cf. my answer here: Does Spearman's $r=0.38$ indicate agreement?). On the other hand, if the numbers are incommensurate, it makes no sense to try to determine whether they agree; it can't mean anything whether they do or don't. As a result, a Bland-Altman plot can't be of any value here. However, a correlation might offer some (albeit little) value.
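To see why a high correlation need not mean agreement, here is a toy example (the factor-of-two scaling is invented for illustration) in which two methods correlate perfectly yet report very different numbers:

```r
# Toy illustration: perfect correlation, poor agreement.
set.seed(2)
true_value <- runif(10, 0, 10)
method_A   <- true_value              # unbiased method
method_B   <- 2 * true_value + 1      # same ordering, but wrong scale

cor(method_A, method_B)               # exactly 1: ordering/spacing agree
mean(method_B - method_A)             # large mean difference: numbers disagree
```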
From an exploratory point of view, I would start with a regular, old scatterplot. I might also fit a simple linear regression and test for curvature in the relationship. It can often be the case that different measures are differentially sensitive at different ranges. For example, they might do equally well at measuring what you want in the middle of their range, but one does a better job of measuring lower values (whereas the other just starts to output the same low number, perhaps a limit of detection), and vice versa for higher values. What I have in mind is that the relationships aren't linear. Consider this stylized figure of the relationship between energy and the temperature of water:
Then imagine having temperature and something else, perhaps volume (ice begins to expand at lower temperatures), both as measures of energy.
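As a sketch of this floor-effect idea (the detection limit of 2 and the noise levels are invented for illustration), you can simulate a method that flat-lines at low values and test for curvature with a quadratic term in the regression:

```r
# One method tracks the true value; the other hits a floor (limit of
# detection) at 2, so the relationship between them is kinked, not linear.
set.seed(3)
true_value <- runif(100, 0, 10)
method_A   <- true_value + rnorm(100, sd = 0.3)
method_B   <- pmax(true_value, 2) + rnorm(100, sd = 0.3)  # floor at 2

plot(method_A, method_B)                    # kink visible at the low end
fit <- lm(method_B ~ poly(method_A, 2))     # quadratic term tests curvature
p_curve <- summary(fit)$coefficients["poly(method_A, 2)2", "Pr(>|t|)"]
p_curve                                     # small p-value signals curvature
```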
Once/if you were satisfied that the relationship is linear, your ability to measure the amount of agreement would be limited to Pearson's product-moment correlation; a Bland-Altman plot just won't work here.
Best Answer
The Bland-Altman plot is more widely known as the Tukey mean-difference plot, one of the many charts devised by John Tukey (http://en.wikipedia.org/wiki/John_Tukey).
The idea is that the x-axis is the mean of your two measurements, which is your best guess as to the "correct" result, and the y-axis is the difference between the two measurements. The chart can then highlight certain types of anomalies in the measurements. For example, if one method always gives too high a result, then all of your points will sit above or all below the zero line. It can also reveal, for example, that one method over-estimates high values and under-estimates low values.
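As a quick simulation of the first case (the bias of 1 unit and the noise levels are invented for illustration), a constant offset pushes every point to one side of the zero line:

```r
# Simulated constant bias: method_B reads about 1 unit too high.
set.seed(4)
true_value <- rnorm(30, 10, 2)
method_A   <- true_value + rnorm(30, sd = 0.2)
method_B   <- true_value + 1 + rnorm(30, sd = 0.2)  # biased upward by 1

avg <- (method_A + method_B) / 2
dif <- method_B - method_A
plot(avg, dif, pch = 19, xlab = "Average", ylab = "Difference")
abline(h = 0, lty = 2)
mean(dif > 0)   # near 1: nearly all points lie above the zero line
```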
If you see the points on the Bland-Altman plot scattered all over the place, above and below zero, then this suggests that there is no consistent bias of one approach versus the other (of course, there could be hidden biases that this plot does not reveal).
Essentially, it is a good first step for exploring the data. Other techniques can be used to dig into more particular sorts of behaviour of the measurements.