The question is quite vague so I am going to assume you want to choose an appropriate performance measure to compare different models. For a good overview of the key differences between ROC and PR curves, you can refer to the following paper: The Relationship Between Precision-Recall and ROC Curves by Davis and Goadrich.
To quote Davis and Goadrich:
> However, when dealing with highly skewed datasets, Precision-Recall (PR) curves give a more informative picture of an algorithm's performance.
ROC curves plot the true positive rate (TPR) against the false positive rate (FPR). To be more explicit:
$$FPR = \frac{FP}{FP+TN}, \quad TPR=\frac{TP}{TP+FN}.$$
PR curves plot precision versus recall (recall is the same quantity as TPR), or more explicitly:
$$recall = \frac{TP}{TP+FN} = TPR,\quad precision = \frac{TP}{TP+FP}$$
Precision is directly influenced by class (im)balance, since the number of false positives scales with the size of the negative class, whereas TPR and FPR are each normalized within a single class. This is why ROC curves do not capture such effects.
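To make this concrete, here is a small sketch (the confusion-matrix counts are made up for illustration) showing that precision collapses under class imbalance while TPR and FPR are unchanged:

```python
def rates(tp, fn, fp, tn):
    """Compute (TPR, FPR, precision) from confusion-matrix counts."""
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    precision = tp / (tp + fp)
    return tpr, fpr, precision

# Balanced: 100 positives, 100 negatives; TPR = 0.8, FPR = 0.1.
print(rates(tp=80, fn=20, fp=10, tn=90))      # precision = 80/90 ≈ 0.89

# Imbalanced: same TPR and FPR, but 10,000 negatives.
# FP scales with the negative class, so precision collapses.
print(rates(tp=80, fn=20, fp=1000, tn=9000))  # precision = 80/1080 ≈ 0.07
```

Both operating points sit at exactly the same location in ROC space, yet their precision differs by an order of magnitude.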
Precision-recall curves are better at highlighting differences between models on highly imbalanced data sets. If you want to compare models in imbalanced settings, the area under the PR curve will typically exhibit larger differences than the area under the ROC curve.
That said, ROC curves are much more common (even if they are less suited). Depending on your audience, ROC curves may be the lingua franca, so using those is probably the safer choice. If one model completely dominates another in PR space (i.e. it has higher precision over the entire recall range), it also dominates in ROC space. If the curves cross in one space, they also cross in the other. In other words, the main conclusions will be similar no matter which curve you use.
Shameless advertisement. As an additional example, you could have a look at one of my papers in which I report both ROC and PR curves in an imbalanced setting. Figure 3 contains ROC and PR curves for identical models, clearly showing the difference between the two. To compare area under the PR versus area under ROC you can compare tables 1-2 (AUPR) and tables 3-4 (AUROC) where you can see that AUPR shows much larger differences between individual models than AUROC. This emphasizes the suitability of PR curves once more.
One of the advantages to ROC curves is that they are agnostic to class skew. ROC curves remain the same whether your data is balanced or not, bar some finite-sample effects when you have very few examples of one class.
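A quick way to see this skew-invariance is to subsample one class and check that the ROC operating points barely move. A minimal sketch with synthetic scores (the Gaussian score distributions are an assumption, purely for illustration):

```python
import random

random.seed(0)

# Synthetic scores: positives tend to score higher than negatives.
pos = [random.gauss(1.0, 1.0) for _ in range(500)]
neg = [random.gauss(0.0, 1.0) for _ in range(5000)]

def roc_point(pos, neg, thr):
    """TPR and FPR at a given decision threshold."""
    tpr = sum(s >= thr for s in pos) / len(pos)
    fpr = sum(s >= thr for s in neg) / len(neg)
    return tpr, fpr

# Throw away 90% of the negatives: the ROC points barely move,
# because FPR is a fraction *within* the negative class.
neg_sub = random.sample(neg, 500)
for thr in (0.0, 0.5, 1.0):
    print(thr, roc_point(pos, neg, thr), roc_point(pos, neg_sub, thr))
```

The small remaining differences are exactly the finite-sample effects mentioned above; they shrink as the subsample grows.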
As such, weighted ROC curves have nothing to do with class balance. Instead, weighted ROC curves are used when you are interested in performance in a certain region of ROC space (e.g. high recall), and they were proposed as an improvement over partial AUC (which does exactly this but has some issues). You can read more about it in Weighted Area Under the Receiver Operating Characteristic Curve and Its Application to Gene Selection by Li and Fine.
Best Answer
Yes, it is average precision, where the average is taken across different thresholds for saying "yes".
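A minimal sketch of that computation, in the common "precision averaged at each newly-recalled positive" form of average precision (the toy scores and labels below are made up):

```python
def average_precision(scores, labels):
    """Average precision: mean of precision@k over the ranks k
    at which a positive is retrieved (i.e. where recall increases)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos = sum(labels)
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i]:                # recall ticks up at this threshold
            tp += 1
            ap += (tp / rank) / n_pos
    return ap

# Toy ranking: positives at ranks 1 and 3 -> AP = (1/1 + 2/3) / 2
print(average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0]))  # ≈ 0.833
```

Each positive example contributes the precision at the threshold that just admits it, so the sum walks through the thresholds for saying "yes".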
The precision-recall curve typically starts out relatively high and descends, though not monotonically. On the right edge, you guarantee perfect recall by saying "yes" to everything, so precision falls to the base rate. On the left, you require absolute certainty to say "yes", so you miss a lot, but hopefully everything you identify is a target.
Because of noise there will be fluctuations in the line.
If the base rate is low, it's possible that a model has a high area under the ROC curve but still a low area under the PR curve. For example, Andy Berger notes this is the case for conflict studies, and provides some example graphs.