All centrality measures depend on the structure of your data; for weighted graphs, Laplacian centrality is a convincing choice.
First, define a matrix $W$ to store the edge weights:
$ W_{ij} = \left\{
\begin{array}{lr}
w_{ij} & : i \neq j\\
0 & : i = j
\end{array}
\right. $
Next, define a diagonal matrix $X$, where each diagonal entry is the sum of the weights incident to the corresponding node:
$ X_{ij} = \left\{
\begin{array}{lr}
0 & : i \neq j\\
\sum\limits_{j=1}^n W_{ij} & : i = j
\end{array}
\right. $
The Laplacian is then defined by
$L = X - W$
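As a minimal sketch of this construction in NumPy (the 4-node weighted graph below is made up for illustration):

```python
import numpy as np

# Hypothetical symmetric weight matrix W with zero diagonal, per the
# definition above (a made-up 4-node weighted graph).
W = np.array([
    [0.0, 2.0, 0.0, 1.0],
    [2.0, 0.0, 3.0, 0.0],
    [0.0, 3.0, 0.0, 4.0],
    [1.0, 0.0, 4.0, 0.0],
])

# X is diagonal; X_ii is the sum of the weights incident to node i.
X = np.diag(W.sum(axis=1))

# The weighted Laplacian.
L = X - W
```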
We can then define a property of the graph, the Laplacian energy:
$E = \sum\limits_{i=1}^n\lambda_i^2$
where the $\lambda_i$ are the eigenvalues of the Laplacian.
Rather than eigensolving our matrices, we can equivalently compute
$E = \sum\limits_{i=1}^n x_i^2 + 2\sum\limits_{i<j}w_{ij}^2$
where $x_i = X_{ii}$ is the weighted degree of node $i$.
To define the importance of a particular node in a graph, we remove that node and calculate the drop in energy: the Laplacian centrality of node $v$ is $E(G) - E(G \setminus v)$.
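A sketch of the whole recipe (reusing the toy graph above; the function names are my own, not from any particular library):

```python
import numpy as np

def laplacian_energy(W):
    # Closed form: sum_i x_i^2 + 2 * sum_{i<j} w_ij^2, which equals the
    # sum of the squared eigenvalues of L = X - W.
    x = W.sum(axis=1)                        # diagonal of X (weighted degrees)
    return np.sum(x**2) + 2.0 * np.sum(np.triu(W, k=1)**2)

def laplacian_centrality(W):
    # Importance of node v = drop in energy when v is removed.
    E = laplacian_energy(W)
    n = W.shape[0]
    scores = np.empty(n)
    for v in range(n):
        keep = np.delete(np.arange(n), v)
        scores[v] = E - laplacian_energy(W[np.ix_(keep, keep)])
    return scores

W = np.array([
    [0.0, 2.0, 0.0, 1.0],
    [2.0, 0.0, 3.0, 0.0],
    [0.0, 3.0, 0.0, 4.0],
    [1.0, 0.0, 4.0, 0.0],
])

# Sanity check: the closed form agrees with the eigenvalue definition.
L = np.diag(W.sum(axis=1)) - W
assert np.isclose(laplacian_energy(W), np.sum(np.linalg.eigvalsh(L)**2))

print(laplacian_centrality(W))   # higher score = more central node
```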
Consider the following data, generated by applying an RBF kernel to 1000 multivariate normal observations centered at the origin with unit standard deviation. The indices are the same for both figures; the data was presorted by the distance of each observation $\in \mathbb R^n$ from the origin.
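For reference, here is one way such a weight matrix might be generated (the dimension and the kernel bandwidth `gamma` below are my assumptions, not taken from the figures):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 standard multivariate normal observations (dimension 2 assumed here).
pts = rng.standard_normal((1000, 2))

# Presort by distance from the origin, as described above.
pts = pts[np.argsort(np.linalg.norm(pts, axis=1))]

# RBF (Gaussian) kernel weights: w_ij = exp(-gamma * ||p_i - p_j||^2).
gamma = 1.0
sq_dists = np.sum((pts[:, None, :] - pts[None, :, :])**2, axis=-1)
W = np.exp(-gamma * sq_dists)
np.fill_diagonal(W, 0.0)   # zero diagonal, matching the definition of W
```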
The importance of the Laplacian is beyond the scope of this answer. Laplacians are central to many far-reaching theorems in spectral graph theory and to many practical results in the manifold-learning and clustering literature. I'd highly recommend reading up on the subject if you think you'll be dealing with weighted graphs in the near future.
At a high level, I would compute a similarity coefficient for each pair of connected nodes to adjust the weight of the connection, then run a clustering algorithm on the adjusted graph.
Probably the easiest approach is to start with Jaccard similarity: for each pair of connected nodes, compute the number of shared categorical attributes and divide by the total number of categorical attributes. For numeric attributes you'll need to define similarity yourself (e.g., if two birds have $X$ and $Y$ feathers, then the smaller $|X - Y|$ is, the more similar the birds are).
At the end you will have a vector of similarity coefficients:
$$
\begin{bmatrix}j_1,&n_2,&n_3,&\dots&n_k\end{bmatrix}
$$
Then you need to come up with weights for each similarity coefficient:
$$
\begin{bmatrix}w_1,&w_2,&w_3,&\dots&w_k\end{bmatrix}
$$
And your final edge weight will be something like:
$$
e_{new\;weight} = (w_1\ast j_1+w_2\ast n_2+\dots+w_k\ast n_k)\ast e_{old\;weight}
$$
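As a sketch of this recipe (the attributes, similarity forms, and weights below are all invented for illustration):

```python
import numpy as np

def jaccard(cats_a, cats_b):
    # Shared categorical attributes over the union of attributes.
    a, b = set(cats_a), set(cats_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def numeric_similarity(x, y, scale=1.0):
    # One possible choice: 1.0 when x == y, decaying as |x - y| grows.
    # The functional form and `scale` are assumptions; pick per attribute.
    return 1.0 / (1.0 + abs(x - y) / scale)

# Two hypothetical connected nodes ("birds").
bird_a = {"cats": {"migratory", "diurnal"}, "feathers": 120}
bird_b = {"cats": {"migratory", "nocturnal"}, "feathers": 100}

# Similarity vector [j_1, n_2] and a chosen weight vector [w_1, w_2].
sims = np.array([
    jaccard(bird_a["cats"], bird_b["cats"]),
    numeric_similarity(bird_a["feathers"], bird_b["feathers"], scale=50.0),
])
weights = np.array([0.6, 0.4])

old_edge_weight = 1.0
new_edge_weight = (weights @ sims) * old_edge_weight
print(new_edge_weight)
```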
Testing for significance means first defining which classes of results count as "more surprising" and "less surprising." For a 1D normally-distributed test statistic, results near the mean are less surprising, and the p-value is defined as the area under the curve over the region of results more surprising than the empirical one.
Once you define what types of graph differences are most surprising, you can do some Monte Carlo simulation of graphs and count how many of them fit that description.
Along the way you'll have to make some assumptions specific to your model. Don't expect an out-of-the-box procedure for this kind of thing.
Personally, I would start by building an adjacency matrix, computing its eigendecomposition, and then perhaps computing the angle between the principal eigenvector of $A$ and that of $B$. Then generate some Monte-Carlo-simulated matrices from your null-hypothesis data model and see how many of them have a larger angle than the empirical one. Adjust this procedure as you need.
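A minimal sketch of that procedure (the Erdős–Rényi null model and every parameter below are placeholders; substitute your actual null-hypothesis data model):

```python
import numpy as np

def principal_eigvec(A):
    # Eigenvector of the largest eigenvalue of a symmetric adjacency matrix.
    vals, vecs = np.linalg.eigh(A)
    return vecs[:, -1]

def angle_between(A, B):
    # Angle between principal eigenvectors; abs() handles the sign ambiguity.
    u, v = principal_eigvec(A), principal_eigvec(B)
    return np.arccos(np.clip(abs(u @ v), 0.0, 1.0))

def null_graph(n, p, rng):
    # Placeholder null model: Erdos-Renyi G(n, p). Swap in your own.
    upper = np.triu(rng.random((n, n)) < p, k=1).astype(float)
    return upper + upper.T

rng = np.random.default_rng(0)
n, p = 30, 0.2

A, B = null_graph(n, p, rng), null_graph(n, p, rng)   # stand-ins for real data
observed = angle_between(A, B)

# Monte Carlo: how often does the null model produce an angle at least as large?
null_angles = [angle_between(null_graph(n, p, rng), null_graph(n, p, rng))
               for _ in range(1000)]
p_value = np.mean([a >= observed for a in null_angles])
print(observed, p_value)
```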