[Math] Dijkstra’s algorithm using a heap

computer science, graph theory

My teacher gave me pseudocode for Dijkstra's algorithm using a binary heap that includes the following steps (x has just been extracted from the heap):

For each vertex y such that y still has a node in the heap and there is an edge (x, y) in the graph:

1) Dist[y] = min(Dist[y], Dist[x] + w(x,y))

2) Update y's node in the heap

It is also said that the total cost of these steps over the whole algorithm is $O(E \log V)$.

But I can't understand why the condition "y still has a node in the heap" doesn't hurt the complexity. It means we have to find every neighbor of x in the heap, but there is no efficient way to search for a node in a heap (only linear search comes to my mind), so for each y we would have to:

1) Find it in the heap, which takes O(number of nodes in the heap) time

2) Update it, which takes $O(\log V)$ time

This way, it would take $O(V \cdot E \log V + (V-1) \cdot E \log V + \dots) = O(V^2 \cdot E \log V)$, since the number of nodes in the heap decreases by one at each step. Am I missing something?
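
To make that worry concrete, here is a minimal Python sketch of the update done the naive way (the names `adj`, `dist`, and `heap` are illustrative, not from the original pseudocode); the linear scan in the middle is exactly the search in question:

```python
def relax_with_linear_search(x, adj, dist, heap):
    """Relax every edge (x, y) where y still has a node in the heap.

    heap is a plain list of (distance, vertex) pairs kept as a binary
    min-heap; locating y's node requires a linear scan, which is the
    O(number of nodes in the heap) step the question asks about."""
    for y, w in adj[x]:
        # Linear search: a plain heap gives no way to jump to y's node.
        pos = next((i for i, (_, v) in enumerate(heap) if v == y), None)
        if pos is None:                 # y was already extracted from the heap
            continue
        new_dist = dist[x] + w
        if new_dist < dist[y]:          # Dist[y] = min(Dist[y], Dist[x] + w(x, y))
            dist[y] = new_dist
            heap[pos] = (new_dist, y)
            # Sift the decreased key up to restore the heap property: O(log V).
            while pos > 0:
                parent = (pos - 1) // 2
                if heap[parent] <= heap[pos]:
                    break
                heap[pos], heap[parent] = heap[parent], heap[pos]
                pos = parent
```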

Best Answer

One can store an array of pointers, one per vertex, where each pointer records the current position of that vertex's node in the heap used in Dijkstra's algorithm. Whenever a heap operation is applied (e.g. removing the top element), this array is easy to keep up to date: it is adjusted at every swap the operation performs in memory. (In other words, one can define a modified heap data structure in which every element carries a key, and the element for a given key can be found immediately even as the heap changes.) This way, looking up a node in the heap needs no linear search; you simply follow the pointer for that vertex.
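
Below is a minimal sketch of that idea in Python, assuming vertices are numbered 0 to V−1; `IndexedMinHeap`, `decrease_key`, and the other names are illustrative, not a standard library API. The point is that every swap inside the heap also updates the pointer array `pos`, so finding a vertex's node is O(1) and updating its key stays O(log V).

```python
class IndexedMinHeap:
    """Binary min-heap of [key, vertex] nodes that also keeps, for every
    vertex, the index of its node in the heap array (the pointer array
    described above)."""

    def __init__(self, n_vertices):
        self.heap = []                    # list of [key, vertex] nodes
        self.pos = [None] * n_vertices    # pos[v] = index of v's node in self.heap

    def _swap(self, i, j):
        # Every swap in memory also updates the pointer array.
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.pos[self.heap[i][1]] = i
        self.pos[self.heap[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self.heap[(i - 1) // 2][0] > self.heap[i][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            smallest, left, right = i, 2 * i + 1, 2 * i + 2
            if left < n and self.heap[left][0] < self.heap[smallest][0]:
                smallest = left
            if right < n and self.heap[right][0] < self.heap[smallest][0]:
                smallest = right
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest

    def push(self, key, v):
        self.heap.append([key, v])
        self.pos[v] = len(self.heap) - 1
        self._sift_up(self.pos[v])

    def pop_min(self):
        self._swap(0, len(self.heap) - 1)
        key, v = self.heap.pop()
        self.pos[v] = None                # v no longer has a node in the heap
        if self.heap:
            self._sift_down(0)
        return key, v

    def contains(self, v):
        return self.pos[v] is not None    # O(1) instead of a linear search

    def decrease_key(self, v, new_key):
        i = self.pos[v]                   # jump straight to v's node: O(1)
        self.heap[i][0] = new_key
        self._sift_up(i)                  # restore heap order: O(log V)


def dijkstra(adj, source):
    """adj[u] is a list of (v, weight) pairs; returns the distance array."""
    n = len(adj)
    dist = [float("inf")] * n
    dist[source] = 0
    heap = IndexedMinHeap(n)
    for v in range(n):
        heap.push(dist[v], v)
    while heap.heap:
        _, x = heap.pop_min()
        for y, w in adj[x]:               # every edge (x, y) with weight w
            if heap.contains(y) and dist[x] + w < dist[y]:
                dist[y] = dist[x] + w
                heap.decrease_key(y, dist[y])   # O(log V) per edge => O(E log V)
    return dist
```

For example, `dijkstra([[(1, 4), (2, 1)], [(3, 1)], [(1, 2), (3, 5)], []], 0)` returns `[0, 3, 1, 4]`. The only change relative to a textbook binary heap is the `pos` array maintained inside `_swap`; that single piece of bookkeeping replaces the linear search with an O(1) lookup, so the E updates cost O(E log V) in total.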