With up to 50 vertices, the number of paths could be enormous: in a complete graph of 50 vertices (i.e. every vertex joined to every other vertex) there would be $41337038439638629286248290504650886651492243224669378150412649225$ of them:
that's $\sum_{k=2}^{50} \frac{50!}{2 (50-k)!}$.
Hopefully your graphs will be sparse enough that the number will be much more reasonable. You might try a recursive approach: if $P(i,S)$ is the number of paths starting at vertex $i$ whose remaining vertices are drawn from the set $S$, and $N(i)$ is the set of vertices $j$ such that $(i,j)$ is an edge, then $P(i,S) = \sum_{j \in N(i) \cap S} (1 + P(j,S \setminus \{j\}))$. Note that summing $P(i, V \setminus \{i\})$ over all vertices $i$ counts each path twice, once from each end, so divide the total by $2$.
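As a sketch, assuming the graph is given as a dict mapping each vertex to a set of neighbours (my own choice of representation), the recursion can be memoized over subsets encoded as bitmasks:

```python
from functools import lru_cache

def count_paths(adj):
    """Count the simple paths (at least one edge) in an undirected graph.

    adj: dict {vertex: set of neighbours}.
    Implements P(i, S) = sum over j in N(i) & S of (1 + P(j, S \\ {j})),
    with S encoded as a bitmask; the grand total counts each path twice
    (once from each end), so it is halved at the end.
    """
    verts = list(adj)
    idx = {v: b for b, v in enumerate(verts)}
    # Neighbour sets as bitmasks, so N(i) & S is a single AND.
    nbr = {v: sum(1 << idx[w] for w in adj[v]) for v in verts}

    @lru_cache(maxsize=None)
    def P(i, S):
        total = 0
        m = nbr[verts[i]] & S          # candidates j in N(i) & S
        while m:
            low = m & -m               # lowest set bit = one candidate j
            j = low.bit_length() - 1
            total += 1 + P(j, S & ~low)
            m ^= low
        return total

    full = (1 << len(verts)) - 1
    return sum(P(idx[v], full & ~(1 << idx[v])) for v in verts) // 2
```

For a triangle this gives $6$ (three single-edge paths plus three 2-edge paths), matching the formula above for $n=3$. The memoization is over $n \cdot 2^n$ states, so this is only viable for modest vertex counts; for sparse graphs far fewer states are actually reached.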
You can do a simple Breadth First Search from the start node. It begins with the start node and adds all its neighbours to a queue. Then it dequeues each node in turn, adds that node's unvisited neighbours to the queue, and marks it visited. The key observation is that all the vertices at a given distance $d$ from the start vertex occupy one contiguous block in the FIFO queue. If you keep track of how deep you are in the queue, you can stop after $k$ levels. More specifically, the moment you dequeue a vertex that you know is at distance $k$ from the start node, you don't have to enqueue anything further; just continue until the queue is empty, and you are done. Using BFS in this way gives you the vertices in increasing order of hops/distance from the start.
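A minimal sketch of this early-stopping BFS, assuming an adjacency-dict representation (function and variable names are my own):

```python
from collections import deque

def k_hop_bfs(adj, start, k):
    """Return {vertex: hop distance} for all vertices within k hops of start.

    adj: dict {vertex: iterable of neighbours}.
    Because a FIFO queue processes vertices in nondecreasing distance
    order, we can stop expanding as soon as a dequeued vertex is at
    distance k.
    """
    dist = {start: 0}              # doubles as the "visited" set
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if dist[u] == k:           # level k reached: enqueue nothing more
            continue
        for v in adj[u]:
            if v not in dist:      # first visit = shortest hop distance
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```

Iterating over the returned dict yields the vertices in the same nondecreasing-distance order in which BFS discovered them.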
You can also adapt the Depth First Search algorithm for this by limiting the depth to $k$. This is easier to code, but it won't give vertices in order of distance from the start, so for some of the discovered vertices you may later find a shorter path. In either method, though, no vertex has to be discarded after being discovered.
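A corresponding depth-limited DFS sketch (names mine). Note the re-expansion when a vertex is reached again at a smaller depth, which is exactly the shorter-path caveat above: the vertex stays discovered, only its depth improves.

```python
def k_hop_dfs(adj, start, k):
    """Return {vertex: smallest depth found} for vertices within k hops.

    adj: dict {vertex: iterable of neighbours}.
    Depth-limited DFS; a vertex is re-expanded if reached again at a
    smaller depth, since DFS may first arrive via a longer route.
    """
    best = {}                          # vertex -> smallest depth seen so far

    def dfs(u, d):
        if d > k or best.get(u, k + 1) <= d:
            return                     # over budget, or no improvement
        best[u] = d
        for v in adj[u]:
            dfs(v, d + 1)

    dfs(start, 0)
    return best
```

Because of the re-expansion, the worst case can exceed one visit per vertex, which is one reason the BFS variant is usually preferable when exact hop distances matter.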
Both these algorithms are linear in the size of the graph, $O(|V|+|E|)$, even if they traverse the whole graph. But since you are stopping much sooner, they will be very efficient: they visit only those edges and vertices that are actually in the $k$-hop neighborhood, and nothing more, i.e. $O(|V_s|+|E_s|)$, where $|V_s|$ and $|E_s|$ are the numbers of vertices and edges in the $k$-neighborhood subgraph. One further consideration: if $k$ is small but vertex degrees are very large, you might prefer DFS.
$k$-hop neighborhoods for all vertices:
If the graph is sparse, that is, $|E| = O(|V|)$, then running a BFS/DFS from each node should be the best option asymptotically; rectangular and hexagonal lattice graphs are examples of sparse graphs. The total complexity would then be $O(|V|^2)$. I do not think there is anything asymptotically faster than this for sparse graphs, though there may be other solutions that are more efficient in practice (but still quadratic).
If the graph is not sparse, you can compute the $k$-th power of the adjacency matrix in $O(n^{\omega}\log k)$ time, where $\omega$ is the exponent of matrix multiplication. The best currently known algorithm has $\omega \approx 2.3727$, but in practice the simple cubic matrix multiplication ($\omega = 3$) is usually the most efficient. You could also use the Floyd–Warshall all-pairs shortest path algorithm and keep those vertices that have distance at most $k$; this is $O(|V|^3)$.
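A sketch of the matrix-power approach (numpy representation and names are my own choice): computing $(A \lor I)^k$ over the Boolean semiring by repeated squaring uses only about $\log_2 k$ matrix products, and entry $(i,j)$ of the result is true exactly when $\operatorname{dist}(i,j) \le k$.

```python
import numpy as np

def within_k_hops(A, k):
    """Boolean k-hop reachability via repeated squaring.

    A: n x n 0/1 numpy adjacency matrix.
    Works with (A | I), so shorter walks are allowed to "wait in place";
    integer matmul followed by > 0 acts as the Boolean OR-of-ANDs product.
    Returns a Boolean matrix R with R[i, j] iff dist(i, j) <= k.
    """
    n = A.shape[0]
    B = (A != 0) | np.eye(n, dtype=bool)   # one step or stay put
    R = np.eye(n, dtype=bool)              # identity: zero steps
    while k > 0:
        if k & 1:
            R = (R.astype(np.int64) @ B.astype(np.int64)) > 0
        B = (B.astype(np.int64) @ B.astype(np.int64)) > 0
        k >>= 1
    return R
```

Each integer matmul is the $O(n^{\omega})$ step; the Boolean thresholding keeps the entries from blowing up, since only reachability (not walk counts) is needed.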
Best Answer
Hint
To get a path of length $n$ from the adjacency matrix $A$, you can consider $A^n$ computed as a Boolean product of matrices.
The idea for $n=2$ is as follows: Suppose $u$ is adjacent to $v$ and $v$ is adjacent to $w$, then there is a path of length $2$ from $u$ to $w$. What the matrix product does is to capture all such paths.
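A small sketch of the Boolean product for the $n=2$ case (function name mine): entry $(u,w)$ of the Boolean square is true precisely when some intermediate $v$ is adjacent to both $u$ and $w$.

```python
def bool_matmul(A, B):
    """Boolean matrix product over lists of lists of bools:

        (A . B)[u][w] = OR over v of (A[u][v] AND B[v][w]).

    With B = A, a True entry at [u][w] means a walk of length 2 exists
    from u through some v to w.
    """
    n = len(A)
    return [[any(A[u][v] and B[v][w] for v in range(n)) for w in range(n)]
            for u in range(n)]
```

Strictly speaking, $A^n$ detects walks of length $n$ (vertices may repeat, e.g. $u \to v \to u$ shows up on the diagonal of $A^2$); for $n=2$ and $u \ne w$ a length-2 walk is automatically a genuine path.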