We can generalize the single-source shortest path problem to find the shortest path between *all pairs* of vertices.

## Worst case to beat

Run the Bellman-Ford algorithm from each vertex, treating it as the source. This gives us a runtime of $O(n^2 m)$, with worst case $m = O(n^2)$. Therefore, $O(n^4)$ is the runtime we want to beat.
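This baseline can be sketched as follows. A minimal sketch, assuming a directed graph given as a list of `(u, v, w)` triples over vertices `0..n-1`; the function names here are mine, not from the notes.

```python
from math import inf

def bellman_ford(n, edges, src):
    """Single-source shortest paths from src.

    n: number of vertices (labeled 0..n-1)
    edges: list of (u, v, w) triples for directed edges u -> v of weight w
    Returns dist, where dist[v] is the shortest-path weight src -> v.
    """
    dist = [inf] * n
    dist[src] = 0
    for _ in range(n - 1):          # relax every edge n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

def all_pairs_naive(n, edges):
    """O(n^2 m) all-pairs baseline: one Bellman-Ford run per source."""
    return [bellman_ford(n, edges, s) for s in range(n)]
```

Each Bellman-Ford run costs $O(nm)$, and we run it $n$ times, giving the $O(n^2 m)$ bound above.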

## Using DP

One approach is to use DP. Maybe decompose path w.r.t. the last edge that we chose to include?

### Naively

A $G$-function could look like

$$G(u,v,k) = \text{weight of the shortest path } u \to v \text{ using } \le k \text{ edges}$$

The base case occurs when $k=0$. If $u=v$, then $G$ should return $0$; otherwise it is $\infty$, because we've used up all our edges and still haven't formed a path between them. If $u=v$ and $k>0$, we should also return $0$, since the empty path already achieves weight $0$ (assuming no negative cycles).

A generalized $G$-function could look like

$$G(u,v,k+1) = \min_{(z,v)\in E} \{G(u,z,k) + w(z,v)\}$$

where the minimum ranges over all edges $(z,v)\in E$. Essentially, we check each possible final edge $(z,v)$ and pick the one that extends the path $u→z$ most cheaply.
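The base case and recurrence above translate directly into memoized code. A sketch under the same assumptions as before (edge-list input, 0-indexed vertices); the names are mine.

```python
from functools import lru_cache
from math import inf

def all_pairs_dp(n, edges):
    """All-pairs shortest paths via the memoized G-function.

    G(u, v, k) = weight of the shortest u -> v path using <= k edges.
    """
    in_edges = [[] for _ in range(n)]   # in_edges[v] = [(z, w(z, v)), ...]
    for u, v, w in edges:
        in_edges[v].append((u, w))

    @lru_cache(maxsize=None)
    def G(u, v, k):
        if u == v:
            return 0                    # the empty path already achieves 0
        if k == 0:
            return inf                  # no edges allowed and u != v
        # try every possible final edge (z, v)
        return min((G(u, z, k - 1) + w for z, w in in_edges[v]), default=inf)

    # a shortest simple path uses at most n-1 edges
    return [[G(u, v, n - 1) for v in range(n)] for u in range(n)]
```

Note that because $G(u,u,k)=0$ for every $k$, paths that use fewer than $k$ edges are still captured by the recurrence.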

#### This leads to a bad runtime

We have $n^3$ subproblems: $u$ and $v$ both come from $V$, and we use at most $k \le n-1$ edges between them. For each subproblem, we iterate over the in-neighbors of $v$ to form the path $u→z→v$, of which there are at most $\deg(v) \le n-1$. Even assuming we cache results, our runtime is $n^3(n-1)=O(n^4)$, which is not any better than repeated Bellman-Ford.

Another way of visualizing it is to consider the algorithm in a bottom-up approach. Iterate through $k∈[0..n−1]$, iterate through each pair of vertices $(u,v)∈V×V$, and consider each $z$ such that $u→z→v$ exists. We have the summation:

$$\sum_{k=0}^{n-1} \left( \sum_{u} \sum_{v} \deg(v) \right)$$

This is equal to, worst case, $n \cdot (n \cdot n \cdot (n-1)) = O(n^4)$.
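The bottom-up iteration described above can be sketched as follows: one table layer per value of $k$, each filled from the previous one. Same assumptions as before (edge-list input, 0-indexed vertices, my own names).

```python
from math import inf

def all_pairs_bottom_up(n, edges):
    """Bottom-up version of the recurrence: for k = 0..n-1, fill a table
    D[u][v] = weight of the shortest u -> v path using <= k edges."""
    in_edges = [[] for _ in range(n)]   # in_edges[v] = [(z, w(z, v)), ...]
    for u, v, w in edges:
        in_edges[v].append((u, w))

    # k = 0 layer: only the empty path is allowed
    D = [[0 if u == v else inf for v in range(n)] for u in range(n)]
    for _ in range(n - 1):              # layers k = 1 .. n-1
        nxt = [[0 if u == v else inf for v in range(n)] for u in range(n)]
        for u in range(n):
            for v in range(n):
                if u == v:
                    continue
                # consider each z such that the final edge (z, v) exists
                nxt[u][v] = min((D[u][z] + w for z, w in in_edges[v]),
                                default=inf)
        D = nxt
    return D
```

The four nested loops (over $k$, $u$, $v$, and the in-neighbors of $v$) are exactly the four factors in the $O(n^4)$ summation above.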

### Cutting the time we spend at each subproblem

The issue above is that while we restrict the *number* of edges a path may use, we are not restricting *where* those edges can lead (i.e., which intermediate vertices the path may visit). To reduce the runtime, we should only consider a subset of the vertices as intermediates at each iteration. This leads us to the Floyd-Warshall algorithm.