
Finding the best path through the matrix

Consider the problem of how to find the best path through the matrix of local distances:
 

 7 |  0  0  1  4  1  1
 6 |  1  1  0  1  0  4
 5 |  4  4  1  0  1  9
 4 |  1  1  0  1  0  4
 3 |  0  0  1  4  1  1
 2 |  0  0  1  4  1  1
 1 |  1  1  4  9  4  0
   +------------------
      1  2  3  4  5  6

(In this example, the bottom row and the leftmost column are used to indicate the sample numbers of x[t] and y[t], not their sample values, as before.)
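
For concreteness, the example matrix might be written down in code as follows. This is only a sketch, not part of the original: the choice of Python and the name local are ours. Rows are listed from the bottom of the picture upwards.

    # Local-distance matrix from the example above.  local[j][i] is the
    # local distance between sample j+1 of y[t] and sample i+1 of x[t],
    # so local[0][0] is the bottom-left cell (1,1) and local[6][5] is
    # the top-right cell (6,7).
    local = [
        [1, 1, 4, 9, 4, 0],   # y sample 1 (bottom row of the picture)
        [0, 0, 1, 4, 1, 1],   # y sample 2
        [0, 0, 1, 4, 1, 1],   # y sample 3
        [1, 1, 0, 1, 0, 4],   # y sample 4
        [4, 4, 1, 0, 1, 9],   # y sample 5
        [1, 1, 0, 1, 0, 4],   # y sample 6
        [0, 0, 1, 4, 1, 1],   # y sample 7 (top row of the picture)
    ]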

We make two assumptions and one remark:

1) The endpoints of the two signals correspond to one another. That is, the path always begins at (1,1) and ends at (6,7), or whatever the coordinates of the upper-right cell happen to be.

2) The best path lies near to the diagonal. In other words, all other things being equal, a diagonal move going upwards and to the right should be favoured over simple upward or rightward moves. This constraint can be satisfied if we discount the cost of a diagonal move, i.e. we cost it at, say, 50% of the cost of a move upwards or rightwards.

3) Time moves forwards. This constraint can be satisfied by allowing just three kinds of move: up, right, or up-and-right. This means that there are only three ways of arriving at a particular cell: from below, from the left, or from below-left. (One way of encoding this move set, together with the diagonal discount, is sketched in code after this list.)
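
As a sketch of how the move set and the diagonal discount might be encoded: the three moves and the 50% figure come from the points above, but Python, the name MOVES, and the idea of applying the discount as a weight on the destination cell's local distance are our own choices, not the original's.

    # Permitted moves, written as (step in x, step in y, weight applied
    # to the destination cell's local distance when arriving that way).
    # The 0.5 weight is one reading of the 50% diagonal discount.
    MOVES = [
        (1, 0, 1.0),   # rightward move: arrive from the left
        (0, 1, 1.0),   # upward move:    arrive from below
        (1, 1, 0.5),   # diagonal move:  arrive from below-left, discounted
    ]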

The first step of the dynamic programming algorithm can now be stated as follows:

For each cell in the matrix, calculate the cost of arriving from a) below, b) the left, and c) below-left, by adding the cell's value to the lowest accumulated cost of the cell below, to the left, or below-left, respectively, and then keep the smallest of these three costs as that cell's entry.

This gives us the accumulated distance matrix: each cell now holds the lowest cost of reaching it from (1,1).
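
Below is a minimal sketch of this first step, assuming the matrix is stored as in the earlier local snippet; the function name accumulate and the diagonal_weight parameter are ours. With diagonal_weight=1.0 the recurrence is exactly as stated above; setting it to 0.5 applies the diagonal discount of assumption 2.

    def accumulate(local, diagonal_weight=1.0):
        """Compute the accumulated distance matrix.

        Each cell ends up holding the cheapest total cost of reaching it
        from (1,1) using only upward, rightward, and diagonal moves.
        """
        rows, cols = len(local), len(local[0])
        acc = [[0.0] * cols for _ in range(rows)]
        acc[0][0] = local[0][0]                  # the path starts at (1,1)
        for j in range(rows):
            for i in range(cols):
                if i == 0 and j == 0:
                    continue
                candidates = []
                if i > 0:                        # arriving from the left
                    candidates.append(acc[j][i - 1] + local[j][i])
                if j > 0:                        # arriving from below
                    candidates.append(acc[j - 1][i] + local[j][i])
                if i > 0 and j > 0:              # arriving from below-left
                    candidates.append(acc[j - 1][i - 1]
                                      + diagonal_weight * local[j][i])
                acc[j][i] = min(candidates)      # keep the cheapest arrival
        return acc

On the example matrix, accumulate(local) fills every cell, and the value in the top-right cell, acc[6][5], is the lowest accumulated cost of any legal path from (1,1) to (6,7).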