| Vector 1 | Vector 2 | Squared difference |
| --- | --- | --- |
| 1.001708 | 0.629627 | 0.138444 |
| -0.201289 | 0.167945 | 0.136334 |
| 0.068966 | -0.015374 | 0.007113 |
| 0.016894 | -0.050550 | 0.004549 |
| 0.135456 | -0.087576 | 0.049743 |
| -0.093006 | -0.008318 | 0.007172 |
| -0.056108 | 0.165056 | 0.048914 |
| -0.045149 | 0.069519 | 0.013149 |
| 0.044609 | 0.110669 | 0.004364 |
| 0.019852 | 0.034234 | 0.000207 |
| 0.011160 | 0.024627 | 0.000181 |
| 0.140104 | 0.050056 | 0.008109 |
| -0.103086 | -0.032632 | 0.004964 |
| 0.025018 | -0.064510 | 0.008015 |
| | Sum | 0.431258 |
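As a check on the arithmetic, here is a minimal Python sketch that reproduces the table: it computes the component-wise squared differences between the two 14-dimensional spectral vectors, sums them to get the squared Euclidean distance, and takes the square root for the distance itself. The vector values are copied from the table above; the variable names are illustrative.

```python
import math

# The two 14-dimensional spectral vectors from the table above.
v1 = [1.001708, -0.201289, 0.068966, 0.016894, 0.135456, -0.093006,
      -0.056108, -0.045149, 0.044609, 0.019852, 0.011160, 0.140104,
      -0.103086, 0.025018]
v2 = [0.629627, 0.167945, -0.015374, -0.050550, -0.087576, -0.008318,
      0.165056, 0.069519, 0.110669, 0.034234, 0.024627, 0.050056,
      -0.032632, -0.064510]

# Component-wise squared differences, as in the third column.
sq_diffs = [(a - b) ** 2 for a, b in zip(v1, v2)]

total = sum(sq_diffs)        # squared Euclidean distance, ~0.431258
distance = math.sqrt(total)  # Euclidean distance, ~0.6567

print(f"sum of squared differences = {total:.6f}")
print(f"Euclidean distance         = {distance:.6f}")
```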
In speech recognition, a major component of the computational load is the repeated calculation of distances between spectral vectors. To reduce this load, and to make the representation of speech more compact, we can compile a codebook of spectral vectors: that is, a scheme for numbering n distinct, typical spectral vectors. For example, we might make a codebook with 1024 distinct spectral templates, in which vector 1 (above) is number 435 and vector 2 is number 884. We could even draw up a 1024 × 1024 table giving the spectral distance between every pair of spectra in the codebook. Although it would have just over a million cells, it would only have to be calculated once. After that, calculating spectral distances would boil down to a) working out the code numbers of the two spectra to be compared, and then b) looking up the distance between them in the table. Likewise, comparing one utterance with another boils down to comparing the sequence of code numbers for one utterance with the sequence for the other.
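A sketch of this codebook scheme, using a toy codebook of four two-dimensional codewords in place of the 1024 spectral templates; the function names (sq_dist, build_distance_table, quantize) are hypothetical, not from any particular library.

```python
import math

def sq_dist(u, v):
    """Sum of squared differences between two spectral vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def build_distance_table(codebook):
    """Precompute the n x n table of pairwise distances, done once."""
    n = len(codebook)
    return [[math.sqrt(sq_dist(codebook[i], codebook[j])) for j in range(n)]
            for i in range(n)]

def quantize(vector, codebook):
    """Step a): find the code number of the nearest codebook entry."""
    return min(range(len(codebook)),
               key=lambda i: sq_dist(vector, codebook[i]))

# Toy codebook with 4 entries instead of 1024; a real codebook
# would be trained from speech data.
codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
table = build_distance_table(codebook)

# Step b): the distance between two spectra is now a table lookup.
code1 = quantize([0.9, 0.1], codebook)   # -> 1
code2 = quantize([0.1, 0.8], codebook)   # -> 2
print(table[code1][code2])               # distance between codewords 1 and 2
```

Once the table is built, comparing two utterances reduces to comparing their sequences of code numbers, with each pairwise spectral distance served by a lookup rather than a fresh calculation.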