Binary numbers

Although computers use binary numbers, we shall hardly bother with them most of the time. However, it is essential to understand what size of decimal number can be stored with a binary number of a certain number of bits. With one bit, we can represent two (2¹) numbers, 0 and 1. Two bits allow us to represent four (2²) numbers: 00 (= decimal 0), 01 (= 1), 10 (= 2) and 11 (= 3). Four bits allow us to represent 16 (2⁴) numbers (e.g. 0 to 15, or –8 to +7, if we want to use both negative and positive numbers), as listed in table 2.4.

Table 2.4. Four-bit binary numbers
Binary   Decimal     Binary   Decimal   … or (for signed integers)
0000     0           1000      8        –8
0001     1           1001      9        –7
0010     2           1010     10        –6
0011     3           1011     11        –5
0100     4           1100     12        –4
0101     5           1101     13        –3
0110     6           1110     14        –2
0111     7           1111     15        –1
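The two readings of each bit pattern in table 2.4 can be reproduced with a short Python sketch (the names here are illustrative, not part of the course material). It interprets every 4-bit pattern first as an unsigned value and then as a two's-complement signed value:

```python
# Interpret every 4-bit pattern both ways: unsigned (0 to 15) and
# two's-complement signed (-8 to +7). Names are illustrative only.
N_BITS = 4

def as_signed(value, bits=N_BITS):
    """Reinterpret an unsigned value as a two's-complement signed one."""
    half = 1 << (bits - 1)                      # 8 for 4 bits
    return value - (1 << bits) if value >= half else value

for v in range(1 << N_BITS):                    # 0 .. 15
    print(f"{v:04b}  unsigned = {v:2d}  signed = {as_signed(v):+d}")
```

Running this prints the same rows as table 2.4: patterns 0000 to 0111 mean the same thing either way, while 1000 to 1111 mean 8 to 15 unsigned but –8 to –1 signed.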

Note that the sign of a number takes 1 bit to represent, so the largest signed number for a given number of bits is half the total count of representable values, minus 1 (for example, with 8 bits: 256/2 − 1 = 127).
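This rule can be checked numerically. A minimal Python sketch (helper names are my own): for n bits, the largest unsigned value is 2ⁿ − 1, and giving up one bit for the sign leaves 2ⁿ⁻¹ − 1 as the largest positive signed value:

```python
# For n bits: 2**n values in total. Unsigned, they run 0 .. 2**n - 1;
# signed (two's complement), the positives run 0 .. 2**(n-1) - 1.
def max_unsigned(bits):
    return (1 << bits) - 1          # 1 << n is 2**n

def max_signed(bits):
    return (1 << (bits - 1)) - 1    # half the value count, minus 1

for bits in (4, 8, 16):
    print(f"{bits:2d} bits: unsigned max {max_unsigned(bits):5d}, "
          f"signed max {max_signed(bits):5d}")
```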

8 bits (1 byte) allows 256 (2⁸) numbers to be represented (e.g. 0 to 255, or –128 to +127). 16 bits (2 bytes) can encode 2¹⁶ numbers (e.g. –32768 to 32767). Instead of “hundreds, tens and units”, the places of digits in a binary number represent powers of two: from right to left, ones, twos, fours, eights, etc. We shall use 16-bit binary numbers so frequently in this course that it is as well to have a simple name for them: we shall call them short integers, for reasons that will become clear a little later.
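The place-value rule above can be sketched in a few lines of Python (the function name is my own, for illustration): each digit, read from the right, contributes its value times the next power of two.

```python
# Evaluate a binary numeral by its place values: from right to left
# the digits count ones, twos, fours, eights, ... (powers of two).
def binary_value(digits):
    total = 0
    for place, digit in enumerate(reversed(digits)):
        total += int(digit) * (2 ** place)
    return total

print(binary_value("0110"))   # 0*1 + 1*2 + 1*4 + 0*8 = 6
print(binary_value("1111"))   # 1 + 2 + 4 + 8 = 15
```

Python's built-in `int("0110", 2)` does the same conversion; the long-hand version simply makes the "powers of two" reading explicit.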