- How does one convert between binary, hex, and decimal number representations? (see the conversion sketch after this list)
- How does one determine the *word size* of a computer?
- What does it mean for a computer to be *little endian* or *big endian*, and under what circumstances does this matter to a programmer? (see the endianness sketch after this list)
- Important C basics:
  - What are pointers, and how are they declared and used?
  - How are strings represented? (both are illustrated in the pointer/string sketch after this list)
- What bit-level, logical, and shift operations does C provide?
- What integral data types does C support?
- What floating-point (FP) data types does C support?
- What encodings are used for the integral and FP data types and what range of values can be represented?
- What happens numerically when you convert between all combinations of supported data types?
  - What are the rules of implicit type conversion? (see the signed/unsigned sketch after this list)
- What numerical outcomes are signaled as errors by the hardware and/or runtime system?
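
A minimal conversion sketch using only standard library calls (`strtol` with base 0 auto-detects `0x` and leading-`0` prefixes; the literal value is arbitrary):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    long n = strtol("0x1F", NULL, 0);        /* parse hex text into a number */
    printf("%ld decimal = 0x%lX hex = 0%lo octal\n", n, n, n);
    /* prints: 31 decimal = 0x1F hex = 037 octal */
    return 0;
}
```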
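One common way to answer the endianness question at runtime is to inspect the first byte of a multi-byte integer; this is a sketch, with the test value chosen so the two byte orders are distinguishable:

```c
#include <stdio.h>

int main(void) {
    unsigned int x = 0x01234567;
    unsigned char *p = (unsigned char *)&x;  /* view the same storage byte by byte */

    /* Little endian stores the least significant byte (0x67) first;
       big endian stores the most significant byte (0x01) first. */
    if (p[0] == 0x67)
        printf("little endian\n");
    else
        printf("big endian\n");
    return 0;
}
```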
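A short pointer/string sketch: a C string is a char array terminated by a null byte, and a pointer walks it by address (the variable names are illustrative):

```c
#include <stdio.h>

int main(void) {
    char s[] = "abc";   /* stored as the bytes 'a', 'b', 'c', '\0' */
    char *p = s;        /* p holds the address of the first character */

    while (*p != '\0') {                    /* walk until the terminator */
        printf("%c = 0x%02X\n", *p, *p);    /* each char and its byte value */
        p++;
    }
    return 0;
}
```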
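And a signed/unsigned sketch of the classic pitfall the implicit-conversion rules produce (values chosen for illustration):

```c
#include <stdio.h>

int main(void) {
    int s = -1;
    unsigned int u = 1;

    /* In s < u, the signed operand is converted to unsigned int, so -1
       becomes UINT_MAX and the comparison is false. */
    if (s < u)
        printf("-1 < 1u\n");
    else
        printf("-1 >= 1u (the conversion to unsigned wins)\n");
    return 0;
}
```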

- How can bit vectors be used to represent finite sets? (see the bit-vector sketch after this list)
- What are the general properties of integer arithmetic using a finite number of bits?
- What sequence of operations might be used to replace an integer multiply operation, and under what circumstances might the multi-instruction sequence be faster?
- Under what circumstances can an integer divide instruction be replaced by simpler instructions, and when must a bias be added to the dividend? (both are illustrated in the shift-and-bias sketch after this list)
- IEEE FP basics:
  - In general, how is a value represented using the sign bit, the significand, and the exponent? (see the bit-field sketch after this list)
- How does one distinguish between normalized, denormalized, and special values?
  - What is the problem of *rounding*, and what options exist for dealing with it? (see the rounding-mode sketch after this list)
  - What properties hold for FP multiplication and division?
- What floating-point data types are available in C?
- What rules are followed in C when casting values between int, long, float and double formats?
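
A bit-vector sketch, representing subsets of {0, ..., 31} in the bits of an unsigned int (the helper names are made up for illustration):

```c
#include <stdio.h>

typedef unsigned int set32;   /* bit k set means k is in the set */

set32 set_union(set32 a, set32 b)     { return a | b; }
set32 set_intersect(set32 a, set32 b) { return a & b; }
set32 set_add(set32 a, int k)         { return a | (1u << k); }
int   set_member(set32 a, int k)      { return (a >> k) & 1; }

int main(void) {
    set32 a = set_add(set_add(0, 1), 4);   /* {1, 4} */
    set32 b = set_add(set_add(0, 4), 7);   /* {4, 7} */
    printf("7 in union? %d\n", set_member(set_union(a, b), 7));            /* 1 */
    printf("1 in intersection? %d\n", set_member(set_intersect(a, b), 1)); /* 0 */
    return 0;
}
```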
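The shift-and-bias sketch below uses 14 = 16 - 2 for the multiply and a divisor of 8 for the divide; it assumes arithmetic right shift for signed values, as on virtually all modern machines (compilers apply these rewrites automatically):

```c
#include <stdio.h>

int main(void) {
    int x = 77;
    int y = -77;

    /* x * 14 = x * (16 - 2): two shifts and a subtract can beat a
       multiply instruction when multiplication has high latency. */
    int prod = (x << 4) - (x << 1);          /* 1078 */

    /* y / 8 for negative y: y >> 3 alone rounds toward -infinity, but C
       division truncates toward zero, so add the bias 2^3 - 1 = 7 first. */
    int quot = (y < 0 ? y + 7 : y) >> 3;     /* -9, matching -77 / 8 */

    printf("prod=%d quot=%d\n", prod, quot);
    return 0;
}
```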
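The bit-field sketch below copies a float's bits into an unsigned int (assuming both are 32 bits, as on typical platforms) and extracts the three IEEE fields:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    float f = -15.375f;               /* -1.111011 x 2^3 in binary */
    unsigned int bits;
    memcpy(&bits, &f, sizeof bits);   /* reinterpret the bytes safely */

    unsigned int sign = bits >> 31;           /* 1 bit */
    unsigned int exp  = (bits >> 23) & 0xFF;  /* 8 bits, biased by 127 */
    unsigned int frac = bits & 0x7FFFFF;      /* 23-bit fraction field */

    /* exp == 0: denormalized or zero; exp == 0xFF: infinity or NaN;
       otherwise normalized, with an implied leading 1 in the significand. */
    printf("sign=%u exp=%u (unbiased %d) frac=0x%06X\n",
           sign, exp, (int)exp - 127, frac);  /* sign=1 exp=130 (unbiased 3) frac=0x760000 */
    return 0;
}
```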
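Finally, a rounding-mode sketch using the standard `<fenv.h>` interface; the operands 1e8f and 1.0f are chosen so the exact sum falls between two representable floats (some compilers need a flag such as `-frounding-math` for the mode change to be respected):

```c
#include <stdio.h>
#include <fenv.h>

int main(void) {
    volatile float big = 1e8f, small = 1.0f;   /* volatile discourages constant folding */

    fesetround(FE_TONEAREST);   /* the IEEE default: round to nearest even */
    printf("%.1f\n", (double)(big + small));   /* 100000000.0: the 1 is rounded away */

    fesetround(FE_UPWARD);      /* round toward +infinity */
    printf("%.1f\n", (double)(big + small));   /* 100000008.0: the next float up */
    return 0;
}
```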

- Self test: Can you complete all practice problems in Chapter 2?