Principles of Digital Audio / Edition 3

by Ken C. Pohlmann




This definitive text provides comprehensive coverage of today's leading digital audio technologies as well as a thorough survey of fundamentals and theory. Written by well-known audio engineering expert and best-selling author Ken Pohlmann, the book's four previous editions have been valued for their clear explanations and widely used as college texts and professional references. The fifth edition of Principles of Digital Audio has been extensively updated and revised to reflect ongoing widespread changes in the audio industry.

Beginning with an in-depth discussion of digital audio recording and reproduction, the text then details perceptual low bit-rate coding, CD and DVD disc formats, digital audio broadcasting, internet and network audio, and DSP. From the basic theory to the latest technological advancements, Principles of Digital Audio completely covers this multifaceted field, including topics such as:

• MP3, AAC, and Dolby Digital audio coding

• DVD playback and recording formats

• PC-based desktop audio systems

• 5.1-channel surround-sound coding

• HD Radio and satellite radio

• Music downloading and streaming

• Digital signal processing


• Sound and Numbers
• Fundamental Theory
• Digital Audio Recording
• Digital Audio Reproduction
• Error Correction
• Digital Audio Tape
• Optical Disc Storage
• Compact Disc and SACD
• Recordable CD
• Interconnection
• Perceptual Coding: Theory and Applications
• MPEG-1 and MPEG-2
• MP3 Codec
• MPEG-4 and AAC
• Psychoacoustic Models
• Surround Sound Coding
• Lossless Coding
• DVD-Video and DVD-Audio
• Recordable DVD
• HD-DVD and Blu-ray
• Minidisc
• Desktop Audio
• Network Audio
• Downloadable and Streaming Internet Audio
• File Formats
• Digital Rights Management
• Watermarking and Encryption
• MPEG-7
• Digital Radio and TV Broadcasting
• HD Radio
• Satellite Radio
• Digital Audio Workstations
• Digital Signal Processing: Theory and Applications
• Sigma-Delta Conversion and Noise Shaping

Product Details

ISBN-13: 9780070504691
Publisher: McGraw-Hill Companies, The
Publication date: 09/01/1995
Edition description: Older Edition
Pages: 622
Product dimensions: 7.45(w) x 9.28(h) x 1.28(d)

About the Author

Ken C. Pohlmann is a professor at the University of Miami in Coral Gables, Florida, and the director of the Music Engineering program in the university's Frost School of Music. He has initiated new undergraduate and graduate courses in digital audio, advanced digital audio, Internet audio, acoustics and psychoacoustics, and studio production. In 1986, he founded the first master's degree program in Music Engineering in the United States. Mr. Pohlmann holds Bachelor of Science and Master of Science degrees in Electrical Engineering from the University of Illinois in Urbana-Champaign.

Mr. Pohlmann is the author of Principles of Digital Audio (McGraw-Hill); this book has appeared in five editions and has been translated into Dutch, Spanish, and Chinese. He is also author of The Compact Disc Handbook (A-R Editions); this book has appeared in two editions and has been translated into German. He is co-author of Writing for New Media (John Wiley & Sons), and editor and co-author of Advanced Digital Audio (Howard W. Sams). Since 1982, he has written over 2200 articles for publications including Audio magazine, Broadcast Engineering, dB magazine, Car Stereo Review, CD Review, Electronics Australia, Guitar Player magazine, Handbook for Sound Engineers, IEEE Spectrum, Journal of the Audio Engineering Society, Laserdisk Professional, McGraw-Hill Encyclopedia of Science and Technology, Mix magazine, Mobile Entertainment, National Association of Broadcasters Handbook, NARAS Journal, PC magazine, Sound & Vision, Scientific American, Spektrum der Wissenschaft, Stereo Review, Video magazine, and World Book Encyclopedia. He is a senior reviewer for Road & Track Road Gear, and contributing technical editor and columnist for Sound & Vision.

Mr. Pohlmann is president of Hammer Laboratories, a company devoted to the research, development, and testing of new audio technology. He serves as a consultant in the design of digital audio systems and the development of sound systems for automobile manufacturers, and as a consultant and expert witness in technology and patent litigation. Some of his consulting clients include Alpine Electronics, Analog Devices, Apple Computer, Bertelsmann Music Group, Blockbuster Entertainment, BMW, Canadian Broadcasting Corporation, DaimlerChrysler, Eclipse, Ford, Fujitsu Ten, Harman International, Hughes Electronics, Hyundai, IBM, Kia, Lexus, Lucent Technologies, Microsoft, Mitsubishi Electronics, Motorola, Nippon Columbia, Onkyo America, Philips, RealNetworks, Recording Industry Association of America, Samsung, Sensormatic, Sonopress, Sony, TDK, Time Warner, Toyota, United Technologies, and the U.S. Justice Department Anti-Trust Division.

Mr. Pohlmann has consulted with such law firms as Arnold & Porter; Baker & McKenzie; Christie Parker & Hale; Cushman, Darby & Cushman; Dewey Ballantine; Fish & Richardson; Greenberg, Glusker, Fields, Machtinger & Kinsella; Darby & Darby; Firmstone & Feil; Fish & Neave; Hunton & Williams; Paul, Weiss, Rifkind, Wharton & Garrison; Barnes & Thornburg; Kenyon & Kenyon; and Young & Thompson.

Mr. Pohlmann co-founded Microcomputer Arts, Inc. (1980), International Business Information Systems, Inc. (1982), and U.S. Digital Disc Corporation (1985). He chaired the Audio Engineering Society's International Conference on Digital Audio in Toronto in 1989 and co-chaired the Society's International Conference on Internet Audio in Seattle in 1997. He was presented two Board of Governors Awards (1989 and 1998) and was named an AES Fellow in 1990 for his work as an educator and author. He was elected to the AES Board of Governors in 1991. He was presented the University of Miami Philip Frost Award for Excellence in Teaching and Scholarship in 1992. He served as AES convention papers chairman in 1984 and papers co-chairman in 1993. He was elected as the AES Vice President of the Eastern U.S. and Canada Region in 1993. He served as a non-board member of the National Public Radio Distribution/Interconnection Committee (2000-2003). Mr. Pohlmann served on the Board of Directors of the New World Symphony (2000-2005).

Read an Excerpt

Chapter 1: Sound and Numbers

Digital audio is a highly sophisticated technology. It pushes the envelope in many diverse engineering and manufacturing disciplines. Although the underlying concepts have been well understood since the 1920s, commercialization of digital audio did not begin until the 1970s simply because theory had to wait 50 years for technology to catch up. The complexity of digital audio is all the more reason to begin the discussion with the basics. In particular, this chapter begins our exploration of ways to numerically encode the information contained in an audio event.

Physics of Sound

It would be a mistake for a study of digital audio to ignore the acoustic phenomena for which the technology has been designed. Music is an acoustic event. Whether it radiates from musical instruments or is directly created by electrical signals, all music ultimately finds its way into the air, where it becomes a matter of sound and hearing. It is therefore appropriate to briefly review the nature of sound.

Acoustics is the study of sound and is concerned with the generation, transmission, and reception of sound waves. The circumstances for those three phenomena are created when energy causes a disturbance in a medium. For example, when a kettledrum is struck, its drumhead disturbs the surrounding air (the medium). The outcome of that disturbance is the sound of a kettledrum. The mechanism seems fairly simple: the drumhead is activated and it vibrates back and forth. When the drumhead pushes forward, air molecules in front of it are compressed. When it pulls back, that area is rarefied. The disturbance consists of regions of pressure above and below the equilibrium atmospheric pressure. Nodes define areas of minimum displacement, and antinodes are areas of maximum (positive or negative) displacement. The displacement is quite small; in normal conversation, particle displacement is about one millionth of an inch. A crowd's acoustic outpouring might cause displacement of one thousandth of an inch.

Sound is propagated by air molecules through successive displacements that correspond to the original disturbance. In other words, air molecules colliding one against the next propagate the energy disturbance away from the source. Sound transmission thus consists of local disturbances propagating from one region to the next. The local displacement of air molecules occurs in the direction in which the disturbance is traveling; thus sound undergoes a longitudinal form of transmission. A receptor (like a microphone diaphragm) placed in the sound field will similarly move according to the pressure acting on it, completing the chain of events. Incidentally, the denser the medium, the easier the task of propagation. For example, sound travels more easily in water than in air.

We can access an acoustical system with transducers, devices able to change energy from one form to another. These serve as sound generators and receivers. For example, a kettledrum changes the mechanical energy contributed by the mallet to acoustical energy. A microphone responds to the acoustical energy by producing electrical energy. A loudspeaker reverses that process to again create acoustical energy from electrical energy.

The pressure changes of sound vibrations can be produced either periodically or aperiodically. A violin moves the air back and forth periodically at a fixed rate. (In practice, things like vibrato make it a quasi-periodic vibration.) However, a cymbal crash has no fixed period; it is aperiodic. One sequence of a periodic vibration, from pressure rarefaction to compression and back again, determines one cycle. The number of vibration cycles that pass a given point each second is the frequency of the sound wave, measured in Hz (hertz). A violin playing concert A, for example, generates a waveform that repeats about 440 times per second; its frequency is 440 Hz. Alternatively, the reciprocal of frequency, the time it takes for one cycle to occur, is called the period. Frequencies in nature can range from very low, such as changes in barometric pressure at around 10⁻⁵ Hz, to very high, such as cosmic rays at 10²² Hz. Sound is loosely described as the narrow, low-frequency band from 20 Hz to 20 kHz, roughly the range of human hearing. Audio devices are designed to respond to frequencies in that general range.
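The reciprocal relationship between frequency and period described above can be checked with a short calculation (a minimal sketch, using the 440 Hz concert-A figure from the text):

```python
# Period is the reciprocal of frequency: T = 1 / f.
concert_a_hz = 440.0           # frequency of concert A, from the text
period_s = 1.0 / concert_a_hz  # duration of one cycle, in seconds

print(f"Period of a {concert_a_hz:.0f} Hz tone: {period_s * 1000:.3f} ms")
# prints "Period of a 440 Hz tone: 2.273 ms"
```

One cycle of concert A thus lasts a little over two milliseconds.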

Wavelength is the distance sound travels through one complete cycle of pressure change and is the physical measurement of the length of one cycle. Because the velocity of sound is relatively constant, about 1130 ft/s (feet per second), we can calculate the wavelength of a sound wave by dividing the velocity of sound by its frequency. Quick calculations demonstrate the enormous range of wavelengths involved. For example, a 20-kHz wavelength is about 0.7 inch long, and a 20-Hz wavelength is about 56 feet long. No transducers (including our ears) are able to linearly receive or produce that range of wavelengths. Their frequency response is not flat, and the frequency range is limited. The range between the lowest and highest frequencies a system can accommodate defines a system's bandwidth. If two waveforms are coincident in time with their positive and negative variations together, they are in phase. When the variations exactly oppose one another, the waveforms are out of phase. Any relative time difference between waveforms is called a phase shift. If two waveforms are relatively phase shifted and combined, a new waveform results from constructive and destructive interference.
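The wavelength arithmetic above (velocity divided by frequency) can be sketched directly, using the text's 1130 ft/s figure:

```python
# Wavelength = velocity / frequency, with the speed of sound in air
# taken as 1130 ft/s per the text.
SPEED_OF_SOUND_FT_S = 1130.0

def wavelength_ft(frequency_hz: float) -> float:
    """Return the wavelength, in feet, of a sound wave in air."""
    return SPEED_OF_SOUND_FT_S / frequency_hz

for f in (20.0, 440.0, 20_000.0):
    print(f"{f:>8.0f} Hz -> {wavelength_ft(f):.4f} ft")
# wavelength_ft(20.0) ≈ 56.5 ft; wavelength_ft(20000.0) ≈ 0.0565 ft,
# about 0.7 inch — matching the figures quoted in the text.
```

The three-orders-of-magnitude spread between the 20 Hz and 20 kHz results is exactly why no single transducer handles the full audio band linearly.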

Sound will undergo diffraction, in which it bends through openings or around obstacles. Diffraction is relative to wavelength; longer wavelengths diffract more readily than shorter ones. Thus, high frequencies are considered to be more directional in nature. Try this experiment: hold a magazine in front of a loudspeaker. High frequencies will be blocked by the barrier, while lower frequencies (longer wavelengths) will go around it.

Sound can also refract, bending because its velocity changes. For example, sound can refract because of temperature changes, bending away from warmer air and toward cooler air. Specifically, the velocity of sound in air increases by about 1.1 ft/s with each increase of 1°F. Another effect of temperature on the velocity of sound is well known to every wind player: because of the change in the speed of sound, the instrument must be warmed up before it plays in tune (the difference is about half a semitone).
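The 1.1 ft/s-per-degree figure above can be turned into a quick estimator. This is a minimal sketch; pairing the 1130 ft/s value with a 70°F reference temperature is an assumption for illustration, not a figure from the text:

```python
# Linear estimate of the speed of sound in air versus temperature,
# using the text's rate of about 1.1 ft/s per degree Fahrenheit.
def speed_of_sound_ft_s(temp_f: float,
                        ref_temp_f: float = 70.0,     # assumed reference point
                        ref_speed: float = 1130.0) -> float:
    """Estimate the speed of sound (ft/s) in air at temp_f degrees F."""
    return ref_speed + 1.1 * (temp_f - ref_temp_f)

print(f"{speed_of_sound_ft_s(90.0):.1f} ft/s")  # warmer air: 1152.0 ft/s
print(f"{speed_of_sound_ft_s(32.0):.1f} ft/s")  # freezing air: 1088.2 ft/s
```

A linear model is adequate over ordinary room and stage temperatures, which is all the wind player's tuning problem involves.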

The speed of sound in air is relatively slow, about 770 mph (1130 ft/s). The time it takes for a sound to travel from a source to a receptor can be calculated by dividing the distance by the speed of sound. For example, it would take a sound about one-sixth of a second to travel 200 feet. Sound is absorbed as it travels. The mere passage of sound through air acts to attenuate the sound energy. High frequencies are more prominently attenuated in air; a lightning strike close by is heard as a sharp clap of sound, and one far away is heard as a low rumble because of high-frequency attenuation. Humidity affects air attenuation; specifically, wet air absorbs sound better than dry air. Interestingly, moist air is less dense than dry air (water molecules weigh less than the nitrogen and oxygen molecules they replace), causing the speed of sound to increase.
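The distance-over-speed arithmetic above can be sketched in a few lines, again using the text's 1130 ft/s figure:

```python
# Travel time = distance / speed of sound.
SPEED_OF_SOUND_FT_S = 1130.0  # speed of sound in air, per the text

def travel_time_s(distance_ft: float) -> float:
    """Return the time, in seconds, for sound to cover distance_ft."""
    return distance_ft / SPEED_OF_SOUND_FT_S

print(f"{travel_time_s(200.0):.3f} s")  # prints "0.177 s" — about one-sixth second
```

A useful rule of thumb falls out of the same numbers: sound covers roughly 1.1 feet per millisecond, so the thunder from a lightning strike one mile away arrives almost five seconds after the flash.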

Sound Pressure Level

Amplitude describes the sound pressure displacement above and below the equilibrium atmospheric level. In absolute terms, sound pressure is very small; if atmospheric pressure is 15 psi (pounds per square inch), a loud sound might cause a deviation from 14.999 to 15.001 psi. However, the range from the softest to the loudest sound, which determines the dynamic range, is quite large. In fact, human ears (and hence audio systems) have a dynamic range spanning a factor of millions. Because of the large range, a logarithmic ratio is used to measure sound pressure levels. The decibel (dB) uses base 10 logarithmic units to achieve this. A base 10 logarithm is the power to which 10 must be raised to equal the value. For example, an unwieldy number such as 100,000,000 yields a tidy logarithm of 8 because 10⁸ = 100,000,000...
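The base-10 logarithm arithmetic above can be sketched as follows. The excerpt is cut off before giving the decibel formula itself, so note that the 20·log₁₀ convention shown here is the standard form for pressure (amplitude) ratios, not a figure taken from the text:

```python
import math

# Sound pressure level in dB relative to a reference pressure.
# For pressure (amplitude) ratios, the standard convention is 20 * log10.
def spl_db(pressure_ratio: float) -> float:
    """Return the level in decibels of a given pressure ratio."""
    return 20.0 * math.log10(pressure_ratio)

print(f"{spl_db(10.0):.0f} dB")         # a tenfold pressure increase: 20 dB
print(f"{spl_db(1_000_000.0):.0f} dB")  # a millionfold ratio: 120 dB
```

The second line shows why the decibel is so convenient: the "factor of millions" dynamic range mentioned above collapses into a tidy 120 dB figure.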

Table of Contents


Chapter 1: Sound and Numbers

Chapter 2: Fundamentals of Digital Audio

Chapter 3: Digital Audio Recording

Chapter 4: Digital Audio Reproduction

Chapter 5: Error Correction

Chapter 6: Magnetic Storage Media

Chapter 7: Digital Audio Tape (DAT)

Chapter 8: Optical Disc Media

Chapter 9: The Compact Disc

Chapter 10: Perceptual Coding

Chapter 11: DVD

Chapter 12: The MiniDisc

Chapter 13: Audio Interconnection

Chapter 14: Desktop Audio

Chapter 15: Internet Audio

Chapter 16: Digital Radio and Television Broadcasting

Chapter 17: Digital Signal Processing

Chapter 18: Sigma-Delta Conversion and Noise Shaping
