[Physics] Why can I tell a flute from a trumpet?

acoustics

The usual story I've heard for the difference between a 440 Hz note played on a flute and the same note played on a trumpet is that the overtones are different. That is, if you play a note at 440 Hz, there will also be Fourier components at 880 Hz, 1320 Hz, and so on. The difference in the relative amplitudes of those higher harmonics is what distinguishes the instruments.
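For concreteness, the picture I have in mind is a fixed recipe of harmonics, something like the sketch below. (The amplitudes here are invented purely for illustration, not measured from any instrument.)

```python
# Additive synthesis of a steady 440 Hz tone from a fixed harmonic recipe.
# The relative amplitudes are hypothetical, chosen only for illustration.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100   # samples per second
DURATION = 2.0        # seconds
F0 = 440.0            # fundamental frequency (Hz)

# Relative amplitudes of harmonics 1, 2, 3, ... (made-up values)
amplitudes = [1.0, 0.5, 0.3, 0.15, 0.08]

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
tone = sum(a * np.sin(2 * np.pi * (n + 1) * F0 * t)
           for n, a in enumerate(amplitudes))
tone /= np.abs(tone).max()  # normalize to avoid clipping

wavfile.write("static_tone.wav", SAMPLE_RATE, (tone * 32767).astype(np.int16))
```

However it's weighted, a sum like this is completely static in time, which is exactly the kind of sound the applet produces.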

However, if you mess around with a Fourier applet (like this one, the first from a Google search), I think you'll quickly become convinced that you can't use it to make a sound like a trumpet, a flute, or pretty much anything other than an electronic box.

What's the essential difference between the idealized sound source and the real sound source?

Best Answer

There are a few different things going on here:

  1. The spectrum of a musical instrument is quite complex, with contributions from a lot of different frequencies, and they have to be fairly precisely balanced for the result to sound right. It's unlikely that you'd stumble upon just the right combination of harmonics while playing with a synthesizer like that applet. But if, for example, you're looking at a table of the relative intensities of the peaks for a particular instrument, then you know where to set the sliders, and you can get close enough to be recognizable. It's easier with some instruments (flutes and guitars) than others (low brass), because their spectra are relatively simple.
  2. Even if you do manage to get the relative intensities of the harmonics exactly right, there's more to the spectrum than that - in other words, the frequencies that are integer multiples of the frequency of the note being played aren't the only ones that matter. A real instrument is a complex system with many different resonant modes that can be excited in varying proportions, and not all of them are harmonics (the first sketch after this list illustrates such stretched, non-integer partials). To get something that sounds better than a cheap MIDI synthesizer, you need to duplicate all those modes. (Doing this can be very computationally intensive, which is why cheap MIDI synthesizers don't sound real: they only reproduce the largest 6-10 peaks in frequency space.)
  3. In fact, it's not even as simple as talking about an instrument: you also have to take into account the musician, the acoustical environment (reverb, etc.), and even the note being played. The same instrument can have a somewhat different spectrum when playing a high note than when playing a low note, for example, because of the non-harmonic modes. We get used to hearing those differences in a musical performance, so when you try to synthesize a note without making adjustments based on absolute frequency, it tends to sound mechanical.
  4. And of course, there are the things the other answers mentioned: the subtle adjustments in amplitude and frequency that musicians make, intentionally or not, as they play (the second sketch below adds a simple envelope and vibrato along these lines). These are also effects we get used to hearing in real musical performances, and we notice the difference when they are missing.
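A minimal way to see point 2 is to let the partials drift away from exact integer multiples of the fundamental. The sketch below uses the textbook stiff-string "stretched partial" formula f_n = n·f0·√(1 + B·n²) with an invented inharmonicity coefficient B; real instruments each have their own mode structure, so treat this purely as an illustration of the idea:

```python
# Sketch of point 2: partials that are *not* exact integer multiples of f0.
# Uses the standard stiff-string stretched-partial formula
#     f_n = n * f0 * sqrt(1 + B * n**2)
# with a made-up inharmonicity coefficient B, purely for illustration.
import numpy as np

SAMPLE_RATE = 44100
DURATION = 2.0
F0 = 440.0
B = 0.001  # hypothetical inharmonicity coefficient

amplitudes = [1.0, 0.5, 0.3, 0.15, 0.08]  # same made-up recipe as before
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# Each partial is slightly sharp of its exact harmonic position.
tone = sum(a * np.sin(2 * np.pi * (n + 1) * F0
                      * np.sqrt(1 + B * (n + 1) ** 2) * t)
           for n, a in enumerate(amplitudes))
tone /= np.abs(tone).max()
```

Even this tiny stretch changes the character of the sound: the partials beat against one another instead of locking into a perfectly periodic waveform.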
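And as a minimal illustration of point 4, here is the same harmonic recipe with a simple attack/decay envelope and a slow, shallow vibrato. The envelope shape and vibrato depth are invented; the one technical detail worth noting is that a time-varying pitch has to be applied by integrating the instantaneous frequency into a phase, not by multiplying the frequency inside the sine directly:

```python
# Sketch of point 4: the same harmonic recipe, but with an amplitude
# envelope and a slight vibrato. All values are invented for illustration.
import numpy as np

SAMPLE_RATE = 44100
DURATION = 2.0
F0 = 440.0

amplitudes = [1.0, 0.5, 0.3, 0.15, 0.08]
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# Instantaneous fundamental: ~0.5% pitch wobble at 5 Hz.
inst_freq = F0 * (1.0 + 0.005 * np.sin(2 * np.pi * 5.0 * t))
# Integrate frequency to get phase (a running sum approximates the integral).
phase = 2 * np.pi * np.cumsum(inst_freq) / SAMPLE_RATE

# Fast attack (~50 ms), then a slow exponential decay.
envelope = np.minimum(t / 0.05, 1.0) * np.exp(-t / 1.5)

tone = envelope * sum(a * np.sin((n + 1) * phase)
                      for n, a in enumerate(amplitudes))
tone /= np.abs(tone).max()
```

Compared with the static sum in the question, even these two crude touches move the sound noticeably away from "electronic box" territory, which is the whole point: the time evolution carries as much identity as the spectrum itself.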