This chapter gives a short introduction to the basics of digital audio processing, without going too much into detail. It is of course somewhat incomplete, but if you have questions you can ask on the Kwave mailing list or consult further literature.
First of all, one must know that the world is analogue, but computers work digitally. So there are several ways to convert analogue audio to digital audio and back again. As the conversion from digital to analogue is normally the reverse of the conversion from analogue to digital, we only describe the analogue-to-digital direction.
Conversion from sound to bits
Before anything else, analogue audio has to be transformed into an electrical signal in order to find its way into a computer. One common way to do this is by using a microphone and an amplifier. This combination receives sound (changes of air pressure) at its input and produces a voltage at its output. A higher amplitude of the pressure changes is represented by a higher voltage at the amplifier's output. This output is also called a "signal". Instead of a microphone you can of course also imagine other sources of audio. And the "amplifier" can be the one that is integrated into your sound card, where you normally can't see it.
Conversion to electronic signal
At this stage, the electrical signal has three limitations that one should keep in mind:
The amplitude (volume) is limited to some maximum level. This is a consequence of the electronics (amplifiers), which are only able to handle voltages within some specific range. That is no problem as long as sounds are not too loud. If they are, the signal will be clipped, which means that the electrical signal runs against its margins and the result is distorted.
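The effect of clipping can be illustrated with a few lines of code. The sketch below (a simplified illustration, not something Kwave itself does) generates a sine wave whose amplitude exceeds the allowed range and flattens every value that runs against the margins; all values, names, and limits here are chosen for illustration only.

```python
import numpy as np

# A 1 kHz sine wave that is too loud for the allowed range [-1.0, 1.0].
# (Sample rate and amplitude are illustrative values.)
sample_rate = 8000                              # samples per second
t = np.arange(sample_rate) / sample_rate        # one second of time stamps
signal = 1.5 * np.sin(2 * np.pi * 1000 * t)     # peak amplitude 1.5 exceeds the limit

# Clipping: the electronics cannot follow the signal beyond its margins,
# so every sample outside the range is flattened to the nearest limit.
clipped = np.clip(signal, -1.0, 1.0)

print(signal.max())    # 1.5 -- the original peak
print(clipped.max())   # 1.0 -- the peak has been cut off (distortion)
```

The flattened tops of the waveform are what you hear as distortion when a recording is too loud.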
The frequency range is also limited. Due to the mechanical constraints of microphones and the limited frequency range of amplifiers, a signal's frequency range is limited. There are no hard borders beyond which the sound abruptly disappears, but below some low frequency and above some high frequency the amplitude of the signal decreases more and more. The existence of a maximum frequency can be easily understood as a limit on how fast the electrical signal can rise and fall. By using high quality amplifiers and microphones, these limits can be pushed into ranges that the human ear can no longer perceive and thus become irrelevant. The human ear is normally not able to hear sound above 20 kHz.
The signal contains noise. Noise is the ugliest enemy of everyone who has to handle audio signals in any way. Noise is a typical analogue effect that makes the audio signal "unsharp" and disturbed; it is always present and cannot be avoided. One can only try to use high quality components that produce as little noise as possible, so that it cannot be heard. Normally noise has a certain volume, so the interesting sound should be much louder in comparison to the noise. This is called the signal to noise ratio (SNR); the higher it is, the better the sound's quality will be. Sounds that have a lower volume than the noise cannot be heard.