Hey gang,
there has been a lot of good info in this thread, but there appears to be some misunderstanding. It will take far more than a short post to explain all the issues involved, so I will sit down and try to write a clear, understandable explanation about interfacing audio devices.
In the meantime, to address the specific issues here:
dB is a power ratio! That's it, that's all it is, that's all it can ever be. (We'll ignore, for the moment, the fact that it is ten times the logarithm of the ratio of two powers.)
When the world switched from 'matched impedance, maximum power' transfer to voltage transfer, engineers figured out a way to use a power ratio to describe a level ratio. Because of the relationship between level and power you can use the dB to describe voltage ratios.
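To make that concrete, here's a tiny Python sketch (the function names are mine, just for illustration) showing why the same dB value is 10*log10 of a power ratio but 20*log10 of a voltage ratio:

```python
import math

def db_from_power_ratio(p2, p1):
    """dB is defined as ten times the log10 of a power ratio."""
    return 10 * math.log10(p2 / p1)

def db_from_voltage_ratio(v2, v1):
    """Since power is proportional to voltage squared (P = V^2 / R),
    the same dB value works out to 20*log10 of the voltage ratio,
    provided both voltages see the same impedance."""
    return 20 * math.log10(v2 / v1)

# Doubling the power is about +3 dB; doubling the voltage is about +6 dB.
print(round(db_from_power_ratio(2, 1), 2))    # ~3.01
print(round(db_from_voltage_ratio(2, 1), 2))  # ~6.02
```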
When the dB is used to describe a signal level it must always be with respect to a standard reference level.
There are two generally accepted reference voltage levels:
0 dBu (the 'u' stands for unloaded; save the math for later) uses 0.7746 Vrms (saving where that came from for later too) as the reference. A positive dBu means that the level is greater than 0.7746 V. A negative dBu means it is less.
+4 dBu is the nominal operating level for 'professional' audio inputs and outputs.
+8 dBu is the nominal operating level for broadcasters.
0 dBV (the 'V' stands for volts, and like everything else surrounding this standard it makes sense... oh well!) uses 1.0Vrms as the reference level. Positive and negative dBV numbers represent levels above or below 1V.
Because the two standards use different references we cannot simply add or subtract them. The difference between +4 dBu and -10 dBV is about 12 dB (11.8 dB, to be precise) - note that in this case we are talking about a difference in levels, expressed (as always with dB) as a ratio.
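Here's a quick Python sketch (my own helper names, nothing standard) that converts both nominal levels to volts and shows where that difference comes from:

```python
import math

DBU_REF = 0.7746  # volts rms, the 0 dBu reference
DBV_REF = 1.0     # volts rms, the 0 dBV reference

def dbu_to_volts(dbu):
    """Convert a level in dBu to volts rms."""
    return DBU_REF * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    """Convert a level in dBV to volts rms."""
    return DBV_REF * 10 ** (dbv / 20)

pro = dbu_to_volts(4)         # +4 dBu, 'professional' nominal level
consumer = dbv_to_volts(-10)  # -10 dBV, 'consumer' nominal level

# The difference between the two nominal levels, in dB
diff_db = 20 * math.log10(pro / consumer)
print(round(pro, 3), round(consumer, 3), round(diff_db, 1))  # 1.228 0.316 11.8
```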
As if that wasn't enough, input stages, output stages, and the connection between them can be balanced or single-ended (aka unbalanced). Balance refers to impedance only - it has nothing at all to do with levels.
A balanced output presents an equal impedance from each signal conductor to ground. AND, ground is not used as a reference.
A balanced input presents an equal impedance from each signal pin to ground, and uses that balance to cancel out any signal that is common to both conductors, while responding to the difference between the two pins. Again there is no requirement for a ground reference.
A balanced cable has two signal conductors. It can also have a ground conductor (which has problems) or a shield that connects to ground at one or both ends.
Most balanced cables twist the signal conductors to make them more immune to magnetic interference. The shield has little impact on magnetic fields. The shield, on the other hand, protects from RF fields, something twisting the pairs does little to prevent.
A single-ended source has only one signal conductor, and uses ground as a reference.
A single-ended input responds to the difference between the signal conductor and ground.
A single-ended cable is usually made up of a single signal conductor and a shield, but it can be two signal conductors.
You can drive a balanced input from a single-ended source easily. The noise immunity depends on the design of the input stage: a simple op-amp input stage will suffer from the impedance imbalance, while an instrumentation amplifier, a transformer, or the InGenius chip can tolerate huge imbalances.
A quick word about symmetry. A signal is called symmetrical if the level on each conductor is equal in amplitude but opposite in polarity (not phase!). This provides a bit more headroom, and some improvement in S/N ratio, but it is not the feature that makes a balanced interface work.
The best way to connect a single-ended source to a balanced input is:
- connect the pin on the RCA connector (or tip of the TS connector) to pin 2 of the XLR (or the tip of the TRS connector) through the first signal conductor.
- connect the second signal conductor from the sleeve of the RCA or TS connector to the ring of the TRS connector or pin 3 of the XLR.
- connect the shield to the sleeve of the RCA or TS connector and the sleeve of the TRS or pin 1 of the XLR (or even better, to the chassis at the receive end!)
This will work almost as well as a balanced to balanced connection if the input stage is properly designed. It's really pretty remarkable how well it works.
It is usually safe to send a -10 dBV signal to a +4 dBu input. You will lose a little bit of S/N ratio, but the preamplifier should be able to provide sufficient makeup gain. And you'll have TONS of headroom<G>!
It is a bad idea to send a +4 dBu signal to an input designed for -10 dBV. You'll need a pad or attenuator in front of the input, and you'll lose both headroom and S/N ratio.
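If you do need to knock a +4 dBu output down for a -10 dBV input, a simple resistive L-pad will do it. A rough Python sketch (the resistor values are hypothetical, and it ignores source and load impedance for simplicity, so the real-world loss will differ a bit):

```python
import math

def l_pad_loss_db(r_series, r_shunt):
    """Attenuation of a simple resistive L-pad: a series resistor
    feeding a shunt resistor, forming a voltage divider.
    Assumes the following input impedance is much higher than r_shunt."""
    return -20 * math.log10(r_shunt / (r_series + r_shunt))

# Hypothetical pad: 30k series, 10k shunt gives roughly the ~12 dB
# needed to bring +4 dBu down to about -10 dBV.
print(round(l_pad_loss_db(30_000, 10_000), 1))  # ~12.0
```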
Phew - I still haven't addressed impedance!
Impedance means the opposition to current flow. It can be resistive, which means it applies equally to energy at any frequency. It can be reactive, which means it is dependent on frequency. It can also be the result of the relationship between wavelength and conductor length (transmission-line effects).
What matters for now is that in order for an interface to be good at transferring voltage, the input impedance needs to be much higher than the source impedance. The rule of thumb most often quoted is 10:1. And that certainly works<G>!
Typical source impedances for audio devices are very low; a couple of ohms is not uncommon. In order to prevent damage to the output stage, most manufacturers add a build-out resistor, anywhere from 10 ohms to as high as 50 ohms.
Typical input impedances range from 100K ohms on up. The actual input impedance of a typical op-amp is in the megohms, but additional circuitry is added to protect the inputs, reduce RFI, etc., so the effective input impedance ends up lower.
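Those two numbers together tell you how little level a voltage-transfer interface loses. A quick Python sketch (the helper name is mine) computes the loss from the voltage divider formed by the source and input impedances:

```python
import math

def loading_loss_db(z_source, z_input):
    """Level lost to the voltage divider formed by the source's output
    impedance (build-out included) and the next stage's input impedance."""
    return -20 * math.log10(z_input / (z_source + z_input))

# 50-ohm build-out into a 100k input: the loss is negligible.
print(round(loading_loss_db(50, 100_000), 4))  # ~0.0043 dB
# The same source into an equal 50-ohm load (matched impedance): 6 dB lost.
print(round(loading_loss_db(50, 50), 2))       # ~6.02 dB
```

That is why the old matched-impedance approach gives up half the voltage, while a 10:1 (or better) bridging interface transfers essentially all of it.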
That's a lot of detail, I know, but it's the shortest thing I could write without resorting to pictures and equations.
I hope it helps, but if you have questions just ask!