In case the OP is still interested: I find the post by Sir Les a little confusing. The mismatch between your audio interface and your Sonar project is likely, as he says, due to something resetting the sample rate of the audio interface and its driver. But I am unclear what the point of checking the properties of the wave file would be.
Say your audio interface is set to collect samples at a rate of 48K samples per second. In sampling 1 second of real-world sound it will deliver 48,000 samples to an audio buffer somewhere in computer memory. Sonar has no way of knowing what sampling rate the interface is set to; it just goes to that buffer and picks up a series of numbers (amplitudes). Internally it represents this data as 48,000 sequential numbers. Now if the project in Sonar is set to 44.1K samples/sec, when you listen to playback of the audio track the data is delivered to the D/A playback device at a rate of 44,100 samples/sec. At the end of one second of playback at 44,100 samples/sec there are still 3,900 samples left to play, so getting through the full 48,000 samples takes Sonar about 1.088 seconds. The frequencies drop as well, because each sample reaches the D/A output device a little later than it was originally captured, stretching every waveform by the same ratio. So you have a recording that plays back lower and longer than the real-world performance.
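If it helps to see the arithmetic, here is a tiny sketch in plain Python. The numbers come from this example only; there is nothing Sonar-specific about it:

```python
# One second of real-world sound captured at 48K but played back at 44.1K.
recorded_rate = 48_000          # what the interface actually sampled at
playback_rate = 44_100          # what the Sonar project plays the data back at
n_samples = recorded_rate * 1   # samples captured in one real-world second

playback_length = n_samples / playback_rate    # ~1.088 seconds instead of 1.000
pitch_factor = playback_rate / recorded_rate   # ~0.919, so everything sounds flat

print(f"plays for {playback_length:.3f} s instead of 1.000 s")
print(f"every frequency is scaled by {pitch_factor:.3f}")
```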
When Sonar records the sound, it writes into the recorded file an instruction that it is to be played back at a 44.1K sample rate. In a wave file this sample rate lives at a specific location in the header, and it is that header information that Windows and audio devices read to know how many samples to deliver per second, and what to report as the sample rate in the properties window. When you go to look at the properties of the file, you will find that it says it was recorded at 44.1K, even though it was sampled at 48K, and playing it back at 44.1K produces the longer, lower result described above. The samples in this mislabeled 44.1K file are in fact exactly accurate, and if your playback device were to play them back at 48K instead, the sound would be recovered in a perfect state. But the playback device reads the erroneous sample rate from the header and plays it wrong.
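For what it's worth, you can read exactly what a wave file claims about itself with a few lines of standard-library Python. The file name here is just a placeholder for one of your exported tracks; this reads the label in the header, not the "true" rate the interface used:

```python
import wave

# Report the sample rate written in the file's header and the nominal length
# any player will compute from it.
with wave.open("exported_track.wav", "rb") as wf:
    rate = wf.getframerate()
    frames = wf.getnframes()
    print(f"header says {rate} samples/sec, {frames} samples")
    print(f"nominal length: {frames / rate:.3f} s")
```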
If Sonar were asked to import a properly identified 48K wave file into a project set for 44.1K it would automatically resample the file to match the 44.1K project sample rate. Resampling is something of a misnomer, since the original sound is no longer available to be sampled. What happens is that a mathematical algorithm calculates how the sound represented by the 48K data should be represented by 44.1K data and makes a substitution. That "resampled" data is not necessarily exactly identical, but given a good enough algorithm and close enough sample rates it is indistinguishable in practice.
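Sonar's own resampler is not public, but the idea can be illustrated with SciPy's polyphase resampler. 44,100/48,000 reduces to 147/160, so every 160 input samples become 147 output samples. This is a sketch only, using a synthetic test tone rather than a real recording:

```python
import numpy as np
from scipy.signal import resample_poly

src_rate, dst_rate = 48_000, 44_100
t = np.arange(src_rate) / src_rate      # one second of sample times at 48K
tone = np.sin(2 * np.pi * 440 * t)      # a 440 Hz test tone

converted = resample_poly(tone, up=147, down=160)    # 44100/48000 = 147/160
print(len(tone), "samples ->", len(converted), "samples")
# Played back at 44.1K, 'converted' still lasts one second and is still 440 Hz.
```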
What you would ideally like to do is tell Sonar that the file that was sampled at 48K, but incorrectly labeled as needing to be interpreted as 44.1K, should be treated as a 48K file. I do not know of any way to do that within Sonar, but given how often this problem comes up, perhaps it should be a feature request. An elegant kludge would be to export the affected tracks to wave files and then use a hex editor to change the bytes in the file that specify the sample rate. Without changing a single sample, the problem is solved: when Sonar imports this new, correctly identified file into an audio track, automatic resampling will produce a track that plays as expected. There are several wave file editing utilities available that will let you do that without delving deeply into the wave file format specification. I have not used them, but this might be a simple option if you plan to go that route.
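For the adventurous, here is a rough sketch of that header edit in Python rather than a hex editor. The file name is hypothetical, it assumes a normal RIFF/WAVE chunk layout, and you should work on a copy; the utilities mentioned above do the same thing through a friendlier interface:

```python
import struct

def relabel_wav_sample_rate(path, new_rate):
    """Rewrite the SampleRate (and dependent ByteRate) fields of a WAV file
    in place, without touching any of the audio samples."""
    with open(path, "r+b") as f:
        riff = f.read(12)
        if riff[0:4] != b"RIFF" or riff[8:12] != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                raise ValueError("fmt chunk not found")
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            if chunk_id == b"fmt ":
                start = f.tell()
                fmt = f.read(chunk_size)
                old_rate = struct.unpack_from("<I", fmt, 4)[0]       # SampleRate
                block_align = struct.unpack_from("<H", fmt, 12)[0]   # bytes per frame
                f.seek(start + 4)
                f.write(struct.pack("<I", new_rate))                 # new SampleRate
                f.write(struct.pack("<I", new_rate * block_align))   # new ByteRate
                return old_rate
            f.seek(chunk_size + (chunk_size & 1), 1)  # skip to the next chunk

old = relabel_wav_sample_rate("exported_track_copy.wav", 48_000)
print(f"was labeled {old}, now labeled 48000; samples untouched")
```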
One such utility: http://www.railjonrogut.com/HeaderInvestigator.htm

If you just use a time stretch/shrink process to get the track to the correct length, you will still have the pitch problem. Time and pitch changing algorithms are designed to change the length without changing the pitch, or vice versa, so that two recorded performances of slightly different speed or tuning can be fit together. Applying time correction followed by pitch correction is an option, but it introduces much more opportunity for loss of fidelity.
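To put a number on that pitch problem: a track sampled at 48K but played at 44.1K comes out roughly a semitone and a half flat, which is what a pitch-correction pass would then have to undo. This is simple arithmetic, not tied to any particular plug-in:

```python
import math

recorded_rate, playback_rate = 48_000, 44_100
stretch = recorded_rate / playback_rate                    # track plays ~1.088x too long
semitones = 12 * math.log2(playback_rate / recorded_rate)  # ~ -1.47, i.e. flat
print(f"time stretch needed: {stretch:.4f}x")
print(f"pitch is off by {semitones:.2f} semitones")
```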
As Sir Les says, the best solution is to be diligent about checking the actual settings in Sonar and in the audio interface, to be sure they match before the problem occurs.