Signal Measurement with HDSDR

Fairly accurate comparative measurements may be made with HDSDR.
Please remember that, whatever the method used, measurements will not be absolute without reference to a suitably calibrated source. But this method can still give an experimenter a useful way of comparing equipment.

S-meter calibration: you can calibrate the S-meter level with Options (F7)  / Calibration Settings (C) / S-meter calibration.
(The "Current Level" shown is unrealistic; the picture was taken with HDSDR stopped.)

The S-meter shows the power in the selected and filtered tuning passband, just like a usual non-SDR receiver.

The S-meter reading just depends on calibration and selected demodulation bandwidth (blue in upper spectrum). Nothing else.
Clicking on the 'dBm' value/text below the graphical S-meter switches the text to a value appended with 'dBm wRF/IF'. If the ExtIO supports this and informs HDSDR of the RF and/or IF gain, the power level is compensated for the RF and IF gain slider settings.
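As a sketch of the idea (the function name, sign convention, and numbers below are assumptions for illustration, not HDSDR's actual code):

```python
def dbm_wrfif(raw_dbm, rf_gain_db, if_gain_db):
    """Refer the measured level back to the antenna input by subtracting
    the RF and IF gains reported by the ExtIO (illustrative sketch)."""
    return raw_dbm - rf_gain_db - if_gain_db

# A -73 dBm reading with 20 dB RF gain and 10 dB IF gain in the path:
print(dbm_wrfif(-73.0, 20.0, 10.0))  # -> -103.0
```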

Spectral measurement depends on S-meter calibration but is independent of demodulation bandwidth. The following parameters also have an effect: RBW, Peak Power Spectrum/Average Power Spectral Density (Avg PSD), FFT Windowing and Averaging.

For calibration and measurement you must not use any AGC inside or in front of the SDR hardware. You also have to note all RF and IF settings (attenuator, amplifier, gain, ..) used at the time of calibration. Do not assume that the given/printed RF/IF attenuation/gain values are exact; usually they are also band dependent. In general you should use the same RF/IF settings you used for calibration.

For exact spectral measurement you should also check Options / Visualization / RF Spectrum type:

Peak Power Spectrum (formerly: Amplitude Spectrum (AS)) shows the maximum frequency bin when multiple bins fall onto one pixel. This allows one to see narrow CW carriers without the need to zoom in optically. But it also raises the displayed noise floor a little - by up to ~11 dB when zoomed out far enough that multiple bins fall onto the same pixel.
With this setting each bin shows the power within the RBW, which depends on your sample rate. Doubling the RBW from 11 Hz to 22 Hz will increase the displayed power by 3 dB (the power per bin is doubled).
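The 3 dB step can be checked with a few lines (the -140 dBm/Hz noise density is an assumed value for illustration):

```python
import math

def bin_level_dbm(noise_psd_dbm_per_hz, rbw_hz):
    # power captured by one FFT bin of width RBW over a flat noise floor
    return noise_psd_dbm_per_hz + 10 * math.log10(rbw_hz)

delta = bin_level_dbm(-140.0, 22.0) - bin_level_dbm(-140.0, 11.0)
print(round(delta, 2))  # -> 3.01 (doubling the RBW doubles the bin power)
```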

Average Power Spectral Density (Avg PSD) uses an 'average' detector, leading to lower level values - but these should be better for measurement purposes. You may also try using some averaging ..
With PSD, HDSDR normalizes all FFT results to a resolution bandwidth (RBW) of 1 Hz. This keeps the noise floor independent of the RBW.
The RBW setting is used internally but then normalized out, which usually decreases the displayed levels compared to the Peak Power Spectrum.
Usually the normalization reference is 1 Hz. With {Ctrl+J} you can change the 'Spectrum Reference Bandwidth', e.g. to 500 Hz, which leads to correspondingly higher levels.
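A sketch of this normalization (the function and the -120 dBm bin power are illustrative assumptions, not HDSDR's code):

```python
import math

def normalized_level(bin_power_dbm, rbw_hz, ref_bw_hz=1.0):
    """Divide the bin power by the RBW, then refer it to the chosen
    spectrum reference bandwidth (illustrative sketch)."""
    return bin_power_dbm - 10 * math.log10(rbw_hz) + 10 * math.log10(ref_bw_hz)

p = -120.0                                   # power in one 46.875 Hz bin
ref1 = normalized_level(p, 46.875)           # default 1 Hz reference
ref500 = normalized_level(p, 46.875, 500.0)  # {Ctrl+J} reference of 500 Hz
print(round(ref500 - ref1, 2))               # -> 26.99 dB higher
```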

From HDSDR v2.75 on, pressing the {J} key switches between Peak PS and Avg PSD.

Also from HDSDR v2.75 on, an additional level is displayed in the upper right corner of the RF spectrum. This value represents the ADC's input level relative to full scale [dBFS]. As most ADCs go into compression at high levels near 0 dBFS, you should not trust the S-meter or spectral levels in that case.

The internal FFT calculates the power in one FFT bin. The bandwidth of one bin is the RBW: with a sample rate of 48 kHz and an FFT length of 1024, we get an RBW of 48000 / 1024 = 46.875 Hz. With a screen width of 1024 pixels each pixel represents one bin. But HDSDR normalizes the FFT power by dividing by the RBW and then converting to a level in dB.
Now when there is only noise inside one FFT bin, then there should not be any difference in the noise level displayed, even if you change RBW (and there is still just noise in one bin).
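This RBW independence of the noise floor can be shown numerically (the -150 dBm/Hz noise density is an assumed value):

```python
import math

n0 = -150.0          # assumed flat noise density in dBm/Hz
sample_rate = 48_000

for fft_len in (1024, 4096):
    rbw = sample_rate / fft_len
    bin_power = n0 + 10 * math.log10(rbw)         # noise collected by one bin
    displayed = bin_power - 10 * math.log10(rbw)  # divide-by-RBW normalization
    print(fft_len, round(rbw, 5), displayed)      # displayed stays at -150.0
```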

But usually a much finer RBW of ~10 Hz may be used. With a sample rate of 48 kHz the RBW will be 11.71875 Hz for an FFT length of 4096.
Now things change a bit, because the screen is not wide enough to display 4096 pixels. So if we have exactly 1024 pixels of width, HDSDR will need to compress 4 FFT bin values into 1 pixel. [Note: HDSDR actually allows fractional bin-to-pixel ratios to accommodate any window size.]

The difference between the "amplitude spectrum" and the "power spectral density" is the kind of compression used.

The amplitude spectrum chooses the maximum of the four FFT bins (one of which might hold a narrow signal, e.g. a CW carrier). Taking this maximum raises the displayed noise floor, because the noise varies from bin to bin.
PSD averages the four bins, which gives a better noise and signal estimate. But if one of the four bins holds a narrow CW carrier, the level of the carrier gets reduced by averaging with the other, weaker noise bins.
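The two detectors can be compared on made-up linear bin powers (all values below are illustrative assumptions):

```python
import math

# Four bins collapsing onto one pixel: one holds a narrow CW carrier,
# the other three hold only weak noise (linear powers in mW, made up).
bins_mw = [1e-9, 1e-9, 5e-6, 1e-9]

peak = max(bins_mw)                # "amplitude spectrum" compression
avg = sum(bins_mw) / len(bins_mw)  # PSD compression

# Averaging a single-bin carrier over 4 bins costs about 6 dB:
print(round(10 * math.log10(peak / avg), 2))  # -> 6.02
```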

Common to both: any narrow carrier with a bandwidth smaller than the RBW will not be seen at its true power level. Carriers with a small SNR (signal-to-noise ratio) will be buried in noise or smeared away.
If we measure with an RBW of 10 Hz, then a carrier of 1 Hz bandwidth (a pure unmodulated carrier theoretically occupies zero bandwidth!) will be seen as a narrow carrier with 10 times lower power. The FFT bin catches the noise power in 9 Hz plus the power of the carrier. In total we have the power in 10 Hz (= RBW), which is then displayed. If the noise power is zero we "measure" 10 times less power for the carrier.
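The arithmetic of this example, with the carrier power as an assumed illustrative value:

```python
rbw_hz = 10.0
carrier_mw = 1e-6   # a pure carrier, essentially zero bandwidth (made up)
noise_mw = 0.0      # noise set to zero, as in the example above

# The bin collects everything inside the RBW; the normalization then
# spreads that total power over the full 10 Hz:
density_mw_per_hz = (carrier_mw + noise_mw) / rbw_hz
print(round(carrier_mw / density_mw_per_hz, 6))  # -> 10.0 (shown 10x too weak)
```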

Observations from Warren, 9V1TD (as of July 2013)

The HDSDR S-meter seems to sit at an unnaturally high level on pure noise, with no apparent relationship to the displayed noise floor level. It does not change as I adjust the resolution bandwidth on either spectrum display, while the displayed noise floor drops as expected when you decrease the RBW. What is going on here?
Well, it appears the S-meter is indicating the sum of the power within the bandwidth of the selected filter. Prove this to yourself: select a quiet portion of the band in SSB mode and compare the S-meter reading with the filter set at 2 kHz to that with it set at 1 kHz. Hm... about half an S unit.... 3 dB! That makes sense!
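Checking that arithmetic (6 dB per S-unit is the usual convention; the noise density is an assumed value):

```python
import math

S_UNIT_DB = 6.0  # conventional 6 dB per S-unit

def noise_power_dbm(psd_dbm_per_hz, bw_hz):
    # total noise power inside the filter passband (flat noise assumed)
    return psd_dbm_per_hz + 10 * math.log10(bw_hz)

delta_db = noise_power_dbm(-140.0, 2000.0) - noise_power_dbm(-140.0, 1000.0)
print(round(delta_db, 2), round(delta_db / S_UNIT_DB, 2))  # -> 3.01 0.5
```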

Now tune a strong, steady carrier... S9 or better, and narrow up the filter around it. No change, until you start to attenuate the carrier itself. Again, makes sense: the power of the carrier in the filter passband overwhelms the noise so the total power displayed on the S meter doesn't materially change.
Suddenly we have a very valuable tool!
Consider MDS measurements. The ARRL methodology requires a true-RMS-reading voltmeter attached to the speaker terminals. The reference level on the meter is set by adjusting the receiver's audio gain with no signal applied. Then the signal is applied through a variable attenuator, and the attenuation is adjusted for a voltmeter reading that is 1.414 times the previously recorded reference level.

Forget the RMS voltmeter and disassembling your speaker cabinet to get to the speaker terminals. Simply tune the generator signal on HDSDR, remove the signal, read the S meter, then inject the signal and adjust the attenuator until the meter is half an S unit above the no-signal level. Take the known level of your generator, subtract the amount of attenuation you used and, voila!, MDS.
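The final step is simple arithmetic; all numbers below are hypothetical:

```python
# MDS from the shortcut above: the generator level minus the attenuation
# needed to lift the S-meter about half an S-unit (3 dB) over the
# no-signal reading (both values are made-up examples).
generator_dbm = -10.0     # known signal generator output level
attenuation_db = 127.0    # attenuator setting at the half-S-unit point

mds_dbm = generator_dbm - attenuation_db
print(mds_dbm)  # -> -137.0
```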

I actually discovered this while trying to quantify phase noise for my GPS-referenced synthesizer. The ARRL methodology for this requires many of the same techniques as MDS measurement, and I noticed that the relative S-meter readings correlated closely with my observations made using the audio output power.

Measuring PSK symbol rate

Digital signals can often be identified by measuring the symbol rate. It is also the first step toward demodulating an unknown signal.

Alipio has prepared the following document describing the measurement of PSK (Phase Shift Keying) signals: PSK_speed_measurement.pdf

All the technical notes are by "LC", one of HDSDR's developers.

Dec 22, 2014, 3:36 PM