Technical details of the CHU time signal can be found on the NRC's web page at http://inms-ienm.nrc-cnrc.gc.ca/time_services/shortwave_broadcasts_e.html.
Then, use a
cron table entry to run the
program periodically (once every 15 minutes works well).
I use a cron table entry that looks like this:
5,20,35,50 * * * * /sbin/chu -l:300 -d:2946 >> /var/log/chu.log
This also records information in the log file,
such as the time of the last correction and miscellaneous details of how it was
computed. Any data decoded from the CHU
Broadcast Codes is also logged here.
chu can also be used with NTP to make a stratum-1 time server.
For more information, see the chu(8) man page.
Actual performance will vary with the quality of the signal received from CHU: noisier signals produce greater errors. Signal availability also affects overall accuracy.
At my location the signal from CHU is noisy most of the time, and only available for about 10 hours of each day. Typically, while the signal is available, my computer stays within +/- 2 ms. During the remainder of the day, when the signal is not available, the error can climb as high as 20 - 50 ms. I expect I can improve this with a better receiver, a better antenna, or both.
It also looks for the CHU Broadcast Codes between 31 and 39 seconds past each minute. This information, if received, is used to make coarse time corrections when necessary. These codes are decoded internally by the program with its built-in Bell 103 compatible demodulator.
Once a "top of the minute" mark has been detected that indicates the local clock is off by less than 500 ms, the software starts to look for second marks as well. These second marks serve as additional synchronization points and are used in the final offset computation. If the top of the minute marks indicate the clock is off by more than 500 ms, the second marks are ignored.
After several marks are detected, their timing is compared to the system clock. If everything looks good, the system clock is adjusted.
The program opens the sound card for input using the default settings (8 bit samples, 8000 Hz sample rate), and notes the current system time. It then reads data from the sound card in 64 ms blocks.
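At the default 8000 Hz rate, a 64 ms block works out to 512 eight-bit samples. A quick check of the arithmetic (not code from the program itself):

```python
SAMPLE_RATE = 8000      # Hz, the sound card default noted above
BLOCK_SECONDS = 0.064   # one 64 ms block

# 8000 samples/s * 0.064 s = 512 samples per block
samples_per_block = round(SAMPLE_RATE * BLOCK_SECONDS)
```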
Each of these 64 ms blocks is filtered with a 1000 Hz FIR bandpass filter and then passed to a tone detection function. The bandpass filter is 200 Hz wide at the -3 dB points, and provides over 40 dB of attenuation of out-of-band signals. Tone detection is done by computing a correlation between the filtered input signal and a reference tone.
The tone detection function is also used to look for a pair of guard tones that should not be present. Each guard tone is 25 Hz from the desired 1000 Hz tone. The purpose of the guard tones is to prevent the software from mistaking wideband noise for a true signal. If the tone detector finds the 1000 Hz tone present, and both guard tones absent, the software considers the signal valid.
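The correlation test described above can be sketched as follows. The function names and the threshold values are my own illustration, not the program's actual constants:

```python
import math

SAMPLE_RATE = 8000
TONE_HZ = 1000.0
GUARD_HZ = 25.0          # guard tones sit 25 Hz either side of 1000 Hz

def tone_magnitude(samples, freq):
    """Correlate a block against sine/cosine references at `freq` and
    return the normalized amplitude of that frequency component."""
    n = len(samples)
    c = sum(x * math.cos(2 * math.pi * freq * i / SAMPLE_RATE)
            for i, x in enumerate(samples))
    s = sum(x * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i, x in enumerate(samples))
    return 2.0 * math.hypot(c, s) / n

def signal_valid(samples, min_level=0.1, guard_ratio=0.5):
    """Valid when the 1000 Hz tone is present and both guard
    frequencies (975 and 1025 Hz) are comparatively absent."""
    main = tone_magnitude(samples, TONE_HZ)
    lo = tone_magnitude(samples, TONE_HZ - GUARD_HZ)
    hi = tone_magnitude(samples, TONE_HZ + GUARD_HZ)
    return main > min_level and lo < guard_ratio * main and hi < guard_ratio * main
```

A clean 1000 Hz block passes the test, while a tone sitting on a guard frequency is rejected, since its energy shows up in the guard correlation.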
The program then counts a series of 64 ms blocks containing valid signal. When the signal ends, if it is of approximately the correct duration, further processing is done to locate the start of the tone more accurately. This is done by going back over the previously recorded samples with a 1 Hz comb filter applied, looking for the start of the tone. This procedure finds the start of the tone to within one millisecond.
Once the start of the tone is found, its time of occurrence relative to the system clock is recorded, as well as its absolute time as measured by the system clock. To find the time of occurrence of the tone, the software counts the number of samples read from the sound card since it was opened, and adds the time those samples represent to the recorded time the sound card was opened. This means that the sound card's clock, not the system clock, is actually used to time the start of the tone.
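The sample-count timing amounts to simple arithmetic. A sketch with illustrative names, where `t_open` is the system clock reading noted when the card was opened:

```python
SAMPLE_RATE = 8000.0    # samples per second

def tone_start_time(t_open, samples_before_tone):
    """Time of the tone's start on the sound card's clock: the system
    time noted when the card was opened, plus the duration represented
    by the samples read before the tone began."""
    return t_open + samples_before_tone / SAMPLE_RATE

# e.g. a tone that began 40960 samples after the card was opened
# occurred 40960 / 8000 = 5.12 s after t_open
t = tone_start_time(0.0, 40960)
```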
The above procedure is repeated until there are at least 6 and at most 50 valid measurements. Because the time of each tone was measured against the sound card's clock, it can be expected to diverge from the system clock at some fixed rate. This rate is computed using a least-squares linear regression.
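The drift rate is the slope of an ordinary least-squares line fitted to the (system time, measured offset) pairs. A sketch, with illustrative names and data layout:

```python
def drift_rate(times, offsets):
    """Least-squares slope of measured offset vs. system time: the
    rate at which the sound card's clock diverges from the system
    clock, in seconds per second."""
    n = len(times)
    mean_t = sum(times) / n
    mean_o = sum(offsets) / n
    num = sum((t - mean_t) * (o - mean_o) for t, o in zip(times, offsets))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

# Six synthetic measurements, one per minute, drifting at 50 ppm:
times = [60.0 * i for i in range(6)]
offsets = [50e-6 * t for t in times]
rate = drift_rate(times, offsets)
```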
The residual of each sample against the computed sound card divergence rate is calculated, and the sample with the greatest residual is discarded. This is repeated until either the maximum residual is under 1 ms, or fewer than half of the samples remain.
If at least half of the samples remain, the sound card divergence rate is used to compute the time at which the sound card was originally opened. This is compared to the system clock reading recorded at that time, producing an error offset in the range of -30.000 to +30.000 seconds.
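The discard-and-refit loop and the final extrapolation back to the moment the card was opened might look like this. This is a sketch under the assumption that each measurement is an (elapsed time, offset) pair with t = 0 at the moment the sound card was opened; the names are illustrative:

```python
def fit_line(points):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(points)
    mt = sum(t for t, _ in points) / n
    mo = sum(o for _, o in points) / n
    slope = (sum((t - mt) * (o - mo) for t, o in points)
             / sum((t - mt) ** 2 for t, _ in points))
    return slope, mo - slope * mt

def clock_error(points, max_residual=0.001):
    """Discard the worst-fitting sample until every residual is under
    1 ms or fewer than half of the samples would remain, then
    extrapolate the fitted line back to t = 0 (when the card was
    opened).  Returns the error offset, or None on failure."""
    pts = list(points)
    floor = len(pts) // 2           # "at least half" of the samples
    while True:
        slope, icept = fit_line(pts)
        residuals = [abs(o - (slope * t + icept)) for t, o in pts]
        worst = max(range(len(pts)), key=residuals.__getitem__)
        if residuals[worst] <= max_residual:
            return icept            # offset at the moment of opening
        if len(pts) - 1 < floor:
            return None             # too few samples left: give up
        pts.pop(worst)

# Eight measurements on the line offset = 0.1 + 1e-4 * t,
# with one 10 ms outlier that the loop should discard:
points = [(10.0 * i, 0.1 + 1e-4 * (10.0 * i)) for i in range(8)]
points[3] = (30.0, points[3][1] + 0.010)
err = clock_error(points)
```

The single outlier produces the largest residual on the first pass and is dropped; the remaining seven points fit the line exactly, so the loop stops and returns the intercept at t = 0.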
If the data received from the CHU broadcast codes indicates that an adjustment greater than that range is needed, it is computed and added into the error offset. The system clock is then adjusted to correct for this offset, and the system clock frequency is adjusted as well to reduce the error the next time the program is run.
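The final bookkeeping can be sketched as combining the two corrections. The whole-minute interface here is my assumption for illustration; the actual program decodes and applies the broadcast codes internally:

```python
def total_correction(fine_offset, coarse_minutes=0):
    """Combine the +/- 30 s fine offset measured from the tone marks
    with any whole-minute coarse correction indicated by the decoded
    broadcast codes (hypothetical interface)."""
    return coarse_minutes * 60.0 + fine_offset

# e.g. the codes show the clock is a further 2 minutes behind the
# fine offset measured from the tone marks:
correction = total_correction(0.012, coarse_minutes=2)
```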