VOACAP, ICEPAC and REC533


One of the problems with HF prediction software is that it is all based on monthly median data. But what everyone seems to want is "now-casting".

George Lane: Let me first address the time discontinuity in the ITS programs VOACAP, ICEPAC and REC533. These programs use several large global maps of ionospheric data. That data was collected during the IGY (the International Geophysical Year, late 1950s to early 1960s). The data was reduced to tables valid for 2- or 3-month periods at the even hours. These maps cover the critical frequencies of the E, F1 and F2 layers, M-3000, the F-days distribution, and the EXCESS SYSTEM LOSS TABLES (more about them later). Of equal importance to the signal-power calculation is the noise power. Atmospheric radio noise was mapped in 3-month groups and 4-hour time blocks. The original hourly data, as collected, is lost.

I remember sitting in with the old Signal Corps Radio Propagation Agency engineers as they discussed whether it was realistic to make computer predictions for each month from tables in which the monthly dependence had been lost by averaging. I bet their eyes would pop out of their heads if they knew that Johnny-come-latelies are now making daily predictions with these averaged models! Certainly, you can force the programs to give a smooth transition across the days of the month, but it is just a trick which has little bearing on reality. If you have made HF measurements for as many years as I have, you know that the day-to-day variations at a given hour are not smooth across the days of the month. However, we can quite accurately compute the distribution of the variation, such that we can say that only 3 days at this hour during the month should be this bad. But we have no idea when during the month they will occur.
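The "only 3 days this bad" statement follows directly from decile statistics: the lower decile is the level exceeded on 90% of days, so roughly 10% of a 30-day month (about 3 days) falls below it. A minimal sketch, with invented numbers (the variable names and values are illustrative assumptions, not VOACAP outputs):

```python
# Hypothetical monthly statistics at one hour - illustrative values only.
median_snr_db = 20.0    # predicted monthly median SNR
lower_decile_db = 9.0   # drop below the median exceeded on only 10% of days

days_in_month = 30
# Level you can count on for all but ~10% of the days:
reliable_level_db = median_snr_db - lower_decile_db
bad_days = round(0.10 * days_in_month)
print(f"SNR should be at least {reliable_level_db} dB "
      f"on all but about {bad_days} days")
```

The prediction tells you how many bad days to expect, but, as noted above, nothing about which calendar days they will be.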

URSI Coefficients and VOACAP

Next let's talk a bit about the use of the URSI coefficients. Newer is not necessarily better, and the big bug in using them is this: they totally destroy the validity of the EXCESS SYSTEM LOSS TABLES and throw off the entire performance-prediction process. Back in the 1950s, the NBS and the RPA discovered that the prediction model simply did not predict the signal power that was actually received. There was an 8 dB difference between the median measured power and the predicted power on mid- and low-latitude paths. At 90% confidence, the difference jumped to about 16 dB! Things were even worse at high latitudes.

The fix was to create a table of the differences, at the median and the upper and lower deciles, between predicted and measured signal power. This was done for one epoch of ionospheric data, so that both high- and low-sunspot data were included. This difference table became the EXCESS SYSTEM LOSS TABLE (the transmission loss table in IONCAP). It is the fudge factor which brings the predicted signal power down to the level of the measured signal power as a function of path length, geomagnetic latitude, hour, month and sunspot level.
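The mechanics of such a correction can be sketched as a table lookup that pulls the raw model prediction down by an empirically measured offset. Everything below is invented for illustration (the key structure and dB values are assumptions, not the real tables, which are indexed by path length, geomagnetic latitude, hour, month and sunspot level):

```python
# Invented illustrative values - NOT the actual Excess System Loss tables.
# Key: (geomagnetic latitude band, day/night); value: (median, lower
# decile, upper decile) of measured-vs-predicted difference in dB.
excess_loss_db = {
    ("mid", "day"):   (8.0, 5.0, 6.0),
    ("mid", "night"): (8.0, 6.0, 8.0),
    ("high", "day"):  (12.0, 8.0, 10.0),
}

def corrected_power(predicted_dbw, lat_band, period):
    """Apply the tabulated median excess loss to a raw model prediction."""
    median, _lower, _upper = excess_loss_db[(lat_band, period)]
    return predicted_dbw - median

print(corrected_power(-120.0, "mid", "day"))  # raw -120 dBW less 8 dB
```

The point John Lloyd makes below is that these offsets were measured against one specific epoch of ionospheric maps; swap in maps from another epoch and the offsets no longer cancel the model's error.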

John Lloyd, creator of IONCAP, points out very vividly that if you change any of the database tables to a different epoch, you must recompute the Excess System Loss correction maps. Since the actual measurement data no longer exists, you have no way of doing so. This is why you may get very poor predictions running VOACAP with the URSI coefficients. They could happen to be more accurate, but it would be a fluke. From a statistician's point of view, you can have no confidence in the performance predictions when the URSI coefficients come from a different epoch than the rest of the maps.

I can hear someone saying, let's use CCIR Data Base D, D-1 or D-2. I attended the meetings for the development of Rec. 533. I became intrigued when I discovered that they had subtracted 8 dB from the median values in the Excess System Loss tables; they used only the residuals of the table less the 8 dB. CCIR never did use any of the decile tables, just the median values. Then they ran Rec. 533 and subtracted the predicted signal powers from those contained in Data Base D-1. They found the median difference was 9.2 dB, so the net difference was 9.2 - 8.0, or 1.2 dB! Rather amazing, since they only use the foF2 maps and compute the F1 and E layers from them.
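The net-difference arithmetic above can be restated in two lines (the variable names are mine, chosen only to mirror the quantities in the text):

```python
median_offset_removed_db = 8.0  # subtracted from the Excess System Loss medians
rec533_vs_d1_median_db = 9.2    # median prediction-minus-measurement difference
net_difference_db = rec533_vs_d1_median_db - median_offset_removed_db
print(f"net difference: {net_difference_db:.1f} dB")
```

In other words, the model still under- or over-shot by roughly the same 8 dB that had been stripped out of the tables, leaving only a 1.2 dB net residual.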

Someone noticed that some of the circuits in Data Base D-1 were greatly different from the predictions, so they started removing these offending circuits from the database. I became very curious because they always seemed to fall in the region of the 1F2-to-2F2 transition. In talking with the German engineer who had processed most of this data, I learned that he had removed the actual antenna pattern gain from the measured signal power by assuming a fixed layer height for the F2 layer. Therefore, the normalized data in D-1 was extremely dependent on the nomogram used to assess the takeoff and arrival angles of the signal. This meant that CCIR was forcing Rec. 533 to fit a nomogram with a fixed layer height!

About the time I retired, CCIR was attempting to 'de-normalize' D-1. Anyhow, I have never been too keen on Rec. 533. I especially dislike the use of median values with some variation tables applied at the end of the prediction. PCs today have enough memory and are fast enough that you don't need all of the simplifications which were needed in the 1980s, when most of the Rec. 533 development work was done. The intent for 'little 252', HFBC-84 and then Recommendation 533 was to develop a very fast-running computer model which could analyze the full international broadcast frequency schedule on a seasonal basis using inexpensive desktop computers. As such, the development was very forward-thinking and successful. There was no way of knowing at the time how fast computer technology would overtake events.

About VOACAP-ICEPAC Comparison

There was a recent comment about ICEPAC giving better predictions than VOACAP at long distances on high-latitude paths. I have not found that to be true, but I have had only limited experience. In 1998, I was able to obtain a full month's worth of listener reports in the USA, for the east coast and the west coast, on a path nearly over the magnetic pole from a transmitter in northern Germany. I had been contacted because the broadcaster was using ICEPAC, which predicted no usable coverage at any hour. When I ran VOACAP Method 30, I found that the signals should be quite usable on many days of the month. We wrote this up as a paper for the Ionospheric Effects Symposium 1999 [John Goodman, chairman; Washington, DC], showing the excellent agreement between the VOACAP predictions and the actual listener reports for a full month.

The reason VOACAP was more 'correct' than ICEPAC is that Method 30 in VOACAP uses a smoothing function between the short-path (ray-hop) and long-path (forward-scatter) models as a function of distance between 7,000 and 10,000 km. ICEPAC uses an abrupt transition from the short-path to the long-path model at 10,000 km. It turns out that on that path between Germany and the USA, the ray-hop model finds an ionospheric control point with a tremendous signal loss, whereas the long-path model computes a weak but detectable signal assuming a scatter propagation mechanism. Therefore, ICEPAC found no signal, and VOACAP found the weak scatter signal which was actually there.
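The difference between the two transition schemes can be sketched as follows. This assumes a simple linear blend over the 7,000-10,000 km window; the actual Method 30 weighting function may differ, and the dB values are invented for illustration:

```python
def blend_weight(distance_km, start_km=7000.0, end_km=10000.0):
    """Weight given to the long-path (scatter) model: 0 below start_km,
    1 beyond end_km, linear ramp in between (an assumed shape)."""
    if distance_km <= start_km:
        return 0.0
    if distance_km >= end_km:
        return 1.0
    return (distance_km - start_km) / (end_km - start_km)

def smoothed_power(short_db, long_db, distance_km):
    """VOACAP Method 30 style: blend the two models across the window."""
    w = blend_weight(distance_km)
    return (1.0 - w) * short_db + w * long_db

def abrupt_power(short_db, long_db, distance_km):
    """ICEPAC style: hard switch between models at 10,000 km."""
    return long_db if distance_km >= 10000.0 else short_db

# A path where the ray-hop model finds a lossy control point (-200 dBW)
# but the scatter model still yields a detectable signal (-150 dBW):
d = 8500.0  # km, inside the transition window
print(smoothed_power(-200.0, -150.0, d))  # blended: some scatter survives
print(abrupt_power(-200.0, -150.0, d))    # pure ray-hop: no usable signal
```

At 8,500 km the blended model lets the scatter contribution pull the prediction up, while the hard switch still reports only the heavily attenuated ray-hop result, which mirrors why ICEPAC predicted no coverage on the German path while VOACAP (and the listeners) found a signal.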