The MAC design must also account for the 'hidden node' problem. Coaxial cable ethernet functions as a big party line, and its MAC protocol rests on the assumption that everybody can hear everybody else. It also rests on the assumption that propagation times over the LAN segment are negligibly short. Neither assumption applies to the radio-WAN case, so we can't simply borrow a solution from the LAN protocols. But we can borrow part of the solution, and the framework, to considerable profit.
Constraining knowledge. Other considerations our research has acknowledged: radio circuits involve transmitter 'spool-up' times, unlike baseband copper or the LED/laser-diode emitters of fiber optics. Additionally, modem and cryptographic synchronization overhead is part of the spool-up that conventional wired networks don't have.
And propagation times in satellite-based networks are very restrictive -- the option of a polled network becomes hard to envision, leaving us with scheduled ones.
Some existing satcom systems (e.g. Navy CUDIXS) already have radio-WAN protocols. The run-up to Navy FLTSATCOM included CPODA work at NRL which was on the right track but not used. Navy ADNS didn't rework any of the FLTSATCOM access protocols, but did reimplement them in the Channel Access Protocol renderings. These protocols tend to be unique to each satellite system ... indeed, in the case of Navy fleet satellite communications (currently known as UFO, for UHF Follow-On), each user community (IXS) has a different access method.
We should discount the IEEE 802.11 approaches to the problem -- they won't meet our needs. The wireless LAN solutions warp the ethernet CSMA/CD solution by adding a reservation handshake (RTS/CTS): a short exchange that suppresses competing transmitters until the node that made the reservation has finished sending its data. This only works where propagation times are low. 802.11 still uses a single-queue model, and that single queue is contention-based. The IEEE 802.11 standard contains a point coordination function (PCF) specification which could potentially be used as an API to shim-insert a MAC algorithm, but apparently none of the chipset manufacturers have implemented this feature.
Both IEEE 802.5 token ring and ANSI FDDI token rings included provisions for priority traffic (four and eight levels, respectively), and FDDI supported early token release, which partially met the priority access requirement. FDDI also supported synchronous service. In all these cases, the features were supported in the standards and in the chipsets, but almost never used in practice.
I've poked at this problem off and on for 15 years (a chapter in my own master's thesis). I'm only to blackboard stage. Additionally, I know of one other researcher (Graham Campbell) who has done work that appears to scale into the problem, but he's not yet to working-prototype stage. The solution sketched below is drawn from my master's work, which captured all the requirements noted above except for synchronous service; this version adds an approach to that requirement.
There are two ways to centrally manage a net: polling and scheduling. Polling algorithms (example: Link 11) work in cases where propagation time can be ignored. But in cases, such as satellite hops, where propagation time is significant, polling algorithms become prohibitively inefficient, leaving us with schedules.
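The efficiency gap between the two approaches can be put in back-of-envelope terms. All the numbers below (net size, per-station transmit time, GEO propagation delay) are illustrative assumptions, not measurements:

```python
# Back-of-envelope comparison of polling vs. scheduling overhead on a
# satellite hop. All figures are illustrative assumptions.

PROP = 0.25        # one-way GEO propagation time, seconds
STATIONS = 20      # hypothetical net size
DATA_TIME = 0.5    # seconds of data each station sends per cycle

def polling_cycle():
    # Each poll costs a round trip (poll out, reply back) before the
    # next station can be serviced.
    return STATIONS * (2 * PROP + DATA_TIME)

def scheduling_cycle():
    # One schedule broadcast, then stations transmit back-to-back in
    # assigned slots; the propagation penalty is paid once, not per station.
    return 2 * PROP + STATIONS * DATA_TIME

for name, cycle in (("polling", polling_cycle()),
                    ("scheduling", scheduling_cycle())):
    useful = STATIONS * DATA_TIME
    print(f"{name}: cycle {cycle:.1f}s, efficiency {useful / cycle:.0%}")
```

With these assumed numbers, polling spends half the cycle on round trips while scheduling pays the propagation cost only once per cycle; the gap widens as the net grows.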
A. scheduling algorithm description
Disengagement from the network can be done in a variety of fashions, including simply stopping transmission: the commsta detects no traffic in the allocated time window and eventually times out the allocation. A more graceful MAC disconnect would yield a bit more efficiency (reclaiming the unused allocation). A halfway ground might be for the commsta to keep a silent host in its schedule (non-contention queue) but with a very small allocation -- only large enough to get out an overhead packet -- so that the node can get full service in the next cycle. This kind of approach can accommodate the intermittencies associated with moving communications nodes that are occasionally shaded or otherwise unable to communicate at the moment but need to remain 'in the system'. In any event, a non-transmitting station can still passively receive.
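The 'halfway ground' disengagement policy might be sketched as follows. The slot sizes, the timeout, and the `build_schedule` interface are all hypothetical, chosen only to illustrate the idea:

```python
# Sketch of a disengagement policy: a node that goes silent keeps a
# token-sized allocation for a few cycles before being timed out of the
# schedule entirely. All names and numbers are illustrative.

FULL_SLOT = 10      # allocation units for an active node
KEEPALIVE_SLOT = 1  # just enough to get out one overhead packet
TIMEOUT_CYCLES = 3  # silent cycles before the allocation is reclaimed

def build_schedule(nodes):
    """nodes: dict of name -> cycles the node has been silent.
    Returns the next cycle's schedule as (name, allocation) pairs."""
    schedule = []
    for name, silent in nodes.items():
        if silent == 0:
            schedule.append((name, FULL_SLOT))
        elif silent < TIMEOUT_CYCLES:
            # Keep the node 'in the system' with a minimal allocation so
            # it can claim full service again in the next cycle.
            schedule.append((name, KEEPALIVE_SLOT))
        # else: timed out -- the allocation is reclaimed entirely

    return schedule

print(build_schedule({"alpha": 0, "bravo": 1, "charlie": 5}))
# alpha keeps a full slot, bravo a keepalive slot, charlie is dropped
```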
Synchronous service is provided by the commsta making sure that a customer node needing it gets the same time slot (or set of time slots) each cycle. This 'time division multiplexing' would make the service look somewhat like a T1 allocation. If the requirement is simply for expedited service, then the MAC has done its job and it's up to a priority queueing arrangement in the router (e.g. at Layer 3) to act. If the requirement is truly for deterministic service (i.e. bounded delay), then an interface that bypasses IP and reaches directly to Layer 2 is required. (The unpopularity of this approach is one of the reasons the priority and synchronous features in token LANs were rarely used.)
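A minimal sketch of the slot-pinning idea, assuming a hypothetical eight-slot cycle and made-up node names: synchronous customers keep the same slot every cycle, and best-effort nodes fill whatever remains.

```python
# Sketch of TDM-style slot pinning: synchronous customers occupy the
# same slot index every cycle (bounded delay); best-effort nodes take
# the leftovers. Cycle size and names are hypothetical.

SLOTS_PER_CYCLE = 8

def assign_slots(sync_nodes, async_nodes):
    """sync_nodes: dict of name -> pinned slot index.
    async_nodes: list of best-effort node names."""
    cycle = [None] * SLOTS_PER_CYCLE
    for name, slot in sync_nodes.items():
        cycle[slot] = name              # same slot every cycle
    free = iter(i for i, s in enumerate(cycle) if s is None)
    for name in async_nodes:
        cycle[next(free)] = name        # best-effort fills the gaps
    return cycle

print(assign_slots({"voice-1": 0, "voice-2": 4}, ["mta", "web"]))
```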
Benefits -- bandwidth efficient. The scheduling algorithm sketched is bandwidth efficient -- it maximizes the amount of time used for communications by minimizing overhead time.
Drawbacks -- not very interactive. A TCP three-way handshake requires 1-1/2 cycles to complete. Further, if a station has to negotiate entry into the non-contention queue as a prerequisite to sending any traffic, the startup latency could be higher still. This makes the MAC approach quite suitable for bulk operations such as multicast file transfers, staging of web data onto local servers, or e-mail MTA operations, but a poorer fit for highly interactive applications (I shudder to think how telnet or ssh would operate over such a link).
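To make the interactivity complaint concrete: if each direction of an exchange must wait for its next slot, every one-way turn costs on the order of half a cycle. The cycle length below is an assumed figure for the sketch, not a measurement:

```python
# Illustrative startup latency when each direction of an exchange must
# wait for its slot in the schedule cycle. Cycle length is assumed.

CYCLE = 12.0  # seconds per schedule cycle (hypothetical)

def turnaround_delay(half_cycles):
    """Each one-way turn costs roughly half a cycle of waiting."""
    return half_cycles * (CYCLE / 2)

# SYN, SYN-ACK, ACK: three one-way turns, i.e. 1-1/2 cycles.
print(f"TCP handshake: ~{turnaround_delay(3):.0f}s before any data flows")
# A chatty interactive session (say 20 request/response turns) is far worse:
print(f"20 echo turns: ~{turnaround_delay(40):.0f}s")
```

Bulk transfers amortize that startup cost over a long data flow; a character-at-a-time session pays it over and over.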
My analysis of Graham Campbell's DQSA algorithm. Graham Campbell performed parallel research while on the faculty of Illinois Institute of Technology (ref http://www.iit.edu/~dqrap/):
Up to this point, the radio-WAN discussion applies equally to any part of the spectrum and, for the most part, cares not whether there's a relay (e.g. satellite). This section specifically addresses those issues pertinent to the HF (and low band VHF) part of the spectrum where ionospheric refraction is a key feature in gaining long distance.
Definitions. HF uses have tended to divide into three areas:
If we target the third (ELOS) as the appropriate niche for HF radio and ignore the other two, we can treat the HF network as a single-segment one where all net members are on the same frequency. If we add more complexity in the form of different networks on different frequencies, then we simply have multiple HF segments that are best bridged together with a LAN switch (see the previous discussion of Logical Link Control).
Spectrum allocation myths. HF bandwidth has traditionally been allocated in 5kHz slices, of which 3kHz is used for communications, with 1kHz on either side as guardband to minimize interference with adjacent channels (efficiency hacks such as single sideband and independent sideband are essentially tweaks to the old paradigm). Note that the 3kHz of 'talk capacity' is identical to the 3kHz that the circuit-switched telephone system allocates to each telephone call. This 'way we've always done it' gives each link about 10kbps of capacity, adjustable at the margins depending on atmospherics, burst noise, and how much money we put into the modems. But the 5kHz allocation is an administrative convenience based on the assumption of analog voice -- not digital voice, and certainly not router-to-router interconnect.
If we reallocate the spectrum in larger slices on a per-radio-WAN basis rather than a per-link basis, then we can use the spectrum much more efficiently.
Existence proof. The author once viewed an experiment at SPAWAR Systems Center San Diego where an HF data link was set up in extended line of sight circumstances. In this demonstration, several existing adjacent HF channels (9 if I remember correctly) were amalgamated into a single, 45kHz-wide channel and two radios were specially built to use the single wide channel. The modems used some forward error correction to clean up burst noise; after subtracting this overhead, the link operated at 56kbps with very few uncorrected bit errors. (Unfortunately, I've been unsuccessful in my attempts to unearth the writeup of this experiment).
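The demonstration's numbers can be used to make the reallocation argument concrete. The per-link figures below come from the traditional-allocation discussion above, not from the experiment itself, and the traffic reasoning is an assumption for the sketch:

```python
# Rough arithmetic on the wideband-channel demonstration described
# above. Per-link narrowband figures are the traditional-allocation
# assumptions from the surrounding discussion.

CHANNELS = 9
NARROW_ALLOC_HZ = 5_000   # traditional per-link slice (3 kHz + guardbands)
NARROW_RATE_BPS = 10_000  # roughly what one narrowband link yields
WIDE_RATE_BPS = 56_000    # measured on the amalgamated 45 kHz channel

spectrum_khz = CHANNELS * NARROW_ALLOC_HZ // 1000
print(f"spectrum used either way: {spectrum_khz} kHz")

# Nine dedicated links cap every user at 10 kbps even when the other
# eight links sit idle; a single shared wide channel lets the scheduler
# hand a bursty user the full 56 kbps.
print(f"peak burst per user, narrowband: {NARROW_RATE_BPS // 1000} kbps")
print(f"peak burst per user, wideband:   {WIDE_RATE_BPS // 1000} kbps")
```

The win is statistical multiplexing: the same 45 kHz of spectrum, pooled under one scheduler, serves bursty users far better than nine fixed slices do.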
Impelling reasons. In addition to the efficiency issues, decreasing the number of HF emitters on a platform has a number of other benefits. There are a variety of interference issues in HF (intermodulation interference, rusty-bolt syndrome, etc.) that tend not to be a problem in frequency bands above VHF. These tend to keep HF in 'half duplex' mode -- if any HF transmitter on a ship is operating, none of the HF receivers on the same ship will work well. Reorganizing a ship's HF transmitters on a wideband, general-purpose network basis, rather than the existing one-radio-for-one-application approach, decreases the interference problem.
The scheduling method illustrated above can be implemented wholly in band. If the contention queue is moved out of band -- which is entirely possible -- then the system resembles the existing 'orderwire' and 'DAMA' setups. The drawback to this approach is that terminals need two RF implementations where the in-band approach requires only one.
It would appear that an open standards body approach would yield a better return on the development dollar than putting such a development into a single acquisition program, which would tend to yield yet another acquisition-specific solution.