
Move Over, VME

By Charlotte Adams | July 1, 2010

Standards-based aviation computing took off with VME (Versa Module Eurocard) when the military switched from mil-spec to commercial off-the-shelf (COTS) processors. But this ubiquitous parallel bus technology has reached a speed limit. Its potential successor — VPX — shares little beyond form factors with VME.

But board vendors and their customers are impressed by VPX’s eye-popping speed, small size and rugged design. They are also following the progress of OpenVPX, a standard that will promote interoperability between different card flavors.

Acronym-hungry readers will want to know what VPX stands for. But not even VITA, the organization behind VPX, can say. The technology is simply known as VPX or VITA 46. VITA itself, which used to stand for the VME International Trade Association, now goes only by VITA.

OpenVPX, which adds a system-level framework to VPX, has been issued as VITA 65 and was in the process of adoption by the American National Standards Institute (ANSI) at this writing. It was expected to be published as an ANSI standard in late May.

What’s driving the need for speed? Aircraft computers need to process more sensor data on board, in real time, to identify targets and execute missions. There are larger databases to crunch through, larger images to resolve and identify, and huge computational demands associated with the push toward network-centric connectivity. Data rates from front-end sensors keep climbing and require much greater interboard bandwidth.

Customers in many cases are trying to do things in seconds that used to take minutes, said Steve Edwards, chief technology officer with Curtiss-Wright Controls Embedded Computing, based in Leesburg, Va. Antennas are more sensitive, faster and operate over a wider range. The blending of data from sensors such as radar and infrared and tasks such as automatic target recognition demand higher throughput.

Emerging programs may have five or six high-speed processing boards, and they are trying to move this data across the backplane, said David Pepper, product manager and technologist with GE Intelligent Platforms, of Charlottesville, Va., which makes VME as well as VPX cards. VPX would be more appropriate for these applications, with its higher speed and larger thermal and power envelope.

Given the huge investment in VME systems over the last 20 years, however, VPX probably will be adopted gradually and be used by new programs rather than upgrades. “There’s always going to be VME,” Pepper said.

Mercury Computer Systems, which also supports a large VME product base, is still developing new VME systems for upgrades.

Parallel-bussed systems like VME also don’t scale well physically, because of the distance the data has to travel, said Alan Baldus, a field application engineer with Kontron AG, of Eching, Germany, and San Diego. Two boards in a system, for example, may communicate over an inch of wire on the backplane, but a 20-board system has to drive the same signal, under the same timing constraints, all the way to board 20. With data transmitted serially over multiple lanes, however, transfer rates are faster by factors of 10.

Within VPX, 3U is likely to be the dominant form factor in tight spaces. 3U VME was never very popular because of its negligible I/O; a 3U VME card cage probably offered 30 to 35 Mbytes/s of throughput for the entire cage, said Rodger Hosking, vice president of Pentek, based in Upper Saddle River, N.J., a VME and VPX producer. The predominant business in the small 3U form factor is still CompactPCI, Pepper added.

That said, VPX cards are coming thick and fast. Mercury Computer, of Chelmsford, Mass., has released 12 types of OpenVPX cards, including 3U and 6U switch cards and processor cards.

Pentek has adapted existing PMC and XMC modules to the OpenVPX 3U form factor, Hosking said. The 14 Pentek 3U VPX boards target data acquisition, software radio and digital signal processing applications. Crossbar switching supports four fat pipe connections to the backplane, with each fat pipe delivering up to 2 Gbytes/s with Gen 2 PCIe and 1.25 Gbytes/s with Serial Rapid IO.
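
Those per-pipe figures follow directly from the lane rates and 8b/10b line coding. A quick back-of-the-envelope check, assuming the standard Gen 2 PCIe and 3.125-Gbaud Serial Rapid IO signaling rates (figures assumed here, not quoted from Pentek):

    # Rough check of the per-fat-pipe rates, assuming standard lane rates and 8b/10b line coding.
    LANES_PER_FAT_PIPE = 4

    pcie_gen2_gtps = 5.0        # GT/s per lane (standard Gen 2 PCIe figure, assumed)
    srio_gbaud = 3.125          # Gbaud per lane (common Serial Rapid IO rate, assumed)
    encoding = 8.0 / 10.0       # 8b/10b coding: 8 payload bits per 10 line bits

    pcie_fat_pipe = LANES_PER_FAT_PIPE * pcie_gen2_gtps * encoding / 8   # Gbytes/s per direction
    srio_fat_pipe = LANES_PER_FAT_PIPE * srio_gbaud * encoding / 8       # Gbytes/s per direction
    print(pcie_fat_pipe, srio_fat_pipe)   # 2.0 and 1.25, matching the figures above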

GE Intelligent Platforms has issued a bevy of VPX cards, including a 3U I/O card with Mil-Std-1553 and ARINC 429, repackaged from a PMC daughter card. Kontron has a 3U VPX board with an Intel Core 2 Duo processor and is poised to release early-access versions of 6U cards with dual Core i7.

VME allowed the proliferation of interchangeable cards using a common parallel bus. Although data travels along the VMEbus many bits at a time, transfer speeds top out at around 320 Mbytes/s — some say 500 Mbytes/s — for the VME64 bus with all the high-speed extensions. That’s still not fast enough for tomorrow’s applications. What’s more, the cards on the backplane share the bus, so only one card can talk at a time.

VPX migrated from a parallel backplane bus to gigabit serial backplane links, using protocols such as PCI Express (PCIe), Serial Rapid IO (SRIO), Gigabit Ethernet and 10 Gigabit Ethernet. Although these serial fabric interconnects send only a few bits at a time per link, they do so at warp speed. And instead of one card talking at a time, serial fabrics allow multiple pairs of cards to send and receive simultaneously.
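
To see what that concurrency buys, compare the traffic a shared bus can carry with what several point-to-point links can move at once. The sketch below reuses the 2-Gbyte/s fat-pipe figure worked out earlier and the roughly 320-Mbyte/s VME64 number, purely as an illustration:

    # Illustrative aggregate-bandwidth comparison: one shared parallel bus vs. concurrent serial links.
    VME64_SHARED_MBPS = 320     # approximate total for the whole VME64 backplane; one talker at a time
    FAT_PIPE_MBPS = 2000        # one x4 Gen 2 PCIe link, per direction (see the calculation above)

    active_links = 5            # e.g., five processing boards each streaming to a neighbor at once
    print("VME64 backplane total:", VME64_SHARED_MBPS, "Mbytes/s")
    print("Five concurrent VPX links:", active_links * FAT_PIPE_MBPS, "Mbytes/s")   # 10,000, i.e. 10 Gbytes/s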

Industry estimates of VPX card-to-card throughput run to more than 100 Gbytes/s. GE’s Pepper, however, puts VPX transfer speeds at 10 Gbytes/s or more. The problem with scaling to the higher numbers, he said, is that as the fabrics get faster, fundamental issues such as signal integrity emerge. “If I run the bandwidth high, the signal is going to be so distorted, my bit error rates are going to go through the roof.”

But even at the low end of the speed range, VPX is impressive. If VME, as some say, can be pushed to 500 Mbytes/s, 5 Gbytes/s would still be an order-of-magnitude increase.

VPX also accommodates multiple flavors of serial interconnect by assigning them to different planes. A given plane can also run more than one protocol. GE’s SBC610 6U VPX card, for example, runs SRIO and PCIe on the data plane. “There are no hard and fast rules” about which protocol to use on the data plane, Baldus said.

Complexity arises because serial links are point-to-point. In the absence of a dedicated switch card, every card needs a direct line, via the backplane, to every other card it must communicate with, and vice versa.
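
The wiring cost of that rule is easy to quantify: a full mesh of N cards needs N(N-1)/2 backplane links, while a central-switch topology needs only one link per payload card. A minimal sketch (the function names are illustrative, not drawn from any standard or tool):

    def full_mesh_links(n_cards: int) -> int:
        # Every card wired directly to every other card.
        return n_cards * (n_cards - 1) // 2

    def central_switch_links(n_payload_cards: int) -> int:
        # Every payload card wired only to a dedicated switch card.
        return n_payload_cards

    for n in (4, 8, 16):
        print(n, full_mesh_links(n), central_switch_links(n))
    # 4 cards: 6 vs. 4 links; 8 cards: 28 vs. 8; 16 cards: 120 vs. 16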

OpenVPX, a system-level interoperability specification, defines items such as backplane topologies and how interconnects are done for different types of topologies, said Pete Jha, senior software engineer with Curtiss-Wright Controls Embedded Computing. The spec essentially guarantees that all the necessary connections are there to support the topologies, assuming the integrator follows the appropriate path within the standard and makes the right choices on software and firmware matters outside the scope of the VPX architecture.

The problem with baseline VPX was that there were so many different possible combinations that “you could end up with a matrix that spins out of control,” Baldus said. He sees OpenVPX as an effort to “define a few sets of configurations [to] keep the designer sane.” OpenVPX didn’t redefine any of the configurations, he said. It simply added clarity. It created a “locked matrix,” so when somebody says, “OpenVPX-xyz,” everybody knows what that is. Everybody knows, for example, that SRIO or PCIe or Gigabit Ethernet has been allocated to particular pins.

The concept of planes is important. At a high level, multiplane technology means that different tasks can be performed separately, so the computer as a whole can operate faster, said Anne Mascarin, product marketing manager with Mercury Computer Systems. The various planes do communicate, but at a very basic level, and interrupts occur less frequently than with a standard multicomputer. The data plane is where high-speed data movement occurs, and different companies have different protocol preferences for it.

Mercury, for example, prefers Serial Rapid IO for the data plane because of its low-latency, highly deterministic performance, said Tom Roberts, a Mercury product marketing manager.

Another VPX advantage is power per slot. It would be hard to have more than 100 watts on a 6U VME board, Pepper said. “You tend to stay below 100 watts a slot in VME 6U to accommodate the electronics.”

VPX, on the other hand, can deliver more power than you can cool, he said. “We’ve got plenty of power pins, so it’s more of a thermal issue.” It’s not that unusual in VPX to get 150 to 200 watts per slot. VME is pushing it at 70 watts per slot, Baldus said.

Power per slot is limited by the power available from the backplane and the cooling capacity of the slot, Hosking explained. VPX can deliver several hundred watts from the backplane to the modules, but the actual power per slot depends on the cooling technology, chassis type, application and customer requirements.

VPX also defines a new connector. One thing customers didn’t like about VME was, “we couldn’t get that much I/O out the back,” Pepper said.

VPX, by contrast, offers six I/O connectors with a total of 672 data pins, Pepper said. VPX also can be made backward-compatible with VME: mixed-mode applications use a hybrid backplane that combines VME and VPX slots. GE already has a VPX single-board computer, the SBC620, with a VME interface that can operate in a hybrid backplane.

A VME board in a hybrid backplane typically would communicate just with other boards implementing VME, according to Pepper. This type of system might suit the needs of a customer who has some low-speed sensors that could plug in via a VME interface and yet share a single chassis with VPX cards. It’s a question of whether the customer can afford to convert all boards to VPX.

OpenVPX

The original VPX standard — VITA 46 — defines basics, such as module sizes, serial fabric protocols, connectors and power connections.

A sister standard, VITA 48, added mechanical designs for forced air, conduction cooling and liquid cooling. But the base VPX standard focused on module electrical and mechanical specifications rather than system-level requirements. Because the original standard could be implemented a number of ways, interoperability was an issue.

OpenVPX defines a set of system implementations and architectures to promote interoperability between vendors and lower life-cycle costs, according to Pentek. The standard takes a top-down, rather than bottom-up, approach. OpenVPX has not fundamentally changed what was developed in VPX, Edwards said. It addresses system options for the different topologies and assigns all the pin-outs required for modules to communicate. This also helps in the development of standard backplanes.

If the customer wants a centralized-switching backplane, it’s actually been predefined in the OpenVPX standard, Edwards said. The customer can buy that backplane, along with modules that conform to appropriate profiles, off the shelf.

OpenVPX also introduced standard terms to describe the paired gigabit serial links used to transmit and receive data. Each pipe is a group of point-to-point serial pairs treated as a single logical data channel.

For example, a single, bidirectional serial pair, or 1X link, is known as an ultra thin pipe; a double pair is a thin pipe; a quad pair is a fat pipe; eight pairs are a double fat pipe; 16 pairs, a quad fat pipe; and 32 pairs, an octal fat pipe.

OpenVPX also defines the number of pipes that can be assigned to each connector. A 3U card has three backplane connectors, two of them data connectors, and accommodates up to eight fat pipes (4X links) or other combinations of pipes. The bigger 6U cards have room for four more data connectors, for a maximum of 24 fat pipes or other combinations.
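
Because each pipe name corresponds to a fixed lane count, the connector budget is simple arithmetic. The sketch below encodes the pipe vocabulary from the preceding paragraphs and checks the 3U and 6U fat-pipe totals; the four-fat-pipes-per-data-connector figure is inferred from those totals, so treat it as an assumption:

    # Pipe vocabulary from the text: name -> number of bidirectional serial pairs (lanes).
    PIPES = {
        "ultra thin pipe": 1,
        "thin pipe": 2,
        "fat pipe": 4,
        "double fat pipe": 8,
        "quad fat pipe": 16,
        "octal fat pipe": 32,
    }

    # Connector budget; fat pipes per data connector is inferred from the 3U and 6U totals above.
    FAT_PIPES_PER_DATA_CONNECTOR = 4
    print("3U (2 data connectors):", 2 * FAT_PIPES_PER_DATA_CONNECTOR, "fat pipes")   # 8, as in the text
    print("6U (6 data connectors):", 6 * FAT_PIPES_PER_DATA_CONNECTOR, "fat pipes")   # 24, as in the text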

OpenVPX distinguishes between the kinds of traffic transmitted through the pipes. There are five planes: utility, management, control, data and expansion.
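
A compact way to keep the five planes straight is to map each to its typical role. The roles below reflect common OpenVPX practice rather than anything stated in this article, so treat the summary as informal:

    # Informal summary of the five OpenVPX planes and their typical roles (assumed, not quoted).
    PLANES = {
        "utility":    "power, reference clocks, resets and other housekeeping signals",
        "management": "low-level system management traffic",
        "control":    "lower-speed command and control traffic, often Gigabit Ethernet",
        "data":       "high-bandwidth payload traffic, e.g., SRIO, PCIe or 10 Gigabit Ethernet",
        "expansion":  "secondary board-to-board links, often used for PCIe bridging",
    }
    for name, role in PLANES.items():
        print(f"{name:<10} {role}")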

OpenVPX also defines a number of profiles, described on Pentek’s site, that help ensure component modules will be able to talk to each other and use a standard set of backplanes (see the sketch after this list):

➤ Slot Profile: Describes the pipes and planes on the backplane connectors of each slot;

➤ Module Profile: Describes the pipes, planes, fabrics and protocols on each card;

➤ Backplane Profile: Describes how slots are interconnected by pipes, slot sizes (3U or 6U), slot spacing (1.0, 0.85 or 0.8 inch), quantity and types of slots, and topologies, including mesh, central switch, distributed, dual-star and root-leaf;

➤ Development Chassis Profile: Describes not only the Backplane Profile but the chassis dimensions, power supply and cooling technique.
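
The four profiles form a simple containment hierarchy: a chassis profile wraps a backplane profile, a backplane profile is a set of interconnected slot profiles, and each module must match the profile of the slot it plugs into. The sketch below is purely illustrative; the field names are invented, not taken from VITA 65:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SlotProfile:              # pipes and planes on one slot's backplane connectors
        name: str
        fat_pipes: int

    @dataclass
    class ModuleProfile:            # pipes, planes, fabrics and protocols on a card
        name: str
        slot_profile: str           # must match the slot it plugs into
        protocols: List[str]

    @dataclass
    class BackplaneProfile:         # how slots are interconnected, slot sizes, spacing, topology
        name: str
        topology: str               # e.g., "central switch", "mesh", "dual-star"
        slots: List[SlotProfile] = field(default_factory=list)

    @dataclass
    class ChassisProfile:           # backplane profile plus chassis dimensions, power and cooling
        name: str
        backplane: BackplaneProfile
        cooling: str                # e.g., "conduction", "forced air"

    def fits(module: ModuleProfile, slot: SlotProfile) -> bool:
        # A module can populate a slot only if its slot profile matches.
        return module.slot_profile == slot.name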
