
Implementing a Complex Bus Solution with a Runtime-Defined Instrument Architecture

By Peter Hansen | September 1, 2012


Historically, defense and aerospace test programs on large-scale automatic test equipment (ATE) operated by sequentially applying, measuring and comparing voltages, currents, waveforms and digital truth tables. This procedural approach worked well in the past for automating tests that otherwise would be executed manually on a test bench, providing improved throughput and repeatability. However, this approach can have some drawbacks. First, it is rare that this sequential approach faithfully reproduces the actual operation of a Unit-Under-Test (UUT). Second, it works best with older avionics that can be readily controlled at a very low level by external test equipment, a function that is often not as straightforward in newer designs.

Today, test systems are no longer merely measuring voltages and signals; they are increasingly required to exchange and analyze large, complex data sets with the UUT at very high rates. Exchanging this data and controlling the UUT usually means the ATE must employ complex protocols on various buses. Command, control and data exchange are often tightly interwoven in time, placing real-time performance demands on the ATE. The test equipment must be truly “in the loop” with the UUT as it operates in a manner similar to the end application. This is a tremendous challenge, and one that traditional sequential testing methods do not handle well, or quickly.

This data interchange may run over one of many low-level digital buses, such as Fibre Channel, with application-specific upper-level processing applied to the data and overall control provided by high-level Test Program Set (TPS) software that typically runs on a Windows PC. Achieving fast and predictable real-time behavior while controlling the process from a Windows PC is an additional challenge placed on the test equipment.

Storing pre-calculated stimulus values and captured response data to disk is rarely a viable approach in these data-intensive scenarios. The associated batch processing of the data cannot provide the responsiveness needed for real-time “in the loop” testing, and the quantity and bandwidth of raw stimulus and response data often make it impractical. Instrumentation with real-time processing can calculate stimulus values and make UUT quality assessments on the fly, and that is the direction in which ATE is headed.

As avionics test requirements have escalated, the internal architecture of test instrumentation has become increasingly powerful and configurable. Inflexible hard-wired logic gave way to microprocessors, programmable logic and now to fully reconfigurable, real-time processors and field programmable gate arrays (FPGAs). The current power and flexibility of these instruments allows them to execute applications that address the bus interface and upper-level processing requirements of recent designs. The flexibility also results in equipment that can be reconfigured for a specific test, in contrast with earlier instruments that were dedicated to a single purpose. Under control of a specific TPS, this equipment can be categorized as Runtime-Defined Instruments: the test program specifies what the instrument does, thus enabling one test instrument to do many things.

This combination of processing elements forms a test subsystem with a three-tier architecture. A Windows PC controls the high-level flow of the TPS and performs pre-test setup of the underlying real-time test hardware. Below the PC are one or more flexible instruments that implement the lower two tiers, consisting of real-time processors in the middle and FPGAs at the bottom. The three tiers are bound together with well-designed and supported software and firmware interfaces. This arrangement provides an ideal platform for implementing a UUT-specific test application, whether developed by the ATE manufacturer, the end-user or a third party.
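To make the division of labor concrete, here is a minimal C sketch of the pre-test setup a TPS might perform from the top tier: load an FPGA image, load a real-time application and start the test. All of the rdi_* function names and file names are hypothetical stand-ins for a vendor's driver calls, not an actual API.

/* Illustrative Tier 3 (Windows PC) setup of a runtime-defined instrument.
 * Every rdi_* name is hypothetical; stubs stand in for vendor driver calls. */
#include <stdio.h>

static int rdi_load_fpga_image(const char *path) { printf("Tier 1: load FPGA image %s\n", path); return 0; }
static int rdi_load_rt_app(const char *path)     { printf("Tier 2: load real-time app %s\n", path); return 0; }
static int rdi_start(void)                       { printf("Tier 3: TPS starts the test\n"); return 0; }

int main(void)
{
    /* Tier 1 (FPGA): low-level, highly standardized bus and protocol logic. */
    if (rdi_load_fpga_image("fc_lowlevel.bit") != 0) return 1;

    /* Tier 2 (real-time processor): upper-level, UUT-specific protocol handling. */
    if (rdi_load_rt_app("fc_ae_asm_app.elf") != 0) return 1;

    /* Tier 3 (Windows PC): the TPS sequences the test and evaluates results. */
    return rdi_start();
}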

There are various models for communications protocols. The three-tier architecture of these Runtime-Defined Instruments, not coincidentally, can be mapped roughly to the Open Systems Interconnection (OSI) model employed by many avionics test departments, where Tier 1 maps to lower-level protocols, Tier 2 to upper-level protocols and Tier 3 to applications. As in the OSI model, as you descend through the stack the operations become more complex and demand higher performance.

Why has this model emerged? Before the advent of Runtime-Defined Instruments, test departments were forced to purchase specialized instruments for each application: for instance, Video over Fibre Channel or Command & Control over Fibre Channel. This equipment was typically provided by firms that were not necessarily dependable suppliers over the decades of life that are typical for large-scale test systems, leaving users in the lurch. Additionally, this approach results in dozens of underutilized, single-purpose bus test instruments taking up valuable space.

Historically, the other alternative was to build homegrown test circuitry, typically located in the Interface Test Adapter (ITA) between the ATE and UUT. Complex ITA circuitry seemed less expensive than purchasing specialized instruments but has proven impractical in the long run due to lack of throughput, repeatability, logistical support, training and documentation.

Fortunately, the emergence of Runtime-Defined Instruments is eliminating the need for these types of one-offs, with flexible architectures that enable one subsystem to fit a variety of testing needs and handle various types of high-speed digital buses. Upper-level and lower-level protocols can be implemented in a variety of ways. The test equipment vendor may directly support standardized buses and protocols, alleviating the need for end-users to perform these detailed tasks. In cases where buses are of a custom nature, an open system provides the end-user full access to system capabilities. Users can be trained to program or tailor the systems on their own, an important feature particularly for organizations that leverage classified custom buses. Test equipment can be configured to individual bus requirements, ranging from physical-layer configuration to low levels of protocol. Providing local processing on each bus instrument permits real-time data analysis and interaction with the UUT. Multiple instruments accommodate the real-world scenario of concurrently operating buses. Streaming data from one bus instrument to another allows for true closed-loop testing that emulates the in-system behavior of the UUT, such as executing a test Operational Flight Program (OFP).

Three key technologies have matured in the past decade, enabling the creation of Runtime-Defined Instruments: PC-based high-level TPS programming tools and infrastructure, real-time processors and software, and the speed and flexibility of FPGAs:

➤ The Windows-based PC offers mature and efficient TPS development tools to do the high-level setup, control and results processing for multiple concurrently operating bus instruments. Test standards such as VISA and IVI provide a well-understood and consistent instrument interface (a minimal example follows this list).

➤ The programming of processors using a Real-Time Operating System (RTOS) has gone from esoteric to mainstream. Development tools are quickly catching up with the tools available on Windows.

➤ FPGAs have become larger and faster, allowing them to take on roles formerly reserved for dedicated hardware. In addition, programming tools are continuously improving, and a broader range of engineers is becoming trained in their use.
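As a concrete example of the consistent instrument interface mentioned in the first bullet, the following minimal sketch uses the standard VISA C API to open a session and query an instrument identity string; the PXI resource address shown is a placeholder for an actual bus instrument.

/* Minimal VISA sketch: open a session and query the instrument identity.
 * Requires a VISA library (visa.h); the resource string is a placeholder. */
#include <stdio.h>
#include <visa.h>

int main(void)
{
    ViSession rm, instr;
    ViUInt32 retCount = 0;
    char idn[256] = {0};

    if (viOpenDefaultRM(&rm) < VI_SUCCESS)
        return 1;

    if (viOpen(rm, "PXI0::15::INSTR", VI_NULL, VI_NULL, &instr) < VI_SUCCESS) {
        viClose(rm);
        return 1;
    }

    viWrite(instr, (ViBuf)"*IDN?\n", 6, &retCount);        /* standard identity query */
    viRead(instr, (ViBuf)idn, sizeof(idn) - 1, &retCount);
    printf("Instrument: %s\n", idn);

    viClose(instr);
    viClose(rm);
    return 0;
}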

The three-tier processing capability allows the TPS developer to make tradeoffs between performance and programming time. For example, a math function is easiest to implement on the upper tier (PC); the same function runs far faster on the lower tier (FPGA), but at a far higher development cost. For many applications the real-time processor in the middle tier provides the best compromise of performance vs. effort. Properly combined and balanced, this three-tier, multiple-instrument approach forms the basis for accommodating each of the concurrent UUT buses, combining them into a unified subsystem and inserting it into both existing and future test stations.
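As an illustration of that tradeoff, the short C routine below performs the kind of per-sample limit check that takes minutes to write on the PC tier. Essentially the same source could be cross-compiled for the middle-tier RTOS to run deterministically in the loop, while an FPGA version would have to be re-expressed in Verilog/VHDL for maximum throughput. The threshold and sample values are invented for the example.

/* Illustrative per-sample limit check: trivial on the PC tier, portable to the
 * real-time tier, and a candidate for VHDL/Verilog when throughput demands it. */
#include <stdio.h>
#include <stdlib.h>

/* Return the number of samples outside +/- limit. */
static unsigned check_limits(const double *samples, size_t n, double limit)
{
    unsigned failures = 0;
    for (size_t i = 0; i < n; i++)
        if (samples[i] > limit || samples[i] < -limit)
            failures++;
    return failures;
}

int main(void)
{
    double captured[] = { 0.1, 2.7, -0.4, 3.2, 0.0 };   /* stand-in response data */
    unsigned bad = check_limits(captured, 5, 3.0);
    printf("%u sample(s) out of limits\n", bad);
    return bad ? EXIT_FAILURE : EXIT_SUCCESS;
}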

Fibre Channel buses and applications are a good example of the use of three-tiered Runtime-Defined Instruments. Fibre Channel is widely deployed in avionics upgrades for older platforms such as F-18, as well as in the latest aircraft such as JSF.

Individual Runtime-Defined Instruments could be used in various roles over Fibre Channel to:

➤ Send and receive video over Fibre Channel using the modern ARINC 818 standard, or earlier platform-specific arrangements such as FC-AV;

➤ Exchange memory between processors over Fibre Channel using the FC-AE-RDMA standard;

➤ Use the MIL-STD-1553 protocol over Fibre Channel using FC-AE-1553; and/or

➤ Use the Anonymous Subscriber Messaging (FC-AE-ASM) protocol to transport command, control, signal processing and sensor and video data used on aircraft such as JSF.

The common denominator of these Fibre Channel applications is that the lower-level protocols remain constant and highly standardized, and are best implemented in the lowest tier (FPGA) of a Runtime-Defined Instrument. Proven FPGA code is available, and the relatively high cost of developing or procuring it can be amortized over the many applications that share it. The upper-level protocols such as ARINC 818, FC-AE-RDMA, FC-AE-1553 and FC-AE-ASM are each implemented using the middle tier, the real-time processor. These protocols are best developed and debugged in C, as opposed to the far more complex FPGA Verilog/VHDL environments. If the test equipment supplier implements both the lower-level and upper-level protocols, the end-user can concentrate on the test program written on the upper PC tier.
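To give a feel for what that middle-tier code might look like, the C sketch below dispatches received Fibre Channel frames to upper-level protocol handlers based on the TYPE field of the standard 24-byte FC-2 frame header. The TYPE codes and handler bodies are placeholders for illustration, not the values assigned by the respective standards.

/* Illustrative Tier 2 dispatch of received Fibre Channel frames to upper-level
 * protocol handlers. The header layout follows the 24-byte FC-2 frame header;
 * the TYPE codes and handlers are placeholders, not the standardized values. */
#include <stdint.h>
#include <stdio.h>

/* FC-2 frame header as six 32-bit words (host byte order for this sketch):
 * R_CTL/D_ID, CS_CTL/S_ID, TYPE/F_CTL, SEQ_ID/DF_CTL/SEQ_CNT, OX_ID/RX_ID, Parameter */
typedef struct { uint32_t w[6]; } fc_header;

/* Placeholder FC-4 TYPE codes, for this sketch only. */
enum { TYPE_ARINC818 = 0x01, TYPE_FC_AE_1553 = 0x02, TYPE_FC_AE_ASM = 0x03 };

static uint8_t fc_type(const fc_header *h)
{
    return (uint8_t)(h->w[2] >> 24);   /* TYPE is the top byte of word 2 */
}

static void dispatch_frame(const fc_header *h, const uint8_t *payload, size_t len)
{
    switch (fc_type(h)) {
    case TYPE_ARINC818:   /* hand off to the video (ARINC 818) handler   */
    case TYPE_FC_AE_1553: /* hand off to the FC-AE-1553 command handler  */
    case TYPE_FC_AE_ASM:  /* hand off to the FC-AE-ASM messaging handler */
        printf("frame TYPE 0x%02X, %zu payload bytes\n", fc_type(h), len);
        break;
    default:
        printf("unsupported TYPE 0x%02X\n", fc_type(h));
    }
}

int main(void)
{
    fc_header h = { { 0, 0, (uint32_t)TYPE_FC_AE_ASM << 24, 0, 0, 0 } };
    uint8_t payload[16] = {0};
    dispatch_frame(&h, payload, sizeof payload);
    return 0;
}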

Runtime-Defined Instruments integrated into a subsystem are becoming the optimal approach for addressing the high-speed bus test requirements present in new aircraft as well as avionics upgrades. This emerging class of test equipment has been found to offer the highest throughput, lowest TPS development cost and lowest lifecycle logistics cost.

Peter Hansen is the instrument product line manager, Assembly Test Division, at Teradyne, based in North Reading, Mass.
