Night Vision: Beyond Image Intensification

By Charlotte Adams | January 1, 2005

Today’s head-mounted night vision systems allow pilots to operate safely at night, avoid obstacles and fly close to the surface by amplifying light from external sources such as the moon and stars. Rarely is infrared (IR) imagery piped to a night vision system from an external sensor. But tomorrow’s night vision systems promise more: a wraparound IR scene, synthetic terrain, advanced guidance aids, information fusion and weapons cueing.

Most pilot night vision systems use analog image intensification (I2) technology to amplify external light. The sensors, or I2 "tubes," attached to the front of the helmet resemble a thin pair of binoculars. The pilot looks through these "goggles" and focuses his eyes on a screen where the intensified image is formed. Special optics magnify the image about tenfold, allowing the pilot to see at night with approximately 20/40 vision. As many as 40,000 of these aviator’s night vision imaging system (ANVIS) goggles have been fielded across the U.S. aviation fleet; ITT Industries has supplied the lion’s share.

With night vision goggles (NVGs) pilots can see terrain and objects close to the ground that can’t be perceived by the naked eye. At higher altitudes they also can see flares at up to 100 miles (160 km), aircraft lights at up to 50 miles (80 km), and vehicle headlights at 20 to 50 miles (32 to 80 km). But the field of view is limited, and range and quality decrease in overcast conditions. The goggles also reduce visual acuity, inhibit depth perception and lack color discrimination. Weight and forward head pressure are also issues.

Panoramic Goggles

The U.S. Air Force Aeronautical Systems Center (ASC) has tackled the field of view problem with an NVG boasting a 95-degree-horizontal-by-38-degree-vertical viewing range, about double the standard 40-degree-circular view available today. Insight Technology, of Londonderry, N.H., will supply 400 of these panoramic night vision goggles (PNVGs) under an early production contract, and basic units will be ready early next year. The Air Combat, Materiel and Special Operations commands will get the first systems for A-10 tank busters, C-17 airlifters and special ops C-130s, says Lt. Col. Terrence Leary, commander of the Combat Systems Squadron. Versions have been flown more than 500 hours on F-16s and F-15C/E fighters, C-130 and C-5 transports, AC- and MC-130 special ops aircraft, KC-10 and KC-135 tankers, and MH-53 special ops helicopters, in addition to C-17s and A-10s. HH-60 pilots also will use the new equipment.

A wide field of view enables the pilot to scan the outside world by simple eye movements, rather than head movements, increasing safety. And the pilot can pick up altitude or velocity cues from outside objects via peripheral vision while going into a landing or a hover.

The PNVG widens the pilot’s side view by doubling the number of image intensification tubes found on standard goggles from two to four. To save weight, the system uses tubes measuring 0.63 inch (16 mm) vs. the typical 0.71 inch (18 mm) in diameter. (ITT currently is the only supplier of these tubes.) The projecting elements are a little shorter than on today’s ANVIS-9 goggle. This improves the center of gravity and lessens the strain on the head.

But what’s really new is the ability to put the tubes close together and optically generate something pilots can use, says Martin Andries, PNVG lead engineer. There are four sets of high-resolution optics. But the lenses associated with the inner and outer channels are "joined together" to allow the I2 images to be "blended together." The resolution of all four channels is identical.

The program also has developed a prototype ejection-safe PNVG, which will be tested next year. The "safe-separation" goggle is intended to automatically release from the head as a result of the acceleration imparted by the ejection process. Current equipment, by contrast, is removed manually, if time permits.

A second growth area identified by the PNVG program is video or map display. A miniature display could be placed in the "right inboard channel"–in front of the right eye–to present video or maps of a target area transmitted from a surveillance or command and control aircraft. And a camera could be added in the "left inboard channel" to record the fight scene. The recording would be used in post-mission debrief and analysis. ASC has developed a prototype but has shelved it until a customer is identified.

The display feature, if carried forward, would go beyond the current ANVIS head-up display (HUD) offered by Elbit Systems, which allows pilots to see information such as the artificial horizon, heading, altitude, velocity, engine data and aircraft warnings together with the I2 night scene.

Night Vision Cueing

The Air Force’s PNVG program also is looking for night vision cueing–the pilot’s ability to aim sensors and weapons at night by pointing his head, rather than turning the aircraft, toward the target. This feature is absent in the currently fielded "look and shoot" helmet, called the joint helmet-mounted cueing system (JHMCS). Employed on F-15s, F-16s and F/A-18s, JHMCS permits cueing in visual conditions. The monocular display system provides a 20-degree-circular field of view and can’t present video. Some F-15 pilots use JHMCS rather than goggles at night, but that’s so they can see the airspeed and altitude information in front of them, while using their own eyes to pick up glints and glares, Leary says. But this won’t do for an A-10 tank killer flying close air support on a dark night.

In the Air Force PNVG/JHMCS integration, the PNVG camera will record what the pilot sees in the left inner channel and provide that to JHMCS for recording. Cueing and display data will be captured in the right inboard channel. The Air Force is testing development prototypes now but has not chosen a final production design.

The Air Force and Navy are extremely interested in night vision cueing and display and want to add it to JHMCS. But the services have followed different paths–the Air Force with Insight Technology and the Navy with Vision Systems International (VSI). (VSI, which produces the JHMCS, is a joint venture between Rockwell Collins and Elbit’s subsidiary, EFW Inc.)

The Navy has announced the advanced development phase of its Night Vision Cueing and Display (NVCD) program for JHMCS. VSI, the contractor for the program’s first two phases, has adapted a goggle developed by Kollsman–an Elbit subsidiary–to the JHMCS system. Known as "quadEye," the goggle achieves a 100-by-40-degree field of view, projecting the display to the pilot’s dominant eye. VSI also has developed an interface to JHMCS and integrated a complete NVCD system. In the current competition for the third phase of NVCD, the Navy is understood to seek a mature, testable prototype. A contract award was expected in late 2004 or early 2005.

The Air Force, meanwhile, is expected to mount a separate competition and award a contract this summer to obtain representative test units. Leary says the two services are working together on potential future strategies, but how closely their priorities and timelines mesh is not clear.


Another step for night vision goggles will be direct digitization of the I2 scene. Digitization is important because these images can be enhanced and transmitted electronically. Today image-intensified output is digitized by bonding the tube to a solid state camera, or charge coupled device (CCD), explains Larry Curfiss, ITT Night Vision’s vice president and director of business development. More direct digitization of the I2 signal is expected to reduce the length of the image intensification tubes and improve the helmet’s balance. ITT is developing a prototype sensor for the U.S. Army’s Night Vision Lab under the Electronic Image Intensification (EI2) program.

ITT is embedding a chip-based, complementary metal oxide semiconductor (CMOS) detector in the I2 tube–an approach that promises to reduce power consumption, compared with a CCD. The CMOS detector, which digitizes the image intensification output, will replace the screen that is used to convert the electronic I2 signal into light. This would shorten the tubes by up to 0.39 inch (10 mm), or almost half their 1-inch (25.4-mm) length.

While the estimated weight savings would be only 0.7 ounce (20 grams), the reduced forward projection of the helmet would increase ejection safety and equalize weight distribution, exerting less forward pressure on the pilot’s head. The tube sensor and camera assembly also can be moved to the side of the helmet, further reducing the equipment’s forward projection. ITT expects to deliver a prototype digital I2 sensor to the Night Vision Lab next summer or fall.

Sensor Fusion

ITT also is pursuing sensor fusion. The company completed a development program with the Army. Now, teamed with Raytheon, it is bidding on the Army’s Enhanced Night Vision Goggle (ENVG) program, which will fuse long-wave, 8-to-12-micron infrared and I2 images. Although the program focuses on the dismounted soldier, observers in the back of helicopters probably will use the monocular device, with the door slid open, since window glass absorbs the long-wave IR energy. The ENVG program, which encompasses as many as 44,000 units over a four-year period, is worth as much as $400 million. A contract is expected in the first quarter of 2005.

Pixel-by-pixel fusion won’t be attempted at first, Curfiss says. ITT envisions an "optical overlay" where IR and I2 images are coupled together in a one-for-one registration. Infrared will supplement I2 output in truly lightless conditions, and both sensors can complement each other in spotting targets in semi-obscured, wooded terrain.

The company also is developing a digital enhanced NVG under the Army’s Digital Enhanced Night Vision Goggle program. ITT plans to deliver a prototype early next year that fuses I2 and thermal signals at the pixel level. The challenge will be to reduce the power and weight required to perform this task, Curfiss says. "To do pixel-by-pixel fusion takes a lot of software–you’re beginning literally to put computers into these goggles."
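Curfiss’s point about the software burden can be illustrated with a toy pixel-level fusion routine: a simple weighted blend of two co-registered grayscale frames. This is only a sketch under strong assumptions (perfect one-for-one registration, a fixed blend weight); the function name and values are hypothetical, not ITT’s algorithm.

```python
import numpy as np

def fuse_pixelwise(i2, ir, w_ir=0.5):
    """Blend co-registered I2 and thermal frames pixel by pixel.

    Assumes both frames are 8-bit grayscale of identical shape and
    already registered one-for-one; real goggles must first warp the
    IR frame onto the I2 frame and choose the weight adaptively.
    """
    if i2.shape != ir.shape:
        raise ValueError("frames must be co-registered to the same shape")
    blended = (1.0 - w_ir) * i2.astype(np.float32) + w_ir * ir.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Tiny 2x2 example: the I2 frame is uniformly dark, but the thermal
# frame sees a warm target in the top-left pixel.
i2 = np.array([[10, 10], [10, 10]], dtype=np.uint8)
ir = np.array([[200, 10], [10, 10]], dtype=np.uint8)
fused = fuse_pixelwise(i2, ir, w_ir=0.5)
print(fused)  # the warm target now stands out at value 105
```

Even this trivial blend touches every pixel of every frame, which hints at why power and weight are the hard part in a head-mounted package.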

Synthetic Vision Goggles

Two projects in Canada suggest a path toward more intuitive helicopter night vision systems–with the addition of synthetic terrain and advanced flight guidance displays.

The National Research Council (NRC) several years ago developed an experimental helicopter vision system for potential Canadian Forces search and rescue (SAR) applications. The enhanced and synthetic vision system (ESVS) projected synthetic images onto wide-field-of-view goggles. Smaller images based on infrared or visible-light cameras could be inserted into the synthetic scene. Cameras mounted on top of the test helicopter were slaved to the movement of the helmet, recalls Dave McKay, program manager for human factors engineering with CMC Electronics. CMC provided image fusion expertise, and CAE developed the synthetic imagery.

The scenario was an aircraft down in the Gatineau Hills outside of Ottawa. The ESVS-equipped helicopter pilot would conduct a search pattern, identify the crash location, and fly to it. The experiment was instructive. "We learned that the concept of synthetic vision and sensor viewing on a head-mounted system has the potential to be effective," McKay says. But better resolution and lower latencies were considered necessary, going forward.

The Canadians, teamed with the UK Ministry of Defence (MoD), now are applying some ESVS concepts to more off-the-shelf gear. A UK goggle equivalent to the U.S. ANVIS-9 was modified with daylight filters, a display module and a miniature camera to record what the pilot saw.

In addition to navigation data from the flight instruments, the system displays see-through, "wireframe" terrain imagery and pathway-in-the-sky guidance cues, as well as hazard markers, landing/hover markers, waypoint markers, and a "hover arrow" symbol developed by Qinetiq. The symbology helped to give an impression of depth to the night scene.
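Conformal symbology like these hazard and waypoint markers works by projecting each real-world position into display coordinates so the symbol appears to overlay the actual terrain. Below is a much-simplified pinhole-projection sketch; the function, frame convention and focal length are hypothetical illustrations, not the Qinetiq implementation, and head tracking and aircraft attitude are ignored.

```python
import numpy as np

def project_conformal(point_body, f_px=800.0):
    """Project a point given in a body frame (metres ahead, right and
    down of the viewpoint) onto the display with a pinhole camera
    model. f_px is a hypothetical display focal length in pixels."""
    ahead, right, down = point_body
    if ahead <= 0:
        raise ValueError("point is behind the viewpoint")
    x = f_px * right / ahead  # pixels right of display centre
    y = f_px * down / ahead   # pixels below display centre
    return x, y

# A waypoint 400 m ahead, 50 m right and 20 m below the viewpoint
# lands 100 px right and 40 px below the display centre.
print(project_conformal(np.array([400.0, 50.0, 20.0])))
```

As the helicopter closes on a waypoint, the "ahead" distance shrinks and the symbol drifts outward on the display, which is part of what gives the night scene its impression of depth.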

A trial on a variable-stability Bell 205 helicopter was conducted last year to assess the symbology in simulated, degraded conditions during low-level maneuvers. Flights, however, were restricted to an accurately surveyed field. This flight test, according to a technical paper, showed that a careful systems integration approach can achieve low processing latencies and usable conformal symbology. The pathway in the sky guidance, for example, improved the pilot’s control of track and height.

A second set of flight tests took place recently in unsurveyed, hilly terrain. Although results are not yet available, pilots were able to fly at about 200 feet above ground level (AGL)–below the tops of the hills. The goal is to get down to about 50 feet AGL, says Sion Jennings, an NRC research officer. Officials now plan to integrate mission-planning software with conformal symbology by October 2005. Beyond that, the object is to improve software reliability and develop a flightworthy package.

Joint Strike Fighter

The Joint Strike Fighter (JSF) helmet-mounted display (HMD) uses embedded low-light level sensors for night vision. The binocular system also will display wraparound infrared imagery from the JSF’s distributed aperture system (DAS). VSI and BAE Systems have developed slightly different designs. But both companies employ a double-visor system: a clear, optical visor for imaging and cueing, plus a retractable, tinted visor for daytime use.

The helmet will be the primary display on the aircraft, says Marty Gunther, VSI’s director of business development. The HMD is a virtual HUD. When pilots look forward–where a HUD normally would be–they will see the artificial horizon, pitch ladder, velocity vector, airspeed and altitude data. If desired, the VSI helmet could even display a 360-degree horizon with DAS imagery, Gunther says. It provides a field of view measuring 50 degrees horizontal by 30 degrees vertical, compared with BAE’s 40-by-30-degree system.

Funded by MoD, BAE’s system is based on its Striker family of HMDs. The basic helmet is flying in the Gripen and recently was flown in the Typhoon.

The BAE helmet uses two independent night vision cameras that digitize the output of traditional I2 tubes. These cameras are mounted on each side of the HMD. (VSI’s day/night camera, by contrast, is mounted above the forehead.) The use of two independent I2 cameras enables the brain to process the random electronic "noise" better, so that the pilot perceives much better resolution, asserts George Lim, business development manager for helmet systems.
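Lim’s claim can be illustrated numerically: averaging two frames whose noise is statistically independent cuts the noise standard deviation by roughly √2, improving the perceived signal-to-noise ratio by about 40 percent. A minimal sketch with a hypothetical flat scene and Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical static scene, imaged by two cameras whose electronic
# noise is independent of each other.
scene = np.full((64, 64), 100.0)
cam_a = scene + rng.normal(0.0, 10.0, scene.shape)
cam_b = scene + rng.normal(0.0, 10.0, scene.shape)

# Fusing by simple averaging: independent noise partially cancels,
# shrinking its standard deviation by about sqrt(2).
fused = (cam_a + cam_b) / 2.0

noise_single = np.std(cam_a - scene)
noise_fused = np.std(fused - scene)
print(noise_single / noise_fused)  # close to 1.41
```

A similar averaging is thought to happen perceptually when each eye receives its own independently noisy image, which is the effect Lim describes.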

Experts argue about camera placement. The BAE approach, which mounts the two cameras on the same plane as the eyes, is said to be easier on the pilot, reducing the disorientation and nausea that can occur when imagery comes from a vantage point the brain knows is on a different plane from the eyes. But others contend that placing the cameras on the sides of the helmet makes it harder to fuse their output into a single image and produces a confusing horizontal offset. BAE’s design has more than 100 hours of human factors flight trials under its belt. The company delivered a lab model to Lockheed in August 2004.

Networked Night Vision

Engineers at the Human Effectiveness Directorate of the U.S. Air Force Research Lab, meanwhile, are aiming for a network-centric night vision system using solid state sensors. The research is at too early a stage to be designated for fixed- or rotary-wing use. Much will depend on developments in sensors and networking beyond the project’s control. "We see a device that can give the warfighter 24-hour-a-day vision enhancement," says Peter Marasco, an adviser in the Battlespace Visualization Branch.

The key idea is "information fusion." Rather than limiting the pilot to a single sensor, this digital vision enhancement device aims to draw information from other onboard sensors as well as those on unmanned air vehicles, satellites and aircraft outside of the pilot’s immediate cluster.

Researchers are eyeing two-dimensional, 1-to-2-micron staring sensors and visible-light sensors. The windscreen does not absorb IR energy at those wavelengths, and these sensors can provide higher resolution than mid- and long-wave detectors, asserts Marasco.

The goal is a body-mounted system, including a head-mounted sensor and display and a wireless link to aircraft systems. But it will be another 18 months before technologies are selected and two years before a demonstrator is built.

Night Vision Sensors

Core infrared (IR) sensors for thermal imaging applications are progressing, too. Elbit Systems’ subsidiary, El-Op Electro-Optics Industries, for example, has developed an IR sensor in the 8-to-12-micron range that can be used on the company’s Compass IV multisensor stabilized electro-optical payload.

Compass IV is a candidate for the UK Watchkeeper unmanned air vehicle (UAV) system. A version of the system was used on Elbit’s Hermes 450 UAV in a U.S. border surveillance program in Arizona. Compass IV can include a second-generation forward-looking infrared (FLIR), laser rangefinder/designator, target illuminator and laser spot tracker. Sensor output can be slaved to a helmet.

The company’s new 8-to-12-micron FLIR includes "continuous zoom," says Gabby Sarusi, El-Op’s chief scientist. This allows pilots to switch automatically or manually between a narrow and wide field of view, something that’s available in the visible realm but not in the 8-to-12-micron regime. El-Op is finalizing a similar system in the 3-to-5-micron range.
