• Chinese Optics Letters
  • Vol. 13, Issue 8, 081201 (2015)
Vinod Karar1,*, Divya Agrawal1, and Smarajit Ghosh2
Author Affiliations
  • 1Optical Devices and Systems, Central Scientific Instruments Organisation, Chandigarh 160030, India
  • 2Electrical and Instrumentation Engineering Department, Thapar University, Patiala, Punjab 147004, India
    DOI: 10.3788/COL201513.081201
    Vinod Karar, Divya Agrawal, Smarajit Ghosh. Intuitive approach towards detection of attention tunneling while using a head-up display[J]. Chinese Optics Letters, 2015, 13(8): 081201

    Abstract

    Head-up display (HUD), a primary cockpit display, helps in optimally dividing a pilot’s attention between aircraft and outside events. A slight mismatch in this balance may cause missed events; this phenomenon is called attention tunneling, and it degrades the situational awareness of the pilot. This work reports an intuitive approach to detecting attention tunneling while using a HUD in aircraft. Texture analysis of a composite HUD camera video provided three distinguishing parameters, viz., contrast, correlation, and homogeneity. These three texture parameters are used as inputs to a fuzzy-inference-based assistive detection system that can distinguish tunneled from nontunneled HUD operation.

    Head-up display (HUD) plays a crucial role in establishing optimal control of an aircraft by providing collimated symbology/imagery superimposed on the outside world, seen on a semi-reflective transparent glass. This reduces the scanning and re-accommodation required to process near- and far-domain information at the same time. The significant benefits of the HUD come at the cost of dividing the pilot’s near- and far-domain attentional resources; this phenomenon is termed attention (or cognitive) capture. Factors that play a role in attention tunneling include the location of symbology reticles, symbology clutter, the Mandelbaum effect, symbology format, size misconception, binocular misalignment, spatial location and disorientation, limited field of view, luminance, identical color and focal distance when symbology is overlaid on an infrared raster image, accommodation and convergence, and so on[1–5].

    HUDs generally display in green, whereas the outside world has hues of various saturations. HUD display characteristics such as feature salience, contrast interference, and contrast or luminance differences between various display elements also play a significant role in determining the response time of the pilot[2,3,6]. The overall effect is on situational awareness (SA), which is the perception of elements in the environment within a volume of time and space, along with their comprehension and the projection of their near-future status[7]. The presence of the HUD and various flight instrument panels increases the complexity of the aircraft cockpit. The constant addition of new gadgets in the cockpit may lead to human-factor problems, including workload and sensory overload.

    To automatically detect attention tunneling and help improve the SA of the pilot, it is important to analyze the real-time situation. This can be achieved by analyzing the HUD charge-coupled device (CCD) camera output video, as the HUD camera captures the exact scene viewed by the pilot during flight. In this work, the HUD camera-captured composite image, comprising the outside world and symbology, has been used for texture analysis and subsequent classification.

    Any image can be characterized by its primitives such as color, shape, and texture. Tuceryan and Jain, in their book chapter on texture analysis, note that “…the ‘definition’ of texture is formulated by different people depending upon the particular application and that there is no generally agreed upon definition…”[8]

    Texture is one of the significant characteristics used to classify regions of interest or objects in an image[9]. The textural features include information about image characteristics such as gray-tone linear dependencies, complexity, nature and number of boundaries existing in the image, and so on[10].

    The composite image captured by a HUD camera can be very complex. Texture possesses important information about the structural arrangement of surfaces and their relationship to the surrounding environment, so its analysis could reveal the discriminating features needed to classify tunneled and nontunneled operation. Image texture can be characterized through descriptors such as autocorrelation, directionality, central moments, coarseness, and so on.

    Texture analysis has been utilized in this work to characterize regions in the images by their texture content. Various texture features can be extracted from co-occurrence probabilities through the gray-level co-occurrence matrix (GLCM). The GLCM, a statistical method of exploring texture, takes the spatial relationship of pixels into consideration. The analysis was performed using the Image Processing Toolbox of MATLAB[10].
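    As a minimal sketch of this step (the file name, offset, and gray-level count are illustrative assumptions, not the authors' exact code), a grayscale HUD frame can be reduced to a GLCM and normalized into co-occurrence probabilities with the Image Processing Toolbox:

        % Minimal GLCM sketch (illustrative parameters)
        I = rgb2gray(imread('hud_frame.png'));   % hypothetical composite HUD frame
        % Count pixel pairs one step to the right over 8 quantized gray levels
        glcm = graycomatrix(I, 'Offset', [0 1], 'NumLevels', 8, 'Symmetric', true);
        % Normalize the counts into co-occurrence probabilities g(i, j)
        p = glcm / sum(glcm(:));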

    The outside world view captured by the HUD camera contains continuous gray levels and may have varying intensities and contrast throughout the scene, whereas the stroke-form symbology has the same luminance throughout. The luminance and contrast patterns of the symbology therefore play an important role in maintaining adequate contrast against varying backgrounds.

    An experimental setup was established to simulate the flying conditions encountered by a pilot during flight. A real-time data logging system was also developed to capture the composite HUD video. This video stream was then used to extract features, which served as inputs for fuzzy inference system (FIS)-based decision making to detect attention tunneling. A FIS-based inference mechanism was used to reap the benefits of both image processing and the intuitive experience of the users, since fuzzy inference efficiently translates supervisor experience into a set of rules for the desired operation.

    The experimental setup (Fig. 1) consists of a HUD system mounted on a cockpit mock-up with a display simulator along with a seat adjustment mechanism, a HUD signal generator, a projector setup coupled with the background simulation computer, a light source, a light diffuser, a photometer, and a television (TV) monitor. A light source along with a light diffuser was used to simulate various ambient lighting conditions.

    Figure 1. Experimental setup.

    Our work was conducted over three ranges of ambient luminance (AL): high AL (5000–30000 cd/m²), middle AL (1000–5000 cd/m²), and low AL (50–500 cd/m²), with the HUD symbology luminance (SL) varied through its 17 levels for each range. The aim was to study tunneling effects under high outside luminance (sunny day), medium outside luminance (normal cloudy day), and low outside luminance (twilight). Participants were required to report two kinds of event changes: first, any noticed changes in designated areas of the HUD display; and second, any noticed changes in the outside scene. The CCD camera output (composite HUD video) was recorded in real time for analysis.

    The programmability feature of the HUD signal simulator enabled generation of various symbology frames. Changes in the symbology field included: (1) horizon line, (2) airspeed, (3) heading scale, (4) mach number, (5) angle of attack, (6) vertical velocity, and (7) instantaneous velocity vector, as shown in Fig. 2(b).

    Figure 2. (a) Outside world view with markers; (b) HUD symbology page.

    During the experimentation, the outside scene was simulated through pre-recorded scenes covering various background conditions. The idea was to obtain maximum variation in background texture, luminance, and contrast levels. Also, different symbols (including an up arrow, down arrow, quad arrow, cylindrical shape, and so on) kept appearing and disappearing in the outside scenery to check the user’s awareness of it, as shown in Fig. 2(a). The luminance control of the HUD enabled variable SL to simulate low, high, and optimum symbol-salience conditions.

    Together, these created the experimental conditions required to obtain HUD image conditions under which participant responses could be evaluated for nontunneled operation (optimum; the pilot able to optimally divide his/her attention between outside events and HUD display events) and tunneled operation (the pilot overly engrossed either in the HUD display or in the outside environment). Thus, the dynamic nature of the outside scene and symbology, together with varying AL and SL, facilitated creation of the wide range of display conditions necessary to understand tunneling aspects and to subsequently apply fuzzy inference to the data obtained.

    Each participant was required to answer questions for the same setting, and two sets of readings were recorded. Questions were asked while the participant was looking through the HUD and focusing on the outside scene as well as the symbology. A total of 16 event changes (nine in the outside scene and seven on the symbology page, as depicted in Fig. 2) were to be identified in a single run. A score of 1 was awarded for every correct identification and 0 for a miss. Scores for HUD event detection and outside event detection were recorded separately. The scores of each participant were averaged over both sets of readings, and these individual averages were then averaged over all participants for each instance, separately for HUD event detection and outside event detection. This final average score was used as the percentage observation value for the corresponding instance (operating variable values).

    The effect of the contrast ratio (CR) on attention tunneling was reported in a study by Karar and Ghosh[11]. The CR is defined as CR = (AL + SL)/AL. The results obtained are shown in the graphs in Fig. 3. The participants’ responses reveal a significant mismatch in the percentage of event detection between outside events and HUD events. These responses were taken as a measure to classify the recorded composite HUD videos into three categories: (1) nontunneled, balanced event detection for both HUD and outside events; (2) tunneled due to low symbol salience, poor HUD event detection; and (3) tunneled due to high symbol salience, poor outside event detection. Accordingly, tunneled HUD images were classified under two headings: (1) tunneling due to low symbol salience, i.e., the HUD SL is so low that background events dominate the pilot’s attention; and (2) tunneling due to high symbol salience, i.e., the HUD SL is so high that the pilot cannot focus on the background scene and his/her attention stays mostly on the HUD symbology.
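    As a worked illustration of this definition, consider the third row of Table 1: with AL = 5000 cd/m² and SL = 7500 cd/m² (the SL value here is inferred from the tabulated CR), CR = (5000 + 7500)/5000 = 2.5, an operating point that was classified as nontunneled.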

    Figure 3. HUD and outside event detection during different AL conditions and varying CR.

    The recorded composite HUD videos were then used to extract and generate an image dataset, which was saved and processed (Fig. 4). The extracted frames were stored under three categories, i.e., (1) nontunneled, (2) tunneled due to low symbol salience, and (3) tunneled due to high symbol salience, on the basis of the participants’ scores.

    Figure 4. (a) Real-time image processing system developed for HUD image capturing and data logging; (b) example image frames extracted.

    The aim here was to extract features from the classified image dataset that could help translate the subjective knowledge obtained from visual inspection about the classification of attention tunneling into an automatic detection scheme.

    The composite image extracted from a composite HUD video can be very complex. Image frames extracted from the captured composite HUD videos were used as input images. Each image is converted to gray scale and the GLCM is calculated; the GLCM properties are then extracted using the graycoprops function. The GLCM is a second-order texture measure, and different GLCM parameters are related to specific first-order statistical parameters. Associating a textural meaning with each of these parameters is critical. The GLCM has dimensions equal to the number of gray levels and stores the co-occurrence probabilities g(i, j). To determine texture features, selected statistics are applied to each GLCM by iterating through the entire matrix[9,10]. The GLCM features contrast, homogeneity, energy, and correlation, which are the primary texture features describing an image, along with the standard deviation and entropy of the image, have been used to classify HUD images as tunneled or nontunneled.
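    A minimal MATLAB sketch of this feature-extraction step (the file name and GLCM settings are illustrative assumptions, not the authors' exact code):

        % Extract the six texture features used for HUD image classification
        I    = rgb2gray(imread('hud_frame.png'));   % hypothetical extracted frame
        glcm = graycomatrix(I, 'Symmetric', true);  % default offset [0 1]
        s    = graycoprops(glcm, {'Contrast', 'Correlation', 'Energy', 'Homogeneity'});
        features = [s.Contrast, s.Correlation, s.Energy, s.Homogeneity, ...
                    std2(I), entropy(I)];   % plus image standard deviation and entropy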

    An algorithm was developed on the MATLAB platform to analyze the texture features of all three sets of HUD camera-captured images. A MATLAB run calculated all six parameters for each image dataset, in an attempt to evolve a pattern that could objectively classify HUD image conditions as tunneled or nontunneled. The obtained parameter values are shown in Figs. 5(a)–5(f), from which it can be inferred that the contrast, correlation, and homogeneity features can be used to discriminate between tunneled and nontunneled operation.

    Figure 5. GLCM parameters: (a) contrast; (b) correlation; (c) energy; (d) homogeneity. Statistical parameters: (e) standard deviation; (f) entropy. Series 1, trend for HUD images with low symbol salience; Series 2, nontunneled operation; Series 3, trend for HUD images with high symbol salience.

    The blue lines (Series 1) in Figs. 5(a), 5(b), and 5(d) correspond to the HUD image dataset classified as tunneled due to low symbology salience; here, visual examination was supported by low contrast, high homogeneity, and high correlation values. On the other hand, the green trends (Series 3) in Figs. 5(a), 5(b), and 5(d) were obtained for the HUD image dataset classified as tunneled due to high symbology salience; this time, visual examination was supported by high contrast, low homogeneity, and low correlation values. Middle-range values of these parameters, shown in red (Series 2) in Figs. 5(a), 5(b), and 5(d), correspond to the nontunneled image dataset, indicating appropriately lit symbology, which essentially results in appropriately distributed attention.

    The other three parameters, viz., energy, entropy, and standard deviation, do not reveal any meaningful information regarding attention capture or symbology salience. Thus, the luminance contrast between a pixel and its neighbor over the entire image, the gray-tone differences in pair elements, and the gray-tone linear dependencies in a HUD image, as indicated by the three retained parameters, could answer the question of whether the SL needs to be lowered or raised to mitigate tunneling and optimize attention capture.

    Analysis of Fig. 5 indicates that the contrast, homogeneity, and correlation parameters calculated for all three sets of composite HUD images could be used to classify nontunneled and tunneled HUD image conditions. These parameters were therefore chosen as inputs for fuzzy decision making to discriminate between tunneling and nontunneling cases.

    Each input was divided into three membership functions (MFs), i.e., low, medium, and high. For contrast (0.02–0.23) and correlation (0.89–0.99), trapezoidal MFs were selected, while for homogeneity (0.94–0.99) triangular MFs were selected (Fig. 6). The MF shape was chosen on the basis of each parameter’s range: since homogeneity varies over a narrower span, triangular MFs were used for it, and trapezoidal MFs for contrast and correlation.

    Figure 6. Input MFs.
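    A sketch of how these inputs and MFs could be set up with the Fuzzy Logic Toolbox of that era (the MF breakpoints below are illustrative values placed inside the reported ranges, not the authors' actual MFs):

        % Sugeno FIS with three texture inputs; breakpoints are assumed
        fis = newfis('tunnelDetect', 'sugeno');
        fis = addvar(fis, 'input', 'contrast', [0.02 0.23]);
        fis = addmf(fis, 'input', 1, 'low',    'trapmf', [0.02 0.02 0.06 0.10]);
        fis = addmf(fis, 'input', 1, 'medium', 'trapmf', [0.06 0.10 0.15 0.19]);
        fis = addmf(fis, 'input', 1, 'high',   'trapmf', [0.15 0.19 0.23 0.23]);
        fis = addvar(fis, 'input', 'correlation', [0.89 0.99]);
        fis = addmf(fis, 'input', 2, 'low',    'trapmf', [0.89 0.89 0.91 0.93]);
        fis = addmf(fis, 'input', 2, 'medium', 'trapmf', [0.91 0.93 0.95 0.97]);
        fis = addmf(fis, 'input', 2, 'high',   'trapmf', [0.95 0.97 0.99 0.99]);
        fis = addvar(fis, 'input', 'homogeneity', [0.94 0.99]);
        fis = addmf(fis, 'input', 3, 'low',    'trimf', [0.94 0.94 0.965]);
        fis = addmf(fis, 'input', 3, 'medium', 'trimf', [0.94 0.965 0.99]);
        fis = addmf(fis, 'input', 3, 'high',   'trimf', [0.965 0.99 0.99]);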

    Here our aim was to distinguish between three cases: (1) tunneling due to low symbol salience, (2) nontunneled operation, and (3) tunneling due to high symbol salience. A Sugeno-type fuzzy model was chosen because its output is a linear or constant function; in our case the output takes a constant value representing: (a) ‘0’ for tunneling due to low symbol salience, (b) ‘0.5’ for nontunneled operation, and (c) ‘1’ for tunneling due to high symbol salience.

    For the FIS to make correct decisions, a total of 27 (3³) rules were framed, providing the intuitive ability required for decision making (Fig. 7). These rules were made after critically examining HUD operation images against the contrast, correlation, and homogeneity values of the respective images, and in concurrence with participants involved in the testing and evaluation of HUDs.

    Figure 7. Rules of the FIS for detection of attention tunneling.
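    Continuing the sketch above, the constant Sugeno outputs and a few representative rules (only three of the 27 rules in Fig. 7 are shown; their pattern follows the trends described for Fig. 5, not the authors' exact rule base):

        % Constant Sugeno outputs: 0 = tunneled (low salience),
        % 0.5 = nontunneled, 1 = tunneled (high salience)
        fis = addvar(fis, 'output', 'tunneling', [0 1]);
        fis = addmf(fis, 'output', 1, 'T-LSL', 'constant', 0);
        fis = addmf(fis, 'output', 1, 'NT',    'constant', 0.5);
        fis = addmf(fis, 'output', 1, 'T-HSL', 'constant', 1);
        % Rule rows: [contrast correlation homogeneity output weight AND]
        rules = [1 3 3 1 1 1;    % low contrast, high corr./homog. -> T-LSL
                 2 2 2 2 1 1;    % mid-range values -> nontunneled
                 3 1 1 3 1 1];   % high contrast, low corr./homog. -> T-HSL
        fis = addrule(fis, rules);
        y = evalfis([0.12 0.95 0.97], fis);   % crisp tunneling score in [0, 1]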

    Finally, a graphical user interface (GUI) incorporating the proposed FIS was built on the MATLAB platform. It takes a continuous composite HUD video as input, generates alerts for both types of tunneling, and displays a normal-operation message when no tunneling is taking place (Fig. 8).

    Figure 8. Working of the FIS-based attention tunneling detection system.
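    A sketch of the per-frame decision loop behind such a GUI (the video file name, the 0.25/0.75 alert thresholds, and reuse of the fis variable from the sketches above are assumptions):

        % Generate tunneling alerts frame by frame from a composite HUD video
        v = VideoReader('hud_composite.avi');   % hypothetical recording
        for k = 1:v.NumberOfFrames
            I = rgb2gray(read(v, k));
            s = graycoprops(graycomatrix(I, 'Symmetric', true), ...
                            {'Contrast', 'Correlation', 'Homogeneity'});
            y = evalfis([s.Contrast, s.Correlation, s.Homogeneity], fis);
            if y < 0.25          % assumed threshold around the '0' output
                disp('Alert: tunneling due to low symbol salience');
            elseif y > 0.75      % assumed threshold around the '1' output
                disp('Alert: tunneling due to high symbol salience');
            else
                disp('Normal (nontunneled) operation');
            end
        end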

    Trials were conducted to test the developed GUI. Participants were asked to observe the HUD symbology while a record of the alerts generated by the FIS was maintained simultaneously. Participants were scored on correct identification of events occurring in both the foreground and the background. These identification scores revealed when participants became more focused on the HUD symbology and less on the outside scene, or vice versa, or showed nontunneled performance. The subjective results were found to agree with the alerts generated by the developed FIS system for attention tunneling detection. The scores obtained by the participants and the corresponding alerts generated by the detection system are tabulated in Table 1.

    AL (cd/m²)   CR      HUD Event Detection (%)   Outside Event Detection (%)   Alert Generated
    30,000       1.016   54                        98                            T-LSL
    20,000       1.275   66                        96                            T-LSL
    5,000        2.5     80                        94                            NT
    1,000        1.1     56                        98                            T-LSL
    500          2       82                        95                            NT
    500          10      98                        75                            T-HSL
    100          1.1     56                        98                            T-LSL
    50           2.5     87                        94                            NT
    50           13.8    98                        70                            T-HSL

    T-LSL, tunneled due to low symbol salience; NT, nontunneled; T-HSL, tunneled due to high symbol salience.

    Table 1. Sample of Participant Scores and Corresponding Alerts Generated by Our FIS-Based Detection System

    In conclusion, we report an intuitive approach to detecting attention tunneling while using a HUD in an aircraft. Texture features of HUD images are used by a FIS-based decision-making system to identify tunneled or nontunneled HUD operation. To date, attention tunneling has been detected only through subjective approaches, and this work opens new possibilities for automation in this field. Based on feature values obtained by analysis of the HUD CCD camera video, the FIS-based system generates an alert with respect to attention tunneling. The system is of an assistive nature: it makes the pilot aware of the possibility of encountering tunneling. Real-time implementation of this technique will also make using a HUD simpler, further enhancing the pilot’s SA.

    References

    [1] C. D. Wickens. Attentional issues in head-up displays. Engineering Psychology and Cognitive Ergonomics, 1, 3 (1997).

    [2] J. Crawford, A. Neal. Int. J. Aviat. Psychol., 16, 1 (2006).

    [3] D. C. Foyle, R. S. McCann, B. D. Sanford, M. F. J. Schwirzke. 37th Meeting of the Human Factors and Ergonomics Society (1993).

    [4] D. Cheng, Q. Wang, Y. Wang, G. Jin. Chin. Opt. Lett., 11, 031201 (2013).

    [5] C. Xu, D. Cheng, H. Peng, W. Song, Y. Wang. Chin. Opt. Lett., 12, 060011 (2014).

    [6] J. He. J. Ergon., 3, 1000e120 (2013).

    [7] M. R. Endsley. Human Factors, 37, 65 (1995).

    [8] M. Tuceryan, A. K. Jain. Texture analysis. The Handbook of Pattern Recognition and Computer Vision, C. H. Chen, L. F. Pau, P. S. P. Wang, eds., 207 (1998).

    [9] R. C. Gonzalez. Digital Image Processing (2009).

    [10] MathWorks. Image Processing Toolbox User’s Guide (R2013b) (2013).

    [11] V. Karar, S. Ghosh. Chin. Opt. Lett., 12, 013301 (2014).
