Uncertainty Theories and Multisensor Data Fusion
About this ebook

Addressing recent challenges and developments in this growing field, Uncertainty Theories and Multisensor Data Fusion first discusses basic questions such as: Why and when is multiple sensor fusion necessary? How can the available measurements be characterized in such a case? What is the purpose and the specificity of information fusion processing in multiple sensor systems? Considering the different uncertainty formalisms, a set of coherent operators corresponding to the different steps of a complete fusion process is then developed, in order to meet the requirements identified in the first part of the book.
Language: English
Publisher: Wiley
Release date: July 9, 2014
ISBN: 9781118578674
    Uncertainty Theories and Multisensor Data Fusion - Alain Appriou

    Introduction

    Combining multiple sensors in order to better grasp a tricky, or even critical, situation is an innate human reflex. Indeed, humans became aware, very early on, of the need to combine several of their senses so as to acquire a better understanding of their surroundings when major issues are at stake. On the basis of this need, we have naturally sought to equip ourselves with various kinds of artificial sensors to enhance our perceptive faculties. Even today, we continue to regularly exploit new technologies, which allow us to observe more things, to see further, more accurately and more surely, even in the most adverse conditions. The resulting quantity and variety of information produced are beyond our capacity for interpretation. Proper use of a set of sensor equipment, therefore, is very closely linked to the performance of the processing necessary to draw the expected benefit from the available data – particularly in terms of data fusion and construction of information that serves the operational needs.

    The development of these processing capabilities, however, must take into account a number of aspects of the changing context in which they are employed. The first relates to technological advances in the sensors used, and the resulting change in the nature of the data to be exploited. The performance of these sensors is continuously improving – mainly in terms of the spatial precision of scanning, the acuity of reconstruction of the physical values at play, and reliability. In parallel, the range of contexts in which sensors can be used is growing, particularly because the sensors themselves are being miniaturized, becoming compatible with onboard systems and increasingly robust in difficult environments, and are therefore able to acquire different types of information. Finally, new observation techniques are constantly emerging, typically enabling us to analyze a wider variety of physical characteristics (wavelengths used, waveforms exploited, etc.), with increasingly agile acquisition capabilities and spatial deployment in more extensive networks.

    Another major trend which needs to be taken into account relates to the integration of an increasing number of sensors into ever-more-complex systems, where a wide variety of independent components must interact intelligently. Such is the case with the systems of systems developed for defense purposes – particularly in the context of network-centric warfare, the aim of which is to network all means of observation, command and intervention. Another example is security, where the concept of homeland security has gradually evolved into that of global security, which involves the pooling, regardless of geographical borders, of means of surveillance, information, decision support and security. The deployment of all these systems requires a wide range of very specific information to be gleaned from a set of distinct, and isolated, observations, and then transmitted in an appropriate form to its point of use.

    Autonomous smart systems also represent an area of major progression. Whether in terms of robotics in general, or more specifically the deployment of autonomous land, air or sea craft, a system's decision-making autonomy relies on critical observation and interpretation of its environment. The functions that the system has to fulfill can be very diverse: navigation, observation, reconnaissance, planning, intervention, etc. This necessitates the development of a high-level perceptive capability, able to provide a circumstantial understanding of the very varied situations that may be encountered – often on the basis of insufficient observable data.

    Decision support is another area where the variety and complexity of problems require constant advances to be made. Whether in terms of medical diagnosis, technical expertise, intelligence, security operational support or surveillance, the objective is to reconstruct poorly defined cognitive data using multiple observations which are generally difficult to interpret.

    What all of these fields of application share is that they require collaborative processing of a large number of factors, drawn from a vast quantity of data that are particularly disparate in both nature and quality, to deduce higher-level information whose connection to the available data is often imperfectly defined. It is therefore useful to design processing techniques capable of adapting to the imperfections of the input data on the basis of the objectives at hand. These imperfections are very diverse in type, as each observation has its own strong points and weak points, depending on the use we make of it. Weak points, for example, include uncertainty about a poorly defined event, imprecision with regard to a value that is difficult to estimate, incompleteness in terms of partially unobservable phenomena, or lack of reliability due to the conditions of use.

    The quality of a particular data processing technique is therefore directly linked to its ability to handle imperfections in the information at all levels in order to make fuller and better use of the truly meaningful content, without being confused by imperfect knowledge, whatever form it may take. The solution to this requirement will thus inevitably originate in a set of theories commonly referred to as uncertainty theories.

    The oldest of these theories, and that which is most widely used in commercial systems today, is the well-known probability theory. Devoted to handling uncertainty, i.e. estimating the likelihood of an event occurring, it is relatively simple to use, and lends itself well to the processing of signals and images delivered by sensors. Yet as we will see, given the complexity of the situations mentioned above, its limitations soon become apparent – particularly when it becomes difficult to create a reliable probabilistic model.

    Another theory is fuzzy set theory, established by Zadeh in 1965 in his seminal article of the same name [ZAD 65]. Complementing the previous theory fairly well, this relatively easy-to-understand theory aims to deal with the imprecision of the values used, i.e. situations where only an approximate knowledge of these values is available. This technique, which can be used to develop reasoning as well as robust control for systems that are highly nonlinear or difficult to identify, quickly became very successful because of its ease of use, and the fact that it immediately and naturally takes account of the available data.

    Zadeh used this as the basis for the construction of his possibility theory, which is specifically devoted to handling uncertainty about events. More flexible than probability theory, and perfectly compatible with the handling of imprecision for which fuzzy set theory is designed, this approach enables the user to conduct complex reasoning processes by adapting to whatever knowledge is available.
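
    To make the link between the two formalisms concrete, here is a minimal sketch (my own illustration, not an example from the book; the membership function and the threshold are arbitrary assumptions) in which a fuzzy description of a value, "around 20", is read as a possibility distribution and used to evaluate the possibility and necessity of an event:

```python
# Minimal sketch (illustrative only): a fuzzy description of a value, "around 20",
# read as a possibility distribution, from which the possibility and necessity of
# an event are computed.
def around_20(x):
    """Triangular possibility degree for 'approximately 20' (width 5, assumed)."""
    return max(0.0, 1.0 - abs(x - 20.0) / 5.0)

xs = [10.0 + 0.1 * k for k in range(251)]   # discretized domain [10, 35]

# Event A: "the value exceeds 22".
poss_A = max(around_20(x) for x in xs if x >= 22.0)      # Pi(A) = sup of the distribution over A
poss_not_A = max(around_20(x) for x in xs if x < 22.0)   # Pi(complement of A)
nec_A = 1.0 - poss_not_A                                  # N(A) = 1 - Pi(not A)

print(f"possibility of 'value >= 22': {poss_A:.2f}")   # ~0.60
print(f"necessity of 'value >= 22'  : {nec_A:.2f}")    # 0.00: the event is possible but not at all certain
```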

    In a very similar train of thought, another theory emerged, in parallel to those mentioned above, from Dempster's early work on upper and lower probabilities induced by a multivalued mapping in 1967 [DEM 67]. Using this work as a springboard, Shafer, in his 1976 book A Mathematical Theory of Evidence [SHA 76], laid the foundations of the belief functions theory. This theory is more powerful than the previous ones in terms of richness of analysis, of both uncertainty and imprecision. We will see, in particular, that probabilities and possibilities are two specific cases of belief functions, making this theory a general, overarching framework in which to jointly process data of very diverse natures. However, it is more complex to use, and in particular, the interpretation of specific problems in this form is much more challenging. This difficulty meant that for years, belief functions were ignored, before beginning to be used very subjectively for qualitative reasoning processes. Driven by the evolution of requirements discussed above, a number of publications in the 1990s were finally able to develop practical tools for data modeling and implementation in real-world applications. This led to the rise of a community of researchers who, though they subscribed to slightly different schools of thought, have now achieved a fairly full command of these techniques. This community began to come together and organize effectively in the 2000s – primarily in France. Indeed, the success of a number of national conferences on belief functions led to the founding, in 2010, of an international society (the Belief Functions and Applications Society) and, correlatively, to the organization of the earliest international events entirely devoted to the theory (the International Workshop on the Theory of Belief Functions in 2010 and the Spring School on Belief Functions Theory and Applications in 2011).
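
    As a minimal illustration of these notions (my own sketch, with arbitrary masses, not the book's notation), the following code evaluates the belief and plausibility induced by a basic belief assignment over a three-element frame; because its focal elements are nested, its plausibility on singletons behaves like a possibility distribution, while a mass function concentrated on singletons would reduce to an ordinary probability:

```python
# Minimal sketch (illustrative only): a basic belief assignment over the frame
# {a, b, c}, with the associated belief and plausibility measures.
frame = frozenset({"a", "b", "c"})

# Mass function with nested focal elements (a "consonant" belief function):
# its plausibility on singletons behaves like a possibility distribution.
mass = {
    frozenset({"a"}): 0.5,
    frozenset({"a", "b"}): 0.3,
    frame: 0.2,
}

def belief(A):
    """Total mass committed to subsets of A."""
    return sum(m for B, m in mass.items() if B <= A)

def plausibility(A):
    """Total mass of focal elements compatible with A."""
    return sum(m for B, m in mass.items() if B & A)

for x in sorted(frame):
    A = frozenset({x})
    print(f"{x}: Bel = {belief(A):.2f}, Pl = {plausibility(A):.2f}")
# If every focal element were a singleton (a "Bayesian" mass function),
# Bel and Pl would coincide and reduce to an ordinary probability distribution.
```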

    Evidently, these different theories were not initially developed for data fusion (in particular, multisensor data fusion). Hence, the aim of this book is to identify the specific and joint contributions that can be drawn from these theoretical frameworks in order to serve the needs expressed, and to create a coherent set of tools for multisensor data processing. This work fits in perfectly with the concern with data fusion which has regularly brought (and continues to bring) the scientific community together since the founding, in 1998, of the International Society on Information Fusion (whose annual conference, FUSION, has a growing attendance and impact) and of the International Journal on Information Fusion.

    With this in mind, it is appropriate to begin with a chapter that clearly defines the different aspects of the topic of multisensor data fusion and the requirements inherent in it. The basic principles of the different theories are then set out and compared in Chapter 2. The subsequent chapters each discuss a particular function in detail, in an order which lends itself to the gradual construction of a consistent set of operators. At each step, we examine the solutions which can be developed in each theoretical framework, either competitively or in combination. The functions examined relate to the different stages of the processing: modeling of the data, assessment of the reliability of the different pieces of information, choice of frameworks for analysis and propagation of the information from different viewpoints, combination of the different sources, and decision-making in relation to the observed situation. The deployment of complete processing techniques, dealing with general issues such as the matching of ambiguous data or the tracking of vehicles, is then discussed in the later chapters, before a conclusion is drawn as to the contribution of uncertainty theories to multisensor data fusion.

    At each stage, didactic examples are used to illustrate the practical application of the proposed tools, their operation and the performances that we can typically expect from them for each of the problems at hand.

    The discussion in these chapters gives an overview of the scientific advances that the author has, for two decades, been teaching in different contexts: the Collège de Polytechnique, engineering schools, international seminars, etc., capitalizing on an original, overarching view of the domain.

    1

    Multisensor Data Fusion

    1.1. Issues at stake

    Why would anyone seek to combine multiple sensors when doing so inevitably increases cost, complexity, bulk, weight, etc.?

    The first reason that often comes to mind is that we can use multiple identical sensors to improve their performances. Yet, if n sensors provide an estimation of the same value with the same signal-to-noise ratio (SNR), the joint use of those n sensors will, at best, lead to a gain of √n in that SNR, while multiplying by a factor close to n all the material factors of the resulting system (cost, weight, bulk, etc.). Additionally, in such cases, there are often simpler and more effective solutions available – particularly solutions based on temporal integration of the data from a single sensor.
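
    As a quick numerical illustration of that order of magnitude (a sketch of my own, not an example from the book, with arbitrary values), the following simulation averages the readings of n identical sensors observing the same value and compares the resulting SNR with that of a single sensor:

```python
# Minimal numerical check (illustrative only): averaging n identical, independent
# sensor measurements improves the amplitude SNR by roughly sqrt(n).
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0
noise_std = 2.0            # identical noise level assumed for every sensor
n_sensors = 9
n_trials = 100_000

# Each row is one trial; each column is one sensor's measurement of the same value.
measurements = true_value + noise_std * rng.standard_normal((n_trials, n_sensors))

snr_single = true_value / measurements[:, 0].std()         # one sensor alone
snr_fused = true_value / measurements.mean(axis=1).std()   # average of the n sensors

print(f"single-sensor SNR: {snr_single:.2f}")
print(f"fused SNR        : {snr_fused:.2f}")
print(f"ratio            : {snr_fused / snr_single:.2f}  (sqrt({n_sensors}) = {n_sensors**0.5:.2f})")
```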

    This example highlights the fact that combining multiple sensors is only irrefutably advantageous when it produces, under specific conditions, information that a single sensor (whatever its type) would be unable to provide. In practice, in order to identify the situations where it is helpful, we can consider three categories of objectives that a multisensor approach may serve. Each of these categories can be illustrated by looking at a few situations where observation and surveillance systems are used.

    The first major benefit of multisensor systems is their robustness in any observation context, which is usually a decisive factor in the choice to use such systems. For example, the system may be less vulnerable to disturbances – whether intentional (counter-measures specifically targeted at a particular waveform or wavelength, which do not affect those of the other sensors), or natural (atmospheric phenomena that adversely affect one sensor but not the others, such as multipath at low elevation angles and the effect of an evaporation duct on radar, or atmospheric transmission in optoelectronics). Other examples include the ability to function in an environment or conditions of observation that impede the operation of a single sensor, but do not have the same effect if a variety of appropriate observation devices are used simultaneously. Thus, various types of weather-related disturbance, geometric masking effects, problems of spatial or radiometric resolution, or limitations in detection range may render one of the sensors (though not always the same one) non-operational. In the same vein, there is also the problem of how representative the data used to train a given sensor to later recognize specific objects are of the reality on the ground. If the training data used are not representative, the only way to recognize the target objects is by cross-referencing the data from different sensors.

    The second point of superiority of multisensor systems is the acuity and richness of the information gleaned. For example, one sensor might discriminate between targets independently of their size on the basis of the features of their rotating parts, while another sensor, which is not capable of observing these features, distinguishes them by their size. The combination of the distinguishing capabilities of these sensors will, obviously, help to refine the taxonomy finally generated. Similarly, the relevant association of a radar, which provides good distance and Doppler resolution, with a passive optical device offering good angular resolution will generate a fine-grained analysis in a four-dimensional space, the dimensions being elevation, bearing, distance and Doppler. Partial non-availability of data to one sensor (unobservable measurements, non-availability of training data, etc.) can also be compensated for by data from another sensor.

    The third great capability of multisensor systems is a better reaction time when presented with the most complex requests, because the required tasks can be shared out between the different sensor components used. Indeed, each of the sensors can, in parallel, focus on dedicated functions appropriate to its capabilities. The synergy of the acquisition and processing work then optimizes the responsiveness of the whole system. For example, a radar can quite easily perform a quick pre-screening of the space – a survey with a high detection rate but also a high false alarm rate – with a simple waveform, in order to provide a small number of potential targets for detailed analysis by an optoelectronic identification system.

    It is useful to note that, for each of these three major categories of benefits reaped with the multisensor approach, the expected gain can only be obtained through appropriate complementarity of the sensors used and of their processing. Hence, above all else, the quality of a multisensor system depends on the diversity of its components in the face of the problem at hand. Consequently, the functional specificity of each of these components, the diversity of the data they provide, and the exponential increase in the volume of data to be processed are all unavoidable complicating factors for the design and deployment of multisensor data fusion modules.

    In addition, and correlatively, combining multiple sensors only makes sense for carrying out functions that a lone sensor, of any type, would be incapable of performing in all foreseeable circumstances. This means that the system's performances hinge on the capabilities of one or other of the sensors at different times (the same sensor will not always be fully functional, and different sensors will perform better at different times; otherwise we would only need one sensor and would have no need for the others). What follows from this is that we must constantly fuse relevant data with defective data. Yet, as we will see, blithely combining good and bad data always yields an inaccurate result, as the bad data pollute the good. Therefore, we need to constantly use all of the available information, both exogenous and previously collected, to assess and qualify the observations coming from the different sensors, and to exploit those observations on the basis of their relevance. Of course, this further increases the diversity and volume of the information needing to be integrated, which in turn further increases the complexity of the processing, because this qualitative dimension needs to be integrated in detail at all levels.
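
    The effect can be seen in a minimal sketch (illustrative only, with assumed noise levels; a simple inverse-variance weighting stands in here for the richer reliability assessment developed later in the book): naively averaging a sound sensor with a heavily disturbed one degrades the estimate, whereas weighting each observation by an assessment of its reliability does not.

```python
# Minimal sketch (illustrative only): blind, equal-weight fusion versus
# reliability-weighted fusion of a good sensor and a degraded sensor.
import numpy as np

rng = np.random.default_rng(1)
true_value = 5.0
std_good, std_bad = 0.5, 5.0     # assumed noise levels: one good sensor, one degraded sensor
n_trials = 100_000

good = true_value + std_good * rng.standard_normal(n_trials)
bad = true_value + std_bad * rng.standard_normal(n_trials)

naive = 0.5 * (good + bad)                                   # blind, equal-weight fusion
w_good, w_bad = 1.0 / std_good**2, 1.0 / std_bad**2
weighted = (w_good * good + w_bad * bad) / (w_good + w_bad)  # reliability-weighted fusion

def rmse(x):
    return float(np.sqrt(np.mean((x - true_value) ** 2)))

print(f"RMS error, good sensor alone: {rmse(good):.3f}")
print(f"RMS error, naive fusion     : {rmse(naive):.3f}")    # worse than the good sensor alone
print(f"RMS error, weighted fusion  : {rmse(weighted):.3f}")
```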

    In view of this significant increase in the complexity of the system and of its processing, real-time operation imposes objectives in terms of reactivity, and therefore rapidity, often combined with constraints on onboard integration. A crucial objective for data fusion processing, therefore, is to find a compromise between the complexity needed to ensure the desired benefits and the simplicity needed to remain compatible with the operational constraints.

    1.2. Problems

    In practice, the combination of different sensors may be useful for two types of goals:

    – Distinguishing hypotheses in a discrete set: this is the case for the functions of detection, extraction, classification, recognition, identification, counting or diagnostics more generally.

    – Estimating variables in a continuous set: of particular note here are the functions of localization, tracking, navigation or, more generally, metrology (quantification of descriptors on the basis of observations).

    In both cases, the fusion algorithms must not only exploit the richness of all the available information as best they can, but also satisfy the expression of high-level operational requirements imposed by the pooling of different means of observation in increasingly complex systems.
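
    As a toy illustration of the first type of goal (hypothetical classes and likelihoods of my own choosing, with a simple Bayesian product rule standing in for the fusion operators developed in later chapters), the following sketch combines the class likelihoods reported by two independent sensors:

```python
# Minimal sketch (hypothetical numbers): discriminating hypotheses in a discrete
# set by combining the class likelihoods of two independent sensors.
classes = ["car", "truck", "decoy"]
prior = {c: 1.0 / len(classes) for c in classes}

# Likelihood of each sensor's observation under each hypothesis (assumed values).
sensor_1 = {"car": 0.6, "truck": 0.3, "decoy": 0.1}   # e.g. size-based discrimination
sensor_2 = {"car": 0.5, "truck": 0.1, "decoy": 0.4}   # e.g. signature-based discrimination

posterior = {c: prior[c] * sensor_1[c] * sensor_2[c] for c in classes}
total = sum(posterior.values())
posterior = {c: p / total for c, p in posterior.items()}

for c in classes:
    print(f"P({c} | both sensors) = {posterior[c]:.2f}")
# The second type of goal, estimating a variable in a continuous set, is of the
# same kind as the weighted-fusion sketch shown earlier in section 1.1.
```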

    As a support and as a reference for the coming discussion, consider the expected evolution of a generic classification system. Figure 1.1 illustrates the traditional structure of such a system, where the objective is to find the
