AFRL-AFOSR-VA-TR-2015-0329: Dennis Bernstein
Dennis Bernstein
UNIVERSITY OF MICHIGAN
10/01/2015
Final Report
Report Date: 19-10-2015
Report Type: Final Performance
Dates Covered: 01-09-2012 to 14-10-2015
Title: Transformative Advances in DDDAS with Application to Space Weather Monitoring
Subject Terms: DDDAS
Responsible Person: Dennis Bernstein (734-764-3719)
Transformative Advances in DDDAS
with Application to Space Weather Monitoring
Abstract
This project focused on DDDAS-motivated developments in support of space weather monitoring and prediction. The project involved four interrelated tasks relating to physics-driven adaptive modeling, adaptive data assimilation with input reconstruction, event-based sensor reconfiguration, and optimization of scheduling. For data assimilation, the emphasis has been on model refinement. The problem of estimating the eddy diffusion coefficient using total electron content measurements has led to new techniques for determining the essential modeling details needed by the retrospective cost model refinement technique. For spacecraft design, multidisciplinary design optimization techniques were applied to the design of small satellites, accounting for multiple vehicle subsystems. For download scheduling, optimization techniques were used to account for multiple spacecraft and ground stations.
For modeling the ionosphere-thermosphere, we use the Global Ionosphere-Thermosphere Model (GITM) [77], a CFD code with atmospheric chemistry, as the basis for data assimilation, input estimation, and subsystem identification. These objectives were addressed by applying a specialized technique developed under this project, called retrospective-cost-based adaptive input and state estimation (RCAISE). We applied RCAISE to GITM in order to estimate unknown drivers (inputs) to the ionosphere-thermosphere [2, 24]. This approach complements ensemble-based methods such as [66], which applies the ensemble code DART to GITM, since the main goal of RCAISE is to estimate the unknown input rather than to obtain estimates of all model states. Since RCAISE is not an ensemble code, it is highly computationally efficient compared to DART, but, unlike DART, it provides point estimates rather than a probability density function. A related approach was previously demonstrated in [25] to identify a subsystem cooling model whose dynamics are inaccessible in the sense that neither the input nor the output of the cooling subsystem is measured.
Much of this effort has focused on determining the essential modeling details required by RCAISE for both input estimation and model refinement. For linear systems, this information is well understood. However, for application to space weather modeling, GITM is both highly nonlinear and high dimensional. Our goal is thus to extract the essential modeling details from numerical simulations. In earlier studies, numerical testing was used to determine this information. An additional goal was to estimate the eddy diffusion coefficient (EDC) using total electron content (TEC) data. TEC data are available from ground stations located worldwide, and thus RCAISE is able to work with multiple measurements. Numerical testing has shown that estimating this modeling information is computationally expensive, and thus more efficient methods are needed.
In previous work, RCMR was used in [25] to estimate the NOx cooling profile using GITM and simulated spacecraft measurements. This study was limited to a one-dimensional version of GITM. RCMR was subsequently used in [18] to estimate the photoelectron heating coefficient in a fully three-dimensional version of GITM using both simulated and real satellite measurements. The applications in [25] and [18] demonstrated the ability of RCMR to adaptively refine a nominal model by iteratively updating an estimate of an unknown parameter, which is modeled as an unknown subsystem, as shown in Figure 1; the input signal ŷ and output signal û of this subsystem are not measured and thus cannot be used directly to estimate the unknown subsystem.
Alternatively, for the objective of estimating an unknown external input, RCAISE was used in [2] to estimate the unknown driver F10.7 in a fully three-dimensional version of GITM using both simulated and real satellite measurements. In this case, as shown in Figure 2, the adaptive driver estimator is updated to obtain an estimate ŵ of the unknown driver w. These studies demonstrated that RCMR and RCAISE can effectively use data for parameter, state, and input estimation within the context of a highly nonlinear, large-scale model consisting of thousands of state variables.
In this section we briefly describe work whose goal is to estimate the eddy diffusion coefficient (EDC) of the thermosphere. The drag force felt by low-Earth orbiting objects is linearly proportional to the mass density of the thermosphere. Uncertainties in thermospheric mass density variations are the major limiting factor for precise low-Earth orbit determination. The perturbation of the thermospheric mass density is strongly controlled by the energy deposited into the upper atmosphere. The difference in the thermospheric mass density responses to different sources of energy input has been investigated, and the spatial and temporal variations of the thermospheric mass density during a series of idealized substorms were studied using the Global Ionosphere Thermosphere Model (GITM) [64]. The mass density response to different types of energy input was shown to have strong local time dependence. In addition, the thermospheric mass density response to different sources of energy input is slightly nonlinear, and the nonlinearity grows with the magnitude of the energy input.
Figure 3: Simulated ground locations for total electron content measurements. Each "+" denotes the location of a simulated ground station for measurements of total electron content (TEC). Simulated data from these locations are used by RCMR to estimate the eddy diffusion coefficient in the thermosphere.
RCMR and RCAISE depend on modeling information about the system being considered; this modeling information can be viewed as a reduced-order model or as tuning parameters, since often only a small number of parameters are needed. For a low-dimensional linear system, the required modeling information can be extracted in the form of the impulse response. For DDDAS applications, however, our interest is in high-dimensional nonlinear systems, such as the thermosphere as modeled by GITM, where the number of states may be as large as 10^7. GITM is implemented as a FORTRAN code that captures diverse physics, including fluid dynamics, thermodynamics, chemical kinetics, and electrodynamics. For this application, no analytical model is available, and it is not possible to analytically extract the required modeling information due to the complexity of the physics. Therefore, we typically implement RCMR and RCAISE using a small number of tuning parameters determined by numerical testing. As we have found in relation to the estimation of EDC using TEC measurements, this approach is inefficient.
What is needed to make RCMR/RCAISE a truly practical tool is a reliable, systematic, and easily implementable technique for obtaining the tuning parameters required by RCMR/RCAISE. One approach, which we have tested on low-order examples, involves iterative refinement of the tuning parameters that comprise the filter Gf. For a fixed data window, initial tuning parameters are chosen and used to obtain a subsystem model. This subsystem model is merged with the main system model, and the resulting "closed-loop" impulse response is computed. The impulse-response parameters are then used as tuning parameters within an updated filter Gf for re-estimating the unknown subsystem, and the process repeats using the same data window used in previous iterations. Tests have shown that this iterative filter refinement approach provides fast response from a possibly poor initial choice of Gf and is efficient in the sense that only a limited data window is needed.
To illustrate this technique, consider a second-order linear system in which A11 is uncertain. We assume that the nominal model is such that Â11 − A11 = ∆A11 = 1. We construct Gf from the impulse response of the transfer function from û to ŷ0. In subsequent iterations, Gf is refined using the updated value of Â.

Figure 4: (a) RCMR estimate of ∆A11 with iterative filter refinement. The filter Gf is refined after each iteration based on the updated estimate of ∆A11. Note that the data window for each iteration consists of only 10 time steps, RCMR is switched on after 5 time steps, and one-step estimation of ∆A11 is achieved at the 3rd iteration. (b) Estimate of ∆A11 at the end of each iteration. After each iteration, Gf is refined using the updated estimate and is used in the next iteration.

Figure 4(a) shows the estimate of ∆A11 in each iteration. Figure 4(b) shows the estimate of ∆A11 at the end of each iteration.
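As a rough numerical illustration of the idea (this is not the RCMR algorithm itself, and all numerical values are invented for the example), the following sketch recovers the uncertain entry A[0,0] of a two-state discrete-time system from a single 10-step output window by repeatedly refining the estimate on the same data:

```python
import numpy as np

# Toy illustration (not RCMR): recover the uncertain entry A[0,0] of an
# invented two-state system from a single fixed 10-step output window.
A_true = np.array([[0.5, 0.1], [0.0, 0.7]])
C = np.array([1.0, 0.0])
x = np.array([1.0, 1.0])
ys = []
for _ in range(10):                      # fixed 10-step data window
    ys.append(float(C @ x))
    x = A_true @ x

def window_error(a11):
    """Squared output error over the window for a candidate A[0,0]."""
    A = np.array([[a11, 0.1], [0.0, 0.7]])
    xm, err = np.array([1.0, 1.0]), 0.0
    for y in ys:
        err += (y - float(C @ xm)) ** 2
        xm = A @ xm
    return err

lo, hi = -0.5, 1.5                       # nominal guess is off by Delta A11 = 1
for _ in range(20):                      # iterative refinement on the SAME window
    grid = np.linspace(lo, hi, 11)
    best = grid[int(np.argmin([window_error(a) for a in grid]))]
    span = (hi - lo) / 4                 # shrink the search bracket each pass
    lo, hi = best - span, best + span
# best converges to the true value A11 = 0.5
```

As in the filter-refinement procedure described above, each pass reuses the same short data window, so only a limited amount of data is needed.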
This Gf refinement technique has the added advantage that refinement of Gf is implemented on the same data set, and thus only a small amount of data is required to increase the rate of convergence. The final refined filter Gf is then used on the remaining data set. For nonlinear systems, this technique can simplify the numerical testing procedure for constructing Gf. If successful on large-scale applications such as GITM, this iterative filter refinement technique will significantly simplify the implementation of RCMR/RCAISE on new and challenging applications. In particular, this technique will facilitate the application of RCMR/RCAISE with greater assurance of success.
An additional component of this project was the development of post-launch calibration techniques for
spacecraft that collect data for atmospheric modeling. This effort ties into the development of Cubesats
at the University of Michigan, several of which are currently in orbit [11, 22, 78]. New techniques were
developed for calibrating on-board sensors after the spacecraft reaches its specified orbit. In particular,
low-cost photoelectric and magnetic-field sensors typically lose calibration due to the launch and space
environment. Through a combination of physics modeling and data analysis, it is shown in [89] that the
accuracy of on-board sensors can be enhanced through ground-based re-calibration. This re-calibration
accounts for time-varying electrical and magnetic fields, the Earth’s spatially varying magnetic field, and
sensor degradation.
Additional research has focused on design studies for spacecraft configurations. In particular, multidisciplinary design optimization techniques were applied in [51] to optimize multiple spacecraft subsystems, including the power and communications subsystems. In addition, a study of photovoltaic power generation constraints due to spacecraft solar panel geometry, orbital effects, and self-induced shadowing is described in [61].
The last component of this project addresses the challenges that arise from transferring data from a constellation of satellites. Data transfer is limited by power and energy constraints, orbit trajectory, and the location, gain, and field of view of available ground stations. This leads to a capacity-constrained scheduling problem. These issues were addressed by developing real-time scheduling algorithms for modeling and optimizing space networks to maximize communication capacity [86, 88]. Research includes detailed investigations of download-scheduling techniques for configurations involving multiple satellites and ground stations [19, 62, 87].
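As a toy sketch of the capacity-constrained scheduling problem (the algorithms in [19, 62, 87] are far more sophisticated; every name, contact window, and data rate below is invented), a greedy earliest-finish-first heuristic assigns non-overlapping contacts to each ground station:

```python
from dataclasses import dataclass

# Toy greedy heuristic for a capacity-constrained download schedule.
# Per-satellite conflicts are ignored for brevity (a satellite here may
# use two stations at once), which real schedulers must also handle.
@dataclass
class Contact:
    sat: str
    station: str
    start: float   # minutes
    end: float     # minutes
    rate: float    # megabits per minute

def schedule(contacts):
    """Earliest-finish-first: keep each station busy with non-overlapping contacts."""
    chosen, busy_until = [], {}
    for c in sorted(contacts, key=lambda c: c.end):
        if busy_until.get(c.station, float("-inf")) <= c.start:
            chosen.append(c)
            busy_until[c.station] = c.end
    return chosen, sum((c.end - c.start) * c.rate for c in chosen)

demo = [Contact("sat-A", "gs-1", 0.0, 10.0, 1.0),
        Contact("sat-B", "gs-1", 5.0, 12.0, 1.0),
        Contact("sat-B", "gs-2", 5.0, 12.0, 2.0)]
chosen, total_mb = schedule(demo)        # two contacts are kept, 24 Mb downloaded
```

Earliest-finish-first is the classic interval-scheduling heuristic; the project's hierarchical closed-loop formulation adds energy, storage, and availability constraints on top of this basic structure.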
This project partially supported several students who received the Ph.D. degree, namely, Sara Spangelo,
John Springmann, and Asad Ali, as well as postdoc Angeline Burrell. Continuing students include Ankit
Goel in Aerospace Engineering as well as postdoc XianJing Liu. Brian Lemay and Jeremy Castaing are
expected to complete their Ph.D. degrees in 2016 and 2017, respectively.
Two students supported by this project were recognized for their technical contributions. John Springmann won the Student Paper Competition at the AIAA 2013 Small Satellite Conference for [90]. Jeremy Castaing received honorable mention in the Student Paper Competition at the AIAA 2014 Small Satellite Conference for [19].
The foundation for space weather monitoring is a first-principles model of the upper atmosphere, namely, the Global Ionosphere-Thermosphere Model (GITM) [77], which provides the basis for data assimilation algorithms. GITM is a three-dimensional spherical code that solves the Navier-Stokes equations for the thermosphere. These models work better than empirical (ad hoc "correction") models because they capture the dynamics of the system instead of snapshots of steady-state solutions, which are what most empirical models provide. Furthermore, first-principles models such as GITM model the winds that can influence the drag. In order to accurately predict the mass density ρ and wind velocity Vw in the upper atmosphere, GITM considers the densities of N2 and O2, which are the main constituents at 100 km altitude, and NO, which becomes more dominant in the upper atmosphere.
GITM is different from most models of the atmosphere in that it solves the full vertical momentum
equation instead of assuming that the atmosphere is in hydrostatic equilibrium, where the pressure gradient
is balanced by gravity. While this assumption is fine for the majority of the atmosphere, in the auroral zone,
where significant energy is dumped into the thermosphere on short time-scales, vertical accelerations often
occur. This heating causes strong vertical winds that can significantly lift the atmosphere [30].
The grid structure within GITM is fully parallel, using a block-based, two-dimensional domain decomposition in the horizontal coordinates [70, 71]. The number of latitude and longitude blocks can be specified at runtime, so the horizontal resolution can easily be modified. GITM has been run on up to 256 processors with a resolution as fine as 0.31° latitude by 2.5° longitude over the entire globe with 50 vertical levels, resulting in a vertical domain from 100 km to roughly 600 km. This flexibility will allow us to validate accuracy by running data assimilation and input reconstruction at various levels of resolution.
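A back-of-the-envelope check of the grid size at the finest resolution quoted above (assuming 0.31° and 2.5° are the per-cell angular sizes):

```python
# Approximate GITM cell count at 0.31 deg latitude x 2.5 deg longitude
# with 50 vertical levels (resolution figures taken from the text).
lat_cells = round(180 / 0.31)                  # 581 latitude cells
lon_cells = round(360 / 2.5)                   # 144 longitude cells
levels = 50                                    # vertical levels, 100 km to ~600 km
total_cells = lat_cells * lon_cells * levels   # 4,183,200 cells
```

With several physical quantities stored per cell, this is consistent with the state dimensions of order 10^7 mentioned earlier.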
It is possible to "fly" satellites through GITM. By listing the times and locations of measurement points, GITM can track the path of a satellite, outputting simulated data at the specified times and positions. This feature simplifies implementation and validation of data assimilation and input estimation; Fig. 5 illustrates this capability.
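The fly-through idea amounts to sampling a gridded field at a list of track points. The nearest-neighbor lookup below is only a stand-in for GITM's own (more sophisticated) satellite-output routine, and the field and track are invented:

```python
import numpy as np

# Stand-in for GITM's satellite fly-through: sample a gridded scalar field
# (an invented proxy for mass density) at listed track points.
lats = np.linspace(-90.0, 90.0, 19)      # 10-degree latitude grid
lons = np.linspace(0.0, 350.0, 36)       # 10-degree longitude grid
field = np.outer(np.cos(np.radians(lats)), np.ones(lons.size))  # toy field

def sample(lat, lon):
    """Return the field value at the grid point nearest (lat, lon)."""
    i = int(np.abs(lats - lat).argmin())
    j = int(np.abs(lons - lon % 360.0).argmin())
    return float(field[i, j])

track = [(0.0, 10.0), (60.0, 200.0)]     # (latitude, longitude) waypoints
samples = [sample(la, lo) for la, lo in track]
```

A production version would interpolate in latitude, longitude, altitude, and time rather than taking the nearest grid point.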
The thermosphere and ionosphere mark the true transition of the atmosphere to the space environment. The top of the thermosphere (the exosphere) is where the collision frequency of the neutral particles drops low enough that it becomes difficult to describe them as fluids. In the lower thermosphere, the neutrals are still dominated by dynamics such as solar tidal forcing, but, as altitude and ion density increase, the neutrals become more strongly controlled by ion forcing. Also, the density of the gas becomes so low that the flow speeds can become quite large, occasionally exceeding 1000 m/s. This speed is comparable to the sound speed, so it is possible to get supersonic flows in the upper atmosphere. Furthermore, because the flows and sound speed can be so large, the dynamics of the upper atmosphere are truly global: waves propagate from one side of the Earth to the other in just a few hours. This means that models need to consider the global system and not just focus on regional dynamics.
Ions, because they are charged particles, are driven by electric fields, which are much stronger than forces
such as gravity and the gradient in pressure. The electric forces are so strong that the ions quickly reach an
equilibrium velocity that can have a significant amount of small-scale spatial structure [54]. The neutrals,
on the other hand, have much more inertia and are slow to react, so the ions serve to transfer momentum
and energy (through friction) to the neutrals [29, 57]. This transference often happens on small scales in
regions of large electron densities and strong electric fields, that is, in the auroral zone [31]. The heating on small scales causes localized perturbations in the thermospheric (neutral) temperature and pressure, which increase winds and eventually cause a global disturbance [12]. Therefore, it is often important to be able to capture small-scale dynamics; the difficulty is that the global-scale phenomena must be captured as well.
Further complicating the issue is the fundamental modeling of the aurora and how it deposits energy
into the thermosphere and ionosphere. The aurora, in essence, is a beam of electrons that is shot into
the thermosphere. Many researchers have simulated how the atmosphere reacts to this beam of energy,
and parameterized how this phenomenon deposits energy into the system [34, 38]. However, there are
aspects of this process that are often simplified and are not taken into account when simulating active time
periods. For example, models of the auroral energy deposition typically use a static thermosphere that is
not disturbed [84]. The composition is typically static in the models, so the aurora precipitates into the
same atmosphere continually. What really happens, though, is that, as the electrons penetrate into the
atmosphere, heat is deposited, preferentially lifting the heavier species (such as N2 and O2 ), which changes
the collision frequency between the beam of electrons and the atmosphere [17, 91, 92]. This is normally not
a large concern since the amount of energy deposited is not substantial enough to change the composition
significantly, but during large auroral events, this feedback should be considered.
The ionosphere and thermosphere are strongly driven by the Sun. This means that the evolution of the
thermosphere depends primarily on the input from the Sun rather than its current state. In other words, the
effect of an initial state tends to be “washed away” by the input. Consequently, estimates of solar drivers
can greatly enhance the accuracy of state estimates obtained from data assimilation with unknown inputs.
Obtaining accurate estimates of these drivers represents an application and challenge to DDDAS concepts
and technology.
In the case of the upper atmosphere, the extreme ultraviolet radiation produces photo-ionization, which in turn, through chemistry and heating, drives the properties of the ionosphere and thermosphere. Since a significant portion of the EUV and X-ray radiation is absorbed in the atmosphere, it is not possible to measure the flux from the ground. Instead, a proxy is used. The solar radio flux at a wavelength of 10.7 cm (called F10.7) is thus measured on the ground and used to estimate the solar spectrum in the 0-150 nm range. The unit of F10.7 is 10^-22 W/(m^2 Hz), which is equivalent to 1 solar flux unit.
Although good estimates of F10.7 can enhance data assimilation in the ionosphere-thermosphere, esti-
mates of F10.7 are infrequent (for example, once per day) and approximate. This situation motivates the
need to estimate F10.7 concurrently with the states. This is a problem of input estimation.
6.2 Retrospective Cost Adaptive Input and State Estimation (RCAISE) for DDDAS
State estimation techniques, such as the Kalman filter and its numerous variants, use measurements to recursively refine state estimates. In the simplest case, the system of interest has the form

x(k+1) = A x(k) + B u(k) + w(k),    y(k) = C x(k) + v(k),

where x(k) is the state and y(k) is the available measurement. The input to the system is modeled as a
combination of a known deterministic signal u(k) and an unknown stochastic signal w(k). The known
deterministic signal u(k) is injected numerically into the observer (not shown here), which uses knowledge
of the statistics of w(k) and the sensor noise v(k) as well as knowledge of the matrices A and C to obtain
an estimate x̂(k) of the state x(k). In practice, however, the deterministic input u(k) is often unknown, or at least partially uncertain. A common practice is thus to treat u(k) as part of the stochastic input w(k). This approach, however, can yield poor state estimates for the simple reason that the characteristics of u(k) may be quite different from the assumed statistics of w(k).
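The effect of lumping a known deterministic input into the process noise can be seen in a minimal numerical experiment (the system matrices and noise levels below are invented, not from the report): the same Kalman filter is run twice on identical data, once injecting the known input and once ignoring it.

```python
import numpy as np

# Invented two-state system: x(k+1) = A x(k) + B u(k) + w(k), y(k) = C x(k) + v(k),
# with a known sinusoidal deterministic input u(k).
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([0.0, 1.0])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.04]])

N = 300
u = np.sin(0.1 * np.arange(N))
xs, ys, x = [], [], np.zeros(2)
for k in range(N):
    xs.append(x)
    ys.append(C @ x + 0.2 * rng.standard_normal(1))
    x = A @ x + B * u[k] + 0.1 * rng.standard_normal(2)

def rms_error(use_input):
    """Kalman filter on the stored data; optionally ignore the known input u."""
    xhat, P, sq = np.zeros(2), np.eye(2), 0.0
    for k in range(N):
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)        # measurement update
        xhat = xhat + K @ (ys[k] - C @ xhat)
        P = (np.eye(2) - K @ C) @ P
        sq += float(np.sum((xhat - xs[k]) ** 2))
        xhat = A @ xhat + (B * u[k] if use_input else 0.0)  # time update
        P = A @ P @ A.T + Q
    return (sq / N) ** 0.5

e_known, e_ignored = rms_error(True), rms_error(False)      # e_known is smaller
```

Because the sinusoidal input is nothing like white noise, the filter that ignores it carries a persistent bias in its state estimate.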
Because of uncertainty in u(k), extensive research has been devoted to developing extensions of the
Kalman filter that are either insensitive to knowledge of the deterministic input or that attempt to estimate
this signal in addition to the states. These techniques are referred to as unbiased Kalman filters, unknown
input observers, input estimators, and state estimators with input reconstruction. The literature is extensive
[14, 36, 49, 59, 60, 72, 94, 96, 101], but no unified techniques have yet emerged, and application to large-
scale data assimilation has been limited.
The importance of input estimation is due to the fact that, for many systems, the effect of the initial state decays, and thus the asymptotic response is governed entirely by the forcing. In the linear case, systems with this property are asymptotically stable. In the nonlinear case, these systems are called incrementally stable or contractive, and the relevant phenomenon is called entrainment [9, 81, 85]. The presence of entrainment suggests that, at least in some cases, the accuracy of state estimation can be greatly enhanced by the ability to estimate the unknown input.
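The "washing away" of the initial state can be seen in a short experiment with an invented asymptotically stable system: two trajectories started from very different initial conditions, driven by the same input, become numerically indistinguishable.

```python
import numpy as np

# Invented asymptotically stable system (spectral radius 0.9): under a common
# forcing, trajectories from different initial states converge, so the
# long-run response is set by the input, not the initial condition.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([0.0, 1.0])
x1 = np.array([10.0, -5.0])
x2 = np.array([-3.0, 7.0])
for k in range(200):
    uk = np.sin(0.1 * k)          # same input drives both trajectories
    x1 = A @ x1 + B * uk
    x2 = A @ x2 + B * uk
gap = float(np.linalg.norm(x1 - x2))   # shrinks like 0.9**200, i.e., ~1e-8
```

This is the discrete-time linear analogue of the entrainment property cited above: after a transient, only the forcing matters, which is why accurate input estimates translate into accurate state estimates.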
We now summarize recent results describing the application of RCAISE to the ionosphere-thermosphere.
An overview is given in [24]. The objective of input estimation supports the goals of DDDAS by enhancing
prediction accuracy. In particular, state and input estimation are considered in [2] based on the 3D GITM
model with both synthetic and real measurements. Here we consider the case of real measurements from
the CHAMP satellite.
This study is motivated by the fact that radio propagation and satellite drag are affected by the Sun's influence on the ionosphere and thermosphere. In particular, extreme ultraviolet (EUV) and X-ray radiation produce photo-ionization, which, in turn, through chemistry and heating, drives the formation of the ionosphere and shapes the thermosphere. In addition, the effect of the EUV and X-ray radiation is sufficient to render the ionosphere-thermosphere a strongly driven system [9, 81, 85].
Since a significant portion of EUV and X-ray radiation is absorbed by the atmosphere, it is not possible
to measure these quantities from the ground. Instead, a proxy is used. The most common proxy for EUV and X-ray radiation is the solar radio flux at a wavelength of 10.7 cm (F10.7), which is measured on the ground (in solar flux units).
Figure 7: (a) Output-matching performance of RCAISE for GITM using driver estimates: µ90,y(k), µ90,ŷ(k), µ90,ŷm(k), σ90,y(k), σ90,ŷ(k), and σ90,ŷm(k) for the case of real CHAMP satellite data and GITM with photoelectron heating. For this example, GITM with RCAISE yields 6% lower RMS(z) compared to GITM with measured F̄10.7(k). (b) State-estimation performance of RCAISE for GITM: µ90,yG(k), µ90,ŷG(k), µ90,ŷG,m(k), σ90,yG(k), σ90,ŷG(k), and σ90,ŷG,m(k) for real GRACE satellite data and the case of real CHAMP satellite data and GITM with photoelectron heating. For this example, GITM with RCAISE yields an 11% reduction in RMS(zG) compared to GITM with measured F̄10.7(k).
Let p(k) ∈ R be an arbitrary signal, and let T be a positive integer. Then, for all k ≥ T, define the windowed average of the signal p(k) as

µT,p(k) = (1/T) Σ_{i=k−T+1}^{k} p(i),

where T is the interval over which the signal is averaged. Similarly, for all k ≥ T, define the windowed standard deviation of the signal p(k) as

σT,p(k) = sqrt( (1/T) Σ_{i=k−T+1}^{k} (p(i) − µT,p(i))² ).
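These definitions translate directly into code; a minimal sketch:

```python
import numpy as np

def windowed_mean(p, T, k):
    """mu_{T,p}(k): average of p(i) over the window i = k-T+1, ..., k."""
    return float(np.mean(p[k - T + 1 : k + 1]))

def windowed_std(p, T, k):
    """sigma_{T,p}(k): per the definition, each deviation is taken about the
    windowed mean evaluated at that index i (so k must be at least 2T - 2)."""
    window = np.asarray(p[k - T + 1 : k + 1], dtype=float)
    mus = np.array([windowed_mean(p, T, i) for i in range(k - T + 1, k + 1)])
    return float(np.sqrt(np.mean((window - mus) ** 2)))
```

For the ramp p(k) = k with T = 3, windowed_mean(p, 3, 9) is 8.0 and windowed_std(p, 3, 9) is 1.0, since each windowed mean lags the ramp by exactly one step.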
The in-situ measurements required for GITM are difficult to make. First, they must be made in space, that is, in the ionosphere and thermosphere. Space is a harsh environment where extreme thermal fluctuations, high vacuum levels, and high-energy radiation require specialized, hardened electronics and systems. These specialized space-based sensors are expensive to build. Also, launches are infrequent and expensive. Second, these measurements are multiscale in time and space. At orbital velocities (∼8 km/s), steep gradients in the thermosphere approach rapidly and require fast sampling. Our measurement environment is on a global scale, covering more than 577 million square kilometers. This is beyond current sensing and space system capabilities.
Within the DDDAS Frontier Application Measurement Systems and Methods, we are developing a new capability to support space-based, multiscale measurements, namely, event-based sensor reconfiguration (EBSR). We consider sensing platforms and support systems that are reconfigurable. For example, dynamic elements could include data-collection rates, sensor selection, sensor physical distribution, and sensor directionality. Consistent with DDDAS, sensor reconfiguration is most advantageously based on events, both physical from the environment and feedback-based from the data assimilation and operations system. This will enable closed-loop, real-time sensor-system modification consistent with DDDAS goals and vision. Although motivated by the space-based domain, the fundamental characteristics of EBSR can be summarized in terms of three primary elements: 1) the system must be able to reconfigure; 2) the system must be able to determine when to reconfigure; and 3) the system must collect relevant telemetry and data to inform the reconfiguration.
Figure 9: The Radio Aurora Explorer (RAX) triple Cubesat developed at the University of Michigan.

B̃y = b[By cos(ρ) + Bx sin(ρ)] + y0 + Σ_{i=1}^{r} s_{i,ỹ} Ĩi + ηy,   (7.2)

B̃z = c[Bx sin(λ) + By sin(φ) cos(λ) + Bz cos(φ) cos(λ)] + z0 + Σ_{i=1}^{r} s_{i,z̃} Ĩi + ηz.   (7.3)
The recalibration procedure estimates magnetometer scale factors, non-orthogonality, and constant as well as time-varying bias. The time-varying bias is captured in the term Σ_{i=1}^{r} s_{i,j} Ĩi, j ∈ {x̃, ỹ, z̃}, where s_{i,j} is the coefficient that maps the i-th current measurement Ĩi to the magnetic field in the j-th magnetometer axis, and r is the number of current measurements included in the model. Although current measurements are required for re-calibration, on-board current sensors are typically part of spacecraft health monitoring. Hence inclusion of the current measurements in the recalibration does not require additional sensors. Most importantly, this model does not require knowledge of the spacecraft attitude.
Figure 10: Data from the RAX-1 PNI MicroMag3 magnetometer. The x-axis of each plot shows time elapsed since the start of the data set, 01-Dec-2010 08:30:46 UTC. Panel (c) shows the difference (µT) versus time; the sun indicator takes the value of one when RAX-1 is in the sun and zero when in eclipse, which shows when the solar panels are illuminated and generating current.

Figure 11: Results of the calibration to estimate both constant errors and time-varying magnetometer bias. Panel (b) shows the difference between the corrected measured field magnitude and the expected magnitude (µT).
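The axis-z measurement model (7.3) can be evaluated directly; the parameter values below are illustrative only, not flight-calibrated:

```python
import math

# Direct evaluation of equation (7.3): scale factor c, misalignment angles
# lambda and phi, constant bias z0, current-dependent bias, and noise eta.
def bz_measured(Bx, By, Bz, c, lam, phi, z0, s, currents, eta=0.0):
    geom = (Bx * math.sin(lam)
            + By * math.sin(phi) * math.cos(lam)
            + Bz * math.cos(phi) * math.cos(lam))
    # time-varying bias: sum over r on-board current measurements
    time_varying_bias = sum(si * Ii for si, Ii in zip(s, currents))
    return c * geom + z0 + time_varying_bias + eta

# A perfectly aligned, unit-scale sensor with two current channels:
b = bz_measured(Bx=0.0, By=0.0, Bz=30.0, c=1.0, lam=0.0, phi=0.0,
                z0=1.5, s=[0.2, -0.1], currents=[2.0, 1.0])  # 30 + 1.5 + 0.3
```

The calibration problem is the inverse of this evaluation: given many measured triples and the current telemetry, estimate c, λ, φ, z0, and the coefficients s_{i,z̃}.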
We developed a method for on-orbit calibration of photodiodes for sun sensing in an attitude determination system [90]. The calibration estimates the scale factors and alignment angles of the photodiodes, resulting in higher attitude determination accuracy than achieved with the pre-flight calibration parameters. The calibration is implemented with an extended Kalman filter to simultaneously estimate the spacecraft attitude and the calibration parameters. This approach, as opposed to an attitude-independent method, enables the calibration of an arbitrary number of photodiodes mounted in any orientation on the spacecraft and facilitates the use of an attitude-dependent Earth albedo model. The method is demonstrated by application to flight data from the RAX-2 satellite and results in an average angular improvement of 10° in sun vector measurements with the photodiodes. Attitude determination accuracies below 1° in each axis are demonstrated using the calibrated photodiodes in combination with a low-cost three-axis magnetometer and rate gyroscope.
Previously, we discussed the need for new methods for collecting, analyzing, and using data gathered from the ionosphere-thermosphere to monitor space weather and thus better predict the orbital motion of large space objects. To successfully meet the goals outlined in these three sections, we also need to develop new models for scheduling and communication policies governing how these data are transmitted to Earth.
Although we have posed (and will continue to discuss) this problem in the context of Cubesat data
collection, the results that we are developing have much broader applicability. Specifically, we consider
problems in which we have multiple clients (e.g., Cubesats) and multiple servers (e.g., ground stations).
The clients are largely controlled by a central manager who provides policies to guide behavior; the clients
in turn make local decisions based on these policies as well as on environmental factors that they experience
(for example, a Cubesat might gather more data than originally planned if significant variation between its
expected and observed location were detected). The central manager has discrete and possibly infrequent
opportunities to communicate with the clients to update the policies. Finally, the clients are dependent on
the servers, which themselves may be controlled by external agents, may have conflicts with other users,
and may be subject to uncertainty in their availability and their performance characteristics. Problems of
this type can be found in computing systems, telecommunication systems, energy systems, UAV systems,
and many more real-world contexts. To address these fundamental, challenging, and widely applicable
scheduling problems, we are developing heuristics, optimization-based algorithms, and simulation tools for
hierarchical closed-loop scheduling (HCLS). Our work towards these goals is described below.
We have developed a general, analytical framework for modeling an operational satellite mission. We define
a framework as a set of reusable elements and templates for describing dynamics, constraints, and goals.
The framework analytically represents the dynamic interaction of states (such as position, energy, and data)
and subsystem operations (such as communication and energy management) of an operational satellite. It
captures mission constraints, which are often called requirements, that specify minimum performance levels.
It also enables analytical expression of objectives, which are goals of the mission to be maximized. The four main elements of the framework are parameters, states, subsystems, and the schedule. Elements are either constant or time-dependent; time notation is omitted for simplicity.
Parameters – A parameter, p, is a model input that provides numerical values to dynamically model system states and subsystem functions. Let P be the set of all model parameters, where p ∈ P. Example parameters include orbital parameters, ground station locations, and the final time Tf.
States – A system state is a model variable, and is defined as the information at some initial time that,
combined with the input (parameters and the schedule) for all future time, uniquely determines the output for
all future time [21]. Let X = [x1, ..., xk, ..., xm]T be the vector of all the system state variables, where there
are m variables. Example states include on-board resources such as energy and payload data. Opportunities
for mission operations such as payload operation and ground station availability are also system states.
An opportunity is modeled as binary, o ∈ {0, 1}, where a value of one indicates an opportunity and zero
indicates no opportunity.
Subsystems – A subsystem, s, performs functions on states. Let S be the set of all subsystems. A single function operating on state k is denoted fs,j,k ∈ F, where j ∈ Js is the function index, Js is the set of all function indices for subsystem s, and F is the set of all functions.
Schedule – The schedule, U(t), is a series of time-dependent events that describes how and when the subsystem functions operate on the states. Events are scheduled when there are opportunities. For example, a data download event may occur when there is a line of sight between a ground station and the satellite. The
schedule is designed to achieve the mission objectives while satisfying the mission constraints. The schedule
may be an output (e.g. when a solver is used to find an optimal schedule) or an input (e.g. when simulating
a given schedule to test performance).
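As a concrete sketch, the four framework elements can be collected into a small data structure. All names and numbers below are illustrative stand-ins of our own, not values from the report's case study:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Event = Tuple[float, float, str]   # (start time, end time, subsystem name)

@dataclass
class MissionModel:
    P: Dict[str, float]                       # parameters, e.g. {"Tf": ...}
    X: Dict[str, float]                       # states, e.g. energy and data
    # Subsystem functions collapsed to per-state rates while an event runs.
    subsystems: Dict[str, Dict[str, float]]
    U: List[Event] = field(default_factory=list)   # the schedule

    def active(self, t: float) -> List[str]:
        """Subsystems with a scheduled event covering time t."""
        return [s for (t0, t1, s) in self.U if t0 <= t < t1]

# Example: a comm subsystem that drains energy and downloads data during
# a single scheduled ground-station pass.
m = MissionModel(
    P={"Tf": 600.0},
    X={"energy": 100.0, "data": 50.0},
    subsystems={"comm": {"energy": -0.5, "data": -1.0}},
    U=[(100.0, 160.0, "comm")],
)
```

As in the framework, U may either be supplied as an input (to simulate a given plan) or filled in by a solver as an output.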
The model is formulated as a conventional optimization problem in Eqs. 8.1-8.4. The mission objective, represented in Eq. 8.1, maximizes the total transfer of a mission-specific system state, x∗, a component of X, over the planning horizon. The decisions in the optimization problem are when and how the events occur, which are captured in the schedule, U(t), an output of the optimization problem as formulated here. The constraints in the formulation include state dynamics (Eq. 8.2), bounds on state values (Eq. 8.3), and minimum changes in state over specified intervals (Eq. 8.4):
s.t.
X(t + ∆t) = N(X, P, t) + Σs∈S Σj∈Js Fs,j(X, U, Ps,j, t), 0 ≤ t ≤ Tf (8.2)
States evolve over time due to nominal dynamics and subsystem functions (see Eq. 8.2). Nominal
dynamics are independent of subsystem functions. The vector of nominal dynamics equations is defined in
Eq. 8.5, where each element k represents the nominal dynamics of state xk. Orbital motion and battery self-discharge are example nominal dynamics of the state variables position and on-board energy, respectively.
The vector of subsystem functions that operates on the state vector is expressed in Eq. 8.6. The inputs to
each function fs,j,k include the states, parameters, schedule, and time. Note that the vector in Eq. 8.6 contains zero entries for states on which a given subsystem and function do not operate.
Fs,j (X, U, Ps,j , t) = [fs,j,1 (X, U, Ps,j , t), ..., fs,j,k (X, U, Ps,j , t), ...]T ∀s ∈ S, j ∈ Js (8.6)
The nominal and functional dynamics in Eqs. 8.5 and 8.6 may each be described by any type of function; for example, they may be analytical or extracted from a simulation system.
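Assuming time is discretized with step ∆t, the update in Eq. 8.2 can be sketched as follows. The function signatures, and the example nominal dynamics (battery self-discharge) and subsystem function (a data download), are our own illustrations; the report leaves N and Fs,j abstract:

```python
def step(X, P, t, nominal, functions, U):
    """One application of Eq. 8.2 over the state dictionary X: the nominal
    update N(X, P, t) plus the summed subsystem function contributions."""
    X_next = dict(nominal(X, P, t))
    for f in functions:                     # sum over subsystems/functions
        for k, delta in f(X, U, P, t).items():
            X_next[k] += delta
    return X_next

def battery_self_discharge(X, P, t):
    """Nominal dynamics: energy leaks independently of the schedule."""
    return {"energy": X["energy"] * (1.0 - P["leak"]), "data": X["data"]}

def downlink(X, U, P, t):
    """Subsystem function: drain stored data during a scheduled contact."""
    in_contact = any(t0 <= t < t1 for (t0, t1) in U)
    return {"data": -P["rate"]} if in_contact else {}

X = {"energy": 100.0, "data": 50.0}
P = {"leak": 0.001, "rate": 2.0}
U = [(5.0, 15.0)]                           # one contact window
for t in range(10):
    X = step(X, P, float(t), battery_self_discharge, [downlink], U)
```

Because `nominal` and each `f` are passed in as callables, either could equally well wrap a call into a higher-fidelity simulation, matching the flexibility noted above.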
The state vector, X, is constrained by lower and upper bounds, {Xmin , Xmax } ∈ P , as in Eq. 8.3.
Example bounds include maximum and minimum battery capacity and maximum data storage capacity.
Operational mission requirements are represented in Eq. 8.4 by enforcing a minimum change in system
state over a specific time period. For example, there may be a mission requirement that a minimum amount
of state (such as energy) must be acquired or consumed during a certain period of time. Each interval i ∈ Ik ,
where Ik is the set of intervals spanning the full planning horizon for state xk , has a start time, 0 ≤ ti ≤ Tf ,
where the end of interval i corresponds to the start of interval i + 1. Eq. 8.4 enforces a minimum change
of state xk during every interval i ∈ Ik , represented as Θk,i . The change in state during interval i is its
integrated time rate of change from ti to ti+1 . For states without requirements, Θk,i will be zero ∀i ∈ Ik .
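The two constraint families can be checked on a sampled state trajectory as in the sketch below, where `trajectory[t]` is the state dictionary at step t. All names and thresholds are hypothetical:

```python
def bounds_ok(trajectory, Xmin, Xmax):
    """Eq. 8.3: every sampled state stays inside [Xmin, Xmax]."""
    return all(Xmin[k] <= x[k] <= Xmax[k] for x in trajectory for k in x)

def interval_requirements_ok(trajectory, k, intervals, theta):
    """Eq. 8.4: state k must change by at least theta[i] (in magnitude)
    over each interval (t_i, t_{i+1}) in the planning horizon."""
    for i, (t0, t1) in enumerate(intervals):
        if abs(trajectory[t1][k] - trajectory[t0][k]) < theta[i]:
            return False
    return True

# Example: energy must change by at least 2.0 units in each interval.
traj = [{"energy": 100.0}, {"energy": 95.0}, {"energy": 99.0}]
ok_bounds = bounds_ok(traj, {"energy": 0.0}, {"energy": 120.0})
ok_req = interval_requirements_ok(traj, "energy", [(0, 1), (1, 2)], [2.0, 2.0])
```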
Another perspective for describing spacecraft operations is to consider subsystem functions individually.
In particular, consider the analytical relationship between inputs and outputs specific to subsystem s and
function j, Zs,j = gs,j (Ys,j , U, P, t), where the vector of inputs is Ys,j and the vector of outputs is Zs,j ,
which are both comprised of components of X. The function gs,j is the combination of fs,j,k ∀k ∈ K, i.e.,
it models the impact of subsystem s and function j on all state inputs and outputs. These relationships for a
single subsystem are shown in Figure 12.
We represent the model framework using a conventional control system block diagram to demonstrate the
interaction of the various model elements in Figure 13. The set P of parameters is provided to the input
block, which identifies opportunities for subsystem functions, O, and interprets the mission requirements,
R, as control inputs. The error signal is expressed as E = R − M, where M is the vector of estimated state values, measured by on-board or ground sensors. E, P, and R are provided to the scheduler, which generates
the operational schedule, U . Note that U is an output of the controller and an input to the dynamic system.
The states evolve according to both the nominal dynamics and the subsystem functions as prescribed by U, where updated states (after time ∆t) are denoted X(t + ∆t). Unmodeled realistic disturbances, D, may be injected into the system and modify the state. Mission performance is evaluated by measuring the states, verifying that the mission requirements are satisfied, and comparing realized objectives to their expected values. Feedback control occurs when the scheduler updates U according to mission performance, i.e., uses E in future scheduling decisions.
Figure 13: Elements and dynamics of the system model represented with a conventional feedback control
loop diagram. The non-italicized labels are the conventional elements of a control feedback loop. The
italicized labels are the elements of the modeling framework.
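The feedback loop in Figure 13 can be caricatured in a few lines. The greedy re-planner, the fixed per-pass download, and all numbers below are illustrative assumptions, not the scheduler actually developed in this work:

```python
def scheduler(E, opportunities, U):
    """If the requirement is unmet (E > 0), schedule the next unused pass."""
    if E > 0:
        for window in opportunities:
            if window not in U:
                U.append(window)
                break
    return U

R = 30.0                                    # requirement: 30 units downloaded
M = 0.0                                     # measured amount downloaded so far
U = []                                      # the schedule, initially empty
opportunities = [(0, 10), (20, 30), (40, 50)]   # contact windows
while R - M > 0 and len(U) < len(opportunities):
    U = scheduler(R - M, opportunities, U)  # scheduler acts on E = R - M
    M += 10.0                               # assume each pass downloads 10
```

The essential point is structural: the scheduler consumes the error E = R − M rather than the raw plan, so disturbances that change M automatically propagate into future scheduling decisions.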
Thus far, we have developed a sophisticated model for space communication systems and an optimization algorithm for single-satellite, multiple-ground-station scheduling. Our next steps will include the following:
[2] A. A. Ali, A. Goel, A. J. Ridley, and D. S. Bernstein. Retrospective-Cost-Based Adaptive Input and State Estimation for the Ionosphere-Thermosphere. J. Aerospace Information Systems, 2015. Available online.
[3] J. Anderson, T. Hoar, K. Raeder, H. Liu, N. Collins, R. Torn, and A. Arellano. The data assimilation
research testbed. Bul. Amer. Met. Soc., 90:1283–1296, 2009.
[4] J. L. Anderson. An ensemble adjustment Kalman filter for data assimilation. Monthly Weather Review, 129:2884–2903, 2001.
[6] J. L. Anderson. Exploring the need for localization in ensemble data assimilation using an hierarchical
ensemble filter. Physica D, 230:99–111, 2007.
[7] J. L. Anderson. Spatially and temporally varying adaptive covariance inflation for ensemble filters.
Tellus, 61:72–83, 2009.
[9] D. Angeli. A Lyapunov approach to incremental stability properties. IEEE Trans. Autom. Control,
47:410–421, 2002.
[11] H. Bahcivan, M. C. Kelley, and J. W. Cutler. Radar and rocket comparison of UHF radar scattering from auroral electrojet irregularities: Implications for a nanosatellite radar. J. Geophys. Res., 114:A06309, 2009.
[12] R. Bauske and G. W. Prölss. Modeling the ionospheric response to traveling atmospheric distur-
bances. J. Geophys. Res., 102:14555, 1997.
[13] H. Bekerat, R. Schunk, L. Scherliess, and A. Ridley. Comparison of satellite ion drift velocities with AMIE-derived convection patterns. J. Atmos. Sol-Terr. Phys., 67:1463–1479, 2005.
[14] S. Bhattacharyya. Observer design for linear systems with unknown inputs. IEEE Trans. Autom.
Control, 23:483–484, 1978.
[15] P. Bournes and D. Williamson. (U) CubeSat experiments (QbX). Presented at the 2009 CubeSat Developer's Workshop, April 2009.
[16] G. Burgers, P. J. van Leeuwen, and G. Evensen. Analysis scheme in the ensemble Kalman filter. Monthly Weather Review, 126:1719–1724, 1998.
[18] A. G. Burrell, A. Goel, A. J. Ridley, and D. S. Bernstein. Correction of the photoelectron heating efficiency within the global ionosphere-thermosphere model using retrospective cost model refinement. Journal of Atmospheric and Solar-Terrestrial Physics, 124:30–38, 2015.
[19] J. Castaing, A. Cohn, and J. Cutler. Scheduling downloads for multi-satellite, multi-ground station
missions. In Proc. 28th Annual AIAA/USU Conference on Small Satellites, Logan, UT, August 2014.
SSC14-VIII-4.
[20] P. C. Chamberlin, T. N. Woods, and F. G. Eparvier. Flare Irradiance Spectral Model (FISM): Daily component algorithms and results. Space Weather, 5:S07005, 2007. doi:10.1029/2007SW000316.
[21] R. Chen, S. AhmadBeygi, D. Beil, A. Cohn, and A. Sinha. Solving truckload procurement auctions
over an exponential number of bundles. Transportation Science, 43(4):493–510, November 2009.
[22] James Cutler, Aaron Ridley, and Andrew Nicholas. CubeSat Investigating Atmospheric Density Response to Extreme Driving (CADRE). In Proceedings of the 25th Annual Small Satellite Conference, Logan, UT, August 2011.
[23] James W. Cutler, John C. Springmann, Sara Spangelo, and Hasan Bahcivan. Initial flight assessment of the Radio Aurora Explorer. In Proceedings of the 25th Annual Small Satellite Conference, Logan, Utah, August 2011.
[24] A. M. D’Amato, A. A. Ali, A. Ridley, and D. S. Bernstein. Retrospective cost optimization for
adaptive state estimation, input estimation, and model refinement. In Procedia Computer Science,
Vol. 18, Proceedings of the ICCS, pages 1919–1928, Barcelona, Spain, June 2013.
[29] W. Deng, T.L. Killeen, A.G. Burns, and R.G. Roble. The flywheel effect: Ionospheric currents after
a geomagnetic storm. Geophys. Res. Lett., 18:1845, 1991.
[30] Y. Deng, A. D. Richmond, A. J. Ridley, and H.-L. Liu. Assessment of the non-hydrostatic effect on the upper atmosphere using a general circulation model (GCM). Geophys. Res. Lett., 35:L01104, 2008. doi:10.1029/2007GL032182.
[33] G. Evensen. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res., 99:10,143–10,162, 1994.
[36] T. Floquet and J. P. Barbot. State and unknown input estimation for linear discrete-time systems. Automatica, 44:1883–1889, 2006.
[37] J.C. Foster, F.-P. St. Maurice, and V.J. Abreu. Joule heating at high latitudes. J. Geophys. Res.,
88:4885, 1983.
[38] R.A. Frahm, J.D. Winningham, J. R. Sharber, R. Link, G. Crowley, E. E. Gaines, D. L. Chenette, B. J.
Anderson, and T. A. Potemra. The diffuse aurora: A significant source of ionization in the middle
atmosphere. J. Geophys. Res., 102:28203, 1997.
[40] T.J. Fuller-Rowell and D.S. Evans. Height-integrated Pedersen and Hall conductivity patterns inferred
from TIROS–NOAA satellite data. J. Geophys. Res., 92:7606, 1987.
[41] R. R. Garcia and S. Solomon. The effect of breaking gravity waves on the dynamics and chemical composition of the mesosphere and lower thermosphere. Journal of Geophysical Research: Atmospheres (1984–2012), 90(D2):3850–3868, 1985.
[42] D.A. Hardy, M.S. Gussenhoven, R. Raistrick, and W.J. McNeil. Statistical and functional representation of the pattern of auroral energy flux, number flux, and conductivity. J. Geophys. Res., 92:12,275, 1987.
[43] J.P. Heppner and N.C. Maynard. Empirical high-latitude electric field models. J. Geophys. Res.,
92:4467, 1987.
[44] F. A. Herrero, H. H. Jones, and J. G. Lee. The Gated Electrostatic Mass Spectrometer (GEMS): Definition and preliminary results. Journal of the American Society for Mass Spectrometry, 19(10):1384–1394, 2008.
[45] H.E. Hinteregger, K. Fukui, and B.R. Gibson. Observational, reference and model data on solar EUV
from measurements on AE-E. Geophys. Res. Lett., 8:1147, 1981.
[47] J. B. Hoagg and D. S. Bernstein. Retrospective cost model reference adaptive control for
nonminimum-phase systems. AIAA J. Guid. Contr. Dyn., 35:1767–1786, 2012.
[48] J. B. Hoagg, M. A. Santillo, and D. S. Bernstein. Discrete-time Adaptive Command Following and
Disturbance Rejection with Unknown Exogenous Dynamics. IEEE Trans. Autom. Contr., 53:912–
928, 2008.
[49] M. Hou and P. Muller. Design of observers for linear systems with unknown inputs. IEEE Trans.
Autom. Control, 37:871–875, 1992.
[50] P. L. Houtekamer and Herschel L. Mitchell. Data assimilation using an ensemble Kalman filter
technique. Monthly Weather Review, 126:796–811, 1998.
[52] E. Kalnay, L. Hong, T. Miyoshi, S.-C. Yang, and J. Ballabrera-Poy. 4-D-Var or ensemble Kalman filter? Tellus, 59A:758–773, 2003.
[53] J. Kappenman. A perfect storm of planetary proportions. IEEE Spectrum, pages 26–31, February
2012.
[54] M.C. Kelley. The Earth’s Ionosphere. Academic Press, Inc., San Diego, 1989.
[55] E. A. Kihn, R. Redmon, A. J. Ridley, and M. R. Hairston. A statistical comparison of the AMIE
derived and DMSP-SSIES observed high-latitude ionospheric electric field. J. of Geophys. Res.,
111:8303, 2006.
[56] E.A. Kihn and A.J. Ridley. A statistical analysis of the AMIE auroral specification. J. Geophys. Res., 110:A07305, 2005. doi:10.1029/2004JA010775.
[57] T.L. Killeen and R.G. Roble. An analysis of the high-latitude thermospheric wind pattern calculated
by a thermospheric general circulation model 1. Momentum forcing. J. Geophys. Res., 89:7509,
1984.
[58] V. W. J. H. Kirchhoff and B. R. Clemesha. Eddy diffusion coefficients in the lower thermosphere.
Journal of Geophysical Research: Space Physics (1978-2012), 88(A7):5765–5768, 1983.
[59] S. Kirtikar, H. Palanthandalam-Madapusi, E. Zattoni, and D. S. Bernstein. l-delay input and initial-
state reconstruction for discrete-time linear systems. Circ. Sys. Sig. Processing, 30:233–262, 2011.
[60] P. K. Kitanidis. Unbiased minimum-variance linear state estimation. Automatica, 23:775–778, 1987.
[61] D. Y. Lee, J. W. Cutler, J. Mancewicz, and A. J. Ridley. Maximizing photovoltaic power generation
of a space-dart configured satellite. Acta Astronautica, 111:283–299, 2015.
[62] B. Lemay, J. Castaing, R. Zidek, A. Cohn, and J. Cutler. An Optimization-Based Approach for Small
Satellite Download Scheduling, with Real-World Applications. submitted.
[96] M. E. Valcher. State observers for discrete-time linear systems with unknown inputs. IEEE Trans.
Autom. Control, 44:397–401, 1999.
[97] D.R. Weimer. A flexible, IMF dependent model of high-latitude electric potential having "space weather" applications. Geophys. Res. Lett., 23:2549, 1996. doi:10.1029/2000JA000604.
[99] T.N. Woods et al. XUV Photometer System (XPS): Improved solar irradiance algorithm using CHIANTI spectral models. Solar Phys., 249:235–267, 2008.
[100] T.N. Woods and G.J. Rottman. Solar ultraviolet variability over time periods of aeronomic interest.
Atmospheres in the Solar System: Comparative Aeronomy, Geophys. Monogr. Ser., 130:221, 2002.
[101] Y. Xiong and M. Saif. Unknown disturbance inputs estimation based on a state functional observer
design. Automatica, 39:1389–1398, 2003.
1. Report Type
Final Report
Primary Contact E-mail
[email protected]
Primary Contact Phone Number
7347643719
Organization / Institution name
University of Michigan
Grant/Contract Title
FA9550-12-1-0401
Principal Investigator Name
Dennis S. Bernstein
Program Manager
Frederica Darema
Reporting Period Start Date
09/01/2012
Reporting Period End Date
10/14/2015
Abstract
This project focused on DDDAS-motivated developments in support of space weather monitoring and
prediction. The project involved four interrelated tasks relating to physics-driven adaptive modeling, adaptive data assimilation with input reconstruction, event-based sensor reconfiguration, and optimization of
scheduling. For data assimilation, the emphasis has been on model refinement. The problem of estimating
the eddy diffusion coefficient using total electron content measurements has led to new techniques for
determining the essential modeling details needed by the retrospective cost model refinement technique.
For spacecraft design, multidisciplinary optimization design techniques were applied to the design of small
satellites accounting for multiple vehicle subsystems. For download scheduling, optimization techniques
were used to account for multiple spacecraft and ground stations.
Distribution Statement
AFD-070820-035DDDAS.pdf
DDDASFinalReportVSept282015.pdf
Archival Publications (published) during reporting period:
A. A. Ali, A. Goel, A. J. Ridley, and D. S. Bernstein. Retrospective-Cost-Based Adaptive Input and State Estimation for the Ionosphere-Thermosphere. J. Aerospace Information Systems, 2015. Available online.
A. G. Burrell, A. Goel, A. J. Ridley, and D. S. Bernstein. Correction of the photoelectron heating efficiency within the global ionosphere-thermosphere model using retrospective cost model refinement. Journal of Atmospheric and Solar-Terrestrial Physics, 124:30–38, 2015.
J. Castaing, A. Cohn, and J. Cutler. Scheduling downloads for multi-satellite, multi-ground station
missions. In Proc. 28th Annual AIAA/USU Conference on Small Satellites, Logan, UT, August 2014.
SSC14-VIII-4.
James Cutler, Aaron Ridley, and Andrew Nicholas. CubeSat Investigating Atmospheric Density Response to Extreme Driving (CADRE). In Proceedings of the 25th Annual Small Satellite Conference, Logan, UT, August 2011.
James W. Cutler, John C. Springmann, Sara Spangelo, and Hasan Bahcivan. Initial flight assessment of the Radio Aurora Explorer. In Proceedings of the 25th Annual Small Satellite Conference, Logan, Utah, August 2011.
S. Spangelo, J. Cutler, K. Gilson, and A. Cohn. Optimization-Based Scheduling for the Single-
Satellite, Multi-Ground Station Communication Problem. Computers and Operations Research,
57:1–16, 2015.