Machine-Learning Space Applications On SmallSat Platforms With TensorFlow
ABSTRACT
Due to their attractive benefits, which include affordability, comparatively low development costs, shorter
development cycles, and availability of launch opportunities, SmallSats have secured a growing commercial and
educational interest for space development. However, despite these advantages, SmallSats, and especially CubeSats,
suffer from high failure rates and (with few exceptions to date) have had low impact in providing entirely novel,
market-redefining capabilities. To enable these more complex science and defense opportunities in the future, small-
spacecraft computing capabilities must be flexible, robust, and intelligent. To provide more intelligent computing, we
propose employing machine intelligence on space development platforms, which can contribute to more efficient
communications, improve spacecraft reliability, and assist in coordination and management of single or multiple
spacecraft autonomously. Using TensorFlow, a popular, open-source, machine-learning framework developed by
Google, modern SmallSat computers can run TensorFlow graphs (principal component of TensorFlow applications)
with both TensorFlow and TensorFlow Lite. The research showcased in this paper provides a flight-demonstration
example, using terrestrial-scene image products collected in flight by our STP-H5/CSP system, currently deployed on
the International Space Station, of various Convolutional Neural Networks (CNNs) to identify and characterize newly
captured images. This paper compares CNN architectures including MobileNetV1, MobileNetV2, Inception-
ResNetV2, and NASNet Mobile.
* https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/
Related Research
Despite the considerable challenge posed by the
computational requirements of ML, there are several
related works that explore the state-of-the-art networks
for embedded systems and satellites. In [23], Schartel
trained the SqueezeNet model on a terrestrial system and
planned to transfer the model to an embedded system;
however, the entire design was not fully implemented. In
[24], researchers at the University of New Mexico
partnered with Stinger Ghaffarian Technologies and Air
Force Research Laboratory Space Vehicles Directorate
to demonstrate image classification on the Nvidia TX1.
In their demonstration, a desktop GPU was used to train the
model, and inference was performed on the TX1 with the
CUDA Deep Neural Network (cuDNN) library and
TensorRT. Lastly, in [25], SRC Inc. developed its own
deep CNN framework for use on a Xilinx Artix-7
FPGA platform. With their design, they studied image
classification and compared their results against the IBM
TrueNorth NS1e development board, a neuromorphic
computer with machine-learning capabilities.
III. APPROACH
In comparison to related research, our approach focuses on developing a machine-learning solution that can run on existing flight hardware with TensorFlow. For our testbed and experiment, we focus on the Xilinx Zynq-7020, which is the featured technology of the CSPv1 flight computer described in [22]. To test the computational capability of the Xilinx Zynq-7020 for ML inference, we trained CNNs for image classification and benchmarked the accuracy, execution time, and runtime memory usage of four target CNN architectures on the Digilent ZedBoard development system.

Figure 2: Example of Images in Each Class

Transfer learning is the process of using a trained ML model to bootstrap a model for a related task. In the case of a CNN, transfer learning means freezing previously trained weights for the convolution layers and only learning the weights for the classification layers [27]. Despite having thousands of images in the STP-H5/CSP collection, this data is considered limited for training deep CNNs. Thus, training a CNN such as MobileNet or Inception from scratch with only this limited dataset was infeasible.
† http://spacenews.com/artificial-intelligence-arms-race-accelerating-in-space/
‡ https://www.tensorflow.org/hub/
Figure 3: CNN Accuracy on STP-H5/CSP Images (top-1 and top-2 accuracy, 80-92%, for NASNet Mobile, Inception-ResNetV2, MobileNetV2, and MobileNetV1)

Each CNN performed adequately on the dataset, achieving over 90% top-1 accuracy and near-perfect top-2 accuracy, as shown in Figure 3. MobileNetV1 was the most accurate.

For our on-board performance analysis, we focused on MobileNetV1 because it was the most accurate CNN on the STP-H5/CSP test dataset. Using TensorFlow Lite, we performed inference on all MobileNetV1 variants. We also measured the execution time required to classify an image and the amount of memory used during classification. All tests were conducted on the Digilent ZedBoard, which is regularly used as a facsimile development kit for the CSPv1 flight computer.
V. CONCLUSIONS
SmallSats in general and CubeSats in particular face arduous challenges in achieving more significant science and defense goals. To meet new mission objectives, on-board data analysis is rapidly becoming the key focus area for SmallSat development. AI systems can enable more efficient use of some system resources and perform crucial processing tasks for autonomous operation on a spacecraft. However, modern ML frameworks are typically executed on resource-intensive GPUs, making their deployment on these space systems very limited.

Using a dataset of space images collected by our STP-H5/CSP mission on the ISS, this paper demonstrates that we can achieve reasonable performance with modern ML models on a low-memory, low-power, space-grade, embedded platform. Our results show it would be feasible for the TensorFlow Lite framework to be used for deploying deep-learning models in future space missions on similar space-computing platforms. Additionally, leveraging CNNs pre-trained on ImageNet is shown to be effective for image-classification tasks on terrestrial-scene images.

Future Work
This research establishes the foundation for additional extensions into AI-capable small spacecraft. The immediate next step is to upload the inferred CNNs directly onto the STP-H5/CSP system, thereby enabling us to filter undesirable images (i.e., images classified as white, black, or distorted) in real time. Thus, AI can prevent the system from wasting bandwidth by sending insignificant images. To extend the classification, more complex image-processing tasks will be studied, such as object detection and semantic segmentation. Since our NSF SHREC Center is regularly proposing new missions and apps, this research can be used for more complex science classifications with smaller GSD (Ground Sample Distance) technologies to be featured on future mission proposals. Finally, future extensions could include adding accelerated TensorFlow Lite inference operations using FPGAs (e.g., in CSP) and incorporating other hardware accelerators within the design.

Acknowledgments
This research was funded by industry and government members of the NSF SHREC Center and the National Science Foundation (NSF) through its IUCRC Program under Grant No. CNS-1738783. The authors would also like to thank additional CSP team contributors including

References
1. Board, S. S., and National Academies of Sciences, Engineering, and Medicine, Achieving Science with CubeSats: Thinking Inside the Box, Washington, DC: National Academies Press, 2016.
2. National Academies of Sciences, Engineering, and Medicine, Thriving on Our Changing Planet: A Decadal Strategy for Earth Observation from Space, Washington, DC: National Academies Press, Jan 2018.
3. Cardillo, R., "Small Satellite 2017 Keynote Address," 31st Annual AIAA/USU Conference on Small Satellites, Logan, UT, Aug 7, 2017. https://www.nga.mil/MediaRoom/SpeechesRemarks/Pages/Small-Satellites---Big-Data.aspx
4. Sanchez, M., "AFSPC Long-Term Science and Technology Challenges," Space and Cyberspace Innovation Summit, Aug 23-24, 2016. http://www.defenseinnovationmarketplace.mil/resources/Innovation_Summit_Phase1_Intro.pdf
5. Copeland, M., "What's the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?," NVIDIA Blog, Jul 29, 2016. https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/
6. Girimonte, D., and D. Izzo, "Artificial Intelligence for Space Applications," in Intelligent Computing Everywhere, Springer, London, 2007, pp. 235-253.
7. Leger, C., Trebi-Ollenu, A., Wright, J., Maxwell, S., Bonitz, R., Biesiadecki, J., Hartman, F., Cooper, B., Baumgartner, E., and M. Maimone, "Mars exploration rover surface operations: Driving Spirit at Gusev crater," 2005 IEEE Conference on Systems, Man, and Cybernetics, Oct 2005, pp. 1815-1822.
8. Estlin, T., Bornstein, B., Gaines, D., Anderson, R. C., Thompson, D., Burl, M., Castano, R., and M. Judd, "AEGIS automated targeting for the MER Opportunity rover," ACM Transactions on Intelligent Systems and Technology, Vol. 3, No. 3, May 2012, pp. 1-19.
9. Chien, S., Bue, B., Castillo-Rogez, J., Gharibian, D., Knight, R., Schaffer, S., Thompson, D. R., and K. L. Wagstaff, "Agile science: Using onboard autonomy for primitive bodies and deep space exploration," Proceedings of the International Symposium on Artificial Intelligence, Robotics, and Automation for Space, Montreal, Canada, Jun 2014.
10. Cesta, A., Cortellessa, G., Denis, M., Donati, A., Fratini, S., Oddi, A., Policella, N., Rabenau, E., and