Markov decision process with deep reinforcement learning for robotics data offloading in cloud network
Roobaea Alroobaea, Ahmed Binmahfoudh, Sabah M. Alzahrani, Anas Althobaiti
Abstract

Robots vary widely in computing capacity, and executing computationally intensive programs on them can be difficult because of limited on-board processing, memory, and energy. Cloud computing, in contrast, offers on-demand computation, so integrating robotics with cloud computing can help robots overcome these limitations. The key to successful job offloading is an operational policy that does not underutilize the robot’s native processing capacity and that makes decisions based on important cost criteria such as delay and CPU resources. In the proposed approach, applications are offloaded from robots based on a Markov decision process, which identifies resources in the cloud network probabilistically. A deep reinforcement learning-based deep Q-network (DQN) technique then selects the resources in the cloud network, and the data are offloaded to cloud storage. The state space is built on the observation that the size of the input data strongly influences an application’s processing time. The proposed technique is formulated as a sequential decision problem over a discrete state space, in which a different action is taken at each successive step and the resulting outcome is used to train the DQN to maximize the reward. A navigation testbed was built and deployed to validate the proposed method. The method minimizes the cost of communication between clouds, reduces application latency, and raises the accuracy level to 85%, which is higher than that of existing methods.
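To make the decision flow described above concrete, the sketch below shows one way a DQN agent might choose between local execution and cloud offloading, with a reward built from the delay and CPU criteria mentioned in the abstract. It is an illustrative reconstruction under assumptions only: the state features, network architecture, cost weights, and all class and function names are hypothetical and not the authors' implementation.

```python
# Illustrative sketch only: a minimal DQN agent for a binary offloading decision
# (action 0 = execute locally, action 1 = offload to cloud). State features,
# architecture, and cost weights are assumptions, not the paper's design.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

class QNetwork(nn.Module):
    """Small MLP mapping a state (e.g., input-data size, queuing delay,
    available CPU) to Q-values for the two offloading actions."""
    def __init__(self, state_dim=3, n_actions=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x):
        return self.net(x)

class DQNAgent:
    def __init__(self, state_dim=3, n_actions=2, gamma=0.99, lr=1e-3, eps=0.1):
        self.q = QNetwork(state_dim, n_actions)
        self.target_q = QNetwork(state_dim, n_actions)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=10_000)
        self.gamma, self.eps, self.n_actions = gamma, eps, n_actions

    def act(self, state):
        # Epsilon-greedy choice between local execution and offloading.
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            return int(self.q(torch.tensor(state, dtype=torch.float32)).argmax())

    def remember(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def update(self, batch_size=32):
        # One gradient step on a random minibatch from the replay buffer.
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = map(lambda x: torch.tensor(x, dtype=torch.float32),
                             zip(*batch))
        a = a.long()
        q_sa = self.q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.target_q(s2).max(1).values * (1 - d)
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

def reward(delay_s, cpu_load, w_delay=1.0, w_cpu=0.5):
    # Reward is the negative weighted cost of latency and CPU usage,
    # mirroring the delay/CPU cost criteria named in the abstract.
    return -(w_delay * delay_s + w_cpu * cpu_load)
```

In a fuller implementation the target network would be synchronized with the online network at intervals and the exploration rate annealed over training; the abstract does not specify these details.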

© 2022 SPIE and IS&T 1017-9909/2022/$28.00
Roobaea Alroobaea, Ahmed Binmahfoudh, Sabah M. Alzahrani, and Anas Althobaiti "Markov decision process with deep reinforcement learning for robotics data offloading in cloud network," Journal of Electronic Imaging 31(6), 061809 (27 May 2022). https://doi.org/10.1117/1.JEI.31.6.061809
Received: 8 March 2022; Accepted: 5 May 2022; Published: 27 May 2022
CITATIONS
Cited by 3 scholarly publications.
KEYWORDS
Clouds
Robots
Robotics
Computing systems
Sensors
Neural networks
Robotic systems