
Chapter 16

Future Directions: IoT, Robotics, and AI based Applications


Abstract

The recent innovations of the information age, where data is considered the “new oil”, point
towards rapid, all-pervasive developments in Internet of Things (IoT), Robotics, and Artificial
Intelligence (AI) based applications. Data science has evolved over the last few decades as a
promising field of vast opportunities and challenges, which encompasses all endeavours of
mankind. As raw data evolves into information and intelligence through several data processors,
its value is multiplied manyfold. In this chapter, we primarily focus on future directions in the
disruptive technologies such as IoT and its importance in building smarter systems for a brave,
new smarter world, where Robotics and Artificial Intelligence (AI) based applications play a
pivotal role in every human activity. The term Internet of Things (IoT) was suggested by Kevin
Ashton of MIT in 1999; the IoT is, in general, any network of smart, connected devices which
can be controlled from anywhere across the globe through the Internet. It may be emphasized
that, notwithstanding its potential advantages as an enabler of global, remote connectivity for
home appliances smart enough to connect to the internet, its vulnerability to cyber
attacks cannot be overlooked by the intelligent designer, developer, or even the end user.
Robotics and its principles have been known, from technology evangelists to end users, since the
days of the science-fiction play R.U.R. (Rossum’s Universal Robots) by the Czech author Karel
Čapek (whose brother Josef Čapek actually suggested the term robot) in the 1920s. On the other
hand, principles of AI were first suggested by John McCarthy, Marvin Minsky, Allen Newell, and
Herbert A. Simon in 1955; however, credit for the term is rightfully attributed to McCarthy.
A count of the devices, appliances, and other smart things expected to be connected to the internet
by the year 2025 gives an astounding figure of 34.2 billion worldwide (a projection by IoT
Analytics), as against 17.8 billion worldwide in 2018, inclusive of IoT devices. These figures are
credible, and the actual numbers may well exceed the projections by 2025 unless some other
disruptive technology evolves. Robotics has matured
enough with developments in interdisciplinary technologies like Mechatronics and AI so that
fully automated vehicular systems and other means of transport have become the order of the
day. With the current rate of innovation in Mechatronics and AI, it is seemingly unpredictable
in which direction products and processes will evolve. However, the evolution is arguably going
be more towards a level playing ground, where the tools and techniques are primarily AI based,
starting from the raw/semi-processed data pumped by the IoT devices, communicated through
the internet, and processed by the most advanced and inexpensive signal processors (both
passive and active). Artificial Intelligence has grown into a mature technology for machine
intelligence, and there are several more recent developments such as machine learning and deep
learning, and several flavours of Artificial Neural Networks (ANNs) such as Convolutional Neural
Networks (CNNs) and Recurrent Neural Networks (RNNs).

Keywords- Internet of Things (IoT), Data Science, Robotics, Artificial Intelligence (AI), Data
Analytics, Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), Recurrent
Neural Networks (RNN), Deep Learning, Machine Learning, Cyber Physical Systems, Industry
4.0, Cloud Robotics, Networked Robot, Industrial Internet.

1.0 Introduction

In the information age, data is considered to be the new oil. The advent of the Internet
and its associated technologies has given way to massive data proliferation. The Internet of
Things (IoT) can be loosely defined as a network of interrelated computing devices, mechanical
and electronic devices, objects, and animals/people that are provided with unique identifiers and
the ability to transfer data over a network without requiring human-to-human or
human-to-computer interaction. In the
Internet of Things, the things that are being connected to the internet can be segregated into three
major categories:

1. Things that collect information and send it (aggregators of information).

2. Things that receive information and then act on it, but do not retransmit
anything.

3. Things that do both (1) and (2): they aggregate and process information.

Each of the above categories of things has enormous advantages associated with it. For
example, the devices that collect and send information to the Internet include all sorts of
sensors: temperature sensors, motion sensors, moisture sensors, air quality sensors, and
light sensors. The sensors, along with the connection, enable users to collect
information automatically from the environment which, in turn, allows us to make intelligent,
calculated decisions. Just as our five basic senses (sight, hearing, smell, touch, and
taste) allow us to make sense of the world, sensors allow machines to sense the world.

The devices that receive and act on such information can be operated from a far off place,
and this is one of the major advantages of the Internet of Things (IoT).

The third category of devices, the ones that aggregate information and resend it as well as
receive information and act on it, is the true goal of the IoT. To cite an example, a soil moisture
sensor in an agricultural farm can sense the moisture in the soil and in turn decide when to switch
on the irrigation system to water the plants. Note that this is done in an intelligent fashion
without the intervention of the farmer. Obviously, the IoT enhances the efficiency of operation of
devices that are linked to the Internet. Note that data proliferation is one of the major outcomes of
the IoT and its associated technologies. The IoT results in large chunks of real-time data to be
processed either locally or at a distant data centre.
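The soil-moisture scenario above can be sketched as a simple threshold rule. The reading scale, the threshold value, and the function name below are illustrative assumptions, not part of any specific IoT platform:

```python
def irrigation_decision(moisture_pct, threshold_pct=30.0):
    """Decide whether to switch on irrigation.

    moisture_pct: latest soil-moisture reading (0-100), e.g. published
    over the network by an IoT sensor.
    threshold_pct: below this level the soil is considered too dry
    (an illustrative value; a real farm would calibrate per crop/soil).
    """
    return moisture_pct < threshold_pct

# A category-3 device senses, decides, and acts without the farmer.
readings = [55.0, 41.2, 28.7]          # periodic sensor samples
for r in readings:
    if irrigation_decision(r):
        print(f"moisture {r}% below threshold -> irrigation ON")
    else:
        print(f"moisture {r}% adequate -> irrigation OFF")
```

A real deployment would replace the hard-coded readings with messages from the sensor network, but the decision step stays this simple.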

The IoT is considered a disruptive technology because it has indeed
preempted several prevalent technologies. Further, several new applications have evolved and
proliferated with the IoT, including distributed computing, wireless sensor networks, and so on.
Principles of robotics evolved and the technologies matured over the past several decades.
Artificial Intelligence (AI) created another paradigm shift in modern-day computing. Robotics
and AI together have set forth several cutting-edge technologies and applications in the present
millennium. In computer science parlance, artificial intelligence, sometimes called machine
intelligence, is the intelligence exhibited by machines, in contrast to the natural intelligence of
humans and animals. Artificial Intelligence is the area of computer science that focuses on the
creation of intelligent machines that function and interact like humans.

In this chapter, we mainly focus on the future directions in IoT, Robotics, and AI based
applications.

1.1 The Impact of AI and Robotics in Medicine and Healthcare

Tractica (a market intelligence firm) predicts that healthcare robot shipments will
exceed 10,000 units per year by 2021. A similar study by Allied Market Research suggests that
the market for surgical robotics will double from US$3 billion in 2014 to US$6 billion by 2020.
Artificial Intelligence (AI) increasingly plays a role in the diagnosis of illnesses and the planning
of treatment programs. Expert systems that scan the rapidly expanding scientific literature are
helping clinicians reach more accurate diagnoses of patients in a shorter time. For example,
the new GP at Hand service in practice in London allows people to check symptoms through a
mobile application and have a video consultation with a doctor within two hours, and uses AI for
diagnosis. A Frost and Sullivan study predicts that the AI market for health care will increase
40% per year between 2014 and 2021, with revenues rising from US$633.8 million in 2014 to
US$6.662 billion by 2021. The da Vinci surgical robots, on the other hand, are excellent
machines that use AI algorithms at their core to enhance accuracy. Algorithms for making
high-resolution digital images and holograms are also based on Artificial Intelligence [1].

1.2 Advances in Artificial Intelligence Technology and Their Impact on the Workforce

Over the years, AI has matured from Symbolic AI to Embodied AI to Hybrid AI. As
CPUs, networks, and wireless communications devices become more and more powerful and
effective, AI and Robotics try to catch up [2]. Autonomous systems play a major role in
many applications across diverse domains such as aerospace, marine, air, field, road, and service
robotics. Autonomous systems assist users in daily chores and are ideal for performing dangerous,
dirty, and monotonous tasks. Yet enabling robotic systems to handle complex, real-world
scenarios autonomously over extended time periods (i.e., weeks, months, or years) poses many
challenges to the designer and user. These have been investigated by sub-disciplines of Artificial
Intelligence (AI) including navigation and mapping, perception, knowledge representation and
reasoning, planning, interaction, and learning. The techniques developed by these sub-disciplines,
when re-integrated within an autonomous system, can enable robots to operate effectively in
complex, long-term scenarios [3].
While most modern researchers argue that Artificial Intelligence (AI) provides
overwhelming and profound advancements in technology and beneficial tools used daily to
advance human life on earth, other researchers hold a contrary opinion, pointing to the rising
adverse ontological and existential consequences which the products of super-intelligent
technologies have for an alarming number of people in the 21st century. Scholars like Vardi,
Tegmark, and Greene have likened this scenario to a time-bomb waiting to go off at any time.
The recent endorsement of 23 AI principles by 1200 AI/Robotics researchers and over 2342 other
researchers from diverse disciplines, at a recently concluded Future of Life Institute (FLI)
conference, adds credence to worries about the adverse effects of AI. The study draws from a
combination of Marxian alienation and ontological theories, which basically hold that:
“rising advancements in AI technologies continue to alienate mankind from his existential
human nature” [4]. Machines have already automated millions of routine, working-class jobs in
manufacturing industry. And now, AI is learning to automate non-routine jobs in transportation
and logistics, legal writing, financial services, administrative support, and healthcare. People
have always worried that advancing technology would destroy jobs. Yet despite painful
adjustment periods during these changes, new jobs replaced old ones, and most workers found
new employment. Humans have never competed with machines that can outperform them in
almost anything. AI threatens to do this, and many economists worry that society will not be able
to adapt easily. What people are now realizing is that this formula that technology destroys jobs
and creates jobs, even if it is basically true, is too simplistic [5]. The question of whether the jobs
displaced by AI systems will be made up for by newly created jobs, once workers acquire the
new skill sets, is highly relevant. A study of computerization conducted in 2013 revealed that
about 47% of American workers held jobs observed to have a high risk of automation within one
or two decades. The questions which researchers like Vardi ask are: Will technology be
able to create about 100 million jobs if these jobs are automated by AI technologies? [5]. Will
technology create jobs commensurate to those it has phased out? Will these jobs be created soon
enough to meet the rising demands of those without jobs? What will be the fate of workers whose
skills fall short of the advancements in modern technology? Will such people ever be
able to catch up, or will they lose their existential place in society? So far, the answers to the
above questions are in the affirmative.
A recent American study revealed that employment is currently growing in high-
income cognitive jobs and low-income service jobs, such as elderly assistance and fast-food
service, which computers cannot automate yet. But technology is hollowing out the economy by
automating middle-skill, working-class jobs first. Since 2000, as millions of these middle-skill
jobs disappeared, displaced workers either left the labor force or accepted
service jobs that often pay $12 per hour, without benefits. Communications technology firms
now save money by hiring freelancers and independent contractors instead of permanent
workers. This has created the Gig Economy – a labor market characterized by short-term
contracts and flexible hours at the cost of unstable jobs with fewer benefits. Automation has
decoupled job creation from economic growth, allowing the economy to grow while
employment and incomes shrink, thus increasing inequality. It may also be noted that
technology creates a “winner-takes-all” environment, where second best can hardly survive. In
1990, Detroit’s three largest companies were valued at $65 billion with 1.2 million workers. In
2016, Silicon Valley’s three largest companies were valued at $1.5 trillion but with only 190,000
workers [5]. This shows the trend is towards larger communication companies, with fewer jobs.
A study of computerization conducted in 2013 discovered that 47% of American workers are
engaged in jobs considered to be at a high risk of automation in the next decade or two. If this
happens, technology must create approximately 100 million jobs to close the gaps this reality
will create in the labour market.
Yet, the hope placed on these AI technologies was that technological advances in
these areas would create more jobs for the majority of mankind, such that everybody would be
able to make a living as existential beings in their society. However, where fewer jobs are
created, leading to very high unemployment rates, the situation is professed to result in what
Vardi [5] refers to as ‘a state of violent uprising’. Considering the dynamic nature of labour
markets, schools must begin to emphasize teaching the right skills required for the future jobs
which innovations in AI will demand. Workers, on the other hand, will need to upgrade their
skills with accessible training for better job opportunities. However, Vardi opines, ‘the need to
adapt and train for new jobs will become more challenging as AI continues to automate a
greater variety of tasks.’

1.3 Artificial Intelligence Technologies and Human Intelligence

An Intelligent Agent (IA) is a system that senses its environment and takes pertinent
actions that maximize its chances of success. John McCarthy, the man who coined the term
‘Artificial Intelligence (AI)’ in 1956, defined the discipline as: ‘the science and engineering of
making intelligent machines’. In a different way, AI can be considered as that discipline in the
field of science that focuses on aiding artifacts or machines in addressing complex issues, thereby
providing solutions to human needs. The process involved is one of encoding human
intelligence into algorithms in ways amenable to computers. On 11 May 1997, an IBM computer
nicknamed ‘Deep Blue’ defeated Garry Kasparov, the world’s greatest Chess Player (Grand
Master), in a six-game chess match, proving that machine intelligence can become comparable
to that of humans. Today, computers have gone beyond being intelligent to becoming conscious,
super-intelligent machines believed to have the capacity to develop and sustain a mind of their
own, which could be detrimental to mankind. It is now an accepted fact that the advent of
super-intelligent technology and machines has given us automations and devices like autonomous
vehicles, pacemakers, and automated trading systems. All these were mere wishes some decades
ago. Yet with all these merits, Professor Stephen Hawking could not help expressing his mixed
feelings about the supposed virtues of Artificial Intelligence. He observed that: ‘The success in creating
AI would be the biggest event in human history. Unfortunately, it might also be the last, unless
we learn how to avoid the risks’ [6].
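The definition of an intelligent agent given above, a system that senses its environment and takes actions maximizing its chances of success, can be illustrated with a minimal sense-act loop. The thermostat-style environment, the action effects, and the utility function below are toy assumptions chosen only to make the idea concrete:

```python
def intelligent_agent(percept_stream, actions, utility):
    """Minimal intelligent-agent loop: for each percept, pick the
    action that maximizes utility in that state."""
    chosen = []
    for percept in percept_stream:
        best = max(actions, key=lambda a: utility(percept, a))
        chosen.append(best)
    return chosen

# Toy example: a thermostat-like agent (hypothetical utility model).
def utility(temp, action):
    target = 21.0
    effect = {"heat": +2.0, "cool": -2.0, "idle": 0.0}[action]
    # Closer to the target temperature is better (higher utility).
    return -abs((temp + effect) - target)

print(intelligent_agent([17.0, 21.0, 25.0], ["heat", "cool", "idle"], utility))
# -> ['heat', 'idle', 'cool']
```

Real agents replace the utility table with learned models, but the sense-decide-act structure is the same.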
One specific area in which AI can be dangerous to mankind could be exemplified in a
scenario where advanced AI machines are programmed to do specific tasks. Such programming
is often done in ways that are extremely difficult to alter even slightly (as is the case with an
ICBM-mechanized weapon designed to take out a target some 7000 miles away from its launch
point; say from Iran to Israel or from North Korea to Japan). Advanced AI machines in this
regard are known to see any external influence aimed at altering their initial objective as a threat
that must be met with decisive counter-actions. Consequently, such machines find new ways
of evading whatever obstacles or attempts anyone throws in their path, with a view to reaching
their original objectives. Thus, Professor Stephen Hawking observed that: ‘Artificial intelligence
machines could kill us because they are too clever. Such computers could become so competent
that they kill us by accident. The real risk with AI isn't malice but competence. A super intelligent
machine will be extremely good at accomplishing its goals, and if those goals aren't aligned with
ours, we're in trouble’.

It should be noted that super-intelligent AI technologies are feared most for one thing:
that they will at some point become capable of upgrading and perhaps reprogramming
themselves whenever the need arises. Having acquired intentionality, they will become
more aware and conscious of their environment, and of the people and things around them.
They will begin to think autonomously and make judgments and decisions on the best course
of action to follow with regard to carrying out assigned tasks or objectives. Moreover, it is
feared that these super-intelligent AIs would eventually have the ability to upgrade, preserve,
and protect themselves from whatever they may consider internal or external aggression. This means
that super-intelligent AIs could, in the near future, resist being reduced or reshaped, corrected
or reprogrammed at the instance of their programmers [4].

2.0 Cloud Robotics, Remote Brains, and Its Implications

Cloud computing and cloud-based services are on the rise because they offer high
scalability and great efficiency, thanks to their capability to adjust to demand. Ever since
cloud computing became widely available, many algorithms and systems previously thought
too unwieldy became instantly viable. For robotics and AI especially, this means that if the
power behind the cloud could be harnessed, it would be possible to build smaller, more
battery-efficient robots, because there would be no need for a powerful computer on board; the
brain of the robot can live in the cloud. The idea of a remote brain is not a new one, though [7].
Due to its distribution around the world, the cloud is able to provide quick results (a task is computed
physically close to the data it works on, to reduce network delay), has low down-time, and
most of all is financially reasonable (pay per use). According to James Kuffner, who is currently
working at Google, cloud computing techniques make robots “lighter, cheaper and smarter” [8]. The
use of the cloud to process heavy tasks allows the use of smaller on-board computers in robots,
which then only need to handle tasks that must be processed in real time (control
of sensors and motors). Separation of low-level control and high-level reasoning was explored as
the concept of remote brain in 1996 at the University of Tokyo. Further, the global knowledge
database for robots can act as a repository of objects, actions or environments. The robots will be
able to download an object’s description and usage manual even the first time
the robot has seen that particular object, or will be able to plan a route in an environment that
was mapped by another robot.
The main negative of using the cloud-based architecture is the possibility of losing the
connection; in that case, if the robot uses cloud services even for basic functionality, it will
fail to do anything. This can be viewed as an acceptable constraint, and backup systems can
be created. Today, network connections are highly reliable (or acquiring stable
infrastructure is affordable compared to the cost of giving the robot full on-board intelligence),
hence the real problem becomes the speed of the connection [7].
Cloud robotics usually consists of two tiers of communication, viz. Machine-to-Machine
(M2M) and Machine-to-Cloud (M2C). At the M2M level, robots form a collaborative
computing group (an ad-hoc cloud), which permits them to share intensive computational tasks:
to pool needed resources, to exchange the information needed for collaboration and, more
importantly, to allow robots not within range of a cloud access point to communicate with the
cloud. At the M2C level, the cloud can provide resources for computation and storage on
demand, so robots at the M2M level can use these resources for tasks that are beyond their shared
resources. This is illustrated in Figure 16.1.

Figure 16.1: M2M/M2C communication, sharing resources of robots in an ad-hoc cloud and
communicating with computers in the cloud. [Courtesy: Ref. [9]]

As mentioned before, this creates a “remote-brain” along with the shared memory of learned
skills, actions, and information already retrieved.
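The two-tier scheme of Figure 16.1 amounts to a dispatch rule: run a task on-board, on the ad-hoc M2M group, or on the M2C cloud. The cost metric, capacity numbers, and tier labels below are illustrative assumptions, not taken from any cloud-robotics system:

```python
def dispatch(task_cost, local_capacity, m2m_capacity, cloud_reachable):
    """Choose where a robot should run a task in a two-tier
    cloud-robotics setup.

    task_cost: abstract compute units the task needs (assumed metric).
    local_capacity: what this robot can handle on-board.
    m2m_capacity: pooled capacity of the ad-hoc robot group.
    cloud_reachable: whether a cloud access point is in range
    (possibly relayed through peers, as the text describes).
    """
    if task_cost <= local_capacity:
        return "on-board"            # real-time control stays local
    if task_cost <= m2m_capacity:
        return "M2M ad-hoc cloud"    # peers share the load
    if cloud_reachable:
        return "M2C cloud"           # heavy tasks go to the data centre
    return "deferred"                # no tier can take it right now

print(dispatch(5, 10, 40, True))    # -> on-board
print(dispatch(25, 10, 40, True))   # -> M2M ad-hoc cloud
print(dispatch(80, 10, 40, True))   # -> M2C cloud
```

A real system would estimate task cost and network latency dynamically, but the tiered fallback logic is the essence of M2M/M2C.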

2.1 Cloud Computing and the RoboEarth Project

The main goal of RoboEarth [10] is to create a network for robots which will allow
robots to exchange knowledge, will have a plug-in architecture, and will use an ontology-based
language. There are three databases in the cloud which hold information about actions, about
objects, and about environments. Data in these databases are stored in the OWL ontology language.
Connection to the robot is made through the Robot Operating System (ROS) [11], but the system
itself is not limited to the use of ROS for communication. The information about robot capabilities
(construction, sensor types, and others) is published to the system, so that only the appropriate
actions (the actions the robot can perform) are provided. The main component of the RoboEarth
architecture is the Recognition/Labeling Component (RLC). It connects robot hardware with the
abstractions of actions, objects, and environments. Its main function is to translate abstract
definitions from the RoboEarth databases into a format understood by the particular robot, and
vice versa, so that the robot can contribute new knowledge to the databases [7]. That means it is
able to work on low-level actions (atomic primitives: signals from sensors, motors, and others)
and also on high-level actions (spatial and temporal relations between actions, to create and
execute action abstractions).
The architecture of RoboEarth is shown in Figure 16.2.

Figure 16.2: The Architecture of RoboEarth [Courtesy: Ref.[23]].
Learning a new action is done through tele-operation. A human operator controls the
robot, and since the RoboEarth component is on the ROS blackboard, it receives all the signals
from the sensors and the motors and wraps them into an action plan. It then asks the operator
for the action label. The learning process also runs while the robot is executing an action
recipe, to further improve it [7].
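The capability-matching step described above (only actions the robot can actually execute are offered) can be sketched roughly as a set-inclusion filter. The recipe format and capability names here are simplified stand-ins for RoboEarth's OWL descriptions, not its real schema:

```python
def executable_recipes(robot_capabilities, recipes):
    """Return only the action recipes whose requirements are fully
    covered by the robot's published capabilities."""
    caps = set(robot_capabilities)
    return [name for name, required in recipes.items()
            if set(required) <= caps]

# Hypothetical recipe database (RoboEarth stores OWL ontologies).
recipes = {
    "open_door":   ["arm", "gripper", "camera"],
    "mop_floor":   ["mobile_base", "mop"],
    "serve_drink": ["arm", "gripper", "mobile_base", "camera"],
}

robot = ["arm", "gripper", "camera"]       # published robot description
print(executable_recipes(robot, recipes))  # -> ['open_door']
```

The point of the filter is the same as the RLC's: the abstract database is large, but each robot only ever sees the subset it can act on.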

2.2 The DAVINCI Framework

The DAvinCi framework [12] is a software framework that provides the scalability and
parallelism advantages of cloud computing for service robots in large environments. For
communication with the robots, the Robot Operating System (ROS) is used. For parallel
processing, the Hadoop/MapReduce framework [13] is used. The framework’s capabilities
were tested by implementing the Fast Simultaneous Localization and Mapping (FastSLAM)
algorithm to build a map of a large arena using different robots. DAvinCi is a typical Platform
as a Service (PaaS). Robots communicate with the DAvinCi server, which can run ROS nodes
on behalf of robots that lack the capability to run them. Hierarchically above the server sits the
Hadoop Distributed File System (HDFS) cluster used for execution of robotic algorithms. Also,
the DAvinCi server acts as a central communication point, as well as the master node. On this
framework, as a proof of concept, the FastSLAM
algorithm was implemented and tested with promising results [7].
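The Hadoop/MapReduce pattern that DAvinCi relies on can be illustrated in miniature: map each robot's scan observations to per-cell votes, then reduce by summing votes to merge one shared map. The grid cells and vote scheme below are illustrative, not the actual FastSLAM implementation:

```python
from collections import defaultdict

def map_phase(robot_scans):
    """Map: each robot's scan emits (cell, 1) for every grid cell it
    observed as occupied."""
    for scans in robot_scans:          # one entry per robot
        for cell in scans:
            yield cell, 1

def reduce_phase(pairs):
    """Reduce: sum votes per cell to build the merged occupancy map."""
    merged = defaultdict(int)
    for cell, count in pairs:
        merged[cell] += count
    return dict(merged)

# Three robots mapping parts of the same arena (hypothetical data).
robot_scans = [
    [(0, 0), (0, 1)],          # robot A
    [(0, 1), (1, 1)],          # robot B
    [(1, 1), (2, 2)],          # robot C
]
print(reduce_phase(map_phase(robot_scans)))
# -> {(0, 0): 1, (0, 1): 2, (1, 1): 2, (2, 2): 1}
```

In Hadoop the map and reduce steps would run on separate cluster nodes over HDFS; the single-process version just shows the data flow.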
Although the area of cloud computing is rather new, there are several projects oriented toward
its use for the needs of AI and robotics. The main advantages of cloud computation are
scalability, on-demand performance and storage, availability, and parallel computing. Several
methods used in AI, such as grasping, image processing, and neural networks to name a few, are
computationally intensive, and because of that it is not viable to use them on the wide
range of robots which do not have the required resources on-board. With cloud robotics, it is
possible to achieve the “remote-brain”, an older concept in which the hardware (sensors, actuators,
and low-level actions) and the software (high-level reasoning, path planning, grasping, etc.) are
separated [7].

3.0 Artificial Intelligence and Innovations in Industry

A white-collar robot can be defined as a set of computer-based capabilities that performs
a task or a set of tasks historically done by white-collar workers, but without the need for human
intervention [14]. White-collar robots automate particular tasks. A human decides what task a
robot will do and then chooses a robot to do that task. As in manufacturing settings, white-collar
robots might be part of a process, providing information inputs or outputs to others, whether
people or robots. This definition is consistent with robots used for manufacturing and does not
require that the robot have “consciousness.” Instead, the robot just does things that need to be
done. In general, the robot is likely to be able to do some things better than the person, but the
person is likely to do some things better than the robot.

3.1 Watson Analytics and Data Science

Watson Analytics (www.ibm.com/analytics/watsonanalytics) provides active
software that helps any user investigate data. Upload data to the cloud, and Watson Analytics
will analyze the data quality, provide initial analysis, and prod the user to consider different
combinations of variables. Watson Analytics is distinct from Watson Cognitive, which gained
fame from the capabilities used on the TV show Jeopardy! The latter’s capabilities focus on text,
natural language, and other related applications. As a result, much of the discussion of Watson’s
capabilities is bifurcated into two seemingly independent pieces: cognitive and analytics.
Watson Analytics requires that the user (or someone helping the user) negotiate a data
upload to the cloud. After the data is uploaded, Watson Analytics provides starting points for the
user to analyze the data. These starting points are a sequence of questions that the system has
developed on the basis of the data—for example, “What drives X?” “What is a predictive model
of Y?” and “What is the trend of Y and Z?” [14].
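The "starting points" behaviour described above, active software proposing questions from the data itself, can be sketched generically. The question templates and the column-type heuristic below are assumptions for illustration, not IBM's actual algorithm:

```python
def starting_points(columns):
    """Given column names tagged as numeric or categorical, propose
    analysis questions the way active analytics software might."""
    numeric = [c for c, t in columns.items() if t == "numeric"]
    questions = []
    for target in numeric:
        questions.append(f"What drives {target}?")
        questions.append(f"What is a predictive model of {target}?")
    # Pair adjacent numeric columns for trend questions.
    for a, b in zip(numeric, numeric[1:]):
        questions.append(f"What is the trend of {a} and {b}?")
    return questions

cols = {"revenue": "numeric", "churn": "numeric", "region": "categorical"}
for q in starting_points(cols):
    print(q)
```

The defining trait of active software is visible here: the system, not the user, decides which models are worth proposing.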
The data format is important to Watson Analytics. Watson Analytics does not work well
when there are more than two dimensions to the data, say, with row or nested headings, or both.
Also, note that Watson Analytics is active software: the software makes recommendations and
does analysis without needing the user to specify the models [15]. Generally, active software
makes assumptions about what the user wants or needs. Active software is likely to
make predictions about what would be useful to the user. It may be autonomous
and function largely without the user or the user’s input. Active software is likely to be
goal-oriented. White-collar robots generally will be active software [14].
White-collar robots generally will be active software [14].
It is highly likely that Watson Analytics frames the data for the user, which could lead
the user to form initial conclusions that do not hold up under greater scrutiny. Furthermore,
Watson Analytics does not focus on how independent variables drive the dependent variable,
either positively or negatively. As a result, although users might understand which variables
predict or drive, they may not know in which direction this occurs. Other limitations include the
apparent unavailability of classical statistical tests, an inability to designate control variables, and
an inability to choose particular statistical methods. In addition, the modeling approach limits the
number of variables (or at least the user’s perspective) used in estimation to one or two variables
[14] [15].

4.0 Innovative Solutions for a Smart Society Using AI, Robotics and Internet of Things
(IoT)
AI, Robotics, and IoT are attracting wide attention as technologies expected to change
society in the future. These innovative technologies have the potential to build (1) a borderless
communication society, (2) a symbiotic society between humans and robots, and (3) a safe and
secure networked society [16]. The various components of smart solutions include borderless
communication, symbiotic communication between humans and robots (machines), and a safe
and secure networked society.
Smart solutions for a smarter society include automatic speech translation systems, a dam
inspection robot system, and a large-scale imaging security system. Speech recognition in noisy
environments results in inaccurate translation; beam-forming technology demonstrated its
effectiveness here by improving speech recognition performance by up to 40%. The directivity
and direction of beam forming must be controlled based on the number and direction of speaker
and noise sources; for example, directivity should be set narrower when speaker and noise sources
are close [16]. Dam inspection by intelligent robots with adequate provision for lighting was
demonstrated to be a viable alternative. For efficient operation of a large-scale system with over
10,000 cameras, visual monitoring by humans is inadequate and image recognition technology is
needed. The conventional approach requires a large number of cameras with a high-capacity
network and a large-scale server system, which lacks scalability. To solve this issue, a
functionally-distributed facial recognition system has been developed. The “best-shot method”
detects faces in security camera images and then transfers only the best-shot thumbnail, the image
selected as most useful for facial recognition (based on resolution, facial angle, focus, etc.), to
the facial recognition server. The server then extracts facial features for face matching. This
method reduces the network and server load, making it easier to build a large-
scale security system [16].
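The best-shot method amounts to a scoring-and-selection step at the camera edge, so only one thumbnail per face crosses the network. The quality cues, their 0-1 scale, and the score weights below are illustrative assumptions, not the published method:

```python
def best_shot(detections):
    """Pick the single thumbnail most useful for face recognition;
    only that one is sent to the recognition server.

    Each detection carries edge-computed quality cues (assumed 0-1):
    image resolution, frontalness of the facial angle, and focus.
    """
    def score(d):
        # Hypothetical weighting of the quality cues.
        return 0.4 * d["resolution"] + 0.3 * d["frontal"] + 0.3 * d["focus"]
    return max(detections, key=score)

# Several frames of the same face from one camera (made-up values).
frames = [
    {"id": "f1", "resolution": 0.5, "frontal": 0.9, "focus": 0.4},
    {"id": "f2", "resolution": 0.9, "frontal": 0.8, "focus": 0.9},
    {"id": "f3", "resolution": 0.7, "frontal": 0.2, "focus": 0.8},
]
print(best_shot(frames)["id"])   # only this thumbnail goes to the server
```

Selecting at the edge is what makes the architecture scale: the server-side load grows with the number of faces, not with the number of camera frames.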

4.1 Cyber Physical Systems (CPS)

AI technologies will play increasingly important roles in expanding Cyber Physical Systems [17]
sustainably in the future. A high-performance cloud-based processing environment in cyber space
will also be necessary. However, due to the performance limitations of cloud-based processing,
as described for the large-scale imaging security solution, load distribution is required. The load
balance between cloud and IoT/edge devices is important when building Cyber Physical
Systems. Quickness of response is important for the natural dialogue translation and
autonomous robot solutions.

The cyber world and the physical world were considered two separate entities in the past decade. However, researchers found that the two are closely correlated once sensors and actuators are integrated into cyber systems. Cyber systems become responsive to the physical world through real-time control emanating from conventional embedded systems, and from this a new research paradigm named the Cyber Physical System (CPS) has emerged [17]. Cyber Physical Systems can be used in a wide range of application fields, including water and mine monitoring, aerospace, and so on, which suggests that in future many service providers will focus on implementing CPS technologies for their customers. In cyber physical systems, communication is needed to convey sensor observations to controllers and actuators; thus, the design of the communication architecture is a critical requirement for system functionality.
The Vehicular Cyber Physical System (VCPS) is not a new concept. It refers to a wide range of integrated transportation management systems, which should be real-time, efficient, and accurate. Based on modern technologies such as electronics, computers, sensors, and networks, traditional modes of transport are becoming more intelligent. The CarTel project, developed at MIT, combines mobile computing and sensing, wireless networking, and data-intensive algorithms running on servers in the cloud to address these challenges. CarTel helps applications to easily collect, process, deliver, analyze, and visualize data from sensors located on mobile units. The contributions of CarTel include traffic mitigation, road surface monitoring and hazard detection, vehicular networking, and so on [17].
The design of precision agriculture includes data management of production experiments, fundamental geographic information about farmland, micro-climate information, and other data. The "underground wireless sensor network" project was developed at the University of Nebraska-Lincoln Cyber-Physical Networking Lab, where Agnelo R. Silva and Mehmet C. Vuran developed a novel cyber-physical system through the integration of center pivot systems with wireless underground sensor networks, i.e., CPS for precision agriculture [18]. Wireless Underground Sensor Networks (WUSNs) consist of wirelessly connected underground sensor nodes that communicate untethered through soil.
Health Cyber Physical Systems (HCPS) will replace the traditional health devices that today work individually. With sensors and networks, various health devices work together to detect a patient's physical condition in real time, which is especially valuable for critical patients, such as those with heart disease. Portable terminal devices carried by the patient can detect the patient's condition at any time and send a timely warning or prediction in advance. In addition, the collaboration between health equipment and real-time data delivery will be much more convenient for patients [17].
The proposed standard Cyber Physical System architecture is illustrated in Figure 16.3.

Figure 16.3: The Proposed Standard Cyber Physical Systems Architecture. [Courtesy: Ref.[17]]
The standard CPS architecture consists of six modules: the Sensing Module, the Data Management Module (DMM), the Next Generation Internet, the Service Aware Module (SAM), the Application Module (AM), and the Sensors and Actuators. The sensing module sends an association request to the Data Management Module (DMM), which replies with an acknowledgement packet. Once the association between the DMM and the sensing module is completed, nodes start sending the sensed data to the DMM. Here, noise reduction and data normalization provide the bridge between the cyber world and the physical world. Through Quality of Service (QoS) routing, data is transferred to the Service Aware Module using services of the Next Generation Internet. Available services are assigned to different applications in the Application Module. To ensure the security and integrity of data, during each network operation data is sent to a cloud platform and also to a local database [17].
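The handshake and data path just described can be sketched as follows. This is an illustrative toy, not the reference implementation: the module names follow the text, but the acknowledgement format, the noise-reduction rule (dropping missing readings), and min-max normalization are assumptions:

```python
# Toy sketch of the Sensing Module / DMM interaction described in the text:
# associate first, then ingest sensed data, bridging the physical and cyber
# worlds via (simplified) noise reduction and normalization.

class DataManagementModule:
    def __init__(self):
        self.associated = set()
        self.cloud_store, self.local_db = [], []

    def associate(self, node_id):
        """Reply to an association request with an acknowledgement."""
        self.associated.add(node_id)
        return "ACK"

    def ingest(self, node_id, raw):
        """Accept sensed data only from associated nodes."""
        if node_id not in self.associated:
            raise PermissionError("node not associated")
        denoised = [x for x in raw if x is not None]           # noise reduction
        lo, hi = min(denoised), max(denoised)
        normalized = [(x - lo) / (hi - lo) for x in denoised]  # scale to [0, 1]
        # For security and integrity, data goes both to the cloud platform
        # and to a local database.
        self.cloud_store.append(normalized)
        self.local_db.append(normalized)
        return normalized

dmm = DataManagementModule()
ack = dmm.associate("sensor-7")
clean = dmm.ingest("sensor-7", [20.0, None, 25.0, 30.0])
```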

4.2 IoT Architecture, Enabling Technologies, Security and Privacy, and Applications

Fog/edge computing has been proposed for integration with the Internet of Things (IoT) to enable computing services on devices deployed at the network edge, aiming to improve the user's experience and the resilience of services in case of failures. In fog/edge computing, the massive data generated by different kinds of IoT devices can be processed at the network edge instead of being transmitted to centralized cloud infrastructure, avoiding bandwidth and energy consumption concerns. Because fog/edge computing is organized as a distributed architecture and can process and store data in networking edge devices close to end users, it can provide services with faster response and greater quality than cloud computing. Fog/edge computing is therefore well suited to integration with IoT to provide efficient and secure services for a large number of end users, and fog/edge-computing-based IoT can be considered the future IoT infrastructure [19].
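A minimal sketch of this edge-versus-cloud placement decision, under assumed thresholds (an 80 ms cloud round trip and a caller-supplied latency budget), might look like:

```python
# Illustrative sketch of fog/edge load distribution: process data at the edge
# when shipping it to the cloud would blow the latency budget, and use the
# elastic cloud otherwise. All thresholds and numbers are invented.

def choose_placement(payload_mb, uplink_mbps, latency_budget_ms,
                     cloud_rtt_ms=80.0):
    """Return 'edge' or 'cloud' for one processing task."""
    # Time to ship the payload to the cloud, plus the network round trip.
    transfer_ms = payload_mb * 8.0 / uplink_mbps * 1000.0
    if transfer_ms + cloud_rtt_ms > latency_budget_ms:
        return "edge"          # too slow (or too costly) to upload
    return "cloud"             # plenty of headroom: use elastic resources

# A 50 MB video chunk over a 10 Mbps uplink cannot meet a 1-second budget,
# while a 10 kB sensor reading easily can.
placement_video = choose_placement(50, 10, 1000)
placement_sensor = choose_placement(0.01, 10, 1000)
```

Real systems weigh energy, monetary cost, and privacy alongside latency, but the basic trade-off has this shape.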
It is well known that both CPS and IoT aim to achieve interaction between the cyber world and the physical world. In particular, CPS and IoT can measure the state of physical components via smart sensor devices without human input. In both CPS and IoT, the measured state information can be transmitted and shared through wired or wireless communication networks, and after analysis of this information both can provide secure, efficient, and intelligent services to applications. Existing efforts on CPS applications and IoT applications have expanded into similar areas (smart grid, smart transportation, smart city, etc.). A CPS contains a sensor/actuator layer, a communication layer, and an application (control) layer: the sensor/actuator layer collects real-time data and executes commands, the communication layer delivers data to the upper layer and commands to the lower layer, and the application (control) layer analyzes data and makes decisions [19]. CPS is thus a vertical architecture. In contrast, IoT is a networking infrastructure that connects a massive number of devices and monitors and controls them using modern technologies in cyber space. The key to IoT is therefore "interconnection": its main objective is to interconnect various networks so that data collection, resource sharing, analysis, and management can be carried out across heterogeneous networks.
Figure 16.4 illustrates the typical integration of the Internet of Things with Cyber Physical Systems. The basic difference between CPS and IoT is that CPS is considered a system, while IoT is considered an "Internet."

Figure 16.4: Integration of IoT and CPS. [Courtesy: Ref.[19]]
The common requirements for both CPS and IoT are real-time, reliable, and secure data transmission. Their distinct requirements are as follows: for CPS, effective, reliable, accurate, real-time control is the primary goal, while for IoT the important services are resource sharing and management, data sharing and management, interfacing among different networks, massive-scale and big data collection and storage, data mining, data aggregation and information extraction, and high network quality of service (QoS).
Typical applications of the integrated IoT and CPS include Smart Grids, Intelligent Transport Systems (Smart Transportation), and Smart Cities.
By integrating IoT and CPS, the smart grid has been developed to replace the traditional power grid and provide reliable and efficient energy service to consumers [20]. In the Smart Grid, distributed energy generators are introduced to improve the utilization of distributed energy resources, electric vehicles are introduced to improve energy storage capacity and reduce CO2 emissions, and smart meters and bidirectional communication networks are introduced to enable interaction between customers and utility providers. By integrating with IoT, a large number of smart meters can be deployed in houses and buildings connected to Smart Grid communication networks. Smart meters can monitor energy generation, storage, and consumption, and can interact with utility providers to report customers' energy demand and receive real-time electricity pricing. With the aid of fog/edge computing infrastructure, the large amount of data collected from smart meters can be stored and processed so that effective operation of the Smart Grid can be supported. With this interaction information, utility providers can optimize the energy dispatch of the grid, and customers can optimize their energy consumption, resulting in improved resource utilization and reduced cost [19].
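The smart-meter interaction above can be illustrated with a toy demand-response scheduler. The prices, loads, and the rule "shift all flexible consumption to the cheapest hour" are invented for the example:

```python
# Toy demand-response sketch: the meter receives real-time prices from the
# utility, and the customer shifts flexible load (e.g. EV charging) to the
# cheapest hour. Prices and loads below are illustrative.

def cheapest_hour(prices_per_hour):
    """Pick the hour of day with the lowest real-time price."""
    return min(prices_per_hour, key=prices_per_hour.get)

def schedule_flexible_load(base_load, flexible_kwh, prices_per_hour):
    """Add the shiftable consumption at the cheapest hour; return the plan."""
    schedule = dict(base_load)
    h = cheapest_hour(prices_per_hour)
    schedule[h] = schedule.get(h, 0.0) + flexible_kwh
    return schedule, h

prices = {18: 0.32, 22: 0.18, 2: 0.09, 6: 0.14}   # $/kWh by hour of day
base = {18: 2.0, 22: 1.0, 2: 0.3, 6: 0.8}          # kWh already committed
plan, charge_hour = schedule_flexible_load(base, 7.0, prices)
```

A real dispatcher would also respect appliance deadlines and grid constraints; the point is only that pricing signals let both sides optimize.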
Smart transportation, also known as intelligent transportation systems, is another typical IoT-CPS-based application, in which intelligent transportation management, control systems, communication networks, and computing techniques are integrated to make transportation systems reliable, efficient, and secure. A smart transportation system includes a large number of smart vehicles connected to each other through wireless networks. Smart vehicles can efficiently perceive and share traffic data and schedule drivers' travel with great efficiency, reliability, and safety. In the recent past, smart vehicles (Google's self-driving car, etc.) have been designed and tested.
Smart cities can be considered a complex IoT paradigm that aims to manage public affairs by introducing information and communication technology (ICT) solutions. Smart cities can use public resources in more efficient ways, resulting in improved QoS for users and reduced operational costs for public administrators. A smart city, as a complex CPS/IoT application, may consist of several subapplications or services, including the Smart Grid, Smart Transportation, the structural health of buildings, waste management, environmental monitoring, smart health, smart lighting, etc. All these subapplications or services should be supported by a unified communication network infrastructure, or the communication networks designed for them should be interconnected to establish a large-scale heterogeneous network for IoT/CPS applications, with the aim of achieving the best use of public resources in cities.

4.3 The Internet of Robotic Things (IoRT)

The concept of the Internet of Robotic Things (IoRT) was coined by Dan Kara, director of robotics at ABI Research. The Internet of Robotic Things is a mix of diverse technologies such as Cloud Computing, Artificial Intelligence (AI), Machine Learning, and the Internet of Things (IoT). As the concept of IoT evolves and leads to significant innovation in various application domains, new terminologies are emerging, such as the Internet of Medical Things (IoMT), Internet of Nano Things (IoNT), Internet of Mobile Things (IoMBT), Internet of Cloud Things (IoCT), Internet of Autonomous Things (IoAT), Internet of Drone Things (IoDT), Industrial Internet of Things (IIoT), Internet of Underwater Things (IoUT), and many more [21].
The Internet of Things is also laying a strong foundation for the development of Industry 4.0, or Smart Factories. Smart Factories are those where most of the work is performed via sophisticated, next-generation sensor technology, and where working personnel can respond quickly to any change, even changes that would go unnoticed today. Industry 4.0 [22], or the Industrial Internet, will bridge the physical and digital worlds and is often described in terms of Cyber Physical Systems.
In the near future, the Internet of Things (IoT), together with diverse areas such as Artificial Intelligence, Machine Learning, Deep Learning, Augmented Reality, Cloud Computing, and Swarm Intelligence, can change the face of robotics by enabling a next-generation class of intelligent robots known as the Internet of Robotic Things (IoRT). According to Stratistics MRC, the Internet of Robotic Things (IoRT) market was around $4.37 billion in 2016 and is expected to reach $28.03 billion by 2023, a staggering Compound Annual Growth Rate (CAGR) of 30.4%. The primary factors responsible for this growth are the rise of e-commerce platforms, the education sector, consumer markets, research and development wings, and, above all, Industry 4.0.

4.4 Cloud Robotics and Industry 4.0

The primary motivation behind the design and development of the Internet of Robotic Things is Cloud Robotics [23]. Cloud Robotics is regarded as a system that relies on Cloud Computing infrastructure to access large amounts of processing power and data to perform operations. Unlike a networked robot, where all operations, from sensing to computation and memory, are integrated into a single standalone system, a cloud robotic system retains only a portion of its capacity for local processing, to provide low-latency responses when a network failure arises. Cloud robotics can thus be seen as a transition state between pre-programmed robotics and networked robotics. With the development of cloud computing, big data, and other emerging technologies, the integration of cloud technology with multi-robot systems makes it possible to design multi-robot systems with improved energy efficiency, high real-time performance, and low cost. The implementation of Cloud Robotics in an industrial environment is shown in Figure 16.5.
Since cloud computing became widely available, many algorithms and systems previously thought too time-consuming have become instantly viable. For robotics and AI especially, this means that if the power behind the cloud can be harnessed, it becomes possible to build smaller, more battery-efficient robots, because there is no need for a powerful computer on board: the brain of the robot can reside in the cloud. It is now possible to create a remote brain. A centralized cloud means that a robot's memory can be nearly infinite and instantly available to other robots, so the processes of learning and exchanging knowledge can be simplified [7].
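The local-processing fallback mentioned above can be sketched as a "remote brain with a safety net". The planner names and policies here are purely illustrative:

```python
# Sketch of the "remote brain" pattern with a local fallback: prefer the
# cloud-hosted planner, but keep a minimal on-board policy for low-latency,
# safe behavior when the network fails. All names are illustrative.

def cloud_plan(obstacle_distance_m):
    """Stand-in for an elaborate cloud-hosted planner."""
    return "detour-left" if obstacle_distance_m < 2.0 else "cruise"

def onboard_plan(obstacle_distance_m):
    """Minimal local policy: just stop when something is close."""
    return "stop" if obstacle_distance_m < 2.0 else "cruise"

def plan(obstacle_distance_m, network_up):
    """Prefer the remote brain; degrade gracefully to the local policy."""
    if network_up:
        try:
            return cloud_plan(obstacle_distance_m)
        except Exception:
            pass                      # treat cloud errors like an outage
    return onboard_plan(obstacle_distance_m)

action_online = plan(1.5, network_up=True)    # rich cloud behavior
action_offline = plan(1.5, network_up=False)  # safe local behavior
```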
In the 1980s, General Motors developed the Manufacturing Automation Protocol (MAP). Vendors offered a diverse set of incompatible proprietary protocols until a shift began in the early 1990s, when the World Wide Web popularized HTTP over IP.

Figure 16.5: The Implementation of Cloud Robotics in an Industrial Environment. [Courtesy: Ref.[23]]
In 1994, the first industrial robot was connected to the Web, with an intuitive graphical user interface that allowed visitors to teleoperate the robot via any Internet browser. In the mid- and late 1990s, researchers developed a series of web interfaces to robots and devices to explore issues such as user interfaces and robustness, initiating the subfield of "Networked Robotics".
The term "Industry 4.0," introduced in Germany in 2011, predicts a fourth industrial revolution that will use networking to follow the first (mechanization of production using water and steam power), second (mass production with electric power), and third (use of electronics to automate production) industrial revolutions. In 2012, General Electric introduced the term "Industrial Internet" to describe new efforts in which industrial equipment such as wind turbines, jet engines, and MRI machines connects over networks to share data and processing for industries including energy, transportation, and healthcare [24].

4.5 Opportunities, Challenges, and Future Directions

Using the Cloud for robotics and automation systems introduces several new challenges. The connectivity inherent in the Cloud raises a range of privacy and security concerns, including concerns about data generated by Cloud-connected robots and sensors, which may include images or video from private homes or corporate trade secrets. Cloud Robotics and Automation also introduces the potential for robots and systems to be attacked remotely: a hacker could take over a robot and use it to disrupt functionality or cause damage. For instance, researchers at the University of Texas at Austin demonstrated that it is possible to hack into and remotely control UAV drones via inexpensive GPS spoofing systems, in an evaluation study for the Department of Homeland Security (DHS) and the Federal Aviation Administration (FAA). These concerns raise new regulatory, accountability, and legal issues related to safety, control, and transparency [24].
Now let us consider the technical challenges in Cloud Robotics. New algorithms and methods are needed to cope with time-varying network latency and Quality of Service (QoS). Faster data connections, both wired Internet connections and wireless standards such as Long Term Evolution (LTE), are reducing latency, but algorithms must be designed to degrade gracefully when Cloud resources are slow, noisy, or unavailable. For example, "anytime" load-balancing algorithms for speech recognition on smartphones send the speech signal to the Cloud for analysis while simultaneously processing it internally, and then use the best results available after a reasonable delay. Similar algorithms will be needed for robotics and automation systems. New algorithms are also needed that scale to the size of Big Data, which often contains dirty data requiring new approaches to clean or sample effectively. When the Cloud is used for parallel processing, it is vital that algorithms oversample to account for remote processors that fail or experience long delays in returning results. When human computation is used, algorithms are needed to filter unreliable input and balance the costs of human intervention against the cost of robot failure. Moving robotics and automation algorithms into the Cloud requires frameworks that facilitate this transition. The cloud provides three possible levels of service: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). RoboEarth is an example of PaaS.
With SaaS, an interface allows data to be sent to a server that processes it and returns outputs; this relieves users of the burden of maintaining data, software, and hardware, and allows companies to control proprietary software. This approach is termed Robotics and Automation as a Service (RAaaS) [24].
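The "anytime" pattern described above can be sketched with two concurrent recognizers and a deadline. The recognizers below are simulated stubs (sleeps standing in for network and compute time); only the take-the-best-result-available-by-the-deadline structure is the point:

```python
# Sketch of "anytime" load balancing: submit the same task to a (simulated)
# cloud service and a faster, less accurate local recognizer, then take the
# best result available once the deadline expires.

import time
from concurrent.futures import ThreadPoolExecutor

def local_recognize(signal):
    time.sleep(0.01)                       # fast but low confidence
    return {"text": signal.upper(), "confidence": 0.6}

def cloud_recognize(signal, cloud_delay_s):
    time.sleep(cloud_delay_s)              # network + server time
    return {"text": signal.upper(), "confidence": 0.95}

def anytime_recognize(signal, deadline_s, cloud_delay_s):
    """Use the cloud result if it arrives in time, else the local one."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        local = pool.submit(local_recognize, signal)
        cloud = pool.submit(cloud_recognize, signal, cloud_delay_s)
        try:
            return cloud.result(timeout=deadline_s)   # best case
        except Exception:
            return local.result()                     # graceful degradation

fast_cloud = anytime_recognize("hello", deadline_s=0.5, cloud_delay_s=0.05)
slow_cloud = anytime_recognize("hello", deadline_s=0.1, cloud_delay_s=0.5)
```

When the cloud answers within the deadline, its higher-confidence result wins; when it does not, the on-device result is used and the system still responds on time.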
Cloud robotics allows robots to share computational resources, information, and data with each other, and to access new knowledge and skills not learned by themselves. This opens a new paradigm in robotics that we believe will lead to exciting future developments. It allows the deployment of inexpensive robots with low computation power and memory requirements by leveraging the communications network and the elastic computing resources offered by the cloud infrastructure. Applications that can benefit from the cloud robotics approach are myriad and include SLAM, grasping, and navigation, as well as weather monitoring, intrusion detection, surveillance, and formation control [9].

5.0 Internet of Skills (Human 4.0) and the Tactile Internet (Zero Delay Internet)

Capitalizing on the latest developments in 5G and ultra-low-delay networking, as well as Artificial Intelligence (AI) and robotics, we can predict the emergence of an entirely novel Internet that will enable the delivery of skills in digital form. By enabling the delivery of physical experiences remotely (and globally), the Internet of Skills will revolutionize operations and servicing capabilities for industries, and it will revolutionize the way consumers teach, learn, and interact with their surroundings. It will be a world where our best engineers can service cars instantaneously around the world, or anybody can be taught how to paint by the best available artists globally. At an estimated revenue of $20 trillion per annum worldwide (20% of today's global Gross Domestic Product, GDP), it will be an enabler for skillset delivery, and thus a very timely technology for service-driven economies around the world [25]. The transformation to the Internet of Skills is illustrated in Figure 16.6.
The Internet of Skills will be an enabler for remote skillset delivery and will thereby democratize labor in the same way that the Internet has democratized knowledge. The core technology enablers for the Internet of Skills are (a) ultra-fast data networks (the zero-delay or tactile internet), (b) haptic encoders (both kinesthetic and tactile), and (c) edge Artificial Intelligence (to overcome the speed-of-light latency limit).
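A toy version of the latency-masking role of edge AI: the predictor below uses plain linear extrapolation over the last two received samples, a deliberate simplification of what an edge model would actually do:

```python
# Illustrative sketch of the "edge AI" enabler: since signals cannot travel
# faster than light, a model at the edge predicts the remote side's near-term
# state so the local user perceives near-zero delay. Here the "model" is just
# linear extrapolation; real systems would use learned dynamics.

def predict(samples, horizon_ms):
    """Extrapolate the next position from (time_ms, position) samples."""
    (t0, x0), (t1, x1) = samples[-2], samples[-1]
    velocity = (x1 - x0) / (t1 - t0)           # units per millisecond
    return x1 + velocity * horizon_ms

# Remote positions received over a link with ~40 ms one-way delay: the edge
# predictor estimates where the remote side is "now", not 40 ms ago.
received = [(0, 10.0), (20, 12.0), (40, 14.0)]
estimated_now = predict(received, horizon_ms=40)
```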

Figure 16.6: The Evolution of Internet of Skills (Human 4.0). [Courtesy: Ref.[25]]

6.0 Future Directions in Robotics, AI, and IoT

We have come far since the beginnings of the Internet of Things (IoT) and artificial intelligence (AI), when these fields were new. Theirs is a world built on computational statistics and powerful cloud data servers. With the amalgamation of two noteworthy innovations such as IoT and AI, things are without doubt moving towards a better future. Sectors such as automation, advanced robotics, domestic appliances (the so-called smart home), manufacturing, design, and retail are all reaping the benefits of these advanced technologies. While this integration of smart techniques is in its initial years of development, every year we advance a few steps and make a greater leap than the last. It seems like only yesterday that our homes and domestic appliances began adopting intelligence from AI and the capability to connect through the technical endowment of IoT [26].

It is believed that the Internet of Things will affect the development of AI, and the changes will be very significant. When one analyzes the AI that has been commercialized to date, the algorithms deployed are generally single-agent, practically first-person algorithms: they are intended to see, analyze, and act. There is communication, but the majority of the intelligence operates independently. Now consider different algorithms, for instance those intended for a marketplace or trading floor. Here, prices are set through numerous small-scale interactions between individual dealers, or in response to issues in the supply chain or fluctuations in currency. These interacting sets of individual events result in a value computation for a wide assortment of goods. This sort of algorithm is termed social AI [26].

Intelligence in the household sphere will become progressively more practical with smart gadgets that can be controlled from a cell phone. Without being present in the same room, anybody could direct a washing machine to wash and dry garments, turn the lights on and off, turn the thermostat up or down, and have doors close and open automatically. Kitchens would become productive places where a great part of the work could be performed by electrical machines requiring few manual instructions. Beyond smart homes with built-in IoT, ordinary homes will also see a rise in the use of smart items and appliances, clearing the path for a hassle-free way of life. The voice assistants offered by Amazon and Google are remarkable gadgets that will make living smarter. The adoption of retrofit IoT solutions will be greater in developing nations, owing to constrained purchasing power and enthusiasm for emerging technologies [26].

The gap between IoT and AI will continue to decrease, leading to the growth of astonishing functionality in gadgets and to operational excellence. With client-specific intelligence, the same kind of IoT gadget will be capable of offering client-specific experiences. IoT applications will become sufficiently intelligent to observe their environment, correct flaws, and automatically fix a plethora of operational glitches. This rise in client-specific innovation will build the much-needed base for the era of Personal Artificial Intelligence [26].

The dependence on the Cloud for inexpensive, on-demand computing power should not be overlooked. The advent of Cloud Robotics will result in a myriad of applications, and recent developments in Deep Learning and Machine Learning will continue to support several AI applications, including those in image processing.

In this era of rapidly evolving technologies, big data analytics and the Internet of Things (IoT) are the two leading revolutionary technologies that can change the domain of business operations. Both technologies are still in their nascent stages and hold massive potential and opportunities for the future. They can be coupled together for more efficient implementation and can help all units make smarter decisions [27].

Machine learning is a modern science that enables computers to work without being explicitly programmed. It deploys algorithms that train and improve on the data fed to them. Over the years, machine learning has made possible self-driving cars, effective web search, spam-free email, practical speech recognition software, personalized marketing, and so on. Today, machine learning is increasingly deployed for credit card fraud detection, personalized advertising through pattern identification, personalized shopping and entertainment recommendations, determining cab arrival times and pick-up locations, and finding routes on maps [28]. The five machine learning trends identified in [28] are: (a) the creation of more jobs in Data Science, (b) new approaches to data security, (c) robotic process automation (Industry 4.0), (d) improved IT operations, and (e) transparency in decision making.

It is interesting to note that the essence of machine learning and AI lies in Artificial Neural Networks (ANNs). Deep Neural Networks (DNNs) enhance the learning capability of ANNs by adding more hidden layers to the network. Convolutional Neural Networks (CNNs) are a class of deep, feed-forward (not recurrent) artificial neural networks applied to analyzing visual imagery. Convolutional Neural Networks are usually composed of a set of layers that can be grouped by their functionality: apart from the input and output layers, CNNs consist of feature learning layers and classification layers.
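The feature-learning idea can be made concrete with a pure-Python convolution (strictly, cross-correlation) of a small grayscale image with a single fixed kernel. In a real CNN the kernels are learned, and convolutions are followed by nonlinearities, pooling, and classification layers:

```python
# Minimal sketch of a CNN's convolutional (feature-learning) step: a small
# kernel slides over the image and produces a feature map. The hand-picked
# kernel below is a vertical-edge detector; real CNNs learn many such kernels.

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation of a grayscale image with a kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# An image whose left half is dark (0) and right half bright (1): the
# feature map "lights up" exactly at the vertical boundary.
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1],
          [-1, 1]]
feature_map = conv2d(image, kernel)
```

Because the same small kernel is reused at every position, the layer has far fewer parameters than a fully connected one, which is exactly the simplification the text attributes to CNNs.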

The field of machine learning has taken a dramatic twist in recent times with the rise of the Artificial Neural Network (ANN). These biologically inspired computational models are able to far exceed the performance of previous forms of artificial intelligence in common machine learning tasks. One of the most impressive forms of ANN architecture is the Convolutional Neural Network (CNN). CNNs are primarily used to solve difficult image-driven pattern recognition tasks and, with their precise yet simple architecture, offer a simplified method of getting started with ANNs [29].

Convolutional Neural Networks differ from other forms of Artificial Neural Network in that, instead of focusing on the entirety of the problem domain, knowledge about the specific type of input is exploited. This in turn allows a much simpler network architecture to be set up [29].

Bibliography

[1] Jim Banks, The Human Touch-Practical and ethical implications of putting AI and robotics to
work for patients, IEEE Pulse, pp.15-18, May/June 2018.
[2] Masahiro Fujita, 17.1 AI x Robotics: Technology Challenges and Opportunities in Sensors,
Actuators, and Integrated Circuits, Proceedings of the 2019 IEEE International Solid-State
Circuits Conference (ISSCC 2019), pp.276-277, ISBN 978-1-5386-8531-0/19, February 2019.
[3] Lars Kunze et al., Artificial Intelligence for Long-Term Robot Autonomy: A Survey, IEEE
Robotics and Automation Letters, pp.1-8, July 2018.
[4] I.A.P.Wogu et al., Artificial Intelligence, Alienation and Ontological Problems of Other
Minds: A Critical Investigation into the Future of Man and Machines, Proceedings of the 2017
International Conference on Computing, Networking and Informatics (ICCNI), ISBN 978-1-
5090-4642-3/17, Digital Object Identifier: 10.1109/ICCNI.2017.8123792, December 2017.
[5] T. Davey, Artificial Intelligence and the Future of Work: An Interview With Moshe Vardi.
Future of Life. Online publication. https://futureoflife.org/2017/06/14/artificial- 2017.

[6] Stephen Hawking, M. Tegmark, S. Russell, and F. Wilczek, Transcending Complacency on Superintelligent Machines, HuffPost. Online publication; http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html 2014.
[7] D. Lorencik and P. Sincak, Cloud Robotics: Current trends and possible use as a service,
Proceedings of the IEEE 11th International Symposium on Applied Machine Intelligence and
Informatics (SAMI 2013), pp.85-88, ISBN 978-1-4673-5929-0/13, Herl’any, Slovakia, 2013.
[8] Erico Guizzo, Robots With Their Heads in the Clouds, IEEE Spectrum, [online],
http://spectrum.ieee.org/robotics/humanoids/robots-with-their-heads-in-the-clouds March 2011.
[9] Guoqiang Hu, Wee Peng Tay, and Yonggang Wen, Cloud Robotics: Architecture, Challenges
and Applications, IEEE Network, pp.21-28, Vol.26, Issue 3, May/June 2012.
[10] RoboEarth Project, [online], http://www.roboearth.org/
[11] The Robotic Operating System (ROS), [online], http://www.ros.org/wiki/
[12] R. Arumugam et al., DAvinCi: A Cloud Computing Framework for Service Robots,
Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA),
pp.3084-3089, 3-7 May 2010.
[13] Apache Hadoop, [online], http://hadoop.apache.org/
[14] Daniel E. O’Leary, Emerging White-Collar Robotics: The Case of Watson Analytics, IEEE
Intelligent Systems, pp.63-67, March/April 2017.
[15] Introduction to IBM Watson Analytics Data Loading and Data Quality, IBM, March 2016.
[16] Takeshi Yukitake, Innovative solutions toward future society with AI, Robotics, and IoT,
Proceedings of 2017 Symposium on VLSI Circuits, pp.C16-C19, ISBN 978-4-86348-606-5,
2017.
[17] S. H. Ahmed, G. Kim, and D. Kim, Cyber Physical System: Architecture, Applications and
Research Challenges, Proceedings of the IFIP Wireless Days Conference (WD’13), pp. 1-5,
Valencia, Spain, November 2013.
[18] Agnelo R. Silva, and Mehmet C. Vuran, (CPS)2: Integration of Center Pivot Systems with
Wireless Underground Sensor Networks for Autonomous Precision Agriculture, Proceedings of
the 1st ACM/IEEE International Conference on Cyber-Physical Systems, pp. 79-88, Stockholm,
2010.

[19] Jie Lin et al., A Survey on Internet of Things: Architecture, Enabling Technologies, Security
and Privacy, and Applications, IEEE Internet of Things Journal, pp.1125-1142, Vol.4, No.5,
October 2017.
[20] NIST & The Smart Grid. Accessed on April 12, 2019. [Online]. Available:
https://www.nist.gov/engineering-laboratory/smart-grid/about-smart-grid/nist-and-smart-grid
[21] Anand Nayyar, Ranbir Singh Batth, and Amandeep Nagpal, Internet of Robotic Things:
Driving Intelligent Robotics of Future- Concept, Architecture, Applications and Technologies,
Proceedings of the 2018 4th International Conference on Computing Sciences, pp.151-160,
ISBN 78-1-5386-8025-4/18, 2018.
[22] Adelaja Oluwaseun Adebayo, Mani Shanker Chaubey, and Levis Petiho Numbu, Industry
4.0: The Fourth Industrial Revolution and How It Relates to the Application of Internet of Things
(IoT), Journal of Multidisciplinary Engineering Science Studies (JMESS) ISSN: 2458-925X,
pp.2477-2482, Vol. 5 Issue 2, February – 2019.
[23] Jiafu Wan et al., Cloud Robotics: Current Status and Open Issues, IEEE Access, pp. 2797-
2807, Vol.4, 2016.
[24] Ben Kehoe et al., A Survey of Research on Cloud Robotics and Automation, IEEE
Transactions on Automation Science and Engineering, pp.398-409, Vol.12, No.2, April 2015.
[25] Mischa Dohler et al., Internet of Skills, where Robotics meets AI, 5G and the Tactile
Internet, Proceedings of the 2017 European Conference on Networks and Communications
(EuCNC), pp.1-5, 2017.
[26] Priya Dialani, AIOPS: The Integration of AI and IoT, [online]
https://www.analyticsinsight.net/aiops-the-integration-of-ai-and-iot/ 2019.
[27] Sampriti Sarkar, How to Build IoT Solutions with Big Data Analytics, [online]
https://www.analyticsinsight.net/how-to-build-iot-solutions-with-big-data-analytics/ 2017.
[28] Kamalika Some, Top 5 Machine Learning Trends of 2018, [online]
https://www.analyticsinsight.net/top-5-machine-learning-trends-of-2018/ 2018.
[29] Keiron O’Shea and Ryan Nash, An Introduction to Convolutional Neural Networks, pp.1-
11, 2015.

