
UNIT-III

IoT Architecture and Protocols

3.1 Reference Model and Architecture:


The IoT can be considered both a dynamic and global networked infrastructure that manages
self-configuring objects in a highly intelligent way. This, in turn, allows the
interconnection of IoT devices that share their information to create new applications and
services which can improve human lives. The concept of the IoT was first introduced by
Kevin Ashton, a founder of the Auto-ID Center at MIT, in 1999. Ashton said, "The Internet of
Things has the potential to change the world, just as the Internet did. Maybe even more so".
Later, the IoT was officially presented by the International Telecommunication Union (ITU)
in 2005. The IoT has many definitions suggested by many organizations and researchers.
However, the definition provided by the ITU in 2012 is the most common. It states: "a global
infrastructure for the information society, enabling advanced services by interconnecting
(physical and virtual) things based on existing and evolving interoperable information and
communication technologies". In addition, Guillemin and Friess have suggested one of the
simplest definitions, describing the IoT in a straightforward manner. It states: "The Internet
of Things allows people and things to be connected Anytime, Anyplace, with anything and
anyone, ideally using any path/network and any service". Several definitions have been
suggested by researchers describing the IoT system from different perspectives, but the point
on which the majority of researchers agree is that the IoT is created to make the world better
for all human beings.
A Reference Model: A reference model is a division of functionality together with data flow
between the pieces. A reference model is a standard decomposition of a known problem into
parts that cooperatively solve the problem.
A Reference Architecture:
A reference architecture is a reference model mapped onto software elements (that cooperatively
implement the functionality defined in the reference model) and the data flows between them.
Whereas a reference model divides the functionality, a reference architecture is the mapping of that
functionality onto a system decomposition. The mapping may be, but by no means necessarily is, one
to one. A software element may implement part of a function or several functions. Reference models,
architectural patterns, and reference architectures are not architectures; they are useful concepts that
capture elements of an architecture. Each is the outcome of early design decisions. The relationship
among these design elements is shown in Figure 1

Figure 1
IoT Architecture Overview:

IoT can be described using a four- or five-layered architecture, which gives you a complete overview of
how it works in real life. The various components of the architecture include the following:

Four-layered architecture: this includes the media/device layer, network layer, service and application
support layer, and application layer.

Five-layered architecture: this includes the perception layer, network layer, middleware layer,
application layer, and business layer.

Functions of Each Layer

Sensor/Perception layer: This layer comprises wireless devices, sensors, and radio frequency
identification (RFID) tags that are used for collecting and transmitting raw data such as
temperature, moisture, etc., which is passed on to the next layer.
Network layer: This layer is largely responsible for routing data to the next layer in the hierarchy
with the help of network protocols. It uses wired and wireless technologies for data transmission.

Middleware layer: This layer comprises databases that store the information passed on by the
lower layers; it performs information processing and uses the results to make further
decisions.
Service and application support layer: This layer involves business process modeling
and execution as well as IoT service monitoring and resolution.
Application layer: It consists of the application user interface and deals with various applications such
as home automation, electronic health monitoring, etc.
Business layer: This layer determines the future or further actions required based on the
data provided by the lower layers.

The IoT World Forum (IoTWF) Standardized Architecture:

In 2014 the IoTWF architectural committee (led by Cisco, IBM, Rockwell Automation, and
others) published a seven-layer IoT architectural reference model. While various IoT
reference models exist, the one put forth by the IoT World Forum offers a clean, simplified
perspective on IoT and includes edge computing, data storage, and access. It provides a
succinct way of visualizing IoT from a technical perspective. Each of the seven layers is
broken down into specific functions, and security encompasses the entire model. The figure
below details the IoT Reference Model published by the IoTWF.
Figure 2-2 IoTWF Standardized Architecture

As shown in Figure 2-2, the IoT Reference Model defines a set of levels with control flowing
from the center (this could be either a cloud service or a dedicated data center), to the edge,
which includes sensors, devices, machines, and other types of intelligent end nodes. In
general, data travels up the stack, originating from the edge, and goes northbound to the
center.

Using this reference model, we are able to achieve the following:


 Decompose the IoT problem into smaller parts
 Identify different technologies at each layer and how they relate to one another
 Define a system in which different parts can be provided by different vendors
 Have a process of defining interfaces that leads to interoperability
 Define a tiered security model that is enforced at the transition points
between levels
The following sections look more closely at each of the seven layers of the IoT Reference
Model.
Layer 1: Physical Devices and Controllers Layer
The first layer of the IoT Reference Model is the physical devices and controllers layer. This
layer is home to the "things" in the Internet of Things, including the various endpoint devices
and sensors that send and receive information. The size of these "things" can range from
almost microscopic sensors to giant machines in a factory. Their primary function is
generating data and being capable of being queried and/or controlled over a network.
Layer 2: Connectivity Layer
In the second layer of the IoT Reference Model, the focus is on connectivity. The most
important function of this IoT layer is the reliable and timely transmission of data. More
specifically, this includes transmissions between Layer 1 devices and the network and
between the network and information processing that occurs at Layer 3 (the edge computing
layer). As you may notice, the connectivity layer encompasses all networking elements of IoT
and doesn't really distinguish between the last-mile network (the network between the
sensor/endpoint and the IoT gateway, discussed later in this chapter), gateway, and backhaul
networks. Functions of the connectivity layer are detailed in Figure 2-3.

Layer 3: Edge Computing Layer


Edge computing is the role of Layer 3. Edge computing is often referred to as the "fog" layer
and is discussed in the section "Fog Computing," later in this chapter. At this layer, the
emphasis is on data reduction and converting network data flows into information that is
ready for storage and processing by higher layers. One of the basic principles of this reference
model is that information processing is initiated as early and as close to the edge of the
network as possible. Figure 2-4 highlights the functions handled by Layer 3 of the IoT
Reference Model.
Another important function that occurs at Layer 3 is the evaluation of data to see if it can be
filtered or aggregated before being sent to a higher layer. This also allows for data to be
reformatted or decoded, making additional processing by other systems easier. Thus, a critical
function is assessing the data to see if predefined thresholds are crossed and any action or
alerts need to be sent.
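As a concrete illustration of this kind of Layer 3 processing, the short Python sketch below filters and aggregates a batch of raw sensor readings and raises an alert only when a threshold is crossed. The readings, the threshold value, and the send_alert/send_to_higher_layer functions are hypothetical, used only to make the filter/aggregate/alert idea tangible; they are not part of the IoT Reference Model itself.

```
# Illustrative sketch of Layer 3 (edge/fog) data reduction.
# The readings, threshold, and "send" functions are hypothetical,
# not part of the IoTWF Reference Model.

from statistics import mean

TEMP_THRESHOLD_C = 30.0  # assumed alert threshold

def reduce_and_forward(raw_readings):
    """Filter, aggregate, and evaluate raw sensor data at the edge."""
    # Filter out obviously invalid samples before they consume backhaul bandwidth.
    valid = [r for r in raw_readings if -40.0 <= r <= 85.0]

    # Aggregate many raw samples into one summarized record.
    summary = {
        "count": len(valid),
        "avg_c": round(mean(valid), 2) if valid else None,
        "max_c": max(valid) if valid else None,
    }

    # Only raise an alert when the predefined threshold is crossed.
    if summary["max_c"] is not None and summary["max_c"] > TEMP_THRESHOLD_C:
        send_alert(summary)          # hypothetical northbound alert call
    send_to_higher_layer(summary)    # hypothetical upload of the reduced record

def send_alert(summary):
    print("ALERT:", summary)

def send_to_higher_layer(summary):
    print("UPLOAD:", summary)

reduce_and_forward([22.5, 23.1, 150.0, 31.4, 22.9])
```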

Upper Layers: Layers 4–7


The upper layers deal with handling and processing the IoT data generated by the bottom
layer. For the sake of completeness, Layers 4–7 of the IoT Reference Model are summarized
in Table 2-2.
3.2 IoT Protocols:
3.2.1 Adaption layer and Network Layer Protocols:
6LoWPAN:

While the Internet Protocol is key for a successful Internet of Things, constrained nodes and
constrained networks mandate optimization at various layers and on multiple protocols of the
IP architecture. Some optimizations are already available from the market or under
development by the IETF. Figure 3.1 highlights the TCP/IP layers where optimization is
applied.

Figure 3.1: Optimizing IP for IoT Using an Adaptation Layer

In the IP architecture, the transport of IP packets over any given Layer 1 (PHY) and
Layer 2 (MAC) protocol must be defined and documented. The model for packaging IP into
lower-layer protocols is often referred to as an adaptation layer.

An adaptation layer designed for IoT may include some optimizations to deal with
constrained nodes and networks. The main examples of adaptation layers optimized for
constrained nodes or “things” are the ones under the 6LoWPAN working group and its
successor, the 6Lo working group.

The initial focus of the 6LoWPAN working group was to optimize the transmission
of IPv6 packets over constrained networks such as IEEE 802.15.4. Figure 3.2 shows an
example of an IoT protocol stack using the 6LoWPAN adaptation layer beside the well-known
IP protocol stack for reference.

Figure 3.2: Comparison of an IoT Protocol Stack Utilizing 6LoWPAN and an IP Protocol
Stack

The 6LoWPAN working group published several RFCs, but RFC 4944 is
foundational because it defines frame headers for the capabilities of header compression,
fragmentation, and mesh addressing. These headers can be stacked in the adaptation layer to
keep these concepts separate while enforcing a structured method for expressing each
capability. Depending on the implementation, all, none, or any combination of these
capabilities and their corresponding headers can be enabled. Figure 3.3 shows some examples
of typical 6LoWPAN header stacks.

Figure 3.3 6LoWPAN Header Stack

Header Compression

IPv6 header compression for 6LoWPAN was defined initially in RFC 4944 and
subsequently updated by RFC 6282. This capability shrinks the size of IPv6’s 40-byte
headers and User Datagram Protocol’s (UDP’s) 8-byte headers down as low as 6 bytes
combined in some cases. Note that header compression for 6LoWPAN is only defined for an
IPv6 header and not IPv4.
The 6LoWPAN protocol does not support IPv4, and, in fact, there is no standardized
IPv4 adaptation layer for IEEE 802.15.4. 6LoWPAN header compression is stateless, and
conceptually it is not too complicated. However, a number of factors affect the amount of
compression, such as implementation of RFC 4944 versus RFC 6282, whether UDP is
included, and various IPv6 addressing scenarios.

At a high level, 6LoWPAN works by taking advantage of shared information known by all
nodes from their participation in the local network. In addition, it omits some standard header
fields by assuming commonly used values. Figure 3.4 highlights an example that shows the
amount of reduction that is possible with 6LoWPAN header compression.

Figure 3.4 6LoWPAN Header Compression

At the top of Figure 3.4, you see a 6LoWPAN frame without any header compression
enabled: The full 40-byte IPv6 header and 8-byte UDP header are visible. The 6LoWPAN
header is only a single byte in this case. Notice that uncompressed IPv6 and UDP headers
leave only 53 bytes of data payload out of the 127-byte maximum frame size in the case of
IEEE 802.15.4.

The bottom half of Figure 3.4 shows a frame where header compression has been
enabled for a best-case scenario. The 6LoWPAN header increases to 2 bytes to accommodate
the compressed IPv6 header, and UDP has been reduced in half, to 4 bytes from 8. Most
importantly, the header compression has allowed the payload to more than double, from 53
bytes to 108 bytes, which is obviously much more efficient. Note that the 2-byte header
compression applies to intra-cell communications, while communications external to the cell
may require some fields of the header to be left uncompressed.
Fragmentation
The maximum transmission unit (MTU) for an IPv6 network must be at least 1280 bytes.
The term MTU defines the size of the largest protocol data unit that can be passed. For IEEE
802.15.4, 127 bytes is the MTU. This is a problem because IPv6, with a much larger MTU, is
carried inside the 802.15.4 frame with a much smaller one. To remedy this situation, large IPv6
packets must be fragmented across multiple 802.15.4 frames at Layer 2.

The fragment header utilized by 6LoWPAN is composed of three primary fields: Datagram
Size, Datagram Tag, and Datagram Offset. The Datagram Size field specifies the
total size of the unfragmented payload. Datagram Tag identifies the set of fragments for a
payload. Finally, the Datagram Offset field delineates how far into a payload a particular
fragment occurs. Figure 3.5 provides an overview of a 6LoWPAN fragmentation header.

Figure 3.5 6LoWPAN Fragmentation Header

In Figure 3.5, the 6LoWPAN fragmentation header field itself uses a unique bit value
to identify that the subsequent fields behind it are fragment fields as opposed to another
capability, such as header compression. Also, in the first fragment, the Datagram Offset field
is not present because it would simply be set to 0. This results in the first fragmentation
header for an IPv6 payload being only 4 bytes long. The remainder of the fragments have a
5-byte header field so that the appropriate offset can be specified.
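To make the header sizes above concrete, the following Python sketch splits a minimum-size 1280-byte IPv6 datagram into 802.15.4-sized fragments using the 4-byte first-fragment header and 5-byte subsequent-fragment headers described here. The per-frame budget is an assumed figure (actual capacity depends on MAC and security overhead), and the 8-byte alignment reflects RFC 4944's expression of the Datagram Offset in multiples of eight bytes.

```
# Illustrative sketch of 6LoWPAN fragmentation sizing (not a protocol implementation).

def fragment(payload_len, frame_budget=102):
    # frame_budget: assumed bytes left in an 802.15.4 frame after MAC overhead
    fragments = []
    offset = 0
    while offset < payload_len:
        header_len = 4 if offset == 0 else 5   # first vs. subsequent fragment header
        chunk = min(frame_budget - header_len, payload_len - offset)
        # RFC 4944 expresses the Datagram Offset in multiples of 8 bytes,
        # so every chunk except the last is kept 8-byte aligned here.
        if offset + chunk < payload_len:
            chunk -= chunk % 8
        fragments.append({"header_bytes": header_len,
                          "offset": offset,
                          "payload_bytes": chunk})
        offset += chunk
    return fragments

# A minimum-size IPv6 datagram (1280 bytes) split across 802.15.4 frames:
for frag in fragment(1280):
    print(frag)
```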

Mesh Addressing

The purpose of the 6LoWPAN mesh addressing function is to forward packets over
multiple hops. Three fields are defined for this header: Hop Limit, Source Address, and
Destination Address. Analogous to the IPv6 hop limit field, the hop limit for mesh addressing
also provides an upper limit on how many times the frame can be forwarded. Each hop
decrements this value by 1 as it is forwarded. Once the value hits 0, the frame is dropped and
no longer forwarded.
The Source Address and Destination Address fields for mesh addressing are IEEE
802.15.4 addresses indicating the endpoints of an IP hop. Figure 3.6 details the 6LoWPAN
mesh addressing header fields.

Figure 3.6: 6LoWPAN Mesh Addressing Header

Note that the mesh addressing header is used in a single IP subnet and is a Layer 2
type of routing known as mesh-under. RFC 4944 only provisions the function in this case as
the definition of Layer 2 mesh routing specifications was outside the scope of the 6LoWPAN
working group, and the IETF doesn’t define “Layer 2 routing.” An implementation
performing Layer 3 IP routing does not need to implement a mesh addressing header unless
required by a given technology profile.

Routing Protocol for Low-Power and Lossy Networks (RPL):


3.2.2 IoT Application Layer Protocols

When considering constrained networks and/or a large-scale deployment of constrained
nodes, verbose web-based and data model protocols may be too heavy for IoT applications.
To address this problem, the IoT industry is working on new lightweight protocols that are
better suited to large numbers of constrained nodes and networks. Two of the most popular
protocols are CoAP and MQTT. Figure 3.7 highlights their position in a common IoT
protocol stack.

Figure 3.7: Example of a High-Level IoT Protocol Stack for CoAP and MQTT

In Figure 3.7, CoAP and MQTT are naturally at the top of this sample IoT stack, based on an
IEEE 802.15.4 mesh network. While there are a few exceptions, you will almost always find
CoAP deployed over UDP and MQTT running over TCP. The following sections take a
deeper look at CoAP and MQTT.

Constrained Application Protocol (CoAP):

Constrained Application Protocol (CoAP) resulted from the IETF Constrained
RESTful Environments (CoRE) working group's efforts to develop a generic framework for
resource-oriented applications targeting constrained nodes and networks. The CoAP
framework defines simple and flexible ways to manipulate sensors and actuators for data or
device management.

The CoAP messaging model is primarily designed to facilitate the exchange of messages over
UDP between endpoints, including the secure transport protocol Datagram Transport Layer
Security (DTLS).

From a formatting perspective, a CoAP message is composed of a short fixed-length
Header field (4 bytes), a variable-length but mandatory Token field (0–8 bytes), Options
fields if necessary, and the Payload field. Figure 3.8 details the CoAP message format, which
delivers low overhead while decreasing parsing complexity.
Figure 3.8: CoAP Message Format

The CoAP message format is relatively simple and flexible. It allows CoAP to deliver low
overhead, which is critical for constrained networks, while also being easy to parse and
process for constrained devices.

Table 3.1 CoAP Message Fields
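As a small illustration of how compact this message format is, the Python sketch below packs the 4-byte fixed header (2-bit Version, 2-bit Type, 4-bit Token Length, 8-bit Code, 16-bit Message ID) followed by a token, following the RFC 7252 layout. The message ID and token values are arbitrary examples.

```
# Minimal sketch of building a CoAP fixed header per RFC 7252.
# The token and message ID values are arbitrary examples.
import struct

def coap_header(msg_type, code, message_id, token=b""):
    version = 1                      # CoAP version is always 1
    tkl = len(token)                 # Token Length (0-8 bytes)
    first_byte = (version << 6) | (msg_type << 4) | tkl
    # "!BBH" = first byte, Code byte, 16-bit Message ID (network byte order)
    return struct.pack("!BBH", first_byte, code, message_id) + token

CON, NON, ACK, RST = 0, 1, 2, 3      # CoAP message types
GET = 0x01                           # method code 0.01

header = coap_header(CON, GET, message_id=0x1234, token=b"\xab\xcd")
print(header.hex(), len(header), "bytes")   # 4-byte header plus 2-byte token
```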

CoAP can run over IPv4 or IPv6. However, it is recommended that the message fit
within a single IP packet and UDP payload to avoid fragmentation. For IPv6, with the default
MTU size being 1280 bytes and allowing for no fragmentation across nodes, the maximum
CoAP message size could be up to 1152 bytes, including 1024 bytes for the payload. In the
case of IPv4, as IP fragmentation may exist across the network, implementations should
limit themselves to more conservative values and set the IPv4 Don’t Fragment (DF) bit.
CoAP communications across an IoT infrastructure can take various paths.
Connections can be between devices located on the same or different constrained networks or
between devices and generic Internet or cloud servers, all operating over IP. Proxy
mechanisms are also defined, and RFC 7252 details a basic HTTP mapping for CoAP. As
both HTTP and CoAP are IP-based protocols, the proxy function can be located practically
anywhere in the network, not necessarily at the border between constrained and
non-constrained networks.

Figure 3.9: CoAP Communications in IoT Infrastructures

Just like HTTP, CoAP is based on the REST architecture, but with a “thing” acting as
both the client and the server. Through the exchange of asynchronous messages, a client
requests an action via a method code on a server resource. A uniform resource identifier
(URI) localized on the server identifies this resource. The server responds with a response
code that may include a resource representation. The CoAP request/response semantics
include the methods GET, POST, PUT, and DELETE.
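The sketch below shows what such a GET exchange can look like in Python using the aiocoap library; the library choice, host name, and resource path are assumptions for illustration, not something mandated by CoAP itself.

```
# Hedged sketch of a CoAP GET request using the aiocoap library (assumed choice).
# The host name and resource path below are hypothetical.
import asyncio
from aiocoap import Context, Message, GET

async def main():
    # Create a client context bound to a UDP transport.
    protocol = await Context.create_client_context()

    # Request the representation of a (hypothetical) temperature resource.
    request = Message(code=GET, uri="coap://sensor.example.com/sensors/temperature")

    try:
        response = await protocol.request(request).response
    except Exception as exc:
        print("Request failed:", exc)
    else:
        # The response code (e.g., 2.05 Content) may include a resource representation.
        print("Response code:", response.code)
        print("Payload:", response.payload.decode(errors="replace"))

asyncio.run(main())
```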

Message Queuing Telemetry Transport (MQTT)

At the end of the 1990s, engineers from IBM and Arcom (acquired in 2006 by
Eurotech) were looking for a reliable, lightweight, and cost-effective protocol to monitor and
control a large number of sensors and their data from a central server location, as typically
used by the oil and gas industries. Their research resulted in the development and
implementation of the Message Queuing Telemetry Transport (MQTT) protocol that is now
standardized by the Organization for the Advancement of Structured Information Standards
(OASIS).

This led to the selection of a client/server and publish/subscribe framework based on the TCP/IP
architecture, as shown in Figure 3.10.
Figure 3.10: MQTT Publish/Subscribe Framework

Figure 3.11: MQTT Publish/Subscribe Sequence Diagram

An MQTT client can act as a publisher to send data (or resource information) to an
MQTT server acting as an MQTT message broker. In the example illustrated in Figure 3.10,
the MQTT client on the left side is a temperature (Temp) and relative humidity (RH) sensor
that publishes its Temp/RH data. The MQTT server (or message broker) accepts the network
connection along with application messages, such as Temp/RH data, from the publishers. It
also handles the subscription and unsubscription process and pushes the application data to
MQTT clients acting as subscribers.

The application on the right side of Figure 3.10 is an MQTT client that is a subscriber
to the Temp/RH data being generated by the publisher or sensor on the left. This model,
where subscribers express a desire to receive information from publishers, is well known. A
great example is the collaboration and social networking application Twitter.
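The following Python sketch illustrates the publisher role using the Eclipse paho-mqtt client (an assumed library choice); the broker address, topic name, and Temp/RH values are hypothetical. The qos and retain arguments map to the header fields discussed later in this section.

```
# Sketch of an MQTT publisher (Temp/RH sensor) using paho-mqtt (assumed library).
# Broker address, topic, and readings are hypothetical.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # hypothetical MQTT message broker
TOPIC = "site1/room42/temp_rh"  # hierarchical topic name

# paho-mqtt 1.x style constructor; version 2.x additionally takes a CallbackAPIVersion argument.
client = mqtt.Client(client_id="temp-rh-sensor-01")
client.connect(BROKER, 1883)    # plain TCP; port 8883 would be used with TLS
client.loop_start()             # background network loop

reading = {"temp_c": 23.4, "rh_pct": 41.0}

# qos=1 ("at least once") and retain=True so that new subscribers
# immediately receive the last known value.
info = client.publish(TOPIC, json.dumps(reading), qos=1, retain=True)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```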

With MQTT, clients can subscribe to all data (using a wildcard character) or specific
data from the information tree of a publisher. In addition, the presence of a message broker in
MQTT decouples the data transmission between clients acting as publishers and subscribers.
In fact, publishers and subscribers do not even know (or need to know) about each other. A
benefit of having this decoupling is that the MQTT message broker ensures that information
can be buffered and cached in case of network failures. This also means that publishers and
subscribers do not have to be online at the same time. MQTT control packets run over a TCP
transport using port 1883. TCP ensures an ordered, lossless stream of bytes between the
MQTT client and the MQTT server. Optionally, MQTT can be secured using TLS on port
8883, and WebSocket (defined in RFC 6455) can also be used.
MQTT is a lightweight protocol because each control packet consists of a 2-byte fixed
header with optional variable header fields and optional payload. You should note that a
control packet can contain a payload up to 256 MB. Figure 3.12 provides an overview of the
MQTT message format.

Figure 3.12 MQTT Message Format

Compared to CoAP, MQTT contains a smaller header of 2 bytes, versus 4 bytes
for CoAP. The first MQTT field in the header is Message Type, which
identifies the kind of MQTT packet within a message. Fourteen different types of control
packets are specified in MQTT version 3.1.1. Each of them has a unique value that is coded
into the Message Type field. Note that values 0 and 15 are reserved. MQTT message types
are summarized in Table 3.2

Table 3.2 MQTT Message Type


The next field in the MQTT header is DUP (Duplication Flag). This flag, when set,
allows the client to indicate that the packet has been sent previously but an
acknowledgement was not received.
The QoS header field allows for the selection of three different QoS
levels. Quality of service (QoS) levels determine how each MQTT message
is delivered and must be specified for every message sent through MQTT.
Three QoS levels for message delivery can be achieved using MQTT:
1. QoS0 (At most once) - where messages are delivered according to the best efforts
of the operating environment. Message loss can occur.
2. QoS1 (At least once) - where messages are assured to arrive but duplicates can occur.
3. QoS2 (Exactly once) - where messages are assured to arrive exactly once.
There is a simple rule when considering performance impact of QoS:
“The higher the QoS, the lower the performance".
The next field is the Retain flag. Only found in a PUBLISH
message, the Retain flag notifies the server to hold onto the message
data. This allows new subscribers to instantly receive the last known
value without having to wait for the next update from the publisher.

The last mandatory field in the MQTT message header is Remaining Length. This field
specifies the number of bytes in the MQTT packet following this field.
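The Remaining Length value is encoded as a variable-length quantity, seven value bits per byte with a continuation bit, which is what limits a control packet to roughly 256 MB. The sketch below reproduces this encoding as defined in the MQTT 3.1.1 specification.

```
# Sketch of the MQTT 3.1.1 Remaining Length encoding
# (variable-length: 7 value bits per byte, MSB is a continuation flag).

def encode_remaining_length(length):
    if not 0 <= length <= 268_435_455:      # 4 bytes max -> roughly 256 MB
        raise ValueError("Remaining Length out of range")
    encoded = bytearray()
    while True:
        byte, length = length % 128, length // 128
        if length > 0:
            byte |= 0x80                     # more bytes follow
        encoded.append(byte)
        if length == 0:
            return bytes(encoded)

print(encode_remaining_length(0).hex())            # 00
print(encode_remaining_length(321).hex())          # c102
print(encode_remaining_length(268_435_455).hex())  # ffffff7f
```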

MQTT sessions between each client and server consist of four phases: session establishment,
authentication, data exchange, and session termination. Each client connecting to a server has
a unique client ID, which allows the identification of the MQTT session between both parties.
When the server is delivering an application message to more than one client, each client is
treated independently.

Subscriptions to resources generate SUBSCRIBE/SUBACK control packets, while
unsubscription is performed through the exchange of UNSUBSCRIBE/UNSUBACK control
packets. Graceful termination of a connection is done through a DISCONNECT control
packet, which also offers the capability for a client to reconnect by re-sending its client ID to
resume the operations.

A message broker uses a topic string or topic name to filter messages for its subscribers.
When subscribing to a resource, the subscriber indicates the one or more topic levels that are
used to structure the topic name. The forward slash (/) in an MQTT topic name is used to
separate each level within the topic tree and provide a hierarchical structure to the topic
names.
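A minimal subscriber sketch, again with paho-mqtt and a hypothetical broker and topic tree, shows how a topic filter using the '+' single-level wildcard selects messages from such a hierarchy (the '#' wildcard would match all remaining levels).

```
# Sketch of an MQTT subscriber using a wildcard topic filter (paho-mqtt, assumed library).
# The broker address and topic tree are hypothetical.
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"
# "+" matches exactly one topic level; "#" would match all remaining levels.
TOPIC_FILTER = "site1/+/temp_rh"

def on_connect(client, userdata, flags, rc):
    # Subscribing here means the subscription is re-established after reconnects.
    client.subscribe(TOPIC_FILTER, qos=1)

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode(errors='replace')} (retain={msg.retain})")

client = mqtt.Client(client_id="dashboard-01")  # paho-mqtt 1.x style constructor
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()   # blocks, dispatching incoming PUBLISH packets to on_message
```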
Comparison of CoAP and MQTT
3.3 ThingSpeak IoT Framework:

The Internet of Things (IoT) is a system of 'connected things'. The things generally
comprise an embedded operating system and an ability to communicate with the
internet or with the neighbouring things. One of the key elements of a generic IoT
system that bridges the various 'things' is an IoT service. An interesting implication
of the 'things' comprising IoT systems is that the things by themselves cannot
do anything. At a bare minimum, they should have an ability to connect to other
'things'. But the real power of IoT is harnessed when the things connect to a
'service' either directly or via other 'things'. In such systems, the service plays the
role of an invisible manager by providing capabilities ranging from simple data
collection and monitoring to complex data analytics. The below diagram illustrates
where an IoT service fits in an IoT ecosystem:
One such IoT application platform that offers a wide variety of analysis, monitoring and
counter-action capabilities is ‘ThingSpeak’.

What is ThingSpeak

ThingSpeak is a platform providing various services exclusively targeted at building IoT
applications. It offers the capabilities of real-time data collection, visualizing the collected
data in the form of charts, and creating plugins and apps for collaborating with web
services, social networks and other APIs. We will consider each of these features in detail
below.

The core element of ThingSpeak is a 'ThingSpeak Channel'. A channel stores the data
that we send to ThingSpeak and comprises the following elements:

 8 fields for storing data of any type - These can be used to store the data from a sensor
or from an embedded device.
 3 location fields - Can be used to store the latitude, longitude and the elevation. These
are very useful for tracking a moving device.
 1 status field - A short message to describe the data stored in the channel.

To use ThingSpeak, we need to signup and create a channel. Once we have a channel, we
can send the data, allow ThingSpeak to process it and also retrieve the same. Let us start
exploring ThingSpeak by signing up and setting up a channel.
Getting Started

Open https://thingspeak.com/ and click on the 'Get Started Now' button in the center of
the page and you will be redirected to the sign-up page (you will reach the same page
when you click the 'Sign Up' button on the extreme right). Fill out the required details
and click on the 'Create Account' button.

Now you should see a page with a confirmation that the account was successfully
created. The confirmation message disappears after a few seconds and the final page
should look as in the below screen:
Go ahead and click on ‘New Channel’. You should see a page like the below:

You can change the name to fit your need and you can add a description corresponding
to the channel. You can add any other useful description into the metadata field. In the
same page, you should see the fields for Latitude, Longitude and Elevation. Also, when
you scroll down you should see a check box that says ‘Make Public?’. Let us consider
the significance of the various fields and the tabs:

 Latitude, longitude and elevation - These fields correspond to the location of a
'thing' and are especially significant for moving things.
 Make Public? - If the channel is made public, anyone can view the channel's data
feed and the corresponding charts. If this check box is not checked, the channel is
private, which means for every read or write operation, the user has to pass a
corresponding API key.
 URL - This can be the URL of your blog or website and if specified, will appear on the
public view of the channel.
 Video ID - This is the ID of your YouTube or Vimeo video. If specified, the video
appears on the public view of the channel.
 Fields 1 to 8 - These are the fields which correspond to the data sent by a sensor or a
‘thing’. A field has to be added before it can be used to store data. By default, Field 1 is
added. In case you try posting to fields that you have not added, your request will still
be successful, but you will not be able to see the field in the charts and the
corresponding data. You can click on the small box before the 'add field' text
corresponding to each field to add it. Once you click the 'add field' box, a default label
name appears in the text box corresponding to each field and the 'add field' text
changes to 'remove field'. You can edit the default field label so that it makes more
sense. For example, in the below screen, I have modified the text for Field 2 to
'SensorInput'. To remove a field that has been added, just check the 'remove field' box.
Once you click this, the text 'remove field' changes back to 'add field' and the
corresponding field text is cleared.

Once you have edited the fields, click on the 'Save Channel' button. You should now see
a page like the below, in which the 'Private View' tab is selected by default:
The Private View shows a chart corresponding to each of the fields that we have added.
Now click on the 'Public View' tab. This should look exactly like what we see in the
'Private View' tab since our channel is public. In case your channel is not public ('make
public' check box not checked in the 'channel settings' tab), the Public View tab shows
the message 'This channel is not public'.

Now click on the 'API Keys' tab. You should see a screen similar to the below. The
write API key is used for sending data to the channel and the read API key(s) are used to
read the channel data.
When we create a channel, by default, a write API key is generated. We generate read
API keys by clicking the 'Generate New Read API Key' button under this tab. You can
also add a note corresponding to each of the read API keys.

Note: Please note that clicking on the 'Generate New Write API Key' button will overwrite
the previous key. You will have only one Write API key at any point in time. Also, in
case your channel is private, others can only view the channel's feed and charts by
using a Read API key. Please share the Read API keys only with people who are approved
and authorized to view your channel.
Now click on the 'Data Import/Export' tab and you should see a screen similar to the
below. This tab is used to import 'Comma Separated Values (CSV)' data from a file
into the channel. You can also download the channel's feed from here in CSV format.
This tab also outlines how to send and view data by providing the URIs to the send and
view APIs.
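The Python sketch below, using the requests library, writes one value to a channel's Field 1 through the update API and then reads the latest feed entries back as JSON. The API keys and channel ID are placeholders to be replaced with the ones shown under your own channel's 'API Keys' tab; note that ThingSpeak also rate-limits how frequently a channel accepts updates.

```
# Sketch of writing to and reading from a ThingSpeak channel over its HTTP API.
# WRITE_API_KEY, READ_API_KEY, and CHANNEL_ID are placeholders, not real credentials.
import requests

WRITE_API_KEY = "YOUR_WRITE_API_KEY"
READ_API_KEY = "YOUR_READ_API_KEY"
CHANNEL_ID = "000000"

# Send a single update: field1 carries the sensor value.
resp = requests.get(
    "https://api.thingspeak.com/update",
    params={"api_key": WRITE_API_KEY, "field1": 23.4},
    timeout=10,
)
print("Entry ID assigned by ThingSpeak:", resp.text)   # "0" means the update was rejected

# Read back the two most recent entries of the channel feed as JSON.
resp = requests.get(
    f"https://api.thingspeak.com/channels/{CHANNEL_ID}/feeds.json",
    params={"api_key": READ_API_KEY, "results": 2},
    timeout=10,
)
for entry in resp.json().get("feeds", []):
    print(entry.get("created_at"), entry.get("field1"))
```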

After a series of updates, the charts in the Private View tab for each of the fields will
look like the below:
Each of the dots corresponds to the value and the time at which the value was posted to
the channel. Place the mouse over a dot to get more details on the exact date and the
GMT offset from which the value was posted.

ThingSpeak Apps

ThingSpeak provides apps that allow easier integration with web services, social
networks and other APIs. Below are some of the apps provided by ThingSpeak:

 ThingTweet - This allows you to post messages to Twitter via ThingSpeak. In essence,
this is a Twitter proxy which redirects your posts to Twitter.
 ThingHTTP - This allows you to connect to web services and supports GET, PUT, POST
and DELETE methods of HTTP.
 TweetControl - Using this, you can monitor your Twitter feeds for a specific keyword
and then process the request. Once the specific keyword is found in the Twitter feed, you
can then use ThingHTTP to connect to a different web service or execute a specific
action.
 React - Send a tweet or trigger a ThingHTTP request when the Channel meets a certain
condition.
 TalkBack - Use this app to queue up commands and then allow a device to act
upon these queued commands.
 Timecontrol - Using this app, we can do a ThingTweet, ThingHTTP or a TalkBack
at a specified time in the future. We can also use this to allow these actions to
happen at a specified time throughout the week.

In addition to the above, ThingSpeak allows us to create ThingSpeak applications as
plugins using HTML, CSS and JavaScript, which we can embed inside a website or
inside our ThingSpeak channel.
