Big Data Analytics Assignment


CAT – II

APACHE HADOOP

DONE BY,

SANJAY R – 160081601046

SITHAARTHAN D – 160081601049
APACHE HADOOP
Apache Hadoop is a collection of open-source software utilities that facilitate using a
network of many computers to solve problems involving massive amounts of data and
computation. It provides a software framework for distributed storage and processing of big
data using the MapReduce programming model. Originally designed for computer clusters built from commodity
hardware[3]—still the common use—it has also found use on clusters of higher-end hardware.[4][5] All
the modules in Hadoop are designed with a fundamental assumption that hardware failures are
common occurrences and should be automatically handled by the framework.

The core of Apache Hadoop consists of a storage part, known as Hadoop Distributed File System
(HDFS), and a processing part which is a MapReduce programming model. Hadoop splits files into
large blocks and distributes them across nodes in a cluster. It then transfers packaged code into
nodes to process the data in parallel. This approach takes advantage of data locality,[7] where nodes
manipulate the data they have access to. This allows the dataset to be processed faster and more
efficiently than it would be in a more conventional supercomputer architecture that relies on
a parallel file system where computation and data are distributed via high-speed networking.[8][9]
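As a rough illustration of the block model described above, the following Python sketch splits a file into fixed-size blocks and spreads them across nodes. The 128 MB block size is a common HDFS default, but the node names and the round-robin placement are simplifications invented for the example, not Hadoop's actual placement logic.

```python
# Hypothetical sketch: split a file into fixed-size blocks and assign
# each block to a cluster node. Round-robin placement is a stand-in for
# HDFS's real (rack-aware) placement policy.

BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB, a common HDFS default

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return (offset, length) pairs covering the whole file."""
    blocks = []
    offset = 0
    while offset < file_size:
        length = min(block_size, file_size - offset)
        blocks.append((offset, length))
        offset += length
    return blocks

def assign_round_robin(blocks, nodes):
    """Place block i on node i mod len(nodes)."""
    return {i: nodes[i % len(nodes)] for i in range(len(blocks))}

blocks = split_into_blocks(300 * 1024 * 1024)  # a 300 MB file -> 3 blocks
placement = assign_round_robin(blocks, ["node1", "node2", "node3"])
```

Code is then shipped to the nodes named in `placement`, rather than blocks being shipped to the code, which is the data-locality idea the paragraph above describes.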

The base Apache Hadoop framework is composed of the following modules:

 Hadoop Common – contains libraries and utilities needed by other Hadoop modules;
 Hadoop Distributed File System (HDFS) – a distributed file-system that stores data on
commodity machines, providing very high aggregate bandwidth across the cluster;
 Hadoop YARN – (introduced in 2012) a platform responsible for managing computing resources
in clusters and using them for scheduling users' applications;[10][11]
 Hadoop MapReduce – an implementation of the MapReduce programming model for large-scale
data processing.

The term Hadoop is often used for both base modules and sub-modules and also
the ecosystem,[12] or collection of additional software packages that can be installed on top of or
alongside Hadoop, such as Apache Pig, Apache Hive, Apache HBase, Apache Phoenix, Apache
Spark, Apache ZooKeeper, Cloudera Impala, Apache Flume, Apache Sqoop, Apache Oozie,
and Apache Storm.[13]

Apache Hadoop's MapReduce and HDFS components were inspired by Google papers
on MapReduce and Google File System.[14]

The Hadoop framework itself is mostly written in the Java programming language, with some native
code in C and command line utilities written as shell scripts. Though MapReduce Java code is
common, any programming language can be used with Hadoop Streaming to implement the map
and reduce parts of the user's program.[15] Other projects in the Hadoop ecosystem expose richer
user interfaces.
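Since any language can drive Hadoop Streaming, a word-count job in the streaming style can be sketched in Python. Real streaming scripts read lines from stdin and write tab-separated key/value pairs to stdout; the sketch below writes them as plain functions instead so the map and reduce data flow is visible and testable.

```python
# Streaming-style word count, written as functions rather than
# stdin/stdout filters. In real Hadoop Streaming, mapper and reducer
# would each be a separate executable emitting "key\tvalue" lines.
from itertools import groupby

def mapper(lines):
    """Map step: emit (word, 1) for every word in the input lines."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reducer(pairs):
    """Reduce step: sum counts per word. Hadoop delivers the pairs to a
    reducer grouped by key; sorting here stands in for that shuffle."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield (word, sum(count for _, count in group))

counts = dict(reducer(mapper(["the quick fox", "the lazy dog"])))
```

The sort-then-group line is the part Hadoop's shuffle phase performs between the map and reduce stages in a real job.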

History
According to its co-founders, Doug Cutting and Mike Cafarella, the genesis of Hadoop was the
Google File System paper that was published in October 2003.[16][17] This paper spawned another one
from Google – "MapReduce: Simplified Data Processing on Large Clusters".[18] Development started
on the Apache Nutch project, but was moved to the new Hadoop subproject in January
2006.[19] Doug Cutting, who was working at Yahoo! at the time, named it after his son's toy
elephant.[20] The initial code that was factored out of Nutch consisted of about 5,000 lines of code for
HDFS and about 6,000 lines of code for MapReduce.

In March 2006, Owen O’Malley was the first committer to add to the Hadoop project;[21] Hadoop 0.1.0
was released in April 2006.[22] It continues to evolve through contributions that are being made to the
project.[23]

Architecture
See also: Hadoop Distributed File System, Apache HBase, and MapReduce

Hadoop consists of the Hadoop Common package, which provides file system and operating system
level abstractions, a MapReduce engine (either MapReduce/MR1 or YARN/MR2)[24] and the Hadoop
Distributed File System (HDFS). The Hadoop Common package contains the Java ARchive
(JAR) files and scripts needed to start Hadoop.

For effective scheduling of work, every Hadoop-compatible file system should provide location
awareness: the name of the rack (more precisely, the network switch) where a worker node is.
Hadoop applications can use this information to execute code on the node where the data is and,
failing that, on the same rack/switch, reducing backbone traffic. HDFS uses this method when
replicating data for data redundancy across multiple racks. This approach reduces the impact of a
rack power outage or switch failure; if any of these hardware failures occurs, the data will remain
available.

A small Hadoop cluster includes a single master and multiple worker nodes. The master node
consists of a Job Tracker, Task Tracker, NameNode, and DataNode. A slave or worker node acts as
both a DataNode and TaskTracker, though it is possible to have data-only and compute-only worker
nodes. These are normally used only in nonstandard applications.[26]

Hadoop requires Java Runtime Environment (JRE) 1.6 or higher. The standard startup and
shutdown scripts require that Secure Shell (SSH) be set up between nodes in the cluster.[27]
In a larger cluster, HDFS nodes are managed through a dedicated NameNode server to host the file
system index, and a secondary NameNode that can generate snapshots of the namenode's memory
structures, thereby preventing file-system corruption and loss of data. Similarly, a standalone
JobTracker server can manage job scheduling across nodes. When Hadoop MapReduce is used
with an alternate file system, the NameNode, secondary NameNode, and DataNode architecture of
HDFS are replaced by the file-system-specific equivalents.

File systems
Hadoop distributed file system
The Hadoop distributed file system (HDFS) is a distributed, scalable, and portable file system written
in Java for the Hadoop framework. Some consider it to instead be a data store due to its lack
of POSIX compliance,[28] but it does provide shell commands and Java application programming
interface (API) methods that are similar to other file systems.[29] Hadoop is divided into HDFS and
MapReduce: HDFS is used for storing the data, and MapReduce for processing it.
HDFS has five services as follows:

1. Name Node

2. Secondary Name Node

3. Job tracker

4. Data Node

5. Task Tracker

The first three are master services/daemons/nodes and the last two are slave services. Master
services can communicate with each other, and in the same way slave services can communicate
with each other. The Name Node is a master node, the Data Node is its corresponding slave node,
and the two can talk to each other.

Name Node: HDFS has only one Name Node, called the master node, which tracks the files,
manages the file system, and holds the metadata: in particular, the number of blocks, the data
nodes on which the data is stored, where the replications are kept, and other details. Because
there is only one Name Node, it is a single point of failure. It communicates directly with the
client.

Data Node: A Data Node stores data as blocks. Also known as a slave node, it stores the actual
data in HDFS and serves read and write requests from clients. These are slave daemons. Every
Data Node sends a heartbeat message to the Name Node every 3 seconds to convey that it is
alive. When the Name Node does not receive a heartbeat from a Data Node for 2 minutes, it takes
that Data Node to be dead and starts replicating its blocks on some other Data Node.
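The heartbeat bookkeeping just described can be sketched as follows. The class and its methods are hypothetical simplifications, using the 3-second interval and 2-minute timeout from the text above rather than the NameNode's actual internals.

```python
# Hypothetical sketch of heartbeat liveness tracking: the name node
# records the time of the last heartbeat from each data node and
# declares a node dead once it has been silent past the timeout.

HEARTBEAT_INTERVAL = 3   # seconds between heartbeats (per the text above)
DEAD_TIMEOUT = 120       # seconds of silence before a node is declared dead

class NameNode:
    def __init__(self):
        self.last_heartbeat = {}

    def receive_heartbeat(self, datanode, now):
        """Record that `datanode` was alive at time `now`."""
        self.last_heartbeat[datanode] = now

    def dead_nodes(self, now):
        """Data nodes whose last heartbeat is older than the timeout;
        their blocks would then be re-replicated elsewhere."""
        return [dn for dn, t in self.last_heartbeat.items()
                if now - t > DEAD_TIMEOUT]

nn = NameNode()
nn.receive_heartbeat("dn1", now=0)
nn.receive_heartbeat("dn2", now=100)
dead = nn.dead_nodes(now=130)  # dn1 silent for 130 s, dn2 for only 30 s
```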

Secondary Name Node: This only takes care of the checkpoints of the file system metadata held
by the Name Node, and is also known as the checkpoint node. It is a helper node for the
Name Node.

Job Tracker: The Job Tracker is used for processing the data. It receives requests for MapReduce
execution from the client and talks to the Name Node to learn the location of the data: the Job
Tracker asks the Name Node where the data to be processed resides, and the Name Node
responds with the metadata.

Task Tracker: The Task Tracker is the slave node for the Job Tracker; it takes tasks, and the
corresponding code, from the Job Tracker and applies that code to the file. The process of
applying the code to the file is known as the Mapper.[30]

A Hadoop cluster nominally has a single namenode plus a cluster of datanodes, although
redundancy options are available for the namenode due to its criticality. Each datanode serves up
blocks of data over the network using a block protocol specific to HDFS. The file system uses
TCP/IP sockets for communication. Clients use remote procedure calls (RPC) to communicate
with each other.

HDFS stores large files (typically in the range of gigabytes to terabytes[31]) across multiple machines.
It achieves reliability by replicating the data across multiple hosts, and hence theoretically does not
require redundant array of independent disks (RAID) storage on hosts (but to increase input-output
(I/O) performance some RAID configurations are still useful). With the default replication value, 3,
data is stored on three nodes: two on the same rack, and one on a different rack. Data nodes can
talk to each other to rebalance data, to move copies around, and to keep the replication of data high.
HDFS is not fully POSIX-compliant, because the requirements for a POSIX file-system differ from
the target goals of a Hadoop application. The trade-off of not having a fully POSIX-compliant file-
system is increased performance for data throughput and support for non-POSIX operations such as
Append.[32]
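The default placement policy described above (with replication factor 3, two replicas on one rack and the third on a different rack) can be sketched like this. Rack and node names are invented for the example, and real HDFS placement weighs additional factors, so this is only a much-simplified illustration of the rule.

```python
# Hypothetical sketch of the default 3-way replica placement: two
# replicas on the writer's rack, the third on a different rack.
# Node selection within a rack is simplified to "first available".

def place_replicas(local_rack, racks):
    """racks maps rack name -> list of node names; returns 3 nodes."""
    local_nodes = racks[local_rack]
    # pick any rack other than the local one for the third replica
    remote_rack = next(r for r in racks if r != local_rack)
    return [local_nodes[0], local_nodes[1], racks[remote_rack][0]]

racks = {"rack1": ["n1", "n2", "n3"], "rack2": ["n4", "n5"]}
replicas = place_replicas("rack1", racks)
```

Losing one rack (a switch or power failure) still leaves at least one replica reachable, which is the redundancy property the paragraph above describes.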

In May 2012, high-availability capabilities were added to HDFS,[33] letting the main metadata server
called the NameNode manually fail over to a backup. The project has also started developing
automatic fail-overs.

The HDFS file system includes a so-called secondary namenode, a misleading term that some might
incorrectly interpret as a backup namenode when the primary namenode goes offline. In fact, the
secondary namenode regularly connects with the primary namenode and builds snapshots of the
primary namenode's directory information, which the system then saves to local or remote
directories. These checkpointed images can be used to restart a failed primary namenode without
having to replay the entire journal of file-system actions, then to edit the log to create an up-to-date
directory structure.

Because the namenode is the single point for storage and management of metadata, it can become
a bottleneck for supporting a huge number of files, especially a large number of small files. HDFS
Federation, a new addition, aims to tackle this problem to a certain extent by allowing multiple
namespaces served by separate namenodes. Moreover, there are some issues in HDFS such as
small file issues, scalability problems, Single Point of Failure (SPoF), and bottlenecks in huge
metadata requests.

One advantage of using HDFS is data awareness between the job tracker and task tracker. The job
tracker schedules map or reduce jobs to task trackers with an awareness of the data location. For
example: if node A contains data (a, b, c) and node X contains data (x, y, z), the job tracker
schedules node A to perform map or reduce tasks on (a, b, c) and node X would be scheduled to
perform map or reduce tasks on (x, y, z). This reduces the amount of traffic that goes over the
network and prevents unnecessary data transfer. When Hadoop is used with other file systems, this
advantage is not always available. This can have a significant impact on job-completion times as
demonstrated with data-intensive jobs.[34]
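The node A / node X example above amounts to a locality-aware scheduler. A toy Python version, with node and block names invented for the example, might look like this:

```python
# Toy locality-aware scheduling: given which node holds which input
# block, assign each map task to a node that already has its data,
# falling back to an arbitrary node when no location is known.

def schedule(task_blocks, block_locations):
    """Map each input block to the node that should process it."""
    assignment = {}
    for block in task_blocks:
        nodes = block_locations.get(block, [])
        # prefer a node that stores the block (data locality)
        assignment[block] = nodes[0] if nodes else "any-node"
    return assignment

block_locations = {"a": ["nodeA"], "b": ["nodeA"], "c": ["nodeA"],
                   "x": ["nodeX"], "y": ["nodeX"], "z": ["nodeX"]}
plan = schedule(["a", "b", "x"], block_locations)
```

With a non-HDFS file system that reports no block locations, every task falls into the fallback branch, which is precisely the lost advantage the paragraph above mentions.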

HDFS was designed for mostly immutable files and may not be suitable for systems requiring
concurrent write operations.[32]

HDFS can be mounted directly with a Filesystem in Userspace (FUSE) virtual file
system on Linux and some other Unix systems.

File access can be achieved through the native Java API, the Thrift API (generates a client in a
number of languages e.g. C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa,
Smalltalk, and OCaml), the command-line interface, the HDFS-UI web application over HTTP, or via
third-party network client libraries.[35]

HDFS is designed for portability across various hardware platforms and for compatibility with a
variety of underlying operating systems. The HDFS design introduces portability limitations that
result in some performance bottlenecks, since the Java implementation cannot use features that are
exclusive to the platform on which HDFS is running.[36] Due to its widespread integration into
enterprise-level infrastructure, monitoring HDFS performance at scale has become an increasingly
important issue. Monitoring end-to-end performance requires tracking metrics from datanodes,
namenodes, and the underlying operating system.[37] There are currently several monitoring
platforms to track HDFS performance, including Hortonworks, Cloudera, and Datadog.

Other file systems


Hadoop works directly with any distributed file system that can be mounted by the underlying
operating system by simply using a file:// URL; however, this comes at a price – the loss of
locality. To reduce network traffic, Hadoop needs to know which servers are closest to the data,
information that Hadoop-specific file system bridges can provide.
In May 2011, the list of supported file systems bundled with Apache Hadoop were:

 HDFS: Hadoop's own rack-aware file system.[38] This is designed to scale to tens of petabytes of
storage and runs on top of the file systems of the underlying operating systems.
 FTP file system: This stores all its data on remotely accessible FTP servers.
 Amazon S3 (Simple Storage Service) object storage: This is targeted at clusters hosted on
the Amazon Elastic Compute Cloud server-on-demand infrastructure. There is no rack-
awareness in this file system, as it is all remote.
 Windows Azure Storage Blobs (WASB) file system: This is an extension of HDFS that allows
distributions of Hadoop to access data in Azure blob stores without moving the data permanently
into the cluster.

A number of third-party file system bridges have also been written, none of which are currently in
Hadoop distributions. However, some commercial distributions of Hadoop ship with an alternative file
system as the default – specifically IBM and MapR.

 In 2009, IBM discussed running Hadoop over the IBM General Parallel File System.[39] The
source code was published in October 2009.[40]
 In April 2010, Parascale published the source code to run Hadoop against the Parascale file
system.[41]
 In April 2010, Appistry released a Hadoop file system driver for use with its own CloudIQ
Storage product.[42]
 In June 2010, HP discussed a location-aware IBRIX Fusion file system driver.[43]
 In May 2011, MapR Technologies Inc. announced the availability of an alternative file system for
Hadoop, MapR FS, which replaced the HDFS file system with a full random-access read/write
file system.
