Hadoop 2.6 Installing On Ubuntu 14.04 (Single-Node Cluster)
Installing Java
The Hadoop framework is written in Java!
k@laptop:~$ cd ~
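If a JDK is not installed yet, the OpenJDK 7 packages from the standard Ubuntu 14.04 repositories are a typical choice (any Java 7 JDK will do):
k@laptop:~$ sudo apt-get update
k@laptop:~$ sudo apt-get install openjdk-7-jdk
k@laptop:~$ java -version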
Installing SSH
ssh has two main components:
1. ssh : The command we use to connect to remote machines - the client.
2. sshd : The daemon that is running on the server and allows clients to connect to the server.
The ssh client is usually pre-installed on Linux, but in order to run the sshd daemon we need to install ssh first. Use this command to do that:
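On Ubuntu 14.04 the usual command is:
k@laptop:~$ sudo apt-get install ssh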
This will install ssh on our machine. If we get something similar to the following, we can assume it is set up properly:
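For instance, checking where the client and the daemon were installed should give the usual Ubuntu locations:
k@laptop:~$ which ssh
/usr/bin/ssh
k@laptop:~$ which sshd
/usr/sbin/sshd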
Now we need SSH up and running on our machine, configured to allow SSH public-key authentication.
Hadoop uses SSH to access its nodes, which would normally require the user to enter a password. However, this requirement can be eliminated by creating and setting up SSH keys using the following commands. If asked for a filename, just leave it blank and press the Enter key to continue.
k@laptop:~$ su hduser
Password:
hduser@laptop:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
50:6b:f3:fc:0f:32:bf:30:79:c2:41:71:26:cc:7d:e3 hduser@laptop
The key's randomart image is:
+--[ RSA 2048]----+
| .oo.o |
| . .o=. o |
| . + . o . |
| o = E |
| S + |
| . + |
| O + |
| O o |
| o.. |
+-----------------+
hduser@laptop:/home/k$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
The second command adds the newly created key to the list of authorized
keys so that Hadoop can use ssh without prompting for a password.
Install Hadoop
hduser@laptop:~$ wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
hduser@laptop:~$ tar xvzf hadoop-2.6.0.tar.gz
Oops! We got an error because hduser does not yet have sudo rights, so we switch to a user that does:
hduser@laptop:~/hadoop-2.6.0$ su k
Password:
Once hduser has been given sudo privileges, we can move the Hadoop installation to the /usr/local/hadoop directory without any problem:
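A typical sequence (assuming hduser was created earlier as a member of a hadoop group, and that the user k can use sudo) is:
k@laptop:~$ sudo adduser hduser sudo
k@laptop:~$ su hduser
Password:
hduser@laptop:~$ cd ~/hadoop-2.6.0
hduser@laptop:~/hadoop-2.6.0$ sudo mkdir -p /usr/local/hadoop
hduser@laptop:~/hadoop-2.6.0$ sudo mv * /usr/local/hadoop
hduser@laptop:~/hadoop-2.6.0$ sudo chown -R hduser:hadoop /usr/local/hadoop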
The following files have to be modified to complete the Hadoop setup:
1. ~/.bashrc
2. /usr/local/hadoop/etc/hadoop/hadoop-env.sh
3. /usr/local/hadoop/etc/hadoop/core-site.xml
4. /usr/local/hadoop/etc/hadoop/mapred-site.xml.template
5. /usr/local/hadoop/etc/hadoop/hdfs-site.xml
1. ~/.bashrc:
Before editing the .bashrc file in our home directory, we need to find the path
where Java has been installed to set the JAVA_HOME environment variable
using the following command:
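One way is to ask the alternatives system which java is in use:
hduser@laptop:~$ update-alternatives --config java
With only OpenJDK 7 installed, this points at /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java, so the JDK directory is /usr/lib/jvm/java-7-openjdk-amd64.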
hduser@laptop:~$ vi ~/.bashrc
#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END
Note that JAVA_HOME should be set to the path just above '.../bin/', i.e. the JDK installation directory (here /usr/lib/jvm/java-7-openjdk-amd64).
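After saving the file, reload it so that the new variables take effect in the current shell:
hduser@laptop:~$ source ~/.bashrc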
2. /usr/local/hadoop/etc/hadoop/hadoop-env.sh
We need to set JAVA_HOME by modifying hadoop-env.sh file.
hduser@laptop:~$ vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
Adding the above statement in the hadoop-env.sh file ensures that the value
of JAVA_HOME variable will be available to Hadoop whenever it is started up.
3. /usr/local/hadoop/etc/hadoop/core-site.xml:
hduser@laptop:~$ vi /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>
 <property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system. A URI whose
  scheme and authority determine the FileSystem implementation. The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class. The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
 </property>
</configuration>
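The hadoop.tmp.dir directory above must exist and be owned by hduser before Hadoop is started; if it does not exist yet, something like this will create it:
hduser@laptop:~$ sudo mkdir -p /app/hadoop/tmp
hduser@laptop:~$ sudo chown hduser:hadoop /app/hadoop/tmp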
4. /usr/local/hadoop/etc/hadoop/mapred-site.xml
hduser@laptop:~$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
The mapred-site.xml file is used to specify which framework is being used for
MapReduce.
We need to enter the following content in between the
<configuration></configuration> tag:
<configuration>
 <property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at. If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
 </property>
</configuration>
5. /usr/local/hadoop/etc/hadoop/hdfs-site.xml
Before editing this file, we need to create two directories which will contain the
namenode and the datanode for this Hadoop installation.
This can be done using the following commands:
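For the paths used in hdfs-site.xml below, that would be:
hduser@laptop:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
hduser@laptop:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
hduser@laptop:~$ sudo chown -R hduser:hadoop /usr/local/hadoop_store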
Open the file and enter the following content in between the
<configuration></configuration> tag:
hduser@laptop:~$ vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
 <property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
 </property>
 <property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
 </property>
 <property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
 </property>
</configuration>
Format the New Hadoop Filesystem
Now the Hadoop file system needs to be formatted so that we can start to use it. The format command must be issued with write permission, since it creates a current directory under the /usr/local/hadoop_store/hdfs/namenode folder:
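With the Hadoop bin directory on the PATH (as set in ~/.bashrc), the filesystem can be formatted with:
hduser@laptop:~$ hadoop namenode -format
Note that this command should be run only once, before Hadoop is used for the first time; running it again will erase all data in the HDFS.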
Starting Hadoop
Now it's time to start the newly installed single node cluster.
We can use start-all.sh, or start-dfs.sh and start-yarn.sh separately.
k@laptop:~$ cd /usr/local/hadoop/sbin
k@laptop:/usr/local/hadoop/sbin$ ls
distribute-exclude.sh start-all.cmd stop-balancer.sh
hadoop-daemon.sh start-all.sh stop-dfs.cmd
hadoop-daemons.sh start-balancer.sh stop-dfs.sh
hdfs-config.cmd start-dfs.cmd stop-secure-dns.sh
hdfs-config.sh start-dfs.sh stop-yarn.cmd
httpfs.sh start-secure-dns.sh stop-yarn.sh
kms.sh start-yarn.cmd yarn-daemon.sh
mr-jobhistory-daemon.sh start-yarn.sh yarn-daemons.sh
refresh-namenodes.sh stop-all.cmd
slaves.sh stop-all.sh
hduser@laptop:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/04/18 16:43:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-laptop.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-laptop.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-laptop.out
15/04/18 16:43:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-laptop.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-laptop.out
hduser@laptop:/usr/local/hadoop/sbin$ jps
9026 NodeManager
7348 NameNode
9766 Jps
8887 ResourceManager
7507 DataNode
The output means that we now have a functional instance of Hadoop running
on our VPS (Virtual private server).
Stopping Hadoop
hduser@laptop:/usr/local/hadoop/sbin$ pwd
/usr/local/hadoop/sbin
hduser@laptop:/usr/local/hadoop/sbin$ ls
distribute-exclude.sh    start-all.cmd        stop-balancer.sh
hadoop-daemon.sh         start-all.sh         stop-dfs.cmd
hadoop-daemons.sh        start-balancer.sh    stop-dfs.sh
hdfs-config.cmd          start-dfs.cmd        stop-secure-dns.sh
hdfs-config.sh           start-dfs.sh         stop-yarn.cmd
httpfs.sh                start-secure-dns.sh  stop-yarn.sh
kms.sh                   start-yarn.cmd       yarn-daemon.sh
mr-jobhistory-daemon.sh  start-yarn.sh        yarn-daemons.sh
refresh-namenodes.sh     stop-all.cmd
slaves.sh                stop-all.sh
hduser@laptop:/usr/local/hadoop/sbin$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
15/04/18 15:46:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: no secondarynamenode to stop
15/04/18 15:46:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop
I hope this site is informative and helpful.
Using Hadoop
If we have an application that is set up to use Hadoop, we can fire that up and
start using it with our Hadoop installation!
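For a quick smoke test, the example jobs bundled with the 2.6.0 distribution can be used (the examples jar sits under share/hadoop/mapreduce in a default layout like ours):
hduser@laptop:~$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 5
If the job completes and prints an estimate of Pi, MapReduce and HDFS are working end to end.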