Edureka Interview Questions - HDFS
What is Big Data?
Big Data is nothing but an assortment of data so huge and complex that it becomes very tedious to capture, store, process, retrieve and analyze it with the help of on-hand database management tools or traditional data processing techniques.
Know more about Big Data
Can you give some examples of Big Data?
There are many real-life examples of Big Data! Facebook is generating 500+ terabytes of data per day, NYSE (New York Stock Exchange) generates about 1 terabyte of new trade data per day, and a jet airline collects 10 terabytes of sensor data for every 30 minutes of flying time. All these are day-to-day examples of Big Data!
Can you give a detailed overview about the Big Data being generated by Facebook?
As of December 31, 2012, there were 1.06 billion monthly active users on Facebook and 680 million mobile users. On average, 3.2 billion likes and comments are posted every day on Facebook, and 72% of the web audience is on Facebook. And why not! There are so many activities going on Facebook, from wall posts, sharing images and videos, to writing comments and liking posts. In fact, Facebook started using Hadoop in mid-2009 and was one of its initial users.
Also read Facebook's Giant Leap with Big Data
What are the four characteristics of Big Data?
According to IBM, the four characteristics of Big Data are: Volume: Facebook generating 500+ terabytes of data per day. Velocity: analyzing 2 million records each day to identify the reason for losses. Variety: images, audio, video, sensor data, log files, etc. Veracity: biases, noise and abnormality in data.
How Big is Big Data?
With time, data volume is growing exponentially. Earlier we used to talk about megabytes or gigabytes. But the time has arrived when we talk about data volume in terms of terabytes, petabytes and even zettabytes! Global data volume was around 1.8 ZB in 2011 and is expected to be 7.9 ZB in 2015. It is also known that global information doubles every two years!
How is analysis of Big Data useful for organizations?
Effective analysis of Big Data provides a lot of business advantage, as organizations learn which areas to focus on and which areas are less important. Big Data analysis provides early key indicators that can prevent a company from a huge loss or help it grasp a great opportunity with open hands! A precise analysis of Big Data helps in decision making! For instance, nowadays people rely so much on Facebook and Twitter before buying any product or service. All thanks to the Big Data explosion.
Who are Data Scientists?
Data scientists are soon replacing business analysts or data analysts. Data scientists are experts who find solutions to analyze data. Just as we have web analysts, we have data scientists who have good business insight as to how to handle a business challenge. Sharp data scientists are not only involved in dealing with business problems, but also in choosing the relevant issues that can bring value-addition to the organization.
More about Data Scientists
What is Hadoop?
Hadoop is a framework that allows for distributed processing of large data sets across clusters of commodity computers using a simple programming model.
More on Hadoop
Why do we need Hadoop?
Every day a large amount of unstructured data is getting dumped into our machines. The major challenge is not storing large data sets in our systems but retrieving and analyzing the big data in organizations, especially when that data is present on different machines at different locations. In this situation, a necessity for Hadoop arises. Hadoop has the ability to analyze the data present on different machines at different locations very quickly and in a very cost-effective way. It uses the concept of MapReduce, which enables it to divide a query into small parts and process them in parallel. This is also known as parallel computing. A minimal code sketch of this idea follows. The following link Why Hadoop gives a detailed explanation about why Hadoop is gaining so much popularity!
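To make the MapReduce idea concrete, here is a minimal word-count sketch in Java against the standard Hadoop MapReduce API. The class names and command-line argument layout are illustrative assumptions, not something prescribed by this document.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: each mapper works on one input split in parallel
  // and emits (word, 1) pairs.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: all counts for the same word arrive together
  // and are summed into the final result.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The framework splits the input among many mappers running in parallel on different nodes and then routes all values for the same key to one reducer, which is exactly the divide-and-process-in-parallel behavior described above.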
What are some of the characteristics of Hadoop framework?
The Hadoop framework is written in Java. It is designed to solve problems that involve analyzing large data sets (e.g. petabytes). The programming model is based on Google's MapReduce, and the infrastructure is based on Google's distributed file system (GFS). Hadoop handles large files/data throughput and supports data-intensive distributed applications. Hadoop is scalable, as more nodes can easily be added to it.
Give a brief overview of Hadoop history.
In 2002, Doug Cutting created an open-source web crawler project (Nutch). In 2003 and 2004, Google published its GFS and MapReduce papers. In 2006, Doug Cutting developed the open-source MapReduce and HDFS project that became Hadoop. In 2008, Yahoo ran a 4,000-node Hadoop cluster and Hadoop won the terabyte sort benchmark. In 2009, Facebook launched SQL support for Hadoop (Hive).
Give examples of some companies that are using the Hadoop framework?
A lot of companies are using the Hadoop framework, such as Cloudera, EMC, MapR, Hortonworks, Amazon, Facebook, eBay, Twitter, Google and so on.
What is the basic difference between traditional RDBMS and Hadoop?
A traditional RDBMS is used for transactional systems to report and archive data, whereas Hadoop is an approach for storing huge amounts of data in a distributed file system and processing it. An RDBMS will be useful when you want to seek one record from Big Data, whereas Hadoop will be useful when you want Big Data in one shot and will perform analysis on it later.
What is structured and unstructured data?
Structured data is the data that is easily identifiable as it is organized in a structure. The most common form of structured data is a database where specific
information is stored in tables, that is, rows and columns. Unstructured data refers to any data that cannot be identified easily. It could be in the form of
images, videos, documents, email, logs and random text. It is not in the form of rows and columns.
What are the core components of Hadoop?
Core components of Hadoop are HDFS and MapReduce. HDFS is basically used to store large data sets and MapReduce is used to process such large
data sets.
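To see the HDFS side in action, here is a hedged sketch using the Hadoop FileSystem Java API to store a file and read it back. The path /user/demo/sample.txt is a hypothetical example, and the configuration is assumed to be picked up from the cluster's core-site.xml and hdfs-site.xml.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadWrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // reads core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/user/demo/sample.txt"); // hypothetical path

    // Write: the client streams data; HDFS splits it into blocks
    // and replicates them across datanodes behind the scenes.
    try (FSDataOutputStream out = fs.create(file, true)) {
      out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8));
    }

    // Read the file back line by line.
    try (BufferedReader reader =
             new BufferedReader(new InputStreamReader(fs.open(file)))) {
      System.out.println(reader.readLine());
    }
  }
}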
What is a job tracker?
Job tracker is a daemon that runs on the Namenode for submitting and tracking MapReduce jobs in Hadoop. It assigns the tasks to different task trackers. In a Hadoop cluster, there will be only one job tracker but many task trackers. It is the single point of failure for the Hadoop MapReduce service: if the job tracker goes down, all the running jobs are halted. It receives heartbeats from the task trackers, based on which the job tracker decides whether an assigned task is completed or not.
What is a task tracker?
Task tracker is also a daemon, and it runs on datanodes. Task trackers manage the execution of individual tasks on slave nodes. When a client submits a job, the job tracker will initialize the job, divide the work and assign it to different task trackers to perform MapReduce tasks. While performing this action, the task tracker will simultaneously communicate with the job tracker by sending heartbeats. If the job tracker does not receive a heartbeat from a task tracker within the specified time, it will assume that the task tracker has crashed and assign its tasks to another task tracker in the cluster.
Is the Namenode machine the same as the datanode machine in terms of hardware?
It depends upon the cluster you are trying to create. The Hadoop VM can be on the same machine or on another machine. For instance, in a single-node cluster, there is only one machine, whereas in a development or testing environment, the Namenode and datanodes are on different machines.
What is a heartbeat in HDFS?
A heartbeat is a signal indicating that a node is alive. A datanode sends heartbeats to the Namenode, and a task tracker sends heartbeats to the job tracker. If the Namenode or job tracker does not receive heartbeats, it will decide that there is some problem with the datanode, or that the task tracker is unable to perform the assigned task.
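The timing of these heartbeats is configurable. A hedged sketch of reading the relevant HDFS settings in Java follows; the property names are the ones used by Hadoop 2.x and may vary across versions, and the defaults shown are assumptions about a stock configuration.

import org.apache.hadoop.conf.Configuration;

public class HeartbeatConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // How often each datanode pings the Namenode, in seconds (default 3).
    System.out.println(conf.get("dfs.heartbeat.interval", "3"));

    // The Namenode re-checks for dead datanodes on this interval, in
    // milliseconds (default 300000); a datanode is declared dead only
    // after several missed heartbeats plus this recheck window.
    System.out.println(conf.get("dfs.namenode.heartbeat.recheck-interval", "300000"));
  }
}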
Are Namenode and job tracker on the same host?
No. In a practical environment, the Namenode runs on a separate host and the job tracker runs on a separate host.
What is a block in HDFS?
A block is the minimum amount of data that can be read or written. In HDFS, the default block size is 64 MB, in contrast to the block size of 8192 bytes in Unix/Linux. Files in HDFS are broken down into block-sized chunks, which are stored as independent units. HDFS blocks are large compared to disk blocks, particularly to minimize the cost of seeks.
If a particular file is 50 MB, will the HDFS block still consume 64 MB as the default size?
No, not at all! 64 MB is just a unit where the data will be stored. In this particular situation, only 50 MB will be consumed by an HDFS block and 14 MB will be free to store something else. It is the MasterNode that does data allocation in an efficient manner.
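A hedged sketch of inspecting how a stored file maps onto blocks through the standard FileStatus API follows; the file path is hypothetical, and the block size reported depends on the cluster's configuration (64 MB is the old default, newer clusters often use 128 MB).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockInfo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // Hypothetical file; replace with a real HDFS path.
    FileStatus status = fs.getFileStatus(new Path("/user/demo/sample.txt"));

    long blockSize = status.getBlockSize(); // e.g. 64 MB = 67108864 bytes
    long fileLen = status.getLen();

    // A 50 MB file occupies one block entry but only 50 MB on disk.
    long blocks = (fileLen + blockSize - 1) / blockSize; // ceiling division
    System.out.printf("length=%d bytes, blockSize=%d bytes, blocks=%d%n",
        fileLen, blockSize, blocks);
  }
}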
What are the benefits of block transfer?
A file can be larger than any single disk in the network. There's nothing that requires the blocks from a file to be stored on the same disk, so they can take
advantage of any of the disks in the cluster. Making the unit of abstraction a block rather than a file simplifies the storage subsystem. Blocks provide fault
tolerance and availability. To insure against corrupted blocks and disk and machine failure, each block is replicated to a small number of physically separate
machines (typically three). If a block becomes unavailable, a copy can be read from another location in a way that is transparent to the client.
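Replication is controlled per file. A minimal sketch of setting it through the Java API follows; the dfs.replication property and the FileSystem.setReplication call are standard Hadoop, while the path is a hypothetical example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("dfs.replication", "3"); // default replication for newly created files

    FileSystem fs = FileSystem.get(conf);

    // The replication factor can also be changed per file after writing.
    fs.setReplication(new Path("/user/demo/sample.txt"), (short) 2); // hypothetical path
  }
}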
If we want to copy 10 blocks from one machine to another, but another machine can copy only 8.5 blocks, can the blocks be broken at the time of
replication?
In HDFS, blocks cannot be broken down. Before copying the blocks from one machine to another, the Master node will figure out what the actual amount of space required is, how many blocks are being used and how much space is available, and it will allocate the blocks accordingly.
How is indexing done in HDFS?
Hadoop has its own way of indexing. Depending upon the block size, once the data is stored, HDFS will keep on storing the last part of the data, which will say where the next part of the data is. In fact, this is the base of HDFS.
If a datanode is full, how is that identified?
When data is stored in a datanode, the metadata of that data is stored in the Namenode. So the Namenode will identify when the datanode is full.
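For illustration, a hedged sketch of querying overall capacity as aggregated by the Namenode follows, using the standard FsStatus API; cluster-wide figures like these are how fullness becomes visible to clients.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;

public class CapacityCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // The Namenode aggregates datanode block reports into these totals.
    FsStatus status = fs.getStatus();
    System.out.printf("capacity=%d used=%d remaining=%d%n",
        status.getCapacity(), status.getUsed(), status.getRemaining());
  }
}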
If datanodes increase, then do we need to upgrade the Namenode?
While installing the Hadoop system, the Namenode is determined based on the size of the cluster. Most of the time, we do not need to upgrade the Namenode because it does not store the actual data, but just the metadata, so such a requirement rarely arises.
Are job tracker and task trackers present in separate machines?
Yes, the job tracker and task trackers are present on different machines. The reason is that the job tracker is a single point of failure for the Hadoop MapReduce service. If it goes down, all running jobs are halted.
When we send data to a node, do we allow it to settle before sending more data to that node?
Yes, we do.
Does Hadoop always require digital data to process?
Yes. Hadoop always requires digital data to be processed.
On what basis will the Namenode decide which datanode to write on?
As the Namenode has the metadata (information) related to all the datanodes, it knows which datanode is free.
Doesn't Google have its very own version of DFS?
Yes, Google owns a DFS known as Google File System (GFS) developed by Google Inc. for its own use.
Who is a user in HDFS?
A user is like you or me, who has some query or who needs some kind of data.
Is client the end user in HDFS?
No. A client is an application which runs on your machine and is used to interact with the Namenode (job tracker) or datanodes (task trackers).
What is the communication channel between client and namenode/datanode?
The client talks to the Namenode and datanodes over Hadoop's own RPC protocol running on top of TCP/IP, not SSH; SSH is used only by the cluster start/stop scripts to launch the daemons.
What is a rack?
A rack is a storage area with all the datanodes put together. It is a physical collection of datanodes stored at a single location, and there can be multiple racks in a single location.