Top 10 Lessons Learned From Deploying Hadoop in A Private Cloud
Agenda
Introduction
The Problem
The Solution
Top 10 Lessons
Final Thoughts
Q&A
OpenLogic, Inc.
Introduction
Rod Cope
CTO & Founder of OpenLogic
25 years of software development experience
IBM Global Services, Anthem, General Electric
OpenLogic
Open Source Support, Governance, and Scanning Solutions
Certified library with SLA support on 500+ Open Source packages
Over 200 Enterprise customers
The Problem
Big Data
All the world's Open Source Software
Metadata, code, indexes
Individual tables contain many terabytes
Relational databases aren't scale-free
Growing every day
Need real-time random access to all data
Long-running and complex analysis jobs
The Solution
Hadoop, HBase, and Solr
Hadoop: distributed file system, map/reduce
HBase: NoSQL data store, column-oriented
Solr: search server based on Lucene
All are scalable, flexible, fast, well-supported, and used in production environments
Solution Architecture
[Architecture diagram: Web Browser, Scanner Client, Ruby on Rails, Resque Workers, Stargate, Nginx & Unicorn, MySQL (live replication), Solr, HBase]
Hadoop Implementation
Private Cloud
100+ CPU cores
100+ terabytes of disk
Machines don't have identity
Add capacity by plugging in new machines
Configuration is Key
Many moving parts; pay attention to the details:
Operating system: max open files, sockets, and other limits
Hadoop: max Map/Reduce jobs, memory, disk
HBase: region size, memory
Solr: merge factor, norms, memory
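These knobs live in different places: the OS limits in /etc/security/limits.conf, Hadoop's in its site XML files, HBase's in hbase-site.xml. A sketch of the OS and Hadoop side, using the 0.20-era property names current at the time of this talk; the values are illustrative assumptions, not recommendations from the slides:

```
# /etc/security/limits.conf -- raise file-descriptor limits for the hadoop user
hadoop  soft  nofile  32768
hadoop  hard  nofile  32768

# hdfs-site.xml
#   dfs.datanode.max.xcievers            -- concurrent DataNode transfer threads
#                                           (HBase needs this well above the default)
# mapred-site.xml
#   mapred.tasktracker.map.tasks.maximum -- max concurrent map tasks per node
```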
Upgrade your machine BIOS, network card BIOS, and all hardware drivers
Example: certain default configurations of Dell boxes on CentOS/RHEL 5.x with Broadcom NICs will drop packets and cause other problems under high load
Hadoop stresses every bit of networking code in Java and tends to expose all the cracks.
This bug was fixed in JDK 1.6.0_18 (after 6 years).
Commodity Hardware
Commodity hardware != 3-year-old desktop
Dual quad-core, 32GB RAM, 4+ disks
Don't bother with RAID on Hadoop data disks
Be wary of non-enterprise drives
Hadoop datanode gets remaining drives
Redundant enterprise switches
Dual- and quad-gigabit NICs
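Giving the datanode the remaining drives just means listing one HDFS data directory per physical disk in dfs.data.dir (the 0.20-era property name); the mount points below are illustrative:

```
<!-- hdfs-site.xml: one data directory per physical disk, no RAID -->
<property>
  <name>dfs.data.dir</name>
  <value>/data1/hdfs,/data2/hdfs,/data3/hdfs,/data4/hdfs</value>
</property>
```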
HBase
Needs 5+ machines to stretch its legs
Depends on ZooKeeper; low latency is important
Don't let it run low on memory
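The ZooKeeper quorum and the region server heap are the usual first things to pin down. Host names and sizes below are illustrative assumptions, not values from the talk:

```
# hbase-site.xml
#   hbase.zookeeper.quorum = zk1,zk2,zk3   (odd-sized ensemble on low-latency links)

# conf/hbase-env.sh -- keep region servers from running low on memory
export HBASE_HEAPSIZE=8000   # MB
```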
At OpenLogic, we spread raw source data across many machines and hard drives via NFS
Be very careful with NFS configuration; it can hang machines
How do I find my data when a primary key won't cut it? Solr to the rescue
Very fast, highly scalable search server based on Lucene, with built-in sharding and replication
Dynamic schema, powerful query language, faceted search
Accessible via a simple REST-like web API with XML, JSON, Ruby, and other data formats
Solr
Sharding
Query any server in the group; it executes the same query against all other servers and returns the aggregated result to the original caller
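A sharded Solr request is just an ordinary query with a shards parameter listing the participating cores; whichever server receives it fans the query out and merges the results. A minimal sketch of building such a request URL, with hypothetical host and core names (the query string is assumed to be URL-encoded already):

```java
import java.util.List;

public class ShardedSolrQuery {

    // Builds a Solr select URL that fans out across shards. Any shard in
    // the list can receive the request; it queries the others and merges
    // the results before answering the original caller.
    static String buildUrl(String anyShard, List<String> allShards, String query) {
        return "http://" + anyShard + "/select"
             + "?q=" + query
             + "&shards=" + String.join(",", allShards);
    }

    public static void main(String[] args) {
        List<String> shards = List.of(
            "solr1:8983/solr/code",   // hypothetical shard hosts
            "solr2:8983/solr/code",
            "solr3:8983/solr/code");
        System.out.println(buildUrl("solr1:8983/solr/code", shards, "class_name:Logger"));
    }
}
```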
OpenLogic
Solr farm, sharded, cross-replicated, fronted with HAProxy
Load balanced writes across masters, reads across slaves and masters
Billions of lines of code in HBase, all indexed in Solr for real-time search in multiple ways
Over 20 Solr fields indexed per source file
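One way to express "writes across masters, reads across slaves and masters" in HAProxy is two listeners with different server pools; a minimal sketch with hypothetical host names:

```
# haproxy.cfg (sketch)
listen solr_writes
    bind *:8984
    balance roundrobin
    server master1 solr-master1:8983 check
    server master2 solr-master2:8983 check

listen solr_reads
    bind *:8985
    balance roundrobin
    server master1 solr-master1:8983 check
    server master2 solr-master2:8983 check
    server slave1  solr-slave1:8983  check
    server slave2  solr-slave2:8983  check
```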
Expect to discover new and better ways of modeling your data and processes
Don't be afraid to start over once or twice
Java

public List filterLongerThan( List list, int length ) {
    List result = new ArrayList();
    Iterator iter = list.iterator();
    while ( iter.hasNext() ) {
        String item = (String) iter.next();
        if ( item.length() <= length ) {
            result.add( item );
        }
    }
    return result;
}
Groovy
list = ["Rod", "Neeta", "Eric", "Missy"]
shorts = list.findAll { name -> name.size() <= 4 }
println shorts.size
shorts.each { name -> println name }

-> 2
-> Rod
-> Eric
Operating System
Kernel panics, zombie processes, dropped packets
HBase
Backup, replication, and indexing solutions in flux
Solr
Several competing solutions around cloud-like scalability and fault-tolerance, including ZooKeeper and Hadoop integration
Final Thoughts
You can host big data in your own private cloud
Tools are available today that didn't exist a few years ago
Fast to prototype; production readiness takes time
Expect to invest in training and support
Public clouds
Great for learning, experimenting, testing
Best for bursts vs. sustained loads
Beware latency and the expense of long-term Big Data storage
Q&A