Course Outline
Introduction
- Hadoop history and core concepts
- The Hadoop ecosystem
- Available distributions
- High-level architecture overview
- Common Hadoop myths
- Challenges of Hadoop (hardware and software)
- Labs: Discussion of your Big Data projects and challenges
Planning and installation
- Selecting software and Hadoop distributions
- Sizing the cluster and planning for future growth
- Hardware and network selection
- Rack topology
- Installation procedures
- Multi-tenancy configurations
- Directory structures and logs
- Benchmarking performance
- Labs: Cluster installation and running performance benchmarks
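As a sketch of what the benchmarking lab involves, standard Hadoop benchmarks such as TestDFSIO and TeraGen/TeraSort can be driven from the shell. Jar locations, file counts, and sizes below are illustrative and vary by distribution and Hadoop version:

```shell
# Measure write and read throughput with TestDFSIO
# (in Hadoop 2.x the jar lives under $HADOOP_HOME/share/hadoop/mapreduce)
hadoop jar hadoop-mapreduce-client-jobclient-*-tests.jar TestDFSIO \
    -write -nrFiles 10 -fileSize 128MB
hadoop jar hadoop-mapreduce-client-jobclient-*-tests.jar TestDFSIO \
    -read -nrFiles 10 -fileSize 128MB

# Generate ~1 GB of data (10 million 100-byte rows) and sort it
hadoop jar hadoop-mapreduce-examples-*.jar teragen 10000000 /bench/tera-in
hadoop jar hadoop-mapreduce-examples-*.jar terasort /bench/tera-in /bench/tera-out
```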
HDFS operations
- Key concepts (horizontal scaling, replication, data locality, rack awareness)
- Nodes and daemons (NameNode, Secondary NameNode, HA Standby NameNode, DataNode)
- Health monitoring
- Administration via command-line and browser interfaces
- Adding storage and replacing defective drives
- Labs: Familiarizing yourself with the HDFS command line
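The HDFS lab exercises center on commands like the following (paths are illustrative; on Hadoop 1.x the `hadoop fs` form is used instead of `hdfs dfs`):

```shell
# Report cluster capacity, live/dead DataNodes, and per-node usage
hdfs dfsadmin -report

# Check filesystem health, under-replicated blocks, and block placement
hdfs fsck / -files -blocks

# Basic file operations
hdfs dfs -mkdir -p /user/student/data
hdfs dfs -put access.log /user/student/data/
hdfs dfs -ls /user/student/data

# Change a file's replication factor and wait for re-replication to finish
hdfs dfs -setrep -w 2 /user/student/data/access.log
```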
Data ingestion
- Using Flume to ingest logs and other data into HDFS
- Using Sqoop to import from SQL databases to HDFS and export back to SQL
- Hadoop data warehousing with Hive
- Copying data between clusters (distcp)
- Utilizing S3 as a complement to HDFS
- Best practices and architectures for data ingestion
- Labs: Setting up and using Flume and Sqoop
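To give a flavor of the ingestion tooling, here is a sketch of Sqoop import/export and inter-cluster copying with distcp. Hostnames, database names, tables, and paths are illustrative:

```shell
# Import a table from MySQL into HDFS with Sqoop
sqoop import \
    --connect jdbc:mysql://dbhost/sales \
    --username etl --password-file /user/etl/.dbpass \
    --table orders \
    --target-dir /data/orders

# Export processed results back to a SQL table
sqoop export \
    --connect jdbc:mysql://dbhost/sales \
    --username etl --password-file /user/etl/.dbpass \
    --table order_summaries \
    --export-dir /data/order_summaries

# Copy a directory between clusters with distcp
hadoop distcp hdfs://nn1:8020/data/orders hdfs://nn2:8020/backup/orders
```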
MapReduce operations and administration
- Parallel computing before MapReduce: comparing HPC with Hadoop administration
- Managing MapReduce cluster loads
- Nodes and daemons (JobTracker, TaskTracker)
- Walk-through of the MapReduce UI
- MapReduce configuration
- Job configuration
- Optimizing MapReduce performance
- Fool-proofing MapReduce: guidance for programmers
- Labs: Running MapReduce examples
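A typical first lab runs one of the bundled example jobs and then manages it from the command line. Paths are illustrative and distribution-dependent; on Hadoop 1.x clusters the `hadoop job` form replaces `mapred job`:

```shell
# Run the bundled WordCount example against data already in HDFS
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
    wordcount /user/student/data /user/student/wc-out

# Inspect the result
hdfs dfs -cat /user/student/wc-out/part-r-00000 | head

# List running jobs, and kill one if needed (<job_id> is a placeholder)
mapred job -list
mapred job -kill <job_id>
```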
YARN: New architecture and capabilities
- YARN design goals and implementation architecture
- New actors: ResourceManager, NodeManager, ApplicationMaster
- Installing YARN
- Job scheduling under YARN
- Labs: Investigating job scheduling
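The scheduling lab relies on the YARN command-line tools, for example (`<application_id>` is a placeholder for a real application ID):

```shell
# List applications currently known to the ResourceManager
yarn application -list

# Show the NodeManagers registered with the cluster
yarn node -list

# Fetch the aggregated logs of a finished application
yarn logs -applicationId <application_id>
```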
Advanced topics
- Hardware monitoring
- Cluster monitoring
- Adding and removing servers, upgrading Hadoop
- Backup, recovery, and business continuity planning
- Oozie job workflows
- Hadoop high availability (HA)
- Hadoop Federation
- Securing your cluster with Kerberos
- Labs: Setting up monitoring
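A brief sketch of the HA and Kerberos material from the operator's side; the NameNode service IDs and Kerberos principal below are illustrative:

```shell
# Check which NameNode in an HA pair is active and which is standby
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# On a Kerberized cluster, obtain a ticket before talking to HDFS
kinit student@EXAMPLE.COM
klist
hdfs dfs -ls /
```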
Optional tracks
- Cloudera Manager for cluster administration, monitoring, and routine tasks; installation and usage. In this track, all exercises and labs are performed within the Cloudera distribution environment (CDH5)
- Ambari for cluster administration, monitoring, and routine tasks; installation and usage. In this track, all exercises and labs are performed within the Ambari cluster manager and Hortonworks Data Platform (HDP 2.0)
Requirements
- Proficiency in basic Linux system administration
- Basic scripting skills
Prior knowledge of Hadoop and distributed computing is not required; these topics are introduced and explained throughout the course.
Lab environment
Zero Install: There is no need to install Hadoop software on your personal machine. A fully operational Hadoop cluster will be provided for student use.
Students will need the following tools:
- An SSH client (Linux and Mac systems come with built-in SSH clients; for Windows, PuTTY is recommended)
- A web browser to access the cluster. We recommend using Firefox with the FoxyProxy extension installed
21 Hours
Testimonials (1)
Hands on exercises. Class should have been 5 days, but the 3 days helped to clear up a lot of questions that I had from working with NiFi already