Product Description
OVERVIEW
Our Big Data Hadoop Certification Training course is a 39-hour classroom course. Companies around the world are finding it increasingly difficult to organize and manage large volumes of data. Apache Hadoop is the most popular framework for processing Big Data. Hadoop provides rich and deep analytics capabilities, and it is making inroads into the traditional BI analytics world. In this course, participants will learn about the business benefits and use cases for Hadoop and its ecosystem, and how to install, maintain, and optimize Hadoop. The course introduces the various components of the Hadoop ecosystem (HDFS, MapReduce, Pig, Hive, Sqoop, Flume, Oozie, Spark, and HBase).
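To give a flavor of the hands-on material, the sketch below shows the canonical MapReduce word-count job written against the standard org.apache.hadoop.mapreduce Java API. It is an illustration only, not taken from the course labs; the class name and the input/output paths are placeholders.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Illustrative sketch: the classic word-count job, as in the Hadoop tutorials.
public class WordCount {

  // Mapper: emit (word, 1) for every token in each input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sum the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // combiner reuses the reducer
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```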
AUDIENCE PROFILE: Coders, Developers, Administrators, DBAs, Students, QA Engineers, BI/BA Professionals, Data Analysts, Architects, Data Scientists, Business Managers, and Project Managers in IT who want to jump into the Big Data world
PRE-REQUISITES:
Software Knowledge: Participants should have knowledge in the following areas:
- OOP concepts, preferably with Core Java
- Linux usage and File System operations (basic)
- Shell Scripting (basic)
- SQL (basic)
Hardware: Laptop or desktop with at least:
- Storage – 20+ GB free space (an SSD is preferable for speed)
- RAM – 4+ GB
- CPU – dual-core or better
OS: Windows, Linux, or macOS
COURSE TYPE: Classroom
DURATION: 39 Hours
COURSE DESCRIPTION
Techno Canada Centre of Excellence’s Big Data Hadoop Certification Training course is an interactive course with practical work in the form of lab assignments, projects, and case-study discussions.
Our course helps you prepare for Hadoop certification.
On completing the course, you will receive a “Certificate of Completion” from Techno Canada Centre of Excellence.
Our Big Data Hadoop course covers:
- Introduction to Big Data and Hadoop
- Hadoop Administration – Getting Started with Hadoop Setup
- Hadoop Architecture: HDFS
- Hadoop Architecture: MapReduce Framework
- Data Warehousing – Pig
- Data Warehousing – Hive
- NoSQL Databases – HBase, Cassandra, MongoDB
- Import/Export Structured Data – Sqoop
- Import Semi-Structured Streaming Data – Flume
- Workflows using Oozie
- Fast Processing using the Apache Spark Ecosystem (see the sketch after this list)
- Big Data Real Time Scenarios
- Hadoop Use Cases and Case Studies
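To illustrate why Spark is taught as the fast-processing layer of the ecosystem, here is the same word count sketched with Spark’s Java RDD API: the whole job fits in a few lines and runs in memory, in contrast to the MapReduce boilerplate above. As before, the class name and paths are placeholders, not course material.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

// Illustrative sketch: word count with Spark's RDD API.
public class SparkWordCount {
  public static void main(String[] args) {
    // The master URL is normally supplied by spark-submit at launch time.
    SparkConf conf = new SparkConf().setAppName("spark word count");
    try (JavaSparkContext sc = new JavaSparkContext(conf)) {
      JavaRDD<String> lines = sc.textFile(args[0]); // input path, e.g. on HDFS
      JavaPairRDD<String, Integer> counts = lines
          .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
          .mapToPair(word -> new Tuple2<>(word, 1))
          .reduceByKey(Integer::sum);
      counts.saveAsTextFile(args[1]); // output path
    }
  }
}
```

Such a job would typically be packaged as a JAR and launched with spark-submit, with input and output paths pointing at HDFS, which ties together the HDFS and Spark modules listed above.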