Big Data using Hadoop: Difference between revisions
Revision as of 21:06, 13 June 2017
The document, [[File:Big_Data_Workshop_using_Hadoop.pdf]], provides an understanding of Big Data technology and its ecosystem, shows how to apply it in the enterprise, and covers HDFS, MapReduce processing, Hive, Sqoop, Pig, and HBase. The reader is expected to have background knowledge of cloud computing, e.g., [https://cloud.google.com Google Cloud].
The following topics are included in this document:
● Lecture: Introduction to Big Data Technology and Hadoop
● Lecture: Big Data Development
● Hands-On: Installing Hadoop and Ecosystem Components
● Hands-On: Configuring HDFS
● Hands-On: Importing Data to HDFS
● Hands-On: Reviewing, Retrieving, Deleting Data from HDFS
● Lecture: Understanding HBase
● Hands-On: HBase Examples
● Lecture: Understanding Hive
● Hands-On: Creating Table and Retrieving Data using Hive
● Lecture: Understanding Impala
● Hands-On: Creating Table and Retrieving Data using Impala
● Lecture: Understanding Oozie
● Hands-On: Running Oozie
● Lecture: Understanding Sqoop
● Hands-On: Loading Data from DBMS to Hadoop HDFS
● Lecture: Understanding Flume
● Hands-On: Streaming Twitter Data to Hadoop HDFS
● Lecture: Understanding Avro-tools
● Lecture: Introduction to Kafka
● Hands-On: Real-time Streaming using Kafka
● Lecture: Introduction to Spark
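The MapReduce processing model named above can be sketched in plain Python, with no Hadoop cluster required. The mapper, reducer, and in-memory shuffle below are illustrative stand-ins (the function names are our own) for the phases Hadoop distributes across worker nodes:

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    """Map phase: emit a (word, 1) pair for every word in a line."""
    for word in line.lower().split():
        yield (word, 1)

def reducer(word, counts):
    """Reduce phase: sum all counts observed for one key."""
    return (word, sum(counts))

def map_reduce(lines):
    # Map: run the mapper over every input line.
    pairs = [pair for line in lines for pair in mapper(line)]
    # Shuffle/sort: group intermediate pairs by key, as Hadoop
    # does between the map and reduce phases.
    pairs.sort(key=itemgetter(0))
    # Reduce: apply the reducer to each key's group of values.
    return dict(
        reducer(word, (count for _, count in group))
        for word, group in groupby(pairs, key=itemgetter(0))
    )

lines = ["big data big insight", "data pipelines move big data"]
print(map_reduce(lines))
# → {'big': 3, 'data': 3, 'insight': 1, 'move': 1, 'pipelines': 1}
```

In a real Hadoop job the mapper and reducer would run as separate tasks over HDFS blocks, but the data flow is the same three steps: map, shuffle/sort by key, reduce.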