Big Data using Hadoop
Latest revision as of 21:07, 13 June 2017
The document, File:Big Data Workshop using Hadoop.pdf, provides an understanding of Big Data technology and its ecosystem, how to apply it in the enterprise, and covers HDFS, MapReduce processing, Hive, Sqoop, Pig, and HBase. The reader is expected to have background knowledge of cloud computing, e.g., Google Cloud.
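The MapReduce processing model mentioned above can be illustrated with a minimal word-count sketch. This is pure Python for illustration only, not how a job is submitted to a Hadoop cluster; the input lines are made up:

```python
# Minimal word-count sketch of the MapReduce model (illustrative only;
# real MapReduce jobs are distributed across a Hadoop cluster).
from itertools import groupby
from operator import itemgetter

def map_phase(lines):
    """Map step: emit a (word, 1) pair for every word in every line."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    """Shuffle/sort by key, then reduce: sum the counts per word."""
    for word, group in groupby(sorted(pairs, key=itemgetter(0)),
                               key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

# Hypothetical sample input standing in for files stored on HDFS.
lines = ["big data big ideas", "hadoop stores big data"]
counts = dict(reduce_phase(map_phase(lines)))
print(counts["big"])  # 3
```

The same map/shuffle/reduce shape underlies the Hadoop hands-on exercises listed below, with the framework handling the distribution and sorting between phases.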
The following topics are included in this document:
● Lecture: Introduction to Big Data Technology and Hadoop
● Lecture: Big Data Development Process
● Hands-On: Installing Hadoop and Ecosystem Components
● Hands-On: Configuring HDFS
● Hands-On: Importing Data to HDFS
● Hands-On: Reviewing, Retrieving, Deleting Data from HDFS
● Lecture: Understanding HBase
● Hands-On: HBase Examples
● Lecture: Understanding Hive
● Hands-On: Creating Table and Retrieving Data using Hive
● Lecture: Understanding Impala
● Hands-On: Creating Table and Retrieving Data using Impala
● Lecture: Understanding Oozie
● Hands-On: Running Oozie
● Lecture: Understanding Sqoop
● Hands-On: Loading Data from DBMS to Hadoop HDFS
● Lecture: Understanding Flume
● Hands-On: Streaming Twitter Data to Hadoop HDFS
● Lecture: Understanding Avro-tools
● Lecture: Introduction to Kafka
● Hands-On: Realtime streaming using Kafka
● Introduction to Spark
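The HDFS hands-on topics above (importing, reviewing, retrieving, and deleting data) revolve around the HDFS file system shell. A sketch of the typical commands, assuming a configured Hadoop installation and illustrative paths:

```
# Create a directory on HDFS and import a local file into it
hdfs dfs -mkdir -p /user/demo/input
hdfs dfs -put localdata.csv /user/demo/input/

# Review and retrieve: list the directory, print a file, copy it back locally
hdfs dfs -ls /user/demo/input
hdfs dfs -cat /user/demo/input/localdata.csv
hdfs dfs -get /user/demo/input/localdata.csv retrieved.csv

# Delete a file, then the directory recursively
hdfs dfs -rm /user/demo/input/localdata.csv
hdfs dfs -rm -r /user/demo/input
```

These commands require a running HDFS instance, so they are a reference for the hands-on sessions rather than something to run standalone.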