Hadoop Training In Hyderabad

Hadoop Training by OrienIT, the best institute for Hadoop and Oracle courses. We provide a unique Hadoop training course in Hyderabad, taught by experienced, expert faculty.

Presentation Transcript


  1. Hadoop Training In Hyderabad by OrienIT. Address: Flat No 204, Annapurna Block, Aditya Enclave, Ameerpet, Hyderabad - 500038. Phone: 040 65142345, Mobile: 970 320 2345, Email: info@orienit.com http://www.orienit.com

  2. About Hadoop • Hadoop • Hadoop Cluster • A Real Time Example • Core Components of Hadoop Cluster • Slaves http://www.orienit.com

  3. About Us • ORIEN IT has been established with the primary objective of offering superior IT training services and support for different business organizations. • ORIEN IT provides integrated IT training services and a complete range of IT consulting support to cater to the requirements of both individual learners and corporate clients. http://www.orienit.com

  4. Hadoop • Hadoop is an open-source framework that supports the processing of large data sets in a distributed computing environment. Hadoop consists of MapReduce, the Hadoop Distributed File System (HDFS) and a number of related projects such as Apache Hive, HBase and ZooKeeper. MapReduce and HDFS are the main components of Hadoop. http://www.orienit.com
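To make the MapReduce part concrete, here is a minimal word-count sketch written against the standard Hadoop MapReduce API (org.apache.hadoop.mapreduce). The class names and the word-count task itself are illustrative assumptions, not something taken from these slides:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Illustrative word-count job: the map phase emits (word, 1) pairs and the
// reduce phase sums the counts for each word.
public class WordCount {

    // Mappers run in parallel across the cluster, one per input split.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);          // emit (word, 1)
            }
        }
    }

    // Reducers receive all values for one key and aggregate them.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable count : values) {
                sum += count.get();
            }
            result.set(sum);
            context.write(key, result);            // emit (word, total)
        }
    }
}
```

In this division of labour, HDFS stores the input splits and the final output, while MapReduce schedules the map and reduce tasks across the cluster.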

  5. Hadoop Cluster • Normally, any set of loosely or tightly connected computers that work together as a single system is called a cluster. In simple words, a computer cluster used for Hadoop is called a Hadoop cluster. A Hadoop cluster is a special type of computational cluster designed for storing and analyzing vast amounts of unstructured data in a distributed computing environment. These clusters run on low-cost commodity computers. http://www.orienit.com

  6. Hadoop Cluster Hadoop clusters are often referred to as "shared nothing" systems because the only thing shared between nodes is the network that connects them. Large Hadoop clusters are arranged in several racks. Network traffic between nodes in the same rack is much more desirable than network traffic across racks. http://www.orienit.com
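Rack awareness is usually configured by pointing Hadoop at a topology script via the net.topology.script.file.name property in core-site.xml. The sketch below sets that property programmatically purely for illustration; the script path /etc/hadoop/rack-topology.sh is a hypothetical example, not part of the original slides:

```java
import org.apache.hadoop.conf.Configuration;

// Illustrative only: in a real cluster this property is normally set in
// core-site.xml rather than in code.
public class RackAwarenessConfig {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // The topology script receives node addresses and prints one rack
        // path (e.g. /datacenter1/rack1) per node. HDFS uses this mapping to
        // place replicas on separate racks, and schedulers use it to prefer
        // the cheaper in-rack traffic mentioned above.
        conf.set("net.topology.script.file.name",
                 "/etc/hadoop/rack-topology.sh");   // hypothetical script path

        System.out.println(conf.get("net.topology.script.file.name"));
    }
}
```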

  7. A Real Time Example As a real-world example, consider Yahoo's Hadoop cluster: they have more than 10,000 machines running Hadoop and nearly 1 petabyte of user data. http://www.orienit.com

  8. Core Components of Hadoop Cluster • A Hadoop cluster has 3 components: • Client • Master • Slave • The role of each component is described in the following slides. http://www.orienit.com

  9. Client • The client is neither master nor slave; rather, it loads data into the cluster, submits MapReduce jobs describing how the data should be processed, and then retrieves the results after job completion. http://www.orienit.com
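A hedged sketch of that client workflow, reusing the WordCount classes from the earlier example: copy a local file into HDFS, describe the processing as a MapReduce job, submit it, and wait for completion. The paths /tmp/books.txt, /user/demo/input and /user/demo/output are assumed examples:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // 1. Load the data into the cluster (local file -> HDFS).
        FileSystem fs = FileSystem.get(conf);
        fs.copyFromLocalFile(new Path("/tmp/books.txt"),      // assumed local file
                             new Path("/user/demo/input/"));  // assumed HDFS dir

        // 2. Describe how the data should be processed and submit the job.
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountClient.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setReducerClass(WordCount.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("/user/demo/input/"));
        FileOutputFormat.setOutputPath(job, new Path("/user/demo/output/"));

        // 3. Wait for completion; the results can then be read back from
        //    /user/demo/output/ to see the response.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```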

  10. Masters The Masters consist of 3 components: NameNode, Secondary NameNode and JobTracker. http://www.orienit.com

  11. Masters consist of 3 components • NameNode: The NameNode does NOT store the files, only the files' metadata. In a later section we will see that it is actually the DataNodes that store the files. The NameNode oversees the health of the DataNodes and coordinates access to the data stored on them. The NameNode keeps track of all file-system-related information, such as: • Which section of a file is saved in which part of the cluster • The last access time for each file • User permissions, i.e. which users have access to a file • JobTracker: The JobTracker coordinates the parallel processing of data using MapReduce. • Secondary NameNode: Don't get confused by the name "Secondary". The Secondary NameNode is NOT a backup or high-availability node for the NameNode. http://www.orienit.com
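As a hedged sketch of the metadata the NameNode serves, the FileSystem client API below asks for a file's owner, permissions, last access time and block locations, i.e. which DataNodes hold which section of the file. The path /user/demo/input/books.txt is an assumed example:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NameNodeMetadata {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/user/demo/input/books.txt");   // assumed path

        // File-level metadata tracked by the NameNode (no file contents).
        FileStatus status = fs.getFileStatus(file);
        System.out.println("owner: " + status.getOwner()
                + ", permissions: " + status.getPermission());
        System.out.println("last access time: " + status.getAccessTime());

        // Which section (block) of the file lives on which DataNodes.
        for (BlockLocation block :
                fs.getFileBlockLocations(status, 0, status.getLen())) {
            System.out.println("block at offset " + block.getOffset()
                    + " stored on " + String.join(", ", block.getHosts()));
        }
    }
}
```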

  12. Slaves • Slave nodes are the majority of machines in a Hadoop cluster and are responsible for • Storing the data • Processing the computations http://www.orienit.com

  13. Best Hadoop Training In Hyderabad http://www.orienit.com
