
Email Trust in MobiCloud using Hadoop Framework Updates






Presentation Transcript


  1. Email Trust in MobiCloud using Hadoop Framework Updates Sayan Cole Jaya Chakladar Group No: 1

  2. Overview • Installation of Hadoop • Understanding the existing email trust system and its suitability as a MapReduce application

  3. Project Tasks (updated)

  4. Software and Hardware Requirements • Hadoop • Database software, e.g. MySQL, or data stored directly in Apache HDFS • 3 or 4 Android phones mapped to virtual machines in 2 different Linux boxes

  5. Hadoop Single Cluster Installation • Prerequisite: Java 6 • Add the Canonical partner repository to the apt sources • Update the source list • Install the JDK • Select Sun’s Java as the default on the machine • Add a dedicated Hadoop system user • Configure SSH • Configure SSH access for the Hadoop system user • Generate an SSH key for the Hadoop user • Enable SSH access to the local machine with the new key • Disable IPv6
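The steps above might look like the following shell session. This is a sketch for Ubuntu of that era; the release name in the repository line and the user/group names (`hduser`, `hadoop`) are assumptions, since the slide does not name them.

```shell
# Add the Canonical partner repo, refresh sources, install Sun Java 6
sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"
sudo apt-get update
sudo apt-get install sun-java6-jdk
# Make Sun's Java the machine default
sudo update-java-alternatives -s java-6-sun

# Dedicated Hadoop system user (names are illustrative)
sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser

# Passwordless SSH for the Hadoop user
su - hduser
ssh-keygen -t rsa -P ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh localhost    # verify login with the new key works

# Disable IPv6: append to /etc/sysctl.conf and reboot
#   net.ipv6.conf.all.disable_ipv6 = 1
#   net.ipv6.conf.default.disable_ipv6 = 1
#   net.ipv6.conf.lo.disable_ipv6 = 1
```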

  6. Hadoop Single Cluster Installation • Download Hadoop from an Apache mirror site and extract it • Set JAVA_HOME in /conf/hadoop-env.sh • Configure core-site.xml: set hadoop.tmp.dir to a local directory and set the HDFS variable • Configure mapred-site.xml to set the host and port of the MapReduce job tracker • Configure hdfs-site.xml to specify the number of replications for each file in the system
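For a single-node setup, the three configuration files above might look like this minimal sketch; the port numbers and the temp directory path are assumptions, not values given in the slides.

```xml
<!-- conf/core-site.xml -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>  <!-- local working directory, assumed path -->
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>  <!-- the HDFS variable -->
  </property>
</configuration>

<!-- conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>  <!-- host and port of the job tracker -->
  </property>
</configuration>

<!-- conf/hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>  <!-- one copy per file on a single node -->
  </property>
</configuration>
```

JAVA_HOME would be exported in conf/hadoop-env.sh, e.g. `export JAVA_HOME=/usr/lib/jvm/java-6-sun` (path assumed).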

  7. Hadoop Single Cluster Installation • Format the Hadoop HDFS name node – make sure data is backed up • Start a single-node cluster; this starts the name node, data node, job tracker & task tracker
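Sketched as commands, run from the Hadoop installation directory (the `jps` check is an addition, not from the slide):

```shell
# Format HDFS once -- this wipes the name node's metadata, so back up first
bin/hadoop namenode -format

# Start the single-node cluster:
# name node, data node, job tracker and task tracker
bin/start-all.sh

# List the running Java daemons to verify all four came up
jps
```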

  8. Hadoop Multiple Cluster Installation • Set up two single-node clusters before continuing • Designate one as master and the other as slave • Shut down the clusters on both machines • Update /etc/hosts on both machines with appropriate names (master and slave) and addresses • Configure SSH between master and slave: the Hadoop user must be able to connect to both master and slave • Passwordless connection
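The host mapping and SSH setup above can be sketched as follows; the IP addresses and the `hduser` user name are placeholders, since the slide does not specify them.

```shell
# /etc/hosts on BOTH machines (addresses are placeholders):
#   192.168.0.1  master
#   192.168.0.2  slave

# On master, as the Hadoop user: copy the public key to the slave
# so the master can reach the slave without a password
ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@slave

# Verify passwordless login to both hosts
ssh master
ssh slave
```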

  9. Hadoop Multiple Cluster Installation • Master node runs master daemons like the name node for HDFS and the job tracker • Both nodes run slave daemons like the data node for HDFS and the task tracker

  10. Hadoop Multiple Cluster Installation • Master vs. slave configuration: on the master, /conf/masters lists the master; /conf/slaves lists two entries, master and slave • Update core-site.xml on all machines to set fs.default.name to hdfs://master:&lt;port number&gt; • Update mapred-site.xml on all machines to set mapred.job.tracker to master:&lt;port number&gt; • Change the dfs.replication variable in hdfs-site.xml to the number of nodes available, 4 in our case • Format the name node
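The multi-node configuration might look like the sketch below. The port numbers are assumptions carried over from the single-node sketch; the slide only says &lt;port number&gt;.

```
# conf/masters (on master only)
master

# conf/slaves (on master; the master also runs a data node and task tracker)
master
slave

# On ALL machines:
# conf/core-site.xml   -> fs.default.name    = hdfs://master:54310
# conf/mapred-site.xml -> mapred.job.tracker = master:54311
# conf/hdfs-site.xml   -> dfs.replication    = 4
```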

  11. Hadoop Multiple Cluster Installation • Start up the multi-node cluster • Start the HDFS daemons: the name node daemon on the master and data node daemons on the slaves • Start the MapReduce daemons: the job tracker on the master and task trackers on the slaves
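The two start-up steps above map onto the stock start scripts, run on the master only (the `jps` check is an addition, not from the slide):

```shell
# Start HDFS: name node on master, data nodes on the listed slaves
bin/start-dfs.sh

# Start MapReduce: job tracker on master, task trackers on the listed slaves
bin/start-mapred.sh

# Run on each machine to see which daemons it is hosting
jps
```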

  12. Challenges faced so far • Multi-node setup errors

  13. Project Time Line
