Apache Mesos

Presentation Transcript


  1. Apache Mesos http://incubator.apache.org/mesos @ApacheMesos Benjamin Hindman – @benh

  2. history • Berkeley research project including Benjamin Hindman, Andy Konwinski, Matei Zaharia, Ali Ghodsi, Anthony D. Joseph, Randy Katz, Scott Shenker, Ion Stoica • http://incubator.apache.org/mesos/research.html

  3. Mesos aims to make it easier to build distributed applications/frameworks and share cluster resources

  4. applications/frameworks services analytics

  6. how? [diagram: Hadoop and service frameworks running side by side on top of Mesos, which spans all the cluster's nodes]

  7. level of abstraction • more easily share resources via multi-tenancy and elasticity (improving utilization) • run on bare metal or virtual machines – develop against the Mesos API, then run in a private datacenter (Twitter), in the cloud, or both!

  8. static partitioning vs. sharing with Mesos [diagram: separate statically partitioned Hadoop, Spark, and service clusters vs. one shared Mesos cluster]

  9. features • APIs in C++, Java, and Python • high availability via ZooKeeper • isolation via Linux control groups (LXC)

  10. in progress • official Apache release • more Linux cgroup support (OOM and I/O; in particular, networking) • resource usage monitoring and reporting • new allocators (priority based, usage based) • new frameworks (Storm) • scheduler management (launching, watching, re-launching, etc.)

  11. 400+ nodes running production services • genomics researchers using Hadoop and Spark • Spark in use by Yahoo! Research • Spark for analytics • Hadoop and Spark used by machine learning researchers Your Name Here

  12. demonstration

  13. linux environment • $ yum install -y gcc-c++ • $ yum install -y java-1.6.0-openjdk-devel.x86_64 • $ yum install -y make.x86_64 • $ yum install -y patch.x86_64 • $ yum install -y python26-devel.x86_64 • $ yum install -y ant.noarch
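Before building, it can help to confirm the prerequisites actually landed on the PATH. A small convenience sketch (not from the slides; the `check_tools` helper name is ours) that reports anything missing:

```shell
# Sanity-check build prerequisites: print a line for each missing tool.
# check_tools is a hypothetical helper, not part of Mesos.
check_tools() {
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
  done
}

# The compilers and build tools the yum packages above provide.
check_tools g++ javac make patch ant
```

Silent output means everything needed for the build is installed.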

  14. get mesos • $ wget http://people.apache.org/~benh/mesos-0.9.0-incubating-RC3/mesos-0.9.0-incubating.tar.gz • $ tar zxvf mesos-0.9.0-incubating.tar.gz • $ cd mesos-0.9.0

  15. build mesos • $ mkdir build • $ cd build • $ ../configure.amazon-linux-64 • $ make • $ make install

  16. deploy mesos (1) • /usr/local/var/mesos/deploy/masters: • ec2-50-17-28-135.compute-1.amazonaws.com • /usr/local/var/mesos/deploy/slaves: • ec2-184-73-142-43.compute-1.amazonaws.com • ec2-107-22-145-31.compute-1.amazonaws.com
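The two deploy lists above are plain text files, one hostname per line. A minimal sketch of writing them (hostnames are the slides' examples; we default to a local `./deploy` directory so the sketch runs without root, whereas the real path is `/usr/local/var/mesos/deploy`):

```shell
# Sketch: creating the Mesos deploy lists.
# MESOS_DEPLOY_DIR defaults to a local path for illustration only;
# on a real install it would be /usr/local/var/mesos/deploy.
MESOS_DEPLOY_DIR=${MESOS_DEPLOY_DIR:-./deploy}
mkdir -p "$MESOS_DEPLOY_DIR"

# masters: one master hostname per line.
echo "ec2-50-17-28-135.compute-1.amazonaws.com" > "$MESOS_DEPLOY_DIR/masters"

# slaves: one slave hostname per line.
cat > "$MESOS_DEPLOY_DIR/slaves" <<'EOF'
ec2-184-73-142-43.compute-1.amazonaws.com
ec2-107-22-145-31.compute-1.amazonaws.com
EOF
```

`mesos-start-cluster.sh` (slide 18) reads these lists to decide where to launch the master and slave daemons.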

  17. deploy mesos (2) • on slaves (i.e., ec2-184-73-142-43.compute-1.amazonaws.com, ec2-107-22-145-31.compute-1.amazonaws.com) • /usr/local/var/mesos/conf/mesos.conf: • master=ec2-50-17-28-135.compute-1.amazonaws.com

  18. deploy mesos (3) • $ /usr/local/sbin/mesos-start-cluster.sh

  19. build hadoop • $ make hadoop • $ mv hadoop/hadoop-0.20.205.0 /etc/hadoop • $ cp protobuf-2.4.1.jar /etc/hadoop • $ cp src/mesos-0.9.0.jar /etc/hadoop

  20. configure hadoop (1) • conf/mapred-site.xml: • <configuration> • <property> • <name>mapred.job.tracker</name> • <value>ip-10-108-207-105.ec2.internal:9001</value> • </property> • <property> • <name>mapred.jobtracker.taskScheduler</name> • <value>org.apache.hadoop.mapred.MesosScheduler</value> • </property> • <property> • <name>mapred.mesos.master</name> • <value>ip-10-108-207-105.ec2.internal:5050</value> • </property> • </configuration>
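As a properly indented file, the three properties above can be generated with a here-document (hostnames are the slides' examples; writing to the current directory rather than `conf/` is just so the sketch is self-contained):

```shell
# Sketch: the mapred-site.xml from the slide as it would actually appear,
# written via a here-document. Destination would be conf/mapred-site.xml.
cat > mapred-site.xml <<'EOF'
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>ip-10-108-207-105.ec2.internal:9001</value>
  </property>
  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.MesosScheduler</value>
  </property>
  <property>
    <name>mapred.mesos.master</name>
    <value>ip-10-108-207-105.ec2.internal:5050</value>
  </property>
</configuration>
EOF
```

The second property swaps Hadoop's default task scheduler for the Mesos-backed one; the third tells it where the Mesos master listens (port 5050).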

  21. configure hadoop (2) • conf/hadoop-env.sh: • #!/bin/sh • export JAVA_HOME=/usr/lib/jvm/jre • # Google protobuf (necessary for running the MesosScheduler). • export PROTOBUF_JAR=${HADOOP_HOME}/protobuf-2.4.1.jar • # Mesos. • export MESOS_JAR=${HADOOP_HOME}/mesos-0.9.0.jar • # Native Mesos library. • export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so • export HADOOP_CLASSPATH=${HADOOP_HOME}/build/contrib/mesos/classes:${MESOS_JAR}:${PROTOBUF_JAR} • ...

  22. configure hadoop (3) • conf/core-site.xml: • <configuration> • <property> • <name>fs.default.name</name> • <value>hdfs://ip-10-108-207-105.ec2.internal:9000</value> • </property> • </configuration>

  23. configure hadoop (4) • conf/masters: • ec2-50-17-28-135.compute-1.amazonaws.com • conf/slaves: • ec2-184-73-142-43.compute-1.amazonaws.com • ec2-107-22-145-31.compute-1.amazonaws.com

  25. starting hadoop • $ pwd • /etc/hadoop • $ ./bin/hadoop jobtracker

  26. running wordcount • $ ./bin/hadoop jar hadoop-examples-0.20.205.0.jar wordcount macbeth.txt output
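What the wordcount job computes can be approximated locally with standard tools: split the input into words, then count occurrences of each. `sample.txt` here is a tiny stand-in for the `macbeth.txt` used on the cluster:

```shell
# Local approximation of wordcount: one word per line, then count duplicates.
# sample.txt stands in for macbeth.txt; real input lives in HDFS.
printf 'to be or not to be\n' > sample.txt
tr -s ' ' '\n' < sample.txt | sort | uniq -c | sort -rn
```

The MapReduce version does the same thing at scale: the map phase emits `(word, 1)` pairs and the reduce phase sums them per word.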

  27. starting another hadoop • <configuration> • <property> • <name>mapred.job.tracker</name> • <value>ip-10-108-207-105.ec2.internal:9002</value> • </property> • <property> • <name>mapred.job.tracker.http.address</name> • <value>0.0.0.0:50032</value> • </property> • <property> • <name>mapred.task.tracker.http.address</name> • <value>0.0.0.0:50062</value> • </property> • </configuration>

  28. get and build spark • $ git clone git://github.com/mesos/spark.git • $ cd spark • $ git checkout --track origin/mesos-0.9 • $ sbt/sbt compile

  29. configure spark • $ cp conf/spark-env.sh.template conf/spark-env.sh • conf/spark-env.sh: • #!/bin/sh • export SCALA_HOME=/root/scala-2.9.1-1 • export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so • export SPARK_MEM=1g

  30. run spark shell • $ pwd • /root/spark • $ MASTER=$HOSTNAME:5050 ./spark-shell

  31. setting log_dir • on slaves (i.e., ec2-184-73-142-43.compute-1.amazonaws.com, ec2-107-22-145-31.compute-1.amazonaws.com) • /usr/local/var/mesos/conf/mesos.conf: • master=ec2-50-17-28-135.compute-1.amazonaws.com • log_dir=/tmp/mesos
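The slave configuration is a simple `key=value` file. A sketch of writing it (values from the slide; we default to a local `./mesos.conf` so the sketch runs without root, whereas the real path on each slave is `/usr/local/var/mesos/conf/mesos.conf`):

```shell
# Sketch: the per-slave mesos.conf with log_dir added.
# MESOS_CONF defaults to a local path for illustration only.
MESOS_CONF=${MESOS_CONF:-./mesos.conf}
cat > "$MESOS_CONF" <<'EOF'
master=ec2-50-17-28-135.compute-1.amazonaws.com
log_dir=/tmp/mesos
EOF
```

After restarting the slaves (slide 32), the daemon logs land under `/tmp/mesos` instead of being discarded.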

  32. re-deploy mesos • $ /usr/local/sbin/mesos-stop-slaves.sh • $ /usr/local/sbin/mesos-start-slaves.sh
