
Chapter 10 Cloud-Enabling Dust Storm Forecasting



  1. Chapter 10 Cloud-Enabling Dust Storm Forecasting Qunying Huang, Jizhe Xia, Manzhu Yu, Karl Benedict and Myra Bambacus

  2. Learning Objectives • General computing challenges for computing-intensive applications • How cloud computing can help address those challenges • Configure an HPC cluster on the cloud • Deploy the dust storm model onto the cloud • Run the dust storm model on the cloud • Performance and cost analysis

  3. Learning Materials • Videos: • Chapter_10_Video.mp4 • Scripts, files, and other materials: • mirror.tar.gz

  4. Learning Modules • Dust storm modeling and challenges • Dust storm model cloud deployment • General steps • Special considerations • Use case: Arizona Phoenix 2011 July 05 • Nested modeling • Cloud performance and efficiency analysis • Conclusion and discussions

  5. Dust Storm Hazards • Illness & disease • Traffic & car accidents • Air pollution • Ecological systems • Desertification • Global/regional climate Phoenix Dust Storm, a "100-Year Event", July 5, 2011

  6. Dust storm models • Eta-4bin (Kallos et al., 1997; Nickovic et al., 1997) • Low resolution (30 km) • Large area • Eta-8bin (Nickovic et al., 1997; Nickovic et al., 2011) • 8 categories of dust particles • Low resolution (30 km) • Large area • NMM-dust (Janjic et al., 2001; Janjic, 2003) • High resolution (3 km) • Small area

  7. Dust storm modeling • Dust storm modeling (Purohit et al., 1999) • Divide the domain into three-dimensional grid cells • Solve a series of numerical equations on each cell • Numerical calculations are repeated on each cell
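  As an illustrative sketch (generic, not specific to any one of the models above), the numerical equations solved on each grid cell typically take the form of an advection–diffusion equation for the dust concentration:

  \frac{\partial C}{\partial t} + \mathbf{u} \cdot \nabla C = \nabla \cdot (K \nabla C) + S - D

  where C is the dust concentration in a cell, \mathbf{u} the wind field, K the turbulent diffusion coefficient, and S and D the emission (source) and deposition terms; the model repeats this calculation on every cell of the three-dimensional grid at each time step.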

  8. Dust Storm Forecasting Challenges • Computational intensity: cost grows roughly as O(n^4) with resolution, since refining the three-dimensional grid by a factor of n multiplies the number of cells by n^3, and the numerical stability condition forces roughly n times more time steps • Massive input/output of model data • Time constraint: finish one-day forecasting for the Southwest U.S. within 2 hours

  9. Learning Modules • Dust storm modeling and challenges • Dust storm model cloud deployment • General steps • Special considerations • Use case: Arizona Phoenix 2011 July 05 • Nested modeling • Cloud performance and efficiency analysis • Conclusion and discussions

  10. Dust Storm Model Deployment onto the Cloud
  1. Authorize network access
  2. Launch one cluster instance as the head node
  3. SSH to the instance
  4. Install the software dependencies and middleware, e.g., NFS and MPICH2
  5. Create a new AMI from the head node and start an instance from the new AMI
  6. Configure the middleware on both nodes to enable communication
  7. Create an EBS volume
  8. Mount the volume to the NFS exporting directory
  9. Deploy the model on the NFS exporting directory
  10. Export the NFS directory to the computing node
  11. Configure and test the model
  12. Create two new AMIs from the running instances
  Video: Chapter_10_Video.mp4

  11. Dust Storm Model Deployment onto the Cloud • Step 1. Authorize network access • Configure firewall rules for the security group "hpc" • Open port 22 for SSH • Open ports 9000–10000 for MPICH2 Video: Chapter_10_Video.mp4 0:00 – 1:35
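  A command-line sketch of the same step, assuming the modern AWS CLI (the deck itself predates it, so the tool and the wide-open SSH CIDR are assumptions):
  aws ec2 create-security-group --group-name hpc --description "HPC cluster for the dust storm model"
  aws ec2 authorize-security-group-ingress --group-name hpc --protocol tcp --port 22 --cidr 0.0.0.0/0 # SSH
  aws ec2 authorize-security-group-ingress --group-name hpc --protocol tcp --port 9000-10000 --source-group hpc # MPICH2 traffic between cluster members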

  12. Dust Storm Model Deployment onto the Cloud • Step 2. Launch one cluster instance as the head node • Use an Amazon cluster compute AMI • Create an SSH key pair "hpc", and save the private key file to local storage as "hpc.pem" • Use the security group "hpc" • Step 3. SSH to the instance • Use the private key file "hpc.pem" • Change the file permission of "hpc.pem" to 600 (read/write for the owner only) Video: Chapter_10_Video.mp4 1:35 – 4:10
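  From the local machine, the connection looks like this (a minimal sketch; the instance hostname is a placeholder):
  chmod 600 hpc.pem # restrict the key file to the owner
  ssh -i hpc.pem root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com # log in as root, as the following slides assume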

  13. Dust Storm Model Deployment onto the Cloud Step 4. Install software dependencies Install NFS and MPICH2:
  [root@domU-head ~] yum install gcc gcc-c++ autoconf automake
  [root@domU-head ~] yum -y install nfs-utils nfs-utils-lib system-config-nfs # install NFS
  [root@domU-head ~] wget http://www.mcs.anl.gov/research/projects/mpich2staging/goodell/downloads/tarballs/1.5/mpich2-1.5.tar.gz # download the MPICH2 package
  [root@domU-head ~] tar -zxvf mpich2-1.5.tar.gz # unzip
  [root@domU-head ~] mkdir /home/clouduser/mpich2-install # create an installation directory
  [root@domU-head ~] cd mpich2-1.5
  [root@domU-head ~] ./configure --prefix=/home/clouduser/mpich2-install --enable-g=all --enable-fc --enable-shared --enable-sharedlibs=gcc --enable-debuginfo
  [root@domU-head ~] make # build MPICH2
  [root@domU-head ~] make install # install MPICH2
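  A quick sanity check that the MPICH2 binaries work (a minimal sketch; the PATH line assumes the install prefix above):
  [root@domU-head ~] export PATH=/home/clouduser/mpich2-install/bin:$PATH
  [root@domU-head ~] mpiexec -n 2 hostname # should print the head node's hostname twice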

  14. Dust Storm Model Deployment onto the Cloud Step 4 (cont.). Configure and start NFS:
  [root@domU-head ~] mkdir /headMnt # create an NFS export directory
  [root@domU-head ~] echo "/headMnt *(rw)" >> /etc/exports
  [root@domU-head ~] exportfs -ra
  [root@domU-head ~] service nfs start # start up NFS
  Step 5. Create a new AMI from the head node • Create a computing node from the new AMI Video: Chapter_10_Video.mp4 4:10 – 13:02
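  To confirm the export is in place (a minimal check, assuming the export above):
  [root@domU-head ~] showmount -e localhost # should list /headMnt as exported to *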

  15. Dust Storm Model Deployment onto the Cloud Step 6. Configure the head node and computing node Keyless SSH access from the head node to the computing nodes:
  [root@domU-head ~] vi /etc/hosts # add the computing node to the head node's hosts list
  [root@domU-head ~] ssh-keygen -t rsa # create a key pair on the head node
  [root@domU-computing ~] mkdir -p /root/.ssh/
  [root@domU-computing ~] scp root@domU-head:/root/.ssh/id_rsa.pub /root/.ssh/ # copy the public key from the head node to the computing node
  [root@domU-computing ~] cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
  Video: Chapter_10_Video.mp4 13:02 – 17:55
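  A quick test from the head node (assuming domU-computing now resolves via /etc/hosts):
  [root@domU-head ~] ssh domU-computing hostname # should print the computing node's hostname without a password prompt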

  16. Dust Storm Model Deployment onto the Cloud Step 7. Create an EBS volume • Attach it to the head node Step 8. Mount the volume to the NFS exporting directory • Make a file system on the EBS volume • Mount the volume at the head node directory /headMnt (see the command sketch below) Video: Chapter_10_Video.mp4 13:02 – 20:40
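  A minimal command sketch (the device name /dev/xvdf is an assumption; check the actual device with fdisk -l after attaching the volume):
  [root@domU-head ~] mkfs -t ext4 /dev/xvdf # make a file system on the EBS volume
  [root@domU-head ~] mount /dev/xvdf /headMnt # mount it at the NFS export directory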

  17. Dust Storm Model Deployment onto the Cloud Step 9. Deploy the model • Download the model under the NFS directory (/headMnt) • Export the NFS directory (/headMnt) to the computing node Step 10. Export the NFS directory to the computing node
  [root@domU-computing ~] mkdir /computingMnt # create the mount directory
  [root@domU-computing ~] mount -t nfs -o rw domU-head:/headMnt /computingMnt # mount the head node's NFS export
  Video: Chapter_10_Video.mp4 20:40 – 24:19
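  To verify the mount from the computing node:
  [root@domU-computing ~] df -h /computingMnt # the export from domU-head should appear here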

  18. Dust Storm Model Deployment onto the Cloud Step 11. Configure and test the model
  • cd /headMnt/mirror/performancetest/scripts
  • ./run_test.sh ec2 >& ec2.log &
  Step 12. Create two new AMIs from the running instances Video: Chapter_10_Video.mp4 24:19 – 28:18
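  Once NFS and MPICH2 are in place, a model run across both nodes follows the usual MPICH2 pattern (a hypothetical sketch; the binary name model.exe and the process count are placeholders, not the chapter's actual test script):
  [root@domU-head ~] printf "domU-head\ndomU-computing\n" > machinefile # one host per line
  [root@domU-head ~] mpiexec -f machinefile -n 16 /headMnt/model.exe # launch 16 MPI processes across the cluster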

  19. Learning Modules • Dust storm modeling and challenges • Dust storm model cloud deployment • General steps • Special considerations • Use case: Arizona Phoenix 2011 July 05 • Nested modeling • Cloud performance and efficiency analysis • Conclusion and discussions

  20. Special considerations • Configuring a virtual cluster environment • Create a placement group • Loosely coupled nested modeling and cloud computing • Auto-scaling • Write scripts with the EC2 APIs (see the sketch below)
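  A hypothetical auto-scaling sketch using the modern AWS CLI (the AMI ID, instance type, two-node cluster size per AOI, and the placement group hpc-pg created on the next slide are all assumptions; the chapter's own scripts used the EC2 APIs of the time):
  #!/bin/bash
  # launch one two-node virtual cluster per area of interest (AOI)
  N_AOI=18
  for i in $(seq 1 $N_AOI); do
    aws ec2 run-instances --image-id ami-xxxxxxxx --count 2 \
      --instance-type cc2.8xlarge --key-name hpc \
      --security-groups hpc --placement GroupName=hpc-pg
  done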

  21. Create placement group [Figure: creating a placement group via the AWS management console, steps 1–2]
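  The same step from the command line, as slide 30 asks about; a minimal sketch with the modern AWS CLI (the group name hpc-pg is a placeholder):
  aws ec2 create-placement-group --group-name hpc-pg --strategy cluster # the cluster strategy packs instances onto low-latency networking
  aws ec2 describe-placement-groups # confirm the group was created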

  22. Learning Modules • Dust storm modeling and challenges • Dust storm model cloud deployment • General steps • Special considerations • Use case: Arizona Phoenix 2011 July 05 • Nested modeling • Cloud performance and efficiency analysis • Conclusion and discussions

  23. Nested model • Nested model: provides high-resolution results for one or several areas of interest (AOIs) within a large domain.
  Tight nesting • A single model run with multiple resolutions • Requires modifications of the models (Michalakes et al., 1998) • Requires prior knowledge of where to place the high-resolution nested subdomains
  Loose nesting • ETA-8bin identifies the AOIs <low resolution (30 km), large area> • NMM-dust performs forecasting over the AOIs <high resolution (3 km), small area>
  [Figure: Domain #1 (ETA-8bin, 30 km) containing nested subdomains #2 and #3 (NMM-dust, 3 km) centered on the AOIs]

  24. Loosely Nested Model [Figure: low-resolution model results. Figure a. Distribution of 18 AOIs: the low-resolution model domain and the sub-regions (Areas of Interest, AOIs) identified for high-resolution model execution]

  25. Learning Modules • Dust storm modeling and challenges • Dust storm model cloud deployment • General steps • Special considerations • Use case: Arizona Phoenix 2011 July 05 • Nested modeling • Cloud performance and efficiency analysis • Conclusion and discussions

  26. Run Under Cloud >> Performance analysis • A one-day forecast over the Southwest U.S. (37 × 20 degrees) completes in 2 hours • The 18 subregions run on 18 Amazon EC2 virtual clusters

  27. Run Under Cloud >> Cost analysis (cont.) The yearly cost of a local cluster is around 12.7 times higher than that of the EC2 cloud service when 28 EC2 instances (with 400 CPU cores) are leveraged to handle the high-resolution and concurrent computing requirements for a duration of 48 hours. The arithmetic behind this comparison is sketched below.
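  The on-demand side of the comparison has the general form

  C_{\text{EC2}} = N_{\text{instances}} \times T_{\text{hours}} \times r_{\text{hourly}}

  As a purely hypothetical illustration (the hourly rate is not the chapter's figure): at r = \$1.30 per instance-hour, 28 instances for 48 hours cost 28 \times 48 \times 1.30 \approx \$1{,}747 per forecasting event, whereas a local cluster of equivalent capacity incurs purchase, power, and administration costs year-round regardless of how often dust storm events occur.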

  28. Run Under Cloud >> Cost analysis

  29. Conclusion • Large-scale dust storm forecasting is computable (a one-day forecast in 2 hours) through: • A loosely coupled nested model • Cloud computing • Capable of provisioning a large amount of computing power in a few minutes • Economically sustains periods of low access rates and low-resolution model runs

  30. Discussion questions • What are the computing challenges for dust storm forecasting? • What are the general steps to deploy the dust storm model on the cloud? • Which instance type is better for dust storm forecasting, a regular instance or an HPC instance? Why? • How do you configure a virtual high performance computing (HPC) cluster to support computing-intensive applications? • How is the Elastic Block Store (EBS) service used to support the dust storm model deployment to the cloud? • How do you create a placement group for HPC instances using both the Amazon web management console and command-line tools? Why is this step needed? • Compared to the Chapter 5 deployment of general applications onto the cloud, what are the special considerations for the dust storm model? • Why can cloud computing achieve cost-efficiency? • Why does cloud computing provide a good solution to support disruptive-event (e.g., dust storm) simulation? • What are the remaining issues in using cloud computing to support dust storm simulation?

  31. References • Huang Q., Yang C., Benedict K., Chen S., Rezgui A., Xie J., 2013. Enabling Dust Storm Forecasting Using Cloud Computing. International Journal of Digital Earth. DOI:10.1080/17538947.2012.749949. • Huang Q., Yang C., Benedict K., Rezgui A., Xie J., Xia J., Chen S., 2012. Using Adaptively Coupled Models and High-Performance Computing for Enabling the Computability of Dust Storm Forecasting. International Journal of Geographical Information Science. DOI:10.1080/13658816.2012.715650. • Xie J., Yang C., Zhou B., Huang Q., 2010. High Performance Computing for the Simulation of Dust Storms. Computers, Environment and Urban Systems, 34(4): 278–290.
