
QuakeSim Work: Web Services, Portlets, Real Time Data Services

Geophysicists and computer scientists collaborate to build cyber-infrastructure for geophysical research: portlet-based portals, interactive maps, and sensor grids for real-time data analysis.

Presentation Transcript


  1. QuakeSim Work: Web Services, Portlets, Real Time Data Services Marlon Pierce (mpierce@cs.indiana.edu) Contributions: Ahmet Sayar, Galip Aydin, Mehmet Aktas, Harshawardhan Gadgil, Zhigang Qi, Zao Liu Community Grids Lab Indiana University Project Funding: NASA AIST, ACCESS

  2. QuakeSim Project Overview • QuakeSim has been funded by CT, AIST, and ACCESS. • Collaboration of geophysicists and computer scientists to build cyber-infrastructure for geophysical research. • CI research and development includes • Portlet-based portals (AJAX enabled) • Geographical Information System services • Application services to run codes.

  3. QuakeSim Project Development Overview • Portlet-based portal components allow different portlets to be exchanged between projects. • Form-based portlets --> interactive maps. • These are clients to Web services. • Shared with collaborators on the REASoN portal. • Sensor Grid: topic-based publish/subscribe systems support operations on streaming data. • Web services allow request/response style access to data and codes. • GIS services (WMS, WFS). • “Execution grid” services for running codes: RDAHMM, ST_Filter. • Application-specific WSDL sits on top of generic code management services (see the service interface sketch below). • GPS daily archive Web Services are provided by Scripps.
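
To make the “application-specific WSDL over generic code management services” idea concrete, here is a minimal sketch of what such a request/response execution service interface could look like in Java (JAX-WS style). The interface name, methods, and parameters are illustrative assumptions, not the actual QuakeSim WSDL:

```java
import javax.jws.WebMethod;
import javax.jws.WebService;

// Hypothetical application-specific facade (here for RDAHMM) whose
// WSDL would sit on top of generic job-management operations.
@WebService
public interface RdahmmExecutionService {

    // Submit a GPS time series for analysis; returns a job handle.
    @WebMethod
    String submitAnalysis(String projectName, String gpsDataUrl, int numModelStates);

    // Poll the underlying code management service for job status.
    @WebMethod
    String getStatus(String jobHandle);

    // Retrieve a URL for the computed state segmentation results.
    @WebMethod
    String getResultsUrl(String jobHandle);
}
```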

  4. Evolving Project Philosophy • A methodology for interoperability between portal projects exists today. • Portlets, web services. • Scripps and IU are proving this. • We must continue to move away from the portal priesthood and towards a “scientific mashup” model. • We still need to develop interesting services and client libraries. • But we should make it easy for application scientists to • Add their own applications. • Add their own data. • Make their own web applications (i.e., mashups).

  5. IU Portlet Development We use JSR 168 portlets to build sharable portal plugins.
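
A minimal JSR 168 portlet looks like the following sketch. Any JSR 168 compliant container can host it, which is what makes these components sharable between portal projects; the class name and markup are illustrative, not taken from the QuakeSim codebase:

```java
import java.io.IOException;
import java.io.PrintWriter;

import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

// Minimal JSR 168 portlet: the container calls doView() to render
// this plugin's fragment of the overall portal page.
public class HelloQuakeSimPortlet extends GenericPortlet {

    @Override
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<p>Hello from a sharable QuakeSim-style portlet.</p>");
    }
}
```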

  6. Portlet Summary [Table: status of each portlet; * = developed, needs to be converted into a portlet]

  7. RDAHMM Portlet: Main Navigation

  8. RDAHMM Project Set Up

  9. RDAHMM GRWS Query Interface

  10. RDAHMM Results Page

  11. Real Time RDAHMM Portlet

  12. Station Monitor Portlet

  13. ST_Filter Portlets

  14. IU Web Service Work

  15. Web Services

  16. Sensor Grid Overview Publish/subscribe infrastructure for handling real time data.

  17. Real-Time Services for GPS Observations [Diagram: Input Signal → Filter → Output Signal] • Real-time data processing is supported by employing filters around a publish/subscribe messaging system. • The filters are small applications extended from a generic Filter class to inherit publish and subscribe capabilities.
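
A sketch of the filter pattern described above. The Filter base class, method names, and topic strings are assumptions for illustration; the real classes wrap NaradaBrokering's publish/subscribe API:

```java
// Sketch of the filter pattern: a filter inherits publish/subscribe
// plumbing from a generic base class and implements only the
// per-message transformation. All names are illustrative assumptions.
public abstract class Filter {

    protected final String inTopic;   // topic this filter subscribes to
    protected final String outTopic;  // topic this filter publishes to

    protected Filter(String inTopic, String outTopic) {
        this.inTopic = inTopic;
        this.outTopic = outTopic;
    }

    // Invoked by the messaging layer for each message on inTopic.
    public final void onMessage(byte[] message) {
        publish(process(message));
    }

    // Subclasses supply the transformation (e.g., RYO binary -> ASCII).
    protected abstract byte[] process(byte[] message);

    protected void publish(byte[] message) {
        // ... hand the transformed message to the broker on outTopic ...
    }
}

// Example subclass: decode binary RYO position messages into ASCII.
class RyoToAsciiFilter extends Filter {

    RyoToAsciiFilter() {
        super("SOPAC/GPS/CRTN01/RYO", "SOPAC/GPS/CRTN01/ASCII");
    }

    @Override
    protected byte[] process(byte[] ryoMessage) {
        // ... parse the RYO binary format and emit ASCII positions ...
        return ryoMessage; // placeholder for the actual decoder
    }
}
```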

  18. Filter Chains

  19. NaradaBrokering Topics

  20. Real-Time positions on Google maps

  21. Real-Time Station Position Changes

  22. RDAHMM + Real-Time GPS Integration

  23. SensorGrid Tests Galip Aydin – Zhigang Qi 11/16/2006

  24. SensorGrid Tests • Two Major Goals: System Stability and Scalability • Ensuring stability of the Filter Services for continuous operation. • Finding the maximum number of publishers (sensors) and clients that can be supported with a single broker. • Investigate if system scales for large number of sensors and clients.

  25. Test Methodology • The test system consists of a NaradaBrokering server and a three-filter chain for publishing, converting, and receiving RYO messages. • We take four timings (T1–T4) along the chain to determine the mean end-to-end delivery time of GPS measurements: Ttransfer = (T2 – T1) + (T4 – T3) • The tests were run for at least 24 hours.
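
In code the measurement is simple timestamp arithmetic. This sketch assumes one plausible reading of the four timings (T1: publisher sends, T2: converter receives, T3: converter sends, T4: end client receives), so that subtracting the converter's processing time leaves pure transfer time:

```java
public final class TransferTimer {

    // Assumed roles: t1 = publisher sends, t2 = converter receives,
    // t3 = converter sends, t4 = subscriber receives. The sum
    // (t2 - t1) + (t4 - t3) excludes the converter's processing time.
    public static long transferMillis(long t1, long t2, long t3, long t4) {
        return (t2 - t1) + (t4 - t3);
    }
}
```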

  26. 1 – System Stability Test • The basic system: three filters and one broker. • The average transfer time shows that continuous operation does not degrade system performance.

  27. 2 – Multiple Publishers Test • We add more GPS networks by running more publishers. • The results show that 1000 publishers can be supported with no performance loss; the 1000 ceiling is an operating system limit (open file descriptors; see slide 29).

  28. 3 – Multiple Clients Test • We add more clients by running multiple Simple Filters. • The system can support as many as 1000 clients with very little performance degradation. [Chart: transfer time as clients are added, up to 1000 clients]

  29. Extending Scalability • The limit of the basic system appears to be 1000 clients or publishers. • This is due to an operating system restriction on open file descriptors. • To overcome this limit we create NaradaBrokering networks by linking multiple brokers (see the client-assignment sketch below). • We run 2 brokers to support 1500 clients. • The number of brokers can be increased indefinitely, so we can potentially support any number of publishers and subscribers. • This still has to be tested, of course.
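
One simple way to spread subscribers across a linked broker network is round-robin assignment. This is a hedged sketch, not the NaradaBrokering API; the broker URIs and the assignment policy are assumptions:

```java
import java.util.List;

// Illustrative client-side load spreading across linked brokers.
public class BrokerAssigner {

    private final List<String> brokerUris; // e.g., "tcp://brokerA:port", ...
    private int next = 0;

    public BrokerAssigner(List<String> brokerUris) {
        this.brokerUris = brokerUris;
    }

    // Round-robin: client i connects to broker (i mod N), so 1500
    // clients across 2 brokers puts roughly 750 on each, matching
    // the multiple-brokers test configuration.
    public synchronized String nextBroker() {
        String uri = brokerUris.get(next);
        next = (next + 1) % brokerUris.size();
        return uri;
    }
}
```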

  30. 4 – Multiple Brokers Test • Messages published to the first broker can be received from the second broker. • We take timings on each broker. • The results show that performance is very good and similar to the single-broker test.

  31. 4 – Multiple Brokers Test [Chart: 750 clients on each of the two brokers]

  32. Test Results • The RYO Publisher filter publishes a 24-hour archive of the CRTN_01 GPS network, which contains 9 GPS stations. • The single-broker configuration can support 1000 clients or networks (9000 stations). • The system can be scaled up by creating NaradaBrokering broker networks. • Message order was preserved in all tests.

  33. Federating Map Servers Zao Liu, Marlon Pierce, Geoffrey Fox Community Grids Laboratory Indiana University

  34. Integrating Map Servers • Geographical Information Systems combine online dynamic maps and databases. • Many GIS software packages exist. • GIS servers around the state of Indiana: • ESRI ArcIMS and ArcMap Server (Marion, Vanderburgh, Hancock, Kosciusko, Huntington, Tippecanoe) • Autodesk MapGuide (Hamilton, Hendricks, Monroe, Wayne) • WTH Mapserver™ Web Mapping Application (Fulton, Cass, Daviess, City of Huntingburg), based on several open source projects (Minnesota MapServer) • Challenge: make 17 different county map servers from different companies work together. • Indiana has 92 counties, so potentially 92 different map servers.

  35. Considerations • We assume heterogeneity in GIS map and feature servers. • GIS services are organized bottom-up rather than top-down: local city governments, 92 different county governments, multiple Indiana state agencies, inter-state considerations (Ohio, Kentucky), and federal government data providers (Hazus). • We must find a way to federate existing services. • We must reconcile ESRI, Autodesk, OGC, Google Maps, and other technical approaches. • We must try to take advantage of Google, ESRI, etc. rather than compete. • We must have good performance and interactivity: servers must respond quickly, since launching queries to 20 different map servers is very inefficient. • Clients should have the simplicity and interactivity of Google Maps and similar AJAX-style applications.

  36. Caching and Tiling Maps • Federation through caching: • WMS and WFS resources are queried and the results are stored on the cache servers. • WMS images are stored as tiles. • These can be assembled into new images on demand (cf. Google Maps). • Projections and styling can be reconciled. • We can store multiple layers this way. • We build adapters that work with ESRI and OGC products, tailored to specific counties. • Serving images as tiles: • Client programs obtain images directly from our tile server. • That is, they don't go back to the original WMS for every request. • Similar approaches can be used to mediate WFS requests. • This works with Google Map-based clients. • The tile server can re-cache and tile on demand if tile sections are missing.
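
The tiling scheme itself is the standard Web Mercator arithmetic that Google Maps popularized: at zoom level z, the world is a 2^z × 2^z grid of 256×256-pixel tiles. A minimal sketch of how a tile cache might key its images follows; the class and cache-key format are illustrative assumptions:

```java
// Standard Web Mercator ("slippy map") tile indexing: a (zoom, x, y)
// triple keys each 256x256 tile, so cached WMS output can be cut into
// tiles and served directly to Google Map-based clients.
public final class TileKey {

    public static int tileX(double lonDeg, int zoom) {
        return (int) Math.floor((lonDeg + 180.0) / 360.0 * (1 << zoom));
    }

    public static int tileY(double latDeg, int zoom) {
        double latRad = Math.toRadians(latDeg);
        double y = (1.0 - Math.log(Math.tan(latRad) + 1.0 / Math.cos(latRad)) / Math.PI) / 2.0;
        return (int) Math.floor(y * (1 << zoom));
    }

    // A hypothetical cache key such as "parcels/14/4270/6215.png"
    // identifies one tile of one layer at one zoom level.
    public static String key(String layer, double latDeg, double lonDeg, int zoom) {
        return layer + "/" + zoom + "/" + tileX(lonDeg, zoom) + "/" + tileY(latDeg, zoom) + ".png";
    }
}
```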

  37. Map Server Example Marion and Hancock county parcel plots and IDs are overlaid on IU aerial photographic images that are accessed by this mashup using Google Map APIs. We cache and tile all the images from several different map servers. (Marion and Hancock actually use different commercial software.)
