The Agile Infrastructure Project: Monitoring Markus Schulz Pedro Andrade
Outline • Monitoring WG and AI • Today’s Monitoring in IT • Architecture Vision • Implementation Plan • Conclusions
Monitoring WG and AI Markus Schulz
Introduction • Motivation • Several independent monitoring activities in IT • similar overall approach, different tool-chains, similar limitations • High-level services are interdependent • combining data from different groups is necessary, but difficult • Understanding performance has become more important • requires more combined data and complex analysis • Move to a virtualized dynamic infrastructure • comes with complex new monitoring requirements • Challenges • Find a shared architecture and tool-chain components while preserving our investment in monitoring • IT Monitoring Working Group
Timeline
Today’s Monitoring in IT Pedro Andrade
Monitoring Data • Producers • 40538 • Input Volume • 283 GB per day • Input Rate • 697 M entries per min • 2.4 M entries per min without PES/process accounting • Query Rate • 52 M queries per day • 3.3 M queries per day without PES/process accounting
Analysis • Monitoring in IT covers a wide range of resources • Hardware, OS, applications, files, jobs, etc. • Many application-specific monitoring solutions • Some are commercial solutions • Based on different technologies • Limited sharing of monitoring data • In some cases no sharing at all, simply duplication of monitoring data • All monitoring applications have similar needs • Publish metric results, aggregate results, alarms, etc.
Architecture Vision Pedro Andrade
Constraints (Data) • Large data store aggregating all monitoring data for storage and combined analysis tasks • Make monitoring data easy to access for everyone! • Not forgetting possible security constraints • Select a simple and well supported data format • Monitoring payload to be schema-free • Rely on centralized metadata service(s) to discover computer center resource information • Which physical node is running virtual machine A • Which virtual machine is running service B • Which network link is used by node C • … this is becoming more dynamic in the AI
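As a sketch of the chained lookups such a metadata service would answer, the toy resolver below uses an in-memory registry; all host, VM, and service names are hypothetical, and a real service would be backed by the AI configuration databases:

```python
# Hypothetical in-memory registry standing in for the centralized
# metadata service(s); all names below are illustrative assumptions.

# Maps a virtual machine to the physical node hosting it.
VM_TO_HOST = {"vm-a": "pnode042"}
# Maps a service to the virtual machine currently running it.
SERVICE_TO_VM = {"service-b": "vm-a"}

def host_of_vm(vm: str) -> str:
    """Which physical node is running virtual machine `vm`?"""
    return VM_TO_HOST[vm]

def vm_of_service(service: str) -> str:
    """Which virtual machine is running `service`?"""
    return SERVICE_TO_VM[service]

def host_of_service(service: str) -> str:
    """Chained lookup: service -> VM -> physical node."""
    return host_of_vm(vm_of_service(service))
```

In a dynamic AI infrastructure these mappings change as VMs migrate, which is why the lookups belong in a central service rather than in static configuration.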
Constraints (Technology) • Focus on providing well established solutions for each layer of the monitoring architecture • Transport, storage, analysis • Flexible architecture where a particular technology can be easily replaced by a better one • Adopt existing tools whenever possible and avoid home-grown solutions • Follow a tool-chain approach • Allow a phased transition where existing applications are gradually integrated
User Stories • User stories were collected from all IT groups and commonalities between them were identified • To guarantee that different types of user stories were provided, three categories were established: • Fast and Furious (FF) • Get metric values for hardware and selected services • Raise alarms according to appropriate thresholds • Digging Deep (DD) • Curation of hardware and network historical data • Analysis and statistics on batch job and network data • Correlate and Combine (CC) • Correlation between usage, hardware, and services • Correlation between job status and grid status
Architecture Overview [architecture diagram: Sensors and Publishers (Lemon) → Aggregation → Messaging (Apollo) with storage, alarm, and custom feeds → Storage and Analysis (Hadoop, Oracle, application-specific storage) → Portals, Reports, and Alarms (Splunk)]
Architecture Overview • All components can be changed easily • Including the messaging system (standard protocol) • Messaging and storage as central components • Tools connect either to the messaging or the storage layer • Publishers should be kept as simple as possible • Data produced either directly on the sensor or after a first level of aggregation • Scalability can be addressed either by scaling horizontally or by adding additional layers • Pre-aggregation, pre-processing • “Fractal approach”
Data Format • The selected message format is JSON • A simple common schema must be defined to guarantee cross-referencing between the data • Timestamp • Hardware and node • Service and applications • Payload • These base elements (tags) require the availability of the metadata service(s) mentioned before • This is still under discussion
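A minimal sketch of what such a message could look like. The field names are illustrative assumptions (the schema is still under discussion); only the four base elements from the slide are represented, and the payload is deliberately schema-free:

```python
import json
import time

def make_message(node, service, payload):
    """Assemble a monitoring message carrying the common tags:
    timestamp, hardware/node, service/application, and a free-form
    payload. Field names are assumptions, not the agreed schema."""
    return json.dumps({
        "timestamp": int(time.time()),  # when the metric was produced
        "node": node,                   # hardware / node tag
        "service": service,             # service / application tag
        "payload": payload,             # schema-free metric data
    })

# Example: a hypothetical Lemon-style load metric from one node.
msg = make_message("lxplus001", "lemon", {"metric": "load_avg", "value": 0.42})
```

Keeping the common tags flat and the payload opaque lets every consumer cross-reference messages without knowing each producer's internals.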
Messaging Broker • Two technologies have been identified as the best candidates: Apollo and RabbitMQ • Apollo is the successor of ActiveMQ • Prior positive experience in IT and the experiments • Only realistic testing environments can produce reliable performance numbers. The use case of each application must be clearly defined • Total number of producers and consumers • Size of the monitoring messages • Rate of the monitoring messages • The trailblazer applications already have very demanding use cases
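Both Apollo and RabbitMQ (via its STOMP plugin) speak the STOMP wire protocol, so publisher traffic can be sketched independently of the final broker choice. The frame builder below is illustrative only; the destination name is an assumption, and a real publisher would write such frames to an authenticated TCP connection after a CONNECT handshake:

```python
def stomp_send_frame(destination: str, body: str) -> bytes:
    """Build a minimal STOMP SEND frame: command line, headers,
    blank line, body, NUL terminator. The destination name is a
    hypothetical example, not an agreed naming scheme."""
    data = body.encode("utf-8")
    headers = {
        "destination": destination,
        "content-type": "application/json",
        "content-length": str(len(data)),  # lets brokers frame binary-safe bodies
    }
    head = "SEND\n" + "".join(f"{k}:{v}\n" for k, v in headers.items())
    return head.encode("utf-8") + b"\n" + data + b"\x00"

frame = stomp_send_frame("/topic/monitoring.lemon",
                         '{"metric": "load_avg", "value": 0.42}')
```

Because both candidate brokers accept this standard protocol, swapping one for the other would not require changes on the publisher side.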
Central Storage and Analysis • All data is stored in a common location • Makes sharing of monitoring data easy • Promotes sharing of analysis tools • Allows feeding already processed data into the system • NoSQL technologies are the most suitable solutions • Focus on column/tabular and document-based solutions • Hadoop (from the Cloudera distribution) as a first step
Central Storage and Analysis • Hadoop is a good candidate to start with • Prior positive experience in IT and the experiments • The map-reduce paradigm is a good match for the use cases • Has been used successfully at scale • Many different NoSQL solutions use Hadoop as a backend • Many tools provide export and import interfaces • Several related modules available (Hive, HBase) • Document-based stores also considered • CouchDB/MongoDB are good candidates • For some use cases a parallel relational database solution (based on Oracle) could be considered
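A toy illustration of why map-reduce matches these use cases: the per-node averaging below mirrors, in plain Python, the map and reduce phases a Hadoop job would run over the central store. The record layout is an assumed simplification of the JSON messages, not the real schema:

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (node, value) key-value pairs from raw metric records."""
    for rec in records:
        yield rec["node"], rec["value"]

def reduce_phase(pairs):
    """Reduce: group the pairs by node and average the values, as the
    reducers of a Hadoop job would after the shuffle phase."""
    acc = defaultdict(list)
    for node, value in pairs:
        acc[node].append(value)
    return {node: sum(vs) / len(vs) for node, vs in acc.items()}

records = [
    {"node": "lxbatch01", "value": 0.2},
    {"node": "lxbatch01", "value": 0.4},
    {"node": "lxbatch02", "value": 1.0},
]
averages = reduce_phase(map_phase(records))
```

Since both phases operate on independent keys, the same logic parallelizes naturally across a cluster, which is what makes the paradigm a good fit for large volumes of per-node metrics.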
Integrating Closed Solutions [diagram: an integrated product (sensor → transport → storage → analysis → visualization/reports) feeding the shared messaging layer through an export interface] • External (commercial) monitoring • Windows SCOM, Oracle EM Grid Control, CA Spectrum • These data sources must be integrated • Injecting final results into the messaging layer • Exporting relevant data at an intermediate stage
Implementation Plan Pedro Andrade
Transition Plan Moving the existing production monitoring services to a new base architecture is a complex task, as these services must be continuously running. A transition plan was defined; it foresees a staged approach where the existing applications gradually incorporate elements of the new architecture
Transition Plan [diagram: existing (OLD) publishers and aggregation feed the current storage, storage feed, and alarm feed, which are progressively connected to the NEW storage, analysis, portal, report, and alarm components]
Milestones
Monitoring v1 • Several meetings organized • https://twiki.cern.ch/twiki/bin/view/AgileInfrastructure/AgileInfraDocsMinutes • Short-term tasks identified and tickets created • https://agileinf.cern.ch/jira/secure/TaskBoard.jspa • Work ongoing in four main areas: • Messaging broker deployment • Hadoop cluster deployment • Testing of Splunk with Lemon data • Lemon agents running on Puppet
Monitoring v1 • Deployment of the messaging broker • Based on Apollo and RabbitMQ • Three SL6 nodes have been provided • 2 nodes for production, 1 node for development • Each node will run Apollo and RabbitMQ • Three applications have been identified to start using/testing the messaging infrastructure • OpenStack • MCollective • Lemon
Monitoring v1 • Testing Splunk with Lemon data • Lemon data to be exported from the DB (1 day, 1 metric) • Data exported into a JSON file and stored in AFS • This data will be imported into Splunk • Splunk functionality and scalability will be tested • Started the deployment of a Hadoop cluster • Using the Cloudera distribution • Other tools may also be deployed (HBase, Hive, etc.) • Hadoop testing using Lemon data (as above) is planned
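A minimal sketch of that export step, assuming a (timestamp, node, metric, value) row layout for the Lemon data; the layout and field names are illustrative guesses, and one JSON object per line is a format Splunk can ingest:

```python
import io
import json

def export_rows(rows, out):
    """Write one JSON object per line for each exported Lemon row.
    The (timestamp, node, metric, value) tuple layout is an assumed
    simplification of the real DB export, for illustration only."""
    for ts, node, metric, value in rows:
        out.write(json.dumps({"timestamp": ts, "node": node,
                              "metric": metric, "value": value}) + "\n")

# Example: export a single hypothetical metric sample to a buffer
# (a real export would write the file to AFS instead).
buf = io.StringIO()
export_rows([(1335830400, "lxplus001", "load_avg", 0.42)], buf)
```

Line-delimited JSON keeps the export streamable, so the same file can later be fed to Hadoop for the planned testing without reformatting.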
Monitoring v1/v2 • AI nodes monitored with existing Lemon metrics • First step • Current Lemon sensors/metrics are used for AI nodes • Lemon metadata will still be taken from Quattor • A solution is defined to get CDB-equivalent data • Second step • Current Lemon sensors/metrics are used for AI nodes • Lemon metadata is no longer taken from Quattor • Lemon agents start using the messaging infrastructure
Conclusions Pedro Andrade
Conclusions • A monitoring architecture has been defined • Promotes sharing of monitoring data between applications • Based on a few core components (transport, storage, etc.) • Several existing external technologies identified • A concrete implementation plan has been identified • It ensures a smooth transition for today’s applications • It enables the new AI nodes to be monitored quickly • It allows moving towards a common system
Links • Monitoring WG Twiki (new location!) • https://twiki.cern.ch/twiki/bin/view/MonitoringWG/ • Monitoring WG Report (ongoing) • https://twiki.cern.ch/twiki/bin/view/MonitoringWG/MonitoringReport • Agile Infrastructure TWiki • https://twiki.cern.ch/twiki/bin/view/AgileInfrastructure/ • Agile Infrastructure JIRA • https://agileinf.cern.ch/jira/browse/AI
Thanks! Questions?