
MITRE Performance Testing: Load Testing With Usage Analysis



Presentation Transcript


  1. MITRE Performance Testing: Load Testing With Usage Analysis The MITRE CI&T Performance Test Team February 2009

  2. Acknowledgements • This briefing was prepared by adapting material from the Fundamentals of LoadRunner 8.1 training manual • The final usage analysis method and model described in this briefing was pioneered by the MITRE CI&T Performance Test Team • Initial methods and models were originated by Henry Amistadi and evolved with the input of Chris Alexander, Betty Kelley and Aimee Bechtle

  3. Purpose & Goal • Provide background on our team and load testing at MITRE • Understand how the Project Lead can support load testing throughout the performance testing process • Introduce our Usage Analysis methods so the Project Lead • Understands how we target a realistic load in load testing • Has confidence that the right amount of testing is being performed on a project • Our goal is always to provide high quality, timely performance tests that meet the needs of the customer and the performance test team

  4. Our Team • Comprised of 3 Performance Test Engineers and 1 Team Lead • Instantiated in 2001 as the Automated Test Team • Invested in Mercury (now HP): • LoadRunner for Load Testing • QuickTest Pro for Functional Testing • Quality Center/TestDirector for requirements and test case management • In the beginning, customers were suspicious of the realism and accuracy of our testing • In 2005, stopped performing automated functional testing to focus on our core competency, performance testing • Have matured from single-application, single-environment testing to multi-application, multi-environment testing • Preparing for a large-scale, enterprise load test for 2009-2010 • Now enjoy a high level of confidence in our service

  5. Background: What is Load? • What is Load? Traffic!!!!! • Transactions distributed over time, expressed as rates • Transactions Per Hour (TPH) • Transactions Per Minute (TPM) • Transactions Per Second (TPS) • Why Load Test? To assess how well your application is directing traffic • Find the problems before your customers do • Prevent poor response times, instability and unreliability (Road Rage!) • The longer you wait to test, the more costly a problem may become
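The three rates are simple unit conversions of one another. A minimal sketch in Python (the 6075 TPH figure is reused from the worked examples later in this deck):

```python
def to_tpm_tps(tph: float) -> tuple[float, float]:
    """Convert a transactions-per-hour rate into per-minute and per-second rates."""
    tpm = tph / 60   # 60 minutes per hour
    tps = tpm / 60   # 60 seconds per minute
    return tpm, tps

tpm, tps = to_tpm_tps(6075)   # 6075 TPH is the typical rate in this deck's examples
```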

  6. Background: When to Use Performance Testing • When should I consider load testing? How can I use you throughout the project? • Contact performance test team during the project planning process and consider their incorporation into the plan • Load testing can be used throughout the product’s life

  7. Background: The Process

  8. Define Goals & Objectives • Begin working with performance test team upon completion of planning process • Start by discussing high-level goals & objectives. • And what are these objectives?

  9. Select Load Test Types • What are the types of load tests?

  10. Analyze System - Environment • Need to understand the logical and physical test environment to make the proper tool, protocol selection and to know our constraints • What is the historical CPU utilization on these machines? • What other apps are on these machines?

  11. Analyze System – Usage Analysis • Usage analysis is appropriate when: • The plan is to run a Typical, Peak or Stress test to meet the agreed-upon objectives • There’s an existing system to collect information from • Usage Analysis is the process of calculating load targets from log file data for the relevant URLs • AKA Workload Analysis or Log File Analysis • We employ a statistical evaluation of the data collected • Results in a Transaction’s: • Target Typical & Peak TPH • Target Typical & Peak TPM • Target Typical & Peak TPS

  12. Analyze System – Usage Analysis

  13. Usage Analysis – Business Transactions • “A business transaction is a set of user steps performed within an application to accomplish a business goal” • For example: • Logon to Home Page • Search • Update profile • Focus on the business processes that are • Most commonly used • Business Critical • Resource Intensive • We need the URLs that complete these business processes. These are the target URLs • Also, what are your expected target response times for these processes under typical load? Peak load?

  14. Usage Analysis – Coarse Grain • Purpose is to identify Typical Peak usage patterns • Use WebTrends • We identify the Typical Peak by answering questions such as: • Is this application used cyclically, e.g., quarterly? • Are some months or days of the week higher use than others? • Is this application heavily used on a daily basis? • Is this application heavily used throughout the day or only at specific hours? • What are the highest-use days and hours?

  15. Usage Analysis – Coarse Grain Example • Corporate Portal: • Looked at weekdays from Oct – Dec 2006 • Compared daily and hourly page views in WebTrends for portal/dt and amserver/UI/Login • Ranked days in terms of daily totals • Excluded two busiest days because of abnormal patterns • Selected next 8 busiest days for further analysis
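The day-ranking step above can be sketched with hypothetical daily page-view totals (the dates and counts below are invented for illustration; a real analysis would export them from WebTrends for the agreed target URLs):

```python
# Hypothetical daily page-view totals keyed by date; not actual WebTrends data.
daily_views = {
    "2006-10-02": 41000, "2006-10-03": 52000, "2006-10-04": 48000,
    "2006-10-05": 61000, "2006-10-06": 39000, "2006-11-01": 57000,
    "2006-11-02": 44000, "2006-11-03": 50000, "2006-11-06": 46000,
    "2006-12-01": 55000, "2006-12-04": 43000, "2006-12-05": 58000,
}

ranked = sorted(daily_views, key=daily_views.get, reverse=True)
abnormal = ranked[:2]    # two busiest days, excluded for abnormal patterns
selected = ranked[2:10]  # next 8 busiest days kept for fine-grain analysis
```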

  16. Usage Analysis – Fine Grain • Once the Typical Peak timeframe is determined, log files from that period are collected for more detailed analysis. • Relevant fields are isolated for inclusion, such as: • Requesting URL (as agreed upon with the project team), Response Code, Date and Time request completed, Referring URL, User information

  17. Usage Analysis – Fine Grain • The data is aggregated by time frame and frequency analysis is performed. For an example please see: Usage Frequency Analysis Example • The transaction rates are placed in a cumulative frequency distribution
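The aggregation and frequency steps can be sketched as follows; the timestamps are invented, and `nearest_rank` is one common way to read a percentile off the cumulative distribution (the team's actual spreadsheet method may differ in detail):

```python
import math
from collections import Counter

# Hypothetical completed-request times (HH:MM:SS) for one target URL during
# the typical-peak window; real input would come from the parsed log files.
timestamps = (
    ["09:00:05"] +
    ["09:01:10", "09:01:40"] +
    ["09:02:02", "09:02:30", "09:02:55"] +
    ["09:03:01", "09:03:20", "09:03:40", "09:03:59"]
)

# Aggregate by minute: one TPM sample per minute in the window.
per_minute = Counter(t[:5] for t in timestamps)
tpm_samples = sorted(per_minute.values())   # input to the cumulative frequency distribution

def nearest_rank(sorted_samples, p):
    """Nearest-rank percentile of a sorted list of per-interval rates."""
    k = max(0, math.ceil(p / 100 * len(sorted_samples)) - 1)
    return sorted_samples[k]
```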

  18. Usage Analysis – Select Targets • Select your Typical and Peak targets • We recommend using the TPH and TPM rates at the Median, the 50th Percentile, as the Typical Transaction Rates • We recommend using the TPH and TPM rates at the 99th Percentile as the Peak Transaction Rates • The TPS in this analysis is a reference point • The percentile chosen is effectively a satisfaction index: • What percentage of the time are you willing to have users be satisfied/unsatisfied? • If you design a system to handle 131 TPM, your users would be satisfied with response time performance 99% of the time
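Target selection from the distribution can be sketched as below. The per-minute sample values are invented, except that the 131 maximum echoes the peak TPM quoted on this slide; a nearest-rank percentile is used as one reasonable convention:

```python
import math
import statistics

# Hypothetical per-minute transaction counts from the typical-peak window;
# only the 131 maximum is taken from the slide's example.
tpm_samples = sorted([60, 72, 80, 85, 90, 95, 101, 105, 110, 131])

typical_tpm = statistics.median(tpm_samples)   # 50th percentile -> Typical target
k = math.ceil(0.99 * len(tpm_samples)) - 1     # nearest-rank index for the 99th
peak_tpm = tpm_samples[k]                      # 99th percentile -> Peak target
```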

  19. Usage Analysis – Convert TPS • Purpose is to know the target TPS, and at what time intervals, the transactions are going to occur in the test • We can choose from two methods • Method 1: Total Time Distribution (How?) • Assumes transactions are evenly distributed and every minute or second is active • Represents the lowest transaction rate in terms of load distribution • Method 2: Active Time Distribution (How?) • Accounts for the wave patterns of transaction activity • Assumes that not every minute and second is active • The active and inactive times are calculated

  20. Pulling It All Together • At the conclusion of the usage analysis we meet with the Project Lead and review the plan • Ideal if conducted during development, prior to integration testing • Review Business Processes and Target Transaction Rates • Target Transaction Rate Example: “The system shall sustain a home page transaction time of 3 seconds or less during typical hours at a rate of 2.02 transactions per second, and 3-5 seconds maximum during peak hours at a rate of 3.3 transactions per second.” • PRT = Preferred Response Time

  21. Create Scripts • Scripts contain the recorded steps, test data, and think times that define the transactions • Functional test cases, use cases, design documents and training manuals are a good source for this information • A sample of the detail we need: • A real example: Detailed Steps

  22. Create Scenarios • Scenarios define the overall load test execution: the instructions for the run • Elements of a scenario are: • Scripts • Vusers: Emulate the steps of real users. Steps are recorded into a script. • Ramp Up Rate: The rate at which vusers are suddenly or gradually introduced into the test • Think Times: Emulate how long the user pauses • Within a transaction or business process, i.e., pauses between clicks or keystrokes • Between transactions (iterations), i.e., before starting the script over • Test Duration: The length of time the test will run (1 hour, 4 hours, 8 hours). Varies by test type.
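These scenario elements are related by simple arithmetic: if each vuser iterates at a fixed pace, the pacing interval follows from the target rate and the vuser count. A sketch under that assumption (the 2.02 TPS target is from the earlier requirement example; the 20-vuser count is hypothetical):

```python
def pacing_seconds(target_tps: float, vusers: int) -> float:
    """Seconds between iteration starts per vuser so the scenario as a whole
    produces the target rate; think time must fit inside this budget."""
    return vusers / target_tps

interval = pacing_seconds(2.02, 20)   # hypothetical 20 vusers at the 2.02 TPS target
```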

  23. Pulling It All Together (Script | Scenario | Test Run) • Home Page (Default Config) | Typical Day 1x (Home Default Config) | Test Run 2007-03-04-A • Home Page (Default Config) Bursts | Peak Second Bursts +1x (Home Default Config) | Test Run 2007-03-07-D • Home Page (Default) + Edits Favorites | Typical Day 1x, 95% Home (Default) + 5% Edits Favorites | Test Run 2007-03-09-F • RSS, Edits | Exploratory | Test Runs 2007-03-09-H, 2007-03-09-I • Center Portlet | Exploratory | • Project Services | Project Services Typical Day 1x (Home Config) | Test Run 2007-03-12-L • HomePage (Configured) | Typical Day 3x (Home Config) | Test Run 2007-03-12-N

  24. Execute Tests • Prepare Test Environment • Account Setup • Data Setup • Plan for Multiple test runs • Debug • Baseline • Full execution • Tests will be run in priority order as we agreed upon • Execute tests as a team!!! • Systems and Database administrators should monitor the performance of each tier of the application • Ideally a network administrator could monitor the network

  25. Analyze Results • At the end of the test run(s) an analysis file is produced that reports: • Running Vusers • Duration • Response Times By Transaction: • Average & Maximum • Percentiles • Test Transaction Rates • Test Total Transactions (Passed/Failed) • Test Transactions Per Second • Example analysis file: Sample LoadRunner Analysis File • The Target Transaction Rates from the Usage Analysis are compared to the Test Transaction Rates. If they don’t reconcile then…

  26. Tweak The Load Test or The System • The test will need to be updated and/or rerun if: • The test goals have not been met: change the script or scenario • Test Transaction Rates are not similar to Target Transaction Rates • For an hourly test, the Total Transactions should be close to the Target TPH • The TPM and TPS rates should meet or exceed the TPM and TPS from the usage analysis • Increase vusers and decrease think time to increase load • Performance has not met expectations (e.g., poor response time or CPU utilization) • Change code • Change system configuration • Change architecture • This is an iterative process which takes time, and it needs to be planned for

  27. Finally… • Roll out and repeat!! • When the load test and performance goals are met you are good to go • Contact performance test team when application changes are made that affect performance: • Significant GUI or Code changes • Changes in the architecture or environment • Performance degradation in production • Adding more users

  28. In Conclusion… • Partner with the performance test team: • Engage the team from the beginning – during planning • Have goals and objectives about what you want your performance test to accomplish • Provide: • Architecture diagrams • Transactions and steps to be scripted • Test data • WebTrends and log file data • Server permissions and DBA/Sysadmin support • Select targets • Review the test plan, prioritize and schedule the tests • Plan enough time for an iterative testing process • Keep the team informed of schedule or resource changes

  29. Background • Detailed Calculation Examples

  30. Method 1: Even Distribution - Example • Evenly divide the hourly rate across every minute, and the per-minute rate across every second • TPM = TPH/60 • TPS = TPM/60 (equivalently TPH/3600) • For example: • TPM = 6075/60 = 101.3 • TPS = 131/60 = 2.2, dividing the 131 peak TPM by 60 (we would use this one)

  31. Method 2: Active Time Distribution • Calculate Active Time from the log as the percentage of time requests are written to the log • We calculate the Active Time per hour of day in terms of: • Active Minutes per Hour • Active Seconds per Minute • For example: • At 9 o’clock every minute in the hour had at least 1 transaction, but every second did not • On average only 46.7 seconds of each minute were active

  32. Method 2: Active Time Distribution - Example • Divide the TPH transaction rate by the Average Active Time • TPM = TPH/Active Minutes in an Hour • TPS = TPM/Active Seconds in a Minute • For example: • TPM = 6075/59.9 = 101.4 • TPS = 131/40.1 = 3.3
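The active-time bookkeeping can be sketched from raw timestamps. The request times below are invented for a short window; the real calculation runs over the full typical-peak log:

```python
from collections import defaultdict

# Hypothetical completed-request times (HH:MM:SS) from one hour of log data.
timestamps = [
    "09:00:01", "09:00:01", "09:00:30", "09:02:15",
    "09:02:16", "09:02:45", "09:05:10", "09:05:59",
]

# Group the distinct active seconds under each active minute.
secs_by_minute = defaultdict(set)
for t in timestamps:
    secs_by_minute[t[:5]].add(t)

active_minutes = len(secs_by_minute)   # minutes with at least one request
active_secs_per_min = sum(map(len, secs_by_minute.values())) / active_minutes

tph = len(timestamps)
tpm = tph / active_minutes          # Method 2: TPH / Active Minutes in the Hour
tps = tpm / active_secs_per_min     # Method 2: TPM / Active Seconds per Minute
```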

  33. Method 2: Active Time Distribution – Example (con’t) • Add Inactive Time: the beginning point for determining think times or pacing • The percentage of time when no requests are completed or written to the logs, or the time between completed requests • It includes the processing time of the second request • We calculate the Inactive Time per hour of the day in terms of: • Inactive Minutes per Hour • Inactive Seconds per Minute • Inactive Time needs to be sprinkled evenly between the Active Times to avoid creating a compressed hour • (Figures: 10-second and 120-second log snapshots)
