
Automated Testing: Better, Cheaper, Faster, For Everything


Presentation Transcript


  1. Automated Testing: Better, Cheaper, Faster, For Everything. Larry Mellon, Steve Keller. Austin Game Conference, September 2004

  2. What Is an MMP Automated Testing System?
  • Push-button ability to run large-scale, repeatable tests
  • Cost: hardware / software, human resources, process changes
  • Benefit: accurate, repeatable, measurable tests during development and operations; stable software sooner; measurable progress; key decisions based on fact, not opinion

  3. MMP Requires a Strong Commitment to Testing
  • System complexity, non-determinism, scale: tests provide hard data in a confusing sea of possibilities
  • Increase the comfort and confidence of the entire team
  • Tools augment your team's ability to do their jobs: find problems faster; measure / change / measure, repeat as necessary
  • Production / exec teams come to depend heavily on this data

  4. How to Get There
  • Plan for testing early: an MMP test system is non-trivial and has architectural implications
  • Make sure the entire team is on board
  • Be willing to devote time and money

  5. Automation: Architecture [diagram]
  • Test Manager: test selection / setup; startup & control of N clients; real-time probes
  • Scripted test clients: emulated user play sessions; multi-client synchronization; repeatable, sync'ed test inputs into the system under test
  • Report Managers: raw data collection; aggregation / summarization; alarm triggers (collection & analysis)
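
A minimal sketch of this architecture, with hypothetical names throughout: a test manager starts N scripted clients against the system under test and hands their raw results to a report manager for aggregation and alarm triggers.

```python
# Sketch only: TestManager, ReportManager, client_factory, and the
# script's result format ({"ok": bool, ...}) are all assumptions.

class ReportManager:
    def __init__(self, alarm_threshold=0.05):
        self.alarm_threshold = alarm_threshold

    def summarize(self, results):
        # Aggregation / summarization plus a simple alarm trigger.
        failures = [r for r in results if not r["ok"]]
        rate = len(failures) / max(len(results), 1)
        if rate > self.alarm_threshold:
            print(f"ALARM: {rate:.1%} of clients failed")
        return {"clients": len(results), "failure_rate": rate}

class TestManager:
    def __init__(self, client_factory, reporter):
        self.client_factory = client_factory  # connects one scripted client
        self.reporter = reporter

    def run(self, script, n_clients):
        # Repeatable, synchronized inputs: every client runs the same script.
        clients = [self.client_factory() for _ in range(n_clients)]
        results = [script(c) for c in clients]
        return self.reporter.summarize(results)
```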

  6. Outline
  • Overview: automated testing (definition, value, high-level approach)
  • Applying automated testing (mechanics, applications)
  • Process shifts: stability, scale & metrics
  • Implementation & key risks
  • Summary & questions

  7. Scripted Test Clients
  • Scripts are emulated play sessions: just as somebody would play the game
  • Command steps: what the player does to the game
  • Validation steps: what the game should do in response
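
A sketch of one such emulated play session, assuming a hypothetical client object that exposes gameplay commands; the command names and state queries are illustrative, not an actual API.

```python
def test_sit_in_chair(client):
    # Command steps: what the player does to the game.
    client.login("test_user_01", "password")
    client.enter_lot("test_lot_01")
    client.route_avatar(to="living_room")
    client.use_object("chair_01", interaction="sit")

    # Validation steps: what the game should do in response.
    assert client.avatar_state() == "seated", "avatar never sat down"
    assert client.object_state("chair_01") == "occupied"
```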

  8. Scripts Tailored to Each Test Application
  • Unit testing: 1 feature = 1 script
  • Load testing: a representative play session (the average Joe, times thousands)
  • Shipping quality: corner cases, feature completeness
  • Integration: test code changes for catastrophic failures

  9. "Bread Crumbs": Aggregated Instrumentation Flags Trouble Spots [chart: aggregated instrumentation pinpointing a server crash]

  10. Quickly Find Trouble Spots [chart: DB byte count oscillates out of control]

  11. Drill Down for Details [chart: a single DB request is clearly at fault]

  12. Process Shift: Applying Automation to Development [chart: MMP developer efficiency, amount of work done vs. time, from project start to target launch; the strong-test-support curve reaches the target, while the weak-test-support curve falls short ("not good enough")] Earlier tools investment equals more gain.

  13. Process Shifts: Automated Testing Can Change the Shape of the Development Progress Curve
  • Stability: keep developers moving forward, not bailing water
  • Scale: focus developers on key, measurable roadblocks

  14. Process Shift: Measurable Targets, Projected Trend Lines [chart: core functionality tests complete, for any feature (e.g. # clients), vs. time; a trend line projected from the first passing test through "now" to the target at any milestone (e.g. Alpha)] Actionable progress metrics, early enough to react.

  15. Stability Analysis: What Brings Down the Team? Test case: can an avatar sit in a chair? The critical path runs login() → create_avatar() → buy_house() → enter_house() → buy_object() → use_object().
  • Failures on the critical path block access to much of the game.
  • Worse, unreliable failures…
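
A sketch of that critical path as a regression script, reusing the hypothetical client object from the earlier sketch; since each step depends on every step before it, one failure blocks everything downstream.

```python
# Hypothetical command names; ordering is the point, not the API.
CRITICAL_PATH = [
    ("login",         lambda c: c.login("test_user_01", "password")),
    ("create_avatar", lambda c: c.create_avatar("TestAvatar")),
    ("buy_house",     lambda c: c.buy_house(lot="test_lot_01")),
    ("enter_house",   lambda c: c.enter_house()),
    ("buy_object",    lambda c: c.buy_object("chair_01")),
    ("use_object",    lambda c: c.use_object("chair_01", interaction="sit")),
]

def run_critical_path(client):
    for name, step in CRITICAL_PATH:
        try:
            step(client)
        except Exception as err:
            # Everything after this step is untestable; report and stop.
            return f"BLOCKED at {name}: {err}"
    return "PASS"
```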

  16. Impact On Others

  17. Pre-Checkin Regression: don’t let broken code into the Critical Path.
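
One way to wire that in, sketched below: a pre-checkin hook that runs a cheap sniff test against the candidate build and refuses the checkin on failure. The `sniff_test` command name and its flags are assumptions, not a real tool.

```python
import subprocess
import sys

def pre_checkin() -> int:
    # Cheap, fast test: catch gross breakage before it reaches mainline.
    result = subprocess.run(
        ["sniff_test", "--suite", "critical_path", "--clients", "1"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print("Checkin rejected: sniff test failed")
        print(result.stdout)
        return 1
    print("Sniff test passed; checkin allowed")
    return 0

if __name__ == "__main__":
    sys.exit(pre_checkin())
```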

  18. Monkey Test: EnterLot

  19. Non-Deterministic Failures

  20. Stability via Monkey Tests: Continual Repetition of Critical Path Unit Tests [diagram: the code repository feeds the compilers, which feed reference servers kept under constant monkey testing]
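
A sketch of the monkey-test loop, reusing run_critical_path() from the earlier sketch against an already-deployed reference server; the build/deploy step and the alarm hook are assumed to live elsewhere. Continual repetition is what shakes out the non-deterministic failures a single run would miss.

```python
import time

def monkey_test(client_factory, interval_sec=3600):
    iteration = 0
    while True:
        iteration += 1
        # Fresh client each pass so every run starts from a clean state.
        result = run_critical_path(client_factory())
        if result != "PASS":
            print(f"monkey iteration {iteration}: {result}")  # alarm hook goes here
        time.sleep(interval_sec)
```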

  21. Process Shift: Comb Filter Testing
  • Sniff test, monkey tests ($): fast to run; catch major errors; keep coders working. Gate: new code ready for checkin.
  • Smoke test, server sniff ($$): is the game playable? Are the servers stable under a light load? Do all key features work? Gate: full system build promotable to full testing.
  • Full feature regression, full load test ($$$): do all test suites pass? Are the servers stable under peak load conditions? Gate: promotable to paying customers.
  • Cheap tests catch gross errors early in the pipeline; more expensive tests run only on known-functional builds.
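
The comb filter reduces to a short pipeline, sketched below with placeholder stage runners: each stage costs more than the last and only runs once every cheaper stage has passed, so expensive tests never waste time on a known-broken build.

```python
# Placeholder stage runners; real ones would launch the tests above.
def sniff_and_monkey(build):          return True  # $: gross errors, fast
def smoke_and_server_sniff(build):    return True  # $$: playable? light load?
def full_regression_and_load(build):  return True  # $$$: all suites, peak load

STAGES = [
    ("sniff / monkey ($)",            sniff_and_monkey),
    ("smoke / server sniff ($$)",     smoke_and_server_sniff),
    ("full regression / load ($$$)",  full_regression_and_load),
]

def comb_filter(build):
    for name, stage in STAGES:
        if not stage(build):
            return f"rejected at {name}"
    return "promotable to paying customers"
```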

  22. Process Shift: Who Tests What?
  • Automation handles simple tasks (repetitive or large-scale): load @ scale; workflow (information management); full weapon damage assessment; broad, shallow feature coverage
  • Manual handles judgment / innovative tasks: visuals, playability, creative bug hunting
  • Combined: Tier 1 / Tier 2 (automation flags potential errors, manual investigates); within a single test (automation snapshots key game states, manual evaluates results); augmented / accelerated (complex build steps, …)

  23. Process Shift: Load Testing (Before Paying Customers Show Up)
  • Expose issues that only occur at scale
  • Establish hardware requirements
  • Establish that play is acceptable @ scale

  24. Load Testing Rig [diagram: a load-control rig drives banks of test clients spread across test-driver CPUs; game traffic flows into the server cluster, where internal system probes and monitors return resource, debugging, client, and metrics data to the load testing team]
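
A sketch of the driver side of that rig, again assuming the hypothetical client factory from earlier: one driver process fans out many emulated play sessions and gathers simple client-side metrics (pass/fail, responsiveness), while server-side probes collect the rest.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def play_session(client_factory, client_id):
    # One representative play session; commands are illustrative.
    start = time.monotonic()
    ok = True
    try:
        client = client_factory()
        client.login(f"load_user_{client_id:05d}", "password")
        client.enter_lot("test_lot_01")
        client.use_object("chair_01", interaction="sit")
    except Exception:
        ok = False
    return {"id": client_id, "ok": ok, "latency_s": time.monotonic() - start}

def run_load(client_factory, n_clients):
    with ThreadPoolExecutor(max_workers=n_clients) as pool:
        results = list(pool.map(
            lambda i: play_session(client_factory, i), range(n_clients)))
    failed = sum(1 for r in results if not r["ok"])
    worst = max(r["latency_s"] for r in results)
    print(f"{n_clients} clients: {failed} failed, worst latency {worst:.2f}s")
```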

  25. Client-Server Comparison

  26. Highly Accurate Load Testing: "Monkey See / Monkey Do" [diagram: live beta testers generate player-controlled sim actions on the AlphaVille servers; the same actions are replayed script-controlled against the test servers]
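
A sketch of that capture-and-replay idea, with a made-up action log format: player-controlled sim actions are recorded on the live servers, then replayed script-controlled against the test servers, so test load closely matches real play.

```python
import json

def record_action(name, args, log_path="sim_actions.jsonl"):
    # Called on the live side for each player-controlled sim action.
    with open(log_path, "a") as log:
        log.write(json.dumps({"name": name, "args": args}) + "\n")

def replay_actions(client, log_path="sim_actions.jsonl"):
    # Called by a scripted client pointed at the test servers.
    with open(log_path) as log:
        for line in log:
            action = json.loads(line)
            client.perform(action["name"], **action["args"])  # hypothetical dispatch
```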

  27. Outline
  • Overview: automated testing (definition, value, high-level approach)
  • Applying automated testing (mechanics, applications)
  • Process shifts: stability, scale & metrics
  • Implementation & key risks
  • Summary & questions

  28. Data-Driven Test Client [diagram: reusable scripts & data drive both load and regression testing through a single API on the test client; outputs are key game states, pass/fail, responsiveness, and script-specific logs & metrics]

  29. Scripted Players: Implementation [diagram: the test client replaces the game client's GUI with a script engine; both issue commands through the same presentation layer into the shared client-side game logic and state]

  30. What Level to Test At? [diagram: driving the game client at the view level, via mouse clicks, above the presentation layer and logic] Regression: too brittle (UI & pixel shift). Load: too bulky.

  31. What Level to Test At? [diagram: driving the game client below the view, via internal events under the presentation layer] Regression & load: too brittle (churn rate vs. logic & data).

  32. What Level to Test At? [diagram: a NullView client drops the view and drives the presentation layer directly, with commands such as Chat, Enter Lot, Use Object, Route Avatar, …] Automation scripts == QA tester scripts. Basic gameplay changes less frequently than UI or protocol implementations.
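
A sketch of the presentation-layer approach, with hypothetical names throughout: the real GUI and the scripted NullView client both drive the same command interface, so test scripts survive UI pixel shift and protocol churn.

```python
class PresentationLayer:
    """Gameplay commands shared by the game GUI and the script engine."""
    def __init__(self, game_logic):
        self.logic = game_logic  # client-side game logic, shared with the real client

    def chat(self, text):           self.logic.send("chat", text=text)
    def enter_lot(self, lot):       self.logic.send("enter_lot", lot=lot)
    def use_object(self, obj, how): self.logic.send("use_object", obj=obj, how=how)
    def route_avatar(self, to):     self.logic.send("route_avatar", to=to)

class NullViewClient:
    """Headless test client: a script engine where the view would be."""
    def __init__(self, game_logic):
        self.commands = PresentationLayer(game_logic)

    def run(self, script):
        # script: a list of (command_name, kwargs) pairs, e.g.
        # [("enter_lot", {"lot": "test_lot_01"}), ("chat", {"text": "hi"})]
        for name, kwargs in script:
            getattr(self.commands, name)(**kwargs)
```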

  33. Common Gotchas
  • Setting the test bar too high, too early: feature drift == expensive test maintenance; code is built incrementally, and reporting failures nobody is prepared to deal with wastes everybody's time
  • Non-determinism: race conditions, dirty buffers / process state, …; developers test with a single client against a single server, so race conditions never get exposed
  • Not designing for testability: testability is an end requirement, and retrofitting is expensive
  • No senior engineering effort committed to the testing problem

  34. Outline
  • Overview: automated testing (definition, value, high-level approach)
  • Applying automated testing (mechanics, applications)
  • Process shifts: stability & scale
  • Implementation & key risks
  • Summary & questions

  35. Summary: Mechanics & Implications
  • Scripted test clients and instrumented code rock!
  • Collection, aggregation and display of test data are vital for making day-to-day decisions
  • Lessen the panic: scale & break is a very clarifying experience
  • Stable code & servers in development greatly ease the pain of building an MMP game
  • Hard data (not opinion) is both illuminating and calming
  • Long-term operations: testing is a recurring cost

  36. Summary: Process
  • Integrate automated testing at all levels
  • Don't just throw testing over the wall to QA monsters
  • Use automation to speed & focus development (stability: sniff test, monkey tests; scale: load test)

  37. Tabula Rasa
  • Pre-checkin sniff test: keep mainline working
  • Hourly monkey tests: a baseline for developers
  • Dedicated tools group: easy to use == used
  • Executive support: radical shifts in process
  • Load test early & often: break it before live
  • Distribute test development & ownership across the full team

  38. Cautionary Tales
  • Flexible game development requires flexible tests
  • Signal-to-noise ratio: defects & variance in the testing system

  39. Questions (15 Minutes)
  • Overview: automated testing (definition, value, high-level approach)
  • Applying automated testing (mechanics, applications)
  • Process shifts: stability, scale & metrics
  • Implementation & key risks
  Slides online @ www.maggotranch.com/MMP
