
SQL 929: SQL Anywhere Performance Evaluation and Benchmarking



Presentation Transcript


  1. SQL 929: SQL Anywhere Performance Evaluation and Benchmarking Anil Goel R&D, ASA Query Processing anilgoel@ianywhere.com August 15-19, 2004

  2. Goals of this presentation • Discuss performance evaluation techniques • Describe iAnywhere benchmarking efforts and the resulting improvements • Emphasize importance, and complexity, of evaluating ASA applications before deployment • Highlight factors that affect performance and scalability of large scale ASA applications • Highly concurrent • Complex queries • Mixed workloads • Describe performance evaluation tools available with ASA • What they do • When to use them • How to use them

  3. Contents • About this presentation • Principles of performance evaluation • What? Why? When? How? • Common Myths and Mistakes • Techniques • iAnywhere benchmarking efforts • Industry standard & Customer • ASA Application performance • Bottlenecks • ASA Tools • Fetchtst • Instest • Request logging • Trantest

  4. About this presentation • Emphasis on performance issues specific to 8.x, 9.x • Focus on performance evaluation and tuning • SQL922: focus on performance analysis – what went wrong? • AM 34: focus on ASA internals (with some tuning tips) • Performance evaluation, by its very nature, is not an exact science in its entirety • Requires subjective decision making • Plenty of literature to borrow from • Further reading: • Raj Jain, The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling. 1991, Wiley. ISBN 0-471-50336-3

  5. Principles of performance evaluation • What? • State and evaluate performance requirements • Subjective – the same problem can be measured in many different ways and the data interpreted differently • Why? • Capacity Management: performance with current resources – are we getting the best possible performance? • Capacity Planning: future resource needs – what lowest-cost ASA configuration will handle expected growth for the next 3 years while meeting performance objectives? • Effects of increasing resources – can some performance bottlenecks be addressed by throwing hardware at the problem? • Application scalability – what happens when adding more users and/or data? • Comparative analysis – how does ASA fare vs. other products and/or on different platforms?

  6. Performance evaluation • When? • Designing ASA applications • Purchasing hardware • Purchasing ASA application • Selling ASA application to third party – demonstration of performance • Pre-deployment testing and evaluation • Most applications can benefit from performance evaluation and benchmarking efforts • How? • Tools to use • Tools not to use • Workload generation • Simulating multi user environment

  7. Performance evaluation – Common Myths • Adding CPUs always makes individual queries and/or the workload execute faster • Adding memory (server cache) will always improve performance • Increasing the raw CPU power always allows a linear scale-up in the number of users without compromising performance • Performance is usually non-linear • Better hardware is the only/best way to improve performance • Schema and application design can prevent the application from benefiting from more resources • Larger page sizes are always better than smaller ones (so, use the largest supported page size) • Sequential scans are always bad for partial retrievals • High CPU utilization is always a red flag

  8. Performance evaluation – Common mistakes • No goals • Decide what you are going to evaluate • Cannot evaluate everything with the same experiment • Decide upfront what is important and stay focussed • Everything else follows from what needs to be evaluated, and not the other way around • Biased goals • Avoid preconceived biases and evaluate based on actual measurements • Unsystematic approach • Do not choose workloads, etc. arbitrarily • Keep the goals in mind • Understand and define the problem well

  9. Performance evaluation – Common mistakes • Incorrect performance metrics • Pick the right criterion to quantify performance of the application • Response time vs. resource utilization • Measure what is relevant, not what is easy to compute • Unrepresentative workload • Consider actual usage of the application • Prefer your own benchmarks over someone else’s (e.g., an industry standard) • Wrong measurement tools • Use tools that are as close as possible to the way the application runs queries • Same communication link/protocol • Same cursor flags • Same prefetch settings, etc.

  10. Performance evaluation – Common mistakes • Overlooking important parameters • Cache size – dynamic or not • DB file(s) placement and fragmentation • Workload parameters – number of users, think time, etc. • Overlooking external factors • Ensure applications coexisting on the server are representative of the actual system – virus scanners, other DB servers, etc. • Only average behaviour represented in the test workload • Sometimes it may be necessary to include variances • N.B. Small differences (say, 10-20%) may be due to experimental error • Loading levels controlled inappropriately • Can increase the number of users, think time, and work time to add more load • Best to change the number of users, but this requires more testing resources

  11. Performance evaluation – Common mistakes • Server caching effects ignored • Need to ensure cache behaviour is similar to the application’s • sa_flush_cache() – see the sketch below • Ignoring monitoring overhead • Use tools that introduce minimum overhead • Use ASA timing utilities • Do not use dbisql, graphical plans, etc. • Not validating measurements • Cross-check to ensure correct results as well as measurements • Include automatic checks in the workload, if possible • Not ensuring the same initial conditions • Data may be changed • System resources may change, e.g., disk space • Clean up, or quantify the sensitivity of the test to the changed state
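
A minimal sketch of using sa_flush_cache() to put the cache in a known state before each timed run; the statement being timed is only a placeholder (it queries a system table so the snippet is self-contained):

    -- Start each run with a cold cache; for warm-cache runs, execute the
    -- workload once first and discard that timing.
    CALL sa_flush_cache();
    -- Placeholder for the statement under test:
    SELECT COUNT(*) FROM SYS.SYSTABLE;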

  12. Performance evaluation – Common mistakes • Not measuring transient performance • In most cases you want to study a stable system • Let the system run for a while before starting measurements • Sometimes it is important to study transient behaviour, e.g., when the server takes a long time to achieve equilibrium or there is no stable state while the application runs • Using device utilizations for performance comparisons • Higher utilization does not necessarily mean a poorer system, because the testing mechanism may actually be generating more requests for a system with a smaller response time (i.e., a better system) • In general, compare throughput in terms of requests per second

  13. Performance Evaluation – Capacity Planning • One of the primary reasons for performance evaluations by/for end users • What is a cost effective ASA configuration to meet future workload demands while meeting performance objectives? • Is the current configuration fast enough for the projected growth (say, for the next three years) of data and users? • If not, then what resources need to be added while keeping additional costs at a minimum? • Don’t ignore the administrative side of things – may require additional resources • Backups • Reorganization • Unload/reload

  14. Performance Evaluation – Capacity Planning • Capacity management vs capacity planning • Current vs future requirements • Use benchmarking for both -- comparative performance evaluation

  15. Performance Evaluation Techniques & Tools • Want to measure performance of an ASA system while it is being subjected to a particular workload • Need to determine • System configuration -- hardware • Data size and contents • ASA server configuration • Workload • Tools for running the workload in a controlled manner • How to monitor/measure system performance • How to analyse the data obtained

  16. Performance Evaluation -- Workload • Most crucial component • What to use as a workload? • The application itself – usually not practical • Artificial workload, e.g., an industry standard benchmark • Repeatable workload based on a trace of the application requests • Considerations in workload selection • Services exercised – DML, DDL, stored procs, admin, … • Level of detail – SQL statements, network packets, … • Representativeness – think times, query complexity, … • Updating workloads on an ongoing basis to reflect changes in application traces – helps to make the transition from application trace to workload an easy process • Loading levels – full capacity (best case), beyond capacity (worst case), real observed level (typical case) • Impact of external components – minimize work that is not being measured • Repeatability – easily reproducible with small variance (updates, etc.) • Don’t avoid updates

  17. Performance Evaluation -- Workload • Want repeatable workload but application trace is generally not repeatable • What to do? • Record the application environment • request logging • Study the application traces for various connections • Observe key characteristics • Develop a repeatable workload model • Run workloads while changing parameters • Considerations • Number of concurrent instances and parameters of the workload • Think times • Staggered execution • Simulating multiple users with one trace – need to be careful • Watch out for spurious contention

  18. Performance Evaluation – Load Driver • Need some means of submitting the workload • Internal Driver • Could put the workload in a stored procedure (see the sketch below) • Ignores communication overheads completely – not representative of an interactive application; may be okay for report-generation workloads • Live Operators • Have multiple people log in and perform a predetermined set of activities • Costly and difficult to control • Non-repeatable • High variance • May need a number of trials to achieve an acceptable level of confidence • Terminal emulators (TE) • Use computers to simulate users • A single computer can simulate many users • Controlled and repeatable • Make use of live operators for SQL workload generation and then use TEs, assuming users send SQL requests
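
A hedged sketch of the internal-driver idea above: wrapping a workload loop in a Watcom-SQL stored procedure so that no communication overhead is measured. The procedure name is hypothetical and the statement inside the loop is a placeholder against a system table.

    CREATE PROCEDURE run_internal_workload( IN iterations INT )
    BEGIN
        DECLARE i INT;
        DECLARE row_count INT;
        DECLARE start_ts TIMESTAMP;
        SET i = 0;
        SET start_ts = CURRENT TIMESTAMP;
        WHILE i < iterations LOOP
            -- substitute a representative workload statement here
            SELECT COUNT(*) INTO row_count FROM SYS.SYSTABLE;
            SET i = i + 1;
        END LOOP;
        -- report elapsed time for the whole loop
        SELECT DATEDIFF( millisecond, start_ts, CURRENT TIMESTAMP ) AS elapsed_ms;
    END;

A run such as CALL run_internal_workload( 1000 ) then gives a server-side-only number to compare against timings taken through the client API.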

  19. Performance Evaluation – TE • Think time • Important to choose a realistic think time • Best to use some randomization • Work time • Query complexity • Do NOT want to have the TEs all executing the same sequence at about the same time • Exacerbates contention, resulting in worst-case performance (which in itself can be a useful measurement) • Could script the application itself for use by the TE (especially for web applications) • Costly, and might require one machine per emulated user or cause other problems

  20. Performance Evaluation – Analysis • Interpretation and presentation of data • Just as important as collecting the data • Choose an appropriate base system – understand the ratio game • Truman's Law: If you cannot convince them, confuse them • Look for the “knee” of the graph where a resource hits a limit • Carefully examine the outliers • Determine whether they are due to anomalies that can be explained; if so, ignore them • Otherwise, make them part of the analysis

  21. iAnywhere Benchmarking Efforts • Significant work done towards increasing ASA performance through continuous benchmarking efforts • Industry standard benchmarks • Variety of benchmarks • Considering TPC-H at scale factor 30 • http://www.tpc.org/tpch/default.asp • Customer benchmarking • Regularly involved with performance evaluation projects undertaken by customers • Assist with the process (application performance opportunities) • Identify server improvements • Major ASA performance improvements for • Complex query execution against large databases (10s of GBs) • Concurrent execution of large number of user requests (1000s of users)

  22. iAnywhere Benchmarking Efforts – Complex Queries Query Processing Enhancement • Internal changes to various operators (parallel index scans for JNB) • Use of Bloom filters in multi-join queries • Clustered hash group by (new relational operators) • Query optimization enhancements (not exhaustive): • Cost-based choice of grouping, duplicate elimination algorithms • Enhancements to CNF normalization (predicate subsumption, nullability analysis) • Cost-based subquery optimization, including subquery de-correlation • Optimizations using WINDOW functions • LIKE predicate selectivity • Histogram improvements, particularly for single-value buckets • Updating of histograms with update DML statements

  23. iAnywhere Benchmarking Efforts – Complex Queries

  24. iAnywhere Benchmarking Efforts -- Microdec • iAnywhere Press Release • http://www.sybase.com/detail/1,6904,1029886,00.html • Profile 2000 application for global professional recruitment consultancies running on IBM hardware • Customer evaluation at the IBM benchmarking laboratory in the UK • The benchmark was run for a Microdec client using a “model office” configuration • Met performance objectives running with more than 1,300 active, connected users on a single database with 30 million records • Surpassed benchmark requirements on an 18GB ASA database with 3,900 concurrent connections

  25. iAnywhere Benchmarking Efforts -- Microdec • Worked with Microdec, their client, and IBM to generate a set of requirements • number of users • set of use cases • rate of use cases • mix of use cases • The benchmark • Scripted the use cases using LoadRunner • A Microdec client employee performed each use case while LoadRunner monitored all database interactions • The LoadRunner tool was then used to create a script from these monitored interactions • For example, user input is supplied by a random generator; input from previous requests is supplied by the script • Think times were recorded by LoadRunner and included in the scripts • with an option to vary the think time based on a random number or increase it linearly

  26. iAnywhere Benchmarking Efforts -- Microdec • The benchmark (continued) • Set of client machines to • run simulated users • record average, maximum, minimum response times for each of the use cases • Monitored performance with varying number of users under different hardware configurations • Provided a performance guarantee based on observed performance in the steady state, with some discount (10-20%, based on testing inaccuracy) • On-line users worked with the system during the measurement interval to get a "feel" for the performance levels of the system

  27. iAnywhere Benchmarking Efforts -- Microdec • Pitfalls -- simulating multiple users with one trace -- need to be very careful • How to generate test data • Existing data may not be available or may be too small; how to generate large amounts of meaningful data? • Avoid artificial skew in generated data (e.g., duplicating one user) • Good to match expected skew, but can get useful results with uniform distribution or simple rules; however, results must be interpreted more carefully • Server inflexion points • Make sure (auto) checkpointing is included in test runs

  28. Application performance • Important to do performance evaluation of single queries without concurrency • Baseline – determines performance when scaled up • Obtain the best possible performance for complex queries in a single-user environment before proceeding with concurrent evaluation • SQL922 – Performance Analysis • Carefully consider schema design choices (indexes, etc.) • Make use of caching in the server for expressions (functions, subqueries) and access plans to maximize ASA performance • Easiest and most common form of performance evaluation, and yet often ignored • Evaluation of network performance is critical in some situations

  29. Application performance • For benchmarking vs. other vendors, make sure to consider cost (e.g., 3-year TCO of license and support options) • Prefer numbers you generate on your own over published numbers • Be wary of "expensive" options (see the sketch below) • isolation levels greater than 0 • updatable cursors • forward + backward fetches (scrolling) • keyset/sensitive cursors • Correct choice of “first-row” vs “all-rows” – a global setting for the application may not be appropriate • Try to get the TE as close as possible to the app behaviour, especially w.r.t. expensive options
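
A hedged sketch of a read-only report written to avoid the expensive options listed above: isolation level 0, a forward-only (NO SCROLL) cursor, and FOR READ ONLY instead of an updatable or scrollable cursor. The orders table and its columns are illustrative only.

    SET TEMPORARY OPTION isolation_level = 0;   -- cheapest isolation level
    BEGIN
        DECLARE cust INT;
        DECLARE cnt  INT;
        DECLARE report_cur NO SCROLL CURSOR FOR    -- forward-only fetches
            SELECT customer, COUNT(*) FROM orders GROUP BY customer
            FOR READ ONLY;                         -- not updatable
        OPEN report_cur;
        FETCH NEXT report_cur INTO cust, cnt;      -- fetch loop elided
        CLOSE report_cur;
    END;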

  30. Application Performance Bottlenecks • Common mistakes in ASA applications that inhibit scalability by introducing bottlenecks • Why, where, and how do they occur? • How to prevent/avoid/minimize them? • Hardware Considerations • Is the application CPU- or I/O-bound? (see the sketch below) • Adding better disks is likely not going to help if the CPU cannot keep up, and vice versa • Look at the application to see if a balance between CPU and I/O tasks can be achieved
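
One hedged way to judge whether a workload is CPU- or I/O-bound is to sample server and database statistics before and after a test interval; the property names below (ProcessCPU, DiskRead, DiskWrite) are assumptions to be verified against the documentation for your ASA version.

    -- Sample before and after the run; compare the deltas.
    SELECT PROPERTY( 'ProcessCPU' )    AS cpu_seconds,   -- server CPU time (assumed name)
           DB_PROPERTY( 'DiskRead' )   AS disk_reads,    -- pages read from disk (assumed name)
           DB_PROPERTY( 'DiskWrite' )  AS disk_writes;   -- pages written to disk (assumed name)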

  31. Application Performance Bottlenecks • Locks • PKEY generation (see the DEFAULT AUTOINCREMENT sketch below) • Internal contention in the server (mutexes) • Hot pages (less of a problem in 9.x) • Hot rows (still a big problem) • Other hot structures (mostly removed in 9.x)
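
As an illustration of the PKEY-generation and hot-row bullets above: a single key-counter row that every inserter must update serializes the whole workload, whereas a DEFAULT AUTOINCREMENT key lets the server assign values without that contention. The table below is illustrative only.

    -- A key-counter table turns key generation into a hot row:
    --   UPDATE key_counter SET next_id = next_id + 1 WHERE tab = 'orders';
    -- A DEFAULT AUTOINCREMENT key avoids that contention:
    CREATE TABLE orders (
        order_id   INTEGER NOT NULL DEFAULT AUTOINCREMENT,
        customer   INTEGER NOT NULL,
        created_at TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
        PRIMARY KEY ( order_id )
    );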

  32. Application Performance Bottlenecks • Understand the architecture of an ASA connection • One active request per connection • Pre-assigned number of worker threads available for executing requests • Internal worker scheduler assigns workers to requests as they arrive • Deadlocks can occur that stall the server • Database option (dba-only, temporary-only): Dedicated_task • A task dedicated exclusively to handling requests for current connection • Prevents current connection from participating in a deadlock
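
A minimal sketch of the Dedicated_task option described above, set from a DBA monitoring connection so that it keeps its own worker even if all other workers are tied up (temporary and dba-only, as noted on the slide):

    SET TEMPORARY OPTION Dedicated_task = 'ON';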

  33. Tools • Need tools to help with • Establishing baselines/high watermarks • Workload capturing • Workload replaying • Commercial tools [no experience; use at own risk] • Mercury LoadRunner • http://www.mercury.com/us/products/performance-center/loadrunner/ • SilkPerformer • http://www.segue.com/products/load-stress-performance-testing/silkperformer.asp • Benchmark Factory • http://www.benchmarkfactory.com/ • ASA Tools • Located in <install_dir>\Samples\ASA • Documentation in <tool_dir>\readme.txt

  34. ASA Tools • Establishing baselines/high watermark • Use rules of thumb – simple rules to get a very rough idea for best performance • Use Fetchtst and Instest to establish rules of thumb • Get actual high watermarks from fetchtst/instest • Disk I/O is 15ms • Can return 1M rows in N seconds at best for each comm link • Overhead of opening a query is at least X and can go up to Y depending upon complexity • You can always improve performance by 10% with exponentially increasing cost (change query, schema, add hardware, pre-compute results in tables, etc.) until you reach the limits of “laws of physics”

  35. ASA Tools -- Fetchtst • Use to test fetch rates for an arbitrary query • PREPAREs a SQL statement for a given query, DESCRIBEs the result set, and fetches the rows with minimum overhead • By default, the row values are not saved • At completion, displays the times taken to execute each part of the query • Open • Describe • Fetch first row • Fetch remaining rows • Multiple statements can be specified • Statements which do not return result sets can also be included

  36. ASA Tools -- Fetchtst • Usage: FETCHTST [options] [<file1> [.. <fileN>]] • Options: (Partial List) • -b nnn : fetch nnn records at a time • -i nnn : think time (milliseconds) between statements • -is n : set isolation level at beginning to n • -k : disable prefetching • -l nnn : stop after nnn records • -p : display plan • -t cursor_type : INSENSITIVE or SCROLL
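
A hedged example using only the switches listed above (connection parameters are omitted; see the tool's readme.txt for the full option list). If query.sql contains

    SELECT * FROM SYS.SYSTABLE ORDER BY table_id;

then

    FETCHTST -b 100 -l 100000 -is 0 query.sql

fetches the result 100 rows at a time at isolation level 0, stops after 100,000 rows, and reports the open, describe, first-row, and remaining-row times.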

  37. ASA Tools – Instest • Use to test insert rates into a table. • By default, DESCRIBEs the result types of a given query and then uses PUT requests to insert rows. • The query is specified as a SELECT statement. • Alternatively, can specify a SELECT statement and an INSERT statement • SELECT indicates the types of the columns being inserted. • The number of rows to be generated can be specified • Can generate unique values for each column. • A starting value can be specified. • If the table being populated has a DEFAULT AUTOINCREMENT primary key, omitting the key column from the SELECT list may be sufficient to allow multiple rows to be inserted without violating uniqueness constraints.

  38. ASA Tools – Instest • Usage: INSTEST [options] [fname] • Options: (Partial List) • -i : use INSERT (default = PUT) • -k rows : CHECKPOINT frequency (default = never) • -m rows : COMMIT frequency (default = never) • -w width : rows to insert per request • -r rows : rows to insert
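
A hedged example using only the switches listed above (connection parameters omitted; see readme.txt). If ins.sql contains a SELECT that describes the columns to populate, for example against the illustrative orders table used earlier (the autoincrement key column is omitted, as the slide suggests),

    SELECT customer, created_at FROM orders;

then

    INSTEST -m 1000 -r 100000 ins.sql

generates and inserts 100,000 rows using PUT requests, committing every 1,000 rows; adding -i would use INSERT statements instead.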

  39. ASA Tools – Workload Capturing • Request Logging • Captures requests received from client • Does not capture internal work • Contains connection identification • Tools under construction to generate workload scripts • Covered in detail in SQL922 (Wed 1:30pm)
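
Request logging is normally enabled at server start or from a DBA connection; the switch and option names below are as recalled for ASA 9 and should be verified against the documentation for your version (the database and file names are illustrative):

    Server command line:  dbsrv9 -zr sql -zo reqlog.txt mydb.db

    At run time:
    CALL sa_server_option( 'Request_level_logging', 'SQL' );
    CALL sa_server_option( 'Request_level_log_file', 'reqlog.txt' );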

  40. ASA Tools – Workload Replaying • TRANTEST • Simulates multiple client machines accessing a server to perform transactions • Transactions to be executed are specified by the user • Use it to measure the load that can be handled by a given server configuration, given your database design and set of transactions

  41. ASA Tools – Trantest Setup • Create tables in the database by executing trantabs.sql • Transactions run via un-parameterized stored procedure calls can be tested by placing the CALL statement in a file • Occurrences of {thread} are replaced with the thread number • call p{thread}() – thread N will execute "call pN()" (see the sketch below) • If the transactions cannot be defined with procedure calls, add them to worklist.cpp and rebuild TRANTEST
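
An illustrative setup for the {thread} substitution above, using the hypothetical orders table from earlier; whether the COMMIT belongs inside the procedure depends on how the test is defined:

    -- One procedure per simulated thread; thread 1 runs p1, thread 2 runs p2, ...
    CREATE PROCEDURE p1()
    BEGIN
        INSERT INTO orders( customer ) VALUES ( 1 );
        COMMIT;
    END;

    -- The script file passed to TRANTEST contains the single line
    --   call p{thread}()
    -- so thread N executes "call pN()" as described above.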

  42. ASA Tools – Generating Test Data • To accurately simulate server load, populate the database with some representative data, if not done already • May need to increase the amount of data if testing for future growth • Considerations for generating test data • Avoid artificial skew (e.g., multiplying the data for one user) • Can aim for expected skew • Can also get useful results with uniform distribution – be careful when interpreting results
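
One hedged way to generate bulk test rows server-side, spreading values so that the data is not just one user multiplied many times; sa_rowgenerator is an ASA system procedure (if it is not available in your version, a temporary numbers table works as well), and the orders table is the illustrative one used earlier:

    -- 100,000 rows spread over 5,000 customers and a year of dates
    INSERT INTO orders( customer, created_at )
    SELECT MOD( row_num, 5000 ) + 1,
           DATEADD( day, -MOD( row_num, 365 ), CURRENT DATE )
    FROM sa_rowgenerator( 1, 100000 );
    COMMIT;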

  43. ASA Tools – Running Trantest • Don't run with too many threads on one machine • CPU and/or network on the client can max out • If necessary, use multiple client machines, each with multiple threads • On the "master" machine, specify the number of machines participating (-p) • On all other machines, specify a unique thread group id from 1 to #machines - 1 (-g) • Eliminate other network traffic, if possible • run the server and client machines on an isolated network segment • Allow time for the server to "warm up" before measuring transaction times • Ensure the test environment allows for reproducible runs: • start with the same database each time • The results of each run are recorded in the database and can be extracted to a comma-separated file with "getresults.sql".

  44. ASA Tools – Trantest • Usage: TRANTEST <switches> • Switches: (Partial List) • -a <api> API to use (ESQL or ODBC) • -f <file> SQL script file • -i <num> isolation level • -n <list> number of threads to run at each iteration • (e.g. -n 1,5,10,20) • -p <num> number of machines (thread groups) • -t <str> name of test to run
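
A hedged example using only the switches listed above (connection parameters omitted; see readme.txt); whether -f or -t applies depends on whether the transactions come from a script file or are built into worklist.cpp:

    TRANTEST -a ODBC -f script.sql -i 0 -n 1,5,10,20 -t mytest

runs the scripted transactions over ODBC at isolation level 0, repeating the measurement with 1, 5, 10, and 20 threads; script.sql and mytest are illustrative names.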

  45. iAnywhere at TechWave2004 • Ask the iAnywhere Experts on the Technology Boardwalk (exhibit hall) • Drop in during exhibit hall hours and have all your questions answered by our technical experts! • Appointments outside of exhibit hall hours are also available to speak one-on-one with our Senior Engineers. Ask questions or get your yearly technical review – ask us for details! • TechWave ToGo Channel • TechWave To Go, an AvantGo channel providing up-to-date information about TechWave classes, events, maps and more – also, keep up to date with the TechWave Newsletter – now available via your handheld device! • www.ianywhere.com/techwavetogo • Mobile and Wireless Email using Pylon Anywhere • iAnywhere is providing access to your corporate email at TechWave using Pylon Anywhere. You can keep up-to-date with your latest email, calendar, contacts, and tasks from your PDA or any Web-client! Visit the iAnywhere demo station in the Sybase booth or our “Ask the Experts” area in the Technology Boardwalk (Exhibit Hall) for details on how you can evaluate Pylon Anywhere yourself!

  46. iAnywhere at TechWave2004 • Wi-Fi Hotspots – brought to you by Intel • You can enjoy wireless internet access via Wi-Fi hotspots provided by Intel. Using either a laptop or PDA that is Wi-Fi 802.11b wirelessly-enabled, visitors can access personal email, the internet and “TechWave ToGo” • Developer Community A one-stop source for technical information! • Access to newsgroups, new betas and code samples • Monthly technical newsletters • Technical whitepapers, tips and online product documentation • Current webcast, class, conference and seminar listings • Excellent resources for commonly asked questions • All available express bug fixes and patches • Network with thousands of industry experts http://www.ianywhere.com/developer/

  47. Questions?
