Engineering Better PL/SQL

Presentation Transcript


  1. Engineering Better PL/SQL  Bert Scalzo Database Domain Expert Bert.Scalzo@Quest.com

  2. Bert Scalzo … Database Expert & Product Architect for Quest Software. Oracle Background: worked with Oracle databases for over two decades (starting with version 4); work history includes time at both “Oracle Education” and “Oracle Consulting”. Academic Background: several Oracle Masters certifications; BS, MS and PhD in Computer Science; MBA (general business); several insurance industry designations. Key Interests: data modeling, database benchmarking, database tuning & optimization, “star schema” data warehouses, and Oracle on Linux – specifically RAC on Linux. Articles for: Oracle’s Technology Network (OTN), Oracle Magazine, Oracle Informant, PC Week (eWeek), Dell Power Solutions Magazine, The Linux Journal, www.linux.com, and www.orafaq.com

  3. Books by Bert … Coming in 2009 …

  4. Agenda • PL/SQL as a Language • Costs of Software Defects • Failure of Best Practices • Code Reviews Inadequate • Software Engineering to the Rescue • Automation = Real Rescue = Better Code • Code Metrics – Explanation & Examples • Conclusion – Automated Metrics = Success

  5. PL/SQL as a Language… • Relatively easy to learn (based on Ada) • Well integrated with the Oracle Database • Efficient for complex and/or large-scale DB operations • Origin was SQL*Forms (later moved into the database) • Was once an optional $$$ add-on to the database • Lots of books (the best by Steven Feuerstein of Quest) • Evolved into a mature, robust and functional language • Of late, many new Oracle features have been exposed via PL/SQL APIs rather than via new SQL commands (e.g. DBMS_JOB, DBMS_SCHEDULER, DBMS_REDEFINITION, etc.) – a minimal example follows
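
For instance, scheduling a job is done entirely through the DBMS_SCHEDULER packaged API rather than through any new SQL statement. A minimal sketch (the job name, schedule and called procedure are illustrative, not from the presentation):

BEGIN
  -- create a nightly job purely through the PL/SQL packaged API
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_STATS_JOB',      -- hypothetical job name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'REFRESH_STATS',          -- hypothetical procedure
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',   -- every night at 2 AM
    enabled         => TRUE);
END;
/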

  6. Simple is as Simple does… • Simple & easy to learn ≠ effective & efficient • PL/SQL code is often as flawed as any other code • Some of the worst code I’ve seen has been PL/SQL • PL/SQL sometimes makes it easy to shoot yourself in the foot • Many PL/SQL code issues survive into production!!! • I’ve been an “Expert Witness” in court cases on this • So, how do we apply better “scientifically based” Software Engineering “Best Practices” to mitigate development mistakes – yielding “Better Code”? (one classic foot-gun is sketched below)
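
As an illustration (my example, not from the slides), one classic foot-gun that routinely survives into production is a WHEN OTHERS handler that silently swallows every error; the procedure and table names here are hypothetical:

-- a common PL/SQL foot-gun: silently swallowing all exceptions
CREATE OR REPLACE PROCEDURE post_payment (p_acct IN NUMBER, p_amt IN NUMBER) IS
BEGIN
  UPDATE accounts SET balance = balance + p_amt WHERE acct_id = p_acct;
EXCEPTION
  WHEN OTHERS THEN
    NULL;  -- the UPDATE can fail and nobody will ever know
END post_payment;
/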

  7. Costs of Software Defects… • Quote: inferior software cost the US an estimated $59.5 billion in 2002 • Possible breakdown: • 2.5 million IT workers in the US • Only about 50% doing coding • So $59,500M / 1.25M coders ≈ $50K per developer • What if employers could hold you liable? • So reduce your salary by $50K – ouch  • We’d all have to buy malpractice insurance!

  8. Interesting Statistics… [Chart: breakdown of programmer time and development costs across new development, bug fixes and maintenance]

  9. Failure of “Best Practices”… • Development teams use guidelines & best practices • Lots of developers’ cubes are full of PL/SQL books • Lots of new ideas: • Agile Development • Interaction Design • Technique du jour… • Yet paradigms are problematic • Inconsistent implementations • Less than perfect adherence • How do we monitor & measure? • Net effect: not quite as effective as possible/promised 

  10. Code Reviews Inadequate… • Code reviews are good in theory… • But: • Increased cost in both time & money • Require “fair & reasonable” implementation • Require good team dynamics – “true peers” • Only as good as those doing the code reviews • Thus: • Few shops try them, or they give up on them far too easily • Code reviews can also become just a “check box” task • We need software to simplify, automate, make them more consistent, and provide a way to monitor and measure!

  11. Software Engineering to the Rescue… • US Air Force-funded study in the mid-1980s • Carnegie Mellon Software Engineering Institute (SEI) • Published “Managing the Software Process” in 1989 • Basis for the “Capability Maturity Model” (CMM) in 1991 • Later the “Capability Maturity Model Integration” (CMMI) • A simple 5-level gauge of software development: • Initial: ad hoc, depends on the competence of people • Repeatable: project management to schedule & track • Defined: standards emerge and are applied across projects • Managed: management controls via statistical/quantitative metrics • Optimizing: adopt agile, innovative and incremental improvements

  12. http://www.sei.cmu.edu/cmmi/adoption/pdf/cmmi-overview07.pdf

  13. http://www.sei.cmu.edu/cmmi/adoption/pdf/cmmi-overview07.pdf

  14. Automation to the Real Rescue… • Make code reviews painless and easy – and fun  • Perform automated code reviews before manual ones • If not doing code reviews, at least do automated ones • Eliminate simple and often embarrassing mistakes  • Provide team/project manager code quality reports • Managers can now better measure many aspects: • Efficiency/effectiveness of the resulting code • Efficiency/effectiveness of the developers (more accurate/fair) • History of the code quality – for project progress/regression analysis • Result: better code – and better developers over time

  15. Toad’s “Code Xpert”…

  16. Define your Coding Rules… 144 rules from Steven Feuerstein and Bert Scalzo

  17. Examine the Results…

  18. Automate Tuning as well…

  19. Examine Code Metrics too… So what are these? Only performed once!

  20. What are Code Metrics… The critical and initial step in obtaining SEI maturity level 4 (Managed) is to understand, embrace and implement quantitative analysis. But what exactly is quantitative analysis? Quantitative analysis is an analysis technique that seeks to understand behavior by using mathematical and statistical modeling, measurement and research. By assigning numerical values to variables, quantitative analysts try to decipher reality mathematically. That’s really just a pretty academic way to overstate a rather simple idea: there exist some well-published and accepted standards (i.e. formulas) for examining source code such as PL/SQL and assigning it a numeric rating. Furthermore, these ratings are simple numeric values that map against ranges of values, and those ranges have been categorized.

  21. Halstead Complexity Measure… http://www.sei.cmu.edu/str/descriptions/halstead.html This metric assigns a numerical complexity rating based upon the number of operators and operands in the source code. Code is tokenized and counted, where: n1 = the number of distinct operators, n2 = the number of distinct operands, N1 = the total number of operators, N2 = the total number of operands. The ideal range for a program unit is between 20 and 1,000; the higher the rating, the more complex the code. If a program unit scores higher than 1,000, it probably does too much. Lower = Better
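
The rating described here matches the standard Halstead Volume formula, V = (N1 + N2) × log2(n1 + n2). A minimal PL/SQL sketch of that final step (the token counts themselves would come from a tokenizer, which is assumed, and the function name is mine):

-- Halstead Volume from pre-counted tokens: V = N * log2(n)
CREATE OR REPLACE FUNCTION halstead_volume (
  p_distinct_operators IN PLS_INTEGER,   -- n1
  p_distinct_operands  IN PLS_INTEGER,   -- n2
  p_total_operators    IN PLS_INTEGER,   -- N1
  p_total_operands     IN PLS_INTEGER)   -- N2
  RETURN NUMBER IS
BEGIN
  RETURN (p_total_operators + p_total_operands)
         * (LN(p_distinct_operators + p_distinct_operands) / LN(2));
END halstead_volume;
/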

  22. McCabe’s Cyclomatic Complexity… http://www.sei.cmu.edu/str/descriptions/cyclomatic.html This widely-used metric is considered a broad measure of the soundness of and confidence in a program. It measures the number of linearly-independent paths (i.e. branches and loops) through a program unit – assigning a simple number that can be compared to the complexity of other programs: Cyclomatic complexity (CC) = E − N + 2P, where E = the number of edges of the graph, N = the number of nodes of the graph, and P = the number of connected components
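
In practice, for a single routine, the graph formula reduces to a well-known shortcut: CC = number of decision points (IF, ELSIF, CASE branches, loop tests, etc.) + 1. A small illustrative PL/SQL example (mine, not from the deck):

-- three decision points (LOOP, IF, ELSIF), so CC = 3 + 1 = 4
CREATE OR REPLACE FUNCTION letter_grades (p_scores IN SYS.ODCINUMBERLIST)
  RETURN VARCHAR2 IS
  v_result VARCHAR2(4000);
BEGIN
  FOR i IN 1 .. p_scores.COUNT LOOP    -- decision point 1
    IF p_scores(i) >= 90 THEN          -- decision point 2
      v_result := v_result || 'A';
    ELSIF p_scores(i) >= 80 THEN       -- decision point 3
      v_result := v_result || 'B';
    ELSE
      v_result := v_result || 'C';
    END IF;
  END LOOP;
  RETURN v_result;
END letter_grades;
/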

  23. Maintainability Index… http://www.sei.cmu.edu/str/descriptions/mitmpm.html This metric is calculated using a complex composite equation that combines weighted values for the Halstead Complexity Measure, McCabe’s Cyclomatic Complexity, lines of code, and the number of comments as follows: MI = 171 − 5.2 × ln(aveV) − 0.23 × aveV(g') − 16.2 × ln(aveLOC) + 50 × sin(sqrt(2.4 × perCM)), where: aveV = average Halstead Volume V per module, aveV(g') = average extended Cyclomatic Complexity per module, aveLOC = the average count of lines of code (LOC) per module, and perCM = average percent of lines of comments per module
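
Transcribed directly into PL/SQL as a sketch (the function name is mine; the four inputs would come from the metric passes described above):

-- Maintainability Index exactly as defined on the slide
CREATE OR REPLACE FUNCTION maintainability_index (
  p_ave_volume IN NUMBER,   -- aveV: average Halstead Volume per module
  p_ave_cc     IN NUMBER,   -- aveV(g'): average extended CC per module
  p_ave_loc    IN NUMBER,   -- aveLOC: average lines of code per module
  p_per_cm     IN NUMBER)   -- perCM: average percent of comment lines
  RETURN NUMBER IS
BEGIN
  RETURN 171
         - 5.2  * LN(p_ave_volume)
         - 0.23 * p_ave_cc
         - 16.2 * LN(p_ave_loc)
         + 50   * SIN(SQRT(2.4 * p_per_cm));
END maintainability_index;
/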

  24. So let’s look once again… [Screenshot: Code Xpert metric scores of 20, 1 and 85 for a small sample program] Pretty “complex” for such a small program. Only performed once!

  25. Fixed and Scored Again… [Screenshot: after the fix, the rescored metrics improve by 236%, 200% and 24%] Truly better! Measurable, accurate, objective.

  26. But, “Houston, we have a problem”  • Most people either simply did not know or did not like these software engineering metrics • The common complaint was either “who knows this junk?” or “how do I interpret such numbers?” • They wanted a “working man’s” solution rather than an academically perfect principle … • So Quest made Toad simpler: • Introduced a simple traffic-light coloration (red, yellow, green) representation based upon a proprietary formula • Created a default, minimal rule set (Bert’s and Steven’s top 20) • Seems to have quelled the complaints so far …
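
The actual Toad formula and thresholds are proprietary, but the idea is easy to sketch; here is a purely hypothetical mapping from a composite quality score to a traffic-light color (thresholds invented for illustration):

-- hypothetical traffic-light bucketing of a composite quality score;
-- the real Toad formula and cut-offs are proprietary
CREATE OR REPLACE FUNCTION quality_light (p_score IN NUMBER)
  RETURN VARCHAR2 IS
BEGIN
  RETURN CASE
           WHEN p_score >= 80 THEN 'GREEN'   -- ship it
           WHEN p_score >= 50 THEN 'YELLOW'  -- review recommended
           ELSE 'RED'                        -- rework before release
         END;
END quality_light;
/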

  27. Better code simply yields a clearly better score

  28. Thank you • Please offer any questions or comments • Remember: • There is no such thing as a “perfect program” • Nor should we “waste time” trying for perfection • Embrace the software engineering approach & metrics • Utilize software tools that support/automate that effort • Measure success using simple yet reliable scientific metrics • This approach improves code quality – and improves skill sets • PS – all these concepts apply outside PL/SQL too, just not using Toad • Your mileage may well vary (especially the percentages)
