
Challenges for Software Research


Presentation Transcript


  1. Challenges for Software Research Walter F. Tichy Universität Karlsruhe

  2. The Central Goal of SW Engineering
  • Provide software
  • of sufficient quantity
  • at competitive cost
  • and satisfactory quality.
  Competitive pressures require more functionality, cost reduction, or better quality. Software research seeks a fundamental understanding of software development and maintenance that can be applied to master greater software complexity, reduce (and predict) costs, and achieve (and predict) desired quality.

  3. Context
  • In the past, software research has emphasized technological development.
  • Almost all software researchers have been trained as technology pushers.
  • Now the advantages of new techniques are no longer obvious (personal experience and introspection do not provide enough guidance).
  • A balance between technology, theory, and empirical work is needed.

  4. The Software Research Landscape: “How can we help to develop satisfactory software quickly?”
  • Technology: processes, methods, tools, languages.
  • Theory: models, rules, laws, cause and effect, prediction, informed by observation.
  • Experiment & Simulation: test of theories, evaluation of technology, exploration of phenomena, experimental methods.
  All three areas depend on each other, linked by questions such as “Why?”, “How?”, “When?”, “Is the theory correct?”, “How well do methods work?”, “What happens if ...?”, and “What really goes on?”

  5. Some Technology Challenges ...
  • Real-time guarantees for networked systems
  • Programming multicore architectures
  • Transactional memory
  • Parallel software architectures
  • Methods for distributed/global development (offshoring is a fact of life)
  • Your favorite topic goes here.

  6. Theory Challenges
  A theory should:
  • explain observed phenomena,
  • predict as yet unobserved phenomena,
  • allow formulation of testable hypotheses.
  Theories
  • form the core of knowledge of a scientific field,
  • need to be improved or replaced as new phenomena require explanation,
  • may be qualitative or quantitative,
  • are in short supply in software research (ask the why and how questions).

  7. Example: Theory by Sauer, Jeffery, Land and Yetton (2000) about Inspections
  • “The Effectiveness of Software Development Technical Reviews: A Behaviorally Motivated Program of Research,” IEEE Transactions on Software Engineering 26:1 (2000), 1-14.
  • Result: the most important factor for effectiveness is the competence of the individual inspector.

  8. Overview

  9. Group Expertise
  • Group expertise is determined by the expertise of the individuals in the group.
  • More group members can increase the group’s expertise (but there are practical limits).
  • Interactions in the group produce no new expertise (no synergy).
  • Selection and training of group members affect group expertise.
  • Group process and social decision scheme affect how well the group members can apply their expertise (but they don’t add any).

  10. Some General Insights
  • Group performance is increased by increasing the available individual expertise.
  • Changes to the group process should aim at using the available expertise well.
  • Learning effects are not addressed.
  • Group scheduling is not addressed.

  11. Interaction with Experiments: Group Size
  • Number of inspectors: some recommend 3 to 5, because more individuals improve group expertise.
  • Observed: 2 inspectors are better than one, but 4 are not better than 2. Explanation: there could be pairing of experts, which leads to high task expertise.
  • Observed: scenario-based inspection is better than ad hoc reading or inspection lists. Explanation: scenarios train for and cover defect classes better.

  12. Achievable Theories in SW Research
  • Models for SW development processes (perhaps stochastic)
  • Models for agile methods
  • Models for traditional development methods
  • Economic or productivity models (cost/benefit tradeoffs)
  • Microscopic models of sub-tasks, such as testing, implementation, design, program understanding, re-engineering
  The goal is insight into what really goes on.

  13. Challenges in Empirical Work
  • Observation is the key to finding out what really goes on; it is also used to test theories.
  • We need better tools to capture what goes on in SW work:
  • analyze project histories,
  • observe the actions of software workers at the appropriate level (Hackystat, Eclipse plugins, coupled with video/audio),
  • develop better analysis tools.
  • In the past ten years, the fraction of empirical studies published has increased greatly (see the journal Empirical Software Engineering, published since 1996). Quality is good.

  14. Some Recent Empirical Results
  • Programming in pairs is comparable in cost and quality to individual programmers plus inspections.
  • Test-first (writing tests before the code) is about as effective as test-last, provided tests are automated.
  • UML diagrams in maintenance lead to only slight improvements in productivity and quality; when diagrams must be updated, the productivity gains disappear.
  • We need to move from belief and dogma towards skepticism and evidence.
  • Empirical work is also important for other areas, such as HCI, algorithms, computer architecture, speech understanding, collaborative work, and others.

  15. Exploratory Study: Analyzing Test-Driven Development (TDD)
  • Tests are written before code.
  • Incremental process, method by method.
  • Tests are executable and self-testing.
  • Data from many different sources have to be combined for an analysis, e.g.
  • data from unit-test invocations,
  • the state of the application sources at every unit test,
  • user behavior in the development environment.
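The test-first, method-by-method cycle described above can be sketched with Python's built-in unittest; the Stack class and its API are hypothetical illustrations, not examples from the talk.

```python
import unittest

# Step 1: write a failing, self-testing test for behavior that does not exist yet.
class TestStack(unittest.TestCase):
    def test_push_then_pop_returns_last_element(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)

# Step 2: write just enough application code to make the test pass.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Step 3: rerun the executable tests, refactor, and repeat for the next method.
if __name__ == "__main__":
    unittest.main(exit=False)
```

Because the tests are executable, every iteration of this cycle leaves a machine-checkable trace, which is what makes the data collection described next possible.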

  16. Data-Collection Framework
  • Eclipse plugins gather the required data.
  • Collected data is post-processed and entered into a database.
  • A semi-automatic evaluator determines the number of changes that conform to TDD; for example, each application-class change has to be preceded by a failed test.
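A minimal version of such a conformance check might look as follows; the event-tuple format and the function name are assumptions for illustration, not the actual plugin interface.

```python
# Hypothetical event stream: (timestamp, kind), where kind is one of
# "test_failed", "test_passed", or "class_change".
def tdd_conforming_changes(events):
    """Count application-class changes that are preceded by a failed unit test."""
    conforming = total = 0
    last_test_failed = False
    for _, kind in sorted(events):
        if kind == "test_failed":
            last_test_failed = True
        elif kind == "test_passed":
            last_test_failed = False
        elif kind == "class_change":
            total += 1
            if last_test_failed:
                conforming += 1
    return conforming, total

events = [(1, "test_failed"), (2, "class_change"),   # conforms to TDD
          (3, "test_passed"), (4, "class_change")]   # does not conform
```

The ratio of the two counts gives the fraction of TDD-conforming changes compared between professionals and students in the results below.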

  17. Classification of Changes (pX = professional, sX = student)

  18. Results
  • Fraction of changes conforming to TDD is higher for professionals than for students.
  • Average period of time between two successive unit tests (cycle time) is smaller for professionals (not shown).
  • Cycle time of professionals has a smaller variation than that of students.

  19. Characteristics of TDD Programs
  • Programs developed with TDD are said to be more testable.
  • What are the characteristics of a more testable program?
  • Idea: testable programs make the effect of statements controllable, i.e., input can be set and results can be observed in a test.

  20. Controllability of Assignments
  • Assume an assignment a = b + c in an application-code method.
  • If b and c can be set in the test, the value of a can be controlled in the test.
  • An assignment is said to be controllable if its left-hand side can be determined during the set-up of the test.
  • C-metric: ratio of controllable assignments to all assignments in a method.
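Given per-assignment controllability flags (which in practice would come from static analysis of the method body), the C-metric reduces to a simple ratio. This sketch, including its convention for methods without assignments, is an illustrative assumption rather than the talk's exact definition.

```python
def c_metric(controllable_flags):
    """C-metric of a method: fraction of its assignments whose
    left-hand side can be determined during test set-up.

    controllable_flags: one boolean per assignment in the method,
    True if that assignment is controllable (e.g. a = b + c where
    both b and c can be set in the test).
    """
    if not controllable_flags:
        return 1.0  # assumed convention: no assignments, nothing uncontrollable
    return sum(controllable_flags) / len(controllable_flags)

# Example: 3 of 4 assignments in a method are controllable.
print(c_metric([True, True, True, False]))  # -> 0.75
```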

  21. Evaluation
  • 4 TDD projects and 4 open-source projects
  • Comparison to the Chidamber-Kemerer metric suite (WMC, DIT, NOC, CBO, RFC, LCOM)
  • Comparison to the number of assignments per method and the size of the method (number of byte-code statements)

  22. Results
  • The C-metric is negatively correlated with all other metrics.
  • The C-metric is the only statistically significant factor in a logistic regression with TDD as the response variable.
  • The C-metric is an indicator of whether a project was developed using TDD or not.
  • It may also be a metric for testability (an example of grounded theory).
  • Next questions: What is the relation to coverage criteria? What other characteristics does a TDD process really have? What is the relation to pair programming? Why do pair programmers make mistakes that solo programmers don't?

  23. Questions?

  24. Examples of Empirical Work
  • Overview of software experiments conducted in Karlsruhe: http://www.ipd.uka.de/~exp/
  • Simula Research Lab, Norway, Software Engineering: http://www.simula.no/department.php
