17-939A
What Makes Good Research in Software Engineering?
Spring 2005, Mary Shaw
Course Calendar
Including readings and responsible parties for the session

Class Meetings
Tuesdays and Thursdays, 9:00-10:30am, in Wean 4615A
| Who | Name | E-mail address | Office, Phone |
| Instructor | Mary Shaw | mary.shaw [at] cs.cmu.edu | WeH 8109 (x8-2589) |
| Assistant | Kari Samuels | kari.samuels [at] cs.cmu.edu | WeH 8107 (x8-3063) |
| TA | (get serious) | | |
Class mailing list: students17939 [at] cs.cmu.edu
| Name | E-mail (change [at] to @) | Dept | Tech Surv | Resch Meth | Tech Mat | Sounding Board | Technology Topic | Research Paradigm |
| Abi-Antoun, Marwan | mabianto [at] cs.cmu.edu | SCS/SE | Tu 3/15 | Th 3/31 | Th 4/21 | LaToza | Recovery of design information from code, to support restructuring | Analytic models for tools and notations |
| Bergmann, Kathryn | kbergman [at] ece.cmu.edu | ECE | Tu 3/15 | Th 3/31 | Tu 4/19 | Morris | Fault tolerance based on multiple versions of code | Experience reports |
| Bierhoff, Kevin | kevin.bierhoff [at] cs.cmu.edu | SCS/SE | Tu 3/22 | Tu 4/7 | Th 4/21 | Maxino | Role and handling of state in software (excluding Petri nets) | Case studies |
| Golden, Else | egolden [at] cmu.edu | SCS/HCII | Th 3/17 | Th 3/31 | Tu 4/26 | Maccherone | Software cost estimation | Empirical studies, experimental |
| Hartman, Greg | gghartma [at] cs.cmu.edu | SCS/SE | Th 3/24 | Tu 4/12 | Th 4/28 | Scaffidi | Object oriented frameworks and code reuse | Empirical studies |
| LaToza, Thomas | tlatoza [at] cs.cmu.edu | SCS/SE | Th 3/17 | Th 4/5 | Tu 4/19 | Abi-Antoun | Understanding software evolution | Empirical studies |
| Maccherone, Larry | lmaccherone [at] yahoo.com | SCS/SE | Th 3/17 | Th 4/7 | Th 4/28 | Golden | Software process definition languages | Develop new PDL; try it out; critically evaluate result |
| Maxino, Theresa | maxino [at] cmu.edu | ECE | Tu 3/22 | Tu 4/5 | Tu 4/26 | Bierhoff | Real-time system design, with emphasis on scheduling | Theoretical/formal proofs validated thru examples |
| Morris, Jennifer | jenm [at] ece.cmu.edu | ECE | Tu 3/15 | Tu 4/5 | Tu 4/19 | Bergmann | Hazard analysis | Qualitative methods and models; case studies |
| Scaffidi, Christopher | cscaffid+isri [at] cs.cmu.edu | SCS/SE | Th 3/24 | Tu 4/12 | Tu 4/26 | Hartman | Spreadsheets | Enhanced tools (per Newman) |
| Dumitras, Tudor | tdumitra [at] ece.cmu.edu | SCS/CS | (audit) | (audit) | | | | |
| Malayeri, Donna | donna [at] cs.cmu.edu | SCS/CS | (audit) | (audit) | | | | |
| Poladian, Vahe | vvp+ [at] andrew.cmu.edu | SCS/SE | (audit) | (audit) | | | | |
| Ray, Justin | justinr2 [at] ece.cmu.edu | ECE | (audit) | (audit) | | | | |
Physics and biology have well-refined public explanations of their research processes. Physics has been so successful that the paradigm of "form a hypothesis about a phenomenon - design an experiment - collect data - compare the data to the hypothesis - get someone else to repeat the experiment" is sometimes assumed to be the only scientific paradigm, for all sciences and even engineering. This paradigm doesn't work well for complex systems with uncontrolled variation, so biology and related sciences (e.g., psychology, medicine) present another template that is sometimes taken to be definitive: "form a hypothesis about a distinction - select matched experimental and control groups that are comparable except for the distinction - collect data - commit statistics on the data." Medicine has refined the explanation of double-blind experiments so well that the public at large recognizes at least that a medical experiment involves some people being treated and others not; even if they don't understand the underlying statistics, they can grapple with the ethical questions that arise on both sides. Even though these public explanations simplify away much of the important detail of experimental design, they set the tone for their fields.
Software engineering research suffers from the lack of such refined explanations of how we "create knowledge," or contribute to advancing the state of software development practice. Within the field, this impedes the design of research projects. Outside the field, it leads to characterizations such as "software engineering research is awfully soft and mushy" that seem to arise from comparing software engineering research to the simplistic model for physics. This course addresses the problem by studying the body of software engineering research not only for its specific content but also to determine the research strategies that lead us to believe the results.
In software engineering, as in other areas, a good research result requires a problem worth solving, a solution to that problem that contributes interesting and useful new knowledge, and an analysis that shows that the solution actually solves the problem. Individual results accumulate over time, refuting or reinforcing each other to give more significant results in which we have more confidence.
This course will examine principal research results of software engineering with attention to problem selection, research method, and validation of results. We will pay particular attention to the way results in an area mature. Through examples, we will see how the research paradigm and validation method are chosen to match the problem.
Students will analyze current and classical literature for both the content of the work and the research strategy used. Students will complete two projects. One project will examine a software engineering area in depth to show how it has matured and which research strategies have contributed to this maturation. The second project will entail developing research and validation strategies for a specific software engineering research project.
This course aims to develop an appreciation of the issues that arise in doing research that contributes to improving practical large-scale software construction, together with some skill in addressing these issues. By the end of the course a student should demonstrate proficiency in three areas:
General research skills
Software engineering research methods
Specific software engineering results
This course is a major revision and upgrade of a course with the same title offered in Spring 2000 and Fall 2001. The web pages for the previous versions of the course are still online (2000), (2001), though some of the students have deleted the pages that once contained their contributions.
An improved formulation of the central ideas was the centerpiece of a keynote talk at ICSE 2001 in Toronto, followed by a minitutorial at ICSE 2003 in Portland.
This page is part of the site for course 17-939A, What Makes Good Research in Software Engineering?, taught by Mary Shaw in the Computer Science Department and Institute for Software Research, International in the School of Computer Science at Carnegie Mellon University. All material copyright © 1999, 2000, 2001, 2005 by Mary Shaw. Comments to mary.shaw [at] cs.cmu.edu. Last updated 04/18/05.