1. C programming syntax errors can be really frustrating and demoralizing. Can we do something useful here? Maybe a machine learning approach could help.
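As a concrete (and deliberately naive) illustration of the idea, the sketch below trains a token-level bigram model on presumably correct C snippets and flags the least-likely token pairs in a new snippet. The tokenizer, tiny training corpus, and scoring are all simplifying assumptions, not a real tool:

    # Sketch: flag the least-likely token bigrams in a C snippet using a bigram
    # model trained on (assumed) syntactically correct code. A real tool would
    # use a proper C lexer and a stronger model; this only illustrates the idea.
    import re
    from collections import Counter

    def tokenize(code):
        # Naive tokenizer: identifiers/numbers and single punctuation characters.
        return re.findall(r"[A-Za-z_]\w*|\d+|\S", code)

    def train_bigrams(corpus):
        unigrams, bigrams = Counter(), Counter()
        for snippet in corpus:
            toks = tokenize(snippet)
            unigrams.update(toks)
            bigrams.update(zip(toks, toks[1:]))
        return unigrams, bigrams

    def suspicious_bigrams(code, unigrams, bigrams, k=3):
        toks = tokenize(code)
        scored = []
        for a, b in zip(toks, toks[1:]):
            # Add-one smoothed conditional probability P(b | a).
            p = (bigrams[(a, b)] + 1) / (unigrams[a] + len(unigrams) + 1)
            scored.append((p, a, b))
        return sorted(scored)[:k]   # least likely pairs first

    if __name__ == "__main__":
        correct = ["int main ( void ) { return 0 ; }",
                   "for ( int i = 0 ; i < n ; i ++ ) { sum += a [ i ] ; }"]
        uni, bi = train_bigrams(correct)
        buggy = "int main ( void ) { return 0 }"   # missing ';'
        print(suspicious_bigrams(buggy, uni, bi))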
2. Reverse Engineering - Extracting models from code.
3. Automated or semi-automatic bug localization
4. Use of probabilistic models for semantic search and retrieval. For instance, semantic search tools would allow a developer to pose questions like the following (a small retrieval sketch follows this list):
- “I have written a new piece of code. Is someone else already implementing the same functionality?”
- “What are all the classes that implement a certain functionality?”
- “I have written this piece of code and want to update the documentation. Where in the documentation should I make the changes, or give pointers?”
- “I have modified this piece of code. Where else should I update so that the changes are consistent?”
- “Who else is working on the code that is semantically related to the code segment I am working on?”
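One way to prototype such queries is plain information-retrieval matching over identifier and comment terms. The sketch below is a minimal illustration, assuming scikit-learn is available and that splitting identifiers into words gives enough signal; the code fragments and the query are made up. It indexes fragments as TF-IDF vectors and ranks them against a natural-language question:

    # Sketch: IR-style "semantic" code search. Index code fragments as bags of
    # identifier words, then rank them against a natural-language query.
    import re
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def to_words(code):
        # Split camelCase / snake_case identifiers into lower-case words.
        split = []
        for w in re.findall(r"[A-Za-z]+", code):
            split += re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", w)
        return " ".join(s.lower() for s in split)

    corpus = {
        "parseConfigFile": "def parseConfigFile(path): ...",
        "loadUserSettings": "def loadUserSettings(user_id): ...",
        "renderHtmlReport": "def renderHtmlReport(data): ...",
    }
    names = list(corpus)
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(to_words(src) for src in corpus.values())

    def search(query, top_k=2):
        qvec = vectorizer.transform([to_words(query)])
        scores = cosine_similarity(qvec, matrix).ravel()
        return sorted(zip(scores, names), reverse=True)[:top_k]

    print(search("read settings for a user"))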
What is the best way to model a software library so that we can easily assess the testability and the maintainability of the library through static analysis of the code?
- One idea: probability theory may be useful in modeling a software library with regard to its testability and maintainability in the large state space spanned by its variables. Random input testing, as used today, has worked well for testing hardware systems, where the inputs and outputs generally form small state spaces. Maybe the hope for future testing needs...
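A minimal sketch of random input testing against a stated property is below; the function under test (a clamp) and its invariant are illustrative assumptions, not part of the proposal:

    # Sketch: random input testing. Generate random inputs and check a stated
    # property of the output; report a counterexample if one is found.
    import random

    def clamp(x, lo, hi):
        # Function under test (assumed): keep x within [lo, hi].
        return max(lo, min(x, hi))

    def property_holds(x, lo, hi):
        y = clamp(x, lo, hi)
        return lo <= y <= hi

    def random_test(trials=10_000, seed=0):
        rng = random.Random(seed)
        for _ in range(trials):
            lo = rng.uniform(-1e6, 1e6)
            hi = lo + abs(rng.uniform(0, 1e6))
            x = rng.uniform(-2e6, 2e6)
            if not property_holds(x, lo, hi):
                return (x, lo, hi)          # counterexample found
        return None

    print("counterexample:", random_test())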
5. As the size of software continuously increases, there is a need for continued research in modularization metrics to characterize the usefulness of large software through its APIs. The larger the number of modules and the more complex their functionality, the greater the need for high-quality APIs; this speaks to the need for continued research in API metrics.
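As a crude illustration of one such metric, the sketch below counts the public top-level definitions of each Python module under a directory as a proxy for interface size; the directory layout and the "public = no leading underscore" convention are assumptions made only for this example:

    # Sketch: a crude API-size metric. For each Python module under a directory,
    # count public (non-underscore) top-level functions and classes.
    import ast, pathlib

    def api_size(source):
        tree = ast.parse(source)
        public = [n.name for n in tree.body
                  if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
                  and not n.name.startswith("_")]
        return len(public), public

    def report(root="."):
        for path in pathlib.Path(root).rglob("*.py"):
            try:
                size, _ = api_size(path.read_text(encoding="utf-8"))
            except SyntaxError:
                continue
            print(f"{path}: {size} public definitions")

    if __name__ == "__main__":
        report()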
6. Traceability in Software Engineering:
In today's competitive market, the pressure to increase productivity results in the tailoring of software processes, especially development and evolution processes, to eliminate unproductive activities. Establishing and maintaining traceability links and consistency between software artifacts produced or modified in the software life-cycle are costly and tedious activities that are crucial but frequently neglected in practice.
Traceability between the free-text documentation associated with the development and maintenance cycle of a software system and its source code is crucial in a number of tasks, such as program comprehension, software maintenance, and software verification & validation. In addition, maintaining traceability links between subsequent releases of a software system is important for evaluating relative source code deltas, highlighting effort/code variation inconsistencies, and assessing the change history.
Software traceability is crucial for handling change management effectively and efficiently. Because software development is a long-drawn process, traceability is required at a number of points, and therefore different types of traceability tables, as listed below, are generated and maintained. However, traceability has its own cost, so a key question is how traceability is handled during software development and maintenance (effectiveness and efficiency).
Types of Traceability Tables:
http://www.traceabilitycenter.org/downloads/documents/GrandChallenges
http://www.cse.msstate.edu/~tefse13/
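Much of the traceability-recovery literature proposes candidate links between documentation and code using information-retrieval techniques. A minimal sketch of that idea, with made-up documents, made-up code files, and an arbitrary similarity threshold, scores document/file pairs by word overlap (Jaccard):

    # Sketch: IR-based traceability link recovery. Score candidate links between
    # documentation sections and source files by word overlap (Jaccard similarity).
    import re

    def words(text):
        return set(w.lower() for w in re.findall(r"[A-Za-z]{3,}", text))

    docs = {
        "user-guide/login.md": "Logging in requires a valid session token ...",
        "user-guide/report.md": "Monthly reports are exported as PDF ...",
    }
    code = {
        "auth/session.c": "int validate_session_token(const char *token) { ... }",
        "report/export.c": "void export_report_pdf(const Report *r) { ... }",
    }

    def candidate_links(threshold=0.1):
        links = []
        for d, dtext in docs.items():
            for c, ctext in code.items():
                dw, cw = words(dtext), words(ctext)
                jaccard = len(dw & cw) / len(dw | cw) if dw | cw else 0.0
                if jaccard >= threshold:
                    links.append((round(jaccard, 2), d, c))
        return sorted(links, reverse=True)

    print(candidate_links())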
7. Runtime Verification
RV is concerned with monitoring and analysis of software or hardware system executions. The field is often referred to under different names, such as runtime verification, runtime monitoring, runtime checking, runtime reflection, runtime analysis, dynamic analysis, runtime symbolic analysis, trace analysis, log file analysis, etc. RV can be used for many purposes, such as security or safety policy monitoring, debugging, testing, verification, validation, profiling, fault protection, behavior modification (e.g., recovery), etc. A running system can be abstractly regarded as a generator of execution traces, i.e., sequences of relevant states or events. Traces can be processed in various ways, e.g., checked against formal specifications, analyzed with special algorithms, visualized, etc.
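As a small illustration of the monitoring idea, the sketch below checks an event trace against a simple safety property (no release without a matching acquire, nothing left acquired at the end of the trace); the event names and the trace are invented for the example:

    # Sketch: a runtime-verification monitor. It consumes an event trace and
    # checks a simple safety property over acquire/release events.
    class LockMonitor:
        def __init__(self):
            self.held = set()
            self.violations = []

        def on_event(self, event, resource):
            if event == "acquire":
                if resource in self.held:
                    self.violations.append(f"double acquire of {resource}")
                self.held.add(resource)
            elif event == "release":
                if resource not in self.held:
                    self.violations.append(f"release of unheld {resource}")
                self.held.discard(resource)

        def on_end(self):
            for r in self.held:
                self.violations.append(f"{r} never released")
            return self.violations

    trace = [("acquire", "db"), ("acquire", "file"), ("release", "db"),
             ("release", "db")]            # second release of "db" is a violation
    monitor = LockMonitor()
    for event, resource in trace:
        monitor.on_event(event, resource)
    print(monitor.on_end())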
8. Software Visualization
Here, we may focus on visualization techniques that target aspects of software maintenance and evolution, program comprehension, reverse engineering, and re-engineering, i.e., how visualization helps programmers to understand, analyze, and evolve software. We aim to gather tool developers, users, and researchers from software engineering, information visualization, and human-computer interaction to discuss theoretical foundations, algorithms, techniques, tools, and applications related to software visualization.
Topics of interest include, but are not limited to:
9. Predictive modeling in SE
Models that are useful for the implementation, evaluation, and management of software processes and projects. Such models relate to software development efforts, both in general and for specific domains such as telecom, finance, and scientific applications.
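A minimal sketch of such a model fits a COCOMO-style power law, effort = a * KLOC^b, to historical project data; the data points below are made up, and real models would use many more features (team size, domain, process maturity, ...):

    # Sketch: predictive effort model. Fit effort = a * KLOC**b by least squares
    # on the logs, then predict effort for a new project size.
    import numpy as np

    kloc   = np.array([ 2.0,  5.0, 10.0, 23.0, 46.0, 90.0])   # size in KLOC
    effort = np.array([ 5.0, 14.0, 31.0, 80.0, 170., 350.])   # person-months

    b, log_a = np.polyfit(np.log(kloc), np.log(effort), 1)
    a = np.exp(log_a)

    def predict(size_kloc):
        return a * size_kloc ** b

    print(f"effort ≈ {a:.2f} * KLOC^{b:.2f}")
    print("predicted effort for 60 KLOC:", round(predict(60.0), 1), "person-months")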
TOPICS:
10. Software Debugging and Fault Localization
Software today is large and complex, in fact more so than ever before. Consequently, debugging when failure is observed is also becoming much more difficult and time-consuming. Manual debugging is quickly losing its viability as a practical option, and yet at the same time, various alternative approaches may still be too immature to use practically. Techniques that aim for automatic fault localization are not accurate and consistent enough to pinpoint the locations of faults to a desired degree. Distinguishing executions that fail due to different causative faults, reliably recording and replaying failed executions and fixing bugs without introducing new faults are but some of the debugging-related problems faced by developers today.
Recent efforts, such as recommender-system approaches that mine different types of software repositories and suggest various debugging actions or program fixes, have yet to prove consistently effective. Formal verification techniques generally suffer from complexity and scalability issues, static techniques can often be imprecise, and the heavy performance overhead of dynamic techniques can prohibit their application. Even when a software development project applies such techniques, they may still require developers to make the final selection, diagnosis, and fixing decisions. Social aspects of software development projects that aid debugging, such as selecting the right developers to perform the right debugging tasks at the right time, have not been adequately explored. Last but not least, while studies are being conducted to reveal, clarify, or resolve some of these issues, researchers often conduct them in restrictive environments that may inherently make assumptions that do not hold in industry. All these concerns can leave practitioners with little faith in the results that debugging research can offer and deliver.
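For reference, one widely studied automatic technique is spectrum-based fault localization. The sketch below ranks statements by the Ochiai suspiciousness score computed from per-test coverage and pass/fail outcomes; the coverage matrix and verdicts are made-up illustration data:

    # Sketch: spectrum-based fault localization with the Ochiai score.
    from math import sqrt

    # coverage[test] = set of statement ids executed by that test (made-up data)
    coverage = {
        "t1": {1, 2, 3},
        "t2": {1, 3, 4},
        "t3": {1, 2, 4, 5},
    }
    failed = {"t2"}                      # t2 fails, the other tests pass

    def ochiai(stmt):
        # Ochiai = ef / sqrt(total_failed * (ef + ep)), where ef/ep count the
        # failed/passed tests that execute the statement.
        ef = sum(1 for t in failed if stmt in coverage[t])
        ep = sum(1 for t in coverage if t not in failed and stmt in coverage[t])
        denom = sqrt(len(failed) * (ef + ep))
        return ef / denom if denom else 0.0

    statements = sorted(set().union(*coverage.values()))
    for s in sorted(statements, key=ochiai, reverse=True):
        print(f"stmt {s}: suspiciousness {ochiai(s):.2f}")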
Some good topics in this area are:
11. Program Analysis using Software Tools
http://dl.acm.org/event.cfm?id=RE122
12. Mining Software Repositories
Software repositories such as source control systems, archived communications between project personnel, and defect tracking systems are used to help manage the progress of software projects. Software practitioners and researchers are recognizing the benefits of mining this information to support the maintenance of software systems, improve software design/reuse, and empirically validate novel ideas and techniques. Research is now proceeding to uncover the ways in which mining these repositories can help to understand software development and software evolution, to support predictions about software development, and to exploit this knowledge concretely in planning future development.
The goal of this research is to advance the science and practice of software engineering via the analysis of data stored in software repositories.
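A minimal example of mining a repository is sketched below: it counts how often each file is touched by commits whose message mentions "fix", a crude defect-proneness indicator. It assumes a git checkout in the current directory, and the "fix" keyword heuristic is only an illustration:

    # Sketch: mine the version-control history. Count, per file, the commits
    # whose subject line mentions "fix".
    import subprocess
    from collections import Counter

    def fix_commit_counts(repo="."):
        out = subprocess.run(
            ["git", "-C", repo, "log", "--name-only", "--pretty=format:@@%s"],
            capture_output=True, text=True, check=True).stdout
        counts = Counter()
        is_fix = False
        for line in out.splitlines():
            if line.startswith("@@"):                 # commit subject line
                is_fix = "fix" in line.lower()
            elif line and is_fix:                     # file path in a fix commit
                counts[line] += 1
        return counts

    if __name__ == "__main__":
        for path, n in fix_commit_counts().most_common(10):
            print(f"{n:4d}  {path}")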