ASysT: Automatic System Testing

People

Leaders

Pezzè M.

(Responsible)

Collaborators

Martìnez M. S.

(Collaborator)

Terragni V.

(Collaborator)

Zuddas D.

(Collaborator)

Abstract

In this project we will develop a new approach to automatically test the many applications that we use every day through the Web and our mobile devices. In detail, we will develop a method to automatically generate system test cases, targeting the problem of testing the common class of software systems that elaborate complex input data characterised by different kinds of mutually dependent values, and we will define approaches and techniques to automatically generate complex sequences of mutually dependent and realistic data that extensively sample the behavior of the system under test.

System testing is inherently different from unit and integration testing due to the nature, dimension and articulation of the execution space. The nature of the execution space is different because system testing targets the behavior of the system as a whole, for example sorting a list of contacts, while unit and integration testing targets the behavior of units and their interactions independently of the overall execution context, for example sorting strings. Thus, test case generation techniques that work at the system level must generate semantically relevant input data, such as realistic contacts rather than arbitrary strings. The dimension of the execution space of a system may include a wide variety of different data, while the execution space of single units and subsystems focuses on the much smaller subset of data relevant for the unit or the subsystem. Systematically exploring the execution space at the system level thus becomes prohibitive, and defining effective heuristics for sampling the input space is extremely important to automate system testing. The articulation of the execution space at the system level also differs from the unit and integration levels: while units and subsystems can be tested in isolation, system testing requires exercising the system while it interacts with its environment.
Thus, system test cases must represent both meaningful sequences of operations and input data that represent appropriate interactions with the environment, that is, interaction sequences that lead to successful executions and data that refer to realistic and semantically coherent inputs. So far, research on automatic test case generation has focused on unit and integration testing, paying less attention to system testing. The few approaches to system testing are based on models, learning techniques, search-based algorithms and reuse. These strategies do not tackle the problem of generating legal operation sequences and realistic, coherent test inputs, and can hardly cope with applications that extensively exploit the semantics of the input data.

In this project we will define a new approach to automatically generate semantically coherent and effective system test suites for applications that require complex sequences of mutually dependent data. The research will be grounded in our recent work and will:

(i) investigate the possibility of using both knowledge from a core set of correct executions and semantic information from system specifications to extend the ability of machine learning techniques, and in particular Q-learning, to deal with complex sequences of operations;
(ii) study the integration of different knowledge bases, starting with the Web of data, to address specific application domains;
(iii) define techniques that support an efficient interplay between the different approaches by exploring complementarities and similarities;
(iv) define techniques to deal with semantic correlations between data used in different operations;
(v) define approaches to efficiently deal with negative cases;
(vi) complement the approaches with techniques to automatically generate partial oracles; and
(vii) experimentally evaluate the approach with suitable prototypes referring to applications from different domains and based on different technologies.

Additional information

Start date
01.10.2015
End date
01.07.2018
Duration
33 Months
Funding sources
Status
Ended