An empirical evaluation of NASA-MDP data sets using a genetic defect-proneness prediction framework
Original article
Date: 2016-11-09
Authors:
Murillo Morera, Juan
Quesada López, Christian Ulises
Castro Herrera, Carlos
Jenkins Coronas, Marcelo
Abstract
In software engineering, software quality is an important research area. The automated generation of learning schemes plays an important role and represents an efficient way to detect defects in software projects, thus avoiding high costs and long delivery times. This study carries out an empirical evaluation to validate two versions, with different levels of noise, of the NASA-MDP data sets. The main objective of this paper is to determine the stability of our framework. In all, 864 learning schemes were studied (8 data preprocessors x 6 attribute selectors x 18 learning algorithms). According to statistical tests, our framework reported stable results across the analyzed versions: the results of the evaluation and prediction phases were similar, and the performance of both phases was stable between the data set versions. This means that the differences between the versions did not affect the performance of our framework.
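The space of 864 learning schemes described above can be sketched as a Cartesian product of the three component families. This is a minimal illustration of the counting only: the abstract reports just the sizes (8 x 6 x 18), so the component names below are hypothetical placeholders, not the actual preprocessors, selectors, or algorithms used in the study.

```python
from itertools import product

# Placeholder component names; the paper only specifies the counts.
preprocessors = [f"prep_{i}" for i in range(8)]     # 8 data preprocessors
selectors = [f"sel_{i}" for i in range(6)]          # 6 attribute selectors
learners = [f"algo_{i}" for i in range(18)]         # 18 learning algorithms

# A learning scheme is one (preprocessor, selector, algorithm) triple.
schemes = list(product(preprocessors, selectors, learners))

print(len(schemes))  # 8 * 6 * 18 = 864
```

Enumerating the grid this way makes the 864 figure explicit: every scheme evaluated by the framework corresponds to exactly one triple in the product.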