HDR thesis, 2018

Contributions à la calibration d'algorithmes d'apprentissage : Validation-croisée et détection de ruptures

Abstract

The present manuscript mainly focuses on cross-validation procedures (in particular leave-p-out (LpO)), describing their practical aspects as well as new strategies leading to non-asymptotic theoretical guarantees on their statistical performance (concentration inequalities, oracle inequalities). As a privileged application, cross-validation is also used to address the multiple change-point detection problem in the off-line context. This problem is then tackled in a more general framework by means of reproducing kernels and the model selection paradigm.

After the cross-validation procedures are introduced in Chapter 1, strategies allowing the efficient computation of cross-validation estimators are detailed in Chapter 2. In particular, several of them yield closed-form expressions for the LpO estimator, which considerably reduces the computational cost. Such closed-form expressions have already been derived in density estimation with projection and kernel estimators, and with k-nearest-neighbors estimators in the regression and binary classification contexts.

Chapter 3 discusses the statistical properties of cross-validation estimators (used as risk estimators) in terms of bias, variance, and mean squared error. For instance, among cross-validation estimators, the LpO estimator is shown to enjoy the lowest variance for a given test set cardinality. The leave-one-out (L1O) estimator is also proved to be asymptotically optimal in terms of mean squared error in density estimation with projection estimators.

Several approaches leading to concentration inequalities for the LpO estimator around its expectation are discussed in Chapter 4. A direct approach relying on the combination of closed-form expressions with the classical concentration inequalities of Bernstein and Talagrand is first presented in the density estimation context. A more general approach is then described, which exploits the link between the LpO estimator and U-statistics. Its main underlying idea is to deduce exponential concentration results for the LpO estimator from moment inequalities. The derivation of the preliminary results also involves the stability of the learning algorithm at hand.

The important question of model/statistical algorithm selection is addressed in Chapter 5 in the particular case of density estimation. The optimality of the LpO-based model selection procedure is proved under some conditions both for estimation, by means of a non-asymptotic oracle inequality, and for identification, through a model consistency result. Cross-validation is then used to tackle the multiple change-point detection problem in the off-line setting, where the variance is allowed to vary over time (heteroscedastic setting).

Chapter 6 summarizes the conclusions drawn from theoretical as well as empirical results about the behavior of cross-validation procedures. In particular, these conclusions lead us to suggest new model selection procedures relying on cross-validation. At the price of a higher computational cost, these procedures automatically take into account changes arising in the variance, for instance, which improves the statistical performance. The more general question of detecting changes arising in the full distribution of the observations (and not only in the mean) is also addressed by means of reproducing kernels. A new model selection procedure is designed, based on a penalty derived in the reproducing kernel Hilbert space framework. Its non-asymptotic performance is quantified through an oracle inequality holding with high probability. Numerous aspects of the new procedure are also assessed in an empirical study; for instance, the results illustrate that the chosen kernel clearly influences the final performance.

Finally, the manuscript ends with Chapter 7, which highlights several challenging perspectives that could give rise to important improvements on both the practical and theoretical sides.
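To illustrate the computational issue motivating the closed-form expressions discussed in Chapter 2, the following minimal sketch (not taken from the manuscript) computes the LpO estimator by exhaustive enumeration for a histogram density estimator; the quadratic contrast, the function names, and the toy data are illustrative assumptions only.

# Minimal sketch (illustrative only, not the manuscript's closed-form formulas):
# brute-force leave-p-out (LpO) risk estimation for a histogram density estimator.
from itertools import combinations

import numpy as np


def histogram_density(train, bins, support=(0.0, 1.0)):
    # Histogram density estimate on a fixed support.
    heights, edges = np.histogram(train, bins=bins, range=support, density=True)
    return heights, edges


def quadratic_contrast(heights, edges, test):
    # Empirical L2 contrast: ||f_hat||^2 - (2 / |test|) * sum_i f_hat(x_i).
    widths = np.diff(edges)
    norm_sq = np.sum(heights ** 2 * widths)
    idx = np.clip(np.searchsorted(edges, test, side="right") - 1, 0, len(heights) - 1)
    return norm_sq - 2.0 * np.mean(heights[idx])


def lpo_risk(sample, p, bins=8):
    # LpO estimator by enumerating all C(n, p) test sets of size p:
    # train on the remaining n - p points, evaluate the contrast on the p held-out ones.
    n = len(sample)
    contrasts = []
    for test_idx in combinations(range(n), p):
        mask = np.zeros(n, dtype=bool)
        mask[list(test_idx)] = True
        heights, edges = histogram_density(sample[~mask], bins)
        contrasts.append(quadratic_contrast(heights, edges, sample[mask]))
    return float(np.mean(contrasts))


if __name__ == "__main__":
    gen = np.random.default_rng(0)
    x = gen.beta(2.0, 5.0, size=12)  # tiny n so that C(12, 2) = 66 splits stay feasible
    print(lpo_risk(x, p=2))

Even for moderate sample sizes, the number of splits C(n, p) makes such enumeration intractable; this combinatorial cost is precisely what the closed-form expressions derived in the manuscript avoid.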
Main file

HDR_manuscript.pdf (1.38 MB)
Origin: Files produced by the author(s)

Dates and versions

tel-02050179, version 1 (26-02-2019)

Identifiers

  • HAL Id: tel-02050179, version 1

Cite

Alain Celisse. Contributions à la calibration d'algorithmes d'apprentissage : Validation-croisée et détection de ruptures. Statistics [math.ST]. Université de Lille, 2018. ⟨tel-02050179⟩