Advanced Algorithm Selection with Machine Learning: Handling Large Algorithm Sets, Learning From Censored Data and Simplifying Meta Level Decisions. Alexander Tornede. Reviewers: Prof. Dr. Eyke Hüllermeier, Prof. Dr. Axel-Cyrille Ngonga Ngomo, Prof. Dr. Marius Lindauer. Paderborn, 2023
Contents
- Title Page
- Abstract
- Acknowledgement
- Contents
- 1 Introduction
- 1.1 Content, Contributions and Preceding Publications
- 1.2 Potential Impact of This Thesis
- 1.3 Scope of This Thesis
- 1.4 How to Read This Thesis
- 1.5 Co-Author Contribution Statement
- 1.6 Additional Publications
- 2 Background
- 2.1 Algorithm Selection Problem Variants
- 2.1.1 The Instance-Specific Offline Algorithm Selection Problem
- 2.1.2 The Online Instance-Specific Algorithm Selection Problem
- 2.1.3 The Offline Instance-Specific Meta Algorithm Selection Problem
- 2.1.4 Connection Between Problem Variants
- 2.2 Distinction From Related Problems
- 2.2.1 Algorithm Scheduling
- 2.2.2 Meta Learning
- 2.2.3 Algorithm Configuration
- 2.2.4 Hyperparameter Optimization
- 2.2.5 Automated Machine Learning
- 2.3 Common Algorithm Selection Solutions
- 2.4 Algorithm Selection Loss Functions for Common Algorithmic Problem Classes
- 2.4.1 Algorithm Selection Loss Functions for Constraint Satisfaction Problems
- 2.4.2 Algorithm Selection Loss Functions for Optimization Problems
- 2.5 Instance Features
- 2.5.1 Requirements for Instance Features
- 2.5.2 Different Kinds of Instance Features
- 2.5.3 Feature Preprocessing
- 2.6 ASlib: The Algorithm Selection Library
- 3 Extreme Algorithm Selection: Generalizing Across Algorithms
- 3.1 From Standard to Extreme Algorithm Selection
- 3.2 Standard Algorithm Selection Solutions in the Context of XAS
- 3.2.1 Ranking and Regression Solutions
- 3.2.2 Classification Solutions
- 3.2.3 Collaborative Filtering Solutions
- 3.2.4 Clustering Solutions
- 3.3 Exploiting a Dyadic Feature Representation
- 3.4 Experimental Evaluation: A Case Study
- 3.4.1 Benchmark Scenario
- 3.4.2 Baselines
- 3.4.3 Performance Metrics
- 3.4.4 Experimental Setup
- 3.4.5 Results
- 3.5 Related Work
- 3.6 Conclusion and Future Work
- 4 Offline Algorithm Selection Under Censored Feedback
- 4.1 The Problem of Censored Training Data
- 4.2 Survival Analysis and Random Survival Forests
- 4.3 Survival Analysis for Algorithm Selection
- 4.4 Experimental Evaluation
- 4.5 Related Work
- 4.6 Conclusion and Future Work
- 5 Online Algorithm Selection Under Censored Feedback
- 5.1 The OAS Problem From a Bandit Perspective
- 5.2 Modeling Runtimes
- 5.3 Stochastic Linear Bandits Approaches
- 5.3.1 Imputation-Based Upper Confidence Bounds
- 5.3.2 Randomization of Upper Confidence Bounds
- 5.3.3 Bayesian Approach: Thompson Sampling
- 5.4 Expected PAR10 Loss Minimization
- 5.5 Evaluation
- 5.6 Related Work
- 5.7 Conclusion and Future Work
- 6 Algorithm Selection on a Meta Level
- 6.1 Considering Algorithm Selection on a Meta Level
- 6.2 Selecting Single Algorithm Selectors Through Meta Learning
- 6.3 Constructing Ensembles of Algorithm Selectors
- 6.3.1 Aggregation Strategies
- 6.3.2 Voting
- 6.3.3 Bagging
- 6.3.4 Boosting
- 6.3.5 Stacking
- 6.3.6 Comparison of the Approaches
- 6.4 Experimental Evaluation
- 6.4.1 Experiment Setup
- 6.4.2 Meta Learning for Selecting an Algorithm Selector
- 6.4.3 Voting Ensembles
- 6.4.4 Bagging Ensembles
- 6.4.5 Boosting Ensembles
- 6.4.6 Stacking
- 6.4.7 Overall Comparison
- 6.4.8 Discussion of Results
- 6.5 Related Work
- 6.6 Conclusion and Future Work
- 7 Conclusion and Future Work
- 7.1 Conclusion
- 7.2 Future Directions for Algorithm Selection
- 7.3 Thesis Contribution and Impact in a Nutshell
- A Appendix
- A.1 Details on the Experimental Evaluation of Chapter 3
- A.2 Details on the Experimental Evaluation of Chapter 4
- A.3 Details on the Experimental Evaluation of Chapter 5
- A.3.1 Hardware
- A.3.2 Software
- A.3.3 Hyperparameter Settings
- A.3.4 Caveat
- A.3.5 Detailed Performance Data
- A.4 Theoretical Additions to Chapter 5
- A.4.1 Deriving the Bias-Corrected Confidence Bounds
- A.4.2 Deriving the Refined Expected Loss Representation
- A.4.3 Pseudocode and Space-Complexity Details
- A.5 Details on the Experimental Evaluation of Chapter 6
- Full List of My Publications
- Bibliography
- List of Symbols
- List of Abbreviations
- List of Figures
- List of Tables
- Colophon
