The main focus was on building strong XGBoost and neural-network solutions for the regression and classification tasks. SVM and kNN approaches were also attempted, with varying success. KMeans and Birch were used for the clustering task and compared on their performance. Hedged code sketches of the main setups follow the list (Sketches A-F).

1. Classification_LaurentLindpointner_XGBoost.txt
XGBoost XGBClassifier. Hyperparameters tuned with Bayesian search, with the number of estimators chosen by cross-validation: max_depth=6, n_estimators=2446, eta=0.01. Good model. Feature selection guided by SHAP values (see Sketch A below).

2. Classification_LaurentLindpointner_NeuralNet.txt
TensorFlow "bottleneck" model. Hyperparameters tuned with random search (200 trials, KerasTuner). Six Dense hidden layers with [512, 256, 128, 64, 32, 16] neurons, ReLU activation, plus two Dropout layers. One-neuron output layer with sigmoid activation. Dropout rate=0.1588, learning_rate=0.00027. Binary cross-entropy loss. Used SHAP for feature selection (see Sketches B and C below).

3. Classification_LaurentLindpointner_kNN-DidNotReallyWorkWell.txt
scikit-learn KNeighborsClassifier. Hyperparameters tuned with grid search (165 configurations): leaf_size=5, p=3, n_neighbors=10. Subpar model. Feature selection done with LassoCV (scikit-learn); see Sketch D below.

4. Regresion_LaurentLindpointner_XGBoost.txt
XGBoost XGBRegressor. Hyperparameters tuned with Bayesian search, with the number of estimators chosen by cross-validation: max_depth=4, n_estimators=1139, eta=0.01. Good model. Used SHAP for feature selection.

5. Regresion_LaurentLindpointner_NeuralNet.txt
TensorFlow. Hyperparameters tuned with random search (150 trials, KerasTuner). Ten Dense hidden layers with 240 neurons each, ReLU activation. One-neuron output layer. learning_rate=0.00066. Log-cosh loss. Used SHAP for feature selection.

6. Regression_LaurentLindpointner_LinearSVM.txt
scikit-learn linear SVM (LinearSVR). Squared epsilon-insensitive loss. Features chosen according to the largest learned feature weights (see Sketch E below).

7. Clustering_LaurentLindpointner_KMeans.txt
scikit-learn KMeans clustering. Tested 5000 random subsets of 25 features each, for 3-7 clusters; best result with 4 clusters. The best feature/n_clusters combination was chosen by a combined score of the Davies-Bouldin and Silhouette indices (1 - DB + S, both from scikit-learn); see Sketch F below.

8. Clustering_LaurentLindpointner_Birch.txt
scikit-learn Birch clustering. Same procedure as for KMeans: 5000 random subsets of 25 features each, for 3-7 clusters; best result with 4 clusters, selected by the same 1 - DB + S score.
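
Sketch A - XGBoost classifier with SHAP feature ranking (items 1 and 4). A minimal sketch using the reported hyperparameters; the synthetic data, the validation split, and the mean-|SHAP| ranking step are illustrative assumptions, not the project's actual pipeline. The regressor of item 4 is analogous with XGBRegressor, max_depth=4 and n_estimators=1139.

import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Placeholder data; the real runs used the project's feature set.
X, y = make_classification(n_samples=2000, n_features=25, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Reported hyperparameters: max_depth=6, n_estimators=2446, eta=0.01.
model = xgb.XGBClassifier(max_depth=6, n_estimators=2446, learning_rate=0.01,
                          eval_metric="logloss")
model.fit(X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)

# SHAP values rank features; low-impact features can be dropped and the model refit.
shap_values = shap.TreeExplainer(model).shap_values(X_val)
ranking = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]
print("features ranked by mean |SHAP|:", ranking)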
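
Sketch B - the "bottleneck" classifier (item 2). Layer widths, dropout rate, learning rate, and loss are as reported; the exact placement of the two Dropout layers and the choice of Adam as optimizer are assumptions.

import tensorflow as tf
from tensorflow.keras import layers

n_features = 25  # assumption: set to the actual input width

model = tf.keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(512, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.1588),   # tuned dropout rate; placement is an assumption
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.1588),   # second of the two dropout layers
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary output
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.00027),
              loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

The regression net of item 5 follows the same pattern with ten Dense(240) hidden layers, a linear one-neuron output, loss="log_cosh", and learning_rate=0.00066.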
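
Sketch C - KerasTuner random search (items 2 and 5). The 200-trial random search is from item 2; the particular search space shown here (which hyperparameters were tuned and over which ranges) is an assumption.

import keras_tuner as kt
import tensorflow as tf
from tensorflow.keras import layers

def build_model(hp):
    model = tf.keras.Sequential([layers.Input(shape=(25,))])
    for units in [512, 256, 128, 64, 32, 16]:
        model.add(layers.Dense(units, activation="relu"))
    model.add(layers.Dropout(hp.Float("dropout", 0.0, 0.5)))
    model.add(layers.Dense(1, activation="sigmoid"))
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            hp.Float("learning_rate", 1e-4, 1e-2, sampling="log")),
        loss="binary_crossentropy")
    return model

tuner = kt.RandomSearch(build_model, objective="val_loss",
                        max_trials=200, directory="tuning")
# tuner.search(X_train, y_train, validation_data=(X_val, y_val), epochs=30)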
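
Sketch D - kNN with LassoCV feature selection (item 3). The grid shown multiplies out to the reported 165 configurations, but the exact grid values are an assumption; only the best configuration (leaf_size=5, p=3, n_neighbors=10) is from the source.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=25, random_state=0)

# LassoCV-based feature selection: features with (near-)zero coefficients are dropped.
selector = SelectFromModel(LassoCV(cv=5, random_state=0)).fit(X, y)
X_sel = selector.transform(X)

# 11 x 5 x 3 = 165 configurations; the grid values themselves are an assumption.
param_grid = {
    "n_neighbors": [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 25],
    "leaf_size": [5, 10, 20, 30, 40],
    "p": [1, 2, 3],
}
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5).fit(X_sel, y)
print(search.best_params_)  # reported best: leaf_size=5, p=3, n_neighbors=10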
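
Sketch E - linear SVM regressor (item 6). A minimal sketch of LinearSVR with the squared epsilon-insensitive loss and weight-based feature selection; the synthetic data and the scaling step are assumptions.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVR

X, y = make_regression(n_samples=1000, n_features=25, noise=0.1, random_state=0)
X = StandardScaler().fit_transform(X)  # linear SVMs are scale-sensitive

svr = LinearSVR(loss="squared_epsilon_insensitive", max_iter=10000).fit(X, y)

# Feature choice according to the largest absolute learned weights.
ranking = np.argsort(np.abs(svr.coef_))[::-1]
print("features ranked by |weight|:", ranking[:10])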
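
Sketch F - clustering search with the combined 1 - DB + S score (items 7 and 8). A scaled-down sketch of the random feature-subset search; the synthetic data and the trial count of 100 (5000 in the real runs) are assumptions.

import numpy as np
from sklearn.cluster import Birch, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score, silhouette_score

X, _ = make_blobs(n_samples=1000, n_features=40, centers=4, random_state=0)
rng = np.random.default_rng(0)

best_score, best_feats, best_k = -np.inf, None, None
for trial in range(100):  # the real runs tested 5000 random feature sets
    feats = rng.choice(X.shape[1], size=25, replace=False)
    Xs = X[:, feats]
    for k in range(3, 8):  # 3-7 clusters
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs)
        # Combined score: lower Davies-Bouldin and higher Silhouette are both better.
        score = 1 - davies_bouldin_score(Xs, labels) + silhouette_score(Xs, labels)
        if score > best_score:
            best_score, best_feats, best_k = score, feats, k

print(f"best score {best_score:.3f} at n_clusters={best_k}")

For the Birch variant of item 8, swap KMeans(n_clusters=k, ...) for Birch(n_clusters=k); the scoring and selection logic stay the same.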