Five solutions in total: a tree (LGBM) and a neural network for both classification and regression, plus k-means for clustering.

For classification and regression, features were chosen from the importance ranking obtained by running LGBM once. I used the same parameters for both problems, keeping the top 15 features for classification and the top 10 for regression. LGBM settings: 20 leaves, 1000 boosting rounds. It scored about 94.5% on the test set, so I figured that was good enough.

Neural network: 20,000 max iterations, 100 early-stopping rounds, hidden layers of 30 and 15.

Clustering: I tested the code with k=3 and k=5 and went with 3, since I had no idea how to judge what makes a cluster good or not, and figured the only thing I knew I could optimize for was speed. The clustering features were the same as in regression.
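The rank-then-retrain feature selection could look something like the sketch below. The dataset, split, and variable names are made up for illustration; only the hyperparameters (20 leaves, 1000 rounds, top 15 features) come from the description above.

```python
# Hypothetical sketch: fit LGBM once to rank features by importance,
# then retrain on only the top-ranked ones. Synthetic data for illustration.
import numpy as np
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# First pass: one fit just to get the ranked feature importances.
probe = lgb.LGBMClassifier(num_leaves=20, n_estimators=1000, verbose=-1)
probe.fit(X_tr, y_tr)
top15 = np.argsort(probe.feature_importances_)[::-1][:15]

# Second pass: retrain using only the 15 most important features.
model = lgb.LGBMClassifier(num_leaves=20, n_estimators=1000, verbose=-1)
model.fit(X_tr[:, top15], y_tr)
acc = model.score(X_te[:, top15], y_te)
```

For regression the same pattern would apply with `LGBMRegressor` and the top 10 features instead of 15.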
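The neural-network configuration (two hidden layers of 30 and 15, a 20,000-iteration cap, early stopping after 100 rounds without improvement) maps naturally onto scikit-learn's MLP, sketched below on synthetic data; only the hyperparameters come from the text.

```python
# Hypothetical sketch of the described NN setup using sklearn's MLPClassifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlp = MLPClassifier(
    hidden_layer_sizes=(30, 15),  # hidden layers of 30 and 15 units
    max_iter=20000,               # generous iteration cap
    early_stopping=True,          # hold out a validation fraction internally
    n_iter_no_change=100,         # the "early stopping rounds 100" above
    random_state=0,
)
mlp.fit(X_tr, y_tr)
acc = mlp.score(X_te, y_te)
```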
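On judging cluster quality: one standard answer is the silhouette score, which measures how much closer each point is to its own cluster than to the nearest other cluster (higher is better, range -1 to 1). A minimal sketch comparing k=3 and k=5 on synthetic data (the silhouette comparison is my suggestion, not part of the original approach):

```python
# Hypothetical sketch: KMeans with k=3 vs k=5, compared by silhouette score.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for the 10 regression features.
X, _ = make_blobs(n_samples=300, centers=3, n_features=10, random_state=0)

for k in (3, 5):
    km = KMeans(n_clusters=k, n_init=10, random_state=0)
    labels = km.fit_predict(X)
    print(f"k={k}: silhouette={silhouette_score(X, labels):.3f}")
```

This gives a quality-based reason to pick one k over another, rather than choosing on speed alone.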