1. Classification_EdwinVargas_XGBClassifier.txt: XGBoost XGBClassifier, {'eta': 0.9793, 'gamma': 9.95, 'max_depth': int(15.84)}. Used Bayesian optimization (5.69 MB). Good result; the fastest run and the best accuracy among the Bayesian-optimized classifiers (see the hyperparameter-search sketch below the list). Same variable choice as for Random Forest. SHAP was too slow to finish, so I skipped it.

2. Classification_EdwinVargas_RandomForestClassifier.txt: SKlearn RandomForestClassifier, {'max_depth': int(12.81), 'max_features': int(5.00), 'min_samples_leaf': 1e-05, 'min_samples_split': 1e-05, 'n_estimators': int(28.65)}. Used Bayesian optimization (3.15 MB). Not as accurate as XGBoost, but faster. Ran a loop over the RFC's feature_importances_, dropping variables until 15 remained (pruning sketch below).

3. Classification_EdwinVargas_MLPClassifier.txt: SKlearn MLPClassifier, {'batch_size': 40, 'learning_rate_init': 0.000308, 'hidden_layer_sizes': (15, 1, 1)}. Used Bayesian optimization, which oddly settled on trailing hidden layers of a single neuron each (17.981 KB). Not as good as the other two classifiers. Same variable choice as for Random Forest. SHAP was too slow to finish. Scaled the input sets.

4. Regression_EdwinVargas_XGBRegressor.txt: XGBoost XGBRegressor, {'eta': 0.094595, 'gamma': 0.012027, 'max_depth': int(10.86)}. Used Bayesian optimization (2.35 MB). Satisfied with the result. Used SHAP to pick variables (SHAP sketch below). Tried both scaling and not scaling Energy; scaling improved the regression.

5. Regression_EdwinVargas_Keras.txt: Keras Sequential NN, {10, 8, 8, 1} layers, {'batch_size': 100, 'epochs': 20, 'validation_split': 0.2}. Manually optimized (15 KB). Satisfied with the result. Scaled both the inputs and Energy for training (model sketch below). Tried SHAP, but ended up reusing the XGBRegressor variables.

6. Regression_EdwinVargas_LGBMRegressor.txt: LightGBM LGBMRegressor, {'eta': 0.177124, 'gamma': 7.25777, 'max_depth': int(14.178)}. Used Bayesian optimization (275 KB). Satisfied with the result. Scaled the energies for training. Used SHAP to pick variables.

7. Clustering_EdwinVargas_GaussianMixture.txt: SKlearn GaussianMixture, {'n_components': 3}. Picked variables with SKlearn's permutation_importance (46 KB; see the permutation_importance and BIC sketches below). Not really satisfied with the result. The BIC minimum fell around 8 components, but with that many the electrons would not cluster, so I defaulted to 3 components.

8. Clustering_EdwinVargas_BayesianGaussianMixture.txt: SKlearn BayesianGaussianMixture, {'n_components': 3, 'weight_concentration_prior': 0.01}. Picked variables with SKlearn's permutation_importance (52 KB). Not really satisfied with the result. Tried varying weight_concentration_prior, but the electrons could not be isolated for n_components >= 4, so I defaulted to 3.

9. Clustering_EdwinVargas_KMeans.txt: SKlearn KMeans, {'n_clusters': 3}. Picked variables with SKlearn's permutation_importance (651 KB). Not satisfied with the result. Looked at the splitting among clusters; the electrons would not stay together for n_clusters >= 4, so I defaulted to 3.
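For reference, a minimal sketch of the Bayesian hyperparameter search used in items 1, 2, 4, and 6, written for the XGBClassifier case and assuming the bayesian-optimization package; the bounds and data here are placeholders, not the real setup. The optimizer only proposes continuous values, which is why integer parameters are recorded above as int(...) casts.

    # Minimal sketch: Bayesian optimization of XGBClassifier (placeholder data and bounds).
    from bayes_opt import BayesianOptimization
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=500, n_features=15, random_state=0)  # stand-in data

    def xgb_cv(eta, gamma, max_depth):
        # The optimizer hands us floats, so integer parameters get cast here.
        model = XGBClassifier(eta=eta, gamma=gamma, max_depth=int(max_depth))
        return cross_val_score(model, X, y, cv=3, scoring="accuracy").mean()

    pbounds = {"eta": (0.01, 1.0), "gamma": (0.0, 10.0), "max_depth": (3, 20)}
    opt = BayesianOptimization(f=xgb_cv, pbounds=pbounds, random_state=42)
    opt.maximize(init_points=5, n_iter=25)
    print(opt.max)  # best score and the parameters that produced it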
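A sketch of the feature-pruning loop from item 2, assuming a pandas DataFrame of named variables; the stand-in data and the drop-one-at-a-time strategy are my reading of "ran loop over feature_importances_ to cut down until I had 15 variables".

    # Minimal sketch: drop the least important variable until 15 remain (placeholder data).
    import numpy as np
    import pandas as pd
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X_arr, y = make_classification(n_samples=500, n_features=30, random_state=0)
    X = pd.DataFrame(X_arr, columns=[f"var{i}" for i in range(30)])  # stand-in variable names

    while X.shape[1] > 15:
        rfc = RandomForestClassifier(n_estimators=30, random_state=0).fit(X, y)
        weakest = X.columns[np.argmin(rfc.feature_importances_)]
        X = X.drop(columns=weakest)  # discard the least informative variable

    print(list(X.columns))  # the 15 survivors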
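A sketch of SHAP-based variable selection as in items 4 and 6, assuming the shap package and stand-in data; ranking by mean |SHAP value| is an assumption about how the variables were actually chosen. TreeExplainer is the fast path for tree ensembles like XGBRegressor.

    # Minimal sketch: rank variables by mean |SHAP value| for a tree regressor.
    import numpy as np
    import pandas as pd
    import shap
    from sklearn.datasets import make_regression
    from xgboost import XGBRegressor

    X_arr, y = make_regression(n_samples=500, n_features=20, random_state=0)
    X = pd.DataFrame(X_arr, columns=[f"var{i}" for i in range(20)])  # stand-in variable names

    model = XGBRegressor(eta=0.094595, gamma=0.012027, max_depth=10).fit(X, y)
    shap_values = shap.TreeExplainer(model).shap_values(X)  # (n_samples, n_features)

    ranking = X.columns[np.argsort(np.abs(shap_values).mean(axis=0))[::-1]]
    print(list(ranking[:10]))  # strongest variables first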
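A sketch of the item 5 network, reading {10, 8, 8, 1} as the Dense layer widths; the relu activations and adam optimizer are assumptions, while the batch size, epochs, validation split, and the scaling of inputs and Energy come from the notes.

    # Minimal sketch: the Sequential regressor with scaled inputs/target (placeholder data).
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from tensorflow import keras

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 12))         # stand-in inputs
    y = rng.normal(size=(500, 1))          # stand-in Energy
    X = StandardScaler().fit_transform(X)  # scaled inputs
    y = StandardScaler().fit_transform(y)  # scaled Energy

    model = keras.Sequential([
        keras.Input(shape=(X.shape[1],)),
        keras.layers.Dense(10, activation="relu"),
        keras.layers.Dense(8, activation="relu"),
        keras.layers.Dense(8, activation="relu"),
        keras.layers.Dense(1),             # linear output for regression
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, batch_size=100, epochs=20, validation_split=0.2)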
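A sketch of the variable selection for items 7-9. SKlearn's permutation_importance needs a fitted supervised estimator and labels, so this assumes the true particle labels were available and that a simple classifier served as a proxy model; both the proxy and the estimator choice are assumptions on my part.

    # Minimal sketch: permutation_importance with a proxy classifier (placeholder data).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
    top = result.importances_mean.argsort()[::-1]
    print(top[:10])  # indices of the most important variables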
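Finally, a sketch of the BIC scan behind item 7's component choice, on placeholder data; gmm.bic() is lower-is-better, and the notes report its minimum near 8 components even though 3 was used in the end.

    # Minimal sketch: choose n_components by BIC (placeholder data).
    from sklearn.datasets import make_blobs
    from sklearn.mixture import GaussianMixture

    X, _ = make_blobs(n_samples=500, centers=3, random_state=0)

    bic = {}
    for n in range(2, 11):
        gmm = GaussianMixture(n_components=n, random_state=0).fit(X)
        bic[n] = gmm.bic(X)

    best_n = min(bic, key=bic.get)  # lowest BIC wins
    print(best_n, bic)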