09:43:11 From Martin Hoffmann Petersen : So n is the different classifications?
09:43:44 From Andreas Malthe Faber : I would guess the number of data points we are trying to predict?
09:44:20 From aagnello : N = how many objects you have
09:44:35 From aagnello : n = the current object’s index
09:45:31 From Leon Bozianu : If our estimate of y_n is 1, isn’t the final log term undefined?
09:48:59 From Kevin Kumar : This is regression, right?
09:52:17 From Camilla : The sampling is called bootstrapping, right?
10:02:20 From Martin Lyskjær Frølund : For the starting estimate of y, do we just use a random number?
10:03:44 From Kasper Hede Nielsen : How does the log loss work in the multiclass case? Would the order of the labels have an influence, since 1 and 5 are further apart than 1 and 2?
10:04:06 From Martin Lyskjær Frølund : Yes
10:05:36 From aagnello : @Kasper, we’ll cover multi-class loss next week, where we re-discuss where the loss functions come from, but it’s actually a good exercise to think about for yourself in the meanwhile ;)
10:10:01 From Morten Poulsen : When we feed it something we know, I assume it's the same as introducing a bias. Is that something we'll cover in the ethics part of the course?
10:10:40 From Morten Poulsen : ok thanks :)
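
[Editor's note: following up on Leon Bozianu's 09:45:31 question about the log term being undefined when a predicted probability reaches exactly 1 (or 0): a minimal Python sketch of the standard clipping trick that keeps the binary log loss finite. The function name and the epsilon value are illustrative, not from the lecture.]

```python
import numpy as np

def log_loss(y_true, y_pred, eps=1e-15):
    """Binary cross-entropy with predictions clipped away from 0 and 1,
    so log(0) never occurs even for a maximally confident prediction."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # eps is an illustrative choice
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# A confident wrong prediction is heavily penalized but stays finite:
y_true = np.array([0.0, 1.0, 1.0])
y_pred = np.array([1.0, 0.9, 0.8])  # the first entry would give log(0) without clipping
print(log_loss(y_true, y_pred))
```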
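
[Editor's note: a minimal sketch of the sampling-with-replacement step Camilla asks about at 09:52:17, assuming the standard bootstrap; the variable names are illustrative.]

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.arange(10)  # stand-in for the N objects in the dataset

# One bootstrap sample: draw N points *with replacement* from the N originals.
sample = rng.choice(data, size=data.size, replace=True)
print(sample)  # some points repeat, others are left out (~37% on average)
```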