09:18:24 From jakob : mondays..
09:21:24 From Lasse Bonn : this question was not so clear - in the exam, could you specify with 'exactly one' or 'at least one'?
09:21:46 From Marcus Nørgaard Weng : Yeah, I thought about that as well
09:22:45 From Lasse Bonn : thanks
09:22:57 From Amalie Paulsen To Troels Christian Petersen (privately) : How can we know it is a Poisson if we don't know what kind of experiment it is? (I mean, there are other distributions with one parameter.) And what if you don't know what the "ice cube experiment" is?
09:24:20 From Amalie Paulsen To Troels Christian Petersen (privately) : yes ;)
09:28:50 From Peter Andresen : How do you get an uncertainty if one solves this numerically? :-)
09:29:41 From Mirjam Partovi Dilami : It seems like the mean for the 20% tallest is the lower limit?
09:29:49 From Peter Andresen : From what calculation would you get the 2% uncertainty?
09:30:47 From Eric Planz : @Peter, I think he means sqrt(2000)
09:31:19 From Peter Andresen : thanks!
09:32:22 From Aske Luja Lehmann Rosted : Troels, just for information: when you change to the exam, we don't see that.
09:33:27 From Nicolai Ree : But should you use error propagation here? Because then you get sigma_r^2 = sigma_L^2 + (4/pi^2 r^3) sigma_r^2
09:33:52 From Amalie Paulsen : Is it enough to write "I use the error propagation formula"? Because you said in earlier lectures that we should not write out the formula.
09:33:53 From Nicolai Ree : I'll ask later
09:35:53 From Marcus Nørgaard Weng : Where did 3.04^2 come from?
09:37:19 From Marcus Nørgaard Weng : What's the logic behind squaring it? I don't remember that stat formula
09:38:02 From Marcus Nørgaard Weng : Ahh, that makes sense - Thank you!
09:39:58 From Neus : Do we need to do the fit for the third point?
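[Editor's note] The 2% figure Peter asks about follows from Poisson counting statistics: for N observed counts, the uncertainty is sqrt(N), so for the N = 2000 that Eric points to, the relative uncertainty is about 2%. A minimal sketch (only the value 2000 comes from the chat; the rest is illustrative):

```python
import math

N = 2000                  # counts quoted in the chat ("sqrt(2000)")
sigma_N = math.sqrt(N)    # Poisson uncertainty on a counted number
rel_unc = sigma_N / N     # relative uncertainty, about 2.2%
print(f"sigma_N = {sigma_N:.1f}, relative = {rel_unc:.1%}")
```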
09:41:05 From Kimi Kreilgaard : Where does the analytical expression for the mean come from?
09:41:15 From Peter Andresen : Why is it not Gaussian with p=0.99?
09:42:48 From Amalie Albrechtsen : Where does that analytical expectation come from?
09:43:22 From Peter Andresen : Cool, thanks :)
09:45:02 From Kimi Kreilgaard : Can we ask about the analytical expression in Monte Carlo later, if you will not answer now?
09:46:54 From Peter Andresen : and the integral should be of x*f(x), right?
09:48:07 From Mirjam Partovi Dilami : How do you see that? :-D
09:48:19 From Albert Bjerregård Sneppen : @Kimi For clarity, there is a typo in the mean of the Monte Carlo: the integral of x^(-0.9) is (1/0.1)*x^(0.1). So the 1.1 should be 0.1 throughout
09:49:09 From Kiril Klein : Wikipedia says for the KS test: "If either the form or the parameters of F(x) are determined from the data Xi the critical values determined in this way are invalid." Would it still be OK to run it here to check if it's a Gaussian?
09:50:07 From Rasmus : Is it possible to find the Python code for some of the figures in the solution manual?
09:51:10 From Kiril Klein : OK, thanks
09:51:14 From Kimi Kreilgaard : Thank you Albert, that makes a lot more sense.. :)
09:54:13 From Kiril Klein : You ask what's the best separation you could get. But with ML we could separate it perfectly by overfitting
09:54:33 From Sebastian Øllegaard Utecht : When calculating type 1 and type 2 error rates, should we divide by the entire dataset for each error type, or by the first distribution for type 1 and the second for type 2?
09:54:59 From Steen Bender : How do we find that the Fisher spreads the data by 3.24 sigma?
09:55:49 From Lasse Bonn : so we quantify separation with error rates?
09:55:56 From Benjamin Henriksen : great, thanks :-)
09:56:49 From Sebastian Øllegaard Utecht : Cool - that's what I thought :)
09:57:47 From Sebastian Øllegaard Utecht : in units of which of the standard deviations?
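[Editor's note] Albert's correction can be checked directly. Assuming (as a guess at the exam problem, not stated in the chat) a pdf f(x) ∝ x^(-0.9) on (0, 1]: the normalisation integral is ∫ x^(-0.9) dx = x^(0.1)/0.1, which equals 10 over (0, 1], so f(x) = 0.1·x^(-0.9), and the mean is ∫ x·f(x) dx = 0.1/1.1 ≈ 0.091. The transformation method then gives x = u^10 for uniform u, since the CDF is F(x) = x^(0.1):

```python
import random

random.seed(42)

# Transformation method for the assumed pdf f(x) = 0.1 * x**(-0.9) on (0, 1]:
# the CDF is F(x) = x**0.1, so solving F(x) = u gives x = u**10, u ~ Uniform(0, 1).
samples = [random.random() ** 10 for _ in range(100_000)]

mc_mean = sum(samples) / len(samples)
analytic_mean = 0.1 / 1.1   # mean of the assumed pdf: ∫ x · 0.1 x^(-0.9) dx over (0, 1]
print(mc_mean, analytic_mean)
```

The Monte Carlo mean should agree with the analytic value to within its statistical uncertainty, which is the point of Albert's 0.1-vs-1.1 correction.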
09:57:52 From Kasper Hede Nielsen : and that is the standard deviation from the two Gaussians added in quadrature?
10:00:34 From Peter Andresen : In general, shouldn't we decide on a significance value before fitting? So if I had chosen 5%, I would say it is not constant?
10:01:31 From Jaime Caballer-Revenga : a valid fit would be on the already corrected data?
10:01:32 From Norman Pedersen : In the first problem we say that a p-value of 0.025 is not enough to uphold a linear relationship. How can a p-value of 0.06 be that much better in the second problem?
10:02:58 From Jaime Caballer-Revenga : How do we calculate the uncertainty on the jump?
10:03:11 From Jaime Caballer-Revenga : as simple as that?
10:03:12 From Jaime Caballer-Revenga : ok, thanks
10:04:01 From Rahul Rajendra Aralikatte : Do we subtract the uncertainty in quadrature?
10:04:22 From Rahul Rajendra Aralikatte : ah, ok
10:04:30 From Peter Andresen : Is it a sigmoid shape or something like that?
10:13:18 From Marcus Nørgaard Weng : Where the errors are just sqrt(frequency) of each bin?
10:13:22 From Benjamin Henriksen : great, thanks :-)
10:13:22 From Sebastian Øllegaard Utecht : Can we reiterate when we are supposed to use a binned likelihood fit and an unbinned likelihood fit? It seems like we had enough data for a chi2 fit in the last question, but you give extra points for also using likelihood.
10:14:55 From Sebastian Øllegaard Utecht : And we use it when statistics are low, right?
10:15:55 From Sebastian Øllegaard Utecht : Alright - thanks.
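[Editor's note] Several questions in this stretch (separation in sigma, which standard deviation, how to normalise the error rates) can be illustrated with a toy two-sample example. All numbers below are invented; the exam data are not in the chat. One common convention, matching Kasper's question, divides the mean difference by the two widths added in quadrature (other normalisations exist), and each error rate is normalised to its own distribution, as confirmed in the chat:

```python
import math
import random

random.seed(0)

# Toy two-class example (invented parameters, for illustration only)
sig = [random.gauss(0.0, 1.0) for _ in range(10_000)]
bkg = [random.gauss(3.0, 1.0) for _ in range(10_000)]

def mean_std(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, math.sqrt(var)

m1, s1 = mean_std(sig)
m2, s2 = mean_std(bkg)

# One common separation measure: mean difference in units of the
# quadrature-combined widths (conventions differ between texts).
sep = abs(m2 - m1) / math.sqrt(s1**2 + s2**2)

# Error rates for a cut midway between the means; each rate is
# normalised to its own distribution, not the whole dataset.
cut = 0.5 * (m1 + m2)
type1 = sum(x > cut for x in sig) / len(sig)  # signal failing the cut
type2 = sum(x < cut for x in bkg) / len(bkg)  # background passing the cut
print(f"separation = {sep:.2f} sigma, type 1 = {type1:.3f}, type 2 = {type2:.3f}")
```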