{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Classification of b-quark jets in the Aleph simulated data\n", "\n", "Python macro for selecting b-jets in Aleph Z->qqbar MC in various ways:\n", "* Initially, simply with \"if\"-statements making requirements on certain variables. This corresponds to selecting \"boxes\" in the input variable space (typically called \"X\"). One could also try a Fisher discriminant (a linear combination of input variables), which corresponds to a plane in the X-space. But as the problem is non-linear, it is likely to be sub-optimal.\n", "\n", "* Next, using Machine Learning (ML) methods. We will try both tree-based and Neural-Net (NN) based methods, and see how complicated (or not) it is to get a good solution, and how much better it performs compared to the \"classic\" selection method.\n", "\n", "In the end, this exercise is a simple start on moving into the territory of multidimensional analysis.\n", "\n", "### Data:\n", "The input variables (X) are:\n", "* energy: Measured energy of the jet in GeV. Should be 45 GeV, but fluctuates.\n", "* cTheta: cos(theta), i.e. the polar angle of the jet with respect to the beam axis.\n", " The detector works best in the central region (|cTheta| small) and less well in the forward regions.\n", "* phi: The azimuth angle of the jet. As the detector is uniform in phi, this should not matter (much).\n", "* prob_b: Probability of being a b-jet from the pointing of the tracks to the vertex.\n", "* spheri: Sphericity of the event, i.e. how spherical it is.\n", "* pt2rel: The transverse momentum squared of the tracks relative to the jet axis, i.e. the width of the jet.\n", "* multip: Multiplicity of the jet (in a relative measure).\n", "* bqvjet: b-quark vertex of the jet, i.e. 
the probability of a detached vertex.\n", "* ptlrel: Transverse momentum (in GeV) of a possible lepton with respect to the jet axis (about 0 if there are no leptons).\n", "\n", "The target variable (Y) is:\n", "* isb: 1 if it is from a b-quark and 0 if it is not.\n", "\n", "Finally, those before you (the Aleph collaboration in the mid-1990s) produced a Neural-Net-based classification variable, which you can compare to (and compete with?):\n", "* nnbjet: Value of the original Aleph b-jet tagging algorithm (for reference).\n", "\n", "\n", "### Task:\n", "Thus, the task before you is to produce a function (ML algorithm), which, given the input variables X, provides an output variable estimate, Y_est, which is \"closest possible\" to the target variable, Y. The \"closest possible\" is left to the user to define in a _Loss Function_, which we will discuss further. In classification problems (such as this), the typical loss function to use is \"Cross Entropy\", see https://en.wikipedia.org/wiki/Cross_entropy.\n", "\n", "\n", "* Author: Troels C. Petersen (NBI)\n", "* Email: petersen@nbi.dk\n", "* Date: 20th of April 2020" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from __future__ import print_function, division # Ensures Python3 printing & division standard\n", "from matplotlib import pyplot as plt\n", "from matplotlib import colors\n", "from matplotlib.colors import LogNorm\n", "import numpy as np\n", "import csv" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Possible other packages to consider:\n", "cornerplot, seaplot, sklearn.decomposition(PCA)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "r = np.random\n", "r.seed(42)\n", "\n", "SavePlots = False\n", "plt.close('all')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Evaluate an attempt at classification:\n", "\n", "This is made into a function, as it is called many times. 
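To make the bookkeeping concrete, here is a minimal, hypothetical sketch (the labels below are invented, not taken from the data) of counting a 2x2 confusion matrix N[estimate][truth] and the fraction of wrong classifications:

```python
# Hypothetical, invented labels -- only to illustrate the counting scheme.
isb_demo    = [0, 0, 1, 1, 1, 0]   # truth: 1 = b-jet, 0 = light jet
bquark_demo = [0, 1, 1, 1, 0, 0]   # estimates from some classifier

N = [[0, 0], [0, 0]]               # N[estimate][truth]
for est, truth in zip(bquark_demo, isb_demo):
    N[est][truth] += 1

fracWrong = (N[0][1] + N[1][0]) / len(isb_demo)
print(N, fracWrong)                # the off-diagonal entries are the mistakes
```

The evaluate() function defined below performs the same counting on the isb truth array. 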
It returns a \"confusion matrix\" and the fraction of wrong classifications." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "def evaluate(bquark) :\n", " N = [[0,0], [0,0]] # Make a list of lists (i.e. matrix) for counting successes/failures.\n", " for i in np.arange(len(isb)):\n", " if (bquark[i] == 0 and isb[i] == 0) : N[0][0] += 1\n", " if (bquark[i] == 0 and isb[i] == 1) : N[0][1] += 1\n", " if (bquark[i] == 1 and isb[i] == 0) : N[1][0] += 1\n", " if (bquark[i] == 1 and isb[i] == 1) : N[1][1] += 1\n", " fracWrong = float(N[0][1]+N[1][0])/float(len(isb))\n", " return N, fracWrong" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Main program start:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "# Get data (with this very useful NumPy reader):\n", "data = np.genfromtxt('AlephBtag_MC_small_v2.csv', names=True)\n", "\n", "energy = data['energy']\n", "cTheta = data['cTheta']\n", "phi = data['phi']\n", "prob_b = data['prob_b']\n", "spheri = data['spheri']\n", "pt2rel = data['pt2rel']\n", "multip = data['multip']\n", "bqvjet = data['bqvjet']\n", "ptlrel = data['ptlrel']\n", "nnbjet = data['nnbjet']\n", "isb = data['isb']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Produce 1D figures:\n", "Define the histogram range and binning (important - MatPlotLib is NOT good at this):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Nbins = 100\n", "xmin = 0.0\n", "xmax = 1.0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Make new lists selected based on what the jets really are (b-quark jet or light-quark jet):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prob_b_bjets = []\n", "prob_b_ljets = []\n", "bqvjet_bjets = []\n", "bqvjet_ljets = []\n", "for i in np.arange(len(isb)) :\n", " if (isb[i] == 1) :\n", " 
prob_b_bjets.append(prob_b[i])\n", " bqvjet_bjets.append(bqvjet[i])\n", " else :\n", " prob_b_ljets.append(prob_b[i])\n", " bqvjet_ljets.append(bqvjet[i])\n", "\n", "# Produce the actual figure, here with two histograms in it:\n", "fig, ax = plt.subplots(figsize=(12, 6)) # Create just a single figure and axes (figsize is in inches!)\n", "hist_prob_b_bjets = ax.hist(prob_b_bjets, bins=Nbins, range=(xmin, xmax), histtype='step', linewidth=2, label='prob_b_bjets', color='blue')\n", "hist_prob_b_ljets = ax.hist(prob_b_ljets, bins=Nbins, range=(xmin, xmax), histtype='step', linewidth=2, label='prob_b_ljets', color='red')\n", "ax.set_xlabel(\"Probability of b-quark based on track impact parameters\") # Label of x-axis\n", "ax.set_ylabel(\"Frequency / 0.01\") # Label of y-axis\n", "ax.set_title(\"Distribution of prob_b\") # Title of plot\n", "ax.legend(loc='best') # Legend. Could also be 'upper right'\n", "ax.grid(axis='y')\n", "\n", "fig.tight_layout()\n", "fig.show()\n", "\n", "if SavePlots :\n", " fig.savefig('Hist_prob_b_and_bqvjet.pdf', dpi=600)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Produce 2D figures:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# First we try a scatter plot, to see how the individual events distribute themselves:\n", "fig2, ax2 = plt.subplots(figsize=(12, 6))\n", "scat2_prob_b_vs_bqvjet_bjets = ax2.scatter(prob_b_bjets, bqvjet_bjets, label='b-jets', color='blue')\n", "scat2_prob_b_vs_bqvjet_ljets = ax2.scatter(prob_b_ljets, bqvjet_ljets, label='l-jets', color='red')\n", "ax2.legend(loc='best')\n", "fig2.tight_layout()\n", "fig2.show()\n", "\n", "if SavePlots :\n", " fig2.savefig('Scatter_prob_b_vs_bqvjet.pdf', dpi=600)\n", "\n", "\n", "# However, as can be seen in the figure, the overlap between b-jets and light-jets is large,\n", "# and one covers much of the other in a scatter plot, which also does not show the amount of\n", "# statistics in the dense regions. 
Therefore, we try two separate 2D histograms (zoomed):\n", "fig3, ax3 = plt.subplots(1, 2, figsize=(12, 6))\n", "hist2_prob_b_vs_bqvjet_bjets = ax3[0].hist2d(prob_b_bjets, bqvjet_bjets, bins=[40,40], range=[[0.0, 0.4], [0.0, 0.4]])\n", "hist2_prob_b_vs_bqvjet_ljets = ax3[1].hist2d(prob_b_ljets, bqvjet_ljets, bins=[40,40], range=[[0.0, 0.4], [0.0, 0.4]])\n", "ax3[0].set_title(\"b-jets\")\n", "ax3[1].set_title(\"light-jets\")\n", "\n", "fig3.tight_layout()\n", "fig3.show()\n", "\n", "if SavePlots :\n", " fig3.savefig('Hist2D_prob_b_vs_bqvjet.pdf', dpi=600)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Selection:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# I give the selection cuts names, so that they only need to be changed in ONE place (also ensures consistency!):\n", "loose_propb = 0.10\n", "tight_propb = 0.16\n", "loose_bqvjet = 0.12\n", "tight_bqvjet = 0.28\n", "\n", "# If either of the variables clearly indicates a b-quark, or if both loosely do so, call it a b-quark, otherwise not!\n", "bquark=[]\n", "for i in np.arange(len(prob_b)):\n", " if (prob_b[i] > tight_propb) :\n", " bquark.append(1)\n", " elif (bqvjet[i] > tight_bqvjet) :\n", " bquark.append(1)\n", " elif ((prob_b[i] > loose_propb) and (bqvjet[i] > loose_bqvjet)) :\n", " bquark.append(1)\n", " else :\n", " bquark.append(0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Evaluate the selection:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "N, fracWrong = evaluate(bquark)\n", "print(\"\\nRESULT OF HUMAN ATTEMPT AT A GOOD SELECTION:\")\n", "print(\" First number is my estimate, second is the MC truth:\")\n", "print(\" True-Negative (0,0) = \", N[0][0])\n", "print(\" False-Negative (0,1) = \", N[0][1])\n", "print(\" False-Positive (1,0) = \", N[1][0])\n", "print(\" True-Positive (1,1) = \", N[1][1])\n", "print(\" Fraction wrong = ( (0,1) + (1,0) ) / sum = \", 
fracWrong)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Compare with the NN-approach from the 1990s:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bquark=[]\n", "for i in np.arange(len(prob_b)):\n", " if (nnbjet[i] > 0.82) : bquark.append(1)\n", " else : bquark.append(0)\n", "\n", "N, fracWrong = evaluate(bquark)\n", "print(\"\\nALEPH BJET TAG:\")\n", "print(\" First number is the Aleph NN estimate, second is the MC truth:\")\n", "print(\" True-Negative (0,0) = \", N[0][0])\n", "print(\" False-Negative (0,1) = \", N[0][1])\n", "print(\" False-Positive (1,0) = \", N[1][0])\n", "print(\" True-Positive (1,1) = \", N[1][1])\n", "print(\" Fraction wrong = ( (0,1) + (1,0) ) / sum = \", fracWrong)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 2 }