Commit fcaa91c3 authored by Sylvain Marchienne's avatar Sylvain Marchienne

TP1 machine learning

# TP1 Monday 21/01/2019
Today you will get hands-on experience with Python and a few of its libraries:
* Numpy
* Matplotlib
* Scikit-Learn
Start with the notebook `python-numpy-matplotlib.ipynb` (~1 hour, no more), then move on to the notebooks in the `machine learning` folder, beginning with `05.00-Machine-Learning.ipynb`.
%% Cell type:markdown id: tags:
# Part Two: Introduction to Machine Learning
The notebooks that follow come from the book "Python Data Science Handbook". Its author is one of the biggest contributors to the Pandas project, which you will have the chance to use during your project. His book is a reference, and he offers many notebooks on his GitHub for learning. We have selected a few of them for you as an introduction to Machine Learning. Don't worry if you don't understand everything; you will have time to play with scikit-learn and build a better understanding tomorrow with Sylvain Rousseau.
Follow the notebook, taking care to understand the explanations and the code (when there is any). If you need explanations, don't hesitate to talk to the tutors and/or ask your questions on Slack! The tutors are not necessarily experts and won't be able to answer all your questions, but starting the discussion with them and the other students is valuable!
%% Cell type:markdown id: tags:
# Machine Learning
%% Cell type:markdown id: tags:
In many ways, machine learning is the primary means by which data science manifests itself to the broader world.
Machine learning is where these computational and algorithmic skills of data science meet the statistical thinking of data science, and the result is a collection of approaches to inference and data exploration that are not about effective theory so much as effective computation.
The term "machine learning" is sometimes thrown around as if it is some kind of magic pill: *apply machine learning to your data, and all your problems will be solved!*
As you might expect, the reality is rarely this simple.
While these methods can be incredibly powerful, to be effective they must be approached with a firm grasp of the strengths and weaknesses of each method, as well as a grasp of general concepts such as bias and variance, overfitting and underfitting, and more.
This chapter will dive into practical aspects of machine learning, primarily using Python's Scikit-Learn package.
This is not meant to be a comprehensive introduction to the field of machine learning; that is a large subject and necessitates a more technical approach than we take here. Rather, the goals of this chapter are:
- To introduce the fundamental vocabulary and concepts of machine learning.
- To introduce the Scikit-Learn API and show some examples of its use.
- To take a deeper dive into the details of several of the most important machine learning approaches, and develop an intuition into how they work and when and where they are applicable.
%% Cell type:markdown id: tags:
**Go on to the next notebook** "05.01-What-Is-Machine-Learning"
%% Cell type:markdown id: tags:
# What Is Machine Learning?
%% Cell type:markdown id: tags:
Before we take a look at the details of various machine learning methods, let's start by looking at what machine learning is, and what it isn't.
Machine learning is often categorized as a subfield of artificial intelligence, but I find that categorization can often be misleading at first brush.
The study of machine learning certainly arose from research in this context, but in the data science application of machine learning methods, it's more helpful to think of machine learning as a means of *building models of data*.
Fundamentally, machine learning involves building mathematical models to help understand data.
"Learning" enters the fray when we give these models *tunable parameters* that can be adapted to observed data; in this way the program can be considered to be "learning" from the data.
Once these models have been fit to previously seen data, they can be used to predict and understand aspects of newly observed data.
I'll leave to the reader the more philosophical digression regarding the extent to which this type of mathematical, model-based "learning" is similar to the "learning" exhibited by the human brain.
Understanding the problem setting in machine learning is essential to using these tools effectively, and so we will start with some broad categorizations of the types of approaches we'll discuss here.
%% Cell type:markdown id: tags:
## Categories of Machine Learning
At the most fundamental level, machine learning can be categorized into two main types: supervised learning and unsupervised learning.
*Supervised learning* involves somehow modeling the relationship between measured features of data and some label associated with the data; once this model is determined, it can be used to apply labels to new, unknown data.
This is further subdivided into *classification* tasks and *regression* tasks: in classification, the labels are discrete categories, while in regression, the labels are continuous quantities.
We will see examples of both types of supervised learning in the following section.
*Unsupervised learning* involves modeling the features of a dataset without reference to any label, and is often described as "letting the dataset speak for itself."
These models include tasks such as *clustering* and *dimensionality reduction.*
Clustering algorithms identify distinct groups of data, while dimensionality reduction algorithms search for more succinct representations of the data.
We will see examples of both types of unsupervised learning in the following section.
In addition, there are so-called *semi-supervised learning* methods, which fall somewhere between supervised learning and unsupervised learning.
Semi-supervised learning methods are often useful when only incomplete labels are available.
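The practical difference between the two settings is whether labels accompany the features at fit time. A minimal scikit-learn sketch contrasting them (the estimators and the synthetic data are our own choices for illustration, not prescribed by the text):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two well-separated blobs of 2-D points
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)  # labels are known only in the supervised case

# Supervised: fit uses features *and* labels
clf = LogisticRegression().fit(X, y)
print(clf.predict([[4.0, 4.0]]))   # predicted class for a new point

# Unsupervised: fit uses the features only
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_[:5])              # group assignments inferred from the data alone
```

The same feature matrix `X` feeds both models; only the supervised one ever sees `y`.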
%% Cell type:markdown id: tags:
## Qualitative Examples of Machine Learning Applications
To make these ideas more concrete, let's take a look at a few very simple examples of a machine learning task.
These examples are meant to give an intuitive, non-quantitative overview of the types of machine learning tasks we will be looking at in this chapter.
%% Cell type:markdown id: tags:
### Classification: Predicting discrete labels
We will first take a look at a simple *classification* task, in which you are given a set of labeled points and want to use these to classify some unlabeled points.
Imagine that we have the data shown in this figure:
%% Cell type:markdown id: tags:
%% Cell type:markdown id: tags:
Here we have two-dimensional data: that is, we have two *features* for each point, represented by the *(x,y)* positions of the points on the plane.
In addition, we have one of two *class labels* for each point, here represented by the colors of the points.
From these features and labels, we would like to create a model that will let us decide whether a new point should be labeled "blue" or "red."
There are a number of possible models for such a classification task, but here we will use an extremely simple one. We will make the assumption that the two groups can be separated by drawing a straight line through the plane between them, such that points on each side of the line fall in the same group.
Here the *model* is a quantitative version of the statement "a straight line separates the classes", while the *model parameters* are the particular numbers describing the location and orientation of that line for our data.
The optimal values for these model parameters are learned from the data (this is the "learning" in machine learning), which is often called *training the model*.
The following figure shows a visual representation of what the trained model looks like for this data:
%% Cell type:markdown id: tags:
%% Cell type:markdown id: tags:
Now that this model has been trained, it can be generalized to new, unlabeled data.
In other words, we can take a new set of data, draw this model line through it, and assign labels to the new points based on this model.
This stage is usually called *prediction*. See the following figure:
%% Cell type:markdown id: tags:
%% Cell type:markdown id: tags:
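The train-then-predict workflow described above can be sketched with scikit-learn on synthetic two-class data. The choice of a linear support-vector classifier here is our own way of illustrating a "straight line" model; the data is invented for illustration:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)
# Synthetic labeled points: two clusters in the (x, y) plane
X_train = np.vstack([rng.normal(-2, 0.8, (30, 2)),   # "blue" class
                     rng.normal(+2, 0.8, (30, 2))])  # "red" class
y_train = np.array([0] * 30 + [1] * 30)

# Training: learn the location and orientation of the separating line
model = LinearSVC().fit(X_train, y_train)
print(model.coef_, model.intercept_)  # the learned line parameters

# Prediction: assign labels to new, unlabeled points
X_new = np.array([[-1.5, -1.0], [2.5, 1.0]])
print(model.predict(X_new))
```

The fitted `coef_` and `intercept_` are exactly the "model parameters" of the text: the numbers that pin down the line.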
This is the basic idea of a classification task in machine learning, where "classification" indicates that the data has discrete class labels.
At first glance this may look fairly trivial: it would be relatively easy to simply look at this data and draw such a discriminatory line to accomplish this classification.
A benefit of the machine learning approach, however, is that it can generalize to much larger datasets in many more dimensions.
For example, this is similar to the task of automated spam detection for email; in this case, we might use the following features and labels:
- *feature 1*, *feature 2*, etc. $\to$ normalized counts of important words or phrases ("Viagra", "Nigerian prince", etc.)
- *label* $\to$ "spam" or "not spam"
For the training set, these labels might be determined by individual inspection of a small representative sample of emails; for the remaining emails, the label would be determined using the model.
For a suitably trained classification algorithm with enough well-constructed features (typically thousands or millions of words or phrases), this type of approach can be very effective.
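A sketch of this spam-filter setup, using scikit-learn's CountVectorizer for word-count features and a multinomial naive Bayes classifier; the toy emails and their labels below are invented purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training set: a few hand-labeled emails (illustrative only)
emails = [
    "cheap viagra offer, click now",
    "you won a prize from a nigerian prince",
    "meeting rescheduled to friday at noon",
    "please review the attached project report",
]
labels = ["spam", "not spam"][:1] * 2 + ["not spam"] * 2

# Features = normalized word counts; model = naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# The trained model labels the remaining, uninspected emails
print(model.predict(["claim your free prize now",
                     "notes from friday's meeting"]))
```

A real filter would train on thousands of emails and far richer features, but the pipeline shape is the same.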
%% Cell type:markdown id: tags:
### Regression: Predicting continuous labels
In contrast with the discrete labels of a classification algorithm, we will next look at a simple *regression* task in which the labels are continuous quantities.
Consider the data shown in the following figure, which consists of a set of points each with a continuous label:
%% Cell type:markdown id: tags:
%% Cell type:markdown id: tags:
As with the classification example, we have two-dimensional data: that is, there are two features describing each data point.
The color of each point represents the continuous label for that point.
There are a number of possible regression models we might use for this type of data, but here we will use a simple linear regression to predict the points.
This simple linear regression model assumes that if we treat the label as a third spatial dimension, we can fit a plane to the data.
This is a higher-level generalization of the well-known problem of fitting a line to data with two coordinates.
We can visualize this setup as shown in the following figure:
%% Cell type:markdown id: tags:
%% Cell type:markdown id: tags:
Notice that the *feature 1-feature 2* plane here is the same as in the two-dimensional plot from before; in this case, however, we have represented the labels by both color and three-dimensional axis position.
From this view, it seems reasonable that fitting a plane through this three-dimensional data would allow us to predict the expected label for any set of input parameters.
Returning to the two-dimensional projection, when we fit such a plane we get the result shown in the following figure:
%% Cell type:markdown id: tags:
%% Cell type:markdown id: tags:
This plane of fit gives us what we need to predict labels for new points.
Visually, we find the results shown in the following figure:
%% Cell type:markdown id: tags:
%% Cell type:markdown id: tags:
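A minimal sketch of fitting such a plane with scikit-learn's LinearRegression, on synthetic data whose continuous label is a noisy linear function of the two features (the data and coefficients are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Two features per point; the continuous label lies on a noisy plane
X = rng.uniform(-5, 5, (100, 2))
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + 3.0 + rng.normal(0, 0.1, 100)

# Fitting the plane: learn the slope along each feature and the intercept
model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)   # should recover roughly [1.5, -2.0] and 3.0

# Predict the label for new feature pairs
print(model.predict([[1.0, 1.0]]))     # roughly 1.5 - 2.0 + 3.0 = 2.5
```

The two entries of `coef_` and the `intercept_` together describe the plane, just as two numbers described the line in the classification example.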
As with the classification example, this may seem rather trivial in a low number of dimensions.
But the power of these methods is that they can be straightforwardly applied and evaluated in the case of data with many, many features.
For example, this is similar to the task of computing the distance to galaxies observed through a telescope—in this case, we might use the following features and labels:
- *feature 1*, *feature 2*, etc. $\to$ brightness of each galaxy at one of several wavelengths or colors
- *label* $\to$ distance or redshift of the galaxy
The distances for a small number of these galaxies might be determined through an independent set of (typically more expensive) observations.
Distances to remaining galaxies could then be estimated using a suitable regression model, without the need to employ the more expensive observation across the entire set.
In astronomy circles, this is known as the "photometric redshift" problem.
Some important regression algorithms that we will discuss are linear regression, support vector machines, and random forest regression.
%% Cell type:markdown id: tags:
### Clustering: Inferring labels on unlabeled data
The classification and regression illustrations we just looked at are examples of supervised learning algorithms, in which we are trying to build a model that will predict labels for new data.
Unsupervised learning involves models that describe data without reference to any known labels.
One common case of unsupervised learning is "clustering," in which data is automatically assigned to some number of discrete groups.
For example, we might have some two-dimensional data like that shown in the following figure:
%% Cell type:markdown id: tags:
%% Cell type:markdown id: tags:
By eye, it is clear that each of these points is part of a distinct group.
Given this