Everything so far has been about predicting a real-valued output. Given house features, predict the price — a number that can be anything from zero to millions. The output lives on a continuous scale.
Classification is different. The output can only take a small number of discrete values. Some examples from the lecture: is an email spam or not spam? Is an online transaction fraudulent or legitimate? Is a tumour malignant or benign?
What all of these share is that $y \in \{0, 1\}$ — two possible outcomes, nothing in between, nothing outside. This is the binary classification problem. The value 0 is called the negative class and 1 is called the positive class. The assignment is somewhat arbitrary — by convention, the positive class usually represents the presence of something (spam, malignancy, fraud) and the negative class its absence, but what matters is consistency, not which label you attach to which outcome.
The most natural first attempt: take linear regression as we know it and apply it to a classification problem. Predict a number, and then threshold it — if the prediction is $\geq 0.5$, say $y = 1$; if it is $< 0.5$, say $y = 0$.
This can appear to work on simple examples. If you fit a line to a dataset of tumour sizes labelled 0 or 1, the line might cross 0.5 at a reasonable decision point. But the approach is fundamentally fragile.
The problem becomes clear the moment you add new data far to the right of the existing cluster — a very large tumour, for instance. The best-fit line tilts to accommodate the new point, and its crossing of 0.5 shifts. Cases that were correctly classified as malignant are now below the threshold and get classified as benign. The model got worse by seeing more data, which is exactly the wrong behaviour for a learning algorithm.
There is also a conceptual mismatch: linear regression can output any real number, including values well below 0 and well above 1. But for a binary classification problem the output should represent something like a probability — and probabilities must live in $[0, 1]$. A hypothesis that predicts values outside that interval for a binary problem is outputting nonsense.
These two failures together — sensitivity to outliers and unconstrained output range — motivate a different approach.
What we need is a hypothesis that always outputs a value between 0 and 1, regardless of the input. The function that achieves this is the sigmoid function, also called the logistic function:

$$g(z) = \frac{1}{1 + e^{-z}}$$
Its properties are exactly what we need. As $z \to +\infty$, $e^{-z} \to 0$ and $g(z) \to 1$. As $z \to -\infty$, $e^{-z} \to \infty$ and $g(z) \to 0$. At $z = 0$, $g(0) = 0.5$. The function is smooth, differentiable everywhere, and always strictly between 0 and 1. Its shape is an S-curve — flat near the extremes, steep in the middle.
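These properties are easy to check numerically (a quick sketch using only Python's standard library):

```python
import math

def sigmoid(z: float) -> float:
    """Logistic function g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0.0))     # exactly 0.5 at the midpoint
print(sigmoid(10.0))    # ~0.99995: saturates towards 1
print(sigmoid(-10.0))   # ~0.00005: saturates towards 0
```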
The logistic regression hypothesis is formed by plugging the linear combination $\theta^T x$ — the same expression we used in linear regression — into the sigmoid function:

$$h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}$$
The inner part $\theta^T x$ can take any real value, just as before. But wrapping it in $g$ squashes the output into $(0, 1)$. No matter what the features say, the output of the hypothesis is always a valid number between 0 and 1.
How to interpret this output. The number $h_\theta(x)$ is interpreted as the estimated probability that $y = 1$ given the input $x$, under the parameters $\theta$:

$$h_\theta(x) = P(y = 1 \mid x; \theta)$$
If $h_\theta(x) = 0.7$ for some patient's tumour data, the model is saying: given these features, there is a 70% probability that this tumour is malignant. Since the two classes must sum to 1:

$$P(y = 0 \mid x; \theta) = 1 - P(y = 1 \mid x; \theta) = 0.3$$
This probabilistic interpretation is principled and useful — it tells you not just the predicted class but also the model's confidence.
Making a prediction. To get a hard 0/1 prediction, we threshold at 0.5: predict $y = 1$ if $h_\theta(x) \geq 0.5$, and $y = 0$ if $h_\theta(x) < 0.5$.
Since $g(z) \geq 0.5$ if and only if $z \geq 0$, and $z = \theta^T x$, this threshold rule is equivalent to: predict $y = 1$ whenever $\theta^T x \geq 0$, and $y = 0$ whenever $\theta^T x < 0$.
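A minimal sketch of this equivalence, with hypothetical parameter values, showing that thresholding $h_\theta(x)$ at 0.5 is the same as checking the sign of $\theta^T x$:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(theta, x):
    """Hard 0/1 prediction: h >= 0.5 is equivalent to theta^T x >= 0."""
    z = sum(t * xi for t, xi in zip(theta, x))
    return 1 if z >= 0 else 0

# Hypothetical parameters theta = (-3, 1) over x = [1, x1] (bias feature first)
theta = [-3.0, 1.0]
print(predict(theta, [1.0, 5.0]))   # z = 2,  sigmoid(2)  > 0.5 -> 1
print(predict(theta, [1.0, 2.0]))   # z = -1, sigmoid(-1) < 0.5 -> 0
```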
The decision boundary is the set of all points in feature space where the model is exactly on the fence — where $h_\theta(x) = 0.5$, or equivalently where $\theta^T x = 0$. It is the border separating the region where the model predicts $y = 1$ from the region where it predicts $y = 0$.
For a simple two-feature problem, $\theta_0 + \theta_1 x_1 + \theta_2 x_2 = 0$ defines a line in the $x_1$-$x_2$ plane. On one side the model says positive; on the other, negative.
Crucial point: the decision boundary is a property of the hypothesis — specifically, of the parameters $\theta$. It is not a property of the training data. The training data is used to learn the parameters, but once the parameters are fixed, the decision boundary is determined entirely by $\theta$, and it would apply equally to any dataset.
With the standard hypothesis $h_\theta(x) = g(\theta_0 + \theta_1 x_1 + \theta_2 x_2)$, the decision boundary is the line $\theta_0 + \theta_1 x_1 + \theta_2 x_2 = 0$. This is a straight line in feature space — hence "linear" decision boundary.
What if the data is not linearly separable? What if the positive class forms a cluster in the middle, surrounded by the negative class, and no straight line can separate them?
We use the same idea as polynomial regression: add higher-order features. If we define $h_\theta(x) = g(\theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_1^2 + \theta_4 x_2^2)$, then with parameters such as $\theta = (1, 0, 0, -1, -1)$ the decision boundary is $x_1^2 + x_2^2 = 1$ — a circle in the $x_1$-$x_2$ plane, with the model predicting $y = 1$ inside it. With more complex polynomial features, you can produce ellipses, parabolas, and arbitrary curves.
The model is still logistic regression — the same algorithm, the same sigmoid function, the same training procedure. The nonlinearity comes from the feature engineering, not from changing anything fundamental about the algorithm. This is the same trick we saw with polynomial regression: keep the model structure linear in , let the features carry the nonlinearity.
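A numerical sketch of a circular boundary, with hand-set illustrative parameters rather than learned ones:

```python
def predict_circle(x1, x2):
    # Illustrative, hand-set parameters theta = (1, 0, 0, -1, -1) over
    # features [1, x1, x2, x1^2, x2^2]; the boundary is x1^2 + x2^2 = 1.
    z = 1.0 - x1**2 - x2**2
    return 1 if z >= 0 else 0      # g(z) >= 0.5 exactly when z >= 0

print(predict_circle(0.0, 0.0))    # inside the circle -> 1
print(predict_circle(2.0, 0.0))    # outside the circle -> 0
```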
[!NOTE] By choosing richer feature spaces, logistic regression can learn surprisingly complex decision boundaries. The practical question of how complex to make the feature space — and how to avoid overfitting when you do — is the subject of regularisation, which comes later in the course.
We now need to fit the parameters — we need a cost function to minimise. The temptation is to reuse the squared error cost from linear regression:

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$
but with the new sigmoid hypothesis. This does not work. The reason is subtle but important.
With a linear $h_\theta$, $J(\theta)$ is a convex bowl — gradient descent finds the global minimum reliably. But with the sigmoid function inside $h_\theta$, the composition of the sigmoid with the squared error produces a non-convex function of $\theta$. The cost surface has many local minima, hills, and plateaus. Gradient descent can get stuck at a suboptimal local minimum and never find the global one. The algorithm would be fundamentally unreliable.
We need a different cost function — one that is convex when combined with the logistic hypothesis. We define the cost for a single training example as:

$$\mathrm{Cost}(h_\theta(x), y) = \begin{cases} -\log(h_\theta(x)) & \text{if } y = 1 \\ -\log(1 - h_\theta(x)) & \text{if } y = 0 \end{cases}$$
Understanding why these two cases are the right choice requires looking at each separately.
When $y = 1$: the cost is $-\log(h_\theta(x))$. If $h_\theta(x) = 1$ (the model is perfectly confident and correct), the cost is $-\log(1) = 0$ — zero cost. If $h_\theta(x) \to 0$ (the model is completely wrong, predicting near-zero probability for a case that is actually positive), the cost $\to \infty$ — infinite cost. The function penalises wrong predictions with increasing severity as the prediction diverges from the truth, and the penalty grows without bound as the model becomes completely confident in the wrong direction.
When $y = 0$: the cost is $-\log(1 - h_\theta(x))$. If $h_\theta(x) = 0$ (correctly predicting zero probability for a negative case), the cost is $-\log(1) = 0$ — zero cost. If $h_\theta(x) \to 1$ (predicting near-certainty for a case that is actually negative), the cost $\to \infty$ — infinite cost again. The same structure: correct confident predictions cost nothing; wrong confident predictions cost infinitely much.
This cost function is called the binary cross-entropy or log loss. The logarithm is what gives it the right shape — the convexity that makes gradient descent reliable.
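The two cases can be transcribed directly (a sketch; a practical implementation would clip $h$ away from 0 and 1 to avoid taking $\log(0)$):

```python
import math

def cost(h, y):
    """Cross-entropy cost for one example; h = h_theta(x), assumed in (0, 1)."""
    return -math.log(h) if y == 1 else -math.log(1.0 - h)

print(cost(0.99, 1))   # confident and correct: ~0.01
print(cost(0.01, 1))   # confident and wrong: ~4.6, and -> infinity as h -> 0
print(cost(0.01, 0))   # confident and correct for the negative class: ~0.01
```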
Writing the cost as a two-case expression is correct but awkward to work with. There is a cleaner way to write exactly the same thing as a single formula:

$$\mathrm{Cost}(h_\theta(x), y) = -y \log(h_\theta(x)) - (1 - y) \log(1 - h_\theta(x))$$
Verify that this is identical to the two-case version. When $y = 1$: the second term vanishes (since $1 - y = 0$) and you are left with $-\log(h_\theta(x))$. When $y = 0$: the first term vanishes (since $y = 0$) and you are left with $-\log(1 - h_\theta(x))$. Same function, one formula.
Summing over all training examples, the full cost function for logistic regression is:

$$J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log\left(1 - h_\theta(x^{(i)})\right) \right]$$
This is the standard logistic regression cost function. It is convex in $\theta$ — provable using the theory of convex functions, but we take that as a given here. The implication is that gradient descent will reliably find the global minimum, just as it does for linear regression.
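A direct transcription of this formula in plain Python, evaluated on a tiny hypothetical dataset (a real implementation would vectorise this and guard against $\log(0)$):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def J(theta, X, y):
    """Logistic regression cost: average cross-entropy over the training set."""
    m = len(y)
    total = 0.0
    for xi, yi in zip(X, y):
        h = sigmoid(sum(t * f for t, f in zip(theta, xi)))
        total += -yi * math.log(h) - (1 - yi) * math.log(1.0 - h)
    return total / m

# Hypothetical toy data: x = [1, x1] (bias feature first)
X = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]]
y = [0, 0, 1, 1]
print(J([0.0, 0.0], X, y))   # with theta = 0, every h = 0.5: cost = log 2 ~ 0.693
```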
[!MATH] This cost function has a deeper justification beyond its convenient shape. It can be derived from the principle of maximum likelihood estimation: given the probabilistic model $P(y = 1 \mid x; \theta) = h_\theta(x)$, the log-likelihood of the training data is exactly $-m \, J(\theta)$. Minimising $J(\theta)$ is equivalent to maximising this log-likelihood — finding the parameters that make the observed training data most probable under the model. This statistical grounding is why this particular cost function is used universally for logistic regression, and not some arbitrary alternative.
Compare the two cost functions side by side:

| | Linear regression | Logistic regression |
| --- | --- | --- |
| Hypothesis | $h_\theta(x) = \theta^T x$ | $h_\theta(x) = g(\theta^T x)$ |
| Cost $J(\theta)$ | $\frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$ | $-\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log(1 - h_\theta(x^{(i)})) \right]$ |
Different expressions, same underlying goal: a scalar measure of how badly the current parameters fit the training data, to be minimised.
With the cost function defined, we apply gradient descent to minimise it. The update rule is the same general form as always:

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)$$
Computing the partial derivative of the logistic cost with respect to $\theta_j$ requires the chain rule applied to the log and to the sigmoid — the calculation is a few lines of calculus. The result, after simplification, is:

$$\frac{\partial}{\partial \theta_j} J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$
The gradient descent update is therefore, repeated until convergence and applied to all $\theta_j$ simultaneously:

$$\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$
Look at this carefully. It is identical in form to the gradient descent update for linear regression: the same expression $\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} ( h_\theta(x^{(i)}) - y^{(i)} ) x_j^{(i)}$ appears in both.
Does this mean linear regression and logistic regression are the same algorithm? No. The difference is hidden inside $h_\theta(x)$. For linear regression, $h_\theta(x) = \theta^T x$ — a linear function. For logistic regression, $h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}$ — the sigmoid of a linear function. The update rules look the same on paper, but they are computing different things because the hypothesis is different.
This is actually a nice result — it means the same gradient descent code can handle both algorithms if the hypothesis function is parameterised correctly.
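That observation can be sketched directly: a minimal gradient-descent loop, run on a hypothetical toy dataset, that takes the hypothesis function as a parameter and so serves both algorithms unchanged:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gradient_descent(X, y, hypothesis, alpha=0.1, iters=1000):
    """One loop serves both algorithms; only `hypothesis` differs."""
    m, n = len(X), len(X[0])
    theta = [0.0] * n
    for _ in range(iters):
        preds = [hypothesis(theta, xi) for xi in X]
        # Simultaneous update of every theta_j
        grads = [sum((preds[i] - y[i]) * X[i][j] for i in range(m)) / m
                 for j in range(n)]
        theta = [t - alpha * g for t, g in zip(theta, grads)]
    return theta

def linear(theta, x):                       # linear regression hypothesis
    return sum(t * f for t, f in zip(theta, x))

def logistic(theta, x):                     # logistic regression hypothesis
    return sigmoid(linear(theta, x))

# Hypothetical toy data: x = [1, x1] (bias feature first), y in {0, 1}
X = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]]
y = [0, 0, 1, 1]
theta = gradient_descent(X, y, logistic)
print(logistic(theta, [1.0, 4.0]))   # well above 0.5: predicted positive
```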
[!NOTE] Everything we said about gradient descent for linear regression applies here too. Update all parameters simultaneously. Monitor $J(\theta)$ over iterations — it should decrease monotonically. Apply feature scaling if features are on different scales — it speeds up convergence for logistic regression exactly as it does for linear regression. Choose $\alpha$ using the same factor-of-3 search strategy.
So far we have only handled binary classification — two classes, $y \in \{0, 1\}$. Many real problems have more than two classes: weather forecasting with four outcomes (sunny, rainy, snowy, windy), email foldering with five categories (home, work, family, hobby, spam), medical diagnosis with several possible conditions.
The approach for multi-class classification is called one-vs-all (also written one-vs-rest). The idea is simple and elegant: reduce the multi-class problem to a set of binary problems, one for each class.
Suppose you have $K$ classes. For each class $k = 1, \dots, K$: train a binary logistic regression classifier $h_\theta^{(k)}(x)$ on a relabelled dataset in which class $k$ is the positive class ($y = 1$) and all other classes together form the negative class ($y = 0$).
After training, you have $K$ separate classifiers, each specialised to detect one class versus everything else.
Making predictions on a new example $x$: run all $K$ classifiers and pick the class whose classifier outputs the highest probability:

$$\hat{y} = \arg\max_{k} \; h_\theta^{(k)}(x)$$
The class whose classifier is most confident that the example belongs to it wins.
The geometric picture: each binary classifier learns its own decision boundary, separating one class from the rest of the space. Together the boundaries partition the feature space into $K$ regions, one per class. A new point falls into the region where the corresponding classifier fires most strongly.
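A minimal sketch of one-vs-all prediction, assuming three hypothetical classifiers whose parameters are hand-set for illustration (in practice each would be trained on relabelled data):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def h(theta, x):
    return sigmoid(sum(t * f for t, f in zip(theta, x)))

def predict_one_vs_all(classifiers, x):
    """Run every classifier on x; the most confident one wins."""
    return max(classifiers, key=lambda k: h(classifiers[k], x))

# Hypothetical classifiers over features [1, x1], parameters hand-set
classifiers = {
    "low":  [ 2.0, -1.0],   # confident for small x1
    "mid":  [-1.0,  0.5],   # confident for middling x1
    "high": [-8.0,  2.0],   # confident for large x1
}
print(predict_one_vs_all(classifiers, [1.0, 0.5]))   # -> low
print(predict_one_vs_all(classifiers, [1.0, 9.0]))   # -> high
```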
[!NOTE] One-vs-all is a meta-algorithm — a way to extend any binary classifier to multiple classes. It works with logistic regression but also with SVMs and other models. Its main limitation is that the classifiers are trained independently: classifier 1 does not know what classifiers 2 and 3 are learning. More sophisticated multi-class approaches train jointly, but one-vs-all is the right starting point and performs well in practice.
Logistic regression is the foundational classification algorithm, and probably the most widely used single ML model in practice.
The key ideas in order:
Linear regression fails for classification because its output is unbounded and sensitive to outliers. We need a hypothesis that outputs values in $[0, 1]$.
The sigmoid function achieves this. The logistic regression hypothesis is $h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}$, interpreted as the probability that $y = 1$.
The decision boundary is the set of points where $h_\theta(x) = 0.5$, or equivalently $\theta^T x = 0$. It separates the predicted positive and negative regions. With polynomial features it can be nonlinear.
The squared error cost function is non-convex when combined with the sigmoid — gradient descent would not reliably find the global minimum. The correct cost function is binary cross-entropy (log loss), which is convex and has a maximum likelihood justification.
The gradient descent update for logistic regression has the same algebraic form as for linear regression — only the hypothesis differs.
Multi-class classification with $K$ classes is handled by one-vs-all: train $K$ binary classifiers, one per class, and at prediction time pick the class with highest confidence.
Logistic regression is not the final word in classification — neural networks, SVMs, and tree-based methods all extend and improve upon it in different ways. But it is the starting point every practitioner needs to understand deeply, because the ideas here recur constantly: sigmoid outputs, cross-entropy loss, and probabilistic interpretation are all central to deep learning as well.