Softmax Regression (SR)

Softmax regression (SR), also called multinomial logistic regression, is a generalization of logistic regression to the case where we want to handle multiple classes. As in the blog post about LR, this post details the modeling approach, loss function, and forward and backward propagation of SR. At the end, I implement SR in Python with NumPy and demonstrate it on the iris and MNIST data sets. You can find all the code here.

Softmax function

The softmax function is defined by the following formula:

$$\mathrm{softmax}(z)_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}, \quad j = 1, \dots, K$$

where $K$ is the number of classes and $z \in \mathbb{R}^K$.

Unfortunately, the original softmax definition has a numerical stability problem in actual use. For a large positive $z_j$, the value of $e^{z_j}$ may be quite large and cause numerical overflow. Similarly, for a large negative $z_j$, the value of $e^{z_j}$ may be very close to zero, resulting in numerical underflow. Therefore, in practice we use the following equivalent formula:

$$\mathrm{softmax}(z)_j = \frac{e^{z_j + D}}{\sum_{k=1}^{K} e^{z_k + D}}$$

where $D = -\max(z_1, z_2, \dots, z_K)$.
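To see why the shift matters, here is a minimal sketch (the function names naive_softmax and stable_softmax are mine, not from the original post) comparing the naive formula with the shifted one:

import numpy as np

def naive_softmax(z):
    # Direct translation of the original definition; overflows for large inputs.
    e = np.exp(z)
    return e / np.sum(e)

def stable_softmax(z):
    # Shifting by the maximum entry leaves the result mathematically unchanged.
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

z = np.array([1000.0, 1001.0, 1002.0])
print(naive_softmax(z))   # [nan nan nan], since exp(1000) overflows to inf
print(stable_softmax(z))  # [0.09003057 0.24472847 0.66524096]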
We need to use the derivative of softmax in backpropagation, so let's calculate it first. For writing convenience, let $a_j = \mathrm{softmax}(z)_j$, then

$$\frac{\partial a_j}{\partial z_i} = \begin{cases} a_j (1 - a_j) & \text{if } i = j \\ -a_j a_i & \text{if } i \neq j \end{cases}$$

which can be written compactly as $\frac{\partial a_j}{\partial z_i} = a_j \left( 1\{i = j\} - a_i \right)$.
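As a quick sanity check of this derivative (my own sketch, not part of the original post), we can compare the analytic Jacobian with central finite differences:

import numpy as np

def softmax_vec(z):
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

z = np.array([0.5, -1.2, 2.0, 0.3])
a = softmax_vec(z)

# Analytic Jacobian: J[i, j] = da_j / dz_i = a_j * (1{i == j} - a_i)
J_analytic = np.diag(a) - np.outer(a, a)

# Numerical Jacobian via central differences
eps = 1e-6
J_numeric = np.zeros((len(z), len(z)))
for i in range(len(z)):
    dz = np.zeros_like(z)
    dz[i] = eps
    J_numeric[i] = (softmax_vec(z + dz) - softmax_vec(z - dz)) / (2 * eps)

print(np.max(np.abs(J_analytic - J_numeric)))  # should be around 1e-10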

Modeling approach

In LR we assumed that the labels were binary: $y^{(i)} \in \{0, 1\}$. SR allows us to handle a $K$-class classification problem. In SR we often use a one-hot vector to represent the label. For example, in the MNIST digit recognition task, we use $(0,0,0,1,0,0,0,0,0,0)^T$ to represent the label of an image of the digit 3. In SR we use the softmax function to model the class probabilities. Suppose we have a training set $\{(x^{(1)}, y^{(1)}), \dots, (x^{(m)}, y^{(m)})\}$ of $m$ labeled examples, where the input features are $x^{(i)} \in \mathbb{R}^n$. We can use the $j$-th output of the softmax function as the probability that the current sample belongs to the $j$-th class. The formal expression is as follows:

$$P(y = j \mid x; W, b) = \mathrm{softmax}(W^T x + b)_j = \frac{e^{w_j^T x + b_j}}{\sum_{k=1}^{K} e^{w_k^T x + b_k}}$$

Where $x \in \mathbb{R}^n$, $W = [w_1, \dots, w_K] \in \mathbb{R}^{n \times K}$, $b = (b_1, \dots, b_K) \in \mathbb{R}^K$, and $j = 1, \dots, K$.
For the convenience of writing, let $\hat{y} = \mathrm{softmax}(W^T x + b)$, so that $\hat{y}_j = P(y = j \mid x; W, b)$.
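For concreteness, here is a small sketch of the one-hot encoding mentioned above (the helper name one_hot is mine):

import numpy as np

def one_hot(labels, n_classes):
    # Turn integer class labels into one-hot row vectors.
    Y = np.zeros((len(labels), n_classes))
    Y[np.arange(len(labels)), labels] = 1
    return Y

# An MNIST image of the digit 3 is represented by a 1 in position 3.
print(one_hot([3], 10))
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]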

Loss function

In SR our goal is to maximize the likelihood of the training set (equivalently, minimize the negative log-likelihood):

$$\max_{W, b} \prod_{i=1}^{m} P\left(y^{(i)} \mid x^{(i)}; W, b\right)$$

So the loss function of SR is:

$$J(W, b) = -\frac{1}{m} \sum_{i=1}^{m} \sum_{j=1}^{K} 1\{y^{(i)} = j\} \log \hat{y}^{(i)}_j$$

Note: $1\{\cdot\}$ is the indicator function, in which $1\{\text{a true statement}\} = 1$ and $1\{\text{a false statement}\} = 0$.
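Because the labels are one-hot, the indicator picks out exactly one term per example, and the loss is the cross-entropy between the labels and the softmax outputs. A minimal sketch (variable names are mine; the small eps guards against log(0)):

import numpy as np

def cross_entropy(Y, Y_hat, eps=1e-12):
    # Y: one-hot labels [m, K]; Y_hat: softmax outputs [m, K].
    # The indicator 1{y^(i) = j} is exactly the one-hot entry Y[i, j].
    m = Y.shape[0]
    return -np.sum(Y * np.log(Y_hat + eps)) / m

Y = np.array([[0., 1., 0.],
              [1., 0., 0.]])
Y_hat = np.array([[0.2, 0.7, 0.1],
                  [0.6, 0.3, 0.1]])
print(cross_entropy(Y, Y_hat))  # (-log 0.7 - log 0.6) / 2 ≈ 0.4338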

Gradient

The forward propagation of SR is similar to that of LR. For one example $x$:

$$z = W^T x + b, \quad \hat{y} = \mathrm{softmax}(z)$$

Vectorization, with the $m$ examples stacked as rows of $X \in \mathbb{R}^{m \times n}$ and softmax applied row-wise:

$$Z = XW + b, \quad \hat{Y} = \mathrm{softmax}(Z)$$

We can derive the gradient based on the chain rule. For one example:

$$\frac{\partial J}{\partial z} = \hat{y} - y, \quad \frac{\partial J}{\partial W} = x \left( \hat{y} - y \right)^T, \quad \frac{\partial J}{\partial b} = \hat{y} - y$$

Vectorization:

$$\frac{\partial J}{\partial W} = \frac{1}{m} X^T \left( \hat{Y} - Y \right), \quad \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}^{(i)} - y^{(i)} \right)$$
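To double-check the vectorized formulas, here is a small finite-difference test (my own sketch, with W of shape [n, K] and b of shape [1, K] as in the implementation below):

import numpy as np

np.random.seed(0)
m, n, K = 5, 4, 3
X = np.random.randn(m, n)
Y = np.eye(K)[np.random.randint(0, K, m)]   # random one-hot labels
W = 0.1 * np.random.randn(n, K)
b = np.zeros((1, K))

def loss(W, b):
    Z = X @ W + b
    A = np.exp(Z - Z.max(axis=1, keepdims=True))
    A = A / A.sum(axis=1, keepdims=True)
    return -np.sum(Y * np.log(A)) / m

# Analytic gradients from the formulas above
Z = X @ W + b
A = np.exp(Z - Z.max(axis=1, keepdims=True))
A = A / A.sum(axis=1, keepdims=True)
dW = X.T @ (A - Y) / m
db = np.sum(A - Y, axis=0, keepdims=True) / m

# Spot-check one entry of dW against a central difference
eps = 1e-6
W_plus, W_minus = W.copy(), W.copy()
W_plus[0, 0] += eps
W_minus[0, 0] -= eps
numeric = (loss(W_plus, b) - loss(W_minus, b)) / (2 * eps)
print(dW[0, 0], numeric)   # the two values should agree to ~1e-9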

Implementation

Here we use Python with NumPy to implement the forward and backward propagation of SR.

import numpy as np


def softmax(x):
    """
    Row-wise softmax for a matrix of scores.
    Args:
        x: [n_examples, n_classes]
    Returns: values after softmax, same shape as x.
    """
    # Shift each row by its maximum to avoid numerical overflow in exp.
    b = x - np.max(x, axis=1, keepdims=True)
    expb = np.exp(b)
    softmax = expb / np.sum(expb, axis=1, keepdims=True)
    return softmax
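A quick usage check of the function above (the input values are my own example):

x = np.array([[1.0, 2.0, 3.0],
              [0.0, 0.0, 0.0]])
p = softmax(x)
print(p)
# [[0.09003057 0.24472847 0.66524096]
#  [0.33333333 0.33333333 0.33333333]]
print(p.sum(axis=1))  # every row sums to 1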
class SoftmaxRegression:
    def __init__(self, max_iter=200, learning_rate=0.01):
        self.max_iter = max_iter
        self.learning_rate = learning_rate

    def fit(self, X, Y):
        """
        Train the model with batch gradient descent.
        Args:
            X: [n_samples, n_features]
            Y: [n_samples, n_classes], one-hot labels
        """
        m, n = X.shape
        _, K = Y.shape
        self.w_ = np.zeros([n, K])
        self.b_ = np.zeros([1, K])
        self.cost_ = []

        for i in range(self.max_iter):
            # Forward propagation: Y_hat = softmax(XW + b)
            Y_hat = self.predict(X)

            # Cross-entropy loss
            cost = -np.sum(Y * np.log(Y_hat)) / m

            if i != 0 and i % 10 == 0:
                print("Step: " + str(i) + ", Cost: " + str(cost))

            self.cost_.append(cost)

            # Backward propagation: dW = X^T (Y_hat - Y) / m, db = sum(Y_hat - Y) / m
            self.w_ -= self.learning_rate * np.dot(X.T, Y_hat - Y) / m
            self.b_ -= self.learning_rate * np.sum(Y_hat - Y, axis=0) / m

    def predict(self, X):
        """
        Predict class probabilities for the given examples.
        Args:
            X: [n_samples, n_features]
        Returns: [n_samples, n_classes] class probabilities.
        """
        z = np.dot(X, self.w_) + self.b_
        return softmax(z)

    def score(self, X, Y):
        # Accuracy: fraction of examples whose arg-max prediction matches the label.
        Y_hat = self.predict(X)
        Y_hat = np.argmax(Y_hat, axis=1)
        Y = np.argmax(Y, axis=1)
        true_num = np.sum(Y_hat == Y)
        return true_num / len(X)

Example

In order to verify the correctness of the implementation, I experimented on the iris dataset and the MNIST dataset. The parameters and results of the experiments are as follows:

                 iris    MNIST
learning rate    0.1     0.01
max iterations   100     10000
test accuracy    100%    90.98%
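As a rough sketch of how the iris run might be reproduced (this is my own reconstruction, not the linked experiment code; it assumes scikit-learn for loading and splitting the data, and the 80/20 split is my choice):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
X, labels = iris.data, iris.target
Y = np.eye(3)[labels]                      # one-hot encode the 3 classes

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0)

# Hyperparameters from the table above
clf = SoftmaxRegression(max_iter=100, learning_rate=0.1)
clf.fit(X_train, Y_train)
print("test accuracy:", clf.score(X_test, Y_test))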

You can find all the experimental code here and reproduce the experimental results.

