```python
import numpy as np

N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j

# initialize parameters randomly
h = 100 # size of hidden layer
W = 0.01 * np.random.randn(D,h)
b = np.zeros((1,h))
W2 = 0.01 * np.random.randn(h,K)
b2 = np.zeros((1,K))

# hyperparameters (values assumed here; the original snippet uses them without defining them)
step_size = 1e-0
reg = 1e-3 # regularization strength

# gradient descent loop
num_examples = X.shape[0]
for i in range(10000):

  # evaluate class scores, [N x K]
  hidden_layer = np.maximum(0, np.dot(X, W) + b) # note, ReLU activation
  scores = np.dot(hidden_layer, W2) + b2

  # compute the class probabilities
  exp_scores = np.exp(scores)
  probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]

  # compute the loss: average cross-entropy loss and regularization
  correct_logprobs = -np.log(probs[range(num_examples),y])
  data_loss = np.sum(correct_logprobs)/num_examples
  reg_loss = 0.5*reg*np.sum(W*W) + 0.5*reg*np.sum(W2*W2)
  loss = data_loss + reg_loss
  if i % 1000 == 0:
    print("iteration %d: loss %f" % (i, loss))

  # compute the gradient on scores
  dscores = probs
  dscores[range(num_examples),y] -= 1
  dscores /= num_examples

  # backpropagate the gradient to the parameters
  # first backprop into parameters W2 and b2
  dW2 = np.dot(hidden_layer.T, dscores)
  db2 = np.sum(dscores, axis=0, keepdims=True)
  # next backprop into hidden layer
  dhidden = np.dot(dscores, W2.T)
  # backprop the ReLU non-linearity
  dhidden[hidden_layer <= 0] = 0
  # finally into W,b
  dW = np.dot(X.T, dhidden)
  db = np.sum(dhidden, axis=0, keepdims=True)

  # add regularization gradient contribution
  dW2 += reg * W2
  dW += reg * W

  # perform a parameter update
  W += -step_size * dW
  b += -step_size * db
  W2 += -step_size * dW2
  b2 += -step_size * db2
```
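After the loop finishes, a quick sanity check one might add is to measure the training accuracy with the learned parameters (a minimal sketch reusing the variables defined above; it is not part of the listing itself):

```python
# forward pass over the full training set with the learned parameters
hidden_layer = np.maximum(0, np.dot(X, W) + b)
scores = np.dot(hidden_layer, W2) + b2
predicted_class = np.argmax(scores, axis=1)  # index of the highest score per row
print('training accuracy: %.2f' % np.mean(predicted_class == y))
```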
Step 1: Set up the batch input data and output labels
- N: batch size (number of points per class)
- D: data dimensionality
- K: number of classes
- X: input data matrix
- y: output class labels
```python
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j
```
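To see the three intertwined spirals this generates, a scatter plot helps (a sketch assuming matplotlib is available; the plotting code is illustrative only and not part of the training code):

```python
import matplotlib.pyplot as plt

# color each 2-D point by its class label
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim([-1, 1])
plt.ylim([-1, 1])
plt.show()
```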
Step 2: Initialize the weight parameters
- h: number of neurons in the hidden layer
- W: hidden-layer weight matrix
- b: hidden-layer bias vector
- W2: output-layer weight matrix
- b2: output-layer bias vector
```python
# initialize parameters randomly
h = 100 # size of hidden layer
W = 0.01 * np.random.randn(D,h)
b = np.zeros((1,h))
W2 = 0.01 * np.random.randn(h,K)
b2 = np.zeros((1,K))
```
Step 3: In each iteration, compute the class scores (forward pass)

```python
# evaluate class scores, [N x K]
hidden_layer = np.maximum(0, np.dot(X, W) + b) # note, ReLU activation
scores = np.dot(hidden_layer, W2) + b2
```
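In matrix form, these two lines compute the hidden activations and the class scores:

$$H = \max(0,\; XW + b), \qquad S = H W_2 + b_2$$

where the max is applied element-wise (the ReLU), $X$ is the $N \times D$ data matrix, and $S$ is the $N \times K$ matrix of class scores.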
Step 4: In each iteration, compute the loss
np.sum(exp_scores, axis=1, keepdims=True) is the row-wise sum vector of the exponentiated scores.
probs[range(num_examples), y] picks out the probability assigned to the correct class in each row.
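Concretely, the code below implements the softmax function row by row: for example $i$ with score vector $s_i$, the probability assigned to class $k$ is

$$p_{i,k} = \frac{e^{s_{i,k}}}{\sum_{j=1}^{K} e^{s_{i,j}}}$$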
```python
# compute the class probabilities
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]
```
```python
# compute the loss: average cross-entropy loss and regularization
correct_logprobs = -np.log(probs[range(num_examples),y])
data_loss = np.sum(correct_logprobs)/num_examples
reg_loss = 0.5*reg*np.sum(W*W) + 0.5*reg*np.sum(W2*W2)
loss = data_loss + reg_loss
if i % 1000 == 0:
  print("iteration %d: loss %f" % (i, loss))
```
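The loss assembled above is the average cross-entropy over the batch plus an L2 penalty on both weight matrices (reg plays the role of $\lambda$):

$$L = \frac{1}{N}\sum_{i=1}^{N} -\log p_{i,y_i} \;+\; \frac{\lambda}{2}\lVert W\rVert^2 \;+\; \frac{\lambda}{2}\lVert W_2\rVert^2$$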
Step 5: In each iteration, compute the gradient on the scores

```python
# compute the gradient on scores
dscores = probs
dscores[range(num_examples),y] -= 1
dscores /= num_examples
```
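These three lines are the softmax/cross-entropy gradient: for each example, the derivative of its loss with respect to the score of class $k$ is the predicted probability minus an indicator of the correct class; dividing by num_examples accounts for the averaging in the data loss.

$$\frac{\partial L_i}{\partial s_{i,k}} = p_{i,k} - \mathbb{1}[k = y_i]$$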
Step 6: In each iteration, backpropagate to compute the gradients of the output-layer weight matrix, the output-layer bias vector, and the hidden-layer output
- Gradient of the output-layer weight matrix: dW2 = hidden_layer.T · dscores
- Gradient of the output-layer bias vector: db2 = the column-wise sum of dscores (mind the dimensions: the bias gradient is summed over the batch)
- Gradient of the hidden-layer output: dhidden = dscores · W2.T
```python
# backpropagate the gradient to the parameters
# first backprop into parameters W2 and b2
dW2 = np.dot(hidden_layer.T, dscores)
db2 = np.sum(dscores, axis=0, keepdims=True)
# next backprop into hidden layer
dhidden = np.dot(dscores, W2.T)
```
Step 7: In each iteration, backpropagate through the ReLU to compute the gradient of the hidden layer's input
```python
# backprop the ReLU non-linearity
dhidden[hidden_layer <= 0] = 0
```
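This masking is exactly the chain rule through the ReLU, whose derivative is an indicator on the positive part of its input:

$$\frac{\partial}{\partial z}\max(0, z) = \mathbb{1}[z > 0]$$

so the gradient is passed through unchanged wherever the hidden activation was positive and set to zero elsewhere.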
Step 8: In each iteration, backpropagate to compute the gradients of the hidden-layer weight matrix and bias vector
- Gradient of the hidden-layer weight matrix: dW = X.T · dhidden
- Gradient of the hidden-layer bias vector: db = the column-wise sum of dhidden (again, mind the dimensions: the bias gradient is summed over the batch)
```python
# finally into W,b
dW = np.dot(X.T, dhidden)
db = np.sum(dhidden, axis=0, keepdims=True)
```
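The step-by-step walkthrough ends here, but the complete listing at the top has two remaining pieces inside the loop: the regularization gradients and the parameter update. They are repeated here for completeness:

```python
# add regularization gradient contribution
dW2 += reg * W2
dW += reg * W

# perform a parameter update (vanilla gradient descent)
W += -step_size * dW
b += -step_size * db
W2 += -step_size * dW2
b2 += -step_size * db2
```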