Machine Learning: Notes on Andrew Ng's Course (NetEase Cloud Classroom)


Machine Learning: A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.

Supervised learning, Unsupervised learning; Regression, Classification.

Linear regression

Hypothesis function: $h_\theta(x) = \theta_0 + \theta_1 x$

Cost function (squared error function): $J(\theta_0,\theta_1) = \frac{1}{2m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2$

Goal: $\min_{\theta_0,\theta_1} J(\theta_0,\theta_1)$

Gradient descent

Gradient descent is used to minimize the cost function. The linear regression model above is used as an example to describe the procedure.

Algorithm

repeat until convergence {
$\quad \theta_j := \theta_j - \alpha\frac{\partial}{\partial \theta_j}J(\theta_0, \theta_1)$ (for $j = 0$ and $j = 1$)
}

Here $\alpha$ is the learning rate: a larger $\alpha$ means larger gradient-descent steps, and a smaller $\alpha$ means smaller steps.

If $\alpha$ is too small, gradient descent converges slowly; if $\alpha$ is too large, it may overshoot the minimum, fail to converge, or even diverge.

Note: $\theta_0$ and $\theta_1$ must be updated simultaneously, as follows:

$temp0 := \theta_0 - \alpha\frac{\partial}{\partial \theta_0}J(\theta_0, \theta_1)$
$temp1 := \theta_1 - \alpha\frac{\partial}{\partial \theta_1}J(\theta_0, \theta_1)$
$\theta_0 := temp0$
$\theta_1 := temp1$

Gradient descent for linear regression

$j = 0: \ \frac{\partial}{\partial \theta_0}J(\theta_0, \theta_1) = \frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})$
$j = 1: \ \frac{\partial}{\partial \theta_1}J(\theta_0, \theta_1) = \frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\,x^{(i)}$

This variant is also called batch gradient descent; "batch" means that each step of gradient descent uses all the training examples.

Drawbacks of gradient descent: $\alpha$ must be chosen, and many iterations are needed.
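As a rough illustration of the update rule above, here is a minimal NumPy sketch of batch gradient descent for univariate linear regression on a tiny synthetic data set; the function and parameter names (`batch_gradient_descent`, `alpha`, `num_iters`) are illustrative, not from the course.

```python
import numpy as np

def batch_gradient_descent(x, y, alpha=0.01, num_iters=5000):
    """Fit h(x) = theta0 + theta1 * x by minimizing the squared-error cost."""
    theta0, theta1 = 0.0, 0.0
    for _ in range(num_iters):
        h = theta0 + theta1 * x          # predictions for all m examples
        # Partial derivatives of J(theta0, theta1), averaged over the whole batch
        grad0 = (h - y).mean()
        grad1 = ((h - y) * x).mean()
        # Simultaneous update of both parameters
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.1, 6.0, 8.2])       # roughly y = 2x
print(batch_gradient_descent(x, y))
```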

Linear regression with multiple variables

Hypothesis function: $h_\theta(x) = \theta^Tx = \theta_0 x_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_n x_n$

Cost function: $J(\theta_0,\theta_1,\dots,\theta_n) = \frac{1}{2m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2$

Goal: $\min_{\theta_0,\theta_1,\dots,\theta_n} J(\theta_0,\theta_1,\dots,\theta_n)$

Gradient descent

repeat {
$\quad \theta_j := \theta_j - \alpha\frac{\partial}{\partial \theta_j}J(\theta_0, \theta_1, \dots, \theta_n)$ (simultaneously update for every $j = 0, \dots, n$)
}
where $\frac{\partial}{\partial \theta_j}J(\theta_0, \theta_1, \dots, \theta_n) = \frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\,x_j^{(i)}$

Feature scaling: when the ranges of different features differ greatly, scale them so that all features take values in roughly the same range; this speeds up gradient descent. Mean normalization is commonly used for feature scaling.
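A small sketch of mean normalization, assuming the feature matrix `X` holds one training example per row; dividing by the range rather than the standard deviation is one common choice, and all names here are illustrative.

```python
import numpy as np

def mean_normalize(X):
    """Scale each feature to roughly [-1, 1]: subtract its mean, divide by its range."""
    mu = X.mean(axis=0)
    span = X.max(axis=0) - X.min(axis=0)   # the standard deviation is another common choice
    return (X - mu) / span, mu, span

X = np.array([[2104.0, 3.0], [1600.0, 3.0], [2400.0, 4.0]])
X_norm, mu, span = mean_normalize(X)
print(X_norm)
```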

Normal equation: solving directly gives $\theta = (X^TX)^{-1}X^Ty$. Note: because $(X^TX)^{-1}$ has to be computed, this method runs very slowly when the number of features is large.
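A sketch of the normal-equation solution in NumPy; using the pseudo-inverse instead of a plain inverse is a common safeguard when $X^TX$ is singular, which is an implementation choice rather than something stated in the notes.

```python
import numpy as np

def normal_equation(X, y):
    """Closed-form least squares: theta = (X^T X)^-1 X^T y."""
    return np.linalg.pinv(X.T @ X) @ X.T @ y

# Design matrix with a leading column of ones for the intercept term x0
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])
print(normal_equation(X, y))   # approximately [0, 2]
```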

Logistic regression

Logistic regression is a classification algorithm.

Hypothesis: $h_\theta(x) = g(\theta^Tx)$, where $g(z) = \frac{1}{1 + e^{-z}}$.

Predict "$y = 1$" if $h_\theta(x) \geqslant 0.5$; predict "$y = 0$" if $h_\theta(x) < 0.5$.

Cost function

$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\mathrm{Cost}(h_\theta(x^{(i)}), y^{(i)})$

$\mathrm{Cost}(h_\theta(x), y) = \begin{cases} -\log(h_\theta(x)) & \text{if } y = 1 \\ -\log(1 - h_\theta(x)) & \text{if } y = 0 \end{cases}$

Note: the squared-error cost used for linear regression is not used here because $g(z) = \frac{1}{1 + e^{-z}}$ is a nonlinear function; with a squared-error cost the cost function would be non-convex, and gradient descent could easily get stuck in a local optimum.

Simplifying the Cost expression above gives $\mathrm{Cost}(h_\theta(x), y) = -y\log(h_\theta(x)) - (1 - y)\log(1 - h_\theta(x))$, so that $J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}y^{(i)}\log(h_\theta(x^{(i)})) + (1 - y^{(i)})\log(1 - h_\theta(x^{(i)}))\right]$.

Gradient descent

We want $\min_\theta J(\theta)$:

repeat {
$\quad \theta_j := \theta_j - \alpha\frac{\partial}{\partial \theta_j}J(\theta)$ (simultaneously update all $\theta_j$)
}
where $\frac{\partial}{\partial \theta_j}J(\theta) = \frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\,x_j^{(i)}$
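A vectorized NumPy sketch of this gradient-descent loop for logistic regression, assuming the design matrix `X` already contains a leading column of ones; `sigmoid`, `alpha`, and the iteration count are illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_gradient_descent(X, y, alpha=0.1, num_iters=5000):
    """Minimize the logistic-regression cost J(theta) with batch gradient descent."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_iters):
        h = sigmoid(X @ theta)          # h_theta(x) for all m examples
        grad = X.T @ (h - y) / m        # (1/m) * sum of (h - y) * x_j
        theta -= alpha * grad           # simultaneous update of all theta_j
    return theta

X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 3.0], [1.0, 4.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = logistic_gradient_descent(X, y)
print(sigmoid(X @ theta) >= 0.5)        # predicted labels
```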

Regularization

Regularization is used to reduce the problem of overfitting. Two ways to address overfitting: reduce the number of features, or use regularization.

Regularized linear regression: $J(\theta) = \frac{1}{2m}\left[\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2 + \lambda\sum_{j=1}^{n}\theta_j^2\right]$

Regularized logistic regression: $J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}y^{(i)}\log(h_\theta(x^{(i)})) + (1 - y^{(i)})\log(1 - h_\theta(x^{(i)}))\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$

Neural Network

Neuron: $h_\theta(x) = g(\theta^Tx)$, $g(z) = \frac{1}{1 + e^{-z}}$

The basic neural network model

Forward propagation

$a^{(2)}_1 = g(z^{(2)}_1), \quad z^{(2)}_1 = \Theta^{(1)}_{10}x_0 + \Theta^{(1)}_{11}x_1 + \Theta^{(1)}_{12}x_2 + \Theta^{(1)}_{13}x_3$
$a^{(2)}_2 = g(z^{(2)}_2), \quad z^{(2)}_2 = \Theta^{(1)}_{20}x_0 + \Theta^{(1)}_{21}x_1 + \Theta^{(1)}_{22}x_2 + \Theta^{(1)}_{23}x_3$
$a^{(2)}_3 = g(z^{(2)}_3), \quad z^{(2)}_3 = \Theta^{(1)}_{30}x_0 + \Theta^{(1)}_{31}x_1 + \Theta^{(1)}_{32}x_2 + \Theta^{(1)}_{33}x_3$
$h_\Theta(x) = a^{(3)}_1 = g(\Theta^{(2)}_{10}a^{(2)}_0 + \Theta^{(2)}_{11}a^{(2)}_1 + \Theta^{(2)}_{12}a^{(2)}_2 + \Theta^{(2)}_{13}a^{(2)}_3)$

With $x = [x_0, x_1, x_2, x_3]^T$ and $z^{(2)} = [z^{(2)}_1, z^{(2)}_2, z^{(2)}_3]^T$, this vectorizes to $z^{(2)} = \Theta^{(1)}x$, $a^{(2)} = g(z^{(2)})$.

To understand the neural network model more fully, a slightly more complex model is used as an example below.

Forward propagation (vectorized)

$a^{(1)} = x$
$z^{(2)} = \Theta^{(1)}a^{(1)}$
$a^{(2)} = g(z^{(2)})$ (add $a_0^{(2)}$)
$z^{(3)} = \Theta^{(2)}a^{(2)}$
$a^{(3)} = g(z^{(3)})$ (add $a_0^{(3)}$)
$z^{(4)} = \Theta^{(3)}a^{(3)}$
$a^{(4)} = h_\Theta(x) = g(z^{(4)})$
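A compact NumPy sketch of this vectorized forward pass through a network with two hidden layers, assuming randomly initialized weight matrices; the layer sizes and names are purely illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagation(x, thetas):
    """Propagate one input vector x through the layers defined by the Theta matrices.
    Each Theta^(l) maps layer l (with its bias unit) to layer l+1."""
    a = x
    for theta in thetas:
        a = np.concatenate(([1.0], a))   # add the bias unit a_0 = 1
        z = theta @ a
        a = sigmoid(z)
    return a                             # h_Theta(x)

rng = np.random.default_rng(0)
# Layer sizes 3 -> 5 -> 5 -> 4 (input, two hidden layers, output)
thetas = [rng.normal(size=(5, 4)), rng.normal(size=(5, 6)), rng.normal(size=(4, 6))]
print(forward_propagation(np.array([0.2, -1.0, 0.5]), thetas))
```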

Cost function

Assume a multi-class classification problem with $K$ classes.

$h_\Theta(x) \in \mathbb{R}^K, \quad (h_\Theta(x))_i = i^{th}$ output

$J(\Theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{k=1}^{K}y_k^{(i)}\log(h_\Theta(x^{(i)}))_k + (1 - y_k^{(i)})\log(1 - (h_\Theta(x^{(i)}))_k)\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}(\Theta_{ji}^{(l)})^2$

where $L$ = total number of layers in the network and $s_l$ = number of units (not counting the bias unit) in layer $l$.

Backpropagation (BP)

Note: backpropagation is used mainly to compute the partial derivatives.

First, compute some intermediate quantities.

Intuition: $\delta_j^{(l)}$ = "error" of node $j$ in layer $l$.

For each output unit (layer L = 4)

$\delta^{(4)} = a^{(4)} - y$

$\delta^{(3)} = (\Theta^{(3)})^T\delta^{(4)} .* g'(z^{(3)})$

$\delta^{(2)} = (\Theta^{(2)})^T\delta^{(3)} .* g'(z^{(2)})$

(It can be shown that $\frac{\partial}{\partial \Theta_{ij}^{(l)}}J(\Theta) = a_j^{(l)}\delta_i^{(l+1)}$ when regularization is ignored.)

The concrete steps of the backpropagation algorithm are as follows:

Training set $\{(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \dots, (x^{(m)}, y^{(m)})\}$

Set $\Delta_{ij}^{(l)} = 0$ (for all $l, i, j$)

For $i = 1$ to $m$:

  Set $a^{(1)} = x^{(i)}$

  Perform forward propagation to compute $a^{(l)}$ for $l = 2, 3, \dots, L$

  Using $y^{(i)}$, compute $\delta^{(L)} = a^{(L)} - y^{(i)}$

  Compute $\delta^{(L-1)}, \delta^{(L-2)}, \dots, \delta^{(2)}$

  $\Delta_{ij}^{(l)} := \Delta_{ij}^{(l)} + a_j^{(l)}\delta_i^{(l+1)}$ (vectorized: $\Delta^{(l)} := \Delta^{(l)} + \delta^{(l+1)}(a^{(l)})^T$)

$D_{ij}^{(l)} := \frac{1}{m}\Delta_{ij}^{(l)} + \lambda\Theta_{ij}^{(l)}$ if $j \ne 0$

$D_{ij}^{(l)} := \frac{1}{m}\Delta_{ij}^{(l)}$ if $j = 0$

Finally, $\frac{\partial}{\partial \Theta_{ij}^{(l)}}J(\Theta) = D_{ij}^{(l)}$.

With the partial derivatives in hand, gradient descent or another advanced optimization algorithm can then be used.

Gradient checking: use $\frac{d}{d\theta}J(\theta) \approx \frac{J(\theta + \epsilon) - J(\theta - \epsilon)}{2\epsilon}$ to approximate the derivatives numerically and check that the computed derivatives are correct, which in turn verifies that the forward propagation and backpropagation computations are correct.
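A small sketch of this two-sided numerical estimate, applied here to a toy quadratic cost whose analytic gradient is known exactly; in practice the same check is run against the gradients produced by backpropagation and then turned off. All names are illustrative.

```python
import numpy as np

def numerical_gradient(J, theta, eps=1e-4):
    """Approximate dJ/dtheta_i with the centered difference (J(theta+e) - J(theta-e)) / (2*eps)."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        grad[i] = (J(plus) - J(minus)) / (2 * eps)
    return grad

J = lambda t: np.sum(t ** 2)          # toy cost with known gradient 2 * theta
theta = np.array([1.0, -2.0, 0.5])
print(numerical_gradient(J, theta))   # close to [2, -4, 1]
print(2 * theta)                      # analytic gradient, for comparison
```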

Finally, a summary of the steps for applying a neural network:

Randomly initialize the weights (they must not all be initialized to 0)

Implement forward propagation to get $h_\Theta(x^{(i)})$ for any $x^{(i)}$

Implement code to compute the cost function $J(\Theta)$

Implement backprop to compute the partial derivatives $\frac{\partial}{\partial \Theta_{jk}^{(l)}}J(\Theta)$

Use gradient checking to compare $\frac{\partial}{\partial \Theta_{jk}^{(l)}}J(\Theta)$ computed using backpropagation with the numerical estimate of the gradient of $J(\Theta)$.

Then disable gradient checking code

Use gradient descent or an advanced optimization method with backpropagation to try to minimize $J(\Theta)$ as a function of the parameters $\Theta$

Advice for applying machine learning

Split all the data into a training set, a cross-validation set, and a test set in a 6:2:2 ratio. Use the training set to train models, the cross-validation set to pick the model with the smallest cross-validation error, and the test set to estimate the generalization error of the chosen model.
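A quick sketch of the 60/20/20 split in NumPy, assuming the examples are shuffled first so that the three sets are drawn at random; the function name and seed are illustrative.

```python
import numpy as np

def split_dataset(X, y, seed=0):
    """Shuffle the data and split it 60% train / 20% cross-validation / 20% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_train = int(0.6 * len(y))
    n_cv = int(0.2 * len(y))
    train, cv, test = np.split(idx, [n_train, n_train + n_cv])
    return (X[train], y[train]), (X[cv], y[cv]), (X[test], y[test])

X, y = np.arange(20).reshape(10, 2), np.arange(10)
train, cv, test = split_dataset(X, y)
print(len(train[1]), len(cv[1]), len(test[1]))   # 6 2 2
```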

High bias -> underfitting: both the training error and the cross-validation error are large;

High variance -> overfitting: the training error is small but the cross-validation error is large.

When the regularization parameter $\lambda$ is small, the model tends to overfit; when $\lambda$ is large, it tends to underfit.

Learning curves: plot error against the number of training examples. Normally the training error increases as the training set grows, while the cross-validation error decreases.

High bias (underfitting): as more training examples are added, the cross-validation error first decreases and then flattens out, while the training error first increases and then flattens out, ending up very close to the cross-validation error (the model has too few parameters to fit the data, so both errors are large). Adding more training examples therefore does not help much when the model is underfitting.

High variance (overfitting): as more training examples are added, the training error increases slightly but stays small, while the cross-validation error decreases slightly but remains large (the telltale sign is a large gap between the training and cross-validation errors, with the cross-validation error still having plenty of room to fall). Adding more training examples therefore does help when the model is overfitting.

Improving a learning algorithm

High-variance problems: get more training examples; use fewer features; increase the regularization parameter $\lambda$.

High-bias problems: use more features; add polynomial features; decrease the regularization parameter $\lambda$.

Skewed classes: the numbers of positive and negative examples differ greatly. When the classes are skewed, plain classification error is no longer a good measure of model quality, so precision and recall are introduced.

Precision: true positives (TP) / (true positives (TP) + false positives (FP));

Recall: true positives (TP) / (true positives (TP) + false negatives (FN)).

$F_1$ score: $2\frac{PR}{P + R}$; the $F_1$ score is used to measure how good the model is.
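A tiny sketch that computes precision, recall, and the $F_1$ score from predicted and true binary labels; the helper name is illustrative, and cases where a denominator would be zero are not handled.

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
print(precision_recall_f1(y_true, y_pred))
```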

SVM

The SVM cost function is derived by modifying the logistic-regression cost function and is similar to it.

Optimization objective: $\min_\theta C\sum_{i=1}^{m}\big[y^{(i)}\mathrm{cost}_1(\theta^Tx^{(i)}) + (1 - y^{(i)})\mathrm{cost}_0(\theta^Tx^{(i)})\big] + \frac{1}{2}\sum_{j=1}^{n}\theta_j^2$

The graph of $\mathrm{cost}_1(z)$ is shown in the left figure and the graph of $\mathrm{cost}_0(z)$ in the right figure.

The ideal assumption: if $y = 1$, we want $\theta^Tx \ge 1$ (not just $\ge 0$); if $y = 0$, we want $\theta^Tx \le -1$ (not just $< 0$).

If the coefficient $C$ in the optimization objective is very large, the first term of the objective has to be driven to 0, which requires the "ideal assumption" above to hold; the objective then becomes

$\min_\theta \ \frac{1}{2}\sum_{j=1}^{n}\theta_j^2 = \frac{1}{2}\lVert\theta\rVert^2$

s.t. $\theta^Tx^{(i)} \ge 1$ if $y^{(i)} = 1$, and $\theta^Tx^{(i)} \le -1$ if $y^{(i)} = 0$

Kernels: new feature variables are defined from landmark points and a kernel function (the Gaussian kernel is the usual choice), which makes it possible to train complex non-linear models.

Gaussian kernel: $k(x, l) = \exp\left(-\frac{\lVert x - l\rVert^2}{2\delta^2}\right)$
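A minimal sketch of the Gaussian kernel as a similarity function between an example `x` and a landmark `l`; the parameter name `delta2` mirrors the $\delta^2$ in the formula above and is purely illustrative.

```python
import numpy as np

def gaussian_kernel(x, l, delta2=1.0):
    """Similarity exp(-||x - l||^2 / (2 * delta^2)) between example x and landmark l."""
    return np.exp(-np.sum((x - l) ** 2) / (2 * delta2))

x = np.array([1.0, 2.0])
print(gaussian_kernel(x, np.array([1.0, 2.0])))   # 1.0: identical points are maximally similar
print(gaussian_kernel(x, np.array([4.0, 6.0])))   # close to 0 for a distant landmark
```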

A simple example:

Given $(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \dots, (x^{(m)}, y^{(m)})$,

choose $l^{(1)} = x^{(1)}, l^{(2)} = x^{(2)}, \dots, l^{(m)} = x^{(m)}$ (the landmarks).

New features: $f = [f_0, f_1, f_2, \dots, f_m]^T$

For a training example $(x^{(i)}, y^{(i)})$ (the similarity function is usually the Gaussian kernel):

  $f_1^{(i)} = \mathrm{similarity}(x^{(i)}, l^{(1)})$

  $f_2^{(i)} = \mathrm{similarity}(x^{(i)}, l^{(2)})$

  $\dots$

  $f_m^{(i)} = \mathrm{similarity}(x^{(i)}, l^{(m)})$

Hypothesis: given $x$, compute features $f \in \mathbb{R}^{m+1}$

  Predict "$y = 1$" if $\theta^Tf \ge 0$

Training: $\min_\theta C\sum_{i=1}^{m}\big[y^{(i)}\mathrm{cost}_1(\theta^Tf^{(i)}) + (1 - y^{(i)})\mathrm{cost}_0(\theta^Tf^{(i)})\big] + \frac{1}{2}\sum_{j=1}^{m}\theta_j^2$

Large C : Lower bias, higher variance; Small C : Higher bias, lower variance.

Large $\delta^2$: features $f_i$ vary more smoothly. Higher bias, lower variance;

Small $\delta^2$: features $f_i$ vary less smoothly. Lower bias, higher variance.

K-means

Input: $K$ (number of clusters); training set $\{x^{(1)}, x^{(2)}, \dots, x^{(m)}\}$, $x^{(i)} \in \mathbb{R}^n$

Randomly initialize $K$ cluster centroids $\mu_1, \mu_2, \dots, \mu_K \in \mathbb{R}^n$

Repeat {

  for $i = 1$ to $m$

    $c^{(i)}$ := index (from 1 to $K$) of the cluster centroid closest to $x^{(i)}$

  for $k = 1$ to $K$

    $\mu_k$ := average (mean) of the points assigned to cluster $k$

}
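A NumPy sketch of this loop, assuming the centroids are initialized by picking $K$ distinct training examples at random; empty clusters are not handled, and names such as `kmeans` and `max_iters` are illustrative.

```python
import numpy as np

def kmeans(X, K, max_iters=100, seed=0):
    """Alternate the cluster-assignment step and the centroid-update step."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=K, replace=False)]        # random initial centroids
    for _ in range(max_iters):
        # Assignment step: c[i] = index of the centroid closest to x^(i)
        dists = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)
        c = dists.argmin(axis=1)
        # Update step: mu_k = mean of the points assigned to cluster k
        mu = np.array([X[c == k].mean(axis=0) for k in range(K)])
    return c, mu

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)), rng.normal(5.0, 1.0, (20, 2))])
c, mu = kmeans(X, K=2)
print(mu)   # one centroid near (0, 0), the other near (5, 5)
```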

Optimization objective:

$J(c^{(1)}, \dots, c^{(m)}, \mu_1, \dots, \mu_K) = \frac{1}{m}\sum_{i=1}^{m}\lVert x^{(i)} - \mu_{c^{(i)}}\rVert^2$

$\min_{c^{(1)}, \dots, c^{(m)},\, \mu_1, \dots, \mu_K} J(c^{(1)}, \dots, c^{(m)}, \mu_1, \dots, \mu_K)$

where $c^{(i)}$ = index of the cluster ($1, 2, \dots, K$) to which example $x^{(i)}$ is currently assigned,

$\mu_k$ = cluster centroid $k$ ($\mu_k \in \mathbb{R}^n$),

$\mu_{c^{(i)}}$ = the centroid of the cluster to which example $x^{(i)}$ has been assigned.

To keep the algorithm from settling in a local optimum, run it with several different random initializations of the cluster centroids and keep the clustering with the smallest distortion.

Choosing the number of clusters: usually done by hand after inspecting the data; the "elbow method" can sometimes help, but it cannot be expected to give a good answer every time.

Dimensionality reduction

Dimensionality reduction can be used to compress data and to visualize it.

Principal component analysis (PCA): project the data onto a lower-dimensional surface.

Data preprocessing: apply mean normalization and feature scaling to the raw data. Compute the covariance matrix $Sigma = \frac{1}{m}\sum_{i=1}^{m}(x^{(i)})(x^{(i)})^T$ and its eigenvectors with $[U, S, V] = svd(Sigma)$ (svd is the singular value decomposition function). The $n \times n$ matrix $U$ returned by svd contains the directions; taking its first $k$ columns gives the $k$ direction vectors of the $k$-dimensional surface, i.e. $U_{reduce} = U(:, 1{:}k)$. Finally, the reduced data is $z = U_{reduce}' * x$.

Decompression / reconstruction of the data: $x_{approx} = U_{reduce} * z$
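A NumPy sketch of this PCA pipeline (covariance matrix, SVD, projection, reconstruction), assuming `X` holds one mean-normalized example per row; `U_reduce`, `Z`, and `X_approx` follow the notation above, but the code itself is only illustrative.

```python
import numpy as np

def pca_project(X, k):
    """Project mean-normalized data X (m x n) onto its top-k principal components."""
    m = X.shape[0]
    Sigma = X.T @ X / m                   # covariance matrix (n x n)
    U, S, Vt = np.linalg.svd(Sigma)
    U_reduce = U[:, :k]                   # first k columns of U
    Z = X @ U_reduce                      # reduced representation (m x k)
    X_approx = Z @ U_reduce.T             # reconstruction back in n dimensions
    return Z, X_approx, S

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X = X - X.mean(axis=0)                    # mean normalization
Z, X_approx, S = pca_project(X, k=2)
print(Z.shape, X_approx.shape)
```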

Choosing the number of principal components $k$ in PCA (the principle)

Average squared projection error: $\frac{1}{m}\sum_{i=1}^{m}\lVert x^{(i)} - x_{approx}^{(i)}\rVert^2$

Total variation in the data: $\frac{1}{m}\sum_{i=1}^{m}\lVert x^{(i)}\rVert^2$

Choose $k$ to be the smallest value so that $\frac{\frac{1}{m}\sum_{i=1}^{m}\lVert x^{(i)} - x_{approx}^{(i)}\rVert^2}{\frac{1}{m}\sum_{i=1}^{m}\lVert x^{(i)}\rVert^2} \le 0.01$ (1%)

“99% of variance is retained”

Choosing the number of principal components $k$ in PCA (in practice)

[U, S, V] = svd(Sigma)

Pick the smallest value of $k$ for which $1 - \frac{\sum_{i=1}^{k}S_{ii}}{\sum_{i=1}^{n}S_{ii}} \le 0.01$, i.e. $\frac{\sum_{i=1}^{k}S_{ii}}{\sum_{i=1}^{n}S_{ii}} \ge 0.99$

“99% of variance is retained”

Anomaly detection

Gaussian (normal) distribution

$p(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)$

Parameter estimation: $\mu = \frac{1}{m}\sum_{i=1}^{m}x^{(i)}$, $\sigma^2 = \frac{1}{m}\sum_{i=1}^{m}(x^{(i)} - \mu)^2$

Anomaly detection algorithm

Choose features $x_i$ that you think might be indicative of anomalous examples.

Fit parameters $\mu_1, \dots, \mu_n, \sigma_1^2, \dots, \sigma_n^2$:

$\mu_j = \frac{1}{m}\sum_{i=1}^{m}x_j^{(i)}, \quad \sigma_j^2 = \frac{1}{m}\sum_{i=1}^{m}(x_j^{(i)} - \mu_j)^2$

Given a new example $x$, compute $p(x)$:

$p(x) = \prod_{j=1}^{n}p(x_j; \mu_j, \sigma_j^2) = \prod_{j=1}^{n}\frac{1}{\sqrt{2\pi}\sigma_j}\exp\left(-\frac{(x_j - \mu_j)^2}{2\sigma_j^2}\right)$

Flag an anomaly if $p(x) < \epsilon$
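A compact NumPy sketch of this density-estimation algorithm: fit a per-feature Gaussian on training data assumed to be mostly non-anomalous, then flag a new example whose $p(x)$ falls below a threshold `epsilon`; the threshold value and all names are illustrative.

```python
import numpy as np

def fit_gaussians(X):
    """Estimate mu_j and sigma_j^2 for every feature of the training set X (m x n)."""
    return X.mean(axis=0), X.var(axis=0)

def p(x, mu, sigma2):
    """p(x) = product over features of the univariate Gaussian densities."""
    densities = np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    return densities.prod()

rng = np.random.default_rng(0)
X_train = rng.normal(loc=[0.0, 5.0], scale=[1.0, 0.5], size=(500, 2))
mu, sigma2 = fit_gaussians(X_train)

epsilon = 1e-4
x_new = np.array([5.0, 5.0])             # far from the training data in the first feature
print(p(x_new, mu, sigma2) < epsilon)    # True -> flagged as an anomaly
```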

When anomaly detection and supervised learning each apply:

Anomaly detection: a very large number of negative examples but very few positive (anomalous) examples; many different types of anomalies; new types of anomalies may appear in the future.

Supervised learning: large numbers of both positive and negative examples.

Note: when using an anomaly detection algorithm, if some feature's distribution is not Gaussian, transforming that feature's data (for example with a log or exponential transform) to make it closer to Gaussian usually makes the algorithm work better.

Multivariate Gaussian distribution

$p(x; \mu, \Sigma) = \frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(x - \mu)^T\Sigma^{-1}(x - \mu)\right)$, where $\mu \in \mathbb{R}^n$ and $\Sigma \in \mathbb{R}^{n\times n}$.

Parameter estimation: $\mu = \frac{1}{m}\sum_{i=1}^{m}x^{(i)}$, $\Sigma = \frac{1}{m}\sum_{i=1}^{m}(x^{(i)} - \mu)(x^{(i)} - \mu)^T$

Recommender systems

Content-based recommendations

$r(i, j) = 1$ if user $j$ has rated movie $i$ (0 otherwise)

$y^{(i, j)}$ = rating by user $j$ on movie $i$ (if defined)

$\theta^{(j)}$ = parameter vector for user $j$

$x^{(i)}$ = feature vector for movie $i$

For user $j$, movie $i$, predicted rating: $(\theta^{(j)})^T(x^{(i)})$

$m^{(j)}$ = number of movies rated by user $j$

To learn $\theta^{(j)}$ (the parameters for user $j$): $\min_{\theta^{(j)}}\frac{1}{2}\sum_{i:r(i,j)=1}\big((\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\big)^2 + \frac{\lambda}{2}\sum_{k=1}^{n}(\theta_k^{(j)})^2$

To learn $\theta^{(1)}, \theta^{(2)}, \dots, \theta^{(n_u)}$:

$\min_{\theta^{(1)},\dots,\theta^{(n_u)}}\frac{1}{2}\sum_{j=1}^{n_u}\sum_{i:r(i,j)=1}\big((\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\big)^2 + \frac{\lambda}{2}\sum_{j=1}^{n_u}\sum_{k=1}^{n}(\theta_k^{(j)})^2$

Gradient descent:

$\theta_k^{(j)} := \theta_k^{(j)} - \alpha\sum_{i:r(i,j)=1}\big((\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\big)x_k^{(i)}$ (for $k = 0$)

$\theta_k^{(j)} := \theta_k^{(j)} - \alpha\Big(\sum_{i:r(i,j)=1}\big((\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\big)x_k^{(i)} + \lambda\theta_k^{(j)}\Big)$ (for $k \ne 0$)

Collaborative filtering

Optimization objective

Given $x^{(1)},\dots,x^{(n_m)}$, estimate $\theta^{(1)},\dots,\theta^{(n_u)}$:

$\min_{\theta^{(1)},\dots,\theta^{(n_u)}}\frac{1}{2}\sum_{j=1}^{n_u}\sum_{i:r(i,j)=1}\big((\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\big)^2 + \frac{\lambda}{2}\sum_{j=1}^{n_u}\sum_{k=1}^{n}(\theta_k^{(j)})^2$

Given $\theta^{(1)},\dots,\theta^{(n_u)}$, estimate $x^{(1)},\dots,x^{(n_m)}$:

$\min_{x^{(1)},\dots,x^{(n_m)}}\frac{1}{2}\sum_{i=1}^{n_m}\sum_{j:r(i,j)=1}\big((\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\big)^2 + \frac{\lambda}{2}\sum_{i=1}^{n_m}\sum_{k=1}^{n}(x_k^{(i)})^2$

Minimizing $x^{(1)},\dots,x^{(n_m)}$ and $\theta^{(1)},\dots,\theta^{(n_u)}$ simultaneously:

$J(x^{(1)},\dots,x^{(n_m)},\theta^{(1)},\dots,\theta^{(n_u)}) = \frac{1}{2}\sum_{(i,j):r(i,j)=1}\big((\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\big)^2 + \frac{\lambda}{2}\sum_{i=1}^{n_m}\sum_{k=1}^{n}(x_k^{(i)})^2 + \frac{\lambda}{2}\sum_{j=1}^{n_u}\sum_{k=1}^{n}(\theta_k^{(j)})^2$

$\min_{x^{(1)},\dots,x^{(n_m)},\,\theta^{(1)},\dots,\theta^{(n_u)}} J(x^{(1)},\dots,x^{(n_m)},\theta^{(1)},\dots,\theta^{(n_u)})$

Algorithm

Initialize $x^{(1)},\dots,x^{(n_m)},\theta^{(1)},\dots,\theta^{(n_u)}$ to small random values.

Minimize $J(x^{(1)},\dots,x^{(n_m)},\theta^{(1)},\dots,\theta^{(n_u)})$ using gradient descent (or an advanced optimization algorithm), e.g. for every $j = 1,\dots,n_u$ and $i = 1,\dots,n_m$:

$x_k^{(i)} := x_k^{(i)} - \alpha\Big(\sum_{j:r(i,j)=1}\big((\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\big)\theta_k^{(j)} + \lambda x_k^{(i)}\Big)$

$\theta_k^{(j)} := \theta_k^{(j)} - \alpha\Big(\sum_{i:r(i,j)=1}\big((\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\big)x_k^{(i)} + \lambda\theta_k^{(j)}\Big)$

For a user with parameters $\theta$ and a movie with (learned) features $x$, predict a star rating of $\theta^Tx$.
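A rough NumPy sketch of the collaborative filtering cost and gradients in matrix form, assuming a ratings matrix `Y` (movies by users) and an indicator matrix `R` with `R[i, j] = 1` where user $j$ rated movie $i$; the vectorized formulation and every name here are assumptions of this sketch, not the course's own code.

```python
import numpy as np

def cofi_cost_and_grads(X, Theta, Y, R, lam):
    """Collaborative-filtering cost J and its gradients with respect to X and Theta.
    X: movie features (n_m x n), Theta: user parameters (n_u x n)."""
    err = (X @ Theta.T - Y) * R                    # errors only where a rating exists
    J = 0.5 * np.sum(err ** 2) + lam / 2 * (np.sum(X ** 2) + np.sum(Theta ** 2))
    X_grad = err @ Theta + lam * X
    Theta_grad = err.T @ X + lam * Theta
    return J, X_grad, Theta_grad

rng = np.random.default_rng(0)
n_m, n_u, n = 5, 4, 3
Y = rng.integers(0, 6, size=(n_m, n_u)).astype(float)
R = (rng.random((n_m, n_u)) > 0.3).astype(float)   # which ratings are observed
X = rng.normal(size=(n_m, n)) * 0.1                # small random initial values
Theta = rng.normal(size=(n_u, n)) * 0.1

alpha = 0.01
for _ in range(200):                               # plain gradient-descent loop
    J, X_grad, Theta_grad = cofi_cost_and_grads(X, Theta, Y, R, lam=1.0)
    X -= alpha * X_grad
    Theta -= alpha * Theta_grad
print(J)
```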

Improvement: if some user has not rated any movie, the collaborative filtering algorithm above learns a parameter vector $\theta$ for that user that is all zeros, so the predicted rating $(\theta^{(j)})^T(x^{(i)})$ is 0 for every movie and nothing can be recommended to that user. The fix is to mean-normalize all the rating data first, then run collaborative filtering, and predict ratings as $(\theta^{(j)})^T(x^{(i)}) + \mu_i$, where $\mu$ holds the average rating of each movie; this resolves the problem.

Large-scale machine learning

Large-scale machine learning has to deal with massive amounts of data, which raises computational problems.

Stochastic gradient descent

$cost(\theta, (x^{(i)}, y^{(i)})) = \frac{1}{2}(h_\theta(x^{(i)}) - y^{(i)})^2, \quad J_{train}(\theta) = \frac{1}{m}\sum_{i=1}^{m}cost(\theta, (x^{(i)}, y^{(i)}))$

Randomly shuffle (reorder) the training examples.

Repeat {

  for $i := 1, \dots, m$ {

    $\theta_j := \theta_j - \alpha(h_\theta(x^{(i)}) - y^{(i)})\,x_j^{(i)}$ (for every $j = 0, \dots, n$)

  }

}
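A brief NumPy sketch of this stochastic gradient descent loop for linear regression, updating the parameters on one shuffled example at a time; the data, step size, and number of passes are illustrative.

```python
import numpy as np

def stochastic_gradient_descent(X, y, alpha=0.01, epochs=20, seed=0):
    """Linear-regression SGD: one parameter update per training example."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(epochs):
        for i in rng.permutation(m):          # randomly shuffle the examples
            err = X[i] @ theta - y[i]         # h_theta(x^(i)) - y^(i)
            theta -= alpha * err * X[i]       # update using this single example
    return theta

X = np.column_stack([np.ones(100), np.linspace(0.0, 5.0, 100)])
y = 3.0 + 2.0 * X[:, 1]
print(stochastic_gradient_descent(X, y))      # roughly [3, 2]
```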

Mini-batch gradient descent

Say $b = 10$, $m = 1000$.

Repeat {

  for $i = 1, 11, 21, 31, \dots, 991$ {

    $\theta_j := \theta_j - \alpha\frac{1}{10}\sum_{k=i}^{i+9}(h_\theta(x^{(k)}) - y^{(k)})\,x_j^{(k)}$ (for every $j = 0, \dots, n$)

  }

}
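The same idea with mini-batches, as a short sketch reusing the kind of data from the stochastic example above; the batch size `b` follows the notation in the notes, while the step size and epoch count are illustrative.

```python
import numpy as np

def minibatch_gradient_descent(X, y, b=10, alpha=0.05, epochs=100, seed=0):
    """Linear-regression gradient descent using b examples per parameter update."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(epochs):
        idx = rng.permutation(m)
        for start in range(0, m, b):
            batch = idx[start:start + b]
            err = X[batch] @ theta - y[batch]
            theta -= alpha * X[batch].T @ err / b   # average gradient over the mini-batch
    return theta

X = np.column_stack([np.ones(100), np.linspace(0.0, 5.0, 100)])
y = 3.0 + 2.0 * X[:, 1]
print(minibatch_gradient_descent(X, y))             # roughly [3, 2]
```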

Map-reduce: distribute a large data set across multiple machines and compute in parallel.

Photo OCR (photo optical character recognition): text detection, character segmentation, character recognition. A sliding window is usually used to detect the text regions in an image, another sliding window is then used for character segmentation, and finally a neural network or some other algorithm performs the character recognition.

Artificial data synthesis: create data yourself, or generate more data from an existing data set by some transformation.
