Stochastic Gradient Descent

ChoCho

NCU Math

2023-03-01

MODIFIED

  • 2022-03-02

    • Change \operatorname{argmin}_{\theta} to \underset{\theta}{\operatorname{argmin}}.
  • 2022-03-12

  • 2022-03-25

    • Change \mathtt{Net}_\theta to \mathtt{model}_{\theta}.

Regression

Table of Contents

  • Regression

  • Stochastic Gradient Descent (SGD)

  • Questions

Linear Regression

  • For simplicity, we consider linear regression from \mathbb R to \mathbb R.

  • Suppose that we have the data (x^{(1)},y^{(1)}), \cdots , (x^{(n)},y^{(n)}) in \mathbb R\times \mathbb R.

  • We want to find some linear function f:\mathbb R\longrightarrow \mathbb R, say f(x) = wx+b, such that \begin{aligned} f(x^{(i)}) \approx y^{(i)} , \qquad \forall i = 1\sim n. \end{aligned}

  • In math, our goal is to find \begin{aligned} \underset{w,b\in \mathbb R}{\operatorname{argmin}}\sum_{i=1}^{n} \ell \bigl( y^{(i)} , f(x^{(i)}) \bigr), \end{aligned} where \ell:\mathbb R^2 \longrightarrow [0,\infty) is what we need to determine.

  • What is the best choice for \ell (in some sense)?

    • We may use MLE (maximum likelihood estimation).

MLE (maximum likelihood estimation)

  • See page 13 of Rice (2006).

  • Suppose that X_1,\cdots,X_n are i.i.d. from f(\cdot \vert \theta_{\text{real}}), and let \begin{aligned} \mathtt{lik}(\theta) = \prod_{i=1}^n f(X_i\vert \theta). \end{aligned} Then the MLE based on (X_1,\cdots,X_n) is \begin{aligned} \widehat{\theta} = \underset{\theta\in \Theta}{\operatorname{argmax}} \,\mathtt{lik}(\theta). \end{aligned}

    THEOREM B (page 277 in Rice (2006))

    Under some smoothness conditions on f, \begin{aligned} \sqrt{n} \big( \widehat{\theta} -\theta_{\text{real}} \big) \end{aligned} converges weakly to a normal distribution.
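As a quick numerical illustration (not from Rice (2006); the normal-location model, sample size, and grid below are assumptions for this sketch), the MLE for the mean of N(\theta,1) data can be found by maximizing the likelihood on a grid, and it matches the sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_real = 2.0
X = rng.normal(theta_real, 1.0, size=1000)         # X_1, ..., X_n i.i.d. N(theta_real, 1)

# log lik(theta) = sum_i log f(X_i | theta), dropping terms free of theta
thetas = np.linspace(0.0, 4.0, 4001)               # a grid of candidate thetas
log_lik = np.array([-0.5 * np.sum((X - t) ** 2) for t in thetas])
theta_hat = thetas[np.argmax(log_lik)]             # MLE = argmax of the likelihood

print(theta_hat, X.mean())                         # MLE agrees with the sample mean
```

Here the argmax of the log-likelihood is used; since the logarithm is increasing, it has the same maximizer as \mathtt{lik}(\theta).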

  • We regard (x^{(1)},y^{(1)}), \cdots , (x^{(n)},y^{(n)}) as realizations of random variables X,Y, and we assume that \begin{aligned} Y=wX+b+\mathtt{noise}, \end{aligned} where

    • w,b are two parameters in \mathbb R, and

    • \mathtt{noise}\sim N(0,\sigma^2) for some fixed \sigma>0.

  • In math, we give the joint density of (X,Y) by the following:

    • Let X be any continuous random variable with the density function f_X.

    • The conditional density of Y given X=x is \begin{aligned} f_{Y\vert X} (y\vert x)=\frac{1}{\sqrt{2\pi \sigma^2}}\exp\Bigl\lbrace - \frac{1}{2\sigma^2} \bigl( y-(wx+b) \bigr)^2 \Bigr\rbrace. \end{aligned}

    • The joint density of (X,Y) is \begin{aligned} f_{X,Y}(x,y) = f_{Y\vert X}(y\vert x) \cdot f_X(x). \end{aligned}

  • Let (X_i,Y_i) be i.i.d. with (X_i,Y_i)\stackrel{d}{=} (X,Y) and Z_i = \mathbf E[Y_i\vert X_i].

  • The likelihood function of (w,b), given the realizations (X_i,Y_i)=(x^{(i)},y^{(i)}), is \begin{aligned} \mathtt{lik}(w,b)\Big\vert_{(x^{(1)},y^{(1)}), \cdots , (x^{(n)},y^{(n)})} = \prod_{i=1}^n \frac{1}{\sqrt{2\pi \sigma^2}}\exp\Bigl\lbrace - \frac{1}{2\sigma^2} \bigl( y^{(i)}-(wx^{(i)}+b) \bigr)^2 \Bigr\rbrace. \end{aligned}

  • Maximizing \mathtt{lik}(w,b)\Big\vert_{(x^{(1)},y^{(1)}), \cdots , (x^{(n)},y^{(n)})} is equivalent to minimizing \sum_{i=1}^n \bigl( y^{(i)}-(wx^{(i)}+b) \bigr)^2.

    • How to minimize it?
  • For the regression, the best choice of \ell (in the sense of statistics) is \ell(y,\widehat{y})=(y-\widehat{y})^2.
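The equivalence between maximizing the likelihood and minimizing the sum of squares follows by taking logarithms (the logarithm is increasing, so it preserves the maximizer):

```latex
\log \mathtt{lik}(w,b)
  = -\frac{n}{2}\log\bigl(2\pi\sigma^2\bigr)
    - \frac{1}{2\sigma^2}\sum_{i=1}^{n} \bigl( y^{(i)} - (w x^{(i)} + b) \bigr)^2 .
```

Since n and \sigma are fixed, maximizing \log \mathtt{lik}(w,b) amounts to minimizing \sum_{i=1}^{n} \bigl( y^{(i)} - (w x^{(i)} + b) \bigr)^2.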

Regression

  • Suppose that we have the data (x^{(1)},y^{(1)}), \cdots , (x^{(n)},y^{(n)}) in \mathbb R^{\text{in}}\times \mathbb R^{\text{out}}.

  • We first need to choose a function class \lbrace f_\theta: \theta \in \Theta \rbrace, where each f_\theta is a function from \mathbb R^{\text{in}} to \mathbb R^{\text{out}}.

  • We want to find some \theta\in \Theta such that \begin{aligned} f_\theta(x^{(i)}) \approx y^{(i)} , \qquad \forall i = 1\sim n. \end{aligned}

  • In math, our goal is to find \begin{aligned} \underset{\theta\in \Theta}{\operatorname{argmin}} \sum_{i=1}^{n} \ell \bigl( y^{(i)} , f_\theta(x^{(i)}) \bigr), \end{aligned} where \ell(y,\widehat{y}) = (y-\widehat{y})^2.

Deep learning (DL) and statistics

|  | Deep learning | Statistics |
| --- | --- | --- |
| (x^{(1)},y^{(1)}), \cdots , (x^{(n)},y^{(n)}) | Data | Sample |
|  | Build a model \mathtt{model}_{\theta} (denote the estimator \mathtt{model}_{\theta}(x) by \widehat{y}) | Regard x^{(i)},y^{(i)} as realizations of random elements X,Y |
| \theta | parameters of the model | parameters of the density or mass |
| optimize the parameters | Min. \sum_{i=1}^n\ell(y^{(i)},\widehat{y}^{(i)}), where \widehat{y}^{(i)}=\mathtt{model}_{\theta}(x^{(i)}) | Max. the likelihood |

  • \ell is called the loss function.

  • How to minimize \sum_{i=1}^n\ell(y^{(i)},\widehat{y}^{(i)})?

    https://kevinbinz.com/2019/05/26/intro-gradient-descent/

    https://baptiste-monpezat.github.io/blog/stochastic-gradient-descent-for-machine-learning-clearly-explained

Gradient Descent (GD)

  • Our goal is to find a local min. of \begin{aligned} L(\theta) = \frac{1}{n}\sum_{i=1}^n\ell(y^{(i)},\widehat{y}^{(i)}). \end{aligned}

  • Given any initial value \theta_0\in \Theta.

  • Given any \eta\in (0,\infty) small enough.

    • \eta: learning rate.

      • The step can also be adapted during training: Momentum, Adam, \cdots.
  • We set \lbrace \theta_i \rbrace_{i\in \mathbb N} by \begin{aligned} \theta_{i} \leftarrow \theta_{i-1} - \eta \cdot \nabla L(\theta_{i-1}),\quad i\in \mathbb N. \end{aligned}

  • Then \theta_i, i\in \mathbb N, may converge to a local min. of L.
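A minimal NumPy sketch of the update rule above, applied to 1-D linear regression with squared loss (the synthetic data, learning rate, and iteration count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=n)      # data with true w = 3, b = 1

def grad_L(theta):
    """Gradient of L(w, b) = (1/n) * sum_i (y_i - (w x_i + b))^2."""
    w, b = theta
    r = y - (w * x + b)                            # residuals
    return np.array([-2.0 * np.mean(r * x), -2.0 * np.mean(r)])

theta = np.zeros(2)                                # theta_0
eta = 0.1                                          # learning rate
for _ in range(500):                               # theta_i <- theta_{i-1} - eta * grad L(theta_{i-1})
    theta = theta - eta * grad_L(theta)

print(theta)                                       # approximately (3, 1)
```

Note that every single update requires a full pass over all n data points, which is exactly the cost issue mentioned below.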

https://egallic.fr/Enseignement/ML/ECB/book/gradient-descent.html

  • In practice, GD can be extremely slow: we must pass over the entire dataset before making a single update, even though each update step may be very effective (Liu and Nocedal 1989).

    • Solution: Stochastic Gradient Descent (SGD).

Why GD works

  • Let f:\mathbb R^n\longrightarrow \mathbb R be any differentiable function.

  • Fix \theta\in \mathbb R^n.

  • We assume that \nabla f(\theta) \neq \mathbf{0}.

  • Our goal is to find some h\in \mathbb R^n such that f(\theta+h) < f(\theta).

  • By Taylor’s theorem, \begin{aligned} f(\theta+h) \approx f(\theta) + \nabla f(\theta) \cdot h \end{aligned} for h\in \mathbb R^n with \left\lVert h \right\rVert small enough.

  • So for \left\lVert h \right\rVert small enough, f(\theta+h)<f(\theta) \iff \nabla f(\theta) \cdot h < 0.

  • Given \eta_0\in (0,\infty) small. By the Cauchy–Schwarz inequality, \begin{aligned} \underset{h\in \mathbb R^n,\left\lVert h \right\rVert = \eta_0}{\operatorname{argmin}} \nabla f(\theta) \cdot h = -\eta_0 \cdot \frac{\nabla f(\theta)}{\left\lVert \nabla f(\theta) \right\rVert}. \end{aligned}

  • Hence, we choose \begin{aligned} h = -\eta \cdot \nabla f(\theta) \end{aligned} for some \eta\in (0,\infty) small.
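The Cauchy–Schwarz step can be checked numerically: among directions h with \left\lVert h \right\rVert = \eta_0, the product \nabla f(\theta)\cdot h is smallest at h = -\eta_0\,\nabla f(\theta)/\left\lVert \nabla f(\theta) \right\rVert. The quadratic f below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_f(theta):
    # f(theta) = theta_1^2 + 2 * theta_2^2, so grad f(theta) = (2 theta_1, 4 theta_2)
    return np.array([2.0 * theta[0], 4.0 * theta[1]])

theta = np.array([1.0, -1.0])
g = grad_f(theta)
eta0 = 0.01

# the claimed minimizer of grad f(theta) . h over the sphere ||h|| = eta0
h_star = -eta0 * g / np.linalg.norm(g)

# compare against many random directions of the same norm
H = rng.normal(size=(10000, 2))
H = eta0 * H / np.linalg.norm(H, axis=1, keepdims=True)

print(h_star @ g, (H @ g).min())                   # h_star attains the smallest value
```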

Stochastic Gradient Descent (SGD)

Table of Contents

  • Regression

  • Stochastic Gradient Descent (SGD)

  • Questions

Our ideas here are taken from https://ocw.mit.edu/courses/18-065-matrix-methods-in-data-analysis-signal-processing-and-machine-learning-spring-2018/resources/lecture-25-stochastic-gradient-descent/.

Stochastic Gradient Descent (single)

  • Our goal is to find a local min. of \begin{aligned} L(\theta) = \frac{1}{n}\sum_{i=1}^n\ell(y^{(i)},\widehat{y}^{(i)}). \end{aligned}

  • Let \ell_i(\theta) = \ell\bigl( y^{(i)}, \mathtt{model}_\theta(x^{(i)}) \bigr), so that L(\theta) = \frac{1}{n}\sum_{i=1}^n \ell_i(\theta).

  • Given any initial value \theta\in \Theta and any \eta\in (0,\infty) small enough.

  • Let P be a permutation of \lbrace 1,2,\cdots, n \rbrace.

    Then for each j=1,\cdots, n, we update \theta by \begin{aligned} \theta \leftarrow \theta - \eta \cdot \nabla \ell_{P(j)}(\theta). \end{aligned}

    • This means that we only optimize \ell_{P(j)}(\theta), not L(\theta).
  • After repeating the above step several times, the final \theta may be close to a local min. of L.


  • The SGD (single) can be an effective strategy, even for large datasets (Bottou 2010).
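A NumPy sketch of SGD (single) as described above, for the same kind of 1-D linear model (data, learning rate, and number of passes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=n)      # data with true w = 3, b = 1

def grad_l_i(theta, i):
    """Gradient of l_i(theta) = (y_i - (w x_i + b))^2."""
    w, b = theta
    r = y[i] - (w * x[i] + b)
    return np.array([-2.0 * r * x[i], -2.0 * r])

theta = np.zeros(2)
eta = 0.01                                         # learning rate
for _ in range(20):                                # repeat the pass several times
    P = rng.permutation(n)                         # a permutation of {1, ..., n}
    for j in P:
        theta = theta - eta * grad_l_i(theta, j)   # optimize only l_{P(j)}, not L

print(theta)                                       # approximately (3, 1)
```

Each update touches a single data point, so an epoch of n cheap updates replaces one expensive full-gradient step.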

Why SGD (single) works

  • We consider the case of linear regression from \mathbb R to \mathbb R.

  • For simplicity, we assume that b is a fixed constant, not a parameter.

  • Now the only parameter is w.

  • We rewrite \begin{aligned} L(w)=\sum_{i=1}^n \bigl( y^{(i)} - (w x^{(i)} + b) \bigr)^2 \end{aligned} to \begin{aligned} L(w) = \sum_{i=1}^n \bigl( a_i w - b_i \bigr)^2 \stackrel{\text{denote}}{=} \sum_{i=1}^n \ell_i(w), \end{aligned} where a_i = x^{(i)} and b_i = y^{(i)} - b.

  • Let w_i = b_i/a_i (assuming each a_i \neq 0) and \begin{aligned} w_* = \min_i w_i , \quad w^* = \max_i w_i. \end{aligned}

  • Note that \begin{aligned} w_* \leq \underset{w\in \mathbb R}{\operatorname{argmin}}\, L(w) = \frac{\sum_{i}a_i b_i}{\sum_i a_i^2} \leq w^*. \end{aligned}

  • For each i, if we update w by \begin{aligned} w' = w - \eta \cdot \nabla_w \ell_i (w), \end{aligned} then, since \nabla_w \ell_i(w) = 2a_i^2(w - w_i), w' moves from w toward w_i, and hence toward [w_*,w^*] whenever w lies outside this interval.
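A small numeric check of this claim (the numbers a_i, b_i, \eta, and the starting w are illustrative): each single-sample update moves w strictly toward w_i = b_i/a_i, and the iterate ends up in [w_*, w^*]:

```python
import numpy as np

# illustrative coefficients: w_i = b_i / a_i gives targets [2, 1, 6],
# so [w_*, w^*] = [1, 6]
a = np.array([1.0, 2.0, 0.5])
b = np.array([2.0, 2.0, 3.0])
w_targets = b / a
w_low, w_high = w_targets.min(), w_targets.max()

eta = 0.05
w = 10.0                                           # start outside [w_*, w^*]
for i in range(3):
    grad = 2.0 * a[i] * (a[i] * w - b[i])          # gradient of l_i(w) = (a_i w - b_i)^2
    w_new = w - eta * grad
    # each update moves w strictly toward w_i
    assert abs(w_new - w_targets[i]) < abs(w - w_targets[i])
    w = w_new

print(w)                                           # now inside [w_*, w^*] = [1, 6]
```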

Minibatch Stochastic Gradient Descent

  • Given any initial value \theta\in \Theta and any \eta\in (0,\infty) small enough.

  • Given a number n_B, called the batch size.

  • Split \lbrace 1,2,\cdots,n \rbrace into \begin{aligned} \mathcal{B}_1 = &\lbrace 1,2,\cdots, n_B \rbrace, \cr \mathcal{B}_2 = &\lbrace n_B+ 1, n_B+ 2,\cdots, 2 n_B \rbrace, \cr &\vdots \cr & \lbrace \cdots, n \rbrace. \end{aligned}

  • Let P be a permutation of \lbrace 1,2,\cdots,n \rbrace.

    • For k=1,2,\cdots, we update \theta by \begin{aligned} \theta \leftarrow \theta - \eta \cdot \frac{1}{n_B} \sum_{j\in \mathcal{B}_k} \nabla\ell_{P(j)}(\theta) . \end{aligned} One pass over all k is called an epoch.

https://www.kaggle.com/code/ryanholbrook/stochastic-gradient-descent
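A NumPy sketch of minibatch SGD as described above (data, batch size n_B, and learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_B = 200, 32                                   # n_B: batch size
x = rng.normal(size=n)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=n)      # data with true w = 3, b = 1

def batch_grad(theta, idx):
    """Average of grad l_j(theta) over the batch idx."""
    w, b = theta
    r = y[idx] - (w * x[idx] + b)
    return np.array([-2.0 * np.mean(r * x[idx]), -2.0 * np.mean(r)])

theta = np.zeros(2)
eta = 0.1
for _ in range(100):                               # each outer pass is one epoch
    P = rng.permutation(n)
    for k in range(0, n, n_B):                     # batches B_1, B_2, ... (last one may be smaller)
        theta = theta - eta * batch_grad(theta, P[k:k + n_B])

print(theta)                                       # approximately (3, 1)
```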

Why SGD (minibatch) works

  • Fix \theta\in \Theta.

  • The main idea here is to pick some random vector X such that \mathbf E[X]=\nabla_{\theta} L(\theta).

  • Recall that our goal is to find a local min. of \begin{aligned} L(\theta) = \frac{1}{n}\sum_{i=1}^n\ell(y^{(i)},\widehat{y}^{(i)}). \end{aligned}

  • If (X,Y) is a random sample from (x^{(1)},y^{(1)}), \cdots , (x^{(n)},y^{(n)}), then \begin{aligned} \mathbf{E} \bigl[ \nabla_\theta \ell\bigl(Y,\mathtt{model}_\theta(X)\bigr) \bigr] = \frac{1}{n} \sum_{i=1}^n \nabla_{\theta} \ell\bigl( y^{(i)} , \mathtt{model}_{\theta} (x^{(i)}) \bigr) = \nabla_{\theta} L(\theta). \end{aligned} \tag{1}

  • Let \bigl\lbrace (X_k,Y_k) \bigr\rbrace_{k=1}^m be a simple random sample with replacement from (x^{(1)},y^{(1)}), \cdots , (x^{(n)},y^{(n)}). Then, by (1), \begin{aligned} \dfrac{1}{m}\sum_{k=1}^m \nabla_\theta \ell\bigl(Y_k,\mathtt{model}_\theta(X_k)\bigr) \end{aligned} is an unbiased estimator of \nabla_{\theta} L(\theta).

    Moreover, by SLLN, as m\rightarrow\infty, \begin{aligned} \dfrac{1}{m}\sum_{k=1}^m \nabla_\theta \ell\bigl(Y_k,\mathtt{model}_\theta(X_k)\bigr) \stackrel{\text{a.s.}}{\longrightarrow} \nabla_{\theta}L(\theta). \end{aligned}

  • In practice, we use simple random sampling without replacement for efficiency.
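The unbiasedness in (1) can be checked by Monte Carlo (the model, data, and batch size below are illustrative assumptions): averaging many sampled-batch gradients recovers \nabla_\theta L(\theta):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=n)
w, b = 0.5, 0.0                                    # a fixed theta = (w, b)

def avg_grad(xs, ys):
    """Average over the given points of grad_theta (y - (w x + b))^2."""
    r = ys - (w * xs + b)
    return np.array([-2.0 * np.mean(r * xs), -2.0 * np.mean(r)])

full_grad = avg_grad(x, y)                         # grad_theta L(theta)

m = 8                                              # batch size
n_draws = 20000
total = np.zeros(2)
for _ in range(n_draws):                           # simple random sampling with replacement
    idx = rng.integers(0, n, size=m)
    total += avg_grad(x[idx], y[idx])
est = total / n_draws                              # Monte Carlo average of batch gradients

print(est, full_grad)                              # the two nearly agree
```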

Comparison of GD and SGD

Questions

Table of Contents

  • Regression

  • Stochastic Gradient Descent (SGD)

  • Questions

Questions

  • What kind of \mathtt{model}_{\theta} can we choose?

  • How to calculate \nabla_{\theta} L(\theta)?

  • The answer depends on \mathtt{model}_{\theta}, so what kind of \mathtt{model}_{\theta} will we use?

  • What is the best choice of \ell for classification?

Reference

Bottou, Léon. 2010. “Large-Scale Machine Learning with Stochastic Gradient Descent.” In Proceedings of COMPSTAT’2010, 177–86. Springer.
Liu, Dong C, and Jorge Nocedal. 1989. “On the Limited Memory BFGS Method for Large Scale Optimization.” Mathematical Programming 45 (1): 503–28.
Rice, J. A. 2006. Mathematical Statistics and Data Analysis. Cengage Learning. https://books.google.com.tw/books?id=7SI8AAAAQBAJ.