Two signatures added

Commit cbbf2bde67 by Shakil Rafi, 2024-04-17 12:31:17 -05:00 (parent 8836f29605)
6 changed files with 32 additions and 5 deletions

.DS_Store (vendored): binary file not shown.


@@ -20,7 +20,7 @@ This dissertation is broken down into three parts. At the end of each part, we w
 We introduce here basic notations that we will be using throughout this dissertation. Large parts are taken from standard literature inspired by \textit{Matrix Computations} by Golub \& van Loan, \cite{golub2013matrix}, \textit{Probability: Theory \& Examples} by Rick Durrett, \cite{durrett2019probability}, and \textit{Concrete Mathematics} by Knuth, Graham \& Patashnik, \cite{graham_concrete_1994}.
 \subsection{Norms and Inner Products}
 \begin{definition}[Euclidean Norm]
-Let $\left\|\cdot\right\|_E: \R^d \rightarrow [0,\infty)$ denote the Euclidean norm defined for every $d \in \N_0$ and for all $x= \{x_1,x_2,\cdots, x_d\}\in \R^d$ as:
+Let $\left\|\cdot\right\|_E: \R^d \rightarrow [0,\infty)$ denote the Euclidean norm defined for every $d \in \N$ and for all $x= \{x_1,x_2,\cdots, x_d\}\in \R^d$ as:
 \begin{align}
 \| x\|_E = \lp \sum_{i=1}^d x_i^2 \rp^{\frac{1}{2}}
 \end{align}
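For illustration only (not part of the commit), the definition above gives, using the dissertation's $\lp$, $\rp$ delimiter macros and taking $x = (3,4) \in \R^2$:

```latex
% Worked example of the Euclidean norm for x = (3,4):
\begin{align*}
	\| x \|_E = \lp 3^2 + 4^2 \rp^{\frac{1}{2}} = \lp 25 \rp^{\frac{1}{2}} = 5
\end{align*}
```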


@@ -4,17 +4,17 @@ We will present three avenues of further research and related work on parameter
 \section{Further operations}
-Note, for instance, that several classical operations are done on neural networks that have yet to be accounted for in this framework and talked about in the literature. We will discuss two of them \textit{dropout} and \textit{merger} and discuss how they may be brought into this framework.
+Note, for instance, that several classical operations are done on neural networks that have yet to be accounted for in this framework and talked about in the literature. We will discuss one of them, \textit{dropout}, and how it may be brought into this framework.
-Overfitting presents an important challenge for all machine learning models, including deep learning. There ex
+Overfitting presents an important challenge for all machine learning models, including deep learning. There exists a technique called \textit{dropout}, introduced in \cite{srivastava_dropout_2014}, that seeks to ameliorate this situation.
 \begin{definition}[Hadamard Product]
 Let $m,n \in \N$. Let $A,B \in \R^{m \times n}$. For all $i \in \{ 1,2,\hdots,m\}$ and $j \in \{ 1,2,\hdots,n\}$ define the Hadamard product $\odot: \R^{m\times n} \times \R^{m \times n} \rightarrow \R^{m \times n}$ as:
 \begin{align}
 A \odot B \coloneqq \lb A \odot B \rb _{i,j} = \lb A \rb_{i,j} \times \lb B \rb_{i,j} \quad \forall i,j
 \end{align}
 \end{definition}
-We will also define the dropout operator introduced in \cite{srivastava_dropout_2014}, and explained further in \cite{Goodfellow-et-al-2016}.
+We will define the dropout operator introduced in \cite{srivastava_dropout_2014}, and explained further in \cite{Goodfellow-et-al-2016}.
 \begin{definition}[Instantiation with dropout]
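The hunk cuts off before the body of this definition. For orientation only: the standard formulation from \cite{srivastava_dropout_2014}, which the Hadamard product above naturally supports, applies a random binary mask entrywise to each layer's activations. A hedged sketch (this is the textbook version, not necessarily the dissertation's exact definition):

```latex
% Sketch (assumed, standard dropout): for a dropout rate p \in [0,1) and a
% random mask d \in \{0,1\}^n with i.i.d. entries d_i \sim \mathrm{Bernoulli}(1-p),
% a layer activation x \in \R^n is replaced entrywise by:
\begin{align*}
	\mathrm{dropout}_p(x) \coloneqq x \odot d
\end{align*}
% At inference the mask is removed and activations are scaled by (1-p); in
% ``inverted'' dropout the training-time output is instead scaled by 1/(1-p).
```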

Binary file not shown.

Binary file not shown.


@@ -1,8 +1,26 @@
 \documentclass[12pt]{report}
+\usepackage{sectsty}
+% Set chapter and section font sizes to 12pt
+\chapterfont{\fontsize{12}{12}\selectfont}
+\sectionfont{\fontsize{12}{12}\selectfont}
+\subsectionfont{\fontsize{12}{12}\selectfont}
+\usepackage{titlesec}
+\titleformat{\part}[display]
+{\normalfont\fontsize{12}{12}\selectfont\bfseries}{\partname\ \thepart}{0pt}{\fontsize{12}{12}\selectfont}[\vspace{1in}]
+\titleformat{\chapter}[display]{\normalfont\fontsize{12}{12}\selectfont\bfseries}{\chaptertitlename\ \thechapter.}{1ex}{}
+\titlespacing{\chapter}{0pt}{-20pt}{0pt}
 \usepackage{setspace}
 \doublespacing
 \usepackage[toc,page]{appendix}
 \usepackage{mleftright}
-% Set chapter heading font size to 12pt
 \usepackage{pdfpages}
 \usepackage[]{amsmath}
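A caveat worth noting (an observation, not part of the commit): loading both sectsty and titlesec for the same headings is redundant for chapters, since the \titleformat{\chapter} call replaces the chapter format that sectsty's \chapterfont would otherwise adjust. If consolidation were desired, a hypothetical titlesec-only sketch for the section levels could look like:

```latex
% Hypothetical consolidation: set section/subsection sizes via titlesec's
% starred form (format only), instead of sectsty. Assumes titlesec is loaded.
\titleformat*{\section}{\normalfont\fontsize{12}{12}\selectfont\bfseries}
\titleformat*{\subsection}{\normalfont\fontsize{12}{12}\selectfont\bfseries}
```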
\usepackage[]{amsmath} \usepackage[]{amsmath}
@@ -10,7 +28,16 @@
 \usepackage{mathtools}
 \numberwithin{equation}{section}
 \usepackage[]{amssymb}
-\usepackage[margin=1in]{geometry}
+\usepackage{geometry}
+\geometry{
+	left=1in,   % Left margin
+	right=1in,  % Right margin
+	top=1in,    % Top margin
+	bottom=1in, % Bottom margin
+}
 \usepackage{url}
 \usepackage[T1]{fontenc}
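For reference, the explicit \geometry{...} call added in this hunk is equivalent to the single-option form it replaces, since geometry's margin key sets all four margins at once:

```latex
% Equivalent shorthand (geometry's `margin' key sets all four margins to 1in):
\usepackage[margin=1in]{geometry}
```

Either form is fine; the expanded call merely makes each margin explicit.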