diff --git a/text/thesis/02MaterialsAndMethods.tex b/text/thesis/02MaterialsAndMethods.tex
index 2e5b9e4..e9026be 100644
--- a/text/thesis/02MaterialsAndMethods.tex
+++ b/text/thesis/02MaterialsAndMethods.tex
@@ -72,13 +72,16 @@ \subsection{PCA}
 \label{mat:pca}
 Principal Component Analysis (PCA) is probably the most common technique for dimensionality reduction. The idea is to use those dimensions with the highest variance to keep as much information as possible in the lower dimensional room.\\
+ PCA was invented in 1901 by Karl Pearson (\cite{Pearson01}). The original intention was to find the line closest to a set of data points. This line is also the one that explains the most variance.\\
+ The PCA of a data set can be computed in different ways. One is to calculate the eigenvectors of the covariance matrix. The principal component is the eigenvector with the highest eigenvalue; the remaining components follow, ordered by their eigenvalues.\\
 \begin{figure}
 \centering
- \includegraphics[width=0.6\textwidth]{GaussianScatterPCA.jpg}
+ \includegraphics[width=0.7\textwidth]{GaussianScatterPCA.jpg}
 \caption{Eigenvectors of Gaussian scatter}
 \label{fig:pca}
 \end{figure}
- %TODO: Explanation, formula, ...
+ Figure~\ref{fig:pca} shows the eigenvectors of such a data set. The longer vector is the principal component; the shorter one is orthogonal to it and explains the remaining variance. Here the second component is also the one that explains the least variance, since most of the variance is orthogonal to it.
+ %TODO? formula, ...
 
 \subsection{NMF}
 \label{mat:nmf}
 In some applications Non-negative Matrix Factorization (NMF) is preferred over PCA (cf. \cite{Lee99}). This is because it does not learn eigenvectors but decomposes the input into parts which are all possibly used in the input. When seen as matrix factorization PCA yields matrices of arbitrary sign where one represents the eigenvectors the other the specific mixture of them. Because an entry may be negative cancellation is possible. This leads to unintuitive representation in the first matrix.\\
@@ -108,7 +111,14 @@ \subsection{Autoencoders}
 \label{mat:autoenc}
 Autoencoders are a specific type of artificial neural networks (ANN). They work like typical ANNs by adjusting weights between the layers however they don't predict an unknown output but they predict their own input. What is interesting now is manipulating the size of the hidden layer where the size of the hidden layer has to be smaller than the input size. Now in the hidden layer the information of the input can be found in a condensed form (e.g. synergies instead of single muscle activity).\\
- Autoencoders have been successfully used by Spüler et al. to extract synergies from EMG (\cite{Spueler16}). Especially with a lower number of synergies autoencoders perform better than PCA or NMF because linear models fail to discover the agonist-antagonist relations that are typical for muscle movements. These however can be detected by autoencoders which allows for good estimations with half the synergies.
+ Autoencoders have been successfully used by Spüler et al. to extract synergies from EMG (\cite{Spueler16}). Especially with a lower number of synergies, autoencoders should perform better than PCA or NMF, because linear models fail to discover the agonist-antagonist relations that are typical of muscle movements. These relations, however, can be detected by autoencoders, which allows for good estimations with half as many synergies.\\
+ An autoencoder's input layer has as many neurons as there are input dimensions (e.g. one for each EMG channel). The number of hidden-layer neurons may be varied; we mostly used 3. The output layer has the same size as the input layer. Such an autoencoder is shown in Figure~\ref{fig:autoenc}.
+ \begin{figure}
+ \centering
+ \input{autoencoder.tikz}
+ \caption{Autoencoder (6-3-6)}
+ \label{fig:autoenc}
+ \end{figure}
 
 \section{Experimental design}
 The data used for this work were mainly recorded by Farid Shiman, Nerea Irastorza-Landa, and Andrea Sarasola-Sanz for their work (\cite{Shiman15},\cite{Sarasola15}). We were allowed to use them for further analysis.\\
 There were 9 right-handed subjects%TODO
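The eigendecomposition route to PCA described in the PCA hunk above can be sketched in a few lines of MATLAB. This is only an illustration under stated assumptions, not code from the repository: X is an assumed n-by-d data matrix with one observation per row (e.g. one EMG channel per column), and k is an illustrative number of components to keep.

    % Illustrative sketch: PCA via the eigenvectors of the covariance matrix.
    % X (assumed n-by-d data matrix) and k (number of components) are placeholders.
    k  = 3;                                   % e.g. keep three components
    Xc = bsxfun(@minus, X, mean(X, 1));       % centre every dimension
    [V, D] = eig(cov(Xc));                    % eigenvectors (columns of V), eigenvalues (diagonal of D)
    [~, order] = sort(diag(D), 'descend');    % highest variance first
    V = V(:, order);                          % column 1 is the principal component
    scores = Xc * V(:, 1:k);                  % data expressed in the k leading components

Sorting the eigenvectors by decreasing eigenvalue is what makes the first column the principal component; projecting the centred data onto the first k columns gives the lower-dimensional representation.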
diff --git a/text/thesis/autoencoder.tikz b/text/thesis/autoencoder.tikz
new file mode 100644
index 0000000..9bb422b
--- /dev/null
+++ b/text/thesis/autoencoder.tikz
@@ -0,0 +1,29 @@
+\begin{tikzpicture}[->,auto,x=1.5cm, y=1.5cm]
+
+\foreach \i in {1,2,3,4,5,6}
+    \node [state] (input-\i) at (0,2.5-\i) {};
+
+\foreach \i in {1,2,3}
+    \node [state] (hidden-\i) at (3,2-\i*1.25) {};
+
+\foreach \i in {1,2,3,4,5,6}
+    \node [state] (output-\i) at (6,2.5-\i) {};
+
+\foreach \i in {1,2,3}
+    \foreach \j in {1,2,3,4,5,6}
+        \draw (hidden-\i) -- (output-\j);
+
+\foreach \i in {1,2,3,4,5,6}
+    \foreach \j in {1,2,3}
+        \draw (input-\i) -- (hidden-\j);
+
+
+\foreach \i in {1,2,3,4,5,6}
+    \draw [<-] (input-\i) -- ++(-1,0)
+        node [above, midway] {Ch \i};
+
+\foreach \i in {1,2,3,4,5,6}
+    \draw [->] (output-\i) -- ++(1,0)
+        node [above, midway] {Ch \i};
+
+\end{tikzpicture}
diff --git a/text/thesis/mylit.bib b/text/thesis/mylit.bib
index dcb17f6..5567de5 100755
--- a/text/thesis/mylit.bib
+++ b/text/thesis/mylit.bib
@@ -207,6 +207,13 @@
 volume = "131",
 pages = "518–532"
 }
+@article{Pearson01,
+ author = "Karl Pearson",
+ title = "On Lines and Planes of Closest Fit to Systems of Points in Space",
+ year = "1901",
+ journal = "Philosophical Magazine",
+ volume = "2"
+}
 
 
 @article{Ting07,
diff --git a/text/thesis/thesis.tex b/text/thesis/thesis.tex
index 837151b..6d5f87a 100644
--- a/text/thesis/thesis.tex
+++ b/text/thesis/thesis.tex
@@ -22,6 +22,10 @@
 \usepackage{algpseudocode}
 
 %\renewcommand{\familydefault}{\sfdefault}
+\usepackage{tikz}
+\usetikzlibrary{positioning,shadows,arrows,automata}
+
+
 \newcommand{\qq}[1]{``#1''}
 
 \textwidth 14cm
diff --git a/usedMcode/psdPlot.m b/usedMcode/psdPlot.m
new file mode 100644
index 0000000..a50a44b
--- /dev/null
+++ b/usedMcode/psdPlot.m
@@ -0,0 +1,13 @@
+[sig, state, params] = load_bcidat(strcat(pathToFile,...
+    sprintf('%s/%s_B100%i/%s_B1S00%iR02.dat',subject,subject,...
+    number,subject,number)));
+
+fig=figure();
+
+subplot(2,1,1);
+pwelch(sig(:,14),[],[],2048,2500);
+xlim([0.01,0.5])
+
+subplot(2,1,2);
+pburg(double(sig(:,14)),pburgOrder,2048,2500);
+xlim([0.01,0.5])
\ No newline at end of file
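The 6-3-6 autoencoder drawn by autoencoder.tikz could also be prototyped directly in MATLAB. The following is only a sketch, assuming the Deep Learning Toolbox function trainAutoencoder and a hypothetical 6-by-N matrix emg (one row per EMG channel, one column per sample); it is not necessarily the architecture or training procedure used by Spüler et al.

    % Illustrative sketch of a 6-3-6 autoencoder; 'emg' is an assumed 6-by-N matrix.
    hiddenSize = 3;                              % three hidden neurons, i.e. three synergies
    autoenc    = trainAutoencoder(emg, hiddenSize);
    synergies  = encode(autoenc, emg);           % 3-by-N hidden-layer activations
    reconstr   = predict(autoenc, emg);          % 6-by-N reconstruction of the input channels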
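psdPlot.m is a script and relies on four variables already being defined in the workspace before it runs. A minimal usage sketch could look as follows; every value below is a hypothetical placeholder, not one of the recordings actually analysed.

    % Placeholder setup for running psdPlot.m; all values are hypothetical.
    pathToFile = '/path/to/recordings/';   % base directory containing the BCI2000 .dat files
    subject    = 'S01';                    % placeholder subject code inserted into the file name
    number     = 1;                        % placeholder index inserted into the file name
    pburgOrder = 16;                       % illustrative AR model order for pburg
    psdPlot                                % Welch (top) and Burg (bottom) PSD of signal column 14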