diff --git a/text/thesis/04Discussion.tex b/text/thesis/04Discussion.tex
index 63fe232..fd96321 100644
--- a/text/thesis/04Discussion.tex
+++ b/text/thesis/04Discussion.tex
@@ -1,10 +1,4 @@
 \chapter{Discussion}
-\section{Number of Synergies}
-\label{dis:noSyn}
- As shown in section~\ref{res:noSyn} 2 and 4 Synergies are good values for Autoencoder since the slope of the mean prediction is steeper before than after. Another neuron doesn't improve the result as much as the last.\\
- For PCA and NNMF this value is reached at 3 as figure \ref{fig:noSyn} shows.
- %TODO: 2, 4
- % Autoencoder better when having fewer synergies(?)
 \section{EMG}
 \label{dis:emg}
 Predictions of velocities and positions are quite bad from EMG.
@@ -45,11 +39,18 @@
 Applying an offset when using EEG-data or not does not make a significant difference. This is probably due to our configuration with EEG windows as large as 1 second. If smaller windows were used an offset could help, here there is no difference.
 \section{Synergies}
 \label{dis:synergies}
- \subsection{Autoencoder}
- \subsection{Principal Component Analysis}
- \subsection{Non-negative Matrix Factorization}
+ \subsection{Number of Synergies}
+ \label{dis:noSyn}
+ As shown in section~\ref{res:noSyn}, 2 and 4 synergies are good values for the autoencoder, since the slope of the mean prediction is steeper before these points than after. An additional neuron does not improve the result as much as the previous one did.\\
+ For PCA and NNMF this point is reached at 3 synergies, as figure~\ref{fig:noSyn} shows.
+ %TODO: 2, 4
+ % Autoencoder better when having fewer synergies(?)
+ \subsection{Autoencoder, PCA or NMF}
+ %TODO
 \subsection{Prediction via Synergies}
-
+ As expected, the prediction via synergies is somewhat worse than the direct prediction, since the machine learning techniques could perform the same dimensionality reduction internally and much more.\\
+ Nevertheless we also obtain good correlations when predicting from synergies, which suggests that the model may match reality.
+ In addition, synergies are predicted significantly ($p<0.05$) better from EEG than from EMG. The representation as synergies therefore probably matches the representation in the brain better. This could mean that the control of a prosthesis should be done via synergies, which reflect the representation in the brain and are easier to implement than a prosthesis listening to 32 EEG channels directly.
 \subsection{Comparison with EMG}
 The results show that the dimensionality reduction from 6 dimensional EMG to 3 dimensional Synergies (here via autoencoder) does not cost much information when predicting velocities and positions.\\
 For velocities there is no significant difference and even for positions the mean only differs about $0.03$ (EMG: $0.23$, Autoencoder: $0.20$).
diff --git a/text/thesis/05Future.tex b/text/thesis/05Future.tex
index 5db1ab7..a215025 100644
--- a/text/thesis/05Future.tex
+++ b/text/thesis/05Future.tex
@@ -4,3 +4,4 @@
 \section{Offset}
 There is no significant effect of an offset in our configuration. When using smaller EEG windows however there might be one. This could be tried in further analyses with small EEG windows.\\
 These small windows however will probably bring other problems as e.g. unstable transformation into Fourier space. So maybe it is necessary to use large windows, then an offset is unnecessary.
+ \section{Synergies}
diff --git a/text/thesis/Bfunctions.tex b/text/thesis/Bfunctions.tex
index a7cb410..bb848b7 100644
--- a/text/thesis/Bfunctions.tex
+++ b/text/thesis/Bfunctions.tex
@@ -1,19 +1,52 @@
-\chapter{Documentation of the \matlab{}-Code}
+\chapter{Documentation of the Code}
+The documentation of the code is split into parts according to usage. Within each part the functions are ordered alphabetically by name.
 \section{\texttt{callAll.m}}
+ \texttt{callAll.m} and \texttt{callAllPos.m} are the central scripts of the corresponding calculations. Every other function is called from these scripts and all parameters are defined here.\\
+ The default values for the parameters are given in table~\ref{tab:default}.
+
+ There are two independent scripts because this makes it possible to run calculations at the same time on the same machine and to change the called scripts without influencing the other run. In addition, when calculating positions instead of velocities some calculations do not need to be redone; these are left out in \texttt{callAllPos.m}.
 \section{Data Acquisition}
+ \subsection{\texttt{balanceClasses.m}}
+ Balances classes, e.g. for an SVM, by dropping data from the larger classes.
+
+ The function also takes an additional parameter \texttt{maxPerClass}, which defines how many elements per class are kept at most. This parameter can be used to speed up the computations.
+
+ The number of classes (\texttt{noClasses}) has to be passed to make sure that no class is completely omitted just because no sample of it occurred in the training set.
+
+ The function throws an error if the number of samples per class is lower than 1.
 \subsection{\texttt{classifyAccordingToEMG.m}}
 \label{code:classify}
+ Here the reclassification described in section~\ref{mm:newClass} is performed.
+
+ For each data point it is decided, according to the given threshold on EMG activity, whether it belongs to a movement or not. If not, there is no movement and the class is set to 0 (rest).\\
+ If there is movement, it is checked whether the given classification also expects movement. If it does not, the old class is kept for this point as well. If it does, it is checked whether the movement has just started (the class was 0 up to now). If the movement has just started, either the full second before it is taken out (pause \true), or the half second before it is classified the same and the interval from 0.5 to 1 second before it is dropped (pause \false).
 \subsection{\texttt{readAll.m}}
 \label{code:readAll}
+ This is the central function for the acquisition of data.
+
+ First the name of the generated file is composed from the given parameters; in this way the acquisition step only has to be done once (see the sketch after \texttt{readAllPos.m} below).\\
+ If the file does not exist yet, it is created in the following steps:\\
+ Data from BCI2000 is read along with the corresponding kinematic information. Then this data is transformed into the form we want to use (cf. \texttt{generateTrainingData}, \ref{code:generate}). The data from each of the five runs (cf. section~\ref{mm:design}) is aggregated into one variable per modality.
+
+ As a next step the classification is done using \texttt{classifyAccordingToEMG.m} (\ref{code:classify}). The result is then smoothed and adjusted to the length of the EEG data.
+
+ Finally the kinematics and the synergies are generated matching the size of the EMG data. Everything is then saved under the given path as a \texttt{.mat} file.
 \subsection{\texttt{readAllPos.m}}
 Same as \ref{code:readAll} but using position instead of movement from kinematics data.
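+
+ The caching pattern shared by \texttt{readAll.m} and \texttt{readAllPos.m} can be summarized as follows. This is only a minimal sketch with illustrative names (\texttt{acquireData} stands for the acquisition steps described above), not the actual code:
+ \begin{verbatim}
+ % Minimal sketch of the caching pattern (illustrative names, simplified):
+ fileName = sprintf('trainingData_%s_win%d.mat', subjectName, windowSize);
+ if exist(fileName, 'file')
+     data = load(fileName);                       % reuse an earlier acquisition
+ else
+     data = acquireData(subjectName, windowSize); % read, classify, build synergies
+     save(fileName, '-struct', 'data');           % cache for later runs
+ end
+ \end{verbatim}
+ In this way a second call with unchanged parameters skips the expensive acquisition and only loads the stored \texttt{.mat} file.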
 \subsection{\texttt{generateTrainingData.m}}
 \label{code:generate}
+ %TODO
 \subsection{\texttt{generateTrainingDataPos.m}}
 Same as \ref{code:generate} but using position instead of movement from kinematics data.
 \section{Data Analysis}
-
-\section{Plots}
-
-\section{miscellaneous}
+ \subsection{\texttt{correlation2.m}}
+ Calculates the squared correlation between each pair of corresponding columns of two matrices.\\
+ This is used as a goodness-of-fit measure when comparing different numbers of synergies.
+\section{Evaluation}
+ \subsection{\texttt{evaluation.m}}
+ Script that composes the file containing the results (\texttt{evaluation.mat}). This file is used for the evaluations after some manual post-processing (mainly renaming).
+ \subsection{\texttt{evaluationAccuracys.m}}
+ Evaluations run on the results of the SVMs.
+ \subsection{\texttt{evaluationCorrelations.m}}
+ Evaluations run on the results of the ridge regressions.
diff --git a/usedMcode/correlation2.m b/usedMcode/correlation2.m
index db4b233..5fa53b2 100644
--- a/usedMcode/correlation2.m
+++ b/usedMcode/correlation2.m
@@ -1,6 +1,6 @@
 function [R2]=correlation2(A,B)
 if size(A)~=size(B)
-    error('dimension missmatch')
+    error('dimension mismatch')
 end
 R2=zeros(size(A,2),1);
 for i=1:size(A,2)
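+    % per the documentation above: compute the squared correlation (R^2)
+    % between column i of A and column i of B, used as the goodness-of-fit
+    % measure when comparing different numbers of synergies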