diff --git a/text/thesis/02MaterialsAndMethods.tex b/text/thesis/02MaterialsAndMethods.tex
index 8e4f2df..f999a92 100644
--- a/text/thesis/02MaterialsAndMethods.tex
+++ b/text/thesis/02MaterialsAndMethods.tex
@@ -335,7 +335,9 @@
 Eventually autoencoders are trained with a hidden layer of size $n$ and afterwards EMG data is encoded with the learned weights. This is equivalent to taking the hidden layer activity for the corresponding time step.\\
 Since synergies are generated from EMG they have the same dimensionality in the first dimension\footnote{only depending on window size and shift for EMG data and the recoding duration} and $n$ in the second.
 \subsection{Kinematics}
-	%TODO
+	Kinematic data was used either as position or as movement. The position was recorded directly; the movement is the first time derivative of the position.\\
+	The kinematic recording was started after the EEG recording. The synchronization channel\footnote{cf. Table~\ref{tab:channelNames}} contains a peak at the moment the kinematic recording starts, which was used to align the kinematic data with the EEG and EMG data. In addition, we adapted the kinematic data to the EMG window size and shift so that corresponding data is available for the same time step; this was done by summing all differences within the window (for movement) or by taking the mean position in the window.\\
+	This data has the same length as the EMG and synergy data but only three features per time step, since we used only three degrees of freedom of the hand ($x$, $y$ and $\theta$) and no information about the fingers.
 \section{Data Analysis}
 \subsection{Classification}
 Classification can be done in different ways. First approach was discriminating Movement from Rest. This was done by training an SVM and testing its results with 10-fold cross validation. This was done with EMG, EEG and LF data. EMG in this setting is trivial since it was the basis for the classification (cf. \ref{sec:newClass}).\\
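
For illustration, a minimal sketch of the encoding step mentioned in the first context line above: once an autoencoder with a hidden layer of size n has been trained, the synergy representation of an EMG window is just the hidden-layer activity. The sigmoid activation, variable names and shapes are assumptions for this sketch, not taken from the thesis code.

    import numpy as np

    def encode_emg(emg_windows, W_enc, b_enc):
        """Encode windowed EMG features with the learned encoder weights.

        emg_windows : (time_steps, n_channels) array of windowed EMG features
        W_enc       : (n_channels, n) encoder weight matrix of the trained autoencoder
        b_enc       : (n,) encoder bias
        Returns the hidden layer activity, i.e. one n-dimensional synergy
        vector per time step (sigmoid activation assumed).
        """
        return 1.0 / (1.0 + np.exp(-(emg_windows @ W_enc + b_enc)))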
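
The windowing of the kinematic data described in the added lines (mean position per window, summed differences for movement) could look roughly like the following sketch. It assumes the position is already aligned to the EEG/EMG data via the synchronization peak and is given as one (x, y, theta) sample per kinematic sample; the function name and the sample-based window and shift parameters are illustrative assumptions.

    import numpy as np

    def window_kinematics(position, window, shift):
        """Adapt kinematic samples to the EMG window/shift grid.

        position      : (T, 3) array of (x, y, theta) hand coordinates per sample
        window, shift : window length and shift in kinematic samples (assumed)
        Returns (mean_position, movement), each of shape (n_windows, 3).
        """
        diffs = np.diff(position, axis=0)                 # sample-to-sample movement
        n_windows = (len(position) - window) // shift + 1
        mean_pos = np.empty((n_windows, 3))
        movement = np.empty((n_windows, 3))
        for i in range(n_windows):
            start = i * shift
            mean_pos[i] = position[start:start + window].mean(axis=0)   # mean position in window
            movement[i] = diffs[start:start + window - 1].sum(axis=0)   # sum of all differences
        return mean_pos, movement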
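
For the Movement-vs-Rest discrimination mentioned in the last paragraph, a minimal sketch of an SVM evaluated with 10-fold cross validation (here with scikit-learn and placeholder data; the kernel and parameters are assumptions, not the settings used in the thesis):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((600, 32))   # placeholder: one feature vector per time step (EEG, EMG or LF)
    y = rng.integers(0, 2, size=600)     # placeholder labels: 1 = Movement, 0 = Rest

    clf = SVC(kernel="rbf", C=1.0)                 # assumed kernel and parameters
    scores = cross_val_score(clf, X, y, cv=10)     # 10-fold cross validation
    print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")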