diff --git a/07_final_assignment/paper/main.tex b/07_final_assignment/paper/main.tex
index c5ad7e7..4e3fb81 100644
--- a/07_final_assignment/paper/main.tex
+++ b/07_final_assignment/paper/main.tex
@@ -1,4 +1,4 @@
-\documentclass[a4paper, 12pt]{apa6}
+\documentclass[a4paper, 12pt, longtable]{apa6}
 \usepackage[T1]{fontenc}
 \usepackage[utf8]{inputenc}
 \usepackage[american]{babel}
@@ -15,6 +15,7 @@
 \title{Simulation of \cite{Grainger245} with Rescorla Wagner equations}
 \shorttitle{Grainger et al. (2012) simulation with RW equations}
 \author{R. Geirhos (3827808), K. Grethen (3899962), D.-E. Künstle (3822829),\\ A.-K. Mahlke (3897867), J. Maier (3879869), F. Saar (3818590), M. Weller (3837283)}
+\leftheader{Geirhos, Grethen, Künstle, Mahlke, Maier, Saar, Weller}
 \affiliation{Linguistics for Cognitive Science Course, University of Tübingen}
 \abstract{In \citeyear{Grainger245}, \citeauthor{Grainger245} conducted a word learning experiments with baboons. Interestingly, monkeys are able to discriminate words from non-words with high accuracies. We simulate the learning experience with the Rescorla-Wagner learning model \parencite{rescorla1972theory}.
@@ -93,8 +94,9 @@
 \subsubsection{Random Parameter} The random parameter $ r $ was set to 0.65, which proved to be reasonable value in preliminary experiments. That means, in 65\% of the cases the monkey would guess for either word or nonword with equal probabilities. Therefore, the maximum possible performance $ p_{max} $ is: $$ p_{max} = 1 - \frac{r}{2} = 0.675$$ In other words, the maximum possible performance is no longer 1.0 (for a very intelligent monkey) but rather restricted by $ r $. If a monkey's performance is slightly better than $ p_{max} $, this is assured to be due to chance.
-\subsubsection{Alpha and Beta} %TODO alpha and beta are important - we have to explain their meaning in a sentence - see Lambda, that looks excellent.
-Both $ \alpha $ and $ \beta $ were our independent variables which we manipulated over the course of the experiments. We gathered data for every possible combination of $ \alpha $ and $ \beta $ values within an equally spaced range from 0.0 to 0.3. A total of 15 values for each $ \alpha $ and $ \beta $ were combined to $ 15*15 = 225 $ possible combinations. Since $ \alpha $ and $ \beta $ were internally multiplied to a single value, we expected the outcome to be more or less symmetrical due to the commutativity of the multiplication operation and therefore calculated each combination of $ \alpha $ and $ \beta $ only once, which we used as a trick to improve the overall runtime. Therefore, $\sum_{i=1}^{15}i = 120$ combinations remained to be explored.
+\subsubsection{Alpha and Beta} How fast a cue-outcome connection is learned or unlearned depends on a learning rate, which determines what fraction of the activation difference is added or removed per learning event. The learning rate of an event is the product of the learning rate of the cue, $ \alpha $, and the learning rate of the outcome, $ \beta $.
+In our case, we keep the learning rate constant across all cues and outcomes.
+Both $ \alpha $ and $ \beta $ were our independent variables, which we manipulated over the course of the experiments. We gathered data for every possible combination of $ \alpha $ and $ \beta $ values within an equally spaced range from 0.0 to 0.3. A total of 15 values for each of $ \alpha $ and $ \beta $ yields $ 15 \cdot 15 = 225 $ possible combinations. Since $ \alpha $ and $ \beta $ are internally multiplied into a single value, we expected the results to be symmetrical due to the commutativity of multiplication and therefore evaluated each unordered pair of $ \alpha $ and $ \beta $ values only once, a shortcut that improved the overall runtime. This left $ \sum_{i=1}^{15} i = 120 $ combinations to explore.
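+To make these two roles concrete, the following R sketch illustrates a single Rescorla-Wagner update, in which the product $ \alpha \beta $ scales the activation difference, together with the triangular $ (\alpha, \beta) $ grid described above. It is an illustration only (the helper name \texttt{rw.update} is ours); the actual simulation code is the \texttt{baboonSimulation.R} listing in the appendix.
+\begin{lstlisting}[language=R]
+# One Rescorla-Wagner update for a cue that is present: v is its
+# current association strength, v.total the summed strength of all
+# present cues, lambda the maximum activation (fixed to 1 here).
+rw.update <- function(v, v.total, alpha, beta, lambda = 1) {
+  v + alpha * beta * (lambda - v.total)
+}
+
+# Only the product alpha * beta matters, so (a, b) and (b, a) are
+# interchangeable; keeping alpha <= beta retains 120 of the 225
+# grid points.
+values <- seq(0.0, 0.3, length.out = 15)
+grid <- expand.grid(alpha = values, beta = values)
+grid <- subset(grid, alpha <= beta)
+nrow(grid)  # 120
+\end{lstlisting}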
 \subsubsection{Lambda} The independent variable $\lambda$ represents the maximum activation in the Rescorla-Wagner model and therefore limits the learning. It makes it possible to modulate saliency of a stimulus. A more salient stimulus could not only have higher learning rates but also a higher maximum activation. In the original experiment the stimulus were same colored words and nonwords with four letters on an equally colored background. We assume the single words and nonwords are equally salient and keep therefore $\lambda$ constant to a value of 1.
@@ -152,19 +154,9 @@
 \onecolumn
-\section{Complete Results} %TODO perhaps format this in a nicer way... doesn't look amazing.
-Here are the complete results of our experiments. The abbreviations used are:
-\begin{APAitemize}
-\item \#Trials: Number of trials
-\item \#LearnedW: Number of learned words
-\item \#PresW: Number of presented words
-\item GenAcc: General accuracy
-\item WAcc: Word accuracy
-\item NWAcc: Nonword accuracy
-\end{APAitemize}
-
 \input{result_tables.tex}
+\newpage
 \lstinputlisting[language=R]{../baboonSimulation.R}
 \end{document}
\ No newline at end of file
diff --git a/07_final_assignment/paper/result_tables.tex b/07_final_assignment/paper/result_tables.tex
index 4db1312..f5d0f98 100644
--- a/07_final_assignment/paper/result_tables.tex
+++ b/07_final_assignment/paper/result_tables.tex
@@ -1,7 +1,5 @@
 % latex table generated in R 3.1.0 by xtable 1.8-2 package
 % Thu Mar 31 15:21:52 2016
-\begin{table}[ht]
-\centering
 \begin{tabular}{rrrrrrrrr}
 \hline
 & alpha & beta & \#Trials & \#LearnedW & \#PresW & GenAcc & WAcc & NWAcc \\
@@ -48,12 +46,9 @@
 40 & 0.17 & 0.06 & 50000 & 307 & 7506 & 0.66 & 0.67 & 0.66 \\
 \hline
 \end{tabular}
-\end{table}
 
 % latex table generated in R 3.1.0 by xtable 1.8-2 package
 % Thu Mar 31 15:22:04 2016
-\begin{table}[ht]
-\centering
 \begin{tabular}{rrrrrrrrr}
 \hline
 & alpha & beta & \#Trials & \#LearnedW & \#PresW & GenAcc & WAcc & NWAcc \\
@@ -100,12 +95,9 @@
 80 & 0.26 & 0.02 & 50000 & 298 & 7493 & 0.65 & 0.65 & 0.65 \\
 \hline
 \end{tabular}
-\end{table}
 
 % latex table generated in R 3.1.0 by xtable 1.8-2 package
 % Thu Mar 31 15:22:19 2016
-\begin{table}[ht]
-\centering
 \begin{tabular}{rrrrrrrrr}
 \hline
 & alpha & beta & \#Trials & \#LearnedW & \#PresW & GenAcc & WAcc & NWAcc \\
@@ -152,4 +144,13 @@
 120 & 0.30 & 0.30 & 50000 & 307 & 7545 & 0.67 & 0.67 & 0.67 \\
 \hline
 \end{tabular}
-\end{table}
\ No newline at end of file
+
+The tables above show the complete results of our experiments. The abbreviations used are:
+\begin{APAitemize}
+\item \#Trials: Number of trials
+\item \#LearnedW: Number of learned words
+\item \#PresW: Number of presented words
+\item GenAcc: General accuracy
+\item WAcc: Word accuracy
+\item NWAcc: Nonword accuracy
+\end{APAitemize}