diff --git a/07_final_assignment/paper/main.tex b/07_final_assignment/paper/main.tex
index eb0c454..93c1525 100644
--- a/07_final_assignment/paper/main.tex
+++ b/07_final_assignment/paper/main.tex
@@ -3,6 +3,7 @@
 \usepackage[utf8]{inputenc}
 \usepackage[american]{babel}
 \usepackage[style=apa,sortcites=true,sorting=nyt,backend=biber]{biblatex}
+\usepackage{hyperref}
 \DeclareLanguageMapping{american}{american-apa}
 \addbibresource{references.bib}

@@ -106,12 +107,12 @@
 Running an experiment with a single combination of $ \alpha $ and $ \beta $ on a normal desktop computer took about 75 minutes. Therefore, the parameter space one could explore within a reasonable amount of time was quite restricted. We decided to write a parallelized version of the code to reduce the overall runtime. Using the R packages foreach \parencite{Rforeach}, parallel \parencite{Rparallel}, and doParallel \parencite{RdoParallel}, we restructured the experiment. Since conflicts can easily occur when more than one core tries to access a shared data structure at the same time, we implemented a parallelized version that runs without any critical sections. Instead, each thread has its own data structure, a .txt file, and in the end the results are harvested and combined. This version of the experiment ran on a cluster with 15 cores, each performing a total of eight experiments. Altogether, 120 combinations of $ \alpha $ and $ \beta $ were explored overnight, which would have taken about 150 hours (!) in a non-parallelized version.

 \section{Results}
-The number of words learned by the actual monkeys ranged between 87 and 308. With the chosen range for $\alpha$ and $\beta$, we obtained between 275 and 307 learned words, however, it is important to note that we only presented 307 different words, so the model reached maximum learning potential even for small learn rates (see \ref{fig:numwords}).
+The number of words learned by the actual monkeys ranged between 87 and 308. With the chosen range for $\alpha$ and $\beta$, we obtained between 275 and 307 learned words; however, it is important to note that we only presented 307 different words, so the model reached its maximum learning potential even for small learning rates (see \autoref{fig:numwords}).
 The general accuracy for the real monkeys lay between 71.14\% and 79.81\%, while our accuracies ranged between 60\% and 68\% with random parameter $r=0.65$, depending on the learning rates used.
-Because the absolute accuracy depends heavily on the random parameter (see \ref{sec:randparam}), we could easily match increase the accuracy with modifying it. A more interesting property is the range of word accuracy which is $.0867$ for the monkeys and $.08$ for the simulation.
+Because the absolute accuracy depends heavily on the random parameter (see above), we could easily increase the accuracy by modifying it. A more interesting property is the range of the word accuracies, which is $.0867$ for the monkeys and $.08$ for the simulation.
 Using non-linear regression models (GAMs), we find non-linear ($df>1$) main effects of the learning rates predicting the word and nonword accuracies with a fixed random parameter, without an interactive effect.
-Because of the multiplicatory commutativity one of $\alpha$, $\beta$ is enough for explanation as learn rates here. In \ref{fig:accuracy}, second row we see the accuracies growing fast and converging to a plateau in one dimension with almost no effect of the other learn rate dimension.
+Because of the commutativity of multiplication, one of $\alpha$ and $\beta$ is sufficient as an explanatory learning rate here. In the second row of \autoref{fig:accuracy}, the accuracies grow quickly and converge to a plateau along one learning-rate dimension, with almost no effect of the other dimension. The complete result data are attached in the appendix of this paper.
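
For reference, a minimal sketch of the parallelized setup described in the second hunk, using the cited foreach, parallel, and doParallel packages. `run_experiment()` and the parameter grid are hypothetical placeholders; the per-task .txt files stand in for the per-thread data structures mentioned in the paper.

```r
## Minimal sketch of the parallelized experiment loop, assuming a
## hypothetical run_experiment(alpha, beta) that returns a data frame.
library(foreach)
library(parallel)
library(doParallel)

cl <- makeCluster(15)                  # 15 cores, as on the cluster
registerDoParallel(cl)

## 120 hypothetical (alpha, beta) combinations, eight per core
grid <- expand.grid(alpha = seq(0.01, 0.12, length.out = 12),
                    beta  = seq(0.01, 0.10, length.out = 10))

foreach(i = seq_len(nrow(grid))) %dopar% {
  res <- run_experiment(grid$alpha[i], grid$beta[i])
  ## each task writes to its own .txt file: no shared state,
  ## hence no critical sections
  write.table(res, sprintf("result_%03d.txt", i), row.names = FALSE)
}
stopCluster(cl)

## harvest: read the per-task files back in and combine them
files   <- list.files(pattern = "^result_\\d{3}\\.txt$")
results <- do.call(rbind, lapply(files, read.table, header = TRUE))
```

Because every task owns its output file, no locking is needed; the harvest step simply reads the files back and row-binds them.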
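The GAM analysis in the Results hunk could be run along these lines. mgcv is an assumed (not cited) package choice, and `results` is a hypothetical data frame of accuracies per learning-rate combination.

```r
## Rough sketch of the GAM analysis, assuming the mgcv package and a
## hypothetical `results` data frame with columns alpha, beta, accuracy.
library(mgcv)

## smooth main effects of the two learning rates
fit_main <- gam(accuracy ~ s(alpha) + s(beta), data = results)

## add a tensor-product interaction term to test for an interactive effect
fit_int <- gam(accuracy ~ s(alpha) + s(beta) + ti(alpha, beta),
               data = results)

summary(fit_main)                     # edf of each smooth term
anova(fit_main, fit_int, test = "F") # with vs. without interaction
```

An edf of 1 in `summary()` corresponds to a linear effect, so edf > 1 matches the paper's $df>1$ criterion for non-linearity.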