Andrew Kyle Lampinen
Research Scientist, DeepMind
Verified email at google.com
Title
Cited by
Year
Environmental drivers of systematicity and generalization in a situated agent
F Hill, A Lampinen, R Schneider, S Clark, M Botvinick, JL McClelland, ...
arXiv preprint arXiv:1910.00571, 2019
Cited by 88* · 2019
An analytic theory of generalization dynamics and transfer learning in deep linear networks
AK Lampinen, S Ganguli
7th International Conference on Learning Representations (ICLR 2019), 2018
Cited by 78 · 2018
Automated curricula through setter-solver interactions
S Racaniere, AK Lampinen, A Santoro, DP Reichert, V Firoiu, TP Lillicrap
8th International Conference on Learning Representations (ICLR 2020), 2019
Cited by 53* · 2019
What shapes feature representations? Exploring datasets, architectures, and training
KL Hermann, AK Lampinen
Advances in Neural Information Processing Systems, 2020
Cited by 50 · 2020
Integration of new information in memory: new insights from a complementary learning systems perspective
JL McClelland, BL McNaughton, AK Lampinen
Philosophical Transactions of the Royal Society B 375 (1799), 20190637, 2020
Cited by 45 · 2020
Improving the replicability of psychological science through pedagogy
RXD Hawkins, EN Smith, C Au, JM Arias, R Catapano, E Hermann, M Keil, ...
Advances in Methods and Practices in Psychological Science 1 (1), 7-18, 2018
Cited by 32* · 2018
One-shot and few-shot learning of word embeddings
AK Lampinen, JL McClelland
arXiv preprint arXiv:1710.10280, 2017
Cited by 19 · 2017
Can language models learn from explanations in context?
AK Lampinen, I Dasgupta, SCY Chan, K Matthewson, MH Tessler, ...
arXiv preprint arXiv:2204.02329, 2022
Cited by 18 · 2022
Symbolic behaviour in artificial intelligence
A Santoro, A Lampinen, K Mathewson, T Lillicrap, D Raposo
arXiv preprint arXiv:2102.03406, 2021
Cited by 16 · 2021
Transforming task representations to perform novel tasks
AK Lampinen, JL McClelland
Proceedings of the National Academy of Sciences 117 (52), 32970-32981, 2020
Cited by 11 · 2020
Towards mental time travel: a hierarchical memory for reinforcement learning agents
A Lampinen, S Chan, A Banino, F Hill
Advances in Neural Information Processing Systems 34, 28182-28195, 2021
Cited by 10 · 2021
Different presentations of a mathematical concept can support learning in complementary ways
AK Lampinen, JL McClelland
Journal of Educational Psychology 110 (5), 664, 2018
Cited by 10 · 2018
Semantic exploration from language abstractions and pretrained representations
AC Tam, NC Rabinowitz, AK Lampinen, NA Roy, SCY Chan, DJ Strouse, ...
arXiv preprint arXiv:2204.05080, 2022
Cited by 9 · 2022
Tell me why! Explanations support learning relational and causal structure
AK Lampinen, N Roy, I Dasgupta, SCY Chan, A Tam, J McClelland, C Yan, ...
International Conference on Machine Learning, 11868-11890, 2022
Cited by 8 · 2022
Building on prior knowledge without building it in
SS Hansen, A Lampinen, G Suri, JL McClelland
Behavioral and Brain Sciences 40, e268, 2017
Cited by 8 · 2017
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 7 · 2022
Data Distributional Properties Drive Emergent In-Context Learning in Transformers
SCY Chan, A Santoro, AK Lampinen, JX Wang, A Singh, PH Richemond, ...
arXiv preprint arXiv:2205.05055, 2022
Cited by 6 · 2022
Analogies Emerge from Learning Dynamics in Neural Networks
AK Lampinen, S Hsu, JL McClelland
CogSci, 2017
Cited by 6 · 2017
Language models show human-like content effects on reasoning
I Dasgupta, AK Lampinen, SCY Chan, A Creswell, D Kumaran, ...
arXiv preprint arXiv:2207.07051, 2022
Cited by 5 · 2022
Zipfian environments for Reinforcement Learning
SCY Chan, AK Lampinen, PH Richemond, F Hill
arXiv preprint arXiv:2203.08222, 2022
Cited by 2 · 2022