[1] Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1), 106-154.
[2] Olshausen, B. A., & Field, D. J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583), 607-609.
[3] DiCarlo, J. J., Zoccolan, D., & Rust, N. C. (2012). How does the brain solve visual object recognition? Neuron, 73(3), 415-434.
[4] Lindsay, G. W. (2021). Convolutional neural networks as a model of the visual system: Past, present, and future. Journal of Cognitive Neuroscience, 33(10), 2017-2031.
[5] Sucholutsky, I., Muttenthaler, L., Weller, A., Peng, A., Bobu, A., Kim, B., ... & Griffiths, T. L. (2023). Getting aligned on representational alignment. arXiv preprint arXiv:2310.13018.
[6] Mahner, F. P., Muttenthaler, L., Güçlü, U., & Hebart, M. N. (2024). Dimensions underlying the representational alignment of deep neural networks with humans. arXiv preprint arXiv:2406.19087.
[7] Du, C., et al. (2024). Human-like object concept representations emerge naturally in multimodal large language models. arXiv preprint arXiv:2407.01067.
[8] Kornblith, S., Norouzi, M., Lee, H., & Hinton, G. (2019, May). Similarity of neural network representations revisited. In International conference on machine learning (pp. 3519-3529). PMLR.
[9] Yamins, D. L., Hong, H., Cadieu, C. F., Solomon, E. A., Seibert, D., & DiCarlo, J. J. (2014). Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23), 8619-8624.
[10] Schrimpf, M., Kubilius, J., Hong, H., Majaj, N. J., Rajalingham, R., Issa, E. B., ... & DiCarlo, J. J. (2018). Brain-score: Which artificial neural network for object recognition is most brain-like? bioRxiv, 407007.
[11] Conwell, C., Prince, J. S., Kay, K. N., Alvarez, G. A., & Konkle, T. (2022). What can 1.8 billion regressions tell us about the pressures shaping high-level visual representation in brains and machines? bioRxiv, 2022-03.
[12] Allen, E. J., St-Yves, G., Wu, Y., Breedlove, J. L., Prince, J. S., Dowdle, L. T., ... & Kay, K. (2022). A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence. Nature Neuroscience, 25(1), 116-126.
[13] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., ... & Duchesnay, É. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825-2830.
[14] Kriegeskorte, N., Mur, M., & Bandettini, P. A. (2008). Representational similarity analysis: connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2, 249.
[15] Khaligh-Razavi, S. M., Henriksson, L., Kay, K., & Kriegeskorte, N. (2017). Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models. Journal of Mathematical Psychology, 76, 184-197.
[16] Kaniuth, P., & Hebart, M. N. (2022). Feature-reweighted representational similarity analysis: A method for improving the fit between computational models, brains, and behavior. NeuroImage, 257, 119294.
[17] Konkle, T., & Alvarez, G. A. (2022). A self-supervised domain-general learning framework for human ventral stream representation. Nature Communications, 13(1), 491.
[18] Yamins, D. L., & DiCarlo, J. J. (2016). Using goal-driven deep learning models to understand sensory cortex. Nature Neuroscience, 19(3), 356-365.
[19] Soni, A., Srivastava, S., Khosla, M., & Kording, K. P. (2024). Conclusions about Neural Network to Brain Alignment are Profoundly Impacted by the Similarity Measure. bioRxiv, 2024-08.
[20] Golan, T., Raju, P. C., & Kriegeskorte, N. (2020). Controversial stimuli: Pitting neural networks against each other as models of human cognition. Proceedings of the National Academy of Sciences, 117(47), 29330-29337.
[21] Haxby, J. V., Guntupalli, J. S., Connolly, A. C., Halchenko, Y. O., Conroy, B. R., Gobbini, M. I., ... & Ramadge, P. J. (2011). A common, high-dimensional model of the representational space in human ventral temporal cortex. Neuron, 72(2), 404-416.
[22] Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
[23] Phuong, M., & Lampert, C. (2019, May). Towards understanding knowledge distillation. In International conference on machine learning (pp. 5142-5151). PMLR.
[24] Tian, Y., Krishnan, D., & Isola, P. (2019). Contrastive representation distillation. arXiv preprint arXiv:1910.10699.
[25] Feghhi, E., Hadidi, N., Song, B., Blank, I. A., & Kao, J. C. (2024). What Are Large Language Models Mapping to in the Brain? A Case Against Over-Reliance on Brain Scores. arXiv preprint arXiv:2406.01538.
[26] McMahon, E., Bonner, M. F., & Isik, L. (2023). Hierarchical organization of social action features along the lateral visual pathway. Current Biology, 33(23), 5035-5047.
[27] Richter, D., Kietzmann, T. C., & de Lange, F. P. (2023). High-level prediction errors in low-level visual cortex. bioRxiv, 2023-08.
[28] Han, Y., Poggio, T. A., & Cheung, B. (2023, July). System identification of neural systems: If we got it right, would we know? In International Conference on Machine Learning (pp. 12430-12444). PMLR.
[29] Li, Z., Brendel, W., Walker, E., Cobos, E., Muhammad, T., Reimer, J., ... & Tolias, A. (2019). Learning from brains how to regularize machines. Advances in Neural Information Processing Systems, 32.
[30] Shao, Z., Ma, L., Li, B., & Beck, D. M. (2024). Leveraging the human ventral visual stream to improve neural network robustness. arXiv preprint arXiv:2405.02564.
[31] Khosla, M., Williams, A. H., McDermott, J., & Kanwisher, N. (2024). Privileged representational axes in biological and artificial neural networks. bioRxiv, 2024-06.
[32] Roads, B. D., & Love, B. C. (2024). The dimensions of dimensionality. Trends in Cognitive Sciences.
[33] Rigotti, M., Barak, O., Warden, M. R., Wang, X. J., Daw, N. D., Miller, E. K., & Fusi, S. (2013). The importance of mixed selectivity in complex cognitive tasks. Nature, 497(7451), 585-590.
[34] Fusi, S., Miller, E. K., & Rigotti, M. (2016). Why neurons mix: high dimensionality for higher cognition. Current Opinion in Neurobiology, 37, 66-74.
[35] Tye, K. M., Miller, E. K., Taschbach, F. H., Benna, M. K., Rigotti, M., & Fusi, S. (2024). Mixed selectivity: Cellular computations for complexity. Neuron.
[36] Stringer, C., Pachitariu, M., Steinmetz, N., Carandini, M., & Harris, K. D. (2019). High-dimensional geometry of population responses in visual cortex. Nature, 571(7765), 361-365.
[37] Ghosh, A., Mondal, A. K., Agrawal, K. K., & Richards, B. (2022). Investigating power laws in deep representation learning. arXiv preprint arXiv:2202.05808.
[38] Gauthaman, R. M., Ménard, B., & Bonner, M. F. (2024). Universal scale-free representations in human visual cortex. arXiv preprint arXiv:2409.06843.
[39] Gauthaman, R. M., Guth, F., Kazemian, A., Chen, Z., & Bonner, M. F. (2023, August 26). A high-dimensional view of neuroscience. https://BonnerLab.github.io/ccn-tutorial//
[40] Gauthaman, R. M., Ménard, B., & Bonner, M. F. Universality in mouse and human visual cortex: relating covariance to the spatial structure of latent dimensions.
[41] Pospisil, D. A., & Pillow, J. W. (2024). Revisiting the high-dimensional geometry of population responses in visual cortex. bioRxiv, 2024-02.
[42] Elmoznino, E., & Bonner, M. F. (2024). High-performing neural network models of visual cortex benefit from high latent dimensionality. PLOS Computational Biology, 20(1), e1011792.
[43] Long, B., Yu, C. P., & Konkle, T. (2018). Mid-level visual features underlie the high-level categorical organization of the ventral stream. Proceedings of the National Academy of Sciences, 115(38), E9015-E9024.
[44] Huh, M., Cheung, B., Wang, T., & Isola, P. (2024). The platonic representation hypothesis. arXiv preprint arXiv:2405.07987.
[45] Chen, Z., & Bonner, M. F. (2024). Universal dimensions of visual representation. arXiv preprint arXiv:2408.12804.
[46] Richards, B. A., Lillicrap, T. P., Beaudoin, P., Bengio, Y., Bogacz, R., Christensen, A., ... & Kording, K. P. (2019). A deep learning framework for neuroscience. Nature Neuroscience, 22(11), 1761-1770.
[47] Doerig, A., Sommers, R. P., Seeliger, K., Richards, B., Ismael, J., Lindsay, G. W., ... & Kietzmann, T. C. (2023). The neuroconnectionist research programme. Nature Reviews Neuroscience, 24(7), 431-450.
[48] Saxe, A., Nelli, S., & Summerfield, C. (2021). If deep learning is the answer, what is the question? Nature Reviews Neuroscience, 22(1), 55-67.
[49] Kanwisher, N., Khosla, M., & Dobs, K. (2023). Using artificial neural networks to ask ‘why’ questions of minds and brains. Trends in Neurosciences, 46(3), 240-254.
[50] Kriegeskorte, N., & Douglas, P. K. (2018). Cognitive computational neuroscience. Nature Neuroscience, 21(9), 1148-1160.
[51] Yang, G. R., & Wang, X. J. (2020). Artificial neural networks for neuroscientists: a primer. Neuron, 107(6), 1048-1070.
[52] Bowers, J. S., Malhotra, G., Dujmović, M., Montero, M. L., Tsvetkov, C., Biscione, V., ... & Blything, R. (2023). Deep problems with neural network models of human vision. Behavioral and Brain Sciences, 46, e385.