Posted on 2025-07-15, 16:25. Authored by Matthew Rooney.
<p dir="ltr">While artificial neural networks (ANNs) now rival human performance in object categorization, whether these ANNs account for the early development of visual categorization remains unclear. Recently, new ANN models such as Computational Representation of Object Recognition Network (Cornet), TC-SAY-resnext, Context-aware Voxel-wise Contrastive Learning (CVCL), VOneCORnet-S, and VOneResnet50 have started to bridge that gap. The current study therefore seeks to establish the extent to which such newer, infant-aligned ANN models currently account for how human infants represent visual objects. To compare how these ANNs and the human infant brain represent visual objects, we derived Representational Dissimilarity Matrices (RDMs). ANN-derived RDMs were created using cosine similarity applied to model outputs. Infant-derived-RDMs were created using pairwise multivariate classification accuracy, estimated within participants, to classify images based on spontaneous electroencephalogram (EEG) responses collected from 12-15-month-olds. Using representational similarity analysis with Spearman’s correlations and permutation-based cluster-correction for multiple comparisons across the EEG time-points, we examined similarities between infant-derived and ANN-derived RDMs. Findings establish a similarity benchmark that advances the computational understanding of early high-level vision development, and potentially informs future development of infant-aligned ANN models that can better account for the development of the human visual stream.</p>