A team of neuroscientists from the Massachusetts Institute of Technology (MIT) has found that one of the latest generation of computer networks – called “deep neural networks” – matches the primate brain at visual object recognition. ‘This improved understanding of how the primate brain works could lead to better artificial intelligence and, someday, to new ways to repair visual dysfunction,’ said Charles Cadieu of MIT’s McGovern Institute for Brain Research, the paper’s lead author.
For decades, neuroscientists have been trying to design computer networks that mimic visual skills such as object recognition, a task the human brain performs quickly and accurately. Until now, no computer model had been able to match the primate brain at visual object recognition during a brief glance.
‘The new model encapsulates our current best understanding as to what is going on in this previously mysterious portion of the brain,’ said James DiCarlo, professor of neuroscience and head of MIT’s department of brain and cognitive sciences. For this study, the team of researchers first measured the brain’s object recognition ability. This allowed them to see the neural representation – the population of neurons that respond – for every object that the animals looked at.
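To give a sense of what a ‘deep neural network’ of the kind described above looks like in code, here is a minimal, purely illustrative sketch: a stack of layers that turns an image into a probability over object classes. The layer sizes and random weights are assumptions for demonstration only, not the model used in the MIT study, and real networks of this kind are trained on large labelled image sets.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linearity applied between layers.
    return np.maximum(0.0, x)

def softmax(x):
    # Turns raw scores into a probability distribution over classes.
    e = np.exp(x - x.max())
    return e / e.sum()

# Random weights standing in for trained parameters (illustrative only).
layers = [
    (rng.standard_normal((784, 128)) * 0.05, np.zeros(128)),  # input -> hidden
    (rng.standard_normal((128, 64)) * 0.05, np.zeros(64)),    # hidden -> hidden
    (rng.standard_normal((64, 10)) * 0.05, np.zeros(10)),     # hidden -> 10 classes
]

def classify(image):
    """Forward pass: successive linear layers with ReLU, softmax output."""
    h = image
    for w, b in layers[:-1]:
        h = relu(h @ w + b)
    w, b = layers[-1]
    return softmax(h @ w + b)

probs = classify(rng.standard_normal(784))  # a fake 'image' of 784 pixels
print(probs.shape)
```

Each layer transforms its input and passes the result on, loosely analogous to the successive stages of processing along the brain’s visual pathway.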