Art Courtesy of Melody Jiang.
Color is a crucial part of how most people perceive the world. Even so, most people have little trouble recognizing an image whose colors have been altered; after all, few have any difficulty recognizing objects or people in a black-and-white film. New research suggests, however, that without a specific developmental trajectory, some people may come to rely too heavily on color.
Investigations into the experiences of blind people who gained their sight later in life through a program called Project Prakash, along with simulations using complex computer learning models, are shedding light on the developmental timeline of color vision in humans.
Project Prakash was launched in 2005 by Pawan Sinha in India, which has an outsize population of people with untreated cataracts. The project sought to restore vision to children born with cataracts in both eyes, a form of blindness. While the project did not begin with the aim of studying the timeline of visual development, MIT postdoctoral researchers Marin and Lukas Vogelsang, alongside Project Prakash research scientist Priti Gupta, saw an opportunity to do just that.
“I would say the research question that really drove the study—so really the motivation, the motivating piece—is quite simple, which is ‘how come you and I, and all of us, are so good at recognizing objects or also faces in these old, say, black-and-white movies or photographs?’” Lukas Vogelsang said. “Because in our daily lives we see all these colors, and they’re so vivid and they seem so important, but if you remove them, we’re still quite good at pretty much everything…We seem to be so good at this, but it’s not so clear as to why.”
With that question floating around Gupta’s and the Vogelsangs’ minds, they began to study the unusual population of children from Project Prakash. In exploring these cases, the researchers discovered something remarkable: although the children had lived without sight for years, most of their visual processing abilities recovered and adapted to the world quite quickly, save for a few key exceptions.
In one revealing experiment, groups of children were shown images of everyday objects, like a banana or a tree, in full color as well as in grayscale. For born-sighted children, there was almost no difference in recognition success between the color images and the grayscale images. Demonstrating the human body’s ability to adapt quickly, the late-sighted children had little trouble adjusting to chromatic vision and performed quite well at recognizing the color images. However, the same could not be said for the grayscale images.
“As we would expect, the normally sighted controls, like you or me, have almost no difference in [chromatic versus grayscale] recognition. But what we did find—and this is a bit interesting—is that these late-sighted children…they do have a strong difference,” Vogelsang said. “So for them, when color information is removed, their recognition is actually quite poor.”
Evidently, there’s something peculiar about how these late-sighted children develop vision. For most newborns, the eye’s retina and cortex are not fully developed at birth, meaning that the color information the brain receives is quite limited. As such, the brain learns to distinguish visual information based on shape, luminance, and other features. When the eyes improve later in infancy, color information is incorporated into the mix. Late-sighted individuals, by contrast, receive full color information from the very start of their visual experience. The researchers hypothesized that this early reliance on color to distinguish images may hinder visual development.
“These limitations of normal development may be, if you will, a feature, not a bug,” Vogelsang said.
To test this hypothesis, the research team simulated the development of the human visual system using AlexNet, a well-known convolutional neural network (a brain-inspired computational model capable of learning from images). This allowed them to carry out experiments reliably and ethically, without interfering with the development of real children. “These networks are by no means perfect models of the biological system,” Vogelsang said. “But they still serve an important purpose.”
Two instances of AlexNet were trained on different sets of images. In the first training approach, researchers started by feeding the network only grayscale images and introduced color images later (gray to color, or G2C). This mimicked typical human visual development, in which infants initially see limited color and gradually experience full color as their vision matures over the first few years of life. In the second approach, the network was trained on color images from the very beginning (C2C), modeling the visual development of the Prakash children.
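The two regimens amount to a simple training curriculum over the same image set. The following is a minimal sketch of that idea, not the study's actual pipeline: the function names `to_grayscale` and `curriculum`, the two-phase schedules, and the use of ITU-R BT.601 luminance weights are all assumptions for illustration.

```python
import numpy as np

def to_grayscale(rgb):
    """Collapse an RGB image (H, W, 3) to luminance using ITU-R BT.601
    weights, replicated across three channels so the network's input
    shape is unchanged between phases."""
    lum = rgb @ np.array([0.299, 0.587, 0.114])
    return np.repeat(lum[..., np.newaxis], 3, axis=-1)

def curriculum(images, schedule):
    """Yield one training phase per schedule entry, converting the
    image set to grayscale for 'gray' phases and passing it through
    unchanged for 'color' phases."""
    for phase in schedule:
        if phase == "gray":
            yield [to_grayscale(img) for img in images]
        else:
            yield list(images)

# G2C: grayscale first, then color -- mimics typical infant development.
# C2C: color throughout -- mimics the late-sighted (Prakash) trajectory.
G2C = ("gray", "color")
C2C = ("color", "color")
```

In a real experiment each yielded phase would be fed to the network's training loop in order; the point of the sketch is only that the two conditions differ in the input statistics of the early phase, not in the model itself.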
The researchers discovered that the model simulating born-sighted development could successfully identify objects in both grayscale and color images, and it remained robust against other color changes. In contrast, the model trained only on color images, like the Project Prakash children, still did well on full-color images but struggled to generalize to grayscale or hue-altered images, highlighting the importance of early exposure to diverse visual inputs.
It mattered not only whether the model saw grayscale training data, but when. When the researchers reversed the order of visual input, training the model first on color images and then on grayscale ones (C2G), the model became very good at identifying grayscale images but lost much of its ability to identify color ones.
When the researchers analyzed the inner workings of their G2C model, they found that it was relying on luminance to identify objects. When color input was introduced, this fundamental strategy did not change, since it worked just as well on color images. The C2G model, on the other hand, could not adapt its color-based strategy when grayscale images were introduced late in training, and it failed to identify both full-color and colorless images accurately.
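There is a simple arithmetic reason a luminance-based strategy survives color removal: under standard luminance weights (which sum to 1.0), the luminance of a grayscale rendering of an image is identical to the luminance of the original. The sketch below illustrates this; the function names `luminance` and `drop_color` and the BT.601 weights are assumptions for illustration, not the paper's notation.

```python
import numpy as np

BT601 = np.array([0.299, 0.587, 0.114])  # standard weights; they sum to 1.0

def luminance(rgb):
    """Per-pixel luminance of an RGB image (H, W, 3) -> (H, W)."""
    return rgb @ BT601

def drop_color(rgb):
    """Render the image in grayscale: luminance replicated over 3 channels."""
    lum = luminance(rgb)
    return np.repeat(lum[..., np.newaxis], 3, axis=-1)

# Because the weights sum to 1, luminance(drop_color(x)) == luminance(x):
# any feature computed from luminance is unaffected when color is removed.
```

This is why a model that settled on a luminance-first strategy early on generalizes to grayscale input for free, while a model whose features depend on the individual color channels does not.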
The combination of these two findings confirmed the research team’s suspicions. Limited visual input during infancy is crucial to visual development, and this limitation has to happen at the very beginning of the visual development process in order to be effective.
The research team also believes that their findings could generalize to other sensory systems in humans. It is well established that brain plasticity is higher during infancy, meaning the brain learns faster and adapts more easily. Therefore, training on limited input—whether color, auditory cues, or something else—earlier in life may in fact benefit development. In previous studies, the team had investigated visual acuity using a similar approach and found that infants with more precise vision earlier on tended to focus on small details and had a harder time seeing the larger shapes and contours of a scene. “So this theory we have—of early limitations being a good thing—might be quite a broad one,” Vogelsang said.
Vogelsang hopes that these results could also help guide the rehabilitation of children who gain vision later in life. If their color perception were artificially limited immediately after vision is restored, the same developmental arc could potentially unfold, leading to a more typical visual perception system.
“This would really be closing the full loop,” Vogelsang said. “If we learned something about normal development from these children, and then could improve the outcomes for these children using what we learned…that would be the ultimate dream.”