
Since graduating high school six years ago, I’ve always felt like my long-term memory and my ability to meaningfully learn new concepts have been declining. Anecdotally, it seems I’m by no means the only one. Why, for example, do I remember the Teapot Dome Scandal, and yet I have to look up the formula for the Fourier transform every time? I’ve used that formula dozens of times over the years, but it just doesn’t stick in my brain. At first, I feared it was simply a phenomenon of age, but I’m far too young for that to be the case. My next suspect was sleep. Memory consolidation occurs mainly during sleep, so I tried to work out how my sleep habits have changed since high school. Although I fall asleep to podcasts now, it’s not like I never did that back then. More importantly, I averaged only five hours of sleep a night in HS (yeah, ABRHS was terrible). Since then, I’ve prioritized sleep above almost everything else, so if anything, my consolidation should have improved in college. Alas, my declining ability to synaptically retrieve what I learned yesterday is not a symptom of a decaying brain; instead, it is a side effect of how the internet has fundamentally changed the way our brains work.

It’s not that our brains can no longer learn as well as they used to; it’s that the deep concentration required for sustained textual reading has been replaced by a new skill: quick, cursory scanning, plus the knowledge of how to look up the full information later. This alteration in thinking is essentially the premise of Nick Carr’s The Shallows: What the Internet Is Doing to Our Brains [1]. I’ve been reading his book this week (perhaps ironically, on a Kindle), and I’ve been thinking a lot about the effects of the internet age on my brain. Whereas Daniel Kahneman wrote about thinking fast and slow [2], Carr writes about thinking deep and shallow. He argues that the way our brain processes information on the internet is fundamentally different from the way it processes text on paper. Reading a book requires a special kind of concentration in which you tune out the rest of the world and focus solely on one task; it is quite meditative, really. In contrast, internet articles are riddled with images and hyperlinks, not to mention the infinite other distractions lying a click away. Carr argues, with the aid of many studies and examples, that while reading an internet article, the ever-present distractions and micro-decisions (such as whether or not to click a hyperlink) never allow us to devote the same attention to learning as we can while reading a book or listening to a traditional college lecture. Further, Carr argues that not only does learning from internet sources make it harder to “grok” information, but our constant connection to technology has altered the way our brains work. That is why the majority of adults no longer read. That’s why I retained more of what I learned in high school than in college.

Neuroplasticity and the Internet

At Williams, I took a course called Cognitive Psychology, and a year later, at TUM, I took one on Deep Learning. While artificial neural networks (ANNs) are modeled on their biological counterparts, I’ve recently been making connections the other way ’round. Recurrent neural network (RNN) models like the LSTM bear a salient resemblance to George Miller’s seminal paper on working memory [3]; the different lobes of the brain remind me of the latent-space analysis being conducted by my colleagues at BNL [4]; and the way I learned German reminds me of applying transfer learning to a pre-trained model [5]. Lately, I’ve been thinking about the real neural network that is my brain, specifically how it has been continuously trained over my life by varying experiences. Neuroplasticity is most active during childhood, yes, but the brain is an ever-evolving organ. Memories are not permanent; they are reconstructive. The relative strengths of neural connections, which manifest as memories, decay with time. In their place, new connections are formed and strengthened, similar to how transfer learning or fine-tuning alters an ANN’s predictive domain.

I don’t think it’s controversial to say that the average attention span has dropped over the last decade. Just look at the meteoric rise of TikTok, an app where the median video length is around 10 seconds. A popular trend at the moment involves a 10-second dance in which questions and answers appear as text over the dancer. Not only has our attention span dropped to 10 seconds, but we need the distraction of a popular dance to focus on reading the brief text. We have re-trained our brains to optimize for multitasking, so much so that we struggle with any “linear” task that requires our exclusive attention. If you think about it, we fully expect to be interrupted during anything we do, whether by a notification toast or a text-message ping. So, of course our brains have changed to optimize our ability to handle multitasking.

If we think of the brain as an ANN, then in high school our brains were trained on our experiences in a pre-smartphone world. The current state of our brains is the result of re-training the network on the covariate-shifted dataset that is the modern world.
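To make the analogy concrete, here is a toy sketch in Python with NumPy. Everything in it is made up for illustration: a one-weight linear model stands in for the brain, one synthetic relationship stands in for the pre-smartphone world, and a shifted relationship stands in for the modern one. After the model is re-trained on the new data, its error on the original task grows, the machine-learning analog of old memories fading as new connections strengthen.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, X, y, lr=0.1, steps=200):
    """Plain gradient descent on mean-squared error for a linear model."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# "High-school" data: the pre-smartphone world (hypothetical task A).
X_old = rng.normal(size=(100, 1))
y_old = 2.0 * X_old[:, 0]           # true relationship: y = 2x

# "Modern" data: the shifted world (hypothetical task B).
X_new = rng.normal(size=(100, 1))
y_new = -1.0 * X_new[:, 0]          # the relationship has changed: y = -x

w = np.zeros(1)
w = train(w, X_old, y_old)          # original training of the "brain"
err_before = mse(w, X_old, y_old)   # old memories are strong (near-zero error)

w = train(w, X_new, y_new)          # re-training on the new world
err_after = mse(w, X_old, y_old)    # old memories have decayed

print(err_before < err_after)       # True: fine-tuning overwrote task A
```

The point of the sketch is only that nothing explicitly erased task A; re-training on task B alone was enough to degrade it, much as no single event erased my high-school knowledge.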

Reflections

While I see the advantages of a brain capable of multitasking and optimized to find answers via Googling, I can’t help but feel despondent and nostalgic for the brain I once had. Is it possible to undo the damage? Can we re-train our brains on a new corpus of world data by reading books and eschewing our phones? I suspect the answer is yes, but I’m certainly not going to become Amish. I wonder if the bygone era in which everyone read books was simply a historical anomaly. Perhaps we will soon regress to a time when the only people who read books are considered the “intellectual elite.”

References

[1] Carr, N. (2011). The Shallows: What the Internet Is Doing to Our Brains. WW Norton.

[2] Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

[3] Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97. https://doi.org/10.1037/h0043158

[4] Routh, P. K., et al. (2021). Latent Representation Learning for Structural Characterization of Catalysts. The Journal of Physical Chemistry Letters, 12(8), 2086–2094. https://doi.org/10.1021/acs.jpclett.0c03792

[5] Thaller, J. K. (2021). Investigation of Bond Strain Effects on XANES Spectra via Artificial Neural Networks. Ludwig-Maximilians-Universität München.