
An engineering analog for the function of astrocytes and their implication in attentional awareness: A theory

A few days ago I was thinking about our brain's amazing ability to switch focus between features within an incoming sensory data stream. It has always fascinated me when I probed it from a biological frame of mind, but this morning, after waking, I hit on a new idea that may explain how attention is performed in the brain.

The idea inspired this article after I read a post by a friend on Facebook regarding the startled behavior of his cat whenever he takes the vacuum cleaner out of the closet. I realized that the idea from my dream could explain the cat's behavior in that example. I assert that when we finally understand the mammalian brain fully, we'll be able to prove that the cat's brain has an unusually long attentional neuronal pathway.

Recent research into how mammalian brains switch attention seems to implicate glial cells in a critical role in allowing us to keep our attention on certain things in each of our sensory dimensions... for example, your ability to tune in to a horn while listening to a jazz band, or to focus on your mother's face in a crowd photo of many moving faces.

In all kinds of systems design, including software systems design, we do this using buffers. By trickling a bit of the data into a buffer, you allow it to persist in a memory space so that the deeper brain can get to work processing the sensory data in the buffer while the sensation continues to come in at real time (but is not processed, since the cognitive load is focused on the data in the "buffer").
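To make the software side of the analogy concrete, here's a tiny Python sketch (purely my own illustration, not anything from the research linked below): a bounded buffer sits between a "sensory" thread that keeps emitting in real time and a slower "deep processing" loop that only ever works on what has persisted in the buffer.

```python
import queue
import threading
import time

# Minimal producer/consumer sketch of the buffering analogy (my own toy
# illustration; all sizes and timings are invented). A "sensory" thread
# trickles samples into a bounded buffer in real time, while a slower
# "deep processing" loop works only on whatever has persisted there.

sensory_buffer = queue.Queue(maxsize=16)   # the bounded memory space

def sense():
    for sample in range(50):               # the incoming sensory stream
        try:
            sensory_buffer.put_nowait(sample)  # trickle a copy into the buffer
        except queue.Full:
            pass                           # no room: this moment is never attended to
        time.sleep(0.01)                   # the stream keeps arriving in real time
    sensory_buffer.put(None)               # end-of-stream marker (waits for room)

threading.Thread(target=sense).start()

while True:
    sample = sensory_buffer.get()          # attention works only on buffered data
    if sample is None:
        break
    time.sleep(0.03)                       # deep processing is slower than the stream
    print("attended to sample", sample)
```

The design point is simply that the slow stage never touches the live stream directly; it only ever sees what made it into the buffer, and whatever arrives while the buffer is full is gone.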

I posit that buffering is critical to reducing the apparent execution time of the entire processing act. When you don't have buffering, a sudden influx of information can get lost, just as a video stream without buffering enabled suddenly becomes susceptible to network latency.
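As a rough illustration of that point (again just a toy simulation with invented arrival rates, not data from anywhere), compare how much of a bursty stream gets dropped when it hits a one-slot buffer versus a deeper one:

```python
from collections import deque
import random

# Toy simulation: a bursty stream is consumed by a stage that can handle only
# one item per tick. With a shallow buffer, everything beyond capacity in a
# bursty tick is lost; with a deeper buffer, bursts are absorbed and drained
# during quiet ticks. Arrival pattern and sizes are made up for illustration.

random.seed(0)
bursty_stream = [random.choice([0, 0, 1, 3]) for _ in range(200)]  # items per tick

def run(buffer_size):
    buffer, handled, dropped = deque(), 0, 0
    for arrivals in bursty_stream:
        for _ in range(arrivals):
            if len(buffer) < buffer_size:
                buffer.append(1)          # absorbed by the buffer
            else:
                dropped += 1              # no room: the "frame" is simply lost
        if buffer:
            buffer.popleft()              # consumer processes one item per tick
            handled += 1
    return handled, dropped

print("shallow buffer (size 1):  handled, dropped =", run(1))
print("deep buffer   (size 32):  handled, dropped =", run(32))
```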

The cognitive equivalent of network latency in a cat's brain is that, in its perception, sudden events seem to appear out of nowhere and make for shocking occurrences. No causal connection is made between a new event and the previous temporal chain, and thus the mind is more prone to being startled by fast changes in the incoming sensory data stream. I think this points to a core possible difference between a cat brain and, say, a human brain when it comes to attention.

We know that cats are exquisitely patient animals; they can get "locked in" on all types of sensory experiences... smells, tastes, vision... all seem to lock a cat into a trance. If attentional awareness using glial cells (astrocytes in particular) is indeed the brain's global buffering system, then one explanation for the cat brain I can think of is that it has a very deep (as opposed to shallow) buffer.

A deep buffer allows a good bit of data to be stored for attentional processing of some important feature in the captured stream and compared against inborn salience factors, BUT it also means that the cognitive load on that buffered data is higher (it's processing more data), while the incoming sensory flood bypasses the buffer (the cat is not aware of it). Having a deep buffer thus provides the advantage of being able to closely monitor discovered salience features in the buffered data (like focusing on the movement of a rat through tall grass), but has the disadvantage of losing large blocks of incoming sensory experience that are not the subject of focus (attention). Perceptually, this could translate to the seemingly perpetually shocked behavior patterns of cats when it comes to all manner of incoming sensory data.
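Here is a toy way to see that trade-off (all parameters are invented, and in particular the assumption that "lock-in" time grows with buffer depth is mine, not anything established): a shallow buffer misses only short gaps in the stream, while a deep buffer misses long contiguous blocks of it.

```python
from collections import deque

# Toy model of the deep-vs-shallow buffer trade-off. While the buffer is
# being processed ("locked in"), incoming samples bypass awareness. The
# assumption that processing time scales with buffer depth is illustrative.

def focus_episodes(stream, depth):
    buffer = deque(maxlen=depth)
    busy_until = 0                          # tick until which attention is occupied
    missed, longest_gap, gap = 0, 0, 0
    for t, sample in enumerate(stream):
        if t < busy_until:                  # locked in on the buffered data
            missed += 1
            gap += 1
            longest_gap = max(longest_gap, gap)
            continue
        gap = 0
        buffer.append(sample)               # otherwise, trickle into the buffer
        if len(buffer) == depth:            # full buffer -> a focus episode begins
            busy_until = t + depth          # deeper buffer = longer lock-in
            buffer.clear()
    return {"depth": depth, "missed": missed, "longest contiguous gap": longest_gap}

stream = range(300)
print("shallow:", focus_episodes(stream, depth=4))
print("deep:   ", focus_episodes(stream, depth=32))
```

The shallow setting loses only a few samples at a time, while the deep setting loses long uninterrupted stretches, which is the kind of gap that could break the causal chain and leave the animal startled by what it "missed."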

In contrast to the cat, other animals have different attentional "buffering." Some animals have almost no buffer, in which case hiccups in perception are common and the animal plays back experience as if everything is new (like insects); in fact, insects have a secondary system of attention that is completely external to their brains: the chemical signaling systems they've developed to guide behavior. Toward the other end of the spectrum are humans, who in contrast to cats have a shorter buffer, but as mammals have buffers far longer than a lizard's, whose buffer is in turn far longer than the aforementioned insects'.

This theory could be falsified in several ways. If astrocytes are a common factor in attentional awareness, several hypotheses can be tested:

1) Insect brains should be found to have significantly lower ratios of glial to neuronal cells.

2) Cat brains, and the brains of other mammals with an excellent ability to focus for long periods of time (usually for the purpose of predation), will be found to have longer attentional buffering systems than humans.

3) Humans will fall somewhere between cats and reptiles... and reptiles will be above frogs, which will be above fish... which in turn will likely have greater attention than insects.

4) There is no reason for buffering to be the same depth in all brain regions. For visual stimuli some species may have a deeper buffer (more astrocytes as a percentage of neurons); for auditory stimuli other species (say, bats) may have a deeper buffer, making them "cat like" when it comes to sound. Such variations would be expected to correlate along these sensory lines of behavioral specialization: there should be a correlation between the number of astrocytes in a given brain region and the degree to which the species relies on the corresponding sense in a dominant fashion (as cats do with their eyes and ears, and less so their noses).

Links:

http://web.mit.edu/newsoffice/2012/neuroscientists-shed-light-on-plasticity-0927.html

http://www.urmc.rochester.edu/news/story/index.cfm?id=3452

http://learn.genetics.utah.edu/content/addiction/reward/cells.html

Comments

Healy said…
Ok, wow, did not expect Henry cat's hatred of the vacuum cleaner to inspire this kind of a deep post! :)
