
Salience tagging: How Emotion and Autonomics drive all thinking.

I've thus far provided a theoretical set of hypotheses for the generalized theory of dynamic cognition based on salience modulation. This theory has been extended a bit since I first published it publicly in 2013; in particular, a 4th hypothesis was added last year concerning the necessity of tying together shallow and broad learning models with deep models to emerge dynamic cognition, true AI. However, I've continued trying to understand low level salience in more depth: how exactly does autonomic and emotional modulation work? This article puts forward a more in-depth explanation.

If you look at the DCC (dynamic cognition cycle) control diagram below, there are three points where the system feeds back on itself. I posit that all three are necessary and sufficient to emerge any type of cognition, from ant to aardvark to human.


From the most external to the most internal loop: feedback one (red), from action, takes a decision to perform an action and then feeds that back into sensation. This is akin to the process of trying to catch a ball: as the ball approaches, you move your hands in real time to prepare for its arrival. Your sensation of its approach modulates the actions of your hands, which, being also sensed, modulate further in a converging sub-loop as the ball gets closer. Though I give this example with hand-eye coordination, which is a visual-somatosensory convergence, similar loops exist across different sensory domains. This shallow cross-linking across deep networks is what I posit is a) critical to emerging vastly more intelligence and b) important for reducing the computational cost associated with current methods, which are strongly sensory limited.
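As a toy illustration of this kind of converging action-sensation sub-loop, here is a minimal sketch in Python. It is my own illustrative example, not part of the DCC specification, and the names and the gain value are arbitrary: the hand's position is repeatedly corrected toward the sensed ball position, and each correction is itself sensed on the next pass.

```python
# Toy converging action-sensation loop: sensed error drives the action,
# and the result of the action is sensed again on the next iteration.

def catch_ball(ball_path, hand_start=0.0, gain=0.5):
    hand = hand_start
    for ball_position in ball_path:       # sensation: where the ball is right now
        error = ball_position - hand      # comparison against the current hand state
        hand += gain * error              # action: move the hand toward the ball
        print(f"ball={ball_position:.2f}  hand={hand:.2f}  error={error:.2f}")
    return hand

# The ball descends toward position 1.0; the hand converges on it in real time.
catch_ball([3.0, 2.5, 2.0, 1.6, 1.3, 1.1, 1.0])
```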

Feedback two (fuchsia), from salience back to sensory, provides modulating cues to the actual sensing apparatus that may help filter out data based on salient factors. For example, this could be used to help the olfactory sense tune receptors in the nose to be hypersensitive to some resource needed by the physiology of the individual cognitive agent. It could also be at work in the vision system to attune focus preferentially in real time as the agent observes a scene. We know from biology that such feedback to sensory elements exists, so it must be a fundamental feature of any dynamic cognition.
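A minimal sketch of what such salience-to-sensor feedback might look like computationally (hypothetical names and thresholds, not a model of real olfaction): a need signal raises the gain on the receptors tuned to the needed resource, so weak traces of it become detectable.

```python
# Toy salience -> sensor feedback: the gain of each "receptor" is scaled by how
# salient its target currently is, so needed resources are sensed preferentially.

def tune_receptors(base_gains, salience_by_target):
    return {target: gain * (1.0 + salience_by_target.get(target, 0.0))
            for target, gain in base_gains.items()}

def detect(odor_trace, gains, threshold=0.5):
    return {target: trace * gains[target] >= threshold
            for target, trace in odor_trace.items()}

base_gains = {"sugar": 1.0, "salt": 1.0}
odor_trace = {"sugar": 0.2, "salt": 0.2}          # both traces are weak

print(detect(odor_trace, tune_receptors(base_gains, {})))               # nothing detected
print(detect(odor_trace, tune_receptors(base_gains, {"sugar": 2.0})))   # need for sugar -> sugar detected
```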

Feedback three (yellow), from salience to comparison, is the equivalent of the cognitive spark plug. It is where emotional and autonomic modulation factors are tagged onto incoming and previously stored (memory) sensory data. Tagging incoming data allows sensory experience to be ranked along the spectrum of salience. Re-ranking previously stored data serves the same purpose and helps refine predictions for action regarding sensed or recalled data, so that they approach the desired real-time salience requirements. This, I assert, is where "the magic" happens.
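As a sketch of what "tagging" and "re-ranking" could mean in code (again, my own illustrative structure with hypothetical names, not a claim about any actual implementation): each percept is stored with a salience score derived from the current autonomic/emotional state, and stored memories are re-scored under the current state so predictions draw from the most currently salient items.

```python
# Toy salience tagging: percepts are stored with a salience tag, and the whole
# store is re-ranked whenever the autonomic/emotional state changes.

def tag(percept, state):
    # Salience here is just relevance to current needs, weighted by emotional tone.
    return state["need"].get(percept["kind"], 0.0) * state["valence"]

def ingest(memory, percept, state):
    memory.append({"percept": percept, "salience": tag(percept, state)})

def rerank(memory, state):
    for item in memory:
        item["salience"] = tag(item["percept"], state)   # re-tag stored data
    memory.sort(key=lambda item: item["salience"], reverse=True)
    return memory

memory = []
hungry = {"need": {"food": 1.0, "water": 0.2}, "valence": 1.0}
ingest(memory, {"kind": "food", "what": "steak"}, hungry)
ingest(memory, {"kind": "water", "what": "stream"}, hungry)

thirsty = {"need": {"food": 0.0, "water": 0.9}, "valence": 1.0}
print([item["percept"]["what"] for item in rerank(memory, thirsty)])  # water now outranks food
```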

Autonomic salience is ultimately the base set of signals, emerging from distal sensory apparatus, that govern critical life functions like breathing, energy consumption, and temperature regulation. These must happen; if they don't, the body dies, and then the dynamic cognition dies. Measures of these factors as they change with availability are ultimately (I assert) what "drives" cognition. Emotional salience is tightly coupled here (and we see this same pattern in real brains): emotions are importance cues. For example, when we are hungry (an autonomic signal of the need for food is generated and fed back to the comparison/prediction node), our emotional state applies pleasant states to thoughts of food: a well made steak, a stuffed chicken breast, a cake. But right after we consume our fill, our emotional state toward food is different; in fact, depending on how much we ate, we could feel disgust toward food that just a few minutes before we were almost in love with. This temporal variation of the emotional import associated with an underlying autonomic signal must be replicated to enable efficient comparison tagging of incoming data and stored memories.
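Here is a minimal sketch of that temporal flip, under the obvious simplifying assumption that emotional valence toward food is a simple function of the current hunger level; the linear form and the crossover point are arbitrary choices made only for illustration.

```python
# Toy model of emotional valence toward food as a function of the autonomic
# hunger signal: pleasant while hungry, flipping toward aversion once over-full.

def food_valence(hunger, crossover=0.2):
    # hunger in [0, 1]: 1.0 = very hungry, 0.0 = stuffed.
    # Positive return = attraction, negative = aversion (disgust).
    return hunger - crossover

for hunger in (0.9, 0.5, 0.2, 0.0):
    print(f"hunger={hunger:.1f}  valence toward food={food_valence(hunger):+.2f}")
```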

Again, all of this convergence and refinement is happening within and between multiple sensory dimensions: between vision and somatosensory, between gustatory and olfactory, between auditory and vision, and so on. There is a very efficient multiplexing going on that ties all of these signals together using the multidimensional salience signals.
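One way to picture that multiplexing (purely illustrative; the weighting scheme and names are assumptions of mine): per-modality salience signals are combined into a single cross-modal ranking that any of the deep sensory channels can consult.

```python
# Toy cross-modal multiplexing: each sensory dimension reports its own salience
# for a candidate object, and a shared combiner produces one ranking.

def combine(salience_by_modality, weights):
    return sum(weights.get(m, 1.0) * s for m, s in salience_by_modality.items())

candidates = {
    "fresh bread": {"olfactory": 0.9, "vision": 0.4},
    "loud horn":   {"auditory": 0.8, "vision": 0.2},
}
weights = {"olfactory": 1.5, "auditory": 1.0, "vision": 1.0}  # hunger boosts smell

ranked = sorted(candidates, key=lambda c: combine(candidates[c], weights), reverse=True)
print(ranked)  # the bread outranks the horn while olfactory salience is weighted up
```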

The "comparison" node is what current best of breed "AI" is basically restricted to... only reinforcement learning NN's try to replicate a full DCC but they do it with a very basic salience module (absent emotion but present with a very basic attempt at autonomic modulation..for some spatial navigation NN's) ...for example the game AI that Deep Mind created about 3 years ago that earned them recognition by google simply used the parameter that it should perform moves such that the score would increase.

This type of wide open goal to seek allowed it to try a large landscape of moves and then store the ones that succeeded (if they increased the score); it was then able to build on this over time. Is it "thinking"? No. Was it learning? Yes, but not in the aware sense that we do. A more accurate characterization is that it was efficiently exploring a sparse tree of moves that vectored toward the goal of increased score. But that isn't how you and I learn, and importantly it isn't how we play.
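For contrast, here is roughly what that kind of score-driven exploration reduces to in its simplest tabular form. This is a generic sketch of score-greedy learning with made-up move names, not DeepMind's actual system: try moves, remember which ones increased the score, and prefer them next time.

```python
import random

# Bare-bones sketch of score-driven exploration: the agent's only "salience"
# signal is whether the score went up after a move.

def play(episodes=200, moves=("left", "right", "jump")):
    value = {m: 0.0 for m in moves}                   # learned preference per move
    for _ in range(episodes):
        move = (random.choice(moves) if random.random() < 0.2   # explore
                else max(value, key=value.get))                  # exploit
        score_delta = 1.0 if move == "jump" else 0.0             # hidden game rule
        value[move] += 0.1 * (score_delta - value[move])         # running estimate
    return value

print(play())   # "jump" ends up with the highest estimated value
```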

Which takes me to another big difference between any current generation AI methods and real brains.

We learn and do at the same time. Always at the same time. ALL current NNs, from Facebook's CNNs for facial recognition, to Microsoft's LSTMs for language understanding and voice synthesis, to Google's reinforcement learning in DeepMind's AlphaGo: every single one has a learning process that is separate from its predicting process.
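The distinction can be made concrete with a toy sketch (my own illustration, assuming a single scalar weight and a plain least-mean-squares update): in the usual NN workflow, training and inference are separate phases; an "always learning" agent instead updates its model on every prediction it makes.

```python
# Toy contrast: a frozen "train then predict" model versus an agent that learns
# on every prediction it makes (least-mean-squares update on a scalar weight).

stream = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, observed outcome)

# Conventional workflow: a frozen model predicts; its learning happened elsewhere.
frozen_w = 1.5
print("frozen predictions:", [frozen_w * x for x, _ in stream])

# "Learn and do at the same time": predict, observe, and update in one pass.
w = 0.0
for x, y in stream:
    prediction = w * x            # doing
    error = y - prediction
    w += 0.1 * error * x          # learning, immediately, from the same event
    print(f"x={x}  predicted={prediction:.2f}  observed={y}  w is now {w:.2f}")
```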

I assert a truly dynamic cognitive agent will, first of all, not have this restriction; my DCC for the salience theory would be in this category. I am currently working on a startup which leverages a first generation of such learning algorithms to solve an important problem for human workers. The AOW and ADA algorithms covered in other articles in this blog are those algorithms. They are a particular type of learning algorithm that leverages a shallow and broad method to connect disparate entity objects, which is vastly unlike how deep networks work but very similar to the distal connection between disparate sensory dimensions described above, which I assert must be present for salience tagging to inspire comparison and prediction. I explain the difference between deep learning approaches and SABL algorithms in this article.

Also note that the cognition in this case is always churning so long as salience (autonomic) needs are present and action is being driven to satisfy them; this, I posit, connects the cycle to what we call apparent consciousness. In other articles in this blog I explain how consciousness could emerge from salience driven cognitive dynamics, and this article crystallizes more clearly how that would be done. I am looking forward to building an actual test bed of a simple multidimensional salience module simulating autonomic and emotional modulation. I haven't been able to devote as much time to deep research on decoding salience modulation as I'd like, but I hope to get back to it in the next few years.
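As a very rough sketch of what "shallow and broad" linking of disparate entities could look like (this is my own illustrative reading, with invented entity names; it is not the actual AOW/ADA implementation): entities from different dimensions are connected by lightweight shared-attribute links rather than by stacked layers of learned features, and the link weights stand in for cross-dimensional salience.

```python
from collections import defaultdict

# Toy shallow-and-broad linking: disparate entities are connected directly by
# shared attributes (one arbitrarily broad "layer" of links), rather than
# through deep learned feature hierarchies.

entities = {
    "worker_42":    {"skill:java", "timezone:EST", "task:billing"},
    "task_billing": {"skill:java", "task:billing", "priority:high"},
    "worker_7":     {"skill:sql", "timezone:PST"},
}

def link(entities):
    links = defaultdict(float)
    names = list(entities)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = entities[a] & entities[b]
            if shared:
                links[(a, b)] = len(shared)   # link weight = breadth of the overlap
    return dict(links)

print(link(entities))   # worker_42 links to task_billing; worker_7 links to nothing here
```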

