
New Cognitive Flow Diagram and the possible need for artificial minds to sleep.

The Simple Dynamic Cognition Cycle flow diagram above is topologically identical to this one:



I published a version of this diagram with an earlier post on the Salience theory of dynamic cognition and consciousness, but this one shows the multiple feedback lines from the Salience node as all left facing. The only one that is right facing feeds Action.
This is important because it shows that Salience evaluations directly modulate Action while also bypassing Action for continued Sensation. Sometimes a prediction is incorrect, and Action is bypassed in favor of continued approximation of stored memory against new sensation in the Comparison node, which itself takes feedback from Salience (emotional and autonomic).
I posit that cognition (the mind) happens between the Sensation and Salience nodes and sometimes bubbles up to Action; the "self" and consciousness are emergent reflections of this sea of comparisons in real time.
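To make the flow concrete, here is a minimal sketch in Python of one pass through the cycle as I've described it. Only the four node names (Sensation, Comparison, Salience, Action) come from the diagram; the `memory` object, its methods, and the action threshold are hypothetical placeholders, not the actual architecture.

```python
# One illustrative pass through the Simple Dynamic Cognition Cycle.
# The memory interface (compare, rate_salience, store) and the threshold
# are assumptions for illustration only.

def cognition_step(sensation, memory, action_threshold=0.7):
    # Comparison: approximate the new sensation against stored memory
    # (memories are prior sensations tagged with salience ratings).
    prediction, error = memory.compare(sensation)

    # Salience: rate the comparison for emotional and autonomic import.
    salience = memory.rate_salience(prediction, error)

    # Left-facing feedback: the salience rating tags what gets stored and
    # biases the next round of Sensation and Comparison.
    memory.store(sensation, salience)
    sensation_gain = salience

    # The single right-facing edge: only sufficiently salient evaluations
    # bubble up to Action; otherwise the loop continues without acting.
    action = prediction if salience > action_threshold else None
    return action, sensation_gain
```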

When we sleep, the outer red flow arrows from Salience to Action are minimally triggered, but subconscious cognition can still continue as memories are recalled for continued sub-sensory evaluation. After all, memories are basically copies of incoming sensation, stored to make predictions and modulated by salience (tagged with emotional and autonomic import ratings). So if Action is not being driven, that doesn't mean comparison and salience evaluation aren't still ongoing, particularly since autonomic salience must continue to be monitored as the agent "sleeps".
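Continuing the same illustrative sketch (the same hypothetical `memory` interface as above, plus a made-up `recall()` method and wake threshold), the sleep case would look roughly like this: the Salience-to-Action edge is gated off, recalled memories are cycled back through Comparison, and autonomic salience is still monitored.

```python
# A sketch of the "sleep" state described above, reusing the hypothetical
# memory interface from the previous snippet. Names and thresholds are
# illustrative assumptions, not a specification.

def sleep_step(memory, wake_threshold=0.95):
    recalled = memory.recall()                    # replay a stored sensation
    prediction, error = memory.compare(recalled)  # sub-sensory re-evaluation
    salience = memory.rate_salience(prediction, error)

    # Autonomic monitoring continues; only an alarm-level salience spike
    # drives Action (i.e. wakes the agent). Otherwise the memory is simply
    # re-tagged with its updated import rating.
    if salience > wake_threshold:
        return "wake"
    memory.store(recalled, salience)
    return None
```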

In the mammalian brain, sleep seems to be important for low level memory consolidation and organization activities that would be inefficient to do during the wake state (they'd mar cognitive performance for obvious reasons: you would be adding new sensations while trying to consolidate old ones). It should be possible to create an artificial cognition that doesn't need sleep (or needs less of it) by allowing consolidation to happen in parallel, using a computational engine independent of the one processing real time sensation.
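As a rough illustration of that idea, the sketch below runs real time sensation and consolidation on two concurrent workers sharing one memory store. The threading layout, the queue, and the `consolidate()` method are all assumptions made for the example, and synchronization of the shared store is deliberately elided.

```python
# Parallel consolidation sketch: one worker handles real time sensation
# (the wake-state cognition_step above), another consolidates memories in
# the background. The queue and consolidate() method are illustrative
# assumptions; locking of the shared memory store is omitted for brevity.

import queue
import threading

def realtime_loop(memory, sensation_queue):
    while True:
        sensation = sensation_queue.get()   # sensory data arrives on its own schedule
        cognition_step(sensation, memory)   # and must be handled in the moment

def consolidation_loop(memory):
    while True:
        memory.consolidate()                # reorganize stored memories "offline"

def run_agent(memory):
    sensations = queue.Queue()
    threading.Thread(target=realtime_loop, args=(memory, sensations), daemon=True).start()
    threading.Thread(target=consolidation_loop, args=(memory,), daemon=True).start()
    return sensations                       # feed incoming sensations into this queue
```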

I hypothesize that in a dynamic cognition that correctly mirrors biological cognitive flow, if no attempt is made to consolidate memories, the efficiency of cognitive processes will steadily degrade over time. This would be due to the overloading of action delta data in the early virtual neuron layers of the sensory comparison and storage stacks for each dimensional modality.
In a sequential neural network, the rate at which new data is fed into the system for training is deterministic and fixed, and once trained, new evaluations happen independently of continuous training. But a real cognitive agent has no control over when sensory data arrives and must handle it in the moment...doing so re-weights low level network layers and thus reduces the efficiency of all the cognitive dimensional comparison tasks associated with those layers. The ability to accurately make predictions goes down as this noise piles up in the lower layers of the cognitive stack.
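A toy numerical illustration of that interference (not an experiment, and not the actual architecture): a single shared "low level layer" is nudged by naive online updates as unscheduled data arrives, and an association it previously encoded drifts away. The linear layer and the update rule here are stand-ins chosen only for the example.

```python
# Toy illustration of online re-weighting noise in a shared low level layer:
# the layer initially encodes an earlier association exactly, then unscheduled
# new samples force immediate weight updates, and the earlier prediction drifts.
# The linear layer and naive update rule are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # shared low level layer weights

old_input = rng.normal(size=4)
old_target = W @ old_input           # an association the layer already predicts exactly

# Unscheduled sensory data arrives and must be handled in the moment;
# each online update re-weights the shared layer toward the new sample.
for _ in range(100):
    x = rng.normal(size=4)
    y = rng.normal(size=4)
    W += 0.05 * np.outer(y - W @ x, x)

drift = np.linalg.norm(W @ old_input - old_target)
print(f"error on the earlier association after online updates: {drift:.3f}")
```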

So I posit that some kind of offline consolidation will be needed to push sensed memories deeper into the virtual neuron stack, allowing future predictions to be more accurate, assuming of course that the DCC above is the only valid control flow that can emerge "mind". A different control flow and implementing architecture may be able to forgo this need.

The fact that the control flow described above between the 4 nodes seems to replicate the need for an analog of sleep may be a sign that it is correct. Again, this is the simple diagram; the complex diagram has some particularly important symmetries identified that define how the cognitive engine would precisely flow between the different sensory modalities...getting those flows correct from the Salience node is critical to emerging cognition. I will not be publishing the complex flow as I plan on working to implement it over the next 10 years or so...the significance of the simple diagram is that it presents the key innovation of salience evaluation as a fundamental requirement of dynamic cognition while describing exactly where it goes in the cognitive flow diagram.

Comments

Moulton said…
David, it occurs to me that your model is congruent with a comparable model exploring the relationships of cognition, affect, and learning.
David Saintloth said…
That model is not a low level theory of cognition...it is addressing much higher level concepts than the salience theory does, which proposes solutions for the atoms of cognition (driven by salience evaluation) itself...which then, in their comparison dynamics, emerge high level learning heuristics.
Kaiser Basileus said…
Can you put this in words a 10th grader would understand?
