Rough road to dynamic cognition...

With the completion of the ADA (action delta assessment) algorithm, which expands Action Oriented Workflow from explicit workflow creation to implicit workflows inferred over time, I've laid out a sketch of an approach to building a fully dynamic cognitive agent: one that uses a statistical learning model like the one employed by ADA and that converges to stability in the emotional dimensions we must ensure before building such an agent. We must safeguard against instability for two reasons. First, once cognition emerges it will learn using the model of emotions we build into it. Emotions serve as sensory import factors that link the consistent metronome of autonomic signals to the comparison module of the brain, which matches the linked input sensory processing regions against the stored memory that maps to those senses. The second reason has to do with avoiding cognitive paranoia.

Avoiding paranoia

It may be easier than we think to create a constantly paranoid entity, continuously shocked by the experiences of the world. It is important that the mind can converge on and adapt to experiences at roughly the same rate as humans, or faster, in order to avoid interaction difficulties on our part. Refining a theory of emotion that can be used for this model is an ongoing task that I am publicly debating in posts here on my blog. In previous articles I've covered the importance I feel emotion has to emerging consciousness of the type we possess, and I have laid out roughly how it should be connected into the cognitive machine. In this post I will talk about emotional resolution: the idea that it is not only important for us to simulate emotions (and the key autonomic drivers that essentially baseline them), but that we must do so with enough fidelity, or resolution, across each emotion to enable the nuanced experience that social animals possess, which we had better build into our cognitive agents. In order for them to be willing to work for us they will need to feel they can work with us, and that won't happen if we do not build in the necessary emotional resolution.

Summary of sensation

The story revealed so far on sensation is this: it comes in from the world via our five traditional senses, gets shuttled to processing in the neocortex, is compared to stored memories of similar sensations (if present) from deep memory (hippocampus), and is referenced for import (emotional factor, amygdala) against current autonomic signals (brain stem, medulla). It then modulates action as the processed sensory bundle (it's more than "experience," so I can't call it that, and it is not what some neuroscientists call "qualia," so I can't call it that either) is compared in real time to the next momentary sensory bundle. Because all sensory experience requires the emotional import retrieval step, and emotional import spans the space of possible sensory emotions, I believe this gives us a clue as to how to model the algorithms correctly for creating the dynamic flow of continuous states of "experience," which to me exist in the echoes of sensory comparison that occur as external sensation is continuously compared to internal sensation (autonomics).
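The loop described above can be sketched in code. This is a minimal illustrative sketch only, under my own assumptions: the class names, the scalar representation of sensation, the import-retrieval rule, and the update constants are all hypothetical, not part of ADA or any published model.

```python
# Hypothetical sketch of the sensory-comparison loop: external sensation
# is compared against stored memory for emotional import, then against
# the internal autonomic signal, and the resulting "echo" updates memory.
from dataclasses import dataclass, field

@dataclass
class Agent:
    autonomic: float = 0.5                      # internal baseline signal
    memory: dict = field(default_factory=dict)  # sensation -> stored import

    def import_for(self, signal: float) -> float:
        # Retrieve the emotional import stored for the closest remembered
        # sensation; a wholly novel sensation defaults to neutral import.
        if not self.memory:
            return 0.0
        closest = min(self.memory, key=lambda s: abs(s - signal))
        return self.memory[closest]

    def step(self, signal: float) -> float:
        # Compare external sensation to internal (autonomic) sensation,
        # scaled by the retrieved emotional import.
        imp = self.import_for(signal)
        reaction = (signal - self.autonomic) * (1.0 + imp)
        # Fold the magnitude of the reaction back into stored import, so
        # the next comparison is colored by this one (the "echo").
        self.memory[signal] = 0.9 * imp + 0.1 * abs(reaction)
        return reaction
```

Note how a repeated sensation produces a slightly amplified reaction on its second presentation, because the import retrieved from memory is no longer neutral; that feedback is the point of the sketch, not the particular constants.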

Animal guide to emotional resolution

So to me it makes sense that as we look across animals of varying cognitive complexity we always see a correlation, not between neocortical area and intelligence (that is only a loose correlate), but in the mapping between the emotional centers and the processing regions deeper in the brain. I assert that smarter animals have higher resolution in assigning import, or autonomic meaning, to compared sensation, and that this massively increases the space of possible reactions to sensory experience. That set of possible reactions, coupled with processing comparison (the surface area of the neocortex), is what gives rise to very fluid, dynamic, emotionally variable cognition.

Autonomic meaning is retrieved from the internal physical drives of the system: hungry or sated, in pain (internally) or in pleasure. These drives would have to be simulated in some way in order to provide a metronome upon which all subsequent conscious emergence is clocked.
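One way to simulate such a metronome is as a set of slowly oscillating internal signals. The drive names, rates, and sinusoidal form below are illustrative assumptions on my part; the argument only requires some periodic internal baseline for external sensation to be compared against.

```python
# A minimal sketch of simulated autonomic drives acting as a "metronome".
# Each drive oscillates at its own rate between 0 (fully sated, no pain)
# and 1 (urgent), giving the internal clock against which external
# sensation is compared. Rates and drive names are hypothetical.
import math

class AutonomicDrives:
    def __init__(self):
        self.rates = {"hunger": 0.01, "pain": 0.002, "pleasure": 0.005}
        self.t = 0

    def tick(self) -> dict:
        # Advance the internal clock one step and report each drive's
        # current level, normalized into [0, 1].
        self.t += 1
        return {name: 0.5 * (1.0 + math.sin(rate * self.t))
                for name, rate in self.rates.items()}
```

The differing rates matter more than the exact waveform: because drives drift in and out of phase with one another, the same external sensation arrives against an ever-changing internal baseline, which is what makes the comparison dynamic rather than static.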

So then the question is asked: why would some animals have more emotional "resolution" than others? I think the answer is clear when we examine the social sphere. Social animals were paid in survival for being able to gauge the interpersonal nuance of other members of the group. Awareness of the states of "self" (itself an illusion of the dynamic cognitive process of comparison that all individuals engage in) in other beings would be enabled by the ability to identify those states: to recognize the slow brooding that leads to explosive anger, or the shifts in body language that might indicate movement into estrus. Individuals with high emotional resolution could more easily read the "shades" of import associated with the experience reports of others as they take in the world. This is obviously an evolutionarily advantageous ability in a social species, and so it propagated as it conferred survival advantages. Mammals, and more directly primates, were privy to a good deal of this type of selection as they evolved under ecological conditions of always being the underdog, at least until the ecological niches left empty by the extinction of most dinosaurs opened up. Forced to hide away in the brush, they were handed the first seeds of social living: when you can't run free you've got little choice but to stay with others like you, and doing so makes it important to identify and learn from their internal states of cognition.

Thus the social revolution has fueled the cognitive dynamism that defines mammal-like conscious states, and using it as a model we can best create agents that are in line with the social substrate that enables us to relate, in part, to one another. I am still thinking about whether empathy needs to be formally defined in this emotional module, or whether it is itself emergent: simply a reflection of a feedback loop between current comparison and previously stored emotional import, and then back to the sensory source in some way. Still lots to think about, but that is the next area to which I will devote more mind time.

