
On Consciousness: There is no "binding problem" (period).


Prologue:

Over the last few years, as I was privately working on extending the Action Oriented Workflow paradigm to include an implicit workflow capability, I had to do a great deal of sampling across neuroscience, comparative evolutionary brain history, and the current work in machine learning algorithms and approaches, in order to survey the landscape and understand, from a holistic vantage point, how to solve the problem.

In AOW as originally designed, workflows were constructed manually to allow potential User agents to serve on a "Stage" where they could or could not perform a requested "Action". The Actions were the atomic eight that I'd identified when I first constructed the paradigm, and they are in fact the basis of the 8 pointed Summa Star logo that defines the AgilEntity platform that implements AOW.

At the time, 2004, the systems available for building workflows for human to system to human business processes were needlessly complex, requiring code written in languages like BPML and other grammars. The solutions in place were overly complex in my view, mostly because they approached the problem from an application specific perspective rather than a general one. AOW eliminated that tedium by allowing manual construction of workflows and enabling business objects to be designed and built into AgilEntity via extensibility...thus allowing as arbitrarily complex a flow between object types and actions as necessary. Still, it left me unsatisfied, and I wondered:

"Is there a way to have the system discover the best workflows automatically and route actions between the agents discovered?"

It was about 2006, and I was focused at the time on building a second proof of concept application into the framework (business focused, web based collaboration), so I set aside the task of extending AOW for a later date. That date came after I fired myself from McGraw Hill on returning from Venezuela. In the intervening years I had mentally determined where I would modify the existing AOW code to provide what is called in the system "implicit" workflow (as opposed to the "explicit" workflow of manual construction that was the default innovation). Implicit workflows would directly utilize what I'd learned about machine learning, but relied even more on what I knew about *the brain*. I had always been fascinated by the workings of the brain, and its relative functional homogeneity (the cell types are mostly neurons and glia) was a strong hint to me of two things: 1) symmetry at many scales of operation, and 2) simplicity of the generalized approach. However, when I started reading about the work others were doing in the software space to try to create artificial minds, I was boggled by the over-complexity of the approaches.

Looking into a world of AI chaos

All types of mathematical models were used to try to propagate information through brains the same way signals are propagated through electronic systems. Neural networks with finite input modulation and overly complex statistical models rounded out the many approaches I'd read about...some with varied domains of success, but all abysmally bad at general autonomous learning. In a moment of insight I realized that the solution lay in replicating the neuronal patterns of connection independent of the structural representation of the neurons. After all, what neurons ultimately do is remodulate inputs and outputs to other neurons...the core function is remodulation, to arbitrarily fine grained levels, toward other memory elements. This pointed immediately to the simple algorithm that is the basis of the Action Delta Assessment (ADA) evaluation that would occur in the "implicit" workflow extension to AOW. My ability to arrive at that algorithm was made possible by my deep understanding of how brains are built upon neuronal connections, and more completely by what I'd been seeing in the bounty of fMRI studies being produced in the mid to late oughts, just when I was turning my gaze to the problem of extending AOW with an autonomous component.

Neuroscientists who don't code, or who never learn how computing systems operate, are absolutely deficient in their ability to understand how the mind works, because in computer systems we have already created efficient machinery for cognition. It was a computer scientist, Alan Turing, who in fact defined the limits of *ALL* possible forms of cognition.

On the computer side of things, computer scientists have been trapped in the mindset that cognition relies on fixed state transitions between known binary storage and processing elements (dumb!) when they should have been looking at the brain to see how it encodes information by simply reweighting values across dendritic connections to other neurons. There is nothing fixed about how they work...they are an n-scale mesh of possible ways to encode...whatever is being shuttled in from the senses.
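To make that contrast concrete, here is a minimal sketch (Python, with hypothetical names; it is not the ADA algorithm or any AgilEntity code) comparing a fixed state-transition lookup with a mesh of elements whose connection weights are re-modulated by each input presented to it:

```python
import random

# 1) The "fixed transition" view: behavior is a lookup over predetermined states.
FIXED_TRANSITIONS = {("idle", "stimulus"): "respond",
                     ("respond", "timeout"): "idle"}

def fixed_step(state, event):
    # Nothing is learned; the table never changes.
    return FIXED_TRANSITIONS.get((state, event), state)

# 2) The "reweighting" view: every input re-modulates the weights between
#    elements, so the encoding itself is reshaped by experience.
class Mesh:
    def __init__(self, n, rate=0.05):
        self.rate = rate
        # an n-scale mesh: every element potentially connected to every other
        self.w = {(i, j): random.uniform(-0.1, 0.1)
                  for i in range(n) for j in range(n) if i != j}

    def present(self, activity):
        # Hebbian-style re-modulation: co-active elements strengthen their
        # connection; all weights decay slightly toward zero.
        for (i, j), weight in self.w.items():
            self.w[(i, j)] = weight + self.rate * activity[i] * activity[j] - 0.001 * weight

mesh = Mesh(4)
mesh.present([1.0, 0.8, 0.0, 0.2])   # one "sensory" pattern nudges the whole mesh
```

The only point of the toy is that in the second view nothing is fixed; what gets encoded is whatever pattern of weights the incoming activity leaves behind.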

Needless to say, as a computer programmer who has written what I believe is the basis of a cortical algorithm that can emerge dynamic cognition (the term "artificial intelligence" is unnecessarily anthropomorphic), I am not surprised at all at the weakness of the citation flow between the research areas of computer science and neuroscience, as shown in the diagram below:



The binding problem that doesn't exist

Someone sufficiently at play in both the playpens of computer science and neuroscience would quickly conclude that there is no "binding problem". When I first came across this silly idea about 3 years ago, as I befriended philosophers on the social networks (philosophers also rarely take the time to roll up their sleeves and actually BUILD ANYTHING), I thought it must surely have been a joke or that I simply didn't understand it. Well, I understand it...and it is a view rooted in a dualist perspective that has zero evidence to substantiate it. It also ignores the fact that we have simulated consciousness already...the graphical user interface you are using to read this is a visual metaphor for the consciousness of your computer. It is a seemingly dynamic, ever present area for symbolic representations of computing structures, created in an ad hoc fashion to enable you to interact with the system. This is a precise analog of what consciousness is for living agents.

The illusion of a "binding problem" arises because the possibility space of connections between neuronal elements (called a qualia space in neuroscience research) has been troubling to many philosophers in particular. They ask: how is it possible for the real experiences of people to match the objective representations of our experiences as encoded, in seemingly ad hoc fashion, by the varied connections of the brain? This however is the wrong question; the cortical algorithm, because of its simplicity, gives rise to a great deal of homogeneity *between brains*, a homogeneity that gets ignored amid the very bit of mathematical legerdemain that makes our brains such good pattern finding tools.

So long as there is correlation between different brains in the aggregate of all the modulations between connections that define some experiential pathway, similar experience will emerge. It really is that simple...to ask why my blue is the same blue as yours misses the point that cognitively we are, in a sense, "tuned" to recognize similar "blue". It is only via extensive modification of our neural systems' function that we can effect real changes to this comparison process. How? Give one person a psychoactive drug and compare their perception of color to be convinced of this. I touched on this idea, critical to my finding a solution, in a blog post from a few years back.

What does this have to do with the so called Binding Problem? If there is sufficient variability in neuronal connections to encode nuances of similarity...such that two people can look up at the sky and see bears in star patterns...there is surely enough variability to encode similarity in other modes of perception. "Blue" is not "bound" between different experiences; it is "bound" by *mostly* similar connection patterns along the pathways of visual processing that lead to "blue" being perceived. This is a de facto truth: we know how the eye processes light, we know the importance of neurotransmitters in relaying that information to the visual cortex, and we know how the perceived image is broken down to be processed by the visual cortex. All of these actions happen within a pathway cone that is, in bulk, nearly identical for every person...that is, until something (the aforementioned psychoactive drug) dramatically changes a portion of the pathway and thus necessarily changes the "perception". If it is so easy to change such perception, then nothing is really "bound" at all.
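A toy numerical sketch (Python, with made-up tuning values) of the "mostly similar pathways" point: two observers whose blue pathways are tuned slightly differently still agree on a blue-ish wavelength, and shifting one pathway, the psychoactive-drug analogue, breaks that agreement:

```python
# Hypothetical tuning values chosen only for illustration.
def perceive(wavelength_nm, centre_nm, width_nm=40.0):
    # response falls off linearly with distance from the pathway's tuning centre
    response = max(0.0, 1.0 - abs(wavelength_nm - centre_nm) / width_nm)
    return "blue" if response > 0.5 else "not blue"

observer_a = 470.0                 # nominally blue-tuned pathway
observer_b = 474.0                 # slightly different wiring, similar tuning

sky = 472.0                        # a blue-ish wavelength in nanometres
print(perceive(sky, observer_a))   # -> blue
print(perceive(sky, observer_b))   # -> blue: correlated tuning, shared "blue"

shifted = observer_b + 60.0        # the same pathway chemically perturbed
print(perceive(sky, shifted))      # -> not blue: the perception changes
```

The agreement falls out of correlated tuning between the two pathways, not out of any shared "bound" blue.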

The other argument against this idea was alluded to by the desktop example above. It is an argument by analogy, but one that only those aware of the function and construction of computing devices would appreciate (as an electrical engineer I was privy to that knowledge years ago), and it stands as a very strong analogy between the brain, so called conscious experience, and computers. Those who assert that there is a "binding problem" in the brain would have to explain why there is no place where "blue" resides in a computer's registers, nor any place where "icon" or "window" or "folder" reside. They are abstractions given different visual form on different systems, nowhere "bound" internally to the system, yet they carry identical meaning across systems...with some variation via types...but there goes our pattern finding brain being awesome at what it does again. It's entirely too easy to get lost in pattern finding and thereby expect some "binding" point, but as explained earlier that is a superfluous aim.
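The same point can be sketched directly in code. The classes and renderers below are hypothetical, but they show how "folder" is an abstraction that resides in no single place; different systems give it different visual form while the meaning stays identical:

```python
class Folder:
    """The abstract concept: a named container of items."""
    def __init__(self, name, items=None):
        self.name = name
        self.items = list(items or [])

def render_text_desktop(folder):
    # one system's visual form
    return f"[{folder.name}] ({len(folder.items)} items)"

def render_graphical_desktop(folder):
    # another system's visual form for the same abstraction
    return f"folder icon: {folder.name} - {len(folder.items)} items"

docs = Folder("Documents", ["resume.txt", "notes.md"])

# Two different renderings, one meaning. Asking which register or memory
# address "is" the folder misses the point; only the pattern of relations
# (name, contents, behavior) carries the meaning.
print(render_text_desktop(docs))
print(render_graphical_desktop(docs))
```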

Finally, the concept of temporal flow has been left out of the work of people trying to build cognitive agents. Systems in the works use pattern matching algorithms, and only recently are some starting to distribute pattern finding (by using distributed sensation in robotics). Beyond that, the conscious mind is not a static environment; like the desktop, which only appears static, it is a dynamic moment to moment recreation built from constantly moving electrical signals and register state changes. The conscious brain is likewise a constantly moving landscape of abstractions of physical elements experienced externally, restored from internal memory, and evaluated via emotional and autonomic import modulation. The flow of consciousness must be simulated, and in so doing the idea of a "binding" problem again falls apart: if consciousness is a roiling sea of ideas arising from real time analysis of the world, it is only by emulating that same roiling sea using non biological means that we can emerge dynamic cognition of a similar sort.
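To gesture at what "simulating the flow" might look like in the simplest possible terms, here is an illustrative loop (Python, toy values only; not a claim about how the implicit workflow engine works): at every tick the "conscious" state is rebuilt from fresh sensation, restored memory, and a modulation factor standing in for emotional and autonomic import:

```python
import random

def sense():
    return random.random()        # stand-in for external sensory input

memory = [0.0]                    # stand-in for internally restored recall
modulation = 0.3                  # stand-in for emotional/autonomic weighting

for tick in range(10):
    recalled = memory[-1]
    # the "scene" is recomputed every moment; nothing static is ever stored as
    # "the" experience, only the ingredients for the next recreation
    state = (1 - modulation) * sense() + modulation * recalled
    memory.append(state)
    print(f"tick {tick}: state={state:.3f}")
```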

Here's my set of posts covering consciousness as popularly discussed by some "experts" in the independent thought islands of neuroscience and computer based artificial intelligence.

http://sent2null.blogspot.com/search?q=consciousness

I spent the last 2 years building and testing my algorithm, and I know it works because I've already tested it. I don't know for certain that I can tie it together in the multi dimensional ways necessary to build a dynamic cognitive agent, but I am pretty sure I can (thank you, fMRI studies of the last 4 years!) now that I've compiled what I believe is the right state diagram to induce the engine into action.

I got here because I studied philosophy, mathematics, hardware engineering, evolutionary biology, and neuroscience, and I write code...I am sure those who are also taking these steps will travel the same path, especially now that fMRI studies are so clearly explaining how the brain is internally wired...but I am glad I got here before such pictures were available, on the strength of the knowledge I gained exploring the various germane disciplines. Yet another validation of the importance of cross disciplinary study in illuminating new landscapes on the road to discovery.
