I suspect that at some point a kind of Heisenberg uncertainty principle will be found between the scale of "mind" (which I believe is fractal and scales in complexity as more and more sensory information is integrated) and the integration algorithm, which can be analog, digital, or anything in between. The analog of h-bar would be the constant interaction landscape of cognitive possibilities (qualia) that emerges from variation across the other two fundamental attributes.
So if I define the scale of mind as sM, the integration algorithm as iA, and the invariant qualia landscape as !Q, the relation will look something like
sM = !Q · iA, or equivalently sM / iA = !Q
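Typeset a little more explicitly (with \bar{Q} standing in for !Q, a purely notational substitution on my part), the proposed relation and its rearrangement are:

```latex
% Proposed relation between scale of mind (sM), integration algorithm (iA),
% and the invariant qualia landscape (\bar{Q} stands in for !Q).
\[
  sM \;=\; \bar{Q}\, iA
  \qquad\Longleftrightarrow\qquad
  \bar{Q} \;=\; \frac{sM}{iA}
\]
```

One immediate reading: for a fixed \bar{Q}, sM and iA must vary in direct proportion to one another.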
Observations:
sM will vary with dimension (the number of orthogonal sensory inputs), as it does in living minds. Some biological minds integrate sensory dimensions humans can't experience, for example the electrostatic field sensation of the platypus and the magnetic field sensation of some birds, in addition to the ones we share. This will modulate the perception of integrated information in the eventual "mind" that emerges.
iA will vary with how the sensed information is integrated into the substrate that integrates, processes, and stores the sampled sense data. This can be done by fully analog means, fully digital means, or a hybrid of the two; as long as the input sample space can match the input sensory resolution of the devices used to capture events outside the mind, I posit there will be no difference at the integrating device, which in biological brains is the synaptic connections between different types of neurons. (A toy numerical illustration of this sampling point follows these observations.)
!Q is the fixed qualia space for a given selection of sM and iA. I imagine that, if plotted, there will be an interesting symmetry to the variations in sM and iA that give rise to a fixed !Q. It is not a constant in the h-bar sense; rather, it is held constant in order to see how the same space emerges under variation of sM and iA.
In biology, iA varies little (as far as we can see, all animal brains use the same iA), but our attempts at creating digital cognition employ iA variation that is digital in how synaptic simulation is achieved.
In biology and in artificial sensation, sM varies greatly: from 6 known sensory landscapes in humans (counting balance as separate from hearing), to 7 or more in some birds, to all the different types of sensation on your smartphone, from GPS to orientation to sound to balance to Bluetooth to NFC to touch to vision.
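As a purely illustrative sketch of the sampling point above (my own toy example, with an invented band-limited signal, F_MAX, and sample rate, none of which come from the hypothesis itself), the same "sensory" stream is integrated once by an effectively analog route and once by a digital route whose sample rate comfortably exceeds the signal's resolution; the integrating stage sees essentially the same result either way.

```python
# Toy sketch only: "analog" vs "digital" integration of one band-limited
# sensory signal. The point illustrated: once the digital sample space
# matches the sensory resolution of the input, the integrating stage
# cannot tell the two routes apart.
import numpy as np
from scipy.integrate import quad, trapezoid

F_MAX = 5.0  # highest frequency present in the toy signal (Hz)

def signal(t):
    """A band-limited stand-in for a raw sensory stream."""
    return np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.cos(2 * np.pi * F_MAX * t)

# "Analog" route: adaptive quadrature, effectively continuous integration.
analog_value, _ = quad(signal, 0.0, 1.0)

# "Digital" route: sample well above the Nyquist rate (2 * F_MAX), then sum.
fs = 50 * F_MAX                          # samples per second
t = np.linspace(0.0, 1.0, int(fs) + 1)   # one second of samples
digital_value = trapezoid(signal(t), t)

print(f"analog  integral: {analog_value:.6f}")
print(f"digital integral: {digital_value:.6f}")  # matches to about 1e-5
```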
This is just a hypothesis, based on how these attributes appear to be related in real brains and on how our methods of simulating artificial minds are proceeding. I've not made any attempt at a proof yet; I might get to it a bit later as I start attacking the problem of creating a dynamic cognitive agent directly in a few years.
In the past I did not believe in the existence of a qualia space of experience, but my recent work on implicit Workflow and the Action Delta Assessment algorithm in Action Oriented Workflow has convinced me that the earlier position was wrong. Not only do I assert that qualia exist, but a given qualia landscape is a fulcrum about which variations in sensory type and integration algorithm are modulated. All of this is still only hypothesis; I have yet to find rigorous proof.
Links:
http://en.wikipedia.org/wiki/Qualia
http://en.wikipedia.org/wiki/Integrated_Information_Theory
Comments
While we as humans don't have echolocation, it still seems possible that the information generated by the bat's echolocation is represented in the bat's qualia space as visual information. Echolocation informs the bat primarily about the spatial relationships in its environment, implying that our visual representation of space may be similar to the bat's even though the information was "collected" by different means. There are bound to be differences (perhaps bats only see shape and not color), but whether or not bat consciousness is fundamentally different from ours is, I think, an open question.
The same goes for future AI: how does HAL represent the world to himself?
This is precisely why I mention:
"In biology and in artificial sensation sM varies greatly, from 6 (including balance as separate from hearing) known sensation landscapes in humans to 7 or more in some birds to all the different types of sensation on your smart phone. From GPS to orientation to sound to balance to bluetooth to UFC to touch to vision."
I account for each dimension of sensation as a basis along an orthonormal nexus of modulation, an "sM-tuple" if you will. All of its unique variations, divided by the iA, describe the total landscape of !Q, the super-space of qualia that emerges.
It could be that different combinations lead to the same *experience* of Qualia even if sM and iA are different...this is the interesting bit of it.
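A minimal sketch of what I mean here, with every number and the collapse-to-one-scale rule invented by me purely for illustration (the real sM-tuple and iA would be far richer objects): represent sM as a tuple of orthogonal sensory dimensions, reduce it to a single scale value, divide by an integration-resolution value standing in for iA, and note that different (sM, iA) combinations can land on the same toy !Q.

```python
# Toy sketch only: sM as a tuple of orthogonal sensory dimensions, iA as a
# single integration-resolution number, and !Q as their quotient per the
# proposed relation sM / iA = !Q. All values are invented for illustration.
from typing import Dict

def scale_of_mind(sm_tuple: Dict[str, float]) -> float:
    """Collapse an sM-tuple (per-dimension resolutions) into one scale number."""
    return sum(sm_tuple.values())

def qualia_landscape(sm_tuple: Dict[str, float], iA: float) -> float:
    """Toy !Q = sM / iA, following the hypothesised relation."""
    return scale_of_mind(sm_tuple) / iA

# A human-like sM-tuple (six dimensions) with a fine-grained iA ...
human = {"vision": 8, "hearing": 6, "touch": 5, "smell": 3, "taste": 2, "balance": 2}
# ... and a smartphone-like sM-tuple (more dimensions) with a coarser iA.
phone = {"camera": 8, "mic": 6, "touch": 5, "gps": 4, "orientation": 4,
         "bluetooth": 4, "nfc": 3, "ambient_light": 3, "proximity": 2}

print(qualia_landscape(human, iA=2.0))  # 13.0
print(qualia_landscape(phone, iA=3.0))  # 13.0: different sM and iA, same toy !Q
```

Whether such numerically identical configurations would correspond to the same *experienced* qualia is, of course, exactly the open question above.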
However, it's hard to imagine that the experience of echolocation in the mind of a bat could be anything like our experience of the senses we can sample experientially.
I would gather that the more divergent the sensory dimensions, the more divergent the manifolds of traversable qualia space are going to be.
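To make that intuition concrete in the crudest possible way (a Jaccard-style overlap score of my own choosing, offered only as a proxy, not as a claim about actual qualia geometry): the smaller the overlap between two minds' sensory dimension sets, the farther apart I would expect their traversable manifolds of !Q to be.

```python
# Toy proxy only: overlap of sensory-dimension sets as a crude stand-in for
# how close two minds' qualia manifolds might be. A score of 1.0 means
# identical sM-tuples; 0.0 means no shared sensory dimensions at all.
def sensory_overlap(dims_a: set, dims_b: set) -> float:
    return len(dims_a & dims_b) / len(dims_a | dims_b)

human = {"vision", "hearing", "touch", "smell", "taste", "balance"}
bat   = {"vision", "hearing", "touch", "smell", "taste", "balance", "echolocation"}
phone = {"camera", "mic", "touch", "gps", "orientation", "bluetooth", "nfc"}

print(sensory_overlap(human, bat))    # ~0.86: mostly shared dimensions
print(sensory_overlap(human, phone))  # ~0.08: almost no shared dimensions
```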
I suspect that anyone hoping to explore and discover deep relations here is going to have to study linear algebra and differential topology to wield the mathematical tools that may be necessary to tease out the fundamental relationships of cognitive and experiential landscapes.
As for me, I think engineering a mind doesn't necessarily need rigorous proof by theory before it can be attempted. In other posts I lay out the dangers present in that endeavor, which I feel are due solely to the potential for emerging pathological or sociopathic minds rather than stable ones. Asimov's rules are going to be more difficult to hardwire into a dynamic cognitive agent than many believe. I am going to expand on another interesting relationship I believe must exist in a future post.
I love the idea of an experience being a shape in qualia space because, being informational, it's digital, "perfect," and platonic, unlike shapes in the physical world, which are never truly "pure." (No circle in the world is ever actually a "perfect circle," for instance.) I suspect that, along with differential topology, non-metric n-dimensional geometry will also be an important field of study.
I've put forward more ideas on how to shape the formation of stable cognition on an artificial substrate in previous articles; if you haven't checked them out, you might find them intriguing.
Critical to the shaping process required during training will be the development of what I call emotional resolution. I just finished initial work on a statistical learning algorithm (ADA, Action Delta Assessment) applied to human workflow discovery, routing, and refinement, which formed the basis of further ideas in dynamic cognition months later.
Search "emotional resolution" and "ADA" to find those posts.