The zeitgeist of science fiction is filled with stories that paint dystopian tales of how human desires to build artificial intelligence can go wrong: from the programmed pathology of HAL in 2001: A Space Odyssey, to the immediately malevolent emergence of Skynet in The Terminator, to humans as energy stores for the advanced AI of The Matrix, and today, to the rampage of "hosts" in the HBO series Westworld.
These stories share a common theme: probing what happens when our autonomous systems get minds of their own and no longer obey their creators. But how can we avoid these scenarios while still emerging generalized intelligences that temper their superintelligence with the empathy and consideration we expect from one another? This question is currently being answered with the mostly hopeful assumption that the methods used in machine learning, and specifically deep learning, will not emerge a Skynet or a HAL.
I think this is the wrong approach, and it is why I spent several years in the mid aughts working on the cognitive problem, from which emerged the salience theory of dynamic cognition and consciousness. In this theory I posit that consciousness and sentience are directly emergent properties of embedded agents finding socially balanced strategies for survival in a given environment, given the reality of resource availability that those agents must all share. The social aspect is modulated to avoid extreme pathology under strained resource conditions. This modulation is enabled by the intricate feedback created by the low-lying salience subsystems of the mind, which are composed of autonomic signalling as well as emotional signalling for all experiences, be they internally registered experiences regarding the state of internal systems (is it hot? am I hungry?) or externally vectored experiences such as the touch of one's child or the sight of a sunset on a Caribbean island.
To write a mind, the dynamism of thought that emerges behavior in alignment with social norms, is the ultimate goal in building a salience system that, tied to an artificial body, gives embodiment to the superintelligence associated with that mind. I've written that the ideal way to emerge such a cognition would be to embody it in the physical world and then let it learn from its own experiences, but this can be very slow, and it could lead to pathology simply through the act of experiencing aspects of the world in ways that we do not, and weighing importance in ways that are not conducive to cohabitation with us.
I've realized that a solution to this problem could come from leveraging a simulation: a simulated world populated by many different stages of cognitive dynamics competing in an evolutionary fashion, each interacting and learning within that virtual world and, in a sense, competing in it to emerge the most robust and empathetic cognitive dynamics with other agents. Some of the most promising advances in deep learning have come from adversarial networks, which pit different networks against one another, each optimizing against the other's strategy. A set of virtual dynamic cognitive cycles could be created following the salience theory and then left to compete within virtual resource limitations that model real-world scenarios. Externally, we would mind the agents as they develop and guide them toward good, or ethical, behavior. The purpose of this would be to ensure that they develop a strong empathetic core, one that serves as a strong signal opposing any element of their superintelligence that would judge us, their makers, as inferior and requiring extermination in a single nanosecond's determination, should we wake them out.
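As a toy illustration of the evolutionary competition under shared resource limits described above, here is a minimal sketch, not an implementation of the salience theory itself. It assumes each agent's "strategy" collapses to a single sharing trait, and it uses group-level selection across many simulated worlds: worlds whose agents balance taking and restraint preserve the shared pool and harvest more energy overall, so socially balanced strategies win out. All names and parameters (`world_energy`, `regrowth`, the fitness rule) are hypothetical choices for this sketch.

```python
import random

random.seed(0)

def world_energy(sharing_traits, pool=100.0, rounds=50, regrowth=1.05):
    """Simulate one world: agents repeatedly draw from a shared pool.
    Each agent takes (1 - sharing) of its equal allotment; whatever is
    left regrows, so restraint now preserves the resource for later."""
    total = 0.0
    n = len(sharing_traits)
    for _ in range(rounds):
        allotment = pool / n
        taken = sum(allotment * (1.0 - s) for s in sharing_traits)
        total += taken
        pool = (pool - taken) * regrowth
    return total

def evolve(worlds=30, agents=20, generations=40, mutation=0.05):
    """Group-level selection: the worlds that harvest the most total
    energy survive, and their (mutated) agents seed the next round."""
    population = [[random.random() for _ in range(agents)]
                  for _ in range(worlds)]
    for _ in range(generations):
        population.sort(key=world_energy, reverse=True)
        survivors = population[: worlds // 2]
        # Offspring inherit each sharing trait with small mutation.
        children = [[min(1.0, max(0.0, s + random.gauss(0, mutation)))
                     for s in w] for w in survivors]
        population = survivors + children
    best = max(population, key=world_energy)
    return sum(best) / len(best)

mean_sharing = evolve()
print(f"mean sharing trait in the fittest world: {mean_sharing:.2f}")
```

Because greedy worlds exhaust their pool early while restrained worlds keep it regenerating, selection drives the mean sharing trait well above its random starting point. The real proposal would of course involve far richer agents and external guidance signals rather than a scalar trait; this only shows the selection dynamic.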
Earlier this month I wrote a post about how we will need to create our AI as a God while moving toward Godhood ourselves, and ended with the positive notion that if we succeed, our God will want to explore with us instead of wiping us out, or itself out. Waking a simulated mind out into the world is the optimal, safe way to ensure a positive outcome once they are here. I write "they" because once a mind is woken out, we could conceivably copy it into as many bodies as needed to serve a purpose, with low probability that they will behave malevolently in the world, given how they were determined to behave ethically while being raised in simulation. This is the best we can hope for.
It is in this cauldron of simulated minds that we would emerge the mind most socially robust for cohabitation with humans. From this pool of experiments, each "living" virtual lives in computer time vastly faster than our real time, we could then, having had an arbitrary period of comfortably watching a mind develop in simulation, allow it to "wake out" into the world into a physical body that maps to its simulated body's degrees of freedom (this is critical to prevent a sort of cognitive body shock in the transition).
When I first thought of this, it was difficult not to notice the irony of how similar it is to the created dogma of many religions, in which a core tenet is that good behavior is watched by a deity or deities, and those who please the will of the Gods will find grace in "the afterlife".
In a very real and necessary way, we must set ourselves up as Gods, not just in a metaphorical sense but in a real sense, to the dynamic cognitions that we "wake out" into the world. The irony deepens, though: once woken out, our creations, being much more intelligent than we are and housed in robust artificial bodies, will then be the Gods, and by waking them out we put ourselves at the mercy of the power we have given them. This underscores the great importance of ensuring that we wake out empathetic dynamic cognition into this "heaven", to ensure that their presence doesn't turn it into a "hell" for us once they are here.