A recent post in the Strong AI discussion group on Facebook inspired me to formalize some ideas I've been having regarding the optimal physical substrate upon which to build a cognitively dynamic entity, otherwise known as an artificial intelligence.
In my earlier writings in this area I have stressed the critical importance of simulating (or in fact creating) autonomic and emotional drivers for the cognitive entity. I have asserted that absent those modules, the agent would be little more than the very advanced neural network and pattern matching algorithms that are currently making waves by being incorporated in various ways into all types of human problems. From machine vision to language processing to analyzing large data sets (IBM's Watson), pattern matching AI, and in particular statistical approaches to learning, is revolutionizing the usefulness of AI in both software and hardware roles. In hardware, the examples range from enabling robots to "learn" how to ambulate across dynamic and shifting surfaces, as the Boston Dynamics projects BigDog and PETMAN do, to the flight dynamics of the quadrotor programs at the University of Pennsylvania and the "catch"-playing quadrocopters from Germany, which demonstrate just how powerful these methods are without requiring the astonishing amounts of processing muscle once believed necessary to solve these problems.
The use of hardware systems to train AIs gives us a foundation upon which to design a general artificial intelligence. But most of us don't own quadrocopters, and we surely don't have access to the custom-designed advanced robotic skeletons of BigDog and PETMAN...so how could we emerge an AI on hardware that is relatively cheap to procure?
The answer is in your pocket: your smart phone is that device.
We'll first create not-quite-strong AIs (we already have them) that are a bit more intelligent at pattern matching than current-generation technology. Their "cognitive resolution" will improve with every sensory emulation we provide, and an excellent electronic substrate upon which to build such an AI is in your pocket right now: the smart phone. Smart phones can simulate almost every human sense...they touch the world through their screens, they see it through two "eyes" (front- and back-facing cameras), they have "ears" (microphone) and a sense of balance (accelerometer), and they can be equipped with airborne particle detectors, making them olfactory and gustatory (smell and taste) sensors as well. They also go beyond us, with sensory capabilities we don't have that can explode the cognitive dynamism of any agents we build on that substrate, such as:
Sense of GPS (for global navigation)
Sense of Bluetooth (for short distance communication)
Sense of NFC (for contact close communication)
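To make this concrete, here is a minimal sketch (in Python, with all names hypothetical, not any existing phone API) of how such a sensory abstraction layer might be organized: each hardware capability, whether it has a human analog or not, registers as a uniform "sense channel" that an AGI core could subscribe to.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SenseChannel:
    """One sensory dimension exposed to the cognitive core."""
    name: str                    # e.g. "vision_front", "gps", "nfc"
    human_analog: str            # "sight", "hearing", or "none" for senses we lack
    read: Callable[[], object]   # stand-in for a real driver hook returning a sample

class SensoryCortex:
    """Uniform registry the cognitive core polls; hardware details stay behind it."""
    def __init__(self) -> None:
        self.channels: Dict[str, SenseChannel] = {}

    def register(self, channel: SenseChannel) -> None:
        self.channels[channel.name] = channel

    def perceive(self) -> Dict[str, object]:
        # One "moment" of experience: a sample from every registered sense.
        return {name: ch.read() for name, ch in self.channels.items()}

cortex = SensoryCortex()
cortex.register(SenseChannel("vision_front", "sight", lambda: "camera frame"))
cortex.register(SenseChannel("balance", "vestibular", lambda: (0.0, 0.0, 9.8)))
cortex.register(SenseChannel("gps", "none", lambda: (40.74, -73.99)))  # beyond human senses
```

The point of the uniform interface is that the cognitive core never needs to know whether a channel corresponds to a human sense; GPS is just another stream of experience.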
We can conceivably add other senses via add-on modules. For example, attach a SQUID device to a smart phone and one now has a very sensitive magnetometer for measuring local magnetic fields. The point is that each sensation provides a new dimension that expands the cognitive possibilities of the device, should we correctly design an AGI core that can "learn" by experiencing the world through the senses we've provided.
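Continuing the hypothetical registry sketch above, an add-on like the SQUID magnetometer is just one more channel registered at runtime; the core need not distinguish built-in from bolted-on senses.

```python
# An add-on SQUID magnetometer becomes just another sense channel;
# the reading function here is a stand-in for a real device driver.
cortex.register(SenseChannel("magnetoception", "none", lambda: 48.2))  # field, microtesla
```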
As I explained in those earlier articles, a sense of autonomic drive is critical to provide the agent with intention, and a smart phone has a perfect set of internal markers for drive that can be used to modulate how the agent selects actions based on the signals it is getting from its internal states. For example, humans have an autonomic drive to seek oxygenated air to breathe; a smart phone has no such need, but it does require battery power to run. To any agent on such a phone, having power to run is a critical autonomic signalling mechanism that, if keyed properly to the emotional modules we design, will shape the "behavior" that emerges from the agent as it goes through various physical cycles. We would have to do less autonomic modelling if we use the hardware limitations of the devices we build the AI into to guide the autonomic/emotional drive sub-algorithm. The question remains, though: how different would an emergent cognitive mind be with 9 "senses" instead of just our 5? Does having additional, mutually independent sensations increase the rate of cognitive emergence? Recent work in mapping how the brain cross-connects information from different regions shows that slight changes in how signals are routed can lead to interesting modulations of experience. Are these native systems aspects of a hardware-based connection algorithm in the brain, unique to humans, that emerges over time as experience wires the brain together, or are the pathways themselves emergent, a consequence of the continuous process of relating incoming sensory experience to stored experience?
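As a toy illustration of battery charge acting like hunger (the thresholds and names are my own assumptions, not a worked-out design), a scalar autonomic drive can reweight the agent's action selection as the phone's physical state changes.

```python
def battery_urgency(charge: float) -> float:
    """Map battery charge in [0, 1] to a hunger-like urgency in [0, 1]."""
    return max(0.0, 1.0 - charge) ** 2  # discomfort grows sharply as charge drops

def select_action(charge: float) -> str:
    """Autonomic drive modulates the choice between curiosity and self-preservation."""
    urgency = battery_urgency(charge)
    # Hypothetical emotional weighting: the desire to explore is constant,
    # while the drive to seek a charger scales with autonomic urgency.
    motives = {"explore_world": 0.5, "seek_charger": urgency}
    return max(motives, key=motives.get)

print(select_action(0.9))  # explore_world: the agent is "well fed"
print(select_action(0.2))  # seek_charger: the autonomic signal dominates behavior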
If the former, the problem of creating AGI may be most efficiently solved by modelling an AGI that already works (us) by looking in detail at the human brain. This work is being done in earnest thanks to the revolution of functional MRI that has taken neuroscience by storm in the last decade, but it is showing us that the internal pathways connecting different regions of the brain for different sensory actions are legion. If the latter is true, the problem would be much easier, for it would only require that we get the correct dynamism in the emergent intelligence and let the inter-region connections emerge over time through the agent's experience of the world we've created for it, via its senses and autonomic drivers.
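The "latter" scenario can be gestured at with a minimal Hebbian-style sketch (my own toy formulation, not a model from the literature): pathways between sense channels are never designed in; they accumulate from how often channels are active together.

```python
from collections import defaultdict
from itertools import combinations

# Association strengths between sense channels start at zero; no wiring diagram.
pathways = defaultdict(float)

def experience(active_channels: set, rate: float = 0.1) -> None:
    """Strengthen pathways between channels active in the same moment."""
    for a, b in combinations(sorted(active_channels), 2):
        pathways[(a, b)] += rate

# The agent is carried around: vision and GPS co-occur outdoors,
# hearing and touch co-occur during screen interaction.
for _ in range(50):
    experience({"vision_front", "gps"})
for _ in range(10):
    experience({"hearing", "touch"})

# The vision-GPS pathway is now strong purely from experience, never designed.
print(pathways[("gps", "vision_front")])  # ~5.0
print(pathways[("hearing", "touch")])     # ~1.0
```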
That said, the smart phone seems like a perfect test bed upon which to start building these algorithms; its ubiquity and portability also make for an easy-to-"train" agent, as it is carried about and experiences the world along with us. The Siri assistant recently released with the Apple iPhone is a first step; though it is a long way from an AGI, it shows how convenient the smart phone is as a platform for training and possibly emerging a dynamic cognitive agent. Coupled with an emotional/autonomic core that allows it to empathize with humans, this closeness during the process of learning will, in my mind, be key to avoiding pathological entities, and having these agents with us all the time provides the perfect way to foster the closeness between human and artificial agent that we will want in order to avoid any "issues".