
When your smart phone comes alive.

A recent post in the Strong AI discussion group on Facebook inspired me to formalize some ideas I've been having regarding the optimal physical substrate upon which to build a cognitively dynamic entity, otherwise known as an artificial intelligence.

In line with my earlier writings in this area, I have stressed the critical importance of simulating (or in fact creating) autonomic and emotional drivers for the cognitive entity. I have asserted that absent those modules the agent would be little more than the very advanced neural network and pattern matching algorithms that are currently making waves by being incorporated, in various ways, into all types of human problems. From machine vision to language processing to analyzing large data sets (Watson), pattern matching AI, and in particular statistical approaches to learning, is revolutionizing the usefulness of AI in both software and hardware roles. In hardware the examples range from enabling robots to "learn" how to ambulate across dynamic, shifting surfaces, as the Boston Dynamics projects BigDog and PetMan do, to the flight dynamics of the quadrocopter programs from the University of Pennsylvania and the "catch"-playing quadrocopters from Germany. These demonstrate just how powerful the methods are without requiring the astonishing amounts of processing muscle once believed necessary to solve these problems.

Training AIs on hardware systems gives us a foundation upon which to design a general artificial intelligence. But we don't all own quadrocopters, and we certainly don't have access to the custom-designed advanced robotic skeletons of BigDog and PetMan...so how could we emerge an AI on hardware that is relatively cheap to procure?

The answer is in your pocket: your smart phone is that device.

We'll first create not-quite-strong AIs (we already have some) that are a bit more intelligent at pattern matching than current-generation technology. The "cognitive resolution" will improve with each additional sensory emulation we provide to the agents, and an excellent electronic substrate upon which to build such an AI is in your pocket right now: the smart phone. Smart phones can simulate almost every human sense. They touch the world through their screens; they see the world through two "eyes" (front- and back-facing cameras); they have "ears" (microphone) and a sense of balance (accelerometer); and, equipped with airborne particle detectors, they can serve as both olfactory and gustatory (smell and taste) sensors. They also go beyond us, with sensory capabilities we don't have that can be used to explode the cognitive dynamism of any agents we build on that substrate, such as:

Sense of GPS (for global navigation)

Sense of Bluetooth (for short distance communication)

Sense of NFC (for contact close communication)



We can conceivably add other senses as well via add-on modules. For example, attach a SQUID device to a smart phone and one now has a very sensitive magnetometer for measuring local magnetic fields. The point is that each sensation provides a new dimension that expands the cognitive possibilities of the device, should we correctly design an AGI core that can "learn" by experiencing the world through the senses we've provided to it.
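One way to picture this "each sensation adds a dimension" idea is to treat every phone sensor as an independent sense channel feeding a shared percept vector, so that bolting on a new module (GPS, NFC, a magnetometer) just widens the vector without changing the agent's core. The sketch below is purely illustrative: the channel names and value ranges are hypothetical, not real device specifications.

```python
class SenseChannel:
    """One phone sensor treated as a sense, normalized into [0, 1]."""

    def __init__(self, name, low, high):
        self.name = name
        self.low, self.high = low, high

    def normalize(self, raw):
        # Clamp the raw reading to the channel's range, then scale to [0, 1].
        raw = max(self.low, min(self.high, raw))
        return (raw - self.low) / (self.high - self.low)


# The phone's "senses": sight, hearing, balance, plus senses we lack,
# like GPS and a hypothetical SQUID magnetometer add-on.
# Ranges are made up for illustration.
CHANNELS = [
    SenseChannel("vision_brightness", 0, 255),
    SenseChannel("hearing_db", 0, 120),
    SenseChannel("balance_accel_ms2", 0, 20),
    SenseChannel("gps_speed_ms", 0, 50),
    SenseChannel("magnetometer_uT", 0, 100),
]


def percept_vector(readings):
    """Fuse one raw reading per channel into a single percept vector.

    Adding a new sense = appending a channel; the agent's core that
    consumes the vector need not change.
    """
    return [ch.normalize(readings[ch.name]) for ch in CHANNELS]


readings = {
    "vision_brightness": 128,
    "hearing_db": 60,
    "balance_accel_ms2": 9.8,
    "gps_speed_ms": 1.4,
    "magnetometer_uT": 50,
}
print(percept_vector(readings))
```

On a real device these readings would come from the platform's sensor APIs; the design point is only that cognition built on the fused vector scales naturally as senses are added.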

As I explained in those earlier articles, a sense of autonomic drive is critical to providing the agent with intention, and a smart phone has a perfect set of internal markers for drive that can be used to modulate how the agent selects actions based on the signals it gets from its internal states. For example, humans have an autonomic drive to seek oxygenated air to breathe; a smart phone has no such need, but it does require battery power to run. To any agent on such a phone, having power to run is a critical autonomic signalling mechanism that, if keyed properly to the emotional modules we design, will shape the "behavior" that emerges from such an agent as it goes through various physical cycles. We would have to do less autonomic modelling if we let the hardware limitations of the devices we build the AI into guide the autonomic/emotional drive sub-algorithm.

The question remains, though: how different would a cognitively emergent mind be if it had nine "senses" instead of our five? Does having additional, mutually independent sensations increase the rate of cognitive emergence? Recent work in mapping how the brain cross-connects information from different regions shows that slight changes in how signals are routed can lead to interesting modulations of experience. Are these native systems aspects of a hardware-based connection algorithm in the brain, unique to humans and emerging over time as experience wires the brain together, or are the pathways themselves emergent, a consequence of the continuous process of relating incoming sensory experience to stored experience?
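The battery-as-autonomic-drive idea above can be sketched very simply: as charge drops, a "seek power" drive rises and competes with the value of whatever the agent is doing, biasing action selection. The function names, the quadratic urgency curve, and the two actions below are all illustrative assumptions, not a real design.

```python
def power_drive(battery_frac):
    """Urgency to seek power, rising nonlinearly as the battery empties.

    battery_frac: remaining charge in [0, 1]. A full battery yields ~0
    drive; an empty one yields maximal drive.
    """
    return (1.0 - battery_frac) ** 2


def select_action(battery_frac, task_value):
    """Pick between the current task and recharging.

    task_value: how rewarding the agent judges its current task, in [0, 1].
    The autonomic signal competes directly with the task's value, so the
    hardware limitation itself shapes the emergent "behavior".
    """
    drive = power_drive(battery_frac)
    return "seek_charger" if drive > task_value else "continue_task"


# A nearly full battery barely perturbs behavior; a nearly empty one
# overrides even a moderately valued task.
print(select_action(0.9, 0.3))
print(select_action(0.1, 0.5))
```

In a fuller design this drive would be one input among several (temperature, signal strength, storage pressure) feeding the emotional module, rather than a hard switch.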

If the former, the problem of creating AGI may be most efficiently solved by modelling an AGI that already works (us), by looking in detail at the human brain. This work is being done in earnest thanks to the functional MRI revolution that has taken neuroscience by storm over the last decade, but it is showing us that the internal pathways connecting different regions of the brain for different sensory actions are legion. If the latter is true, the problem would be much easier, for it would only require that we get the correct dynamism in the emergent intelligence and let the connections between regions emerge over time through the agent's experience, via its senses and autonomic drivers, of the world we've created for it.

That said, the smart phone seems like a perfect test bed upon which to start building these algorithms; its ubiquity and portability also make for an easy-to-"train" agent, as it is carried about and experiences the world along with us. The Siri assistant recently released with the Apple iPhone is a first step; though it is a long way from an AGI, it shows how convenient the smart phone is as a platform for training, and possibly emerging, a dynamic cognitive agent on that device substrate. Along with an emotional/autonomic core that allows the agent to empathize with humans, this closeness during the process of learning will, in my mind, be key to avoiding pathological entities; having these agents with us all the time provides the perfect way to foster a closeness between human and artificial agent that we will want to exist in order to avoid any "issues".



Comments

Brilliant post! I'd add the caveat that SQUID magnetometers, as superconductors, have to be supercooled, likely precluding their integration into low-power, distributed mobile devices. We can always hope for room-temperature superconductors, an advance that would revolutionize more fields than one. :)
