
When your smart phone comes alive.

A recent post in the Strong AI discussion group on Facebook inspired me to formalize some ideas I've been having regarding the optimal physical substrate on which to build a cognitively dynamic entity, otherwise known as an artificial intelligence.

In my writings in this area I have stressed the critical importance of simulating (or in fact creating) autonomic and emotional drivers for the cognitive entity. I have asserted that, absent those modules, the agent would be little more than the very advanced neural network and pattern matching algorithms that are currently making waves by being incorporated into all types of human problems. From machine vision to language processing to analyzing large data sets (Watson), pattern matching AI, and in particular statistical approaches to learning, is revolutionizing the usefulness of AI in both software and hardware roles. In hardware the examples range from enabling robots to "learn" how to ambulate across dynamic and shifting surfaces, as done by the Boston Dynamics projects BigDog and Petman, to the flight dynamics of the quadrocopter programs from the University of Pennsylvania and the "catch"-playing quadrocopters from Germany. These demonstrate just how powerful these methods are without requiring the astonishing amounts of processing muscle once believed necessary to solve such problems.

The use of hardware systems to train AIs gives us a foundation upon which to design a general artificial intelligence. But we don't all own quadrocopters, and we certainly don't have access to the custom-designed advanced robotic skeletons of BigDog. So could we emerge an AI on hardware that is relatively cheap to procure?

The answer is in your pocket: your smart phone is that device.

We'll first create not-quite-strong AIs (we already have some) that are a bit more intelligent at pattern matching than current-generation technology. The "cognitive resolution" will improve the more sensory emulations we provide to the agents, and an excellent electronic substrate upon which to build an AI is in your pocket right now: the smart phone. Smart phones have the ability to simulate almost every human sense. They can touch the world through their screens; they see the world through two "eyes" (front- and rear-facing cameras); they have "ears" (microphone) and a sense of balance (accelerometer); and they can be equipped with airborne particle detectors, making them both olfactory and gustatory (smell and taste) sensors. They also go beyond us, with sensory capabilities we don't have that can be used to explode the cognitive dynamism of any agents we build on that substrate, such as:

Sense of GPS (for global navigation)

Sense of Bluetooth (for short distance communication)

Sense of NFC (for contact close communication)

We can conceivably add other senses as well via add-on modules. For example, attach a SQUID device to a smart phone and one now has a very sensitive magnetometer for measuring local magnetic fields. The point is that each sensation provides a new dimension that expands the cognitive possibilities of the device, should we correctly design an AGI core that can "learn" by experiencing the world through the senses we've provided to it.
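To make the idea concrete, here is a minimal sketch of how all of a phone's channels, human-like and otherwise, could be registered behind one uniform interface that an agent polls for observations. The class names and stub readings are hypothetical illustrations, not real device APIs; on an actual phone each `read` callable would wrap the platform's sensor framework.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Sense:
    """One sensory channel on the phone (camera, mic, GPS, NFC, ...)."""
    name: str
    read: Callable[[], List[float]]  # returns the channel's current reading

@dataclass
class SensoryBus:
    """Collects every registered channel into one observation the agent consumes."""
    senses: Dict[str, Sense] = field(default_factory=dict)

    def register(self, sense: Sense) -> None:
        self.senses[sense.name] = sense

    def observe(self) -> Dict[str, List[float]]:
        # One snapshot across all channels; each channel is a new
        # "dimension" of the agent's experience.
        return {name: s.read() for name, s in self.senses.items()}

# Stub readings stand in for real device drivers.
bus = SensoryBus()
bus.register(Sense("camera_front", lambda: [0.0] * 4))
bus.register(Sense("microphone", lambda: [0.0] * 2))
bus.register(Sense("accelerometer", lambda: [0.0, 0.0, 9.8]))
bus.register(Sense("gps", lambda: [40.7, -74.0]))

obs = bus.observe()
print(sorted(obs))  # ['accelerometer', 'camera_front', 'gps', 'microphone']
```

Adding a new sense, the hypothetical SQUID magnetometer included, is then just one more `register` call, which is the point: the agent's sensory dimensionality grows without redesigning its core.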

As I explained in those earlier articles, a sense of autonomic drive is critical to provide the agent with intention, and a smart phone has a perfect set of internal markers for drive that can be used to modulate how the agent selects actions based on the signals it is getting from its internal states. For example, humans have an autonomic drive to seek oxygenated air to breathe; a smart phone has no such need, but it does require battery power to run. To any agent on such a phone, having power to run is a critical autonomic signalling mechanism that, if keyed properly to the emotional modules we design, will shape the "behavior" that emerges from the agent as it goes through its physical cycles. We would have to do less autonomic modelling if we use the hardware limitations of the devices themselves to guide the autonomic/emotional drive sub-algorithm.

The question remains, though: how different would an emergent cognitive mind be with 9 "senses" instead of just our 5? Does having additional, mutually independent sensations increase the rate of cognitive emergence? Recent work in mapping how the brain cross-connects information from different regions shows that slight changes in how signals are routed can lead to interesting modulations of experience. Are these native aspects of a hardware-based connection algorithm in the brain that is unique to humans and emerges over time as experience wires the brain together, or are the pathways themselves emergent, a consequence of the continuous process of relating incoming sensory experience to stored experience?
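A minimal sketch of the battery-as-autonomic-drive idea might look like the following. The urgency curve and threshold are my own illustrative assumptions, not a claim about how such a module must be built; the intent is only to show an internal hardware state overriding exploratory behavior the way air hunger overrides everything else in us.

```python
def autonomic_urgency(battery_level: float) -> float:
    """Map battery charge (0.0 to 1.0) to an urgency signal that rises
    sharply as power runs low, loosely analogous to air hunger."""
    assert 0.0 <= battery_level <= 1.0
    return (1.0 - battery_level) ** 2

def select_action(battery_level: float, threshold: float = 0.5) -> str:
    """Let the autonomic signal gate action selection: above the
    urgency threshold the agent drops everything and seeks a charger."""
    if autonomic_urgency(battery_level) > threshold:
        return "seek_power"
    return "explore"

print(select_action(0.9))  # explore
print(select_action(0.1))  # seek_power
```

Note that the urgency function is where the "emotional module" would hook in: shaping that curve, rather than hand-modelling drives from scratch, is what lets the hardware's own limitations do the autonomic work.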

If the former, the problem of creating AGI may be most efficiently solved by modelling an AGI that already works (us), by looking in detail at the human brain. This work is being done in earnest thanks to the functional MRI revolution that has taken neuroscience by storm in the last decade, and it is showing us that the internal pathways connecting different regions of the brain for different sensory actions are legion. If the latter is true, the problem would be much easier, for it would only require that we get the correct dynamism in the emergent intelligence and let the problem of connecting regions resolve itself over time through the agent's experience of the world via its senses and autonomic drivers.

That said, the smart phone seems like a perfect test bed upon which to start building these algorithms, and its ubiquity and portability make for an easy-to-"train" agent, as it is carried about and experiences the world along with us. The Siri assistant recently released with the Apple iPhone is a first start; though it is a long way from an AGI, it shows how convenient the smart phone is as a platform for training and possibly emerging a dynamic cognitive agent. Paired with an emotional/autonomic core that allows it to empathize with humans, this closeness during the process of learning will, in my mind, be key to avoiding pathological entities, and having these agents with us all the time provides the perfect way to foster the closeness between human and artificial agent that we will want to exist to avoid any "issues".


Brilliant post! I'd add the caveat that SQUID magnetometers, as superconductors, have to be supercooled, likely precluding their integration into low-power, distributed mobile devices. We can always hope for room-temperature superconductors, an advance that would revolutionize more fields than one. :)
