
AI, robots, Action sensation....all in the same solution stew.


The image above shows a plastinated human brain and central nervous system along with its distributed nerves. It fascinates me that so many researchers and thinkers in machine learning and artificial intelligence are factoring out the importance of this distributed sensation and memory network when attempting to create highly agile and responsive intelligences.

We can in fact see hints at the need for externalized sensory capability in an intelligence when we look at the cutting edge techniques in robotics. For nearly 40 years the problem was approached with the idea that you can computationally determine all the necessary motions of external limbs in order to dynamically balance and ambulate walking robots...but it turned out that this top down approach was not only extremely difficult, requiring massive amounts of computational power, it was also doomed to always be less efficient from a power utilization standpoint and simply not nearly as dexterous for free flowing motions (ones where a precise path of motion is not followed).

The pieces started coming together in the mid '80s, out of the work of people at MIT who started thinking seriously about how nature does these things. The Leg Lab was famous for producing attempts at ambulation that mimicked animals and insects in various ways to reduce the necessary degrees of freedom, and thus the computation required for ambulation with steady balance, but these were still not getting it right.

The key insight came from Marc Raibert, then at the Leg Lab. He reasoned, quite obviously, that if ants and roaches can ambulate their limbs at astonishing speed while having barely a few thousand neurons for brains, there must be something else to how they do it.

He went to work building distributed sensation into his robots, allowing the limbs to meter the degrees of freedom and thus reduce the complexity of the ambulation calculations...however the real innovation came when statistical learning algorithms were combined with these distributed sensors on the limbs and made their debut.

These methods allow for a massive collapse in computational requirements by simply training the limbs to "replay" previously stored successful movements for a priori sensed body and limb positions. This allows the robot to "remember how to walk" rather than "compute how to walk" for every ambulation cycle. In the early 2000s many teams applied genetic programming to train mechanical robots in virtual environments, allowing them to build statistical maps of successful ambulation for the terrain encountered...and here we are in 2013, seeing the fruit of these advanced methods in the work of Raibert (now head of Boston Dynamics).

Big Dog

PetMan

Little Dog

Atlas

All of these use the critical insights of distributed sensation and statistical learning to reduce computational complexity by orders of magnitude, BUT all the proof we need that current methods still have a lot wrong is that no robot today is as fast or agile on uneven or mixed terrain as a roach.
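For a sense of what "replaying" stored movements can look like, here is a minimal sketch under the simplest possible scheme: record (sensed state, joint command) pairs from successful gait cycles, then at run time look up the nearest stored state and reuse its command instead of solving the dynamics every cycle. All names and numbers are hypothetical illustrations, not any of these robots' actual controllers.

import numpy as np

class GaitMemory:
    """Toy 'replay' controller: remember what worked, look it up later."""

    def __init__(self):
        self.states = []    # sensed body/limb configurations seen during training
        self.commands = []  # joint commands that succeeded from those states

    def record(self, sensed_state, joint_command):
        # store one successful (state, command) pair, e.g. from a simulated run
        self.states.append(np.asarray(sensed_state, dtype=float))
        self.commands.append(np.asarray(joint_command, dtype=float))

    def replay(self, sensed_state):
        # return the stored command whose recorded state is closest to the
        # current sensor reading -- a cheap lookup instead of solving dynamics
        query = np.asarray(sensed_state, dtype=float)
        distances = [np.linalg.norm(query - s) for s in self.states]
        return self.commands[int(np.argmin(distances))]

# training phase (e.g. in a virtual environment): record what worked
memory = GaitMemory()
memory.record(sensed_state=[0.10, 0.00, 9.8], joint_command=[0.3, -0.2])
memory.record(sensed_state=[0.40, 0.10, 9.7], joint_command=[0.1,  0.5])

# runtime phase: no per-cycle dynamics computation, just a lookup
print(memory.replay([0.12, 0.01, 9.79]))   # -> [ 0.3 -0.2]

The point of the sketch is the shape of the computation: a cheap nearest-neighbor lookup over remembered successes in place of a full dynamics solve for every ambulation cycle.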

To me this means quite simply that either the type or the amount of distributed sensation necessary to achieve that level of dexterity has not yet been discovered. I don't think it has anything to do with computational muscle...which at this point is overkill for the problem by orders of magnitude.

The work I've been doing with the Action Oriented Workflow algorithm is very much related to these ideas. AOW is a generalized algorithm for defining arbitrary "action" attributes, which are then sampled in as dense or sparse a set as necessary to gather data on when those actions are performed. The Action Delta Assessment algorithm is the underlying statistical learning function, allowing historical information on action execution to be compared in real time in a distributed fashion...precisely what is needed to refine ambulation in robotic limbs. It is quite simple to see that mechanical ambulation of limbs is directly analogous to this. I have some ideas on how I can apply the algorithm to discover and learn these patterns, but I am focused on applying AOW to the abstract action space of interacting human and business objects in software (to learn from and refine). The fractal nature of the algorithm makes it ideal for solving problems over very messy data sets once a sufficient level of resolution to the "action" points is reached.
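As a rough illustration of the flavor of comparison described here (not the actual AgilEntity/ADA implementation; the class, action names, and metric below are hypothetical), a distributed node could keep running statistics per action and score how far each new execution deviates from its own history:

from collections import defaultdict
import math

class ActionDelta:
    """Toy per-action running statistics; scores how unusual a new execution is."""

    def __init__(self):
        # per-action running count, mean, and sum of squared deviations (Welford's method)
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])

    def assess(self, action_name, measurement):
        # score the new measurement against this action's history (in std devs),
        # then fold it into the running statistics
        n, mean, m2 = self.stats[action_name]
        variance = m2 / (n - 1) if n > 1 else 0.0
        delta_score = abs(measurement - mean) / math.sqrt(variance) if variance > 0 else 0.0

        n += 1
        d = measurement - mean
        mean += d / n
        m2 += d * (measurement - mean)
        self.stats[action_name] = [n, mean, m2]
        return delta_score

ada = ActionDelta()
for seconds in [1.1, 0.9, 1.0, 1.2]:           # historical execution times for one action
    ada.assess("approve_invoice", seconds)
print(ada.assess("approve_invoice", 3.0))      # large score = execution far from its history

Because each node only needs its own running statistics, the comparison can happen locally and in real time, which is the property that maps onto distributed sensors on limbs.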

Links:

http://sent2null.blogspot.com/2009/04/agilentity-architecture-action-oriented.html

http://en.wikipedia.org/wiki/Statistical_learning_theory

http://www.bostondynamics.com/

http://en.wikipedia.org/wiki/Marc_Raibert

http://sent2null.blogspot.com/2012/02/with-completion-of-ada-action-delta.html
