
Dynamic Cognition in babies, in the abstract

A recently published article reveals a truth about the cognitive powers of young human babies relative to their equal-aged primate cousins, but it also reveals another tantalizing finding: the babies had more developed powers of abstract reasoning than children just a few years older.

As I read this I was immediately struck by a possible explanation, one that comes out of what has been theorized about how the brain encodes information in neuronal and other connections, and out of how the current field of artificial intelligence is proceeding apace to solve various kinds of identification and categorization problems using classification algorithms of various types.

Classification


First, what is classification? In the machine learning space, classification is the process of gathering and sorting bits of information by specific attributes of relatedness along some dimension of salience. Classification algorithms take samples of the given data and attempt to make sense of that data by grouping elements together. For example, a classifier that works on audio data may separate sounds into specific frequency components and then store the sound information in boxes of unique frequency. It may do the same with amplitude and thus decompose the original sound signal into its components. Identifying various types of patterns in sound is then made easier by looking at them as decomposed sets of data.
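To make the audio example concrete, here is a minimal sketch in Python (assuming only numpy) that decomposes a toy signal into frequency components and sums its energy into coarse frequency "boxes"; the signal, sample rate, and bin edges are invented for illustration.

```python
# A minimal sketch of frequency-based decomposition of a sound,
# assuming numpy; the signal and bin edges are made up for illustration.
import numpy as np

sample_rate = 8000                      # samples per second (assumed)
t = np.arange(0, 1.0, 1.0 / sample_rate)
# A toy signal: a 440 Hz tone plus a quieter 1200 Hz tone.
signal = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1200 * t)

# Decompose into frequency components with an FFT.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)

# "Store the sound information in boxes of unique frequency":
# sum the energy falling into each coarse frequency band.
band_edges = [0, 300, 600, 1000, 2000, 4000]   # Hz, arbitrary bins
band_energy = [
    spectrum[(freqs >= lo) & (freqs < hi)].sum()
    for lo, hi in zip(band_edges[:-1], band_edges[1:])
]
print(dict(zip(zip(band_edges[:-1], band_edges[1:]), band_energy)))
```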

The same can be done in visual classification problems. Machine learning researchers are trying to describe the content that may be present in, say, captured frames of a webcam video. Where are the people? How do we read the emotion on their faces? What is a wall and what is a floor? These questions are in principle answered by classifying the frame data in various ways. One way could be to identify swift changes in contrast and note whether those areas of the image move across the screen in various ways across frames (and thus encode that temporal data as part of the visual correlation). One could also decompose the visual data into the chromatic and luminance components of the pixels that compose each frame and mine for patterns that consistently present when specific items are on the screen...but ultimately this too is classification.
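A similar sketch for the visual case, again assuming numpy and using random arrays as stand-ins for two consecutive webcam frames: it flags swift luminance changes between frames and splits each pixel into rough luminance and chroma components. The threshold and luma weights are illustrative choices, not part of the original article.

```python
# A minimal sketch of two of the visual cues mentioned above, assuming
# numpy and two consecutive RGB frames as uint8 arrays of shape
# (height, width, 3); real frames would come from a camera.
import numpy as np

frame_prev = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
frame_curr = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)

def luminance(frame):
    # Rough luma approximation from the RGB pixel components.
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# Cue 1: swift changes between frames (temporal contrast).
motion = np.abs(luminance(frame_curr) - luminance(frame_prev))
moving_mask = motion > 30            # arbitrary threshold

# Cue 2: decompose each pixel into luminance and a crude chroma term.
luma = luminance(frame_curr)
chroma = frame_curr.astype(float) - luma[..., None]

print("pixels flagged as moving:", int(moving_mask.sum()))
```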

For any dimension of sensory input you care about, some classification scheme can be devised. So what does this have to do with a baby being better at some types of abstraction than a child a few years older?

Well, from the moment a child starts forming a brain, the brain is knitting together reality via encoded experiences. I've asserted in earlier posts that the somatosensory sense is likely the most important sense for grounding a developing human (or any animal), the reason being that the embodiment of physical self forms the foundation upon which subsequent sensory abstractions are pinned.

However, the process of building up the maps of experience that correspond to the classification tasks indicated above in sound and vision analysis takes time, and it initially starts with a lot of noise as the mind connects random bits of information from a very small sample set. The more experiences are built into the ever-connecting network of neuronal connections in the particular areas of the brain tasked with processing a given sensory dimension, the more difficult it becomes to traverse the hierarchy back to the root of any given abstraction. For example, a baby starts out visually recording very simple concepts, since it is incapable of making sense of anything more complex (it literally has no grounding), and so its mind starts FROM abstraction in all things and then over time refines concrete representation and labeling.

This makes sense: as the brain becomes more densely connected over time with bits of information about the world, it necessarily takes longer to identify the classes of things (which requires comparison across large sets of related variants) than particular instances of things, which require only a local estimation of specific differences within the larger class.
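A toy illustration of this cost asymmetry (not a model of the brain): if stored experience is represented as labeled feature vectors, deciding which class a new sample belongs to means comparing it against every stored exemplar, while placing it within an already-identified class needs only a local search. All names and numbers here are invented.

```python
# A toy illustration of the class-versus-instance comparison cost,
# assuming stored experience is just a dictionary of labeled vectors.
import numpy as np

rng = np.random.default_rng(0)
n_classes, per_class, dim = 50, 200, 16
memory = {c: rng.normal(loc=c, size=(per_class, dim)) for c in range(n_classes)}

sample = rng.normal(loc=7, size=dim)

# Identifying the CLASS of a thing: compare against every stored exemplar.
comparisons_for_class = sum(len(v) for v in memory.values())
best_class = min(
    memory, key=lambda c: np.linalg.norm(memory[c] - sample, axis=1).min()
)

# Identifying a particular INSTANCE within a known class: a local search only.
comparisons_for_instance = len(memory[best_class])

print(best_class, comparisons_for_class, comparisons_for_instance)
```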

Overfitting

So this process of continual refinement brings us to another machine learning idea, associated with supervised learning algorithms, called overfitting. When an algorithm overfits, it becomes increasingly attuned to a certain type of sub-problem (which may or may not be salient) in a set of sampled data and thus becomes insensitive to other patterns that may be important at the time. For example, an overfitting visual algorithm designed to identify people moving about an airport terminal may mistake a shopping cart for a separate individual if its classification criteria are not specific enough...it has become fit to a specific set of attributes to the exclusion of others.
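For readers unfamiliar with the term, here is a minimal numpy sketch of overfitting: a high-degree polynomial fit to a small noisy sample chases the noise and then tends to perform worse on fresh data than a lower-degree fit. The degrees, noise level, and target function are arbitrary choices for illustration.

```python
# A minimal sketch of overfitting, assuming numpy: the degree-9 fit
# interpolates the noisy training points and typically misjudges new data.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=x_train.size)

x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coeffs, x_test)
    test_error = np.mean((pred - y_test) ** 2)
    print(f"degree {degree}: test error {test_error:.3f}")
```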

I've theorized, and have modeled my work on, the idea that the cognitive algorithm is extremely simple, fractal, likely purely binary in nature, and recursive over time and element data. The algorithm would therefore be very flexible across types of data sets (exactly what you'd want in a generalized approach to encoding several dimensions of sensory import) but would be inflexible in terms of convergence time, as low-density sets provide insufficient information to form refined estimations BUT serve well to define gross abstractions. However, this sounds a lot like what I described earlier about how a baby begins life, and I assert it is in fact identical.
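To be clear, the following is only a generic illustration of a simple, binary, recursive rule (a decision-tree style median splitter), not the algorithm described in this post: with a sparse sample it can only form coarse groupings ("gross abstractions"), while a dense sample lets it keep refining into ever more concrete partitions.

```python
# An illustrative sketch only: a generic recursive binary splitter,
# NOT the ADA algorithm. Sparse samples yield shallow, coarse trees;
# dense samples yield much deeper, finer ones.
import numpy as np

def recursive_split(values, depth=0, min_size=2):
    """Recursively split a 1-D sample at its median until groups are tiny."""
    values = np.sort(np.asarray(values))
    if len(values) < 2 * min_size:
        return {"leaf": values.tolist(), "depth": depth}
    mid = len(values) // 2
    return {
        "split_at": float(values[mid]),
        "depth": depth,
        "left": recursive_split(values[:mid], depth + 1, min_size),
        "right": recursive_split(values[mid:], depth + 1, min_size),
    }

def max_depth(node):
    if "leaf" in node:
        return node["depth"]
    return max(max_depth(node["left"]), max_depth(node["right"]))

rng = np.random.default_rng(2)
sparse_tree = recursive_split(rng.normal(size=6))    # coarse partition
dense_tree = recursive_split(rng.normal(size=600))   # much deeper partition

print(max_depth(sparse_tree), max_depth(dense_tree))
```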

The baby mind has a development process that has it creating an increasing number of connections over the first couple of years. This deep building of information and relations, I assert, skews the mind away from the generalized abstraction roots of each dimension into wider and deeper sets of salient object information, but this shifts cognitive attention away from abstraction and toward the concrete...the mind becomes fit to concrete descriptions and relations of types of things and must evaluate larger samples to extract the type relationships (the abstraction!), and so the 1 year old outclasses the 5 year old in these types of tests.

This trade-off is a rather elegant demonstration of the fact that the cognitive algorithm is not perfect across all of sample history, with the exchange of abstraction focus for concrete representation focus being made, and likely made to independent degrees, across the sensory dimensions. The latter assertion could then explain why there can be such variance in our aptitudes along various dimensions of sensory experience: why some people are tone deaf and others have perfect pitch, why some people are supertasters and others aren't, or why some people can dance and others...flail away on the dance floor embarrassing their partners.

In my work designing the Action Oriented Workflow paradigm, the work routing Action Delta Assessment algorithm is general and dynamic, not particularly tuned to any features of the data set outside of those constrained by the modeling process, which maps to the hierarchical stratification of the mammalian (and other) brains. As a result, the ADA algorithm should exhibit the same relaxation to specificity that the baby brain exhibits as children build more dense maps of concrete sub-relations. Initial testing shows it does indeed do this over time; hopefully, over the general class of sampled problems, it will do so in a "fit" way.

Links:

http://en.wikipedia.org/wiki/Overfitting

http://www.zerotothree.org/child-development/brain-development/baby-brain-map.html

http://sent2null.blogspot.com/2013/05/ada-on-road-to-dynamic-cognition-how-is.html

http://sent2null.blogspot.com/2012/09/if-memory-is-hierarchicalwhat-builds.html

http://sent2null.blogspot.com/2013/02/on-consciousness-there-is-no-binding.html
