
Dynamic Cognition in babies, in the abstract

A recently published article reveals a truth about the cognitive powers of young human babies relative to their equal-aged primate cousins, but it also reveals another tantalizing truth: the babies had more developed powers of abstract reasoning than children just a few years older.

As I read this I was immediately struck by a possible explanation, one that comes out of what has been theorized about how the brain encodes information in neuronal and other connections, and out of how the current field of artificial intelligence is proceeding apace to solve various types of identification and categorization problems using algorithms of various kinds.

Classification


First, what is classification? In the machine learning space, classification is the process of gathering and sorting bits of information by specific attributes of relatedness along some dimension of salience. Classification algorithms take samples of the given data and attempt to make sense of them by grouping elements together. For example, a classifier that works on audio data may separate sounds into specific frequency components and then store the sound information in boxes of unique frequency; it may do the same for amplitude, and thus decompose the original sound signal into its components. Identifying various types of patterns in sound then becomes easier when they are viewed as decomposed sets of data.
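
To make that concrete, here is a minimal sketch of this kind of frequency "boxing" using NumPy's FFT. The bin count, the equal-width bin edges, and the toy two-tone signal are all arbitrary choices for illustration, not a real audio classifier:

```python
import numpy as np

def decompose_audio(signal, sample_rate, n_bins=8):
    """Sort a signal's energy into coarse frequency 'boxes' (illustrative only)."""
    spectrum = np.abs(np.fft.rfft(signal))                # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    edges = np.linspace(0, freqs[-1], n_bins + 1)         # equal-width frequency boxes
    energy = [spectrum[(freqs >= lo) & (freqs < hi)].sum()
              for lo, hi in zip(edges[:-1], edges[1:])]
    return edges, np.array(energy)

# A toy signal: a 440 Hz tone plus a quieter 2 kHz tone.
rate = 16000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)
edges, energy = decompose_audio(tone, rate)
print(energy.argmax())  # the box containing the dominant 440 Hz component
```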

The same can be done for visual classification problems. Machine learning researchers are trying to describe the content that may be present in, say, captured frames of a webcam video. Where are the people? How do we read the emotion on their faces? What is a wall and what is a floor? These questions are in principle answered by classifying the frame data in various ways. One way could be to identify swift changes in contrast and note whether those areas of the image move across the screen in various ways from frame to frame (thus encoding temporal data as part of the visual correlation). One could also decompose the visual data into the chromatic and luminance components of the pixels that compose the frame and mine for patterns that consistently present themselves when specific items are on screen...but ultimately this too is classification.
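
As a toy sketch of the contrast-change idea, the snippet below flags pixels whose luminance jumps sharply between two synthetic frames; the frames, threshold, and function names are hypothetical, and a real tracker would add smoothing, contour grouping, and temporal models:

```python
import numpy as np

def motion_regions(prev_frame, next_frame, threshold=30):
    """Flag pixels whose luminance changed sharply between two frames."""
    diff = np.abs(next_frame.astype(int) - prev_frame.astype(int))
    return diff > threshold  # boolean mask of candidate moving regions

# Synthetic 8x8 grayscale frames: a bright 2x2 'object' shifts one pixel right.
f0 = np.zeros((8, 8), dtype=np.uint8)
f1 = np.zeros((8, 8), dtype=np.uint8)
f0[3:5, 2:4] = 200
f1[3:5, 3:5] = 200
print(np.argwhere(motion_regions(f0, f1)))  # the object's old and new positions light up
```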

For any dimension of sensory input you care about, some classification scheme can be devised. So what does this have to do with a baby being better at some types of abstraction than a child a few years older?

Well, from the moment a child starts forming a brain, that brain is knitting together reality via encoded experiences. I've asserted in earlier posts that the somatosensory sense is likely the most important sense for the grounding of a developing human (or any animal), the reason being that the embodiment of the physical self forms the foundation on which subsequent sensory abstractions are pinned.

However, the process of building up the maps of experience that correspond to the classification tasks described above in sound and vision analysis takes time, and it initially starts with a lot of noise as the mind connects random bits of information from a very small sample set. The more experiences are built into the ever-connecting network of neuronal connections in the particular areas of the brain tasked with processing a given sensory dimension, the more difficult it becomes to traverse the hierarchy back to the root of any given abstraction. For example, a baby starts out visually recording very simple concepts, since it is incapable of making sense of anything more complex (it literally has no grounding), and so its mind starts FROM abstraction in all things and then over time refines concrete representation and labeling.

This makes sense: as the brain becomes more densely connected with bits of information about the world over time, it necessarily takes longer to identify the classes of things (which requires comparison across large sets of related variants) than particular instances of things, which require only a local estimation of specific differences within the larger class.
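
A toy illustration of that cost asymmetry (in no way a model of the brain, just the scaling argument): judging class membership by scanning stored variants costs more as the class becomes more densely populated, while matching against one known instance stays a single local comparison:

```python
import random

def class_match(variants, x, tol=0.5):
    """Class-level judgment: scan stored variants until one is close enough."""
    comparisons = 0
    for v in variants:
        comparisons += 1
        if abs(v - x) < tol:
            return True, comparisons
    return False, comparisons

random.seed(0)
probe = 3.0  # an atypical example, far from most stored variants
for n in (4, 64, 1024):
    variants = [random.gauss(0, 1) for _ in range(n)]
    hit, cost = class_match(variants, probe)
    print(f"{n:5d} variants -> {cost} comparisons (match={hit})")
# An instance-level check, by contrast, is one comparison no matter how
# dense the stored map has become.
```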

Overfitting

So this process of continual refinement brings us to another machine learning idea, associated with supervised learning algorithms, called overfitting. When an algorithm overfits, it becomes increasingly attuned to a certain type of sub-problem (which may or may not be salient) in a set of sampled data and thus becomes insensitive to other patterns that may be important at the time. For example, an overfitting visual algorithm designed to identify people moving about an airport terminal may mistake a shopping cart for a separate individual if its classification criteria are not specific enough...it could be overfit to a specific set of attributes to the exclusion of others.
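
Here is a minimal sketch of overfitting using a polynomial fit in NumPy; the sine data, noise level, and polynomial degrees are arbitrary illustration choices. The high-degree model chases the noise in the training sample and generalizes worse, much as the airport tracker latches onto the wrong attributes:

```python
import numpy as np

rng = np.random.default_rng(42)
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0.02, 0.98, 50)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 11):
    coeffs = np.polyfit(x_train, y_train, degree)      # fit the training sample
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")
# The degree-11 fit drives training error toward zero while its error on
# held-out points climbs: it has become attuned to the sample's quirks.
```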

I've theorized, and have modeled my work on the idea, that the cognitive algorithm is extremely simple, fractal, likely purely binary in nature, and recursive over time and element data. The algorithm would therefore be very flexible across types of data sets (exactly what you'd want in a generalized approach to encoding several dimensions of sensory import) but would be inflexible in terms of convergence time, as low-density sets provide insufficient information to form refined estimations BUT serve well to define gross abstractions. This sounds a lot like what I described earlier about how a baby begins life, and I assert that it is in fact identical.
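
The post doesn't spell that algorithm out, so purely as a hypothetical illustration of "binary, recursive, coarse-first" behavior: a toy recursive binary splitter that can only partition as finely as its sample density allows. Sparse data halts it at shallow, gross abstractions; dense data lets the recursion refine toward concrete cells:

```python
import random

def build_tree(samples, min_leaf=4, lo=0.0, hi=1.0, depth=0):
    """Recursively halve [lo, hi); stop where the data is too sparse to split."""
    inside = [s for s in samples if lo <= s < hi]
    if len(inside) < 2 * min_leaf:      # too sparse: stay at a gross abstraction
        return {"range": (lo, hi), "depth": depth, "n": len(inside)}
    mid = (lo + hi) / 2                 # purely binary split
    return {"split": mid, "depth": depth,
            "left": build_tree(samples, min_leaf, lo, mid, depth + 1),
            "right": build_tree(samples, min_leaf, mid, hi, depth + 1)}

def max_depth(node):
    if "split" not in node:
        return node["depth"]
    return max(max_depth(node["left"]), max_depth(node["right"]))

random.seed(1)
for n in (8, 128, 4096):
    tree = build_tree([random.random() for _ in range(n)])
    print(f"{n:5d} samples -> refinement depth {max_depth(tree)}")
```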

The baby mind has a developmental process that creates an increasing number of connections in the first couple of years. This deep building of information and relations, I assert, skews the mind away from the generalized abstraction roots of each dimension into wider and deeper sets of salient object information, but this shifts cognitive attention away from the abstract and toward the concrete...the mind becomes fit to concrete descriptions and relations of types of things and must evaluate larger samples to extract the type relationships (the abstraction!), and so the one-year-old outclasses the five-year-old in these types of tests.

This trade-off is a rather elegant demonstration that the cognitive algorithm is not perfect across all of sample history, with the trade of abstraction focus for concrete representation focus being made, and likely made to an independent degree across the sensory dimensions. The latter assertion could then explain why there can be such variance in our aptitudes along various dimensions of sensory experience...for example, why some people are tone deaf and others have perfect pitch, why some people are supertasters and others aren't, or why some people can dance and others...flail away on the dance floor, embarrassing their partners.

In my work designing the Action Oriented Workflow paradigm, the work-routing Action Delta Assessment algorithm is general and dynamic, not particularly tuned to any features of the data set outside of those constrained by the modeling process, which maps to the hierarchical stratification of the mammalian and other brains. As a result, the ADA algorithm should exhibit the same relaxation toward specificity that the baby brain exhibits as children build denser maps of concrete sub-relations. Initial testing shows it does indeed do this over time; hopefully, over the general class of sampled problems, it will do so in a "fit" way.

Links:

http://en.wikipedia.org/wiki/Overfitting

http://www.zerotothree.org/child-development/brain-development/baby-brain-map.html

http://sent2null.blogspot.com/2013/05/ada-on-road-to-dynamic-cognition-how-is.html

http://sent2null.blogspot.com/2012/09/if-memory-is-hierarchicalwhat-builds.html

http://sent2null.blogspot.com/2013/02/on-consciousness-there-is-no-binding.html
