
Is the core principle that guides evolution conservation or non-conservation?


I completely disagree on a visceral level with this statement as I understand it:

"Mutations or other causal differences will allow us to stress that "non conservation principles" are at the core of evolution, in contrast to physical dynamics, largely based on conservation principles as symmetries."

From a recently published paper on arXiv.

Completely wrong: continuous conservation is precisely why evolution happens, and precisely why it seems pseudorandom over a large set of interacting forms. The sheer size of the phase space, in terms of changing, inter-related emergence, hides this underlying continuous conservation.

I posit that it's the same reason why computer code of a given complexity tends to look the same way as an evolved organism: like a seemingly hodgepodge construction, but one that works, and does so because a continuous process of conservation was undertaken as it developed.

I've written a bit about this connection between code conservation and evolution, a symmetry I first observed in the '90s when I started to learn OO code but already had a good understanding of biology and genetics:

http://sent2null.blogspot.com/2008/02/on-origins-of-comlexity-in-life.html

http://sent2null.blogspot.com/2012/03/humans-locomotionwhy-bipedality-robots.html

Now it is one thing to note the process, and then to note that as it proceeds, and as the complexity of the system increases, the ability to be highly conservative from step to step goes DOWN (which it does, as the phase space becomes more complex to search for previously utilized adaptations). The same is precisely true in computer code: a point comes where the cost of doing something from scratch is lower than the cost of searching for existing solutions in the corpus of the work. In the case of code, this is done by a human agent.
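The reuse-versus-rebuild crossover can be sketched with a toy model. This is purely my own illustration, not anything from the paper: the cost functions and constants below are invented for the example, standing in for the idea that search cost grows with corpus size while from-scratch cost stays roughly flat.

```python
import math

def search_cost(corpus_size: int) -> float:
    """Cost to locate a reusable solution; grows with the size of the corpus.
    The coefficients here are arbitrary, chosen only for illustration."""
    return 0.5 * math.log2(corpus_size + 1) + 0.01 * corpus_size

def scratch_cost() -> float:
    """Cost to build the solution from scratch; roughly constant."""
    return 5.0

def crossover() -> int:
    """Smallest corpus size at which searching becomes more expensive
    than building from scratch -- the point where conservation of prior
    work stops paying off step to step."""
    n = 1
    while search_cost(n) <= scratch_cost():
        n += 1
    return n

print("reuse stops paying off at corpus size", crossover())
```

Past that crossover, the "organism" (codebase) starts accreting fresh, redundant structure rather than reusing old parts, which is one way the hodgepodge look arises.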

In evolution it is done by no agent: simply the proximal dynamics (in the form of physio-chemical energy conservation between components of the biological system) either finding it easier to appropriate some nearby molecule through some chance interaction, or being incapable of doing so if said molecule is not physio-chemically proximal to it. That's it...nothing more. The process is still conservative, but the scale of proximity is what creates the illusion of non-conservation for potential interactions between objects that are more physio-chemically distal to one another.
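A minimal sketch of that proximity constraint, again my own invention rather than anything in the paper: entities get random positions, and only pairs within an interaction radius can "react" at all. Even with strictly local rules, which pairs end up proximal looks pseudorandom at scale.

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

def proximal_pairs(entities, radius):
    """Pairs of entities close enough to interact at all.
    Everything outside the radius is simply unreachable, no agent needed."""
    pairs = []
    for i, a in enumerate(entities):
        for b in entities[i + 1:]:
            if abs(a - b) <= radius:
                pairs.append((a, b))
    return pairs

# 20 entities scattered over an abstract 1-D "physio-chemical" coordinate
entities = [random.uniform(0, 100) for _ in range(20)]
local = proximal_pairs(entities, radius=5.0)
unconstrained = len(entities) * (len(entities) - 1) // 2

print(len(local), "of", unconstrained, "possible pairings are proximal")
```

The vast majority of conceivable interactions never happen, not because conservation is violated, but because the partners are never close enough, which is exactly the cat-and-bird point below.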

I think the authors are being confused by the very fast way that the underlying physio-chemical conservation principles blow up to apparent chaos in the number of interacting and mutating agents and systems. That is just not the same as saying that non-conservation principles are at "the core" of evolution. Physio-chemical interaction is what induces mutation; without some closeness of entities, new forms simply will not arise between disparate systems. A mutation in a cat is not going to trigger a mutation in a bird in the tree...not now, not ever. Fully understanding why explains the gross error of at least that statement above in this paper, in my view.

Interestingly in the paper:

"They have an internal, permanently reconstructed autonomy, in Kant's sense, or Varela's autopoiesis, that gives them an ever changing, yet "inertial" structural stability. They achieve a closure in a task space by which they reproduce, and evolve and adapt by processes alone or together out of the indefinite and unorderable set of uses, of finding new uses to sustain this in the ongoing evolution of the biosphere."

Don't they see that this emergent extended interaction space, which indeed is indefinite and unorderable in its uses, only exists because of those physio-chemically proximal mutation events? Maybe our disagreement is more over their word use, in their seeming to state that the underlying energy conservation is not the heart of the process. If there were no conservation of energy there would be no mutational evolution to speak of; the system would fall apart before it could use the induced vector of a possibly preferential mutation, as the entire system would be falling apart rather than falling together.

So I agree with them that at the emergent level there is a state transition: the interaction of the systems creates new dynamics that seem almost completely random and unpredictable in the variation they can produce. However, all that structure only exists because of tight constraints at the physio-chemical level inside the interacting cells and structures, as mutations occur between "nearby" entities and are highly constrained by physics to observe conservation-of-energy principles. Think again about the example given above of the mutation in the cat; it is the key idea in my view.
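The "falling together rather than falling apart" point can be made concrete with one more toy sketch of my own devising: strictly local moves that each conserve a total quantity still scramble the global state beyond easy prediction. Each step transfers one unit of abstract "energy" between adjacent cells only, so the sum is invariant while the pattern looks random.

```python
import random

random.seed(0)  # fixed seed for reproducibility

state = [10] * 8          # eight cells, total "energy" = 80
total = sum(state)

for _ in range(10_000):
    # pick an adjacent pair and a random transfer direction: purely local
    i = random.randrange(len(state) - 1)
    src, dst = (i, i + 1) if random.random() < 0.5 else (i + 1, i)
    if state[src] > 0:
        state[src] -= 1   # every move conserves the total exactly
        state[dst] += 1

assert sum(state) == total  # energy conserved despite the scrambling
print(state)
```

The final state looks arbitrary, but it was reached entirely through conservation-respecting, proximity-limited steps, which is the sense in which apparent chaos at the emergent level is compatible with strict conservation underneath.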

Comments

David Saintloth said…
Look at that: it took about 9 years for my original hypothesis to be confirmed by a scientific paper.

https://www.eurekalert.org/news-releases/941828
