
Is the core principle that guides evolution conservation or non-conservation?


I completely disagree on a visceral level with this statement as I understand it:

"Mutations or other causal differences will allow us to stress that "non conservation principles" are at the core of evolution, in contrast to physical dynamics, largely based on conservation principles as symmetries."

This is from a recently published paper on arXiv.

Completely wrong: continuous conservation is precisely why evolution is happening, and precisely why it seems pseudorandom over a large set of interacting forms. The sheer size of the phase space, in terms of changing, inter-related emergent forms, hides this underlying continuous conservation.

I posit that it's the same reason why computer code of a given complexity tends to look the same way as an evolved organism... a seemingly hodgepodge construction, but one that works, and works because a continuous process of conservation was undertaken as it developed.

I've written a bit about this connection between code conservation and evolution, a symmetry I first observed in the '90s when I started to learn OO code while already having a good understanding of biology and genetics:

http://sent2null.blogspot.com/2008/02/on-origins-of-comlexity-in-life.html

http://sent2null.blogspot.com/2012/03/humans-locomotionwhy-bipedality-robots.html

Now, it is one thing to note the process, and then to note that as it proceeds and the complexity of the system increases, the ability to be highly conservative from step to step goes DOWN (which it does, as the phase space becomes harder to search for previously utilized adaptations). The same is precisely true in computer code: a point comes where the cost of doing something from scratch is lower than the cost of searching for existing solutions in the corpus of the work. In the case of code this search is done by a human agent; a toy sketch of the crossover follows below.
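To make that crossover concrete, here is a minimal sketch in Python. The cost numbers are my own hypothetical illustration values, not measurements from any codebase; the only assumption is that search cost grows linearly with corpus size while from-scratch cost stays roughly fixed.

    # Toy model of the reuse-vs-rewrite crossover described above.
    # All costs are hypothetical illustration values, not measurements.

    SEARCH_COST_PER_ITEM = 0.05  # assumed cost to inspect one existing solution
    FROM_SCRATCH_COST = 40.0     # assumed fixed cost of writing a fresh solution

    def cheaper_to_reuse(corpus_size: int) -> bool:
        """True while scanning the whole corpus is still cheaper than rewriting."""
        return corpus_size * SEARCH_COST_PER_ITEM < FROM_SCRATCH_COST

    # Find the corpus size at which the strategy flips.
    n = 0
    while cheaper_to_reuse(n):
        n += 1
    print(f"Reuse stops paying off once the corpus holds ~{n} solutions.")

Under these assumed numbers the flip happens at 800 items; the point is only that any linearly growing search cost eventually crosses any fixed rewrite cost.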

In evolution it is done by no agent at all... simply the proximal dynamics (in the form of physio-chemical energy conservation between components of the biological system) either find it easier to appropriate some nearby molecule through some chance interaction, or are incapable of doing so if said molecule is not physio-chemically proximal. That's it... nothing else... still conservative, but the scale of proximity is what creates the illusion of non-conservation for potential interactions between objects that are more physio-chemically distal to one another.
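As a minimal sketch of that locality argument (my own construction, not the paper's): place entities at random abstract "physio-chemical" coordinates and allow interactions only inside a small radius. Distal pairs never couple, yet the realized set of local pairs looks scattered and unordered.

    import random

    random.seed(0)

    RADIUS = 0.05                                      # hypothetical interaction cut-off
    positions = [random.random() for _ in range(100)]  # abstract 1-D "chemical" coordinates

    # Only physio-chemically proximal pairs can ever interact.
    local_pairs = [(i, j)
                   for i in range(len(positions))
                   for j in range(i + 1, len(positions))
                   if abs(positions[i] - positions[j]) < RADIUS]

    # Two entities placed far apart can never form a pair, no matter what:
    distal_a, distal_b = 0.1, 0.9
    assert abs(distal_a - distal_b) >= RADIUS

    print(f"{len(local_pairs)} strictly local interactions realized out of "
          f"{100 * 99 // 2} possible pairings")

Every realized interaction obeys the strictly local, conservative rule, but the pair list itself shows no obvious global pattern... which is exactly the illusion of non-conservation at scale.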

I think the authors are being confused by the very fast way that the underlying physio-chemical conservation principles blow up into apparent chaos as the number of interacting and mutating agents and systems grows. That is just not the same as saying that non-conservation principles are at "the core" of evolution. Physio-chemical interaction is what induces mutation; without some closeness of entities, new forms simply will not arise between disparate systems. A mutation in a cat is not going to trigger a mutation in a bird in the tree... not now, not ever. Fully understanding why explains the gross error of at least that statement in this paper, in my view.

Interestingly, the paper also states:

"They have an internal, permanently reconstructed autonomy, in Kant’s
sense, or Varela’s autopoiesis, that gives them an ever changing, yet “inertial” structural stability.
They achieve a closure in a task space by which they reproduce, and evolve and adapt by processes
alone or together out of the indefinite and unorderable set of uses, of finding new uses to sustain
this in the ongoing evolution of the biosphere."

Don't they see that this emergent, extended interaction space, which indeed is indefinite and unorderable in its uses, only exists because of those physio-chemically proximal mutation events? Maybe we are disagreeing more over their word choice, in seeming to state that the underlying energy conservation is not the heart of the process. If there were no conservation of energy there would be no mutational evolution to speak of; the system would fall apart before it could use the induced vector of a possibly preferential mutation... the entire system would be falling apart rather than falling together.

So I agree with them that at the emergent level there is a state transition: the interactions of the systems create new dynamics that seem almost completely random and unpredictable in the variation they can produce. However, all that structure only exists because of tight constraints at the physio-chemical level inside the interacting cells and structures, as mutations occur between "nearby" entities and are highly constrained by physics to observe conservation of energy principles. Think again on the example given above of the mutation in the cat; it is the key idea in my view.
