
Complexity in the Universe, conservation of energy and object orientation...

The following article was originally a post written at RD.net, but I thought it should be copied here for non-members to read. Enjoy!

Original link here

I seem to have fanned the flames of a controversy with one of the posters in that thread, rainbow. I just realized that Steve Zara and others took up the standard and continued to present information in support of the presented ideas. (I got really busy after posting the original and neglected to come back to respond directly to rainbow.)

Out of curiosity I just went over rainbow's other contributions to the site; it turns out that the entire set of posts is confined to that one thread. It seems rainbow is a pseudonym for a hit-and-run poster, someone who created an account simply to comment on the post topic. Now, there is nothing inherently wrong with this; I've started new accounts myself to provide an alternate view. But since that thread rainbow has been MIA from the site, every post restricted to that one discussion. Combined with the points made, this gives me the idea that rainbow was a shill. He demonstrated a wide knowledge of some of the issues but lacked the critical reasoning to refrain from drawing certain conclusions from false assumptions (for example, he assumed that the early biomolecules had no method of motility, which is patently false). Interesting, but no matter; my post follows in full below. You can click on the link provided above to read both rainbow's posts before it and the responses of others to his objections after, for a good bit of entertainment. ;)



Regarding this discussion on the origins of life: I think rainbow is assuming that the chemistry of the early Earth was as inhomogeneous as it is today. This is a faulty assumption; according to the fossil record, the planet was amazingly homogeneous in chemistry for a long, long time. We know that life in the form of stromatolites was thriving along the shores of the forming continents (which were still being accreted from the interaction of volcanic eruptions of the Earth's crust with the early seas) as early as 3.2 billion years ago. That is an astonishingly long time ago, and wilder still, the Earth itself had formed barely a billion and a half years earlier.



I think the more likely process that led to abiogenesis was that the much more homogeneous chemistry allowed for a massive experimental space in which the early molecules could combine and recombine to form the first protein chains and replicators. Once the first cannibalizing replicators emerged, they would have rapidly spread throughout the then much more homogeneous environment, consuming useful submolecules. You have to look at the early Earth as almost a single global ecosystem. Today the Earth's biodiversity owes much of its existence to the great variety of biomes under which natural selection can proceed. On the early Earth the diversity was not in phenotypes selected through natural selection but in molecular types, governed by nothing more than standard chemical affinities. Life today is highly segregated compared to the progenitors of life, which, being no different from complex chemical molecules, could combine and recombine freely under the laws of chemistry. From this perspective there was a much higher likelihood of chance molecular interactions producing new, more complicated molecules. If we see these early molecules as "life," the natural selection was induced by the laws of chemistry, providing gigantic potential variability in the results compared to present-day biodiversity, which may "look" far more diverse but in reality is much less so. (If you think about it, we all, from bacteria to bull, are nothing more than genetic iterations of one another, highly conservative in energy due to our common use of structures that emerged in the billions of years preceding the first visible signs of biodiversity during the Cambrian, 542 mya.)



If you look at a cell you see that the internal systems are more or less autonomous; they only require a particular environment and raw materials under which to carry out their chemical actions. Golgi bodies, mitochondria, ribosomes: these simple engines perform specific tasks, and do so without aim or direction so long as they have the necessary raw materials. The usefulness of their actions only makes sense in the context of the internals of the cell. Ironically, ID proponents claim that subcomponents have no use other than that for which they were designed. I believe that for some relationships, such as that between a cell and its mitochondria, evolution itself predicts what the ID proponents conclude, but for other reasons. Mitochondria are internally autonomous because the cytoplasm in which they thrive is the only remnant of the early environments in which their progenitors formed. Those environments exist today only in cells, so we should not be surprised that we find them nowhere else. Note, though, that organisms amazingly similar to mitochondria still exist outside cells as bacteria, replete with similar DNA; the similarity is all we need to affirm the hypothesis of evolution. Just as we see symbiotic relationships between more advanced organisms today, it is very likely that the cell arose from just such beneficial symbiotic relationships among the various types of machinery that arose in the early biomes.



Different replicators, in the form of the progenitors of the organelles, came together to enhance survivability by pooling resources, or were forced together after being consumed by a parent organelle (the ancestor of the cell membrane or wall).



Tangentially, in object-oriented programming we perform this task as a matter of conservation of code: composition of class instances inside parent class objects allows us to use the attributes of the composed objects without paying the penalty of having to recode the associated methods. When I look at a cell I see a superclass with composed child objects of other class types; by composing the objects, the parent object avoids the energy expenditure of designing the composed objects' sub-mechanisms. It need only provide the environment and raw materials, or in the programming case the data, required by the sub-objects in order to have them "catalyze" or process the data to yield a desired output or perform some function. A good OO programmer composes well designed sub-objects into parent objects to reduce the total code that must be written. The smaller the code, the faster the desired object performs its functions, the more such objects can be loaded into available memory, and the faster the overall application runs.

In this analogy the code maps to the data in the DNA, but the possibilities for created functionality go up exponentially when the composed objects themselves have complex code behind their formation. In other words, a cell would have taken possibly tens of billions of years to engineer a ribosome through natural selection alone, as the ribosome is a relatively complex object, but in the early soup the chemistry must have made such molecules exceedingly likely to form, even if we have yet to form them ourselves. We can't assume that because we see a small set of internal organelles in living things today, this is all there was. As time went forward and the molecules became more complex, energy would be conserved by interaction. Just as conservation of energy leads a ball down a slope along a particular path depending on the curvature of that slope and the energy in the ball, so too did the early molecules seek to conserve energy by interacting.
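The composition idea above can be sketched in a few lines of code. This is a minimal illustration only; the class names, the `metabolize` method, and the "30 ATP per glucose" figure are my own simplifications for the analogy, not part of any real library:

```python
class Mitochondrion:
    """Autonomous sub-object: converts raw material into usable energy."""

    def metabolize(self, glucose_units):
        # Illustrative stoichiometry: roughly 30 ATP per glucose.
        return glucose_units * 30


class Cell:
    """Parent object that composes a Mitochondrion rather than
    re-implementing (re-evolving) its internal machinery."""

    def __init__(self):
        self.mitochondrion = Mitochondrion()  # composition, not inheritance

    def energy_from(self, glucose_units):
        # The cell supplies the raw material and delegates the work.
        return self.mitochondrion.metabolize(glucose_units)


cell = Cell()
print(cell.energy_from(2))  # → 60
```

The point of the sketch is the delegation: `Cell` never duplicates the logic inside `Mitochondrion`; it only provides the inputs and an environment, which is the code analog of the symbiosis described above.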
Computer scientists can and have created simulations of the emergence of complex relationships such as these using interacting programs called cellular automata. Though the relationships that arise are far simpler than, say, that between a ribosome and a cell, they are emergent and, despite the deterministic rules, effectively unpredictable: the computer scientist has no idea how the initial conditions will seed the emergence of interesting "behavior" in the automata.
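A concrete example of this is an elementary cellular automaton. The sketch below (my own minimal implementation, using Wolfram's standard rule numbering) runs Rule 110, a classic case where a trivial local rule produces complex, hard-to-predict structure from a single seed cell:

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton.

    `cells` is a list of 0/1 values on a wrap-around (circular) row.
    `rule` is the Wolfram rule number: bit k of `rule` gives the new
    state for the 3-cell neighborhood whose bits encode the number k.
    """
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # 0..7
        out.append((rule >> neighborhood) & 1)
    return out


# Start from a single live cell and watch structure emerge.
row = [0] * 31
row[15] = 1
for _ in range(10):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Every line of output is fully determined by the rule and the seed, yet the triangular, shifting patterns that appear are not obvious from reading the eight-entry rule table, which is exactly the point about emergence made above.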



Now, someone looking for design would claim the early Earth chemistry got "help" from a designer, but there is no reason to make this assumption; the chemistry, the homogeneity of the early forming conditions, the law of conservation of energy, and time are all that are required. Just as a computer scientist looking at an emergent simulated environment of interacting cellular automata does not invoke God to explain the complexity, there is no reason to suppose God for the biological analog simply because we have yet to replicate the conditions that gave rise to our life.



For those unfamiliar with OO, here are some links on object oriented programming:



http://en.wikipedia.org/wiki/Object_orientation



http://java.sun.com/docs/books/tutorial/java/concepts/

Complexity is brilliantly explained in Nobel Laureate Murray Gell-Mann's book "The Quark and the Jaguar". I highly recommend it!

Get the book here
