28 January, 2013

Rube Goldberg rules software development


I've often touched on the topic of this article's thesis in my blog; here is a recent post:

http://sent2null.blogspot.com/2012/03/engineers-versus-programmers.html

For example, I've often said that engineering is an art while programming is more of a science...unfortunately for those who are really good at programming, building big, complex software is more art than science.

The differences are ones of scale; the biological analog serves to illustrate. A living being is an amazingly complex organism with trillions of cells acting seemingly in harmony, together, over the course of about 70 years for most people...these cells replicate, divide and grow without issue until the whole clockwork comes grinding to a halt under the effects of progressive degeneration. Yet the darn thing runs for 70 years; it is amazing that it works at all.

Now that we are getting into the code that defines how cells are organized, replicate and grow...we see how. It is an amazingly *over-engineered* system. Instead of being designed with end-to-end efficiency in mind, it was built another way...to be maximally efficient from one step to the next (the only way evolution could do it without a watchmaker at the works), and so the happy accidents of efficiency...many due to proximity of chemical or molecular affinity...piled up over the last few BILLION years to create the super complex systems of living beings today, some of which, like us, can run to 70 years before falling apart!

Software is much like biology in this way, save that the clockwork is in our classes, methods, objects, threads...however, just like in biology, the big-picture view of how those many components will interact over application run time BEFORE the application is written is a huge mystery to most coders. They are great at defining the optimal algorithm to sort a bunch of digits or find edges between various shapes, but bad at seeing down the road a bit to understand how those low-level interactions bubble up across the entire system. We've known for decades about the various systems in the cell, but only in the last decade are we figuring out how they are instantiated (stem cells), how those cells differentiate to specific types (polymorphism of cells), and how those differentiations are regulated across tissue types to make sure (for the most part) the wrong cell types don't appear in the wrong places (cancer). Knowing how a cell works doesn't answer any of those questions...which are locked up in the regulatory and developmental code of the DNA (much of it non-coding and formerly called "junk"!).
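To make the analogy concrete in code terms, here is a toy sketch in Java (the class names and the crude "tissue" check are mine, purely illustrative, and not a claim about how real regulatory DNA works): a stem cell object "instantiates", differentiation returns a concrete subtype (polymorphism), and a regulating check constrains which type is allowed where.

abstract class Cell {
    abstract String function();   // what this differentiated cell type does
}

class NeuronCell extends Cell {
    String function() { return "conducts signals"; }
}

class SkinCell extends Cell {
    String function() { return "forms a barrier"; }
}

class StemCell {
    // "Differentiation": the same call yields different concrete types
    // (polymorphism), and a simple check stands in for the regulatory code
    // that constrains which cell type may appear in which tissue.
    Cell differentiate(String tissue) {
        if (tissue.equals("brain")) return new NeuronCell();
        if (tissue.equals("skin"))  return new SkinCell();
        throw new IllegalArgumentException("no cell type permitted in: " + tissue);
    }
}

public class Differentiation {
    public static void main(String[] args) {
        Cell c = new StemCell().differentiate("brain");
        System.out.println(c.function()); // prints: conducts signals
    }
}

Knowing how each little class works tells you nothing about how the whole organism of classes behaves under load over time, which is exactly the point above.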

The problem with software development today is that the planning of the engineering side is lacking, while the planning of the programming side is well developed. So rather than taking the time to really understand the problem scope and shape a generalized solution to it, coders just get to work filling in patches to leaks as defined by a rough understanding of what the client wants. Putting grout and brick in place until the leaks stop...the end result of course is not always pretty or elegant...and, like biological systems, over time and stress it reveals itself to be an inefficient, Rube Goldberg-like process. I detailed a list of considerations to keep in mind to perform good design in the following post:

http://sent2null.blogspot.com/2008/02/considerations-during-design.html

Now, UML is something that has been pushed for some time; the only time I've used it on a project was to extract my class hierarchy from the source code into pretty diagrams I could print. That was it...the overall design of the application had already been diagrammed dozens of times with paper and pencil. The slogan of Apriority LLC, the company I founded to write AgilEntity (the project in question), is "taking the time to properly design". I saw over and over again the problems that happen when a long design stage is not allowed for before writing code...you end up building things that work, but work like Rube Goldberg machines: incredibly fragile to changes that push the system out of the regime it was precisely designed for. This is a major reason why so many enterprise software projects eventually crumble under their own inefficient weight once load hits them and need to be re-architected. I thank the many horrible engineers out there writing code for all the jobs I've had where I was paid well to help fix such systems!

That said, specifications are different. A spec can be simply a well-detailed header comment above a class or method...it can reference the related classes it will be used in and why, and maybe note how it could be changed in the future to enable some as yet unneeded functionality. The key is enough supplemental information about what the code is doing to allow it to be a) understood and b) changed in relation to all other code that may call on it, without causing unseen pathology once the changes are deployed.
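For illustration, here is a minimal sketch of what such a header-comment spec might look like on a small Java class (the class and its described callers are hypothetical examples of mine, not code from AgilEntity):

/**
 * Spec: RateLimiter
 *
 * Purpose: caps how many operations a caller may perform per fixed time window.
 * Used by: request-handling classes that want to reject excess calls before
 * doing expensive work; those callers decide what to do with a rejection.
 * Future change: the fixed window could be swapped for a sliding window
 * without changing this public interface or any calling code.
 */
public class RateLimiter {

    private final int maxPerWindow;   // allowed operations per window
    private final long windowMillis;  // window length in milliseconds
    private long windowStart;         // start time of the current window
    private int count;                // operations seen in the current window

    public RateLimiter(int maxPerWindow, long windowMillis) {
        this.maxPerWindow = maxPerWindow;
        this.windowMillis = windowMillis;
        this.windowStart = System.currentTimeMillis();
    }

    /**
     * Spec: tryAcquire
     * Returns true if the caller may proceed, false if the current window is
     * already full. Resets the counter when a new window begins, so callers
     * never need to manage the window themselves.
     */
    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            windowStart = now;
            count = 0;
        }
        if (count < maxPerWindow) {
            count++;
            return true;
        }
        return false;
    }
}

The point is not the rate limiting itself; it's that the comment states purpose, callers and a likely direction of change, so the class can be modified later without surprising the code that depends on it.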

I have a few examples of my developments in AgilEntity chronicled that show how good *engineering* can enable massive refactoring tasks in a short time that, in less well-designed systems, would simply not be attempted. Here's one:

http://sent2null.blogspot.com/2009/03/late-hour-super-class-surgery.html

When the time to refactor is greater than the time to write from scratch, it isn't worth it to refactor.

17 January, 2013

Rumored PlayStation 4 10-year life...hardware and history explain why.


In a recently reported article, a Sony representative had this to say about their console plans:


"We at PlayStation have never subscribed to the concept that a console should last only a half-decade. Both the original PlayStation and PlayStation 2 had life cycles of more than 10 years, and PlayStation 3 will as well. The 10-year life cycle is a commitment we've made with every PlayStation consumer to date, and it's part of our philosophy that we provide hardware that will stand the test of time providing that fun experience you get from day one for the next decade."



It makes a lot of sense, but the trend of longer lifetimes is not unique to Sony's devices. Release periods between new game systems have been growing longer since the days of the Atari VCS. Note that the current generation of video game systems lasts a lot longer than the ones that were out when I used to play them, which were updated yearly.

Modern systems only have finite processing requirements, given that so many have already maxed out high-frame-rate performance for 3D gaming (the most processor and memory intensive kind) at high resolution on the most used panels (today, LCD and LED panels of either 720p or 1080p resolution). All the necessary hardware muscle to drive that panel depth can safely be packed into a cell-phone-sized device today...in fact there are smartphones today that have the *same* maximum pixel resolution (1920 x 1080) as the PlayStation 3.

Once processing needs are maxed out there is no real reason to deploy new hardware; you simply focus on making better game experiences with the hardware that is already *good enough* to give great performance on the games designed for it.

There is a reason why all the game console makers have basically stopped using resolution as a marketing strategy in selling their games. When I was smaller, resolution was a top line item to tout as an advantage over your competitor's console, because back then you could *see* the difference. Not any more.


The latest generation of graphics boards and chips is now doing stuff *in real time* that was unheard of 10 years ago even when simulated using old tricks like various types of mapping or shadowing procedures. Real-time physics is all the rage, real-time object deformation, real-time fire and water effects...the hardware has simply gotten far beyond the applications that developers are programming it to perform.

Any PS4 is going to be at least current-gen capable, and that would make it a *beast* when run at the relatively small resolution (for a standard large-screen panel) of 1080p HD. Now, things will change once high-pixel-density displays start hitting the 50"+ panel market; those will show the lower resolution of 1080p (upscaled to Ultra HD) for what it is...which of course will get people wanting a player that can drive the physics in real time, the textures and the impacts and all the rest, at the higher resolution. That will require more horsepower, but Ultra HD panels are still tens of thousands of dollars and just coming to CES...it will be, ta da, at least 10 years before they get down in price to where gamers start buying them...and noticing that their (then 10-year-old) PS4 can't drive them without slow frame rates...and then they will want to upgrade.
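To put rough numbers on that "more horsepower" point, here is a back-of-the-envelope sketch that counts pixels alone (it ignores shader, texture and physics cost, so treat it as a floor on the extra work Ultra HD demands):

public class PixelBudget {
    public static void main(String[] args) {
        long fullHd  = 1920L * 1080L;  // 2,073,600 pixels per frame (1080p)
        long ultraHd = 3840L * 2160L;  // 8,294,400 pixels per frame (Ultra HD / 4K UHD)

        System.out.println("1080p pixels per frame:    " + fullHd);
        System.out.println("Ultra HD pixels per frame: " + ultraHd);
        System.out.println("Ratio: " + ((double) ultraHd / fullHd) + "x"); // 4.0x

        // At 60 frames per second that is roughly 124 million vs. 498 million
        // pixels to fill every second, before any physics or texture work.
        System.out.println("1080p pixels/second at 60 fps:    " + fullHd * 60);
        System.out.println("Ultra HD pixels/second at 60 fps: " + ultraHd * 60);
    }
}

So a console that is merely comfortable at 1080p has to find roughly four times the fill capacity, on top of everything else, before it looks comfortable on an Ultra HD panel.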





16 January, 2013

Is the core principle that guides evolution conservation or non conservation?


I completely disagree on a visceral level with this statement as I understand it:

"Mutations or other causal differences will allow us to stress that "non conservation principles" are at the core of evolution, in contrast to physical dynamics, largely based on conservation principles as symmetries."

From a recently published paper on arXiv.

Completely wrong: continuous conservation is precisely why evolution is happening, and precisely why it seems pseudorandom over a large set of interacting forms. The large size of the phase space, in terms of changing interrelated emergence, hides this underlying continuous conservation.

I posit it's the same reason why computer code of a given complexity tends to look the same way as an evolved organism...like a seemingly hodgepodge construction, but one that works, and does so because a continuous process of conservation was undertaken as it developed.

I've written a bit about this connection between code conservation and evolution, a symmetry I first observed in the '90s when I started to learn OO code but already had a good understanding of biology and genetics:

http://sent2null.blogspot.com/2008/02/on-origins-of-comlexity-in-life.html

http://sent2null.blogspot.com/2012/03/humans-locomotionwhy-bipedality-robots.html

Now, it is one thing to note the process and then note that, as it proceeds and as the complexity of the system increases, the ability to be highly conservative from step to step goes DOWN (which it does, since as the phase space grows it becomes harder to identify and reuse previously utilized adaptations). The same is precisely true in computer code: a point comes where the cost of doing something from scratch is lower than the cost of searching for existing solutions in the corpus of the work...in the case of code this is done by a human agent.

In evolution it is done by no agent...simply the proximal dynamics (in the form of physio-chemical energy conservation between components of the biological system) either finding it easier to appropriate some nearby molecule through some chance interaction, or being incapable of doing so if said molecule is not physio-chemically proximal to it. That's it...nothing more...still conservative, but the scale of proximity is what creates the illusion of non conservation for potential interactions between objects that are more physio-chemically distal to one another.

I think the authors are being confused by the very fast way that the underlying physio-chemical conservation principles blow up into apparent chaos as the number of interacting and mutating agents and systems grows. That is just not the same as saying that non conservation principles are at "the core" of evolution...physio-chemical interaction is what induces mutation...without some closeness of entities, new forms simply will not arise between disparate systems. A mutation in a cat is not going to trigger a mutation in a bird in the tree...not now, not ever. Fully understanding why explains the gross error of at least that statement in this paper, in my view.

Interestingly, in the paper:

"They have an internal, permanently reconstructed autonomy, in Kant’s
sense, or Varela’s autopoiesis, that gives them an ever changing, yet “inertial” structural stability.
They achieve a closure in a task space by which they reproduce, and evolve and adapt by processes
alone or together out of the indefinite and unorderable set of uses, of finding new uses to sustain
this in the ongoing evolution of the biosphere."

Don't they see that this emergent extended interaction space, which indeed is indefinite and unorderable in its uses, only exists because of those physio-chemically proximal mutation events? Maybe we are disagreeing more over their word use, which seems to state that the underlying energy conservation is not the heart of the process. If there were no conservation of energy there would be no mutational evolution to speak of; the system would fall apart before it could use the induced vector of a possibly preferential mutation...the entire system would be falling apart rather than falling together.

So I agree with them that at the emergent level there is a state transition: the interactions of the systems create new dynamics that seem almost completely random and unpredictable in the variation they can produce. However, all that structure only exists because of tight constraints at the physio-chemical level inside the interacting cells and structures, as mutations occur between "nearby" entities and are highly constrained by physics to observe conservation of energy principles. Think again on the example given above of the mutation in the cat; it is the key idea in my view.

06 January, 2013

Exoplanetary altruism...will it be similar to ours?

Facebook friend Johnathan Vos Post posed a question regarding altruism in humans as compared to what we might find when extraplanetary species are encountered. My answer:

It has to, otherwise the species would self-destruct. It would never grow a society complex enough to take advantage of the increased brain size of the individuals.

The threshold for competing with one another would prevent civilization from ever forming...so you'd get species like we have here on Earth, which have intelligence but have never emerged complex civilization from the cultural tricks that they've evolved. Many species exhibit clear signs of altruism...it being necessary, in fact, for parents to even *care* about or for their progeny.

It may be instinctive, but that makes no difference...the induced empathy is what leads to the formation of relationships that allow younger generations to grow older and pass on the gene pool...absent that empathy you've got a self-destructive situation for the entire species.

In human evolutionary history, keep in mind that despite the fact that we have very large, intelligent brains and possibly the most complex social interactions of all animals...it still took us nearly 200,000 years to emerge civilization.

I say this was so for two reasons:

1) Gathering intelligence collectively over time is hard. Intelligence isn't enough in individuals...some means of copying it across individuals as they grow, age and die must exist. While the environment is trying to kill you, this is hard to do consistently. I am sure there were many Einsteins born 150,000, 90,000 and 45,000 years ago...and they died because of where they happened to be...or their efforts, given the paltry culture and tools that existed where they happened to be born, only allowed them limited ability to advance things in their area before they were expunged...by some virus, or some disaster or war. This likely happened thousands of times all over the world.

2) The natural tendency to fear "the other" is a powerful motivator of anti-altruistic behavior, especially when "the other" is speaking a different language, wears different clothes and prays to a different god. Rather than being seen as a bonus, all that difference is a reason to want to get rid of the "other" as quickly as possible. We see this over and over...conflict fomented by just the existence of perceived difference. Were it not for our ability to collect intelligence over time and use it to increase survival...and thus produce societies where more smart brains can think beyond survival needs and then postulate the possibility of altruism being applied with the "other" to achieve common goals of survival...we would still be a thousand little bands of warring factions, each surviving but all fearful of the next raid or attack from some nearby group.

The answer is really about the math of what maximizes survival of the species in a given competitive environment when both intelligence and social living are present. You can't even get to social living without some level of individual give and take...and that requires altruism...so absent that, you won't even emerge social species complex enough to achieve civilization.


I also expanded a bit on how I felt empathy and altruism were related in our species, and in fact all species.

At base:

Empathy (the ability to see through the eyes of another) > sympathy (using empathy to feel what another feels once seeing what they see) > altruism (giving just to give, without expectation of return) > cooperation (giving with hope to get something in exchange...either material or social favor down the line)

It's starting to look like the base (empathy) is hard-wired, and if a species doesn't have it, or has it at different expression levels, that species will find it very difficult to ever rise above "the noise of survival" to even get to being altruistic in the social sense that we humans exhibit.

So my conclusion is: in order for exoplanetary species to even get to a point where they are advanced and can probe beyond their home worlds, the same as we do with signals and robot probes to other bodies in our solar system, they would need the built-in machinery to exhibit empathy. That would need to be combined with advanced intelligence (simply scale) such that, over interaction time, separate groups could learn to apply the ability for derived cooperation (which in my view exists once altruism exists) to reduce survival constraints for all groups, to the point that cooperation between groups is not only likely but advantageous....the wrong balance in the species leads to it either self-destructing at some point of development or never rising beyond a given point of social complexity.