Functionality stuffed into procedures...


The funny thing about "functional programming" to me is that when you compartmentalize the structure of your code in any language and do it very well...you will naturally embrace a modular paradigm that combines functional and procedural pockets.

In OO code for example, you often write procedural chunks inside atomic methods, which you then invoke in a functional way against the class they are attached to. This happens at the class programming stage.

The functional aspect of developing OO code then comes into play when the client programmers take the created classes and put their methods and exposed attributes together to create a (hopefully) highly functional framework that spans the general problem scope of the given problem at hand.

For example, if you want to write a class that allows you to perform a transformation of a geometrical object on the screen, you'd first create encapsulated representations of the "how to draw any object", "how to draw *this* object", "how to move any object" and "how to move *this* object" problems.
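As a minimal sketch of those four problems (the `Shape`/`Circle` names and the string-based "drawing" are hypothetical, just to make the separation concrete): the base class encodes "any object", the subclass encodes "*this* object".

```python
# Hypothetical sketch: base class encodes "how to draw/move any object",
# the subclass encodes "how to draw *this* object".
class Shape:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def draw(self):
        # "how to draw any object": a shared template that asks the
        # subclass for its specific outline
        return "%s at (%d, %d)" % (self.outline(), self.x, self.y)

    def move(self, from_loc, to_loc):
        # "how to move any object": one shared procedural chunk
        if (self.x, self.y) != from_loc:
            raise ValueError("object is not at the stated location")
        self.x, self.y = to_loc

class Circle(Shape):
    def outline(self):
        # "how to draw *this* object"
        return "Circle"

c = Circle(0, 0)
c.move((0, 0), (3, 4))
print(c.draw())  # prints "Circle at (3, 4)"
```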

Once these are encoded into the necessary classes, you are then just functionally invoking those procedural bits of code against an input data set that represents the desired object.

Object.move(fromlocation,tolocation);

or the operation can be functionally independent of the object, like:

Move(object,fromlocation,tolocation);

One's choice of design in solving the functional problems described previously will constrain which approach of the two above is most useful in the functional domain. For example...if a general decoupling of what it means to "Move" can be made for all objects in a given draw space (say, a rectangular grid of pixels), then the second form makes sense for all types of geometries that "object" can define.
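The decoupled second form can be sketched like this, assuming (hypothetically) that every object is a plain record carrying a `loc` anchor point on the pixel grid, so one `move` serves every geometry:

```python
# Sketch of the decoupled form: "move" is defined once, independent of
# the object's geometry (the "loc" field is an assumed convention).
def move(obj, from_loc, to_loc):
    # One meaning of "move" for all geometries: translate the anchor point.
    if obj["loc"] != from_loc:
        raise ValueError("object is not where the caller claims")
    obj["loc"] = to_loc

rect = {"kind": "rectangle", "loc": (0, 0), "w": 4, "h": 2}
tri = {"kind": "triangle", "loc": (5, 5)}
move(rect, (0, 0), (10, 10))
move(tri, (5, 5), (0, 1))
```

The same function moved both a rectangle and a triangle because "move" was decoupled from what the object actually is.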

On the other hand, if "move" is contextually polymorphic against the type of "object", then the first form above is more appropriate. It's the developer's art to know which one would be most useful to the client programmers BEFORE releasing this functionality to them. Hence the importance of UNIT TESTING to verify that the solution a) spans a large enough region of the problem scope, b) does so efficiently (not with a spaghetti or non-performant solution) and c) does not harbor any execution pathologies that may be fatal (race conditions induced by concurrency bugs, or failures on bad data values).
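A sketch of the first, polymorphic form, together with the kind of minimal unit checks argued for above (the `Rook`/`Bishop` types are hypothetical illustrations of objects whose "move" semantics depend on their type):

```python
# Sketch of the polymorphic form: each object type decides for itself
# which moves are legal, so "move" belongs on the object.
class Piece:
    def __init__(self, loc):
        self.loc = loc

    def move(self, to_loc):
        if not self.legal(self.loc, to_loc):
            raise ValueError("illegal move for this piece type")
        self.loc = to_loc

    def legal(self, frm, to):
        raise NotImplementedError

class Rook(Piece):
    def legal(self, frm, to):
        # may only move along a rank or a file
        return frm[0] == to[0] or frm[1] == to[1]

class Bishop(Piece):
    def legal(self, frm, to):
        # may only move along a diagonal
        return abs(frm[0] - to[0]) == abs(frm[1] - to[1])

# Minimal unit tests: probe that the solution scope covers the intended
# problem scope, including a bad-data path that must fail cleanly.
r = Rook((0, 0))
r.move((0, 5))
assert r.loc == (0, 5)
try:
    Bishop((2, 2)).move((2, 5))   # not a diagonal: must be rejected
    assert False, "should have been rejected"
except ValueError:
    pass
```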

Programming...nay, Engineering is part Art and part Science for this reason. It's FAR more than just knowing how to write an efficient sorting algorithm in programming language X or Y or Z. When dealing with code that runs on distributed systems (often using different programming languages), a whole new world of architecture problems needs to be confronted by the class programmer. One has to develop this vision for "the right" solution over time...it can't be taught in any rigorous text on how to solve problems in a given language.

A carpenter is not limited in most cases by the tools he has: his hammer and nails, his wood, saws and levels. He's limited by his ability to USE those tools to construct a gabled roof, or a triple level home, or a patio deck. The tools are the mechanics, the Science of the profession of being a carpenter; the knowledge of how to put them together to solve a given problem...well, that's pure art.

I've covered aspects of these ideas in similar posts over the last few years:

http://sent2null.blogspot.com/2008/02/considerations-during-design.html

http://sent2null.blogspot.com/2008/04/avoiding-de-spaghettification-in-client.html

http://sent2null.blogspot.com/2008/04/another-bug-in-eternity-bin.html
