avoiding de-spaghettification in client implementations of good OO classes

http://en.wikipedia.org/wiki/Wilkins_Sound

That's about as political as this post will get. nuff said.

In other news, I did not get much done today. I rebuilt the software distribution for a Windows environment, incorporating the fix from last night that I had been working on for the previous three days (but thought would take only three minutes)...ha!

The fix was in the code for my guest PM dashboard API that I added a few weeks ago. If you have ever been to a site with a "live help" feature, you know what a guest PM dashboard is about. I targeted it as an easily added service to my collaboration API because it takes advantage of the unique architecture of my distributed framework (built-in multi-tenancy, built-in fine-grained permissions, automatic auditing of guest requests and agent engagement history). The solution details are not really relevant, but the bug highlights a problem that can crop up when too much code is put into a single dynamic resource. Recommended OO design principles involve creating classes that map to entities in the problem space and encapsulating the specific elements of functionality into methods on those classes. This is the art of OO design that comes only with much practice encoding problems into solutions. I talked about it previously in several posts, like this one.
The thing is, some problems map logically onto many methods or functions. In the final implementation these classes can end up being very thick, even though the client code that uses the class will invoke only a fraction of the methods at any given time. So it is possible to follow the correct design precepts of OO and still end up with classes that are memory inefficient at run time because of the specific usage pattern of the class objects in the final client code.
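
To make that "thick class" shape concrete, here is a minimal, hypothetical Java sketch (the Order domain and its method names are my own illustration, not anything from the actual codebase): every method is a reasonable mapping of the problem space, yet any single client path only ever exercises a slice of them.

```java
// Hypothetical illustration of a "thick" class: each method is a sensible
// OO mapping of the problem space, but any single client usage pattern
// invokes only a fraction of them at run time.
public class Order {

    // Common behavior every client path needs.
    public void addLineItem(String sku, int quantity) { /* ... */ }
    public void applyDiscount(double percent)         { /* ... */ }

    // Variant-specific behavior: a domestic-shipping client never calls
    // the international or pickup methods, yet the full class rides along.
    public void shipDomestic()          { /* domestic orders only      */ }
    public void shipInternational()     { /* international orders only */ }
    public void generateCustomsForms()  { /* international orders only */ }
    public void scheduleLocalPickup()   { /* in-store pickup only      */ }
}
```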

This type of problem sneaks up on the coder slowly and quietly. I had this happen not with a class but with the actual client code as implemented in a JSP template. A single template named "emit" is used to manage the authentication and interaction in a conversation regardless of the type of conversation. The associated Conversation class has a type attribute which defines four distinct types so far (sketched as an enum after the list):

  • Instant message (2 participants max)
  • Conference (n participants)
  • IM mail (1 participant)
  • Guest PM client IM
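
Expressed in code, that type attribute is simply a closed set of variants. A minimal Java sketch (the enum name and constant names are my own, not the actual class) might look like this:

```java
// Hypothetical sketch of the conversation type attribute as a closed set of variants.
public enum ConversationType {
    INSTANT_MESSAGE,  // 2 participants max
    CONFERENCE,       // n participants
    IM_MAIL,          // 1 participant
    GUEST_PM          // guest PM client IM ("live help" style)
}
```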

Each conversation type has associated code in the emit template that is unique to that type. During my initial implementation only two types (IM and conference) were planned, but the other two were added as those functions were deemed necessary. Slowly the emit template ballooned to support the different initiation logic for each type of conversation plus the actual code for the unique elements. Currently the emit template is a fat 155 KB uncompiled and slims down to 119 KB in its server-executed compiled form. I can optimize the code in it to get it under 100 KB per conversation instance, but the ideal solution would be to cut the functions for each conversation type into separate JSPs or servlets. I can keep the common elements (authentication, message management, presence and file display) in one template and then create specific templates for the conversation-unique invocation elements (initializing an IM, conference, IM mail or guest PM). This would allow the 119 KB to be cut up into smaller blocks that execute only when they are needed, so the continuous memory hit for run-time conversation actions under load would have a lower profile and fewer spikes on the servers.

Of course the cost is additional work for me in time, but the benefit is a lower average memory load per conversation, which allows more conversations to be active in a given amount of memory, which means better scalability for a given amount of memory, which is simply cheaper for me to procure. I literally get a more gradual scalability profile under load, which for the paid service options directly determines how much revenue can be pulled from each server. So though I haven't implemented the described client code split, I have it targeted for optimization just before launch.

The point, however, is that it is possible to follow OO principles and, because of the loaded nature of the problem space (many different types of conversations in this one), still end up with certain objects that are "heavy" on resource utilization under load and negatively impact performance. In such situations it is necessary to break up the solution into logically related items that can be invoked just in time to ensure an efficient resource utilization profile. It is actually a good problem to have, in the sense that optimization may result in significant memory efficiency gains for little more than the time of doing the optimization (which amounts to taking scissors to the client code). How do you determine if you'll have this issue? Simply ask whether the number of variants for a given attribute of the class is finite and whether those variants require associated unique code in client implementations. If they do, the best option is to create a separate block of dynamic client code (a JSP or servlet in this case) for each attribute variant, as in the sketch below.
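
As a rough illustration of that split (not the actual emit implementation; the servlet class, request parameter and JSP paths here are hypothetical), a thin dispatcher can keep the common handling in one place and forward to a small type-specific template, so only the variant actually being initiated is loaded for a given request:

```java
// Hypothetical dispatcher sketch: common conversation handling stays in one
// place, while each conversation type forwards to its own small JSP so the
// other variants' initiation code is not pulled in for this request.
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class EmitDispatcherServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {

        // Common elements shared by every conversation type (authentication,
        // message management, presence, file display) would be handled here
        // or in a shared included fragment.

        String type = request.getParameter("type"); // hypothetical parameter
        if (type == null) {
            type = "im";
        }

        // Forward only to the variant-specific initiation template.
        String target;
        switch (type) {
            case "conference": target = "/conversation/initConference.jsp"; break;
            case "immail":     target = "/conversation/initImMail.jsp";     break;
            case "guestpm":    target = "/conversation/initGuestPm.jsp";    break;
            case "im":
            default:           target = "/conversation/initIm.jsp";         break;
        }
        request.getRequestDispatcher(target).forward(request, response);
    }
}
```

The design point is simply that each per-type template compiles into its own, much smaller class, so the variant-specific memory cost is paid only while that variant is actually in use.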

So keep an eye out for solutions that overload a single resource in a way that may lead to inefficient memory or processor utilization at run time under loaded conditions. That way you can do the cutting beforehand and only code the chunks that are distinct for the new function being added. Of course, if you are doing good OO, the only difference between doing it beforehand and after is that when you do it after you may have to do some de-spaghettification of the combined client code, but you get to realize the resource reduction as a hopefully noticeable increase in scalability on your servers. ;)
