considerations during design

The software design process boils down to a few key abilities:

  • Defining the problem scope.
  • Targeting the most applicable solution domain for that scope. Scope encompasses all extremes of the problem space, from the rare and unlikely scenarios to the very common ones; these extremes set the demands on resources: disk, processor and bandwidth. The art of good design lies in knowing how to tailor your solutions to the most applicable solution domain for the problem at hand.
  • Implementing the solution for the applicable domain of importance in the way that is most efficient, as opposed to most expedient.
  • If multiple solution domains must be covered, ensuring a seamless transition from one algorithm to the next.

The first and second points are the most important, as you won't know how best to solve a problem if you can't define its extents. Developers are often unable to put their fingers on all aspects of a problem. This is unfortunate, as it can severely restrict the solution they engineer: ignorance of aspects of the problem that could be exploited for symmetry leads them to make inferior algorithm choices for the solution domain.

How do you determine a problem's extents? You test it at the extremes of performance using an old tool, the thought experiment. Consider a very unlikely high-load situation and roughly define an algorithm that solves it (one solution domain); then consider the opposite, very low load, and determine the optimum solution domain for that regime; finally, consider a middle-of-the-road situation and define a solution for that. Depending on the problem, you may find that a single solution is optimal across the entire problem space, or that separate optimizations are required within the problem scope. Once you have tested these edges, the event horizon of the problem, you have covered all the realizable conditions and can be assured you are engineering optimal solutions, even if the eventual algorithm for most cases will not extend into the extreme scenarios discovered. In fact, the act of defining the problem already sets you on the road to the solution: by this task you also determine which of the identified solution domains is the most likely use case given the load, resource and bandwidth constraints of the final implementation.
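To make the extremes concrete, here is a minimal sketch in Java (the class name RegimeAwareSorter and the threshold of 32 are illustrative assumptions, not measured values) of a single entry point that covers two regimes of the same problem space: insertion sort is optimal for small, common inputs, a general-purpose sort wins as the input grows, and the threshold is where one solution domain hands off to the other.

```java
import java.util.Arrays;

public class RegimeAwareSorter {

    // Assumed cutoff; in practice you would measure it on the target machine.
    private static final int SMALL_INPUT_THRESHOLD = 32;

    public static void sort(int[] data) {
        if (data.length <= SMALL_INPUT_THRESHOLD) {
            insertionSort(data);  // optimal for the small, common regime
        } else {
            Arrays.sort(data);    // general-purpose sort for the large, rare regime
        }
    }

    // Simple insertion sort: low overhead, ideal for tiny inputs.
    private static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }
}
```

The caller never sees the handoff, which is the seamless transition of solutions from one algorithm to the next that the last bullet point asks for.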

The next two points cover the implementation, which I like to call popcorn: the solution domains have been isolated, the optimal domain(s) for the problem in question explored using the previous intellectual muscle work, and now it is time to do the grunt work of building it. The mind shifts from big-picture concerns, like latency between servers in a cluster, to little-picture concerns local to the executing machine at run time. A good example is noticing how declaring a variable static can affect memory utilization on a machine; a similar concern is the choice between implementing a method as concrete or as a forced override from an abstract base class or interface inheritance. These choices can hinder or help the efficient execution of code on any given system. One I pay particular attention to is byte size: every character in your code that is not needed is memory taken during execution, and under loaded conditions these bytes add up to significant performance reduction, so making your code as tight as possible through extreme parsimony of characters directly benefits efficiency in the long run. The rule of thumb I use: use as many characters as required to ensure intelligibility of the code, and no more.

Another major source of issues lies in making classes too big; a class should only contain methods that are innately associated with it. It makes sense for a "File" class to have a "read" method, but it probably doesn't make sense for it to have its own "copy to" ("copy to" should be something you do to Files, not something Files do to themselves, a very subtle distinction). Also note when a function you wish to add to a class could be useful to other classes. These generalized functions are better off in a static Utilities class, where they can be employed by the different classes that need them, and where the static nature of the class ensures minimal loading of the method code for each instance of the classes that employ the function. For example, if "copy to" were implemented in "File", it would be loaded to memory every time a File was instanced, taking up critical resources; under load this seemingly small difference could prematurely curtail performance and directly impact operating cost. Moreover, loading the class instance (and all its methods) does not guarantee they will be used, so the loading is waste in most cases (especially for a File class, where you most likely want to read from it or write to it rather than copy it to some location). By having the "copy to" method in a static Utility, you ensure that it is highly likely to be used over the landscape and lifecycle of ALL the classes in your hierarchy that may require the function.
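As a sketch of that separation (the names ManagedFile and FileUtilities are hypothetical, chosen only for illustration), the file class below keeps only its innate read behavior, while the generalized copy routine is defined once in a static utility and is available to anything that holds a path:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Keeps only behavior innate to a file: reading its own contents.
class ManagedFile {
    private final Path path;

    ManagedFile(Path path) { this.path = path; }

    Path getPath() { return path; }

    String read() throws IOException {
        return Files.readString(path);
    }
    // Deliberately no copyTo(): copying is done TO a file, not BY it.
}

// Generalized operations live in a static utility class, defined once
// and shared by every class that needs them.
final class FileUtilities {
    private FileUtilities() { }  // no instances, purely static helpers

    static void copyTo(ManagedFile source, Path destination) throws IOException {
        Files.copy(source.getPath(), destination);
    }
}
```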

Finally, and related to the last point of putting methods where they are most likely to be used, is the idea of just-in-time coding: you want to make sure that when you load something you will use ALL of its code. If you won't, you should consider loading the thing in parts (or exporting the unused functions to static classes, as mentioned previously). What this ends up doing is make your class hierarchy wide but not tall, meaning your memory profile looks like the loading of many LITTLE classes rather than a few LARGE classes. Under loaded conditions the latter is far less efficient with resources than the former, so remember: many Little is better than few Large when it comes to classes (or any atomically executed code unit: html page, javascript, jsp templates, etc.). Considerations like these refine the big-picture solution domains into solutions that highly conform to the overall problem over its entire scope and yield an optimal, elegant solution. As with anything else, practicing the steps on real problems gains one facility at performing them.
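Here is a minimal sketch of the wide-not-tall idea, leaning on the fact that the JVM loads a class only on its first active use (ReportService and PdfExporter are hypothetical names): the rarely used export path is factored into its own little class, so its code never enters memory unless someone actually exports.

```java
public class ReportService {

    // Common path: needed on nearly every request, so it lives here.
    public String renderSummary(String data) {
        return "summary: " + data;
    }

    // Rare path: PdfExporter is a separate class, and the JVM will not
    // load it until the first time this method actually runs.
    public byte[] exportPdf(String data) {
        return PdfExporter.export(data);
    }
}

// Hypothetical helper holding the rarely used code path.
final class PdfExporter {
    private PdfExporter() { }

    static byte[] export(String data) {
        return data.getBytes();  // stand-in for a real PDF rendering routine
    }
}
```

Splitting along the seams of actual usage is what keeps the hierarchy wide: each small class is loaded only when, and if, its regime of the problem actually occurs.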
