Considerations during design

The software design process boils down to a few key abilities:

  • Defining the problem scope.
  • Targeting the most applicable solution domain for that scope. Scope encompasses all extremes of the problem space, from rare and unlikely scenarios to the very common ones; these extremes set the demands on resources: disk, processor, and bandwidth. The art of good design lies in knowing how to tailor your solution to the most applicable domain for the problem at hand.
  • Implementing the solution for the applicable domain of importance in the way that is most efficient, as opposed to most expedient.
  • If multiple solution domains must be covered, ensuring a seamless transition from one algorithm to the next.

The first and second points are the most important, since you won't know how best to solve a problem if you can't define its extents. Many times developers are unable to put their fingers on all aspects of a problem. This is unfortunate, as it can severely restrict the solution they engineer: ignorance of aspects of the problem that could be exploited for symmetry leads to inferior algorithm choices for the solution domain.

How do you determine a problem's extents? You test it at the extremes of performance using an old tool: the thought experiment. Consider a very unlikely high-load situation and roughly define an algorithm that solves it (one solution domain); then consider the opposite, a very low-load situation, and determine the optimum solution domain for that regime; finally, take a middle-of-the-road situation and define a solution for that. Depending on the problem, you may find that a single solution is optimal across the entire problem space, or that it requires separate optimizations within the problem scope. Once you have tested these edges, the event horizon of the problem, you have covered all the realizable conditions and can be assured you are engineering optimal solutions, even if the eventual algorithm for most cases will not extend into the extreme scenarios discovered. In fact, the act of defining the problem already sets you on the road to the solution: by this task you also determine which of the identified solution domains is the most likely use case given the load, resource, and bandwidth constraints of the final implementation.
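The thought experiment above maps naturally onto code that switches algorithms by load regime. As a minimal sketch (the class name `RegimeSort` and the cutoff of 32 are illustrative assumptions, not tuned values), a hybrid sort might use low-overhead insertion sort in the low-load regime and hand larger inputs to the library sort:

```java
import java.util.Arrays;

// Illustrative sketch: one "solution domain" per load regime.
// For tiny inputs, insertion sort's low constant overhead wins;
// for large inputs, the library's O(n log n) sort wins.
class RegimeSort {
    static final int SMALL_THRESHOLD = 32; // assumed cutoff, not tuned

    static void sort(int[] a) {
        if (a.length <= SMALL_THRESHOLD) {
            insertionSort(a);  // low-load regime
        } else {
            Arrays.sort(a);    // high-load regime
        }
    }

    // Simple O(n^2) insertion sort, cheap for small n.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }
}
```

The seam between the two domains is a single, explicit threshold, which is what makes the transition between algorithms seamless to callers.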

The next two points cover the implementation, which I like to call popcorn: the solution domains have been isolated, the optimal domain(s) for the problem explored using the intellectual muscle work above, and now it is time to do the grunt work of building. The mind shifts from big-picture concerns, such as latency between servers in a cluster, to little-picture concerns local to the executing machine at run time. A good example is noticing how declaring a variable static can affect memory utilization on a machine; a similar concern is the choice between implementing a method as a concrete one or as a forced override from an abstract base class or interface. These choices can hinder or help how efficiently code executes on any given system. One I pay particular attention to is byte size: every character in your code that is not needed is a byte that must be stored, shipped, or parsed, and for code that is served or interpreted under loaded conditions these bytes add up, so keeping your code tight through parsimony of characters directly benefits efficiency in the long run. The rule of thumb I use: use as many characters as required to ensure the intelligibility of the code, and no more.

Another major source of issues lies in making classes too big. A class should only contain methods that are innately associated with it. It makes sense for a "File" class to have a "read" method, but it probably doesn't make sense for it to have its own "copy to" ("copy to" is something you do to Files, not something Files do to themselves, a very subtle distinction). Also note when a function you wish to add to a class could be useful to other classes as well. These generalized functions are better off in a static Utilities class, where they can be employed by the different classes that need them, and where the code is loaded once for the whole application rather than being carried by every class that might want the behavior. If "copy to" were implemented in "File", its code would be pulled in whenever the class loads, even though it would go unused in most cases (especially for a File, which you most likely want to read from or write to rather than copy somewhere). Under load, this seemingly small difference can prematurely curtail performance and directly impact operating cost. By having the "copy to" method in a static Utility, you ensure it is available across the landscape and lifecycle of ALL classes in your hierarchy that may require the function.
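As a minimal sketch of this split (the `File` and `FileUtils` classes below are hypothetical illustrations, not a real API), the instance class keeps only its intrinsic operations while the generalized operation lives in a static utility:

```java
// Hypothetical File: only methods innately associated with a File.
class File {
    private final String name;
    private final StringBuilder contents = new StringBuilder();

    File(String name) { this.name = name; }

    String getName() { return name; }
    String read() { return contents.toString(); } // intrinsic to File
    void write(String data) { contents.append(data); }
}

// "copy to" is something you do TO a File, so it lives here,
// in one place usable by any code that needs it.
final class FileUtils {
    private FileUtils() {} // no instances; purely static

    static File copyTo(File source, String newName) {
        File copy = new File(newName);
        copy.write(source.read());
        return copy;
    }
}
```

The utility class is never instantiated, and its code is only loaded when something actually calls it, rather than riding along with every `File`.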

Finally, and related to the last point of putting methods where they are most likely to be used, is the idea of just-in-time coding: you want to be sure that when you load something you will use ALL of its code. If you won't, consider loading the thing in parts (or exporting the unused functions to static classes, as mentioned previously). What this does is make your class hierarchy wide rather than tall, meaning your memory profile looks like the loading of many LITTLE classes rather than a few LARGE classes. Under loaded conditions the latter is far less efficient with resources than the former, so remember: many Little is better than few Large when it comes to classes (or any atomically executed code unit: an HTML page, JavaScript, JSP templates, and so on). Considerations like these refine the big-picture solution domains into solutions that conform closely to the overall problem over its entire scope and yield an optimal, elegant result. As with anything else, practicing these steps on real problems builds facility at performing them.
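One common way to get this just-in-time behavior in Java is the initialization-on-demand holder idiom. In the sketch below (the names `ReportService` and `PdfRenderer` are hypothetical), a heavy, rarely used helper sits in its own small class that the JVM only loads and initializes the first time it is actually needed:

```java
class ReportService {
    // Cheap, commonly used path: no heavy dependencies.
    static String summary(int count) {
        return "records: " + count;
    }

    // Heavy, rarely used path is isolated behind a holder class.
    static String renderPdf(int count) {
        return PdfHolder.RENDERER.render(count);
    }

    private static class PdfHolder {
        // The JVM loads and initializes this class (and thus the
        // renderer) only when renderPdf is first called.
        static final PdfRenderer RENDERER = new PdfRenderer();
    }
}

// Stand-in for an expensive component; a real one might pull in
// a large rendering library.
class PdfRenderer {
    String render(int count) {
        return "pdf(" + count + " records)";
    }
}
```

Callers that only ever use `summary` never pay for the renderer at all, which is exactly the wide-not-tall memory profile described above.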

