The 10 Commandments of Efficient Coding.

I was recently asked for advice on how to manage a development project and decided to turn the answer into 10 commandments of efficient coding. This list is suitable for any team size and, if followed diligently, leads to efficiently written code. Have any more that you would add to the list? Feel free to share yours in the comments below.
  1. Thou shalt spend at least 50% of one's time thinking about a problem before writing any code. Conceptual design is critical to properly grounding the code in the ballpark of the problem being solved, so that the solution covers the problem space as fully as possible. Skipping this stage can lead to spectacularly bad choices of solution for the scope of a problem.
  2. Thou shalt document everything. Every thought on the development and every implementation idea should be written down. I keep multiple "to do" lists in .txt files, as well as notes in the memo field of my source control tool when I check in new code. Documentation is crucial to illuminating the dynamic thought process that informs code, and it allows forensic analysis to proceed quickly when problems arise. Inside code, it goes without saying that any part whose function can't be inferred on inspection should have supplemental comments explaining it. This is why comments are a formal part of programming languages; use them, or get lost in a mounting sea of code that requires developer time just to figure out what it does before one even embarks on refactoring it.
  3. Thou shalt master object oriented programming. The debate is often made that one has an equal right to choose object oriented (OO) versus procedural methods when coding, but this is nonsense. OO is the default truth of reality: biology is OO, engineering is OO, physics (energy transfer and mediation) is OO, and coding that way ultimately returns efficiency benefits down the line that often can't be seen while coding. The basic tenet of code reuse lets the developer apply a kind of conservation of energy to the act of solving big problems. The extra time taken to code solutions in OO more than repays itself in much easier extension of the code as new needs emerge down the line. In some areas procedural solutions are all that are needed, say for various types of scripting, but even these can be embedded in an OO process. For example, I use an OO workflow and file version management system to manage changes to procedural scripts; in a way, OO is enabling those scripts to be more efficiently managed and executed by composition inside an OO framework.
  4. Thou shalt code socially as often as possible. Pair programming or extreme programming, whether through side by side work on a task or via a visibility workflow system (like action oriented workflow), allows foolish mistakes to be caught as soon as possible. If a developer can get a buddy with fresh eyes to solve, in a few seconds, a bug she's been working on for the last hour, then all those saved hours add up: milestones get hit and more work gets done in less time. Never waste developer time with undue meetings; creative people should be given time to be creative. Facilitate their creativity and more work will be done.
  5. Thou shalt always unit test one's code. Never write more than a few lines of code before testing; OO languages like Java and C# make this trivial by providing the main() facility for inline testing of new code. One should never write a new method without testing how it works in main(), and if that method has more than 5 lines of implementation, testing should probably follow each 5 lines written. Catching scope errors, cast errors, or null exception errors for passed-in arguments during the method implementation stage is far less costly than coding everything up and deploying, only to find that the client code bubbles up nasty pathologies that are hard to find. Testing in main() should be reflexive: write test code for each method covering one or all of its possible client use cases as you write it, and you will encounter far fewer errors at run time in client code. The errors you do find will likely be sourced in the client code and will not require refactoring and recompiling the core class hierarchy.
  6. Thou shalt always close class instances that can be opened. Classes expose "close()" for a very important reason: to prevent a resource from leaking memory. Whether that resource is a db reached via an open socket connection or a file on a file system, what is opened must be closed. This should not be done as an optimization step AFTER coding is complete; it should be a core part of coding. Developers should not get lazy about closing opened resource connections, otherwise nasty surprises await once client code starts being written and, under massive instance creation, leaked resources soon crash apps. There are probably few bugs harder to track down and fix than memory leaks; being diligent about closing open resources removes a significant debugging time sink once code is deployed.
  7. Thou shalt always use a "finally" clause for code that can be catastrophically interrupted. In Java and C#, code execution can be interrupted by catastrophic events (pulled plugs, blown hard drives, dead processor fans). Well written code heals from these events either by implementing the results of a method atomically (i.e. using synchronization and critical sections for code that absolutely MUST run as a unit or not run at all) or by guaranteeing that recovery code runs should something happen. Thus both languages provide "try", "catch", and "finally" blocks. Resources opened in the body of the "try" should always be closed in the "finally"; interrupting exceptions should always be handled in the "catch". The small cost of writing these clauses is far outweighed by final running code that is robust to these types of failures, which, by the way, are another major way in which memory leaks occur.
  8. Thou shalt always catch and handle exceptions, even if "handle" means "write entry to log". Using exceptions to indicate where code went wrong is a critical aid to debugging; nothing tracks a problem back to its source better than a well written exception design. Ideally each app should have its own set of Exceptions that report app-specific failures and record them to a log file.
  9. Thou shalt always log. Every aspect of a running application should be logged, at minimum to a text file; more elaborate means include email or xml files meant to report to web pages, but logging is critical to monitoring the health of a running application. As a cross cutting concern to the problem solved by the app, logging is often seen by developers as icing, but it is indeed the cake: efficient logging allows errors to be trapped and noted immediately and gives a timeline of their occurrence that can be used to tie problems to recently implemented code or other issues.
  10. Thou shalt always optimize, and use a profiler when available. Profilers allow a running application to be inspected for the long term inefficiency trends in memory utilization and object creation and destruction that can kill an app at high scale (i.e. all the startups that junior coders out of college are writing in hopes of being the next Google or Facebook). Profilers can identify which objects are being created without being destroyed or closed, can show where possible access conflicts loom in code that accesses common resources, and can indicate a trend line for how efficiently an app will scale over time. Most critically to the needs of a running business, profilers allow developers to see how efficiently their code is running and where they can gain performance and resource efficiency by simply optimizing code. Tight code is code that does a lot in a little space and is fast and scalable across processing and memory resources; that is critical to the operating cost of the software and the pricing of any services it provides. Lower pricing for services comes from efficient code, which benefits the consumer who chose your service over the other guy's on price; thus tight code helps you beat the competition, and efficient code puts more money in your pocket once it is pitted against what the competition is doing. Finally, thou shalt aim to always reduce the overall code base while adding new features. A truly efficient general approach to a problem is maximally functional while being minimally complex: in the initial feature creation stage, code base size scales nearly directly with feature addition, as features may be very different, but as code reuse of critical elements increases, the cost in code base growth for adding new features should shrink over time, not grow, if the solution is optimal.
Optimizations such as name shortening in core classes and UI templates, refactoring of the same, and consolidation of blocks of reused code are critical to achieving tight code that runs fast, scales deeply in memory across Users, and overall costs less per server, which ultimately benefits the bottom line.
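A few sketches in Java to make the commandments above concrete. First, commandment 2's point about commenting code whose function can't be inferred on inspection; the class, method, and scenario here are hypothetical, invented purely for illustration.

```java
public class CommentDemo {
    // Returns the index of the first gap in a sorted array of IDs,
    // or -1 if the sequence is contiguous. A linear scan is used
    // because callers expect gaps near the front of the array.
    static int firstGap(int[] sortedIds) {
        for (int i = 1; i < sortedIds.length; i++) {
            if (sortedIds[i] != sortedIds[i - 1] + 1) {
                return i; // gap sits between positions i-1 and i
            }
        }
        return -1; // no gap: sequence is contiguous
    }

    public static void main(String[] args) {
        System.out.println(firstGap(new int[]{1, 2, 4, 5})); // prints 2
        System.out.println(firstGap(new int[]{1, 2, 3}));    // prints -1
    }
}
```

Without the comments, a reader must reverse engineer both the contract (what counts as a "gap") and the reason for the chosen approach before touching the code.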
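Commandment 3's code-reuse payoff can be sketched as procedural scripts composed inside an OO frame: the shared management behavior lives once in a base class, and each new need is met by extension rather than rewriting. All class and method names below are illustrative, not from any real framework.

```java
// Shared, reusable behavior lives once in the base class.
abstract class Script {
    private final String name;
    Script(String name) { this.name = name; }

    // Every script is versioned and reported the same way;
    // subclasses supply only their own logic via run().
    final String runManaged(int version) {
        return name + " v" + version + ": " + run();
    }

    abstract String run();
}

// Adding a new capability costs only the new logic, not the plumbing.
class BackupScript extends Script {
    BackupScript() { super("backup"); }
    String run() { return "copied files"; }
}

class CleanupScript extends Script {
    CleanupScript() { super("cleanup"); }
    String run() { return "removed temp files"; }
}

public class OoDemo {
    public static void main(String[] args) {
        System.out.println(new BackupScript().runManaged(2));
        System.out.println(new CleanupScript().runManaged(1));
    }
}
```

Each additional script type grows the code base by only a few lines, which is the shrinking-cost-per-feature trend commandment 10 asks for.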
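Commandment 5's reflexive testing in main() might look like the following: a small method under development with its use cases exercised immediately, before any client code exists. The clamp() method is a made-up example.

```java
public class MainTestDemo {
    // New method under development: clamp a value into [lo, hi].
    static int clamp(int value, int lo, int hi) {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    // Inline tests in main(): cover each use case the client
    // code could present, as the method is being written.
    public static void main(String[] args) {
        assert clamp(5, 0, 10) == 5   : "value in range";
        assert clamp(-3, 0, 10) == 0  : "value below range";
        assert clamp(42, 0, 10) == 10 : "value above range";
        System.out.println("all clamp() cases pass");
    }
}
```

Run with `java -ea MainTestDemo` so the assert statements are enabled; any failure pinpoints the method, not some distant client code.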
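For commandment 6, modern Java (7 and later) makes the what-is-opened-must-be-closed rule mechanical with try-with-resources, which calls close() automatically. This sketch reads the first line of a file it creates itself, so the file name is not an assumption about your system.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CloseDemo {
    // The try-with-resources block guarantees reader.close() runs,
    // releasing the file handle even if readLine() throws.
    static String firstLine(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        } // close() executes here, on success or failure
    }

    public static void main(String[] args) throws IOException {
        // Create a small temp file just for the demonstration.
        Path p = Files.createTempFile("demo", ".txt");
        Files.write(p, "hello world".getBytes());
        System.out.println(firstLine(p.toString())); // prints: hello world
    }
}
```

The same pattern works for any class implementing AutoCloseable, including db connections and sockets, so the close is part of the coding, not a later optimization pass.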
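Commandment 7's try/catch/finally shape, with the close in the finally, can be shown with a toy Resource class standing in for a socket or file handle; the names are invented for the sketch.

```java
public class FinallyDemo {
    // A toy "resource" standing in for a socket or file handle.
    static class Resource {
        boolean open = true;
        void use(boolean fail) {
            if (fail) throw new RuntimeException("catastrophic interruption");
        }
        void close() { open = false; }
    }

    // Returns whether the resource is still open afterwards.
    // The finally block guarantees close() runs whether use()
    // succeeds or throws, so the resource can never leak.
    static boolean runSafely(boolean fail) {
        Resource r = new Resource();
        try {
            r.use(fail);
        } catch (RuntimeException e) {
            System.out.println("handled: " + e.getMessage());
        } finally {
            r.close(); // always executed
        }
        return r.open;
    }

    public static void main(String[] args) {
        System.out.println(runSafely(false)); // false: closed after success
        System.out.println(runSafely(true));  // false: closed despite failure
    }
}
```

Both paths leave the resource closed, which is exactly the leak-proofing the commandment is after.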
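Commandment 8's app-specific Exception idea, where "handle" means at minimum "write entry to log", might be sketched like this; OrderAppException and its fields are hypothetical names for illustration.

```java
public class ExceptionDemo {
    // An app-specific exception that carries enough context
    // to trace the failure back to its source.
    static class OrderAppException extends Exception {
        final String orderId;
        OrderAppException(String orderId, String msg) {
            super(msg);
            this.orderId = orderId;
        }
    }

    static void placeOrder(String orderId, int qty) throws OrderAppException {
        if (qty <= 0) {
            throw new OrderAppException(orderId, "quantity must be positive: " + qty);
        }
    }

    public static void main(String[] args) {
        try {
            placeOrder("A-17", 0);
        } catch (OrderAppException e) {
            // "handle" = record the app-specific context for forensics
            System.err.println("LOG order=" + e.orderId + " failed: " + e.getMessage());
        }
    }
}
```

Because the exception type is the app's own, the log entry names the failing order directly instead of surfacing a bare NullPointerException three layers up.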
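Commandment 9's log-to-a-text-file baseline is available in the standard library via java.util.logging; the logger name "app.orders" and the simulated failure below are illustrative.

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class LoggingDemo {
    // One named logger per application area.
    static final Logger LOG = Logger.getLogger("app.orders");

    public static void main(String[] args) throws IOException {
        // Send log records to a text file; each record is timestamped,
        // giving the timeline that ties failures to recent changes.
        FileHandler handler = new FileHandler("app.log");
        handler.setFormatter(new SimpleFormatter());
        LOG.addHandler(handler);

        LOG.info("application started");
        try {
            throw new IllegalStateException("simulated failure");
        } catch (IllegalStateException e) {
            LOG.severe("order processing failed: " + e.getMessage());
        }
        handler.close();
    }
}
```

Swapping SimpleFormatter for XMLFormatter, or adding a second handler, upgrades the same code toward the more elaborate xml or web reporting the commandment mentions.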
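Finally, for commandment 10, a crude stand-in for what a profiler surfaces: a classic allocation hotspot (naive string concatenation, which copies the whole string each pass) next to its tight replacement. The timings printed are illustrative and machine-dependent.

```java
public class ProfileDemo {
    // Hotspot: each += allocates a brand new String and copies
    // everything so far; the profiler would flag the churn here.
    static String concatNaive(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    // Tight version: one growing buffer, reused across iterations.
    static String concatBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append(i);
        return sb.toString();
    }

    public static void main(String[] args) {
        int n = 10_000;
        long t0 = System.nanoTime();
        concatNaive(n);
        long t1 = System.nanoTime();
        concatBuilder(n);
        long t2 = System.nanoTime();
        System.out.printf("naive: %d ms, builder: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```

Both produce identical output, but only one scales; a real profiler (VisualVM, for instance) shows the same contrast as object-allocation counts rather than wall-clock time.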
