
the butterfly effect and green coding

Optimization of code is, for me, one of the most fun parts of development; at the same time it tends to be one of the most fleeting. The reason is that if you take care to ensure that your design is optimal for the problem scope encountered during the application's lifetime, you reduce the general need for optimizations that require design changes, and those are the most costly class of optimizations to repair. Still, even when you have selected the optimal overall design, you are likely to need several rounds of optimization on the elements the design does not directly make more efficient; these show up in the minutiae of the actual code, inside the classes and methods. The Java language has various gotchas that can reduce performance even while the code itself looks fine. Memory leaks can still result when objects that are logically no longer in use remain reachable, say through a forgotten listener or a static cache: the automatic garbage collector can reclaim unreachable objects, cycles included, but it can never reclaim anything the program still references.
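One common way such a leak arises in Java is a static collection that keeps per-session objects reachable after they are logically done. A minimal sketch, with hypothetical class and method names (not from any real framework):

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical per-session cache illustrating a classic Java leak pattern. */
public class LeakySessionCache {
    // A live static map keeps everything it holds reachable, so the GC
    // can never reclaim these entries until they are explicitly removed.
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    static void onLogin(String sessionId) {
        CACHE.put(sessionId, new byte[50 * 1024]); // ~50kb of per-session state
    }

    static void onLogout(String sessionId) {
        // The fix: forgetting this remove() keeps every session in memory forever.
        CACHE.remove(sessionId);
    }

    static int liveSessions() {
        return CACHE.size();
    }
}
```

Under scaled conditions, a missing `remove()` here shows up as exactly the slow ramp of memory over time described later in this post.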

This post isn't about the details of garbage collection; it is more about the little things we can do to optimize code by ensuring we are executing minimal code in memory. Developers often fail to realize how much performance improvement they can get simply by renaming a commonly referenced object or variable from a long name like twiceTheLengthOfIt to something more minimal like twoTimesLen (hypothetical names, used to illustrate the point). Say the code for the object is instanced quite frequently in the application, performing a common action for every User session. Assume the server node running the operation has a nice 4 gigabytes of RAM, and that the executing code totals 50kb without the name and references the object 5 times. How many more User sessions can fit in those 4 gigabytes using twoTimesLen as the object name rather than twiceTheLengthOfIt? The shorter label is only 7 bytes shorter, which doesn't seem like much: a few bytes saved for each reference of the name, per 50kb code section loaded into memory. To keep the arithmetic visible, the figures below deliberately exaggerate each name reference's cost to whole kilobytes; the proportions, not the absolute sizes, are what matter.

So that is, with the exaggerated units:

1kb × 5 object-name references = 5kb per code chunk for the short name twoTimesLen,

making each User session require 55kb, which through simple division means our 4,000,000kb of available RAM supports

72,727 User code chunks loaded in memory.

Now if we double the name size (note: only the name), we require 2kb per reference:

2kb × 5 references = 10kb per chunk, which brings per-User code to 60kb and reduces our simultaneous User count to

66,666.

A difference of over 6,060 Users.
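Treating the post's figures as kilobytes of per-chunk name overhead (a deliberate exaggeration to keep the numbers visible), the session counts are easy to check with a few lines of Java:

```java
/** Back-of-envelope check of the chunks-per-RAM arithmetic above. */
public class ChunkMath {
    // Sessions that fit, given total RAM and per-User chunk size, both in kb.
    static long sessionsThatFit(long ramKb, long chunkKb) {
        return ramKb / chunkKb;
    }

    public static void main(String[] args) {
        long ramKb = 4_000_000L;                               // the post's round "4 GB"
        long shortName = sessionsThatFit(ramKb, 50 + 5);       // 50kb code + 5kb name overhead
        long longName  = sessionsThatFit(ramKb, 50 + 10);      // 50kb code + 10kb name overhead
        System.out.println(shortName);              // 72727
        System.out.println(longName);               // 66666
        System.out.println(shortName - longName);   // 6061
    }
}
```

The same function covers any reference count: just fold the total name overhead into the chunk size.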

Now, as a fraction of the total, 6,060 is only ~8% of 72,727, but that is an 8% loss in the usefulness of our memory, amounting to a whole bunch (6,060) of Users. For an application to run under scaled conditions, every drop of memory must be conserved, and if cutting a simple object name in half allows support of 6,000 more Users, why not consciously think about the effects of loaded memory size in our code?

One key caveat must be recognized: we assumed the object was referenced only 5 times. In OO code, because of how methods are invoked, a given piece of code may reference the name far more than 5 times, so depending on how many name references are in the loaded code chunk, the number of Users lost could climb quite a bit higher. This is particularly true for multi-threaded code that requires each reference to be unique (say, to ensure some persistence of state for each User); those areas of the application are the memory-inefficient ones in terms of object references and should be targeted for this type of optimization. The savings realized could be quite surprising. In my own testing on optimizations made to my framework code, I have been able to save a great deal of memory, because the design is heavy on multi-threaded principles: we press objects or classes into service, invoke them as they are requested by Users, and retire them after use. This design is great for ensuring a scalable system, but it is subject to the memory-efficiency issues mentioned above. I went through the code invoked in this way, reduced unnecessary object and parameter labels to their barest minimum while preserving comprehension, and realized a noticeable improvement in performance and a slower ramp of memory over time under scaled conditions.
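The press-into-service-then-retire pattern can be sketched generically. The names here (WorkerPool, borrow, retire) are hypothetical, not the author's framework API; it assumes a thread-safe queue of idle workers:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Supplier;

/**
 * Generic sketch of "press into service, then retire": each request
 * borrows its own worker object and returns it when finished.
 */
public class WorkerPool<T> {
    private final ConcurrentLinkedQueue<T> idle = new ConcurrentLinkedQueue<>();
    private final Supplier<T> factory;

    public WorkerPool(Supplier<T> factory) {
        this.factory = factory;
    }

    /** Reuse an idle worker if one exists; otherwise create a fresh one. */
    public T borrow() {
        T worker = idle.poll();
        return worker != null ? worker : factory.get();
    }

    /** Retire the worker back to the pool once the User's request is done. */
    public void retire(T worker) {
        idle.offer(worker);
    }
}
```

Each concurrent request briefly holds its own instance, which is exactly the per-User duplication this post argues makes every byte of loaded code count.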

If, for example, we had 10 references to the long name rather than 5, at 2kb per reference that is 20kb, which brings our code chunk per User to 70kb and reduces simultaneous chunks in RAM to:

57,142, a full loss of more than 15,000 Users. Now, this example is hypothetical; we didn't account for any other code that might be needed per User, precisely so that we could see the effects of changing just the object names, and how those changes eat into our available memory as large numbers of requests are made to the system. It might seem like an academic exercise, a warning that paying attention to minimal labels in our code is important, but it directly affects the bottom line once that code is on a server.
The ability to serve 15,000 more Users can make the difference between software that is cost-efficient to run and software that simply wastes resources, requiring either more RAM or more servers to accommodate a given number of Users. The business side only cares whether it can fit as many Users as possible onto a given set of machines and resources. The developer tends to care about optimization from the bird's-eye view of overall architecture while ignoring the importance of "little things" like optimizing object names. A simple task, going through your code and trimming object names a few bytes at a time, could significantly boost the scalability of your application, and the business side of the company may see you as a god for delivering a greater level of scalability for a given expenditure on hardware, memory, and disk. More Users per server means less memory per server, which means lower memory cost to support a given number of Users, which means cheaper procurement per server, which means either more servers for a given cost (to support more Users) or fewer servers and lower operating costs for a given set of Users. That means a leaner business, better able to redeploy that money into other aspects of the business, say, making the product or service more cost-competitive with other providers. Another important advantage of this relentless efficiency-seeking view of coding is the savings in operating costs for the required servers: if a service costs less power to provide, it has less effect on the environment, and less power means less stress on a power-generation infrastructure that still leans heavily on carbon-based fuels. The power drawn and dissipated as heat by the machines themselves carries its own environmental cost.
Like the tiny wind currents generated about the wings of a butterfly, whose distortions ripple their energy throughout the atmosphere, triggering or exacerbating climate conditions in completely different parts of the world, the effects of our coding practices ripple in often unrealized ways into the efficiency of the overall business. The phrase "think global, act local" comes to mind: we should be looking at the effects of our code in order to make it both faster and tighter, reducing our impact on the business and the business's impact on the world. I call this way of coding green coding, for obvious reasons.

So think about how you can go through your code, analyze it for references that are loaded often in memory, and reduce their label size (could a performance improvement be any easier?). You might see a bigger boost than you would have thought possible just by doing this simple task, and your boss might be looser with his wallet when bonus time comes around. ;)


concurrency (multi threading) in java

butterfly effect

