
the butterfly effect and green coding

Optimizing code is, for me, one of the most fun parts of development; at the same time it tends to be one of the most fleeting. The reason is that if you take care to ensure your design is optimal for the problem scope the application will encounter over its lifetime, you reduce the general need for optimizations that require design changes. This minimizes the class of optimizations that tend to be the most costly to repair, which is a good thing. Still, even when you have selected the optimal overall design, you are likely to need several rounds of optimization aimed at the elements the design does not directly make more efficient; these elements show up in the minutiae of the actual code, inside the classes and methods. The Java language has various gotchas that can reduce performance even while the code itself looks fine. Objects that are no longer needed but remain strongly reachable (say, through a static collection or a forgotten listener registration) can pop up, and despite judicious attempts by the automatic garbage collector, memory leaks will result, because those lingering references prevent proper and total cleanup of objects no longer in use.
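As an aside, here is a minimal sketch of the kind of leak Java's collector cannot fix. A tracing GC happily reclaims unreachable object cycles, but anything still strongly reachable, here through a static session cache, piles up forever. The class and field names are invented purely for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration: a static registry keeps every session's state
// strongly reachable, so the garbage collector can never reclaim it.
public class LeakDemo {
    static final List<byte[]> SESSION_CACHE = new ArrayList<>();

    static void handleRequest() {
        byte[] sessionState = new byte[50 * 1024]; // ~50 kb per session
        SESSION_CACHE.add(sessionState);           // added but never removed -> leak
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) handleRequest();
        // Every one of these arrays is still reachable, so none are collected.
        System.out.println("Cached sessions: " + SESSION_CACHE.size());
    }
}
```

Running this long enough with a bounded heap ends in an OutOfMemoryError, not because the GC failed, but because the code never let go of the references.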

This post isn't about the details of garbage collection; it is more about the little things we can do to optimize code by ensuring we are executing minimal code in memory. Developers often fail to realize how much performance improvement they can get simply by renaming a commonly requested object or variable from a long name like twicethelengthofit to a more minimal one like 2timeslen (these hypothetical names are used purely to illustrate the point). Let us say the object is instanced quite frequently in the application, say to perform a common action for every User session; assume the server node running the operation has a nice 4 gigabytes of RAM, and assume the executing code totals 50 kb before naming overhead and references the object 5 times. How many more User sessions can fit in those 4 gigabytes using 2timeslen as the object name rather than twicethelengthofit? With the shorter label saving 8 bits per use, it doesn't seem like much: one byte saved for each reference of the name, per 50 kb code section loaded into memory. (To make the effect visible at a glance, the numbers below scale that per-reference cost up to a kilobyte.)


So that is:

1 unit × 5 object name references = 5 kb of naming overhead per code chunk for 2timeslen,

making each User session require 55 kb out of that 4 GB, which through simple division supports about

72,727 User code chunks loaded in our available 4 GB (4,000,000 kb) of space.


Now if we double the per-reference naming cost (note: only the name), we require 2 units per reference, or 10 kb of overhead per chunk, which brings per-User code to 60 kb and reduces our simultaneous User count to about:

66,666

A difference of over 6,060 Users.

Now, as a fraction of the total, 6,060 is only ~8% of 72,727, but that is an 8% loss in the usefulness of our memory, felt by a whole bunch (6,060) of Users. For an application running under scaled conditions, every drop of memory must be conserved, and if cutting a simple object name reference in half allows support for 6,000 more Users, why not consciously think about the effects of loaded memory size in our code?

One key caveat must be recognized: we assumed the object was referenced only 5 times in the code. In OO code, because of how methods are invoked, a given piece of code may reference the name far more than 5 times. Thus, depending on how many name references sit in the loaded code chunk, the number of Users lost could go up quite a bit more. This is particularly true for multi-threaded code that requires each reference to be unique (say, to ensure some persistence of state for each User). These areas of the application are the most memory-inefficient in object references and should be targeted for this type of optimization; the savings realized could be quite surprising.

In my own testing of optimizations made to my framework code, I have been able to save a great deal of memory because the design leans heavily on multi-threaded principles. Multi-threading lets us press objects or classes into service, invoke them as Users request them, and retire them after use; this design is great for ensuring a scalable system, but it is subject to the memory-efficiency issues mentioned above. I went through the code invoked in this way, reduced unnecessary object and parameter labels to their barest minimum while preserving comprehension, and realized a noticeable improvement in performance and a slower ramp of memory over time under scaled conditions.
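The press-into-service-then-retire pattern described above can be sketched with a blocking queue. All names here (WorkerPool, ReqWorker, handle) are hypothetical and not taken from any particular framework; this is only one common way to implement the idea.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of a fixed-size pool: worker objects are taken out to serve a
// request ("pressed into service") and returned afterward ("retired").
public class WorkerPool {
    static class ReqWorker {
        String handle(String user) { return "handled:" + user; }
    }

    private final BlockingQueue<ReqWorker> pool;

    WorkerPool(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) pool.add(new ReqWorker());
    }

    String serve(String user) {
        ReqWorker w;
        try {
            w = pool.take();            // blocks until a worker is free
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted waiting for a worker", e);
        }
        try {
            return w.handle(user);
        } finally {
            pool.offer(w);              // retire the worker back to the pool
        }
    }

    public static void main(String[] args) {
        WorkerPool p = new WorkerPool(2);
        System.out.println(p.serve("alice"));
    }
}
```

Because the pool size is fixed, memory use is bounded no matter how many Users arrive; the trade-off is that requests queue up when all workers are busy.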

If, for example, we had 10 references at the doubled cost, that is 20 kb of overhead (2 units per reference times 10), which brings our code chunk per User to 70 kb and reduces simultaneous chunks in RAM to about:

57,142, a full loss of over 15,500 Users against the short-name baseline. Now, this example is hypothetical; we deliberately left out any other code that might be needed per User so that we could see the effect of changing only the object names, by amounts as small as a single byte, and watch how those changes eat into our available memory as large numbers of requests are made to the system. It might seem like an academic exercise, but paying attention to minimal labels in our code directly affects the bottom line once that code is on a server.
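The arithmetic above can be checked mechanically. This little sketch just replays the post's assumptions, 4 GB treated as 4,000,000 kb and per-session chunks of 55, 60 and 70 kb; nothing here is framework code, and the class name is invented.

```java
// Back-of-the-envelope capacity math from the example in the text.
public class CapacityMath {
    // Sessions that fit when each loaded code chunk costs chunkKb of RAM.
    static double sessions(double ramKb, double chunkKb) {
        return ramKb / chunkKb;
    }

    public static void main(String[] args) {
        double ram = 4_000_000; // 4 GB in decimal kilobytes, as in the post
        System.out.printf("55 kb chunks: %.2f%n", sessions(ram, 55)); // ~72727.27
        System.out.printf("60 kb chunks: %.2f%n", sessions(ram, 60)); // ~66666.67
        System.out.printf("70 kb chunks: %.2f%n", sessions(ram, 70)); // ~57142.86
        System.out.printf("loss, 55 vs 70 kb chunks: %.0f Users%n",
                sessions(ram, 55) - sessions(ram, 70)); // ~15584
    }
}
```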
The ability to serve 15,000 more Users can make the difference between software that is cost-efficient to run and software that simply wastes resources, requiring either more RAM or more servers to accommodate a given number of Users. The business end only cares whether it can fit as many Users as possible on a given set of machines and resources. The developer tends to view optimization only from the bird's-eye view of overall architecture, and tends to ignore the importance of "little things" like optimizing object names. Imagine: a simple task, going through your code and reducing object names a few bytes at a time, could significantly boost the performance of your application while simultaneously getting the business side of the company to see you as a god, by allowing a greater level of scalability for a given expenditure on the hardware, memory and disk resources that accommodate the application.

More Users per server means less memory per server, which means lower memory costs to support a given number of Users, which means cheaper procurement costs per server, which means either more servers for a given cost (to support more Users) or fewer servers and lower operating costs for a given set of Users. That means a leaner business with a better ability to redeploy that money to other aspects of the business, like, say, making the product or service provided more cost-competitive with other providers. Another important advantage of this relentlessly efficiency-seeking view of coding is the savings in operating costs for the required servers: if it costs less power to provide a service, it means less effect on the environment, and less power means less stress on a power-generation infrastructure that still leans heavily on carbon-based fuels. The computers themselves add to the footprint through the waste heat and emissions that accompany electric power conversion.
Like the tiny wind currents generated about the wings of a butterfly, whose distortions ripple their energy throughout the atmosphere, triggering or exacerbating climate conditions in completely different parts of the world, the effects of our code practices ripple in often unrealized ways into the efficiency of the overall business. The phrase "think global, act local" comes to mind: we should be looking at the effects of our code in order to make it both faster and tighter, reducing our impact on the business and the business's impact on the world. I call this way of coding green coding, for obvious reasons.

So think about how you can go through your code, analyze it for references that are called often in memory, and reduce their label size (could a performance improvement be any easier?). You might see a bigger boost in performance than you would have thought possible just by doing this simple task, and your boss might be looser with his wallet when bonus time comes around. ;)

Links:


concurrency (multi-threading) in java

butterfly effect





