Java Performance Series

March 22, 2012 | By kiranbadi1991 | Filed in: Development, Environment, Memory, Performance Engineering.

It’s really been a long time since I worked on performance testing of Java based applications (a good 2+ years), so in order to rehearse my past experience, I am thinking of starting a series of posts showcasing my thoughts on testing, identifying, isolating, and fixing some of the key performance issues I have seen and worked on in Java based applications.

We know that performance tuning of Java based applications is a painful iterative process where there is no one-size-fits-all solution for determining the optimum memory requirements of an application. I call this a painful process for the simple reason that very few people are ready to make changes to the code base in order to fix a performance issue, and almost no one is when we are dealing with legacy systems or legacy applications that have no original SMEs still working on them. Any change in the code base is considered a high risk item unless it’s very low hanging fruit or something external that still impacts application performance (think load balancing). So I believe that’s one of the primary reasons why so many people turn to tuning memory allocation rather than fixing the badly written or outdated data structure code used by the application. Another valid reason I can think of is that hardware has become a lot cheaper than hiring a developer to fix the issue; however, this approach by no means assures the business that it’s going to fix the original issue without side effects on other parts of the code. There always exists a risk of regression.

Allocating the right amount of memory to the Java heap, along with the right JVM runtime environment, can help mitigate some or even most performance issues, but definitely not all of them, especially if you have designed the application without keeping performance engineering requirements in mind. Memory requirements for a Java based application are quite often described and measured in terms of Java heap size. Many folks say the larger the heap size, the better the performance in terms of latency and throughput, but I believe otherwise, for the simple reason that if you have bad code which consumes a lot of memory, a larger heap just gives that bad piece of code extra time to live rather than making it fail fast. That’s a band aid, not a permanent fix. (The app pool recycling technique used by IIS is one such example of a band aid.)
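To make the heap sizing discussion concrete, here is a minimal sketch showing how the heap limits set at startup (via the standard `-Xms`/`-Xmx` flags) can be inspected from inside the application using the standard `Runtime` API. The flag values in the comment are purely illustrative, not a sizing recommendation:

```java
// Sketch: inspect the heap limits the JVM was started with.
// Run with e.g. `java -Xms256m -Xmx512m HeapInfo` (values are illustrative).
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxMb   = rt.maxMemory()   / (1024 * 1024); // upper bound, roughly -Xmx
        long totalMb = rt.totalMemory() / (1024 * 1024); // memory currently committed
        long freeMb  = rt.freeMemory()  / (1024 * 1024); // free within the committed amount
        System.out.println("max heap  : " + maxMb + " MB");
        System.out.println("committed : " + totalMb + " MB");
        System.out.println("free      : " + freeMb + " MB");
    }
}
```

Logging these numbers under load is a cheap first step before reaching for a bigger `-Xmx`: if committed memory keeps climbing toward the max, the leak is in the code, and a larger heap only postpones the failure.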

Tuning the JVM often helps ensure that an application meets acceptable levels of response time, throughput, and availability. To a large extent we can also improve the start time, latency, throughput, and manageability of the application by tuning the JVM and using the right runtime environment. The availability of an application can also be improved by deploying it across multiple JVMs, provided the application is designed in such a way that it supports this solution. The Client JVM runtime environment often has better startup time and lower initial latency compared to the Server JVM runtime environment, but it lacks the aggressive code optimization techniques used by the server runtime, which typically delivers better long-run throughput. Depending on the application and system requirements, one can choose between the client and server runtime environments.
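As a quick way to see which runtime variant you actually got, the JVM reports its own name through a standard system property. This is a minimal sketch; note that on many modern 64-bit JVMs the `-client` flag is ignored and the server VM is always used, so the printed name may not change:

```java
// Sketch: report which HotSpot VM variant is running.
// Start with `java -server VmInfo` or `java -client VmInfo` and compare.
public class VmInfo {
    public static void main(String[] args) {
        // Typical values include "Java HotSpot(TM) Server VM" or
        // "OpenJDK 64-Bit Server VM"; the exact string varies by vendor.
        System.out.println("VM name   : " + System.getProperty("java.vm.name"));
        System.out.println("VM vendor : " + System.getProperty("java.vm.vendor"));
    }
}
```

Checking this in the target environment avoids guessing: performance numbers gathered on a client VM do not transfer to a server VM deployment, and vice versa.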

That’s it for now. Stay tuned for the next post with some of my weird thoughts on Java performance stuff.
