
Posts Tagged ‘Performance Engineering’

Log the Servlet Processing Time

January 12, 2014 by kiranbadi1991 | Comments Off on Log the Servlet Processing Time | Filed in Development, Environment, Performance Engineering

Recently a friend of mine contacted me about a Java based web application that had recently been developed and deployed to the production environment and was facing some performance issues. For some reason, a couple of functionalities in that application were taking noticeably longer to show up than the others. After some investigation, we realized that a couple of servlets were taking more time than the rest to process on the server side. First we identified the servlets, then the DAOs associated with those servlets, and finally the queries associated with those DAOs. Once the queries were identified, we load tested them using different data sets.

We used the program below to gather the required information about servlet processing time:

import javax.servlet.ServletRequest;
import javax.servlet.ServletRequestEvent;
import javax.servlet.ServletRequestListener;
import javax.servlet.annotation.WebListener;
import javax.servlet.http.HttpServletRequest;

@WebListener
public class logprocessingtime implements ServletRequestListener {

    @Override
    public void requestInitialized(ServletRequestEvent sre) {
        ServletRequest servletRequest = sre.getServletRequest();
        // Record the start time as soon as the container hands us the request.
        servletRequest.setAttribute("start", System.nanoTime());
    }

    @Override
    public void requestDestroyed(ServletRequestEvent sre) {
        ServletRequest servletRequest = sre.getServletRequest();
        Long start = (Long) servletRequest.getAttribute("start");
        long end = System.nanoTime();
        HttpServletRequest httpServletRequest =
                (HttpServletRequest) servletRequest;
        String uri = httpServletRequest.getRequestURI();
        System.out.println("time taken to execute servlet " + uri + ": "
                + ((end - start) / 1000) + " microseconds");
    }
}

The Java program above is basically a web listener which we registered with the servlet container. It listens for every request the web application receives, and for each one it logs the time taken to process the request.

This program targets the Servlet 3.0 spec; if you want to port it to 2.4, you will need to register the listener in web.xml instead of relying on the @WebListener annotation.
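For a 2.4 container, the web.xml registration might look something like this (no package prefix is shown because the example class above does not declare one):

```xml
<listener>
    <listener-class>logprocessingtime</listener-class>
</listener>
```

The container then invokes requestInitialized and requestDestroyed around every request, exactly as with the annotation.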

More information about ServletRequestListener can be found here. It is a powerful interface and helps gather a lot of troubleshooting information.


Some Good,Not so Good,Bad things about Logging and its impact on Application Performance.

February 15, 2013 by kiranbadi1991 | Comments Off on Some Good,Not so Good,Bad things about Logging and its impact on Application Performance. | Filed in Development, Environment, Performance Engineering, Quality, SilkPerformer

Over years of working on projects and analyzing logs, I have seen one trend repeated frequently by developers and development teams: they just don't log what they need to log about the application. Since the information is not logged correctly, all activities further downstream become painful, and even a small issue can turn into a not-so-memorable fix.

Now why am I giving so much importance to logging? Probably I love logs, and probably I know something about the importance of logging, so let me share why logging is important, what we need to log, what the logging levels are, how they help project teams, and when they become a bottleneck.

Log4j is one of the most frequently used logging libraries out there, and it is also one of the most robust. Log4j can be configured via a properties file, which is a good approach for setting up logging. With log4j, we can log information at quite a few levels of granularity, namely debug, info, warn, error, fatal, trace and all. Below is my high-level understanding of these levels:

  • Debug level is used when we need to debug issues which are minor in nature and occur frequently.
  • Info level is used to log events which are significant in the application life cycle, such as initialization of a JNDI resource, a data source, etc. Logging all the significant events helps us get to the information without digging into the code, and lets us track the progress of the application at a granular level.
  • Warn level is used to log minor problems which are more warnings than errors, and which at times are external in nature. They record potentially harmful events, for example input parameters that are not acceptable to the program.
  • Error level is used to log exception details or errors which break functionality of the application. Normally errors do not crash the application, but they may break the subset of functionality that generated the error.
  • Fatal level is used to log fatal errors. Fatal errors are generally related to a crash of the application or of some of its components; with a fatal error, the application normally aborts.
  • Trace level is used to trace fine-grained informational events.
  • All level turns on full logging.

One can choose any of the above levels depending on the priority and severity of the issue. One can also customize and extend the logger class to create a custom level if required, but I feel that would be extra work and is rarely necessary, since the existing levels provide the information we need to confirm an issue.
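As a sketch of the properties-file approach mentioned above, a minimal log4j.properties might look like this (the appender name, pattern, and package name are illustrative, not prescribed):

```properties
# Root logger: info level, writing to the console appender defined below
log4j.rootLogger=INFO, console

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
# Timestamp, level, class and message - the when/who/what of each entry
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n

# A chattier debug level for one troublesome package only (package name hypothetical)
log4j.logger.com.example.dao=DEBUG
```

The per-package override is how you raise the level for just the code under investigation without flooding the logs from the whole application.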

Logging helps in many ways. Unless we log, we will never know what exactly our code is doing, or the path it takes to implement a piece of functionality. Logging also helps us understand and troubleshoot issues after the application goes live. In fact, many technical product support people who are not so strong on the technical side of the software rely on logs alone to provide solutions to the issues users encounter. Logs also help in user data mining: Google, Facebook and Amazon extensively log information about their users, then use that information to understand user behavior and come up with functionality that delights them. In short, logging also helps grow the business.

However, there is certain information which should not get into the logs, like:

  • User-sensitive information like passwords, user IDs, etc.
  • Any personal information that could be used to identify or trace a user and has the potential to do harm should it fall into the wrong hands.
  • Any financial information like bank account or credit card details.
  • Logs should also not contain information about the infrastructure on which the application is running; they should only contain technical information about the application itself.

Now let's talk about some good practices and things that should get into the logs:

  • Logging should always include the source of the event, namely the class, file or package name that generated it.
  • Logging should always be categorized as info, error or debug as per your application requirements.
  • Logs should always be rotated once they reach a defined file size, or preferably on a daily basis.
  • Logs should contain timestamps, preferably server timestamps.
  • Logs should be human-readable and parseable, so that information can be extracted from them easily.
  • Logs should preferably be stored on local machines.
  • Logging should contain application start/stop details, significant events related to the application life cycle, etc.
  • We should also log details of events like how much time it took to connect to the database, how much time a query took to execute, how big the result set was, etc.

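The query-timing bullet above can be sketched as follows. This is a minimal illustration, not the post's own code: the QueryTimer class and the sleep stand-in are hypothetical, and java.util.logging is used only to keep the example dependency-free (the same pattern applies with log4j).

```java
import java.util.logging.Logger;

public class QueryTimer {
    private static final Logger LOG = Logger.getLogger(QueryTimer.class.getName());

    // Runs the given action and logs who/when/what plus the elapsed time.
    // Returns the elapsed time in milliseconds.
    public static long timeMillis(String label, Runnable action) {
        long start = System.nanoTime();
        action.run();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        LOG.info(label + " took " + elapsedMillis + " ms");
        return elapsedMillis;
    }

    public static void main(String[] args) {
        // Stand-in for a real JDBC call; replace the sleep with executeQuery etc.
        timeMillis("executeQuery", () -> {
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }
}
```

Wrapping the connection, the query execution and the result-set iteration in separate timed calls gives the three numbers the bullet asks for.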
Each entry in the logs should always contain who (class), when (timestamp), where (which part of the code), what (what action the code performed), and finally the output generated by that action. Some folks I know who built a codebase that processed millions of requests an hour used logging extensively to fine-tune it; once the code base was tuned appropriately, they turned logging off and deployed to the test or live environment.

There are also a few things about logging which are not so good, and which we need to understand:

  • Excessive logging leads to bad application performance, especially when logs are written to network drives: it chokes the network.
  • Using an incorrect logging level to get information is also bad practice.
  • I personally prefer to log information asynchronously, as this does not block code execution. (Take this suggestion with a pinch of salt: I am still implementing logging for my own code, so this statement might be wrong, but I know we can log in async mode; it also depends on our code allowing multiple threads to run.)
  • Logging to the same local drive as the server often has IO overheads. Think about what happens when hundreds or thousands of users are on the application and they are all generating log writes.
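For the async preference mentioned above, log4j 1.x offers AsyncAppender, which buffers events and writes them on a separate thread. A sketch follows; note that in log4j 1.x this appender has to be configured through log4j.xml rather than the properties file, and the "console" appender it wraps is assumed to be defined elsewhere in the same file:

```xml
<appender name="async" class="org.apache.log4j.AsyncAppender">
    <param name="BufferSize" value="512"/>
    <appender-ref ref="console"/>
</appender>
```

The calling thread returns as soon as the event is buffered, so slow IO no longer stalls request processing, at the cost of possibly losing buffered events on a crash.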

Though many experts believe logging takes no more than a couple of nanoseconds to a couple of milliseconds per user, in my experience people implement logging very poorly: they don't log the information that is required, or they log excessively for various reasons, or they log in sync mode. Under a load test, in my experience, an application with info-level logging will show at least 4 to 10% higher response times than the same application with the level set to error. Of course, this holds only if the developer has some logging set up at all; otherwise expect a long journey with steps something like: set up the logging infrastructure, reproduce the issue, verify the issue in the logs, check the code for the execution path, confirm the issue in the code, and then provide the fix.


Some Facts about IRCTC Website

January 26, 2013 by kiranbadi1991 | 7 Comments | Filed in Environment, Performance Engineering, Quality, Small Businesses

Recently I have seen a lot of people complaining about the performance of the Indian Railways website, and most complaints relate to the fact that they are not able to book tickets online and that it takes hours to book a ticket via IRCTC. I have faced a similar situation many times, but I must say I have been fortunate enough that more than 95% of the time I was able to book in the first attempt. Maybe it's just my luck or my good timing. So I thought, let me do some investigation in my spare time and see how much traffic this site handles and what real trouble users face while using it. With this intention, I pulled out the facts below about the IRCTC site. (Please remember that government employees rarely exaggerate numbers, as they have no motive to play with them; based on this wisdom, I believe the numbers below are correct, or if anything on the lower side.)

  • The IRCTC site receives close to 10 to 12 lakh (10,00,000 to 12,00,000) hits per minute.
  • The site uses around 450 Mbps of bandwidth.
  • Nearly 5 lakh tickets are booked on a daily basis.
  • At any point in time, it has close to 10,00,000 concurrent connections open.

Of course, IRCTC has taken some measures to increase the scalability of its site: adding flash memory drives, restricting the activities of its agents, and ensuring regular maintenance of its infrastructure. But I personally feel it still needs a lot of improvement given the kind of traffic it gets.

I can understand general users, who don't understand the complexity of the software, complaining about it, but I just don't understand what makes IT folks who have spent years designing and testing IT systems complain about this site. Can't they understand the complexity and volume of the traffic the site handles, and the way bureaucracy is slowing it down? Or do they believe these numbers are negligible?

I feel the folks who work on this site, and who have scaled it, have done a remarkable job for the people of India. They are in fact heroes, and they have a wealth of technical information with them. So I would request the people who run technical IT magazines to contact these folks and ask them what the secret of their scalability is, in spite of working in a tight, constrained and limited environment. How on earth do they maintain close to 10,00,000 concurrent connections? Isn't that an interesting case study worth some attention?

Sources for this article are here, here, here and here.
