
Archives: Decompression

Compression, Decompression, Mobile Performance and LoadRunner

February 4, 2013 by kiranbadi1991 | 2 Comments | Filed in Decompression, Development, LoadRunner, Performance Center, Performance Engineering, Scripting, SilkPerformer

Recently I inherited some LR scripts from one of my colleagues. They were all about building JSON calls to stress the backend Spring Security framework, which was the first layer of entry into the mobile infrastructure. The scripts were simple, built using web_custom_request with a JSON string as the body. One of the things that really surprised me during this effort was that web_custom_request itself was taking close to 100 ms to 300 ms just to decompress the server response during load testing.
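For context, the scripts were roughly of this shape. This is only a minimal sketch; the endpoint URL and JSON payload below are hypothetical placeholders, not the actual script.

Action()
{
    // JSON body posted against the Spring Security entry point.
    // URL and payload are made-up placeholders.
    lr_start_transaction("mobile_login");

    web_custom_request("mobile_login",
        "URL=https://mobile.example.com/api/login",
        "Method=POST",
        "EncType=application/json",
        "Body={\"username\":\"user1\",\"password\":\"secret\"}",
        LAST);

    // Note: the elapsed time measured here includes the time
    // web_custom_request spends gunzipping the gzip-encoded,
    // chunked server response.
    lr_end_transaction("mobile_login", LR_AUTO);

    return 0;
}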

Okay, first let me give you some background. The servers were configured to send responses compressed in gzip format, with the Content-Encoding header set to gzip. The functionality in scope had an SLA of 1 second max, and quite a few functions had SLAs of less than 500 ms. Quite challenging SLAs, I would say. But then again, these functions were meant to be accessed on a mobile device, so the lower the response time, the better for the users.

Most of the responses coming from the server were served as chunked bytes. What this means is that the server first sends some bytes of the response compressed in gzip format, LR decompresses those bytes in 5 to 10 ms, then the server sends the next range of bytes as a chunked gzip response, LR spends another 5 to 10 ms decompressing them, and the process continues until the final set of bytes arrives. All of this happens over a single connection, and the connection with the server never closes. If you also have some server response validation in place, expect it to add another 10 ms.

Now, I measured all these times in a single VuGen iteration; they grow considerably when running the load test in the Controller or Performance Center, and this overhead of decoding the gzip content becomes quite an issue when the response time SLAs are in milliseconds.

Here is how the behavior looks in LR VuGen with decompression turned on in the script. You can see that it takes 5 ms to decode 154 bytes of response. Now imagine that a normal web page carries around 2 MB of gzipped data; you can see the impact of this decoding as the page size increases, especially when the response comes as chunked bytes with no fixed Content-Length from the server.

[Screenshot: VuGen replay log showing a 5 ms decode of a 154-byte response]

I think the HP LR team is probably aware of this behavior, and that is likely why they came up with a function to disable it. Use web_set_option with the decode content flag turned off if you are running scripts that do not require validation and have response time SLAs in milliseconds. The drawback of disabling this feature is that all your correlations and other checks on the server response will fail, since the response will show up as binary content, like below.

[Screenshot: server response displayed as binary content once decoding is disabled]

I would suggest disabling this feature if you can and doing the response validation through other techniques, such as verifying server logs. By disabling it, you will see close to a 15 to 20% reduction in the response times reported by LR.
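As a rough sketch, disabling the decoding could look like the snippet below. Treat the exact "DecodeContent" option string as an assumption to verify against your VuGen function reference, since it may differ across LR versions.

// Assumption: "DecodeContent" is the decode-content flag exposed by
// web_set_option in your LR version -- confirm the exact name in the docs.
// With decoding off, web_reg_find / web_reg_save_param checks on body text
// will fail, so validate through server logs instead.
web_set_option("DecodeContent", "0", LAST);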

Is this expected behavior of LoadRunner? I think they have to do it: unless the response is decoded, none of the other functions like web_reg_save_param or web_reg_find will work, and those are core LoadRunner functions. Probably the right approach would be for LR not to include these decompression timings inside the transaction markers, since they really pollute the results, especially for web applications; or else they could speed up the decompression library LoadRunner is using.


Decompression Errors explained

July 18, 2011 by kiranbadi1991 | Comments Off on Decompression Errors explained | Filed in Browser, Decompression, LoadRunner, Performance Center, Performance Engineering, Scripting, SilkPerformer

This is in continuation of my earlier post, where I wrote down some of my observations about the decompression errors we frequently see while running load tests in Performance Center or LoadRunner. These kinds of errors should not be hard to debug if we know the variables that come into play during replay, so I will try to explain why we normally see decompression errors while replaying scripts in load testing tools.

Compressing data before transmitting it over the wire saves a huge amount of bandwidth and space, and it also helps reduce the overall response time of the page. It is a best practice to compress data before sending it across the wire, since you end up sending far fewer bytes. gzip and deflate are among the most commonly used schemes for compressing data in transit. Once the server has compressed the data and sent it across the wire, it is the client's job to decompress it and render it to the user. This decompression functionality built into the client is the root cause of the errors we see during load testing. Depending on the library the load testing tool has implemented, the content of the error message might vary, so for the sake of this post I will concentrate only on LoadRunner. Below are some of the errors I have seen with LoadRunner.

  • Z_DATA_ERROR (-3)
  • Z_MEM_ERROR (-4)
  • Z_BUF_ERROR (-5)

The error message thrown should be read in the following format:

"Error xxxxx: 'Decompression function (wgzMemDecompressBuffer) failed. Return code='return code' ('string'), inSize='input buffer', inUse='input bytes processed', outUse='output buffer'" (I learned this from the HP Support site).

LoadRunner needs to be told to capture headers while recording the business flow, and this can be done easily in the Recording Options, where we can set up the appropriate headers to capture. That said, LoadRunner's default recording mode works in most cases; there are, however, some cases where it does not work correctly, and those are the ones where we see these kinds of decompression errors.

Normally, the flow of headers for compressed data looks like this:

1. The client sends a request to the server indicating that it can accept compressed data and decompress it at its end. This expression of interest is sent via a header, which looks like this:

Accept-Encoding: gzip,deflate

Please note that whenever we mention both gzip and deflate in the client request, it means the client is able to decompress responses in both schemes. For load testing purposes, however, it is not advisable to set both schemes in your script. You need to find the exact scheme used by the server and set it accordingly. This information can be found either by recording the script with all headers enabled in the recording options, or by using third-party tools like Fiddler or Live HTTP Headers.
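In a LoadRunner script this typically comes down to something like the following sketch, assuming the server was seen returning gzip during recording:

// Advertise only the scheme the server actually uses (gzip here).
// web_add_auto_header applies it to every subsequent request in the script.
web_add_auto_header("Accept-Encoding", "gzip");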

2. The server responds by sending the compressed data, which the client identifies by reading the response headers. The server response headers look like this:

Content-Encoding: gzip

Content-Length: xxxx

This tells the client that the server is using the gzip content encoding scheme, and how many bytes it needs to decompress before rendering the content to the user.

LoadRunner and most load testing tools follow the same steps: they indicate to the server that they can decompress the response if the server sends compressed data. If for some reason the data cannot be decompressed by LoadRunner, then depending on the situation you may come across one of these three types of errors, or some extra ones like Z_STREAM_ERROR.

Z_DATA_ERROR: The data LoadRunner received as the server response cannot be decompressed, either because the data is bad, because it was corrupted on the way, or because LoadRunner is not aware that it needs to decompress it. If you are getting a very low number of these errors, it usually means that your load generator is overloaded, that you have intermittent network issues corrupting packets, or that your servers have some kind of bottleneck and are not in a position to send the complete file to the client. Normally, by looking at the content length received for the same request over a period of time, you can tell whether the server has a bottleneck: if the content lengths look similar across many requests and LoadRunner has successfully decompressed some of them while others have failed, it means that either your LGs are overloaded or you have network issues corrupting the data on the wire.

Z_MEM_ERROR: This error is received when your LG boxes are running low on memory, and because of that LoadRunner is not able to decompress the server response. Monitor your load generator boxes for memory usage.

Z_BUF_ERROR: If you have specified the correct headers with the correct schemes (gzip/deflate), have followed my earlier post on this, and still see this error, it means you need to open an incident with HP Support; somewhere the zlib library implementation in LoadRunner is not doing what it is supposed to do.
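These return codes come straight from zlib. As a rough sketch of what a tool does under the hood when it inflates a gzip response body (this is plain zlib usage, not LoadRunner's actual code):

#include <string.h>
#include <zlib.h>

/* Inflate a gzip-encoded buffer in one shot. Returns a zlib status code:
 * Z_OK on success, or Z_DATA_ERROR / Z_MEM_ERROR / Z_BUF_ERROR on failure --
 * the same codes that surface in the LoadRunner error messages above. */
int gunzip_buffer(const unsigned char *in, size_t in_len,
                  unsigned char *out, size_t out_cap, size_t *out_len)
{
    z_stream zs;
    int rc;

    memset(&zs, 0, sizeof(zs));
    /* 16 + MAX_WBITS tells zlib to expect a gzip (not raw deflate) header. */
    rc = inflateInit2(&zs, 16 + MAX_WBITS);
    if (rc != Z_OK)
        return rc;                      /* e.g. Z_MEM_ERROR */

    zs.next_in   = (Bytef *)in;
    zs.avail_in  = (uInt)in_len;
    zs.next_out  = out;
    zs.avail_out = (uInt)out_cap;

    rc = inflate(&zs, Z_FINISH);        /* Z_DATA_ERROR: corrupt/bad input  */
                                        /* Z_BUF_ERROR: truncated input or  */
                                        /* output buffer too small          */
    *out_len = zs.total_out;
    inflateEnd(&zs);

    return (rc == Z_STREAM_END) ? Z_OK : rc;
}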

In case you are interested in learning more about compression and how it is implemented, I would suggest the following sites:

http://www.zlib.net/

http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html

http://www.zlib.net/zlib_faq.html

http://www.gzip.org/

http://en.wikipedia.org/wiki/Data_compression

My understanding is that most tools use the techniques above and are likely using these same libraries to decompress server responses.

I hope this write-up helps and gives you some direction for understanding and debugging the various decompression errors encountered during load testing.


Error -26601: Decompression function (wgzMemDecompressBuffer) failed, return code=-5 (Z_BUF_ERROR), inSize=0, inUse=0, outUse=0 [MsgId: MERR-26601] With LoadRunner/Performance Center applicable to all versions.

February 20, 2011 by kiranbadi1991 | 3 Comments | Filed in Decompression, LoadRunner, Performance Center

My team frequently gets this error message while running scenarios in Performance Center for many of our web applications, although the percentage of errors has always been extremely low. All these days I had pretty much ignored the message whenever we got it, because I strongly believed it was due to a bug in LoadRunner: I have come across it in almost every version of LoadRunner, starting from LoadRunner 8.0.

Some days back, my colleague started getting this message in VuGen 9.52 with the Web protocol while scripting against Remedy thin client 8.0. So I thought I would investigate and see why LoadRunner keeps giving this error so frequently in spite of the correct response coming back from the server.

Some of my observations were:

1. This error tends to come up whenever the response has a larger number of bytes to return. In this case we had around 20 KB of response coming back from the server for that particular request.

2. The correct headers were sent by LoadRunner during replay. The solution offered by HP Support says that one needs to add a web_add_auto_header call with gzip or deflate headers. That solution has never worked for me before, so I would not recommend it.

3. This error shows up when requests are sent by LoadRunner over HTTP 1.1.

So I would like to suggest that people understand the impact of web_add_auto_header. It adds the given header to all subsequent requests you send, including requests where that particular header is not required, so be cautious when using this type of auto-header function. Instead, use web_add_header; both serve the same purpose, with the difference that web_add_header adds the header only to the request just below it and not to all subsequent requests, as the sketch below shows. We need to understand that the web works with correct headers, and headers play a very important role in web communication. Messing with headers means you are not doing proper performance testing and are bound to waste a lot of time debugging script errors that occur only because you did not use the right headers in the right place during replay. By default, LoadRunner records only a specific set of headers and not all the headers your application might use.
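A minimal sketch of the difference between the two functions (the URLs are hypothetical placeholders):

// web_add_header: applies ONLY to the next request.
web_add_header("Accept-Encoding", "gzip");
web_url("page_a", "URL=https://app.example.com/a", LAST);   // carries the header
web_url("page_b", "URL=https://app.example.com/b", LAST);   // does NOT carry it

// web_add_auto_header: applies to EVERY subsequent request in the script.
web_add_auto_header("Accept-Encoding", "gzip");
web_url("page_c", "URL=https://app.example.com/c", LAST);   // carries the header
web_url("page_d", "URL=https://app.example.com/d", LAST);   // carries it too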

So is this error message related to headers? My suggestion: it might be, or it might not be. However, I observed one thing: there is a run-time setting in LoadRunner where the network buffer size is set to approximately 12 KB by default. This setting effectively means the LoadRunner client cannot buffer and decompress more than 12 KB of response data at a time, and we were getting around 20 KB of response to parse. So I simply increased this setting to 30 KB and also changed the HTTP version used for replay to 1.0. That's it; the script started running fine.
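For reference, the same two knobs expressed as a sketch. The socket option name below is an assumption based on what I recall from LR documentation and forum posts, so verify it against your own version before relying on it:

// Run-Time Settings: raise the network buffer size from the 12 KB default
// to roughly 30 KB, and switch the HTTP version used for replay from 1.1 to 1.0.
//
// Some LR versions also expose the buffer size in-script (option name assumed,
// please confirm it exists in your version's function reference):
web_set_sockets_option("NETWORK_BUFFER_SIZE", "30720");   // 30 KB in bytes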

I am not sure why changing the HTTP version from 1.1 to 1.0 did the trick, but I will surely investigate this further and come back with my observations. Part of the credit for this solution also goes to SQAForums; I remember either Terri C or Jean Ann posting it some years back, in a thread I was either replying to or just reading. I believe they got that answer from Mercury support.

I also remember that this particular error message comes in three flavors in LoadRunner. Maybe I will rack my brain, see if I can recollect all three types, and write something about them as well.