
Archives: Memory

Linux vmstat command

April 4, 2019 by kiranbadi1991 | Comments Off on Linux vmstat command | Filed in Database, Development, Environment, Memory, Performance Engineering, Process, Web Server

I have been spending a bit of my time on Amazon EC2 Linux, so I thought of making a note of some of the commands I frequently use.

It helps me to look directly at my own site for information on this command rather than googling and spending time searching the internet for it. (All I need is what each column stands for.)

vmstat gives information about processes, memory, paging, block I/O, traps, and CPU activity. It displays either averages since boot or actual samples. Sampling mode is enabled by providing vmstat with a sampling interval and a sample count.

vmstat
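A minimal sampling run, for example:

vmstat 2 5

This prints a report every 2 seconds, 5 times; the first report still shows the averages since boot.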

The columns in the output are as follows:

Process (procs)
    r: The number of processes waiting for run time
    b: The number of processes in uninterruptible sleep

Memory
    swpd: The amount of virtual memory used (KB)
    free: The amount of idle memory (KB)
    buff: The amount of memory used as buffers (KB)
    cache: The amount of memory used as cache (KB)

Swap
    si: Amount of memory swapped in from disk (KB/s)
    so: Amount of memory swapped out to disk (KB/s)

IO
    bi: Blocks received from a block device (blocks/s)
    bo: Blocks sent to a block device (blocks/s)

System
    in: The number of interrupts per second, including the clock
    cs: The number of context switches per second

CPU (% of total CPU time)
    us: Time spent running non-kernel code (user time, including nice time)
    sy: Time spent running kernel code (system time)
    id: Time spent idle. Prior to Linux 2.5.41, this included I/O-wait time.
    wa: Time spent waiting for IO.

Some additional flags for vmstat are:

-m  - displays the memory utilization of the kernel (slabs)
-a  - provides information about active and inactive memory pages
-n  - displays only one header line, useful when running vmstat in sampling mode and piping the output to a file (e.g. root# vmstat -n 2 10 prints vmstat output 10 times with a sampling interval of two seconds)
-p {partition}  - provides I/O statistics for the given partition
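As a quick illustration of combining these (the partition name xvda1 is just an example from one of my EC2 instances, so adjust it for your box), the following samples partition I/O every two seconds, five times:

root# vmstat -p xvda1 2 5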


Know your Default Initial and Max heap size of JVM

June 7, 2012 by kiranbadi1991 | Comments Off on Know your Default Initial and Max heap size of JVM | Filed in Development, Environment, Memory, Others, Performance Engineering

At times it becomes necessary to know the default heap size allocated to the JVM in order to debug some issues. For those cases, I suggest running the below command on the command line of the server box to get this information:

java -XX:+PrintCommandLineFlags -version

 

On my machine, where I have tomcat server installed, I get the information something like,

 

C:\Users\kiran>java -XX:+PrintCommandLineFlags -version
-XX:InitialHeapSize=16777216 -XX:MaxHeapSize=268435456 -XX:+PrintCommandLineFlags -XX:-UseLargePagesIndividualAllocation
java version "1.6.0_32"
Java(TM) SE Runtime Environment (build 1.6.0_32-b05)
Java HotSpot(TM) Client VM (build 20.7-b02, mixed mode, sharing)
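Those sizes are in bytes, so on this machine InitialHeapSize=16777216 works out to 16 MB and MaxHeapSize=268435456 to 256 MB. If you would rather check from inside the application itself, a small sketch like the one below (HeapInfo is just an illustrative class name, not something from this blog) prints the effective limits at runtime:

public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reports the -Xmx / MaxHeapSize ceiling the JVM will use
        System.out.println("Max heap  : " + rt.maxMemory() / (1024 * 1024) + " MB");
        // totalMemory() reports the heap currently committed by the JVM
        System.out.println("Committed : " + rt.totalMemory() / (1024 * 1024) + " MB");
        // freeMemory() reports the unused portion of the committed heap
        System.out.println("Free      : " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}

Compile it with javac HeapInfo.java and run it with java HeapInfo.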

 


Java Performance Series

March 22, 2012 by kiranbadi1991 | Comments Off on Java Performance Series | Filed in Development, Environment, Memory, Performance Engineering

It's really been a long time since I worked on performance testing of Java based applications (a good 2+ years), so in order to refresh my past experience, I am thinking of starting a series of posts that will showcase my thoughts on testing, identifying, isolating, and fixing some of the key performance issues I have seen while working on Java based applications.

We know that performance tuning of Java based applications is a painful, iterative process where there is no one-size-fits-all solution that can determine the optimum memory requirements of an application. I call it painful for the simple reason that very few people are ready to make changes to the code base in order to fix a performance issue, and almost no one is when we are dealing with legacy systems or applications that no longer have the original SMEs working on them. Any change to the code base is considered a high risk item unless it is very low hanging fruit, something external that still impacts application performance (think load balancing). I believe that is one of the primary reasons why so many people turn to tuning memory allocation rather than fixing the badly written or outdated data structure code used by the application. Another valid reason I can think of is that hardware has become a lot cheaper than hiring a developer to fix the issue; however, this approach by no means assures the business that it will fix the original issue without side effects on other parts of the code. There always exists a risk of regression.

Allocating the right amount of memory to the Java heap, along with the right JVM runtime environment, can help mitigate some or even most performance issues, but definitely not all, especially if the application was designed without performance engineering requirements in mind. Memory requirements for a Java based application are quite often described and measured in terms of Java heap size. Many folks say that the larger the heap, the better the performance in terms of latency and throughput, but I believe otherwise, for the simple reason that if you have bad code consuming a lot of memory, a larger heap only gives that bad piece of code extra time to live rather than making it fail fast. That's a band aid, not a permanent fix. (The app pool recycling technique used by IIS is one good example of this.)

Tuning the JVM often helps ensure that the application meets acceptable levels of response time, throughput, and availability. To a large extent we can also improve the start-up time, latency, throughput, and manageability of the application by tuning the JVM and using the right runtime environment. The availability of an application can also be improved by deploying it across multiple JVMs, provided the application is designed in such a way that it supports this. Client JVM runtime environments often have a good start-up time and provide decent throughput and latency compared to server JVM runtime environments, but they lack the code optimization techniques used by the server runtime. Depending on the application and system requirements, one can choose between the client and server runtime environments.
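As a concrete illustration (the sizes and the jar name below are made-up examples, not a recommendation), the heap bounds and the runtime flavour are usually chosen with the standard HotSpot flags:

java -server -Xms512m -Xmx512m -jar myapp.jar

Here -Xms and -Xmx pin the initial and maximum heap (setting them equal avoids heap resizing during the run), while -server selects the server runtime; using -client instead, where the JDK offers it, trades peak throughput for faster start-up.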

That's it for now; stay tuned for the next post on some of my thoughts on Java performance.

