
Archives: Development

Linux vmstat command

April 4, 2019 by kiranbadi1991 | Filed in Database, Development, Environment, Memory, Performance Engineering, Process, Web Server

I have been spending a bit of my time on Amazon EC2 Linux, so I thought I would make a note of some of the commands I use frequently.

That way I can look directly at my own site for information on this command rather than googling and spending time searching the internet for it. (All I really need is what each column stands for.)

vmstat gives information about processes, memory, paging, block I/O, traps, and CPU activity. It displays either averages since boot or actual samples. Sampling mode is enabled by giving vmstat a sampling interval and a sample count.

vmstat

The columns in the output are as follows:

Procs        r: The number of processes waiting for run time

             b: The number of processes in uninterruptible sleep

Memory       swpd: The amount of virtual memory used (KB)

             free: The amount of idle memory (KB)

             buff: The amount of memory used as buffers (KB)

             cache: The amount of memory used as cache (KB)

Swap         si: Amount of memory swapped in from disk (KBps)

             so: Amount of memory swapped out to disk (KBps)

IO           bi: Blocks received from a block device (blocks/s)

             bo: Blocks sent to a block device (blocks/s)

System       in: The number of interrupts per second, including the clock

             cs: The number of context switches per second

CPU (% of total CPU time)

             us: Time spent running non-kernel code (user time, including nice time)
             sy: Time spent running kernel code (system time)
             id: Time spent idle; prior to Linux 2.5.41, this included I/O-wait time
             wa: Time spent waiting for IO

Some additional flags for vmstat are:

-m : displays the memory utilization of the kernel (slabs)
-a : provides information about active and inactive memory pages
-n : displays the header only once, which is useful when running vmstat in sampling mode and piping the output to a file (e.g. vmstat -n 2 10 prints 10 samples at a two-second interval)
-p {partition} : when given a partition, vmstat also provides I/O statistics for it
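To make the column layout above concrete, here is a minimal Python sketch that maps one sample line of vmstat output to the column names. The header and sample line are canned values in the standard procps column order, not output from a real machine; on a live box they would come from running vmstat itself.

```python
# Map one sample line of `vmstat` output to the column names described above.
# The header and sample line below are canned example values.
HEADER = "r b swpd free buff cache si so bi bo in cs us sy id wa".split()

sample = " 1  0      0 882384  49024 433956    0    0    12     7   25   40  1  0 98  1"

# Whitespace-split the line and pair each value with its column name.
fields = dict(zip(HEADER, (int(v) for v in sample.split())))

print("runnable processes (r):", fields["r"])
print("free memory, KB (free):", fields["free"])
print("user CPU % (us):", fields["us"])
```

The same dictionary lookup works for any of the sixteen columns, so it is a quick way to script checks like "alert if wa stays high across samples".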


CSS style debugging trick with Dev tools

January 24, 2019 by kiranbadi1991 | Filed in Browser, Development

One of the oldest tricks for debugging the CSS styles of an element is to apply the universal selector (*), which matches every element on the page, and give it an outline property with a solid border.

We do something like the below in Chrome DevTools:

[Screenshot css-1: adding the outline rule in the DevTools Styles pane]

Once you apply this property to the page, it looks something like below.

[Screenshot css-2: the page with every element outlined]

So we now know exactly which element's style to adjust so that it does not overflow the viewport.
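For reference, the rule applied in the screenshots boils down to a single declaration; the 1px width and red color are just example values:

```css
/* Outline every element so each box's boundaries become visible.
   outline, unlike border, takes up no space, so it does not shift the layout. */
* {
  outline: 1px solid red;
}
```

Using outline rather than border is the key detail: borders change element sizes and can move the very overflow you are trying to find, while outlines leave the layout untouched.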


Application Performance and Scalability

September 18, 2018 by kiranbadi1991 | Filed in Development, Environment, Performance Engineering

Application performance is measured by service time (response time), latency, throughput, and efficiency.

Depending on application needs, we describe performance as how fast a given task or piece of work can be completed by the program with the available computing resources.

Scalability means the ability of the application to increase its throughput or computing power when additional resources (CPU, disk, etc.) are given to the program.

Scalability and application performance are two different things for the majority of applications, and almost all of the time they are at odds with each other and require some level of tradeoff. Designing an application for scalability often requires distributing the given set of work (tasks) across parallel threads, programs, or computing resources, so that all of the available computing resources can be used to increase throughput.

A good example for understanding this concept is deploying a web application with its database (persistence), application server (business layer), cache server (service layer), and so on all hosted on a single machine. Since all the components of the application sit on the same machine, it is bound to give very good performance, as no network latency is involved across any tier. However, performance may start to deteriorate after a certain threshold of throughput is reached, and once that threshold is hit there is no way to increase throughput further, since every tier is on the same single server. If instead we move each of the layers onto a different machine, we may increase the throughput of the application, but this can decrease its performance, because network latency is now involved between tiers. It is a very rare case to see application performance and scalability go hand in hand.
