
Archives: Database

Linux vmstat command

April 4, 2019 by kiranbadi1991 | Comments Off on Linux vmstat command | Filed in Database, Development, Environment, Memory, Performance Engineering, Process, Web Server

I have been spending a bit of my time on Amazon Linux EC2 instances, so I thought of making a note of some of the commands I frequently use.

It helps me to look directly at my own site for information on this command rather than googling and spending time searching the internet for it. (All I need is what each column stands for.)

vmstat gives information about processes, memory, paging, block I/O, traps, and CPU activity. It displays either averages since boot or actual samples. The sampling mode can be enabled by providing vmstat with a sampling interval and the number of samples.

vmstat
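
Running it with no arguments prints a single line of averages since boot; the header looks roughly like this (exact spacing varies by procps version, and newer releases add an st column for steal time):

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa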

The columns in the output are as follows:

Process (procs)
                             r: The number of processes waiting for run time
                             b: The number of processes in uninterruptible sleep

Memory
                     swpd: The amount of virtual memory used (KB)
                     free: The amount of idle memory (KB)
                     buff: The amount of memory used as buffers (KB)
                     cache: The amount of memory used as cache (KB)

Swap
                            si: Amount of memory swapped in from disk (KB/s)
                            so: Amount of memory swapped out to disk (KB/s)

IO
                            bi: Blocks received from a block device (blocks/s)
                            bo: Blocks sent to a block device (blocks/s)

System
                            in: The number of interrupts per second, including the clock
                            cs: The number of context switches per second

CPU (% of total CPU time)
                            us: Time spent running non-kernel code (user time, including nice time)
                            sy: Time spent running kernel code (system time)
                            id: Time spent idle. Prior to Linux 2.5.41, this included I/O-wait time
                            wa: Time spent waiting for IO

Some additional flags for vmstat are:

-m - displays the memory utilization of the kernel (slabs)
-a - provides information about active and inactive memory pages
-n - displays the header only once, which is useful when running vmstat in sampling mode and piping the output to a file (e.g. vmstat -n 2 10 takes 10 samples at a two-second interval)
-p {partition} - provides I/O statistics for the given partition
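
For quick reference, here are those flags in use; the partition name is just an example, so substitute one from your own system:

vmstat -a 2 5        # active/inactive memory pages, sampled every 2 seconds, 5 times
vmstat -m            # kernel slab utilization
vmstat -n 2 10       # print the header only once, 10 samples at a 2-second interval
vmstat -p sda1       # I/O statistics for a specific partition (example device)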


MySQL InnoDB Storage Engine

January 24, 2015 by kiranbadi1991 | Comments Off on MySQL InnoDB Storage Engine | Filed in Database, Environment, Performance Engineering, Small Businesses

MySQL comes with a different level of support for the InnoDB storage engine in every version of its release, and the default InnoDB configuration settings also vary from version to version. So if you are upgrading MySQL and have InnoDB as the default storage engine, it does no harm to revisit the configuration settings.

InnoDB is the default general-purpose storage engine and is recommended for all tables unless you have a special case. MySQL Server has a pluggable storage engine architecture that enables storage engines to be loaded into and unloaded from a running MySQL server.

InnoDB has the following features that separate it from other engine types:

  1. Row-level Locking – only the rows a connection is currently reading or writing are locked, rather than the whole table (MyISAM has table-level locks).
  2. ACID Compliance – Atomicity means that if a transaction fails, the changes are rolled back and not committed. Consistency means that each successfully executed transaction moves the database from one consistent state to the next without errors or data integrity issues. Isolation means that each transaction sees its own point-in-time view of the data and does not conflict with other transactions' data access; all transactions are executed in isolation. Durability ensures that any data committed in a successful transaction is written to disk in its final state, without the risk of data loss from errors or system failure, and is available to future transactions. So the chance of transaction corruption is very small.
  3. Referential Integrity – InnoDB can store data in multiple tables and maintain referential integrity and data consistency between them.

One can verify the type of engine used by MySQL by issuing the commands below. (I have phpMyAdmin, and it looks something like the screenshot below.)

[Screenshot: phpMyAdmin showing the available storage engines]
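
If you prefer the command line over phpMyAdmin, a rough equivalent is the following; your_database is a placeholder for one of your own schemas:

mysql -u root -p -e "SHOW ENGINES;"
mysql -u root -p -e "SHOW TABLE STATUS FROM your_database;"   # the Engine column shows the engine used by each table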

Also, before creating any tables with InnoDB as the engine, it makes sense to check the transaction isolation level. This can be checked with the command below:

[Screenshot: checking the transaction isolation level]
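
If you are not in phpMyAdmin, the same check can be done from the command line. Note that the variable is named tx_isolation in the MySQL versions current at the time of writing (it was later renamed transaction_isolation):

mysql -u root -p -e "SELECT @@tx_isolation;"
mysql -u root -p -e "SHOW VARIABLES LIKE 'tx_isolation';"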

The transaction isolation level has the following options:

READ UNCOMMITTED – Every select query operates without locks, so you don't get consistency and results might contain dirty reads. It violates the ACID principles and should never be used if your application issues transactions that require point-in-time consistent data reads.

READ COMMITTED – This setting offers consistent reads without table or row locks. Each consistent read, even within the same transaction, sets and reads its own fresh snapshot of point-in-time data. It offers a balance of consistency and performance for applications that do not require full ACID compliance, but it does not fully comply with ACID.

REPEATABLE READ – The InnoDB default isolation level for ACID compliance.

SERIALIZABLE – This is the same as REPEATABLE READ, but when auto-commit is disabled MySQL implicitly converts plain select queries to SELECT ... LOCK IN SHARE MODE. If auto-commit is enabled, each select query runs as its own transaction, which ensures that all reads are consistent. This setting also allows for XA distributed transaction support, which you can read more about in the MySQL manual. SERIALIZABLE will impact database transaction execution performance, so only enable it if it is absolutely necessary.
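
To change the level, a minimal sketch is below; the credentials and the chosen level are just placeholders:

mysql -u root -p -e "SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;"   # affects connections opened after this change

To make the change permanent across restarts, add transaction-isolation = READ-COMMITTED under the [mysqld] section of my.cnf, or use SET SESSION TRANSACTION ISOLATION LEVEL inside a connection to change it for a single session.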


Loading Microsoft IIS Logs into SQL Server 2012 with Log Parser

February 1, 2014 by kiranbadi1991 | Comments Off on Loading Microsoft IIS Logs into SQL Server 2012 with Log Parser | Filed in Database, Development, Performance Engineering, Performance Test Tools, Scripting, Web Server

If you ever think of building a customized tool for viewing and analyzing IIS logs, the first thing you will probably do is think of some way of loading the IIS logs into a database, typically MS SQL Express or MySQL.

Parsing IIS logs and loading them into a database has its own challenges. We can always write custom code that reads each line of the log file and loads it into a database table. However, that requires specialized programming skills, since parsing a log file means first reading the file, removing the headers and other unnecessary data, and then loading the required data into the database table. Your program has to be aware of the complete format and structure of the log file so that it can handle any unexpected characters. It is quite a tedious and time-consuming task. I have loaded IIS logs into an MS SQL Express database using PowerShell and Log Parser.

Log Parser is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. You tell Log Parser what information you need and how you want it processed. The results of your query can be custom-formatted in text-based output, or they can be persisted to more specialty targets like SQL, SYSLOG, or a chart.

So in this post I will show you how to load IIS logs into an MS SQL Express database. The first thing you need to do is download Log Parser and install it on your local machine or server. Once you have installed the tool, you need to know the location of your IIS log files and ensure that you have the rights to access and read them. You also need to ensure that SQL Server is installed and that you have the necessary rights on the database, most likely rights to create tables and full access to the database. If the database is located on another machine, make sure that the machine where Log Parser is installed can connect to the database and can reach the location of the IIS log files.

Once you have Log Parser installed and have access to SQL Server and the IIS logs location, you can use the example queries below to load IIS logs into the SQL Server database. (In the example below I have loaded multiple log files into the database.)

C:\Program Files (x86)\Log Parser 2.2>LogParser.exe "SELECT * INTO TBIIS76 FROM
C:\Users\Kiran\Desktop\IISLogs\u_ex130327.txt" -o:SQL -oConnString:"Driver=SQL S
erver;Server=KIRAN\SQLExpress; Database=IISLOGS;Trusted_Connection=yes" -createT
able:OFF -e:10

Statistics:
-----------
Elements processed: 75
Elements output:    75
Execution time:     0.07 seconds


C:\Program Files (x86)\Log Parser 2.2>LogParser.exe "SELECT * INTO TBIIS76 FROM
C:\Users\Kiran\Desktop\IISLogs\u_ex130318.txt" -o:SQL -oConnString:"Driver=SQL S
erver;Server=KIRAN\SQLExpress; Database=IISLOGS;Trusted_Connection=yes" -createT
able:OFF -e:10
Task completed with parse errors.
Parse errors:
Cannot find '#Fields' directive in header of file "C:\Users\Kiran\Desktop\IIS
  Logs\u_ex130318.txt". Lines 1 to 285 have been ignored

Statistics:
-----------
Elements processed: 12968
Elements output:    12968
Execution time:     20.93 seconds


C:\Program Files (x86)\Log Parser 2.2>LogParser.exe "SELECT * INTO TBIIS76 FROM
C:\Users\Kiran\Desktop\IISLogs\u_ex130319.txt" -o:SQL -oConnString:"Driver=SQL S
erver;Server=KIRAN\SQLExpress; Database=IISLOGS;Trusted_Connection=yes" -createT
able:OFF -e:10

Statistics:
-----------
Elements processed: 25219
Elements output:    25219
Execution time:     54.60 seconds


C:\Program Files (x86)\Log Parser 2.2>LogParser.exe "SELECT * INTO TBIIS76 FROM
C:\Users\Kiran\Desktop\IISLogs\u_ex130320.txt" -o:SQL -oConnString:"Driver=SQL S
erver;Server=KIRAN\SQLExpress; Database=IISLOGS;Trusted_Connection=yes" -createT
able:OFF -e:10

Statistics:
-----------
Elements processed: 25922
Elements output:    25922
Execution time:     53.50 seconds


C:\Program Files (x86)\Log Parser 2.2>LogParser.exe "SELECT * INTO TBIIS76 FROM
C:\Users\Kiran\Desktop\IISLogs\u_ex130321.txt" -o:SQL -oConnString:"Driver=SQL S
erver;Server=KIRAN\SQLExpress; Database=IISLOGS;Trusted_Connection=yes" -createT
able:OFF -e:10

Statistics:
-----------
Elements processed: 27395
Elements output:    27395
Execution time:     59.74 seconds


C:\Program Files (x86)\Log Parser 2.2>LogParser.exe "SELECT * INTO TBIIS76 FROM
C:\Users\Kiran\Desktop\IISLogs\u_ex130322.txt" -o:SQL -oConnString:"Driver=SQL S
erver;Server=KIRAN\SQLExpress; Database=IISLOGS;Trusted_Connection=yes" -createT
able:OFF -e:10

Statistics:
-----------
Elements processed: 22772
Elements output:    22772
Execution time:     60.12 seconds (00:01:0.12)


C:\Program Files (x86)\Log Parser 2.2>LogParser.exe "SELECT * INTO TBIIS76 FROM
C:\Users\Kiran\Desktop\IISLogs\u_ex130323.txt" -o:SQL -oConnString:"Driver=SQL S
erver;Server=KIRAN\SQLExpress; Database=IISLOGS;Trusted_Connection=yes" -createT
able:OFF -e:10

Statistics:
-----------
Elements processed: 17497
Elements output:    17497
Execution time:     42.62 seconds


C:\Program Files (x86)\Log Parser 2.2>LogParser.exe "SELECT * INTO TBIIS76 FROM
C:\Users\Kiran\Desktop\IISLogs\u_ex130325.txt" -o:SQL -oConnString:"Driver=SQL S
erver;Server=KIRAN\SQLExpress; Database=IISLOGS;Trusted_Connection=yes" -createT
able:OFF -e:10

Statistics:
-----------
Elements processed: 670
Elements output:    670
Execution time:     1.35 seconds


C:\Program Files (x86)\Log Parser 2.2>LogParser.exe "SELECT * INTO TBIIS76 FROM
C:\Users\Kiran\Desktop\IISLogs\u_ex130326.txt" -o:SQL -oConnString:"Driver=SQL S
erver;Server=KIRAN\SQLExpress; Database=IISLOGS;Trusted_Connection=yes" -createT
able:OFF -e:10

Statistics:
-----------
Elements processed: 1135
Elements output:    1135
Execution time:     1.60 seconds


C:\Program Files (x86)\Log Parser 2.2>
Let me explain the query below, which inserts the logs into the SQL Server database:

C:\Program Files (x86)\Log Parser 2.2>LogParser.exe "SELECT * INTO TBIIS76 FROM C:\Users\Kiran\Desktop\IISLogs\u_ex130326.txt" -o:SQL -oConnString:"Driver=SQL Server;Server=KIRAN\SQLExpress;Database=IISLOGS;Trusted_Connection=yes" -createTable:OFF -e:10

If you look closely at the above query, you can see that the connection string points to the local server KIRAN\SQLExpress, that I am using the SQL Server driver, and that the database is IISLOGS. Since I am using Windows authentication on my local machine, I set Trusted_Connection=yes. However, if you are connecting to a SQL Server over the network with SQL Server authentication, then you need to replace

 
Trusted_Connection=yes

with

-username:yourusername -password:yourpassword 

I have set the -createTable flag to OFF since the table TBIIS76 already exists in the IISLOGS database. If you need Log Parser to create the table itself, set the -createTable flag to ON, as in the sketch below.
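
Putting those pieces together, a command for a remote SQL Server using SQL authentication (following the -username/-password substitution described above) that also lets Log Parser create the table might look roughly like this; the server name, database, credentials, and log path are placeholders, so treat it as a sketch rather than output from my setup:

C:\Program Files (x86)\Log Parser 2.2>LogParser.exe "SELECT * INTO TBIIS76 FROM C:\Users\Kiran\Desktop\IISLogs\u_ex130326.txt" -o:SQL -oConnString:"Driver=SQL Server;Server=YOURSERVER\SQLExpress;Database=IISLOGS" -username:yourusername -password:yourpassword -createTable:ON -e:10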

The -e flag is a very important flag; it is essentially extended error logging for Log Parser. It will give you information about what went wrong in case some records are not loaded into the database for any reason.

Below is an example of the parse errors reported for one of the log files. Without this flag, you would spend hours trying to figure out what is wrong when errors occur. If you look at the error below, you will also understand why it is painful to write custom code to parse log files: if a status field is missing, your program will probably die without giving you any indication.

Task completed with parse errors.
Parse errors:
Cannot find '#Fields' directive in header of file "C:\Users\Kiran\Desktop\IIS
  Logs\u_ex130318.txt". Lines 1 to 285 have been ignored

If the table does not exist and you want Log Parser to create it for you, it will create one. The table structure, along with the column data types, looks something like the screenshot below.

[Screenshot: table structure and column data types created by Log Parser]
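
If you would rather inspect that structure from the command line than from the GUI, one way (assuming the SQL Server command-line tools are installed, and reusing the server and database names from the commands above) is:

sqlcmd -S KIRAN\SQLExpress -d IISLOGS -E -Q "SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'TBIIS76'"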

Once the data is loaded into SQL Server, it opens up a new world: you can query the results, build customized tools on top of them, or use SQL Server Management Studio to write custom queries and do whatever kind of analysis you want. This is especially helpful for analyzing production metrics and doing capacity planning.
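
For example, a quick hit count and average time-taken per URL can be pulled straight from the loaded table. This is only a sketch: the column names below are the standard IIS W3C field names, and I am assuming Log Parser kept them unchanged when it created the table, so adjust them to match your schema:

sqlcmd -S KIRAN\SQLExpress -d IISLOGS -E -Q "SELECT TOP 10 [cs-uri-stem], COUNT(*) AS hits, AVG([time-taken]) AS avg_time_taken FROM TBIIS76 GROUP BY [cs-uri-stem] ORDER BY hits DESC"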

Also, please note that you can load event logs, registry data, HTTPERR logs, and so on into the database using Log Parser. Using this approach, you should be able to do all kinds of performance analysis on your logs.

In the next post I will show how to load IIS logs into SQL Server at runtime using a system ODBC driver.
