As performance engineers, we scan all layers for performance bottlenecks, including the front end, network, application, middleware, and database. During one of my projects, I discovered hundreds of long-running SQLs in a client onboarding application. In a debugging session with the database team, I learned a few crucial things about database tuning.
Lesson 1 - Index to speed up the SQL query
When SQLs are slow, the first thing to check is whether a suitable index exists. Adding an index is a quick win and can usually be implemented without a code change. However, SQLs can also be slow because of a high index fragmentation ratio. Read this blog post to learn more about solving index fragmentation issues.
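As a minimal T-SQL sketch (the table `dbo.Clients` and column `Email` are hypothetical), this is what adding an index and then checking its fragmentation might look like on SQL Server:

```sql
-- Hypothetical example: speed up lookups on Clients.Email
CREATE NONCLUSTERED INDEX IX_Clients_Email
    ON dbo.Clients (Email);

-- Check fragmentation of the indexes on the table
SELECT i.name,
       s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.Clients'), NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id;

-- Common rule of thumb: REORGANIZE between roughly 5% and 30%
-- fragmentation, REBUILD above that
ALTER INDEX IX_Clients_Email ON dbo.Clients REORGANIZE;
```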
Lesson 2 - Improve Compilation Time
If you deal with complex SQLs, query compilation can take several seconds and hurt response times. Ad-hoc plans in particular drive up compilation overhead. You could set the database option PARAMETERIZATION FORCED to make better use of prepared queries. Another fix is to look into plan guides and force statements to use the plan you specify instead of compiling a new one.
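A sketch of both options on SQL Server (the database name `OnboardingDB`, the table `dbo.Clients`, and the chosen hint are illustrative assumptions, not taken from the project above):

```sql
-- Switch the database to forced parameterization so ad-hoc
-- statements that differ only in literal values can reuse
-- one cached plan
ALTER DATABASE OnboardingDB SET PARAMETERIZATION FORCED;

-- A plan guide attaches hints to a specific statement without
-- touching application code (illustrative example)
EXEC sp_create_plan_guide
    @name            = N'PG_ClientLookup',
    @stmt            = N'SELECT * FROM dbo.Clients WHERE Country = ''DE'';',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = NULL,
    @hints           = N'OPTION (OPTIMIZE FOR UNKNOWN)';
```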
Lesson 3 - Redesign SQL Queries
After adding indexes and improving compilation times, some SQLs' response times may still be slow. In such cases, the design of the SQL itself is usually the problem. To speed up your SQLs, you have several options:
Check if EXISTS or IN improves performance
Check if you can remove DISTINCT clauses
Check if you select too much data
Improve the select performance by using TOP
Convert your SQL to a procedure
And several more
In one of my recent projects, after adding SELECT TOP 1, the query's response time dropped by 6 seconds.
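Two of the options above can be sketched in T-SQL (the tables `dbo.Clients`, `dbo.Orders`, and `dbo.Transactions` and their columns are hypothetical):

```sql
-- IN with a subquery vs. EXISTS: depending on the optimizer
-- and the data, one can outperform the other, so test both
SELECT c.ClientId, c.Name
FROM dbo.Clients AS c
WHERE EXISTS (SELECT 1
              FROM dbo.Orders AS o
              WHERE o.ClientId = c.ClientId);

-- If only the latest row is needed, TOP avoids fetching
-- the full result set
SELECT TOP (1) t.TransactionId, t.Amount
FROM dbo.Transactions AS t
WHERE t.ClientId = @ClientId
ORDER BY t.CreatedAt DESC;
```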
Lesson 4 - Archiving
As always, data volumes matter greatly when fetching data from your tables. Imagine an online banking system that stores all your transactions. For high-volume clients, there might be thousands of transactions a day, and as accounts age, a single customer can accumulate millions of records. Usually, only the last few months of transactional data need to stay in the online tables, while older records can be archived. Reading from a table that contains a few thousand rows instead of several million will noticeably improve your response times.
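A minimal archiving sketch, assuming a hypothetical `dbo.Transactions` table with a `CreatedAt` column, a pre-created archive table with the same schema, and a six-month retention window:

```sql
-- Move transactions older than ~6 months into an archive table,
-- then remove them from the online table in one transaction
BEGIN TRANSACTION;

INSERT INTO dbo.Transactions_Archive
SELECT *
FROM dbo.Transactions
WHERE CreatedAt < DATEADD(MONTH, -6, GETDATE());

DELETE FROM dbo.Transactions
WHERE CreatedAt < DATEADD(MONTH, -6, GETDATE());

COMMIT TRANSACTION;
```

In production, such a job would typically run in batches during off-peak hours to keep locking and transaction-log growth under control.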
Lesson 5 - Database Backup Strategies
Database backups are crucial for mission-critical applications. Sometimes, we create incremental backups every 15 minutes and full backups several times a day. Splitting your database into active data, which is the most essential, and inactive data, which has a lower priority, could be beneficial. A restore would be faster, and your transaction logs would be smaller.
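On SQL Server, the incremental part of such a strategy is typically implemented with differential backups on top of periodic full backups. A sketch, assuming a database named `OnboardingDB` and illustrative file paths:

```sql
-- Full backup, taken a few times a day
BACKUP DATABASE OnboardingDB
    TO DISK = N'D:\Backup\OnboardingDB_full.bak'
    WITH INIT;

-- Differential backup, scheduled (e.g. via SQL Server Agent)
-- every 15 minutes; captures only changes since the last full
BACKUP DATABASE OnboardingDB
    TO DISK = N'D:\Backup\OnboardingDB_diff.bak'
    WITH DIFFERENTIAL;
```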
How do we identify "bad" SQL Statements?
Based on my experience, SQL response times for online applications should stay below 500 milliseconds. When you filter for long-running SQLs, also look at the number of executions: a single long-running SQL might do little harm, but thousands of executions with response times above 500 ms are much worse. Consider using sum and count instead of average SQL response times to better understand your most expensive SQLs.
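On SQL Server, one way to apply this sum-and-count view is to rank cached statements by total elapsed time using the query-stats DMV (a sketch; the `TOP (20)` cutoff is an arbitrary choice):

```sql
-- Rank cached statements by total elapsed time so frequently
-- executed ~500 ms queries surface at the top, not just the
-- single slowest one (total_elapsed_time is in microseconds)
SELECT TOP (20)
    qs.execution_count,
    qs.total_elapsed_time / 1000.0 AS total_elapsed_ms,
    qs.total_elapsed_time / qs.execution_count / 1000.0 AS avg_elapsed_ms,
    st.text AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;
```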
Keep up the great work! Happy Performance Engineering!