How to Enable Slow Query Log


Oct 30, 2025 - 12:46


The Slow Query Log is one of the most powerful diagnostic tools available to database administrators, developers, and system architects working with relational databases such as MySQL, MariaDB, and PostgreSQL. It records queries that take longer than a specified threshold to execute, providing critical insight into performance bottlenecks, inefficient indexing, and resource-heavy operations. Enabling the Slow Query Log is not merely a technical configuration; it's a foundational practice for maintaining scalable, responsive, and reliable database systems.

In modern web applications, even a single poorly optimized query can cascade into slow page loads, increased server load, and degraded user experience. Without visibility into which queries are underperforming, troubleshooting becomes guesswork. The Slow Query Log transforms this ambiguity into actionable data. By capturing query execution times, full SQL statements, and execution plans, it empowers teams to identify and eliminate performance killers before they impact end users.

This guide provides a comprehensive, step-by-step walkthrough on how to enable the Slow Query Log across major database systems. Whether you're managing a small-scale application or a high-traffic enterprise system, understanding and leveraging this tool is essential. We'll cover configuration specifics, best practices for tuning thresholds, analysis techniques, real-world examples, and essential tools to turn raw log data into performance improvements.

Step-by-Step Guide

Enabling Slow Query Log in MySQL

MySQL is the most widely used open-source relational database, and enabling its Slow Query Log is a straightforward process. The configuration can be done either dynamically at runtime or persistently via the configuration file. We recommend the latter for production environments to ensure settings survive restarts.

First, locate your MySQL configuration file. On most Linux distributions, this is typically found at:

  • /etc/mysql/my.cnf
  • /etc/my.cnf
  • /etc/mysql/mysql.conf.d/mysqld.cnf

On macOS with Homebrew, it may be located at /usr/local/etc/my.cnf.

Open the file using your preferred text editor (e.g., nano or vim):

sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

Add or modify the following lines under the [mysqld] section:

[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
log_queries_not_using_indexes = 1

Let's break down each setting:

  • slow_query_log = 1: Enables the slow query log.
  • slow_query_log_file: Specifies the path and filename where slow queries will be recorded. Ensure the directory exists and is writable by the MySQL process.
  • long_query_time = 2: Defines the threshold in seconds. Any query taking longer than 2 seconds will be logged. Adjust this based on your application's performance expectations.
  • log_queries_not_using_indexes = 1: Logs queries that do not use indexes, even if they execute quickly. This helps identify potential indexing opportunities.

After saving the file, restart the MySQL service to apply the changes:

sudo systemctl restart mysql

To verify the configuration is active, connect to MySQL and run:

SHOW VARIABLES LIKE 'slow_query_log%';

SHOW VARIABLES LIKE 'long_query_time';

If slow_query_log returns ON and long_query_time shows your configured threshold (here, 2.000000), the configuration is successful.

Enabling Slow Query Log in MariaDB

MariaDB, a fork of MySQL, maintains near-identical syntax for slow query logging. The configuration process is the same as MySQL, but the file locations may vary slightly depending on your distribution.

On Ubuntu/Debian systems, the configuration file is often located at:

/etc/mysql/mariadb.conf.d/50-server.cnf

Add the following lines under the [mysqld] section:

[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mariadb/mariadb-slow.log
long_query_time = 2
log_queries_not_using_indexes = 1

Create the log directory if it doesn't exist:

sudo mkdir -p /var/log/mariadb

sudo chown mysql:mysql /var/log/mariadb

Restart MariaDB:

sudo systemctl restart mariadb

Confirm the settings using the MySQL client:

mysql -u root -p

SHOW VARIABLES LIKE 'slow_query_log%';

SHOW VARIABLES LIKE 'long_query_time';

Enabling Slow Query Log in PostgreSQL

PostgreSQL handles slow query logging differently than MySQL. Instead of a dedicated slow query log, it uses the log_min_duration_statement parameter to log queries exceeding a specified duration.

Locate your PostgreSQL configuration file, typically:

  • /etc/postgresql/[version]/main/postgresql.conf (Debian/Ubuntu)
  • /var/lib/pgsql/[version]/data/postgresql.conf (RHEL/CentOS)

Open the file:

sudo nano /etc/postgresql/15/main/postgresql.conf

Find and modify the following lines:

log_min_duration_statement = 2000               # log queries taking longer than 2000 ms (2 seconds)
log_statement = 'none'                          # optional: set to 'mod' or 'all' for broader logging
log_destination = 'stderr'                      # default
logging_collector = on                          # required to capture logs to files
log_directory = '/var/log/postgresql'           # where logs are stored
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file naming pattern

Set log_min_duration_statement to the desired threshold in milliseconds. For example, 2000 logs queries longer than 2 seconds.

Ensure the log directory exists and is writable:

sudo mkdir -p /var/log/postgresql

sudo chown postgres:postgres /var/log/postgresql

Reload PostgreSQL to apply the changes:

sudo systemctl reload postgresql

Alternatively, use:

SELECT pg_reload_conf();

from within a PostgreSQL session to reload the configuration dynamically. Note that logging_collector cannot be changed by a reload; if you are turning it on for the first time, restart the service instead (sudo systemctl restart postgresql).

Verifying Log File Creation and Permissions

After enabling the Slow Query Log, verify that log files are being written to. Use the ls command to check the file's modification time:

ls -la /var/log/mysql/mysql-slow.log

ls -la /var/log/mariadb/mariadb-slow.log

ls -la /var/log/postgresql/

If no files are created, check the following:

  • Ensure the MySQL/MariaDB/PostgreSQL user has write permissions to the log directory.
  • Confirm the path in the configuration file is correct and absolute.
  • Check system logs for errors: journalctl -u mysql or journalctl -u postgresql.
  • Test with a slow query: SELECT SLEEP(5); and verify it appears in the log.
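Once entries start appearing, you can also sanity-check them programmatically. Below is a minimal sketch in plain Python (standard library only) that parses the metrics line MySQL and MariaDB write above each logged statement; the sample line is invented for illustration, shaped like what SELECT SLEEP(5); would produce:

```python
import re

# Matches the metrics line MySQL/MariaDB write above each slow query, e.g.
# "# Query_time: 4.872345  Lock_time: 0.000123 Rows_sent: 1  Rows_examined: 1250000"
HEADER = re.compile(
    r"#\s*Query_time:\s*(?P<query_time>[\d.]+)\s+"
    r"Lock_time:\s*(?P<lock_time>[\d.]+)\s+"
    r"Rows_sent:\s*(?P<rows_sent>\d+)\s+"
    r"Rows_examined:\s*(?P<rows_examined>\d+)"
)

def parse_header(line):
    """Return query metrics from a slow-log header line, or None if it isn't one."""
    m = HEADER.search(line)
    if m is None:
        return None
    return {
        "query_time": float(m.group("query_time")),
        "lock_time": float(m.group("lock_time")),
        "rows_sent": int(m.group("rows_sent")),
        "rows_examined": int(m.group("rows_examined")),
    }

# Hypothetical entry for a SELECT SLEEP(5); test query
sample = "# Query_time: 5.000123  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 1"
print(parse_header(sample))
```

This is only for quick spot checks; for real analysis, prefer purpose-built tools such as mysqldumpslow or pt-query-digest.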

Rotating and Managing Log Files

Slow query logs can grow rapidly in high-traffic environments. Unmanaged log files can consume disk space and degrade system performance. Implement log rotation using logrotate on Linux systems.

Create a logrotate configuration file:

sudo nano /etc/logrotate.d/mysql-slow

Add the following content:

/var/log/mysql/mysql-slow.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 640 mysql adm
    sharedscripts
    postrotate
        /usr/bin/mysqladmin flush-logs > /dev/null 2>&1 || true
    endscript
}

For PostgreSQL, create a similar file:

sudo nano /etc/logrotate.d/postgresql

Add the following content:

/var/log/postgresql/postgresql-*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 640 postgres adm
    sharedscripts
    postrotate
        /usr/bin/pg_ctl reload -D /var/lib/postgresql/[version]/main > /dev/null
    endscript
}

Test the configuration:

sudo logrotate -d /etc/logrotate.d/mysql-slow

This ensures logs are rotated daily, compressed, and retained for 14 days, reducing manual maintenance.

Best Practices

Set an Appropriate Long Query Time Threshold

The long_query_time value is critical. Setting it too low (e.g., 0.1 seconds) may flood the log with benign queries, making analysis difficult. Setting it too high (e.g., 10 seconds) may miss performance issues that accumulate under load.

Best practice: Start with a threshold of 1-2 seconds for most applications. Monitor the volume of logged queries over 24-48 hours. If the log contains fewer than 10 entries per hour, consider lowering the threshold to 0.5 seconds. If it fills up rapidly with hundreds of entries, raise it to 5 seconds and focus on the most expensive queries.

For high-performance applications (e.g., financial systems, real-time APIs), a threshold of 100-500 milliseconds may be appropriate. Always align thresholds with your Service Level Objectives (SLOs).
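The tuning loop described above is easy to script. The sketch below is plain Python; the cutoffs of 10 and 200 entries per hour are illustrative, not canonical. It buckets MySQL-style "# Time:" lines by hour and suggests which way to move long_query_time:

```python
from collections import Counter

def entries_per_hour(log_lines):
    """Bucket slow-log entries by hour using the '# Time: 2024-03-15T10:22:34...'
    lines MySQL writes for each entry."""
    hours = Counter()
    for line in log_lines:
        if line.startswith("# Time: "):
            hours[line[8:21]] += 1   # 'YYYY-MM-DDTHH' bucket
    return hours

def suggest_adjustment(hours, low=10, high=200):
    """Illustrative heuristic: fewer than `low` entries/hour suggests lowering
    long_query_time; more than `high` suggests raising it."""
    if not hours:
        return "lower"   # empty log: the threshold is probably too high
    avg = sum(hours.values()) / len(hours)
    if avg < low:
        return "lower"
    if avg > high:
        return "raise"
    return "keep"

log = ["# Time: 2024-03-15T10:22:34.123456Z"] * 3 + ["# Time: 2024-03-15T11:05:32.000000Z"] * 4
print(suggest_adjustment(entries_per_hour(log)))
```

With only a handful of entries per hour in the sample, the heuristic recommends lowering the threshold.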

Use log_queries_not_using_indexes Judiciously

Enabling log_queries_not_using_indexes is invaluable for identifying missing indexes. However, it can generate significant noise, especially in applications with complex joins or temporary tables that intentionally avoid indexes.

Recommendation: Enable this option during performance audits or after major code deployments. Disable it in production unless you're actively tuning indexes. Use it in staging environments to catch issues before they reach users.

Separate Logs by Environment

Never enable slow query logging on production systems without proper monitoring and retention policies. Use environment-specific configurations:

  • Development: Set threshold to 0.1 seconds to catch issues early.
  • Staging: Use 1 second to simulate production behavior.
  • Production: Use 2-5 seconds with strict log rotation and monitoring.

Store logs in separate directories per environment to avoid cross-contamination during analysis.

Monitor Log File Size and Disk Usage

Slow query logs are diagnostic tools, not archival systems. Unchecked, they can fill disk partitions and cause service outages. Use monitoring tools like df -h, du -sh /var/log/mysql/, or Prometheus + Node Exporter to alert when log directories exceed 80% capacity.

Automate alerts via scripts or infrastructure-as-code tools (e.g., Ansible, Terraform) to ensure compliance with storage policies.

Enable Logging Only During Performance Investigations

While the overhead of slow query logging is minimal, it is not zero. Each logged query requires additional I/O and disk space. For mission-critical systems with high query volumes, consider enabling the log temporarily during performance testing or after an incident.

Use dynamic configuration where possible:

SET GLOBAL slow_query_log = 'ON';

SET GLOBAL long_query_time = 1;

Then disable it after analysis:

SET GLOBAL slow_query_log = 'OFF';

This minimizes long-term impact while allowing targeted diagnostics.

Analyze Logs Regularly, Not Just When Problems Occur

Proactive performance tuning beats reactive firefighting. Schedule weekly reviews of slow query logs using automated scripts or dashboards. Look for:

  • Repeated queries with similar patterns (indicating application-level inefficiencies).
  • Queries with high Rows_examined values (signaling full table scans).
  • Queries with long lock times (indicating contention).

Integrate log analysis into your CI/CD pipeline or weekly engineering retrospectives.
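A simple way to surface "repeated queries with similar patterns" is to normalize literals before grouping, roughly what pt-query-digest calls a query fingerprint. A simplified Python sketch (real fingerprinting must also handle escaped quotes, comments, and IN-list collapsing):

```python
import re
from collections import Counter

def fingerprint(sql):
    """Collapse literals to '?' so structurally identical queries group together.
    Simplified: ignores escaped quotes, comments, and IN-list collapsing."""
    sql = re.sub(r"'[^']*'", "?", sql)   # string literals
    sql = re.sub(r"\b\d+\b", "?", sql)   # numeric literals
    return re.sub(r"\s+", " ", sql).strip().lower()

queries = [
    "SELECT * FROM posts WHERE id = 123",
    "SELECT * FROM posts WHERE id = 124",
    "SELECT * FROM users WHERE email = 'a@example.com'",
]
counts = Counter(fingerprint(q) for q in queries)
print(counts.most_common(1))   # the two posts lookups share one fingerprint
```

Grouping by fingerprint turns fifty near-identical log entries into one line with a count, which is exactly the view you want in a weekly review.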

Combine with Other Monitoring Tools

The Slow Query Log is most powerful when used alongside other metrics:

  • Performance Schema (MySQL): Provides real-time insight into query execution.
  • pg_stat_statements (PostgreSQL): Tracks execution statistics for all queries.
  • Application Performance Monitoring (APM): Correlates slow queries with user-facing latency.
  • System Metrics: CPU, I/O wait, and memory usage during slow query spikes.

For example, if a slow query coincides with high I/O wait, it may indicate insufficient RAM or slow disk performance, not just a bad query.

Tools and Resources

mysqldumpslow (MySQL)

MySQL includes a built-in utility called mysqldumpslow to summarize slow query logs. It aggregates similar queries and displays top offenders by count and total time.

Usage examples:

# Show top 10 slowest queries
mysqldumpslow -s t -t 10 /var/log/mysql/mysql-slow.log

# Show top 10 queries by count
mysqldumpslow -s c -t 10 /var/log/mysql/mysql-slow.log

# Show queries containing 'JOIN'
mysqldumpslow -s t -t 10 -g "JOIN" /var/log/mysql/mysql-slow.log

Flags:

  • -s: Sort by: c (count), t (time), l (lock time), r (rows sent)
  • -t: Top N results
  • -g: Filter by pattern (e.g., WHERE, JOIN)

Use this tool for quick overviews before diving into detailed analysis.

pt-query-digest (Percona Toolkit)

Percona's pt-query-digest is the industry-standard tool for analyzing MySQL and MariaDB slow query logs. It provides comprehensive reports, including query fingerprints, execution frequency, latency distribution, and execution plan insights.

Install Percona Toolkit:

sudo apt-get install percona-toolkit

Analyze a log file:

pt-query-digest /var/log/mysql/mysql-slow.log

Output includes:

  • Query rank by total time
  • Query fingerprint (normalized form)
  • Number of executions
  • Min, max, avg, and 95th percentile latency
  • Lock time and rows examined
  • Sample query with execution plan

Export to JSON or HTML for reporting:

pt-query-digest --output json /var/log/mysql/mysql-slow.log > slow_queries.json

pt-query-digest --output report /var/log/mysql/mysql-slow.log > report.txt

For real-time analysis, pipe live logs:

tail -f /var/log/mysql/mysql-slow.log | pt-query-digest

pg_stat_statements (PostgreSQL)

PostgreSQL's pg_stat_statements extension tracks execution statistics for all queries, including total time, calls, and rows affected. Unlike slow query logs, it captures every query, making it ideal for comprehensive performance analysis.

Enable it by adding to postgresql.conf:

shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all

Restart PostgreSQL, then create the extension:

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

Query top offenders:

SELECT
    query,
    calls,
    total_time,
    mean_time,
    rows,
    100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;

(On PostgreSQL 13 and later, the timing columns are named total_exec_time and mean_exec_time.)

Combine with log_min_duration_statement to correlate slow queries with broader performance trends.
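The hit_percent expression above uses nullif to avoid dividing by zero when no blocks were touched. The same guard, mirrored in Python for post-processing exported rows (the sample numbers are invented):

```python
def hit_percent(shared_blks_hit, shared_blks_read):
    """Buffer-cache hit ratio, as in the SQL expression; returns None when no
    blocks were touched, mirroring nullif(..., 0) yielding NULL."""
    total = shared_blks_hit + shared_blks_read
    if total == 0:
        return None
    return 100.0 * shared_blks_hit / total

print(hit_percent(980, 20))   # 98.0
print(hit_percent(0, 0))      # None
```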

Log Analysis Dashboards

For teams managing multiple servers, centralized log analysis is essential. Use tools like:

  • ELK Stack (Elasticsearch, Logstash, Kibana): Ingest, parse, and visualize slow query logs.
  • Graylog: Open-source log management with alerting and filtering.
  • Prometheus + Grafana: Monitor log volume and query latency trends over time.

Example Kibana dashboard fields:

  • Query duration
  • Query fingerprint
  • Timestamp
  • Database name
  • Rows examined

Set up alerts for spikes in slow queries or recurring problematic patterns.
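Log shippers generally want structured records rather than raw log text. Below is a deliberately naive Python sketch that turns MySQL-style slow-log entries into JSON documents carrying the dashboard fields above; a production pipeline would use a proper Logstash grok pattern or Filebeat module instead:

```python
import json
import re

# Extracts duration and rows examined from a MySQL-style slow-log header
HEADER = re.compile(r"# Query_time: ([\d.]+).*Rows_examined: (\d+)")

def to_json_lines(log_text, database="app"):
    """Emit one JSON document per slow-log entry. Naive: assumes the statement
    follows its header, skipping comment and SET timestamp lines."""
    docs, duration, rows = [], None, None
    for line in log_text.splitlines():
        m = HEADER.match(line)
        if m:
            duration, rows = float(m.group(1)), int(m.group(2))
        elif (duration is not None and line
              and not line.startswith("#")
              and not line.startswith("SET timestamp")):
            docs.append(json.dumps({
                "duration_s": duration,
                "rows_examined": rows,
                "database": database,
                "query": line.strip(),
            }))
            duration = None
    return docs

sample = ("# Query_time: 4.87  Lock_time: 0.0 Rows_sent: 1 Rows_examined: 1250000\n"
          "SET timestamp=1710500554;\n"
          "SELECT * FROM orders WHERE customer_id = 987654 AND status = 'pending';")
for doc in to_json_lines(sample):
    print(doc)
```

Each emitted line is a self-contained JSON document ready for bulk indexing into Elasticsearch or Graylog.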

Online Query Analyzers

For developers unfamiliar with SQL optimization, online tools can help interpret slow queries:

  • EXPLAIN.DEV: Paste your query to visualize execution plans.
  • SQLFiddle: Test queries against sample data.
  • MySQL Workbench: Built-in performance dashboard and query profiler.

These tools help bridge the gap between raw logs and actionable insights.

Real Examples

Example 1: Missing Index Causes Full Table Scan

A retail application experienced slow checkout times. The slow query log revealed:

# Time: 2024-03-15T10:22:34.123456Z
# User@Host: app_user[app_user] @ localhost []
# Query_time: 4.872345  Lock_time: 0.000123 Rows_sent: 1  Rows_examined: 1250000
SET timestamp=1710500554;
SELECT * FROM orders WHERE customer_id = 987654 AND status = 'pending';

Analysis:

  • Query took nearly 5 seconds.
  • Examined 1.25 million rows.
  • No index on customer_id or status.

Fix:

CREATE INDEX idx_customer_status ON orders (customer_id, status);

After applying the index, the same query now executes in 0.008 seconds and examines only 2 rows.

Example 2: N+1 Query Problem in Application Code

A blog platform's homepage loaded slowly. The slow query log showed 50+ identical queries:

# Query_time: 0.012345  Rows_examined: 1
SELECT * FROM posts WHERE id = 123;

-- Repeated 50 times with different IDs:
SELECT * FROM posts WHERE id = 124;
SELECT * FROM posts WHERE id = 125;
...

Analysis:

  • Each post was fetched individually in a loop.
  • 50 queries, each taking about 12 ms, but total latency exceeded 600 ms.

Fix:

Refactor application code to use a single query:

SELECT * FROM posts WHERE id IN (123, 124, 125, ..., 172);

Result: Total time dropped to 15ms.
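In application code, the same fix means collecting the IDs first and issuing one parameterized query. A schematic Python sketch using DB-API-style %s placeholders (build_batched_query is a hypothetical helper, not part of any driver):

```python
def build_batched_query(ids):
    """Build one parameterized IN (...) query in place of N single-row lookups.
    Placeholders (DB-API '%s' style) keep the query safe from SQL injection."""
    if not ids:
        return None, ()
    placeholders = ", ".join(["%s"] * len(ids))
    return f"SELECT * FROM posts WHERE id IN ({placeholders})", tuple(ids)

sql, params = build_batched_query([123, 124, 125])
print(sql)      # SELECT * FROM posts WHERE id IN (%s, %s, %s)
print(params)   # (123, 124, 125)
```

The query string and parameter tuple would then be passed to cursor.execute, replacing the per-ID loop entirely.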

Example 3: Unoptimized JOIN in Reporting Query

A financial dashboard reported delays during monthly reports. The slow log showed:

# Query_time: 18.765432  Lock_time: 0.000456 Rows_sent: 12000  Rows_examined: 45000000
SELECT u.name, SUM(t.amount) AS total
FROM users u
JOIN transactions t ON u.id = t.user_id
JOIN accounts a ON t.account_id = a.id
WHERE a.status = 'active'
GROUP BY u.name
ORDER BY total DESC;

Analysis:

  • Examined 45 million rows.
  • No indexes on transactions.user_id, transactions.account_id, or accounts.status.

Fix:

CREATE INDEX idx_transactions_user_account ON transactions (user_id, account_id);

CREATE INDEX idx_accounts_status ON accounts (status);

Query time dropped from 18 seconds to 1.2 seconds.

Example 4: PostgreSQL Query with High I/O

A SaaS application's analytics page was sluggish. PostgreSQL logs showed:

2024-03-15 11:05:32 UTC [12345]: [1-1] user=analytics,db=app,host=192.168.1.10 LOG:  duration: 3421.876 ms  statement: SELECT * FROM logs WHERE event_type = 'login' AND created_at > '2024-03-01';

Analysis:

  • Query scanned 8 million rows.
  • High I/O wait observed on the server.
  • No index on created_at or event_type.

Fix:

CREATE INDEX idx_logs_event_created ON logs (event_type, created_at);

Result: Query time reduced to 18ms.

FAQs

What is the default long_query_time in MySQL?

The default value is 10 seconds. This means only queries taking longer than 10 seconds are logged. For most modern applications, this is too high and should be lowered to 1-2 seconds for meaningful insights.

Can enabling the Slow Query Log slow down my database?

Yes, but minimally. Writing to disk adds slight I/O overhead. On high-throughput systems, this may add 1-3% latency. The trade-off is usually worth it for the diagnostic value. Use log rotation and appropriate thresholds to minimize impact.

How do I know if my slow query log is working?

Run a test query like SELECT SLEEP(5); and check the log file. If the query appears with a duration of ~5 seconds, the log is working. Also verify variables using SHOW VARIABLES LIKE 'slow_query_log%'; in MySQL.

Should I enable slow query logging in production?

Yes, but with caution. Use a reasonable threshold (e.g., 2 seconds), enable log rotation, monitor disk usage, and avoid logging queries that don't use indexes unless actively tuning. Never leave it enabled with a threshold of 0 or 0.1 seconds in production.

What's the difference between slow query log and general query log?

The general query log records every query executed, regardless of speed. It's extremely verbose and should only be used for debugging specific issues temporarily. The slow query log only records queries exceeding a threshold, making it far more practical for ongoing performance monitoring.

Can I analyze slow query logs without command-line tools?

Yes. Tools like MySQL Workbench, phpMyAdmin, and third-party SaaS platforms (e.g., Percona Monitoring and Management, Datadog) can import and visualize slow query logs through web interfaces. However, command-line tools like pt-query-digest remain the most powerful for deep analysis.

How often should I review slow query logs?

For production systems, review logs weekly. For high-traffic or critical applications, consider daily automated summaries. Use alerts to notify you of new patterns or spikes in slow queries.

Do I need to restart the database after changing slow query settings?

In MySQL and MariaDB, you can enable the log dynamically using SET GLOBAL without restarting. However, changes to the configuration file require a restart to persist after reboot. PostgreSQL requires a reload or restart for most configuration changes.

What if my slow query log is empty?

Check: (1) Is the log enabled? (2) Is the file path writable? (3) Is the threshold too high? (4) Are queries actually slow? Test with SELECT SLEEP(5); and verify the log.

Conclusion

Enabling the Slow Query Log is not a one-time task; it's a continuous practice essential for maintaining database health. Whether you're managing a small startup app or a global enterprise system, understanding which queries are underperforming is the first step toward optimization. The log transforms vague performance complaints into concrete, measurable problems.

This guide provided a complete roadmap: from initial configuration in MySQL, MariaDB, and PostgreSQL, to log rotation, analysis with industry-standard tools like pt-query-digest and pg_stat_statements, and real-world examples of how to turn log data into performance gains. We emphasized best practices such as setting appropriate thresholds, separating environments, and combining logs with other monitoring tools.

Remember: the goal is not to eliminate all slow queries; it's to eliminate the ones that matter. A single query that takes 10 seconds to execute and runs 100 times per minute is far more damaging than 100 queries taking 0.1 seconds each. Prioritize based on impact, not just duration.

By integrating slow query log analysis into your regular workflow, you shift from reactive firefighting to proactive performance engineering. You reduce server costs, improve user satisfaction, and build more resilient applications. Start today: enable the log, analyze the data, and optimize relentlessly. Your users, and your infrastructure, will thank you.