How to Monitor Redis Memory
Redis is one of the most widely used in-memory data stores in modern application architectures. Its speed, simplicity, and versatility make it ideal for caching, session storage, real-time analytics, message brokering, and more. However, because Redis stores all data in RAM, memory usage becomes a critical operational concern. Unlike disk-based databases, Redis has no fallback when memory is exhausted—exceeding allocated memory can lead to evictions, performance degradation, or even service outages.
Monitoring Redis memory is not optional—it’s essential. Without proper visibility into memory consumption patterns, teams risk unexpected crashes, inefficient resource allocation, and poor user experiences. Effective memory monitoring enables proactive scaling, identifies memory leaks, optimizes data structures, and ensures high availability under load.
This guide provides a comprehensive, step-by-step approach to monitoring Redis memory. Whether you’re managing a small deployment or a large-scale production cluster, this tutorial will equip you with the knowledge, tools, and best practices to maintain optimal Redis performance through intelligent memory oversight.
Step-by-Step Guide
1. Understand Redis Memory Metrics
Before you begin monitoring, you must understand the key memory-related metrics Redis exposes. These metrics are accessible via the INFO memory command and form the foundation of all memory analysis.
Key metrics include:
- used_memory: Total number of bytes allocated by Redis using its allocator (typically jemalloc or libc malloc).
- used_memory_human: Human-readable version of used_memory (e.g., “2.15G”).
- used_memory_rss: Resident Set Size—the amount of physical memory (RAM) occupied by the Redis process. This includes memory fragmentation overhead.
- used_memory_peak: Peak memory allocated by Redis since startup.
- used_memory_peak_human: Human-readable peak memory usage.
- mem_fragmentation_ratio: Ratio of used_memory_rss to used_memory. A ratio significantly above 1.5 indicates memory fragmentation; below 1 may indicate swapping.
- mem_allocator: The memory allocator in use (e.g., jemalloc, libc).
- active_defrag_running: Indicates if memory defragmentation is currently active (Redis 4.0+).
These metrics help you answer critical questions: Is Redis using more memory than expected? Is fragmentation wasting resources? Has memory usage spiked recently? Is the system swapping?
2. Connect to Redis and Retrieve Memory Stats
To begin monitoring, connect to your Redis instance using the Redis CLI or any compatible client.
For local Redis:
redis-cli
Then run:
INFO memory
For remote Redis instances (with authentication):
redis-cli -h your-redis-host.com -p 6379 -a yourpassword INFO memory
Alternatively, run the MEMORY STATS command (Redis 4.0+) for a more structured, per-category breakdown:
redis-cli -h your-redis-host.com -p 6379 -a yourpassword MEMORY STATS
Sample output:
used_memory:2263286888
used_memory_human:2.11G
used_memory_rss:2518773760
used_memory_peak:2264147024
used_memory_peak_human:2.11G
used_memory_lua:37888
mem_fragmentation_ratio:1.11
used_memory_startup:791568
used_memory_dataset:2262642504
used_memory_dataset_perc:99.97%
allocator_allocated:2263325576
allocator_active:2271465472
allocator_resident:2523774976
total_system_memory:16777216000
total_system_memory_human:15.62G
used_memory_scripts:0
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.00
allocator_frag_bytes:8139896
allocator_rss_ratio:1.11
allocator_rss_bytes:2523004504
rss_overhead_ratio:0.99
rss_overhead_bytes:-5001216
mem_cluster_nodes:0
Pay attention to used_memory and used_memory_rss for actual consumption, and mem_fragmentation_ratio for efficiency. If maxmemory is set, compare used_memory against it to determine headroom.
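To turn that comparison into a quick headroom check, you can combine INFO memory and CONFIG GET from the shell. A minimal sketch, assuming a local instance with standard awk available; it falls back to a plain readout when maxmemory is 0:
#!/bin/bash
# Report used_memory as a percentage of the configured maxmemory
USED=$(redis-cli -h localhost -p 6379 INFO memory | grep '^used_memory:' | cut -d: -f2 | tr -d '\r')
MAX=$(redis-cli -h localhost -p 6379 CONFIG GET maxmemory | tail -n 1 | tr -d '\r')
if [ "$MAX" -gt 0 ] 2>/dev/null; then
  awk -v u="$USED" -v m="$MAX" 'BEGIN { printf "used %d of %d bytes (%.1f%% of maxmemory)\n", u, m, u / m * 100 }'
else
  echo "maxmemory is 0 (unlimited); used_memory is $USED bytes"
fi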
3. Set Up Automated Monitoring
Manual checks are insufficient for production environments. Automate collection using scripts or monitoring agents.
Option A: Bash Script with Cron
Create a script named redis_memory_check.sh:
#!/bin/bash
REDIS_HOST="localhost"
REDIS_PORT="6379"
REDIS_PASSWORD="yourpassword"

# Get memory info
MEMORY_INFO=$(redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" -a "$REDIS_PASSWORD" INFO memory 2>/dev/null)

# Extract values (Redis INFO output is CRLF-terminated, so strip the carriage returns)
USED_MEMORY=$(echo "$MEMORY_INFO" | grep "^used_memory:" | cut -d: -f2 | tr -d '\r')
USED_MEMORY_RSS=$(echo "$MEMORY_INFO" | grep "^used_memory_rss:" | cut -d: -f2 | tr -d '\r')
FRAG_RATIO=$(echo "$MEMORY_INFO" | grep "^mem_fragmentation_ratio:" | cut -d: -f2 | tr -d '\r')
MAXMEMORY=$(echo "$MEMORY_INFO" | grep "^maxmemory:" | cut -d: -f2 | tr -d '\r')

# Log to file
echo "$(date): used_memory=$USED_MEMORY, used_memory_rss=$USED_MEMORY_RSS, frag_ratio=$FRAG_RATIO, maxmemory=$MAXMEMORY" >> /var/log/redis_memory.log

# Alert if fragmentation > 1.5
if (( $(echo "$FRAG_RATIO > 1.5" | bc -l) )); then
  echo "ALERT: High fragmentation ($FRAG_RATIO) on $REDIS_HOST" >> /var/log/redis_alerts.log
fi

# Alert if memory > 80% of maxmemory (use bc, since bash arithmetic cannot handle the 0.8 factor)
if [[ $MAXMEMORY -gt 0 ]] && (( $(echo "$USED_MEMORY > $MAXMEMORY * 0.8" | bc -l) )); then
  echo "ALERT: Memory usage exceeds 80% of maxmemory on $REDIS_HOST" >> /var/log/redis_alerts.log
fi
Make it executable:
chmod +x redis_memory_check.sh
Add to crontab to run every 5 minutes:
crontab -e
Add line:
*/5 * * * * /path/to/redis_memory_check.sh
Option B: Use Prometheus + Redis Exporter
For scalable, metric-driven monitoring, deploy the Redis Exporter alongside your Redis instance.
Install via Docker:
docker run -d -p 9121:9121 --name redis-exporter \
-e REDIS_ADDR=redis://your-redis-host:6379 \
-e REDIS_PASSWORD=yourpassword \
oliver006/redis_exporter
Configure Prometheus to scrape metrics from http://your-host:9121/metrics:
scrape_configs:
  - job_name: 'redis'
    static_configs:
      - targets: ['your-host:9121']
Redis Exporter exposes over 100 Redis metrics, including redis_memory_used_bytes, redis_mem_fragmentation_ratio, and redis_memory_max_bytes, making it ideal for dashboards and alerting.
4. Visualize Memory Trends with Grafana
Once Prometheus is collecting Redis memory metrics, connect it to Grafana for visualization.
Create a new dashboard and add panels for:
- Used Memory (Bytes): redis_memory_used_bytes
- Memory Fragmentation Ratio: redis_mem_fragmentation_ratio
- Memory Usage Percentage: redis_memory_used_bytes / redis_memory_max_bytes * 100
- Used Memory RSS: redis_memory_used_rss_bytes
Set alerts for:
- Fragmentation ratio > 1.5 for 5 minutes
- Memory usage > 85% for 10 minutes
- Memory usage > 95% (critical)
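If you use Prometheus for alerting, these thresholds map directly onto alerting rules. The sketch below assumes the exporter metric names discussed above (redis_memory_used_bytes, redis_memory_max_bytes, redis_mem_fragmentation_ratio) and a non-zero maxmemory; adjust names, durations, and thresholds for your environment:
groups:
  - name: redis-memory
    rules:
      - alert: RedisHighFragmentation
        expr: redis_mem_fragmentation_ratio > 1.5
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Redis fragmentation ratio above 1.5 on {{ $labels.instance }}"
      - alert: RedisMemoryHigh
        expr: redis_memory_used_bytes / redis_memory_max_bytes > 0.85
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Redis memory usage above 85% on {{ $labels.instance }}"
      - alert: RedisMemoryCritical
        expr: redis_memory_used_bytes / redis_memory_max_bytes > 0.95
        labels:
          severity: critical
        annotations:
          summary: "Redis memory usage above 95% on {{ $labels.instance }}"
Load the rule file through the rule_files section of prometheus.yml and route the resulting alerts through Alertmanager.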
Use the prebuilt Redis dashboard published for the Prometheus Redis Exporter on Grafana Labs (dashboard ID 763) as a starting point. Customize it to reflect your thresholds and data retention policies.
5. Monitor Memory by Key and Database
Redis stores all data in a single namespace by default, but you can use multiple databases (0–15). To identify which keys are consuming the most memory:
Use the MEMORY USAGE command to check individual key memory:
MEMORY USAGE mykey
For bulk analysis, use SCAN with MEMORY USAGE in a script:
#!/bin/bash
# List the top 20 memory-consuming keys (SCAN-based, safe for production)
redis-cli -h localhost -p 6379 --scan --pattern "*" | while read -r key; do
  usage=$(redis-cli -h localhost -p 6379 MEMORY USAGE "$key" 2>/dev/null)
  if [[ $usage =~ ^[0-9]+$ ]]; then
    echo "$usage $key"
  fi
done | sort -rn | head -20
This lists the top 20 memory-consuming keys. Look for:
- Large hash, zset, or list structures
- Keys with no TTL (time-to-live)
- Keys storing serialized objects (e.g., JSON, Protocol Buffers)
Use KEYS * only in development—never in production—as it blocks Redis. Always use SCAN for safe iteration.
6. Analyze Memory with Redis Memory Analyzer Tools
Manual inspection is time-consuming. Use specialized tools to automate deep memory analysis:
- redis-rdb-tools: Parses RDB files to generate memory usage reports. Install via pip: pip install rdbtools. Run rdb -c memory /var/lib/redis/dump.rdb to generate a CSV report of key sizes.
- Redis Memory Analyzer (RMA): A Python tool that analyzes memory usage per data type and key pattern.
- RedisInsight: A GUI tool from Redis Labs that provides real-time memory analysis, key explorer, and heatmaps of memory usage across databases.
RedisInsight is particularly valuable for teams without CLI expertise. It visualizes memory per database, key type, and even shows top keys with their sizes and TTLs.
7. Set Up Alerts and Thresholds
Define alerting rules based on business impact and system capacity.
Critical Alerts (Immediate Action Required):
- Memory usage exceeds 95% of maxmemory
- Fragmentation ratio > 2.0
- Redis process OOM-killed (check system logs with dmesg | grep -i redis)
Warning Alerts (Investigate Soon):
- Memory usage > 80% of maxmemory
- Fragmentation ratio > 1.5
- Memory usage increases > 20% in 1 hour
Use tools like Alertmanager (with Prometheus), Datadog, New Relic, or even Slack webhooks to trigger notifications. Example Slack webhook script:
curl -X POST -H 'Content-type: application/json' \
--data '{"text":"ALERT: Redis memory usage at 92% on prod-redis-01"}' \
https://hooks.slack.com/services/YOUR/WEBHOOK/URL
Integrate this into your monitoring script or use a platform like UptimeRobot or Prometheus Alertmanager to automate delivery.
8. Correlate Memory with Other Metrics
Memory spikes often correlate with other system behaviors. Always monitor alongside:
- Commands per second: A sudden spike in writes may cause memory growth.
- Evictions: Check evicted_keys in INFO stats. High eviction rates indicate insufficient memory.
- Client connections: Too many clients may increase memory overhead.
- Latency: Memory pressure can cause command delays.
- System swap usage: If used_memory is noticeably higher than used_memory_rss (fragmentation ratio below 1), part of Redis's memory has likely been swapped to disk—crippling performance.
Use Grafana to create a unified dashboard with Redis memory, CPU, network, and system swap metrics. This holistic view helps identify root causes.
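For a quick ad-hoc correlation check, you can pull the relevant counters from a single INFO call; the fields below are standard INFO output (adjust host and port as needed):
# Memory, eviction, client, and throughput counters in one pass
redis-cli -h localhost -p 6379 INFO | grep -E '^(used_memory:|used_memory_rss:|mem_fragmentation_ratio:|evicted_keys:|connected_clients:|instantaneous_ops_per_sec:)'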
Best Practices
1. Set a Reasonable maxmemory Limit
Never allow Redis to use unlimited memory. Set maxmemory in your redis.conf to 70–80% of available RAM to leave room for OS, forks, and fragmentation.
maxmemory 12gb
Use maxmemory-policy to define behavior when the limit is reached:
- allkeys-lru: Evict least recently used keys (recommended for caches)
- volatile-lru: Evict only keys with TTL set
- allkeys-random: Evict random keys
- volatile-random: Evict random keys with TTL
- volatile-ttl: Evict keys with shortest TTL
- noeviction: Return errors on write (safe for persistent data)
For most caching use cases, allkeys-lru is optimal. For session storage with TTLs, use volatile-lru.
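Both settings can also be changed at runtime without a restart—for example (CONFIG REWRITE persists the change, but only if Redis was started with a config file):
redis-cli CONFIG SET maxmemory 12gb
redis-cli CONFIG SET maxmemory-policy allkeys-lru
redis-cli CONFIG REWRITE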
2. Use Appropriate Data Structures
Memory efficiency varies by data type:
- Strings: Efficient for single values, but inefficient for many small keys.
- Hashes: Store multiple fields in one key—use for objects (e.g., user profiles).
- Sorted Sets: Higher overhead due to scoring and ordering.
- Lists: Use only for queues; avoid large lists.
- Sets: Good for unique items; small integer-only sets use the compact intset encoding.
Optimize by:
- Using hashes to group related fields: HSET user:1000 name "Alice" email "alice@example.com" instead of separate keys (see the comparison sketch after this list).
- Keeping small hashes and sets within the hash-max-ziplist-entries and set-max-intset-entries thresholds, so Redis stores them in its compact internal encodings.
- Avoiding nested structures (e.g., JSON strings in Redis)—they prevent efficient eviction and are harder to query.
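To see the effect of these choices on a live instance, compare encodings and per-key memory directly. A small illustrative session (key names are hypothetical; exact byte counts vary by Redis version and encoding thresholds):
HSET user:1000 name "Alice" email "alice@example.com"
OBJECT ENCODING user:1000
MEMORY USAGE user:1000
SET user:1000:name "Alice"
SET user:1000:email "alice@example.com"
MEMORY USAGE user:1000:name
MEMORY USAGE user:1000:email
Small hashes typically report a compact encoding (listpack or ziplist, depending on version), and the single hash usually consumes fewer total bytes than the equivalent pair of string keys.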
3. Implement Key Expiration
Never store data indefinitely unless absolutely necessary. Use TTLs—for example, to write a key that expires in 1 hour:
SET mykey "value" EX 3600
Or set TTL on existing keys:
EXPIRE mykey 3600
Use consistent naming patterns (e.g., cache:user:1000) to easily identify and clean up keys. Consider automated cleanup scripts that delete keys matching patterns older than a threshold.
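To find keys that were written without an expiration, combine SCAN with TTL (TTL returns -1 for a key that exists but has no expiry). A minimal sketch—run it against a replica or during low traffic on large datasets, since it issues one TTL call per key:
#!/bin/bash
# Report keys that have no TTL set (TTL returns -1 for persistent keys)
redis-cli -h localhost -p 6379 --scan --pattern "*" | while read -r key; do
  ttl=$(redis-cli -h localhost -p 6379 TTL "$key" 2>/dev/null)
  if [[ "$ttl" == "-1" ]]; then
    echo "no-ttl: $key"
  fi
done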
4. Enable Memory Defragmentation
Redis 4.0+ supports automatic memory defragmentation. Enable it in redis.conf:
activedefrag yes
active-defrag-ignore-bytes 100mb
active-defrag-threshold-lower 10
active-defrag-threshold-upper 100
active-defrag-cycle-min 25
active-defrag-cycle-max 75
This allows Redis to move memory chunks in the background to reduce fragmentation without blocking operations. Monitor active_defrag_running to confirm it’s active.
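You can confirm defragmentation is enabled and doing work without a restart. active_defrag_running appears in INFO memory, and the defrag hit/miss counters are reported in INFO stats on builds compiled with active defrag support:
redis-cli CONFIG GET activedefrag
redis-cli INFO memory | grep active_defrag_running
redis-cli INFO stats | grep -E 'active_defrag_(hits|misses|key_hits|key_misses)'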
5. Avoid Large Keys
Keys with millions of elements (e.g., a single hash with 1M fields) can cause latency spikes during eviction, replication, or restarts. Break them into smaller chunks.
Use sharding: Instead of hash:bigdata, use hash:bigdata:0, hash:bigdata:1, etc.
Use SCAN with MEMORY USAGE to identify large keys and refactor them.
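redis-cli also ships with built-in key scanners that sample safely via SCAN: --bigkeys reports the largest key of each type by element count, and --memkeys (available in newer redis-cli versions) samples keys with MEMORY USAGE and reports them by size in bytes:
redis-cli -h localhost -p 6379 --bigkeys
redis-cli -h localhost -p 6379 --memkeys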
6. Monitor Replica Memory Usage
Redis replicas (slaves) replicate the entire dataset. If your master uses 10GB, each replica will also use ~10GB. Monitor replicas independently.
Replicas may also use additional memory for replication buffers. Check repl_backlog_active and repl_backlog_size in INFO replication.
Use different alert thresholds for replicas—memory usage should closely match the master. A large discrepancy may indicate replication lag or partial sync issues.
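A quick way to compare a replica against its master from the shell—repl_backlog_* and the replication offsets are standard INFO replication fields, and a growing offset gap is a rough indicator of lag (hostnames here are placeholders):
# On the master
redis-cli -h master-host INFO replication | grep -E 'master_repl_offset|repl_backlog_size'
# On each replica
redis-cli -h replica-host INFO replication | grep -E 'slave_repl_offset|repl_backlog_active|repl_backlog_size'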
7. Regularly Audit and Clean Up
Perform weekly audits:
- Review top 50 memory-consuming keys
- Check for keys without TTLs
- Verify eviction policy is working
- Confirm no orphaned or stale keys from failed processes
Use RedisInsight or custom scripts to generate reports and share with engineering teams.
8. Plan for Scaling
When memory usage consistently exceeds 70% of capacity, plan for scaling:
- Horizontal scaling: Use Redis Cluster to shard data across multiple nodes.
- Vertical scaling: Upgrade to a machine with more RAM.
- Architectural changes: Offload cold data to disk-based databases (e.g., PostgreSQL).
Never wait until Redis is at 95% memory usage to scale—plan proactively.
Tools and Resources
1. Redis Exporter
https://github.com/oliver006/redis_exporter
Open-source Prometheus exporter for Redis metrics. Supports Redis 2.8+ and Sentinel. Exposes detailed memory, latency, replication, and command stats.
2. RedisInsight
https://redis.com/redis-enterprise/redis-insight/
Official GUI tool from Redis Labs. Provides memory heatmaps, a key explorer, performance monitoring, and cluster visualization. Free to use.
3. Prometheus + Grafana
Industry-standard open-source monitoring stack. Ideal for teams already using Kubernetes, Docker, or cloud-native infrastructure. Combine with Redis Exporter for end-to-end visibility.
4. redis-rdb-tools
https://github.com/sripathikrishnan/redis-rdb-tools
Parse RDB files to analyze memory usage offline. Useful for post-mortem analysis after a crash or for capacity planning.
5. Datadog and New Relic
Commercial APM platforms with built-in Redis integration. Offer automated alerting, anomaly detection, and deep-dive diagnostics. Best for enterprises with budget for premium tools.
6. Redis CLI Commands Reference
- INFO memory – Memory usage stats
- INFO stats – Command counters, evictions
- INFO replication – Replica status, backlog size
- MEMORY USAGE <key> – Memory used by a specific key
- SCAN 0 MATCH * COUNT 1000 – Safe iteration over keys
- CLIENT LIST – View active connections
- CONFIG GET maxmemory – Check current maxmemory setting
7. Redis Documentation
https://redis.io/docs/latest/operate/oss_and_stack/management/monitoring/
Official Redis monitoring guide with in-depth explanations of all metrics and configuration options.
8. Books and Courses
- Redis in Action by Josiah L. Carlson
- Designing Data-Intensive Applications by Martin Kleppmann (its chapters on replication and partitioning are directly relevant to operating Redis)
- Udemy: “Mastering Redis: From Beginner to Expert”
Real Examples
Example 1: Unexpected Memory Growth Due to Missing TTL
A team noticed Redis memory usage rising steadily over 3 days—from 4GB to 12GB on a 16GB instance. No new features were deployed.
Investigation:
- Used redis-cli --bigkeys to find large keys.
- Discovered a key named user_sessions:batch:2024-05-15 with 800,000 entries and no TTL.
- Root cause: A background job was writing session data but forgot to set expiration.
Resolution:
- Added EX 3600 to the write command.
- Deleted the orphaned key: DEL user_sessions:batch:2024-05-15.
- Deployed a monitoring alert for keys without TTLs.
Result: Memory dropped to 4.5GB within 2 hours. No recurrence.
Example 2: High Fragmentation After Rapid Scaling
After a traffic spike, the Redis process's resident memory (used_memory_rss) grew from roughly 5GB to 9GB while used_memory stayed stable at about 5GB, giving a fragmentation ratio of 1.8.
Investigation:
- Redis was handling 5x more writes per second.
- Memory allocator (jemalloc) couldn’t compact freed chunks fast enough.
- System had 16GB RAM—no swap, but fragmentation caused memory pressure.
Resolution:
- Enabled activedefrag yes in the config.
- Restarted Redis during a low-traffic window to reset memory layout.
- Upgraded to 32GB RAM to accommodate future spikes.
- Upgraded to 32GB RAM to accommodate future spikes.
Result: Fragmentation dropped to 1.1. System stabilized.
Example 3: Cache Invalidation Storm
After a deployment, Redis memory usage spiked to 98%, triggering evictions. Performance degraded.
Investigation:
- Checked INFO stats: evicted_keys jumped from 0 to 12,000/min.
- Found that a new feature was caching API responses with 1-minute TTLs.
- At peak, 200,000 new keys were created per minute—each with a 5KB payload.
- Eviction policy was allkeys-lru, but the new keys were “hot” and never evicted.
Resolution:
- Reduced TTL to 15 seconds.
- Added client-side caching (e.g., browser or CDN) to reduce Redis load.
- Implemented rate limiting on the cache-creation endpoint.
Result: Evictions dropped to 50/min. Memory usage normalized at 6GB.
Example 4: Replica Memory Drain
One of three Redis replicas had 50% more memory usage than the others.
Investigation:
- Checked INFO replication on the high-memory replica: repl_backlog_active:1 and repl_backlog_size:1073741824 (1GB).
- The master had a backlog size of 100MB.
- Root cause: Network latency caused the replica to fall behind, forcing Redis to retain more replication buffer.
Resolution:
- Improved network connectivity between replica and master.
- Increased repl-backlog-size on the master to 2GB to accommodate lag.
- Added an alert for replication offset lag > 10 seconds.
Result: Replica memory normalized. No further discrepancies.
FAQs
What causes Redis memory to keep increasing?
Memory increases due to: new data writes, lack of TTLs on keys, memory fragmentation, replication backlog buildup, or large key growth. Always check for keys without expiration and monitor eviction rates.
Is it normal for used_memory_rss to be higher than used_memory?
Yes. used_memory_rss includes OS-level memory overhead, fragmentation, and allocator metadata. A ratio of 1.1–1.3 is normal. Above 1.5 indicates fragmentation.
How do I reduce Redis memory usage?
Use TTLs, switch to efficient data structures (hashes over strings), enable defragmentation, delete orphaned keys, and consider sharding or eviction policies. Avoid storing large serialized objects.
Can Redis memory be monitored without restarting the server?
Yes. All metrics are available via INFO memory and Redis Exporter. No restart is needed to begin monitoring.
What’s the difference between used_memory and used_memory_rss?
used_memory is the amount of memory Redis’s allocator has allocated. used_memory_rss is the actual physical RAM consumed by the Redis process, including fragmentation and OS overhead.
How often should I check Redis memory usage?
For production systems, monitor every 1–5 minutes. Use automated tools—manual checks are error-prone and impractical at scale.
Should I use Redis Cluster to reduce memory pressure?
Redis Cluster distributes data across shards, allowing you to scale horizontally. It doesn’t reduce memory per key, but it lets you add more nodes to handle total memory growth.
What happens if Redis runs out of memory?
If maxmemory is set, Redis evicts keys based on policy. If maxmemory is not set, Redis will crash or be killed by the OS OOM killer. Always set a limit.
Can I monitor Redis memory in Kubernetes?
Yes. Deploy Redis Exporter as a sidecar or separate pod. Use Prometheus Operator and Grafana to visualize metrics. Set resource limits and requests in your deployment YAML.
Does Redis compression help reduce memory usage?
Redis does not compress data by default. However, it uses memory-efficient encodings (ziplists, intsets) for small data. For larger objects, compress data before storing (e.g., gzip JSON).
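Compression is normally done in the application client before the write, but you can sketch the idea from the shell. The example below assumes a local file named payload.json (a placeholder) and uses redis-cli's -x flag, which reads the final argument from stdin:
# Store raw and gzip-compressed versions side by side and compare their footprint
redis-cli -x SET cache:payload:raw < payload.json
gzip -c payload.json | redis-cli -x SET cache:payload:gz
redis-cli MEMORY USAGE cache:payload:raw
redis-cli MEMORY USAGE cache:payload:gz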
Conclusion
Monitoring Redis memory is not a one-time task—it’s an ongoing discipline critical to system reliability and performance. Redis’s in-memory nature makes it fast, but also fragile under memory pressure. Without proper oversight, even minor misconfigurations can lead to outages, data loss, or degraded user experiences.
This guide has provided a complete roadmap: from understanding core metrics and setting up automated collection, to visualizing trends, identifying root causes, and implementing best practices. You now know how to detect memory leaks, reduce fragmentation, optimize data structures, and respond to alerts with confidence.
Remember: proactive monitoring prevents crises. Set thresholds, automate alerts, visualize trends, and audit regularly. Use tools like Redis Exporter, Grafana, and RedisInsight to turn raw data into actionable insights. And never underestimate the power of TTLs and efficient data modeling—they are your first line of defense against memory bloat.
By applying these strategies consistently, you’ll ensure your Redis deployments remain stable, scalable, and efficient—even under the heaviest loads. Memory monitoring isn’t just about watching numbers—it’s about safeguarding the performance of every application that depends on Redis.