~17 min read • Updated Mar 15, 2026
Introduction
Dynamic websites generate content on the fly. Each request may involve database queries, template rendering, and business logic. While this is powerful, it is also expensive compared to serving static files directly from disk. For small sites, this overhead is manageable, but for medium‑ to high‑traffic applications, caching becomes essential.
Caching stores the result of expensive operations so that future requests can reuse the stored result instead of recalculating it.
Pseudocode Example
if page exists in cache:
    return cached page
else:
    generate page
    store page in cache
    return page
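The pseudocode above translates directly into a few lines of Python. This is an illustrative in-process sketch (the `_cache` dict, `get_page`, and `generate` names are ours, not Django's), not how Django's framework is implemented:

```python
import time

# Minimal cache-or-compute helper: keys map to (value, expires_at) pairs.
_cache = {}

def get_page(key, generate, timeout=300):
    """Return a cached page, generating and storing it on a miss."""
    entry = _cache.get(key)
    if entry is not None:
        value, expires_at = entry
        if time.time() < expires_at:
            return value          # cache hit: skip the expensive work
    value = generate()            # cache miss: generate the page
    _cache[key] = (value, time.time() + timeout)
    return value
```

A second call with the same key within the timeout returns the stored result without invoking `generate` again.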
Django provides a robust caching framework that supports multiple levels of caching:
- Full‑site caching
- Per‑view caching
- Template fragment caching
- Low‑level caching API
Django also integrates well with downstream caches such as Squid, Varnish, and browser caches.
Setting Up the Cache
To use caching, you must configure the CACHES setting in your settings.py.
This determines where cached data is stored—memory, filesystem, database, or a custom backend.
Different backends offer different performance characteristics. For high‑performance caching, Memcached is the recommended choice.
Memcached
Memcached is a high‑performance, in‑memory caching system used by large websites such as Facebook and Wikipedia. It stores data entirely in RAM, making it extremely fast.
Requirements
To use Memcached with Django:
- Install Memcached server
- Install a Python binding:
pylibmc or pymemcache
Basic Configuration (TCP Socket)
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "127.0.0.1:11211",
    }
}
Using a Unix Socket
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "unix:/tmp/memcached.sock",
    }
}
Distributed Memcached Setup
Memcached can run across multiple servers and behave as a single logical cache.
To enable this, list all server addresses in LOCATION.
Example: Multiple Servers
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": [
            "172.19.26.240:11211",
            "172.19.26.242:11211",
        ],
    }
}
Different Ports
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": [
            "172.19.26.240:11211",
            "172.19.26.242:11212",
            "172.19.26.244:11213",
        ],
    }
}
Default Options for PyMemcacheCache
You can override these in OPTIONS:
"OPTIONS": {
    "allow_unicode_keys": True,
    "default_noreply": False,
    "serde": pymemcache.serde.pickle_serde,
}
Important Notes About Memcached
Because Memcached stores data in memory:
- All cached data is lost if the server restarts
- It should never be used as permanent storage
- It is ideal for high‑performance, temporary caching
This applies to all Django caching backends: they are for caching, not for persistent storage.
Conclusion
Django’s cache framework is a powerful tool for improving performance in dynamic web applications. Whether you cache entire pages, specific views, or small template fragments, caching can dramatically reduce server load and improve response times. Memcached, in particular, offers exceptional performance for medium‑ to large‑scale applications.
Redis Cache in Django
Redis is a high‑performance, in‑memory data store that works exceptionally well as a caching backend.
To use Redis with Django, you need a running Redis server and the redis-py Python client.
Installing hiredis is also recommended for faster parsing.
Basic Redis Configuration
To configure Redis as your cache backend:
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379",
    }
}
Redis with Authentication
"LOCATION": "redis://username:password@127.0.0.1:6379"
Redis Replication (Leader + Replicas)
When using multiple Redis servers in replication mode:
- Writes go to the first server (leader)
- Reads are distributed randomly among replicas
"LOCATION": [
    "redis://127.0.0.1:6379",  # leader
    "redis://127.0.0.1:6378",  # replica 1
    "redis://127.0.0.1:6377",  # replica 2
]
Database Caching
Django can store cached data in a database table. This backend is useful when you have a fast, well‑indexed database and want persistent caching across processes.
Configuration
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.db.DatabaseCache",
        "LOCATION": "my_cache_table",
    }
}
Creating the Cache Table
python manage.py createcachetable
This command creates the table defined in LOCATION.
It will not overwrite existing tables.
Important Note
DatabaseCache does not automatically remove expired entries at the database level.
Expired rows are cleaned up during add(), set(), or touch().
Database Caching with Multiple Databases
When using multiple databases, you must define routing rules for the cache table. Django treats the cache table as a model named CacheEntry in the django_cache app.
Example Router
class CacheRouter:
    def db_for_read(self, model, **hints):
        if model._meta.app_label == "django_cache":
            return "cache_replica"
        return None

    def db_for_write(self, model, **hints):
        if model._meta.app_label == "django_cache":
            return "cache_primary"
        return None

    def allow_migrate(self, db, app_label, **hints):
        if app_label == "django_cache":
            return db == "cache_primary"
        return None
File-Based Caching
The file-based backend stores each cached value as a separate file on disk. It is simple to configure but may become slow with large numbers of files.
Configuration
"BACKEND": "django.core.cache.backends.filebased.FileBasedCache",
"LOCATION": "/var/tmp/django_cache",
Security Warning
Never store cache files inside MEDIA_ROOT or STATIC_ROOT.
Cache files are serialized using pickle, which can lead to remote code execution if exposed.
Performance Warning
File-based caching slows down when storing many files. Consider Redis or Memcached for high‑traffic environments.
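The one-pickled-file-per-key idea can be sketched in a few lines of plain Python. This is an illustrative toy (the `file_cache_set`/`file_cache_get` helpers and the filename scheme are ours), not Django's actual `FileBasedCache` implementation:

```python
import hashlib
import pickle
import tempfile
from pathlib import Path

# Each cached value lives in its own pickled file under a cache directory.
CACHE_DIR = Path(tempfile.mkdtemp())

def _path_for(key):
    # Hash the key so arbitrary strings become safe, fixed-length filenames.
    return CACHE_DIR / (hashlib.md5(key.encode()).hexdigest() + ".djcache")

def file_cache_set(key, value):
    _path_for(key).write_bytes(pickle.dumps(value))

def file_cache_get(key, default=None):
    path = _path_for(key)
    if path.exists():
        return pickle.loads(path.read_bytes())
    return default
```

The sketch also makes the security warning concrete: anyone who can read (or worse, write) these pickle files can tamper with what your application deserializes.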
Local-Memory Caching
This is Django’s default cache backend when none is specified. It stores cached data in memory within the current process.
Configuration
"BACKEND": "django.core.cache.backends.locmem.LocMemCache",
"LOCATION": "unique-snowflake",
Characteristics
- Very fast
- Thread‑safe
- Per‑process (no sharing across processes)
- Culls old entries using the MAX_ENTRIES / CULL_FREQUENCY strategy
- Not ideal for production
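The per-process, thread-safe character of local-memory caching can be illustrated with a dict guarded by a lock. This toy class (our own, not Django's `LocMemCache`) shows why two processes, or even two instances, see completely independent data:

```python
import threading

class TinyLocMemCache:
    """Toy per-process cache: a dict guarded by a lock."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def set(self, key, value):
        with self._lock:          # lock makes concurrent writers safe
            self._data[key] = value

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)
```

Because the storage is just an attribute of the object, nothing is shared between instances or processes, which is exactly why this style of backend suits development more than multi-process production deployments.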
Dummy Cache (Development Only)
The dummy cache implements the cache API but does not store anything. It is useful when you want to disable caching without changing your code.
Configuration
"BACKEND": "django.core.cache.backends.dummy.DummyCache"
Custom Cache Backends
If needed, you can define your own cache backend by providing its import path:
"BACKEND": "path.to.backend"
However, unless you have a strong reason, it is best to use Django’s built‑in backends, which are well‑tested and reliable.
Conclusion
Django provides a rich ecosystem of caching backends suitable for different performance needs. Redis offers high‑speed, distributed caching; database caching provides persistence; file‑based caching is simple and flexible; local‑memory caching is ideal for development; and the dummy cache is perfect for disabling caching safely. Choosing the right backend can dramatically improve your application’s performance and scalability.
Introduction
Django’s caching framework is highly configurable and allows fine‑grained control over how cached data is stored, expired, and retrieved.
These configurations are defined in the CACHES setting and can dramatically improve performance when used correctly.
Cache Arguments
TIMEOUT
Defines the default expiration time for cached values (in seconds). The default is 300 seconds.
- None: keys never expire.
- 0: keys expire immediately (effectively disabling caching).
OPTIONS
A dictionary of backend‑specific options. Third‑party backends pass these options directly to the underlying client library.
MAX_ENTRIES
The maximum number of items allowed in the cache before old entries are removed. Default: 300.
CULL_FREQUENCY
Controls how many entries are removed when MAX_ENTRIES is reached. The ratio is 1 / CULL_FREQUENCY.
- 2 → remove half the entries
- 0 → clear the entire cache (faster but more cache misses)
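The MAX_ENTRIES / CULL_FREQUENCY interaction can be sketched over a plain dict. This is an illustration of the documented ratio, not Django's internal culling code (each backend implements its own variant):

```python
def cull(entries, max_entries, cull_frequency):
    """Drop roughly 1/cull_frequency of the keys once max_entries is hit."""
    if len(entries) < max_entries:
        return                    # still under the limit: nothing to do
    if cull_frequency == 0:
        entries.clear()           # CULL_FREQUENCY = 0: wipe the whole cache
        return
    # Delete every cull_frequency-th key, i.e. 1/cull_frequency of them.
    doomed = list(entries)[::cull_frequency]
    for key in doomed:
        del entries[key]
```

With `cull_frequency=2`, half the keys are removed; with `0`, the cache is emptied in one cheap operation at the cost of more subsequent misses.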
KEY_PREFIX
A string automatically prepended to all cache keys. Useful when multiple sites share the same cache backend.
VERSION
The default version number for cache keys. Changing the version invalidates all existing keys.
KEY_FUNCTION
A dotted path to a function that defines how the final cache key is constructed from prefix, version, and key.
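A key function has the signature `(key, key_prefix, version)`. The sketch below mirrors the documented default format, `KEY_PREFIX:version:key`:

```python
def default_key_func(key, key_prefix, version):
    # Combine prefix, version, and key into the final cache key,
    # matching the "<KEY_PREFIX>:<version>:<key>" shape Django documents.
    return "%s:%s:%s" % (key_prefix, version, key)
```

A custom KEY_FUNCTION with this signature could instead hash the key, add a tenant identifier, and so on.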
Example: File-Based Cache Configuration
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.filebased.FileBasedCache",
        "LOCATION": "/var/tmp/django_cache",
        "TIMEOUT": 60,
        "OPTIONS": {"MAX_ENTRIES": 1000},
    }
}
Example: PyLibMC with Advanced Options
"OPTIONS": {
    "binary": True,
    "username": "user",
    "password": "pass",
    "behaviors": {"ketama": True},
}
Example: PyMemcache with Pooling
"OPTIONS": {
    "no_delay": True,
    "ignore_exc": True,
    "max_pool_size": 4,
    "use_pooling": True,
}
Example: Redis with Custom Database and Pool
"OPTIONS": {
    "db": "10",
    "pool_class": "redis.BlockingConnectionPool",
}
Per‑Site Cache
The simplest way to enable caching for your entire site is to use Django’s cache middleware.
Required Middleware
MIDDLEWARE = [
    "django.middleware.cache.UpdateCacheMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.cache.FetchFromCacheMiddleware",
]
Important: UpdateCacheMiddleware must be first, and FetchFromCacheMiddleware must be last.
Required Settings
CACHE_MIDDLEWARE_ALIAS = "default"
CACHE_MIDDLEWARE_SECONDS = 600
CACHE_MIDDLEWARE_KEY_PREFIX = "mysite"
Only GET and HEAD responses with status 200 are cached. Requests with different query parameters are cached separately.
Per‑View Cache
For more granular control, Django provides the cache_page decorator to cache individual views.
Basic Example
from django.views.decorators.cache import cache_page

@cache_page(60 * 15)
def my_view(request):
    ...
Using a Specific Cache
@cache_page(60 * 15, cache="special_cache")
def my_view(request):
    ...
Using key_prefix
@cache_page(60 * 15, key_prefix="site1")
def my_view(request):
    ...
Applying Per‑View Cache in URLconf
To avoid coupling the view to caching logic, apply the decorator in the URLconf:
from django.urls import path
from django.views.decorators.cache import cache_page

urlpatterns = [
    path("foo/<int:code>/", cache_page(60 * 15)(my_view)),
]
Conclusion
Django’s caching system offers powerful tools for optimizing performance. By understanding cache arguments, enabling per‑site caching, and applying per‑view caching where appropriate, you can significantly reduce server load and improve response times. These features give you precise control over how your application stores, expires, and retrieves cached data.
Introduction
Django provides multiple caching layers, but sometimes full‑page or per‑view caching is too coarse. Template fragment caching allows you to cache only the expensive parts of a template. For even more control, Django exposes a low‑level cache API that lets you cache arbitrary Python objects.
Template Fragment Caching
To use template fragment caching, load the cache tag at the top of your template:
{% load cache %}
Basic Usage
The {% cache %} tag caches the enclosed block for a given number of seconds:
{% cache 500 sidebar %}
... sidebar content ...
{% endcache %}
The second argument (sidebar) is the fragment name and must be a literal string.
Caching Based on Dynamic Values
You can cache multiple versions of a fragment by adding extra arguments. These arguments may be variables or expressions:
{% cache 500 sidebar request.user.username %}
... user-specific sidebar ...
{% endcache %}
Multilingual Fragment Caching
If your site uses internationalization, you can vary the cache by language:
{% load i18n %}
{% load cache %}
{% get_current_language as LANGUAGE_CODE %}
{% cache 600 welcome LANGUAGE_CODE %}
{% translate "Welcome to example.com" %}
{% endcache %}
Timeout as a Template Variable
The timeout may be a template variable that resolves to an integer:
{% cache my_timeout sidebar %}
...
{% endcache %}
Selecting a Specific Cache Backend
By default, the tag uses the template_fragments cache if defined, otherwise the default cache.
You can override this using the using keyword:
{% cache 300 local_fragment using="localcache" %}
...
{% endcache %}
It is an error to reference a cache alias that is not configured.
Getting the Cache Key for a Fragment
To manually invalidate or inspect a cached fragment, use:
from django.core.cache.utils import make_template_fragment_key
from django.core.cache import cache
key = make_template_fragment_key("sidebar", [username])
cache.delete(key)
This is especially useful when you need to invalidate fragments after updates.
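Under the hood, the fragment key combines the fragment name with a hash of the vary-on values. The sketch below shows the idea; the exact string format is an internal detail of Django and may differ between versions, so always prefer `make_template_fragment_key` in real code:

```python
import hashlib

def fragment_key(fragment_name, vary_on=()):
    """Illustrative sketch: fragment name plus a hash of the vary-on args."""
    hasher = hashlib.md5()
    for arg in vary_on:
        # Each vary-on value contributes to the hash, so different users
        # (or languages, etc.) get different cache entries.
        hasher.update(str(arg).encode())
        hasher.update(b":")
    return "template.cache.%s.%s" % (fragment_name, hasher.hexdigest())
```

Two calls with the same name and vary-on values produce the same key, which is what lets you recompute the key later to delete the fragment.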
The Low‑Level Cache API
When fragment caching is not enough, Django’s low‑level cache API gives you full control. You can cache any picklable Python object: strings, lists, dictionaries, model objects, etc.
Accessing Cache Backends
Use caches to access any configured backend:
from django.core.cache import caches
cache1 = caches["myalias"]
cache2 = caches["myalias"]
cache1 is cache2 # True
The default cache is available as:
from django.core.cache import cache
Basic Operations
cache.set()
cache.set("my_key", "hello world", 30)
cache.get()
cache.get("my_key")
If the key does not exist, None is returned unless a default is provided:
cache.get("my_key", "expired")
Handling Literal None Values
Use a sentinel object to distinguish between “missing” and “stored None”:
sentinel = object()
cache.get("my_key", sentinel) is sentinel
cache.add()
Adds a key only if it does not already exist:
cache.set("add_key", "initial")
cache.add("add_key", "new") # ignored
cache.get_or_set()
Retrieves a value or sets it if missing:
cache.get_or_set("timestamp", datetime.datetime.now, 100)
If the key is missing, the callable is executed and stored.
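The get-or-set semantics are easy to state precisely in plain Python. This sketch (the `_store` dict and `_MISS` sentinel are ours) shows the key property: a callable default runs only on a miss:

```python
_store = {}
_MISS = object()  # sentinel so a stored None is not mistaken for a miss

def get_or_set(key, default):
    """Return the cached value, computing and storing it on a miss."""
    value = _store.get(key, _MISS)
    if value is _MISS:
        # Callables are invoked lazily, only when the key is absent.
        value = default() if callable(default) else default
        _store[key] = value
    return value
```

This pattern avoids the race-prone "check then set" idiom scattered through view code; Django's `cache.get_or_set()` offers it as a single call.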
Conclusion
Template fragment caching gives you precise control over which parts of a page should be cached, while Django’s low‑level cache API allows you to cache arbitrary Python objects with full flexibility. Together, these tools enable powerful performance optimizations for complex, data‑driven applications.
Introduction
Django’s low‑level cache API gives you full control over how cached data is stored, retrieved, and managed. It is ideal when full‑page or per‑view caching is too coarse and you need fine‑grained caching logic for specific data structures or expensive computations.
Retrieving Multiple Keys with get_many()
The get_many() method retrieves multiple keys in a single cache lookup:
cache.set("a", 1)
cache.set("b", 2)
cache.set("c", 3)
cache.get_many(["a", "b", "c"])
# {'a': 1, 'b': 2, 'c': 3}
Only keys that exist and haven’t expired are returned.
Setting Multiple Keys with set_many()
To store multiple values efficiently:
cache.set_many({"a": 1, "b": 2, "c": 3})
Like set(), it accepts an optional timeout.
On backends like Memcached, it returns a list of keys that failed to be stored.
Deleting Keys with delete()
cache.delete("a")
# True
Returns True if the key was deleted, False otherwise.
Deleting Multiple Keys with delete_many()
cache.delete_many(["a", "b", "c"])
Clearing the Entire Cache with clear()
Removes all keys from the cache:
cache.clear()
Use with caution, as this affects all cached data, not just your application’s keys.
Updating Expiration with touch()
To refresh a key’s expiration time:
cache.touch("a", 10)
# True
If no timeout is provided, the backend’s default TIMEOUT is used.
Incrementing and Decrementing Values
You can modify numeric cache values using incr() and decr():
cache.set("num", 1)
cache.incr("num") # 2
cache.incr("num", 10) # 12
cache.decr("num") # 11
cache.decr("num", 5) # 6
A ValueError is raised if the key does not exist. On backends like Memcached, these operations are atomic.
Closing the Cache Connection
cache.close()
If the backend does not implement close(), the call is a no‑op.
Cache Key Prefixing (KEY_PREFIX)
When multiple environments or servers share the same cache instance, key collisions can occur. Django solves this with the KEY_PREFIX setting.
Every cache key is automatically prefixed with this value, ensuring isolation between environments.
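The isolation effect is easy to demonstrate: two logical caches can share one physical store without colliding, as long as every key carries its prefix. This toy class (our own, not Django's) mirrors what KEY_PREFIX does transparently:

```python
# One physical store shared by several logical caches.
shared_store = {}

class PrefixedCache:
    """Toy cache whose keys are namespaced by a prefix, like KEY_PREFIX."""

    def __init__(self, prefix):
        self.prefix = prefix

    def _key(self, key):
        return "%s:%s" % (self.prefix, key)

    def set(self, key, value):
        shared_store[self._key(key)] = value

    def get(self, key, default=None):
        return shared_store.get(self._key(key), default)
```

A "staging" and a "prod" instance can now both store `user_count` in the same backend without ever reading each other's values.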
Cache Versioning
Instead of clearing the entire cache when code changes, Django allows versioning of cache keys.
Example:
cache.set("my_key", "hello world!", version=2)
cache.get("my_key") # None (default version is 1)
cache.get("my_key", version=2) # 'hello world!'
Incrementing a Key’s Version
cache.incr_version("my_key", version=2)
cache.get("my_key", version=3) # 'hello world!'
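Versioning works because the version participates in the stored key, so bumping it makes old entries unreachable without deleting them. The sketch below (the `vset`/`vget`/`incr_version` helpers are ours) models this with `(key, version)` tuples:

```python
vstore = {}

def vset(key, value, version=1):
    vstore[(key, version)] = value

def vget(key, version=1, default=None):
    return vstore.get((key, version), default)

def incr_version(key, version=1):
    # Move the value from `version` to `version + 1`; reads at the old
    # version now miss, which is how version bumps "invalidate" entries.
    vstore[(key, version + 1)] = vstore.pop((key, version))
    return version + 1
```

After the bump, lookups at the old version return nothing, while the value is still reachable at the new one.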
Custom Key Transformation (KEY_FUNCTION)
By default, Django combines prefix, version, and key like this:
key_prefix:version:key
If you want to hash keys or change the structure, you can define a custom key function and set KEY_FUNCTION to its dotted path.
Cache Key Warnings
Memcached does not allow keys longer than 250 characters or containing whitespace. To help maintain portability, Django’s other backends issue CacheKeyWarning when such keys are used.
Silencing the Warning
import warnings
from django.core.cache import CacheKeyWarning
warnings.simplefilter("ignore", CacheKeyWarning)
Custom Key Validation
You can subclass a backend and override validate_key():
from django.core.cache.backends.locmem import LocMemCache
class CustomLocMemCache(LocMemCache):
def validate_key(self, key):
...
Conclusion
Django’s low‑level cache API provides powerful tools for managing cached data with precision. With multi‑key operations, expiration control, atomic increments, versioning, and custom key handling, you can build highly optimized and maintainable caching strategies for complex applications.
Asynchronous Cache Support
Django is gradually introducing asynchronous support for cache backends, although full async caching is not yet available.
The BaseCache class includes asynchronous variants of all base methods, each prefixed with "a" (for example, aget(), aset(), ahas_key()).
Example
await cache.aset("num", 1)
await cache.ahas_key("num") # True
Both sync and async versions accept the same arguments. Full asynchronous caching will arrive in a future Django release.
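The a-prefixed naming convention can be mimicked with plain asyncio. This toy class simply wraps a dict and is not Django's implementation; it only illustrates how awaitable cache methods compose inside a coroutine:

```python
import asyncio

class AsyncDictCache:
    """Toy async cache mirroring the a-prefixed method naming."""

    def __init__(self):
        self._data = {}

    async def aset(self, key, value):
        self._data[key] = value

    async def aget(self, key, default=None):
        return self._data.get(key, default)

    async def ahas_key(self, key):
        return key in self._data

async def demo():
    cache = AsyncDictCache()
    await cache.aset("num", 1)
    return await cache.ahas_key("num")  # True once the key is stored
```

Running `asyncio.run(demo())` returns `True`, matching the shape of the `await cache.aset(...)` example above.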
Downstream Caching
Downstream caches are systems that cache responses before they reach your Django application. Examples include:
- HTTP caching by ISPs (not possible under HTTPS)
- Proxy caches such as Squid
- Browser-level caching
While downstream caching improves performance, it can be dangerous if pages contain user-specific or sensitive data. Incorrect caching may expose private content to other users.
Using Vary Headers
The Vary header tells caches which request headers should influence the cache key. If a page depends on cookies, language, or user-agent, you must vary on those headers.
Using vary_on_headers()
from django.views.decorators.vary import vary_on_headers
@vary_on_headers("User-Agent")
def my_view(request):
...
Multiple Headers
@vary_on_headers("User-Agent", "Cookie")
def my_view(request):
    ...
Vary on Cookie
These two are equivalent:
from django.views.decorators.vary import vary_on_cookie, vary_on_headers

@vary_on_cookie
def my_view(request):
    ...

@vary_on_headers("Cookie")
def my_view(request):
    ...
Using patch_vary_headers()
from django.utils.cache import patch_vary_headers
response = render(request, "template.html", context)
patch_vary_headers(response, ["Cookie"])
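The merge semantics behind patching a Vary header can be sketched in plain Python: new header names are appended only if not already present, compared case-insensitively, since HTTP header names are case-insensitive. The `merge_vary` helper below is our own illustration, not Django's internal code:

```python
def merge_vary(existing, new_headers):
    """Merge header names into an existing Vary value without duplicates."""
    current = [h.strip() for h in existing.split(",")] if existing else []
    seen = {h.lower() for h in current}
    for header in new_headers:
        if header.lower() not in seen:   # case-insensitive comparison
            current.append(header)
            seen.add(header.lower())
    return ", ".join(current)
```

So patching "Cookie" onto a response that already varies on "Accept-Encoding" yields "Accept-Encoding, Cookie", while patching it onto one that already lists "cookie" changes nothing.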
Controlling Cache with Cache-Control Headers
Cache-Control headers determine whether a response is public or private, how long it should be cached, and how caches should behave.
Marking a Response as Private
from django.views.decorators.cache import cache_control
@cache_control(private=True)
def my_view(request):
    ...
Manual Cache-Control Modification
from django.utils.cache import patch_cache_control
patch_cache_control(response, public=True)
Setting max-age
@cache_control(max_age=3600)
def my_view(request):
    ...
Other Valid Directives
- no_transform=True
- must_revalidate=True
- stale_while_revalidate=seconds
- no_cache=True
Disabling Caching Entirely
from django.views.decorators.cache import never_cache
@never_cache
def my_view(request):
    ...
Ordering of Caching Middleware
When using Django’s caching middleware, ordering is critical.
Correct Order
MIDDLEWARE = [
    "django.middleware.cache.UpdateCacheMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.cache.FetchFromCacheMiddleware",
]
Why Order Matters
- UpdateCacheMiddleware runs during the response phase and must run before middleware that modifies the Vary header.
- FetchFromCacheMiddleware runs during the request phase and must run after middleware that modifies the Vary header.
Middleware That Modifies Vary
- SessionMiddleware → adds Cookie
- GZipMiddleware → adds Accept-Encoding
- LocaleMiddleware → adds Accept-Language
Conclusion
Django provides powerful tools for managing both server-side and downstream caching. With evolving asynchronous support, Vary headers for fine-grained control, Cache-Control directives for privacy and expiration, and strict middleware ordering, you can build a highly optimized and secure caching strategy for your application.
Written & researched by Dr. Shahin Siami