Channel Layer Types

Multiple choices of backend are available to fill different tradeoffs of complexity, throughput, and scalability. You can also write your own backend if you wish; the spec they conform to is called ASGI. Any ASGI-compliant channel layer can be used.

Redis

The Redis layer is the recommended backend to run Channels with. It supports both high throughput on a single Redis server and the ability to run against a set of Redis servers in a sharded mode.

To use the Redis layer, simply install it from PyPI (it lives in a separate package because we didn’t want to force a dependency on redis-py for the main install):

pip install -U asgi_redis

By default, it will attempt to connect to a Redis server on localhost:6379, but you can override this with the hosts key in its config:

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "ROUTING": "???",
        "CONFIG": {
            "hosts": [("redis-channel-1", 6379), ("redis-channel-2", 6379)],
        },
    },
}

Consider installing the hiredis library to improve the layer's performance:

pip install hiredis

It will be used automatically if it’s installed.

Sharding

The sharding model is based on consistent hashing - in particular, response channels are hashed and used to pick a single Redis server that both the interface server and the worker will use.
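
To make the idea concrete, here is a simplified sketch of mapping a response channel name onto one of the configured hosts. It uses a plain modulo hash rather than asgi_redis's actual consistent-hashing implementation, and the host list is just the example from the config above:

import binascii

# Example shard list, matching the "hosts" setting shown earlier.
HOSTS = [("redis-channel-1", 6379), ("redis-channel-2", 6379)]

def shard_for_channel(channel_name, hosts=HOSTS):
    # Hash the channel name and map it to a single host deterministically,
    # so the interface server and the worker both pick the same server.
    index = binascii.crc32(channel_name.encode("utf8")) % len(hosts)
    return hosts[index]

# Both processes agree on the shard for a given response channel.
print(shard_for_channel("http.response!a1b2c3d4"))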

For normal channels, since any worker can service any channel request, messages are simply distributed randomly among all possible servers, and workers will pick a single server to listen to. Note that if you run more Redis servers than workers, it’s very likely that some servers will not have workers listening to them; we recommend you always have at least ten workers for each Redis server to ensure good distribution. Workers will, however, change server periodically (every five seconds or so) so queued messages should eventually get a response.

Note that if you change the set of sharding servers, then you will need to restart all interface servers and workers with the new set before anything works, and any in-flight messages will be lost (even with persistence, some will); the consistent hashing model relies on all running clients having the same settings. Any misconfigured interface server or worker will drop some or all messages.

RabbitMQ

The RabbitMQ layer is comparable to Redis in terms of latency and throughput. It can work with a single RabbitMQ node or with an Erlang cluster.

You need to install the RabbitMQ layer package from PyPI:

pip install -U asgi_rabbitmq

To use it, you also need to provide the URL of a virtual host on which the connecting user has been granted permissions:

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_rabbitmq.RabbitmqChannelLayer",
        "ROUTING": "???",
        "CONFIG": {
            "url": "amqp://guest:guest@rabbitmq:5672/%2F",
        },
    },
}

This layer has its own complete documentation.

IPC

The IPC backend uses POSIX shared memory segments and semaphores in order to allow different processes on the same machine to communicate with each other.

Because it uses shared memory, it does not require any additional servers to be running, and it is faster than any network-based channel layer. However, it can only communicate between processes on the same machine.
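
Assuming the separately packaged asgi_ipc backend (installed with pip install asgi_ipc) and its prefix option, a settings entry follows the same pattern as the other backends; the "mysite" prefix here is only illustrative:

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_ipc.IPCChannelLayer",
        "ROUTING": "???",
        "CONFIG": {
            "prefix": "mysite",
        },
    },
}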

Warning

The IPC layer only communicates between processes on the same machine, and while you might initially be tempted to run a cluster of machines all with their own IPC-based set of processes, this will result in groups not working properly; events sent to a group will only go to those channels that joined the group on the same machine. This backend is for single-machine deployments only.

In-memory

The in-memory layer is only useful when running the protocol server and the worker server in a single process; the most common case of this is runserver, where a server thread, this channel layer, and a worker thread all co-exist inside the same Python process.

Its path is asgiref.inmemory.ChannelLayer. If you try to use this channel layer with runworker, it will exit, as it does not support cross-process communication.
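
For completeness, a matching settings entry would look like this (no CONFIG options are needed):

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgiref.inmemory.ChannelLayer",
        "ROUTING": "???",
    },
}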

Writing Custom Channel Layers

The interface that channel layers present to Django, and to other software that communicates over them, is codified in a specification called ASGI.

Any channel layer that conforms to the ASGI spec can be used by Django; just set BACKEND to the class to instantiate and CONFIG to a dict of keyword arguments to initialize the class with.
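
As a rough, incomplete sketch of what such a class can look like (the method names and semantics must be checked against the ASGI spec itself; the in-process dict storage here is purely illustrative and not suitable for real use):

import random
import string
from collections import defaultdict, deque

class DictChannelLayer(object):
    # Toy single-process channel layer; illustrative only, not ASGI-complete.

    # Optional extensions this layer claims to support.
    extensions = ["groups"]

    def __init__(self, expiry=60, **kwargs):
        # CONFIG keyword arguments from settings arrive here.
        self.expiry = expiry
        self.channels = defaultdict(deque)
        self.groups = defaultdict(set)

    def send(self, channel, message):
        # Queue a message dict onto a named channel.
        self.channels[channel].append(message)

    def receive(self, channels, block=False):
        # Return (channel, message) for the first channel with a waiting
        # message, or (None, None) if nothing is available.
        for channel in channels:
            if self.channels[channel]:
                return channel, self.channels[channel].popleft()
        return None, None

    def new_channel(self, pattern):
        # Append random characters to the pattern to make a unique name.
        suffix = "".join(random.choice(string.ascii_letters) for _ in range(12))
        return pattern + suffix

    def group_add(self, group, channel):
        self.groups[group].add(channel)

    def group_discard(self, group, channel):
        self.groups[group].discard(channel)

    def send_group(self, group, message):
        # Deliver a copy of the message to every channel in the group.
        for channel in self.groups[group]:
            self.send(channel, message)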