Incus is a community fork of LXD, a next-generation system container and virtual machine manager. Debian packages LTS releases of Incus, beginning with trixie. A backported version of Incus is also available in bookworm-backports.
Supported versions of Incus
Incus (upstream) has the following releases:
Version   | EOL         | In Debian release
6.14      | August 2025 | experimental
6.0.4 LTS |             | trixie
LTS vs Feature releases
Debian packages the current LTS release of Incus in unstable, which will then migrate to testing and eventually become part of the next stable Debian release. Upstream supports an LTS branch for five years, including security fixes. This aligns well with Debian's development cycle and commitment to supporting packaged software.
Monthly feature releases of Incus are uploaded on a best-effort basis to experimental, and only to experimental. Upstream is very clear that each feature release is only supported until the next feature release is made. A given feature release of Incus will NEVER be uploaded to unstable, except when a new LTS branch is created by upstream.
Versions of Incus in experimental will by definition be less well tested than the version in unstable, though bug reports are always welcome.
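To see which Incus version each configured suite currently offers on your system, you can query APT (this only lists suites present in your APT sources):

```shell
# Show the installed, candidate, and per-suite available
# versions of the incus package.
apt policy incus
```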
Installation
To install Incus with support for creating both containers and virtual machines, run:
sudo apt install incus
If you only wish to create containers in Incus and don't want the additional dependencies required for running virtual machines, you can install just the base package:
sudo apt install incus-base
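With either package, you can confirm the installation by querying the client and daemon versions:

```shell
# Prints the client and server versions; a server version in the
# output confirms the daemon is running and reachable.
incus version
```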
Incus initialization
If you wish to migrate existing containers or VMs from LXD, please refer to the next section. Otherwise, after installing Incus you must perform an initial configuration:
sudo incus admin init
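For unattended setups, the initialization can also run non-interactively; for example, the --minimal flag applies a basic default configuration without prompting:

```shell
# Apply a default configuration (local storage pool and network
# bridge) without any interactive prompts.
sudo incus admin init --minimal
```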
Migrating from LXD
Incus provides a tool named lxd-to-incus, shipped in the incus-extra package, which can be used to convert an existing LXD installation into an Incus one.
For this to work properly, you should install Incus but not initialize it. Instead, make sure that both incus info and lxc info work properly, then run lxd-to-incus to migrate your data.
This process transfers the entire database and all storage from LXD to Incus, resulting in an identical setup after the migration.
For further information, please view the migration HOWTO.
This should be considered a destructive action from LXD's perspective. Afterwards the LXD daemon may not start properly, and the output of lxc list will be empty. It is recommended that the LXD packages be purged as part of running lxd-to-incus, or by hand immediately following the migration.
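Put together, a migration session might look like the following (assuming incus and incus-extra are installed, and that LXD came from the Debian lxd package rather than a snap):

```shell
# Confirm both daemons are reachable before migrating.
incus info
lxc info

# Perform the migration; the tool prompts before making changes.
sudo lxd-to-incus

# Remove the now-defunct LXD packages immediately afterwards.
sudo apt purge lxd
```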
Configuration
Incus's default bridge networking requires the dnsmasq-base package to be installed. If you chose to install Incus without its recommended packages and intend to use the default bridge, you must first install dnsmasq-base for networking to work correctly.
If you wish to allow non-root users to interact with Incus via the local Unix socket, you must add them to the incus group:
sudo usermod -aG incus <username>
Access via the incus group grants restricted access to Incus, allowing members to run most commands, except incus admin. For the vast majority of use cases, this is the preferred setup.
Alternatively, if you wish to allow non-root users full administrative access to Incus via the local Unix socket, you must add them to the incus-admin group:
sudo usermod -aG incus-admin <username>
From the upstream documentation, be aware that local access to Incus through the Unix socket via the incus-admin group always grants full access to Incus. This includes the ability to attach file system paths or devices to any instance as well as tweak the security features on any instance. Therefore, you should only give access to users who would be trusted with root access to the host.
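After adding a user to either group, the change only takes effect at their next login. You can verify membership and socket access like so (alice is a placeholder username):

```shell
# List the groups the user belongs to; "incus" or "incus-admin"
# should appear after a fresh login.
id -nG alice

# As that user, any simple command confirms socket access works.
incus list
```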
Init scripts
systemd, SysV, and runit init scripts are provided, but the SysV and runit scripts aren't as well-tested as the systemd ones. Bug reports and patches are welcome to improve the Incus experience on non-systemd installs.
Non-systemd required configuration
When installing Incus on a non-systemd host, you must ensure that a cgroup v2 hierarchy is mounted prior to starting Incus. One possible way to do this is to add a line like the following to your /etc/fstab:
cgroup2 /sys/fs/cgroup cgroup2 rw,nosuid,nodev,noexec 0 0
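To mount the hierarchy immediately without rebooting, you can run the equivalent mount command by hand and then confirm the filesystem type:

```shell
# Mount a cgroup v2 hierarchy matching the fstab entry above.
sudo mount -t cgroup2 -o rw,nosuid,nodev,noexec cgroup2 /sys/fs/cgroup

# Should print "cgroup2fs" if the mount succeeded.
stat -fc %T /sys/fs/cgroup
```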
Additionally, the "incus-user" service is not automatically started after install, so that the local administrator can first run incus admin init. After performing initialization, you can then enable and start the "incus-user" service. This is necessary because, if "incus-user" starts before Incus has been configured, it creates a minimal empty profile that then conflicts with running incus admin init. (Hosts running systemd avoid this by relying on socket-based activation for the "incus" and "incus-user" services.)
Logging
By default, the Incus daemon will log to /var/log/incus/incus.log. If you wish to log instead to the host system's configured rsyslog daemon, set USE_SYSLOG=yes in /etc/default/incus and restart the service.
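For example, switching to syslog might look like this on a systemd host:

```shell
# Enable syslog logging for the Incus daemon.
echo 'USE_SYSLOG=yes' | sudo tee -a /etc/default/incus

# Restart so the daemon picks up the new setting.
sudo systemctl restart incus
```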
When using the runit scripts, Incus logs will always be written to /var/log/runit/incus/current.
Storage backends
Incus supports several storage backends. When installing, Incus will suggest the necessary packages to enable all storage backends, but in brief:
btrfs requires the btrfs-progs package
ceph/cephfs require the ceph-common package
lvm requires the lvm2 package
zfs requires the zfsutils-linux package
After installing one or more of those additional packages, be sure to restart the Incus service so it picks up the additional storage backend(s).
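For example, to enable the ZFS backend and then create a storage pool using it (the pool name tank is arbitrary):

```shell
# Install the ZFS userspace tools, then restart Incus so it
# detects the newly available backend.
sudo apt install zfsutils-linux
sudo systemctl restart incus

# Create a ZFS-backed storage pool.
incus storage create tank zfs
```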
Virtual machines
Incus can optionally create virtual machine instances using QEMU. Because Incus configures QEMU to use host CPU passthrough, emulating an architecture different from the host system is not supported. For example, an amd64 host can create both amd64 and i386 virtual machines, but cannot create an arm64 virtual machine.
Currently, only amd64, arm64, ppc64el, and s390x are supported as host architectures for running virtual machines.
To enable this capability, on the host system ensure that the "incus" package is installed and then restart the Incus service. You will now be able to create virtual machine instances by passing the --vm flag in your creation command.
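For example, to launch a Debian virtual machine from the images: remote (image name assumed to be available there):

```shell
# Launch a VM instance instead of a container.
incus launch images:debian/12 deb-vm --vm

# The TYPE column should show VIRTUAL-MACHINE.
incus list deb-vm
```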
RHEL8/9 (and derivatives)
RHEL8/9 do not ship the 9p kernel module, which is used to dynamically mount instance-specific agent configuration and the incus-agent binary into VMs. To work around this, Incus 0.5.1 added a new agent drive, providing those files through what looks like a CD-ROM drive rather than being retrieved over a networked filesystem.
For example, to run CentOS 9-Stream, one now needs to do:
incus create images:centos/9-Stream centos --vm
incus config device add centos agent disk source=agent:config
incus start centos
For further details, please see the Incus 0.5.1 release notes.
Windows VMs
When running a Windows VM, it is recommended to set the VM's image.os configuration field to include the string "Windows". This allows Incus to hide some unsupported devices and configure an appropriate incus-agent binary for the VM.
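A hypothetical sequence for creating such a VM (the instance name win11 is arbitrary, and you would still need to attach your own installation media):

```shell
# Create an empty VM and mark its OS as Windows so Incus hides
# unsupported devices and selects a matching incus-agent binary.
incus init win11 --empty --vm -c image.os=Windows
```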
Known issues
- Running Incus and Docker on the same host can cause connectivity issues. A common reason for these issues is that Docker sets the FORWARD policy to DROP, which prevents Incus from forwarding traffic and thus causes the instances to lose network connectivity. There are two different ways you can fix this:
As outlined in bug 865975, message 91, you can add net.ipv4.ip_forward=1 to /etc/sysctl.conf, which enables IP forwarding before Docker starts. Docker then won't set the FORWARD chain's policy to DROP when it starts up.
Alternatively, you can use the following command to explicitly allow network traffic from your network bridge to your external network interface: iptables -I DOCKER-USER -i <network_bridge> -o <external_interface> -j ACCEPT (from the upstream Incus documentation)
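The first workaround can be applied as follows; the sysctl setting also needs to be loaded into the running kernel if you don't want to reboot:

```shell
# Persist IPv4 forwarding across reboots...
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf

# ...and apply it to the running kernel immediately.
sudo sysctl -p
```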
If the apparmor package is not installed on the host system, containers will fail to start unless their configuration is modified to include lxc.apparmor.profile=unconfined; this has been reported upstream at https://github.com/lxc/lxc/issues/4150.