An external GPU (eGPU) is a standalone graphics card together with a power supply unit (PSU) and some kind of adapter (in the form of a dock or an enclosure) that is connected to a host computer, usually a laptop, using a single flexible cable. This page gives a high-level overview of the issues and benefits of connecting various brands of GPUs via various types of connections to a Debian system. For OS-agnostic in-depth reviews, analyses, guides, build examples etc., refer to the https://egpu.io site.

Discuss the content of this page on egpu.io forum.


Introduction

Due to space and power-consumption limitations, many laptop models are equipped only with pretty modest iGPUs (GPUs integrated on a single chip with a CPU) with limited computing power. This may be a limiting factor in tasks such as video processing, gaming, local LLMs and others. On the other hand, desktop systems that are able to accommodate powerful dGPUs (discrete or dedicated GPU chips or whole separate cards) are not portable, which limits work flexibility, including for tasks that could be performed without a powerful GPU. An eGPU is a sort of middle ground between these two: it allows you to perform "light" tasks utilizing your laptop's portability, while turning the laptop into a machine with almost desktop capabilities by connecting the eGPU when needed.

Most eGPU solutions are separately sold components (GPU, PSU, adapter), requiring users to put them together themselves, but ready-to-use 3-in-1 solutions are also sometimes available.

Compared to the same card used as a dGPU in a desktop setup, a given eGPU will have exactly the same computing power. However, depending on the type of connection used, the speed of communication between the CPU and the GPU (VRAM access) will usually be limited at least to some extent: see the egpu.io speed measurements table. The impact of this limitation on overall performance heavily depends on the type of task being performed. Pretty extensive benchmarks have recently been performed by Puget Systems. Also refer to the benchmarks gathered in the "Builds" section of the egpu.io site.


Connection types and interfaces

As of 2025, on consumer computers "low-level" peripheral devices such as GPUs are connected to the rest of the system using a PCIe bus. A motherboard and a CPU have a certain number of physical PCIe lanes available, and each soldered device and each PCIe slot or connector has a few of these lanes assigned, usually exclusively, to itself (some desktop motherboards are capable of reassigning lanes between PCIe slots depending on the specifics of connected devices, but this is rather an exception).
The transfer capacity of a single lane depends on the PCIe version and roughly doubles with each subsequent version, so for example 8 PCIe-3.0 lanes (x8 gen3) provide roughly the same transfer as 4 PCIe-4.0 lanes (x4 gen4), which is slightly less than 64Gbps.
Standard PCIe slots on desktop motherboards provide x1, x4, x8, or x16 lanes, and the more lanes they provide, the longer their minimum physical length needs to be. PCIe slots are compatible between sizes: it is possible to insert a card with a smaller interface into a bigger slot, for example an x8 card into an x16 slot, and it will work normally at the speed provided by x8 lanes. If a slot has an open end, it is also possible to insert a card with a bigger interface into it, and it will work at the speed provided by the number of lanes of the slot. A similar solution to open-ended slots are bigger physical slots with only a fraction of their lanes connected, for example a slot of x16 physical size with only x8 lanes electrically connected.
The PCIe interface is also fully backwards compatible between versions, providing the overall speed of the slowest component, so it is possible to insert a gen3 device into a gen4 slot or vice versa and everything will work fine, but at gen3 speed.
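As a rough illustration, usable per-link bandwidth can be estimated as lanes × transfer rate (GT/s) × encoding efficiency (128b/130b for gen3 and newer). A minimal sketch; the numbers are approximations, not measured values:

```shell
# Estimate usable PCIe bandwidth in Gbps: lanes * GT/s * 128/130 (gen3+ encoding).
# Illustrative only; real-world throughput is further reduced by protocol overhead.
pcie_bw() { awk -v lanes="$1" -v gts="$2" 'BEGIN { printf "%.1f\n", lanes * gts * 128 / 130 }'; }

pcie_bw 8 8    # x8 gen3
pcie_bw 4 16   # x4 gen4
pcie_bw 4 8    # x4 gen3 (a common eGPU link width)
```

Both x8 gen3 and x4 gen4 come out at about 63 Gbps, matching the "slightly less than 64Gbps" figure above.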

As of 2025, most (if not all) consumer GPU cards use the standard x16 PCIe slot interface. Therefore that's the physical size of the slot provided on the "GPU end" of all types of eGPU adapters described here, although only a fraction of the lanes is actually connected in most cases (most commonly x4). The highest PCIe version supported by cards and adapters varies between specific models and depends mostly on how old the model is.

A wide variety of interface types, described in the subsections below, is used on the "host end" of eGPU adapters. Ultimately, however, it is either a direct connector to the host computer's PCIe lanes or some other interface that is capable of tunneling the PCIe protocol. The list below is far from comprehensive and describes only the currently most common types.


Fixed-cable connections

This connection type uses interfaces that are meant to be connected and disconnected very rarely, usually only during the initial setup and hardware upgrades or maintenance. As such, it only makes sense in systems used at fixed locations, which in many cases defeats the original purpose of portability. Nevertheless, people sometimes use spare laptops as their desktops, in which case extending their capabilities with a fixed-cable eGPU makes perfect sense.

This connection type usually provides the highest PCIe signal integrity: something that some pluggable connection types suffer from.

From an OS perspective, GPUs connected this way are virtually indistinguishable from standard dGPUs, and as such no additional OS-level (or higher) software setup is needed compared to dGPU-based multi-GPU setups. Depending on the host computer motherboard model, however, BIOS/UEFI settings may need to be modified, and some laptop BIOSes require their iGPUs to be disabled when a dGPU (or a fixed-cable eGPU) is present.

M.2

M.2 is a compact connector that, among other uses, serves as an interface to PCIe buses in contemporary laptops and desktops. Depending on the "keying", it provides up to x4 PCIe lanes: most usually "two times x1" (A, E and A+E keys) or x4 (M key).

Example adapters:

Standard PCIe slot

Available only on desktop and server motherboards. Such adapters may be useful if there are not enough x16 slots available and the smaller ones are not open-ended. Other reasons may be insufficient physical space to accommodate a GPU card inside the case, or a case without enough cooling capacity.

Example adapters:


Pluggable connections

This type refers to interfaces that may be easily connected and disconnected, but only when the system is powered off. From an OS perspective, GPUs connected this way are also seen as dGPUs.

OCuLink

OCuLink is an interface designed specifically to expose PCIe lanes as an external port. It uses an SFF8612 socket + SFF8611 plug as its connector, which comes in two sizes: 4i and 8i, exposing x4 and x8 lanes respectively. Currently very few consumer devices are natively equipped with an OCuLink port; however, M.2 M-key to OCuLink 4i adapters are easily available and usually quite cheap.

The OCuLink spec does not define any norms for maximum latency or signal noise level, so some cables and adapters that are formally "OCuLink compliant" degrade signal integrity beyond what many host motherboards are able to tolerate. This is especially true when M.2 to OCuLink adapters are used, as M.2 M-key slots are usually intended for NVMe drives, which generally meet very strict signal integrity standards. To address this problem, some OCuLink eGPU adapters are equipped with PCIe redrivers that improve signal integrity. Another option is to use an M.2-to-OCuLink adapter with redrivers, such as the Minerva DP6303 or ADT-F4Q. It is usually sufficient to have redrivers on one side only, but in some extreme cases it may be necessary to use them on both adapters.

The PCIe spec, and as a consequence also OCuLink, defines hot-plugging as an optional feature, but it requires special hardware support on both sides, so in the general case OCuLink does not support hot-plugging. As of February 2025, the only consumer computers supporting OCuLink hot-plugging are Lenovo laptops with TGX ports.

Many new OCuLink solutions are currently under active development: see this dedicated egpu.io thread to stay up to date.

Example adapters:


Hot-pluggable connections

This connection type allows connecting and disconnecting an eGPU while the system is running, triggering OS-level mechanisms to handle such events. While plugging in is mostly PnP, unplugging currently requires first terminating all processes using the eGPU. Failure to do so usually results in software crashes at various layers. Depending on the specific stack, these may range from fatal (kernel panic), through loss of current data, to fully recoverable. See the "Software support for hot-plugging" section for details.

USB-C based (Thunderbolt 3+ and USB 4+)

This interface family uses PCIe tunneling over USB. On x86_64 computers this was popularized on Intel-based machines with Thunderbolt-3 controllers and was later included in USB-4 and adopted by AMD. Earlier versions of USB are not capable of PCIe tunneling, even when using a USB-C connector. All adapters and ports within this family are usually compatible with each other, but the transfer capacity may vary from about 12 to 30Gbps depending on the exact mix: refer to the perf table on the egpu.io site. If quality cables are used (active and/or with good shielding), this interface family provides very good PCIe signal integrity. As of 2025, almost every contemporary laptop model is equipped with either a Thunderbolt or a USB-4 port, making this interface very ubiquitous.
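On Linux, Thunderbolt/USB4 controllers and attached devices are exposed under the thunderbolt sysfs bus (managed by the thunderbolt kernel module). A minimal sketch of a presence check, making no assumptions about the specific hardware:

```shell
# Count Thunderbolt/USB4 controllers and devices visible to the kernel.
# Prints 0 on systems without a thunderbolt bus (or with the module not loaded).
tb_count() {
    if [ -d /sys/bus/thunderbolt/devices ]; then
        ls /sys/bus/thunderbolt/devices | wc -l
    else
        echo 0
    fi
}
tb_count
```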

Example adapters:

TGX

The Thinkbook Graphics eXtension (TGX) is a Lenovo-designed interface based on OCuLink 4i that supports hot-plugging and includes redrivers. As such, it is to some extent interoperable with OCuLink 4i, and most OCuLink 4i docks that include redrivers are also TGX compliant. As of February 2025, Thinkbooks equipped with a TGX port are sold only in China.

ExpressCard

The ExpressCard interface was popular on laptops during the first decade of this century. It exposes a single x1 gen2 lane, providing about 3.1Gbps of transfer capacity in practice.

Example adapters:


Software support for hot-plugging

Common

PCIe tunneling over Thunderbolt/USB4

The bolt package should be installed to manage tunneling.

Thunderbolt 3 authorization

Depending on BIOS/UEFI settings, when connecting an eGPU via a Thunderbolt-3 port, device authorization may be required first: refer to the boltctl manpage from the bolt package for details.

PCIe bus IDs

The subsequent vendor-specific sections often refer to "PCIe bus IDs" of GPUs: these may be obtained from the output of the lspci command. For example, here is the lspci output on a laptop with an Intel iGPU and an Nvidia eGPU:

$ lspci |grep -i -E 'display|vga'
00:02.0 VGA compatible controller: Intel Corporation Iris Plus Graphics 640 (rev 06)
0b:00.0 VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090] (rev a1)

In this case the bus ID of the eGPU consists of the hex digits in the first column of the last row: 0b:00.0. Note: the X config uses decimal format when specifying a BusID, so the numbers need to be converted appropriately (in this case it would be PCI:11:0:0 for the eGPU).
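The hex-to-decimal conversion can be done with printf; a minimal sketch using the example ID above (the helper name is illustrative):

```shell
# Convert an lspci-style hex "bus:device.function" address
# to the decimal "PCI:bus:device:function" form used in X configs.
to_xorg_busid() {
    id="$1"
    bus="${id%%:*}"                              # hex bus, e.g. "0b"
    dev=$(echo "$id" | cut -d: -f2 | cut -d. -f1) # hex device, e.g. "00"
    fn="${id##*.}"                               # hex function, e.g. "0"
    printf 'PCI:%d:%d:%d\n' "0x$bus" "0x$dev" "0x$fn"
}

to_xorg_busid "0b:00.0"   # -> PCI:11:0:0
```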

Running X11 on an eGPU

When running X on an eGPU (see the subsequent vendor-specific subsections), it is possible to also utilize monitors connected to the other GPUs of the system (like a laptop's built-in panel connected to its iGPU). Such additional GPUs need to be listed in the Screen section as a GPUDevice, for example in an /etc/X11/xorg.conf.d/80-screen.conf file alongside a vendor-specific file defining the "eGPU" config:

Section "Device"
        Identifier "Intel iGPU"
        Driver "modesetting"
        BusID "PCI:0:2:0"  ## replace with values from lspci
EndSection

Section "Screen"
        Identifier "dual GPU"
        Device "eGPU"  ## primary GPU performing the rendering, defined in a separate file
        GPUDevice "Intel iGPU"  ## secondary GPU providing additional monitors
EndSection

Note: running X on an eGPU requires terminating the session before unplugging the eGPU, so it's quite inconvenient in many cases. Failure to do so may cause serious kernel module crashes, often requiring a reboot: see the subsequent vendor-specific subsections.


Nvidia with proprietary driver

This section refers to NvidiaGraphicsDrivers.

As per the Nvidia driver README file, hot-unplugging an Nvidia eGPU while in use is generally not supported and will cause various levels of crashes. The nvidia-smi command may be used to obtain the list of processes currently using Nvidia GPUs.

Kernel module status

Running X11 on an eGPU

As described in the above README file, the X11 nvidia driver by default refuses to start on hot-pluggable eGPUs to avoid crashing when they are unplugged. To force starting on an eGPU, it must be explicitly pointed to by its PCI BusID and the AllowExternalGpus option needs to be set to true (in addition to AllowEmptyInitialConfiguration, useful for multi-GPU setups involving Nvidia). For example, one could have an /etc/X11/xorg.conf.d/20-egpu-device.conf file like this:

Section "Device"
        Identifier "eGPU"
        Driver "nvidia"
        Option "AllowEmptyInitialConfiguration" "true"
        Option "AllowExternalGpus" "true"
        BusID "PCI:11:0:0"  ## replace with values from lspci
EndSection

Running X11 on an iGPU and using an eGPU for offloading

For general info on offloading see the "PRIME" section of the "Optimus" page.

As of driver 570, even if X11 is initially started only on the iGPU/dGPU, when an eGPU is connected the Xorg process will "attach itself" to that eGPU to allow extending the desktop to the connected monitors. Unfortunately "detaching" from eGPUs is not implemented yet, so in that case it is necessary to close the whole X session before unplugging the eGPU, which defeats the original purpose of running it on the iGPU. The workaround is to prevent attaching to eGPUs by setting the AutoAddGPU option to false in the ServerFlags section, for example in an additional /etc/X11/xorg.conf.d/10-server-flags.conf file:

Section "ServerFlags"
        Option "AutoAddGPU" "false"
EndSection

Wayland

ToDo: gather and add info


Nvidia with Nouveau driver

ToDo: gather and add info


AMD

Before unplugging, all processes using the eGPU must be terminated, and the amdgpu kernel driver needs to be removed or unbound from the eGPU by writing its full, zero-padded PCIe bus ID prefixed with the 0000: domain (assuming a single PCIe controller) to the /sys/bus/pci/drivers/amdgpu/unbind file:

echo "0000:07:00.0" >/sys/bus/pci/drivers/amdgpu/unbind

Unbinding is preferred over removing the module, especially when there are multiple AMD GPUs in the system (for example if the iGPU is also from AMD).
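The unbind step can be wrapped in a small guard so that nothing is written if the device is not actually bound to amdgpu. A hedged sketch (the helper name and address are illustrative; run as root):

```shell
# Unbind amdgpu from a device given an lspci-style address like "07:00.0".
# Assumes PCIe domain 0000; refuses to act if no such device is bound to amdgpu.
unbind_amdgpu() {
    dev="0000:$1"
    if [ -e "/sys/bus/pci/drivers/amdgpu/$dev" ]; then
        echo "$dev" > /sys/bus/pci/drivers/amdgpu/unbind
    else
        echo "no device bound to amdgpu at $dev" >&2
        return 1
    fi
}

# unbind_amdgpu "07:00.0"
```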

Running X11 on an eGPU

If no config is provided, by default X will choose to run on a built-in GPU. To run X on an eGPU, it is sufficient to define a Device section for it and point to it by its PCI BusID. For example, one could have an /etc/X11/xorg.conf.d/20-egpu-device.conf file like this:

Section "Device"
        Identifier "eGPU"
        Driver "amdgpu"
        BusID "PCI:7:0:0"  ## replace with values from lspci
EndSection

Wayland

ToDo: gather and add info


Intel

This section refers to Xe cards using xe kernel module.

Running X11 on an eGPU

If no config is provided, by default X will choose to run on a built-in GPU. To run X on an eGPU, it is sufficient to define a Device section for it and point to it by its PCI BusID. For example, one could have an /etc/X11/xorg.conf.d/20-egpu-device.conf file like this:

Section "Device"
        Identifier "eGPU"
        Driver "modesetting"
        BusID "PCI:48:0:0"  ## replace with values from lspci
EndSection

Wayland

ToDo: gather and add info


Choosing and connecting a PSU

Most of the eGPU adapters available on the market come without a PSU, which allows for greater flexibility but requires an educated decision. PSUs are relatively simple devices compared to CPUs or GPUs, but their role in a system is no less critical, so the choice should be planned accordingly:

When connecting a PSU to a GPU (or a PSU to anything else, for that matter), it is absolutely critical to make sure that the connector is fully plugged in and its "accidental unplug prevention latch" is locked. Failure to do so will likely lead to the GPU catching fire.


CategoryLaptopComputer CategoryHardware