<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
    xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>iGPU — LowEndSpirit DEV</title>
        <link>https://dev.lowendspirit.com/index.php?p=/</link>
        <pubDate>Wed, 08 Apr 2026 20:21:02 +0000</pubDate>
        <language>en</language>
            <description>iGPU — LowEndSpirit DEV</description>
    <atom:link href="https://dev.lowendspirit.com/index.php?p=/discussions/tagged/igpu/feed.rss" rel="self" type="application/rss+xml"/>
    <item>
        <title>Intel iGPU VAAPI in Unprivileged LXC 4.0 Container</title>
        <link>https://dev.lowendspirit.com/index.php?p=/discussion/3782/intel-igpu-vaapi-in-unprivileged-lxc-4-0-container</link>
        <pubDate>Wed, 16 Feb 2022 05:35:57 +0000</pubDate>
        <category>Technical</category>
        <dc:creator>yoursunny</dc:creator>
        <guid isPermaLink="false">3782@/index.php?p=/discussions</guid>
        <description><![CDATA[<blockquote><div>
  <p>This article is originally published on yoursunny.com blog <a href="https://yoursunny.com/t/2022/lxc-vaapi/" rel="nofollow">https://yoursunny.com/t/2022/lxc-vaapi/</a></p>
</div></blockquote>

<h2 data-id="background">Background</h2>

<p>I recently bought a DELL OptiPlex 7040 Micro desktop computer and wanted to operate it as a dedicated server.<br />
I installed Debian 11 on the computer, and placed it into the closet to be accessed over SSH only.<br />
To keep the host machine stable, I decided to run most workloads in <a rel="nofollow" href="https://wiki.debian.org/LXC">LXC</a> containers, which are said to be Fast-as-Metal.<br />
Since I <a rel="nofollow" href="https://yoursunny.com/t/2021/NDN-video-ndn6/">operate my own video streaming website</a>, I have an LXC container for encoding the videos.</p>

<p>The computer comes with an <a rel="nofollow" href="https://ark.intel.com/content/www/us/en/ark/products/88183/intel-core-i56500t-processor-6m-cache-up-to-3-10-ghz.html">Intel Core i5-6500T</a> processor.<br />
It has 4 physical cores running at a 2.50GHz base frequency, and belongs to the Skylake family.<br />
FFmpeg happily encodes my videos on this CPU.</p>

<p>As I read through the processor specification, I noticed this section:</p>

<ul><li><p>Processor Graphics: Intel® HD Graphics 530</p>

<ul><li>Processor Graphics indicates graphics processing circuitry integrated into the processor, providing the graphics, compute, media, and display capabilities.</li>
</ul></li>
<li><p>Intel® Quick Sync Video: Yes</p>

<ul><li>Intel® Quick Sync Video delivers fast conversion of video for portable media players, online sharing, and video editing and authoring.</li>
</ul></li>
</ul><p>It seems that I have a GPU!<br />
Can I make use of this Intel GPU and accelerate video encoding workloads?</p>

<h2 data-id="story">Story</h2>

<blockquote><div>
  <p>If you just want the solution, skip to the <strong>TL;DR Steps to Enable VAAPI in LXC</strong> section at the end.</p>
</div></blockquote>

<h3 data-id="testing-vaapi-with-docker">Testing VAAPI with Docker</h3>

<p>I read FFmpeg <a rel="nofollow" href="https://trac.ffmpeg.org/wiki/HWAccelIntro">HWAccelIntro</a> and <a rel="nofollow" href="https://trac.ffmpeg.org/wiki/Hardware/QuickSync">QuickSync</a> pages, and learned:</p>

<ul><li>FFmpeg supports hardware acceleration on various GPU brands including Intel, AMD, and NVIDIA.</li>
<li>Hardware encoders typically generate outputs of significantly lower quality than good software encoders, but are generally faster and do not use much CPU resource.</li>
<li><p>On Linux, FFmpeg may access Intel GPU through libmfx, OpenCL, or VAAPI.<br />
Among these, encoding is possible with libmfx or VAAPI.</p></li>
<li><p>Each generation of Intel processors has different video encoding capabilities.<br />
For the Skylake family that I have, the integrated GPU can encode to H.264, MPEG-2, VP8, and H.265 formats.</p></li>
</ul><p>I decided to experiment with VAAPI, because it has the shortest name 🤪.<br />
I quickly found <a rel="nofollow" href="https://hub.docker.com/r/jrottenberg/ffmpeg">jrottenberg/ffmpeg</a> Docker image.<br />
Following the example commands on <a rel="nofollow" href="https://trac.ffmpeg.org/wiki/Hardware/VAAPI">FFmpeg VAAPI</a> page, I verified that my GPU can successfully encode videos to H264 format:</p>

<pre spellcheck="false" tabindex="0">docker run \
    --device /dev/dri \
    -v $(pwd):/data -w /data \
  jrottenberg/ffmpeg:4.1-vaapi \
    -loglevel info -stats \
    -vaapi_device /dev/dri/renderD128 \
    -i input.mov \
    -vf 'hwupload,scale_vaapi=w=640:h=480:format=nv12' \
    -preset ultrafast \
    -c:v h264_vaapi \
    -f mp4 output.mp4
</pre>

<h3 data-id="the-renderd128-device">The renderD128 Device</h3>

<p>The above <code spellcheck="false" tabindex="0">docker run</code> command tells me that the <code spellcheck="false" tabindex="0">/dev/dri/renderD128</code> device is likely the key to getting the Intel GPU to work in an LXC container.<br />
It is a character device with major number 226 and minor number 128.</p>

<pre spellcheck="false" tabindex="0">sunny@sunnyD:~$ ls -l /dev/dri
total 0
drwxr-xr-x 2 root root         80 Jan 22 11:04 by-path
crw-rw---- 1 root video  226,   0 Jan 22 11:04 card0
crw-rw---- 1 root render 226, 128 Jan 22 11:04 renderD128
</pre>
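<p>As a quick sanity check of those numbers (a sketch assuming GNU coreutils <code spellcheck="false" tabindex="0">stat</code> on the host), <code spellcheck="false" tabindex="0">stat</code> can print the same major/minor pair, though its <code spellcheck="false" tabindex="0">%t</code>/<code spellcheck="false" tabindex="0">%T</code> specifiers emit the numbers in hex:</p>

```shell
# On the host: %t = device major in hex, %T = device minor in hex.
# Guarded so the line is a no-op on machines without the render node.
test -c /dev/dri/renderD128 && stat -c '%t:%T' /dev/dri/renderD128 || true  # "e2:80" on this machine
# Convert the hex pair back to the familiar decimal device numbers:
printf 'major=%d minor=%d\n' 0xe2 0x80
```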

<p>Inside the container, this device does not exist.<br />
Naively, I tried <code spellcheck="false" tabindex="0">mknod</code>, but it returned an "operation not permitted" error:</p>

<pre spellcheck="false" tabindex="0">ubuntu@video:~$ ls -l /dev/dri
ls: cannot access '/dev/dri': No such file or directory

ubuntu@video:~$ sudo mkdir /dev/dri

ubuntu@video:~$ sudo mknod /dev/dri/renderD128 c 226 128
mknod: /dev/dri/renderD128: Operation not permitted
</pre>

<p>I searched for this problem over several weeks and found several articles on getting <a rel="nofollow" href="https://forums.plex.tv/t/pms-installation-guide-when-using-a-proxmox-5-1-lxc-container/219728">Plex</a> or <a rel="nofollow" href="https://emby.media/community/index.php?/topic/49680-howto-vaapi-transcoding-inside-lxc-container/">Emby</a> media servers to use VAAPI hardware encoding from LXC containers, but they rely on either <a rel="nofollow" href="https://forum.proxmox.com/threads/lxc-no-permission-to-use-vaapi.91536/">Proxmox</a> or <a rel="nofollow" href="https://linuxcontainers.org/lxd/">LXD</a> (unavailable on Debian), both of which differ from the plain LXC that I'm trying to use.<br />
From these articles, I gathered enough hints on what's needed:</p>

<ul><li>An LXC container cannot <code spellcheck="false" tabindex="0">mknod</code> arbitrary devices, for security reasons.</li>
<li><p>To have a device inode in an LXC container, the container config must:</p>

<ul><li>grant permission with <code spellcheck="false" tabindex="0">lxc.cgroup.devices.allow</code> directive, and</li>
<li>mount the device with the <code spellcheck="false" tabindex="0">lxc.mount.entry</code> directive.</li>
</ul></li>
<li><p>In addition to <code spellcheck="false" tabindex="0">ffmpeg</code>, it's necessary to install <code spellcheck="false" tabindex="0">vainfo i965-va-driver</code> packages (available on both Debian and Ubuntu).</p></li>
</ul><h3 data-id="nobody-nogroup">nobody:nogroup</h3>

<p>With these configs in place, the device showed up in the container, but it did not work:</p>

<pre spellcheck="false" tabindex="0">ubuntu@video:~$ ls -l /dev/dri
total 0
crw-rw---- 1 nobody nogroup 226, 128 Jan 22 16:04 renderD128
ubuntu@video:~$ vainfo
error: can't connect to X server!
error: failed to initialize display
ubuntu@video:~$ sudo vainfo
error: XDG_RUNTIME_DIR not set in the environment.
error: can't connect to X server!
error: failed to initialize display
</pre>

<p>One suspicious thing is the <code spellcheck="false" tabindex="0">nobody:nogroup</code> owner on the renderD128 device.<br />
It differs from the <code spellcheck="false" tabindex="0">root:render</code> owner as seen on the host machine.<br />
Naively, I tried <code spellcheck="false" tabindex="0">chown</code>, but it returned an "invalid argument" error and had no effect:</p>

<pre spellcheck="false" tabindex="0">ubuntu@video:~$ sudo chown root:render /dev/dri/renderD128
chown: changing ownership of '/dev/dri/renderD128': Invalid argument

ubuntu@video:~$ ls -l /dev/dri
total 0
crw-rw---- 1 nobody nogroup 226, 128 Jan 22 16:04 renderD128
</pre>

<p><a rel="nofollow" href="https://www.reddit.com/r/Proxmox/comments/ii3u2c/comment/g36l72j/">A Reddit post</a> claims that running <code spellcheck="false" tabindex="0">chmod 0666 /dev/dri/renderD128</code> from the host machine would solve this problem.<br />
I gave it a try and it was indeed effective.<br />
However, I know this isn't a <em>proper</em> solution because you are not supposed to change permission on device inodes.<br />
So I continued searching.</p>

<h3 data-id="idmap">idmap</h3>

<p>The last piece of the puzzle lies in <a rel="nofollow" href="https://man7.org/linux/man-pages/man7/user_namespaces.7.html">user and group ID mappings</a>.<br />
In an unprivileged LXC container, user and group IDs are shifted so that the root user (UID 0) inside the container does not gain root privilege on the host machine.<br />The <code spellcheck="false" tabindex="0">lxc.idmap</code> directive in the container config controls these mappings.<br />
In my container, the relevant config was:</p>

<pre spellcheck="false" tabindex="0"># map container UID 0~65535 to host UID 100000~165535
lxc.idmap = u 0 100000 65536
# map container GID 0~65535 to host GID 100000~165535
lxc.idmap = g 0 100000 65536
</pre>

<p>Notably, the <code spellcheck="false" tabindex="0">root</code> user (UID 0) and <code spellcheck="false" tabindex="0">render</code> group (GID 107) on the host aren't mapped to anything in the container.<br />
The kernel <a rel="nofollow" href="https://discuss.linuxcontainers.org/t/strange-nobody-nogroup-ownership-in-unprivileged-lxc/1705/2">uses 65534 to represent a UID/GID which is outside the container's map</a>.<br />
Hence, the renderD128 device, when mounted into the container, has owner UID and GID 65534:</p>

<pre spellcheck="false" tabindex="0">ubuntu@video:~$ ls -ln /dev/dri
total 0
crw-rw---- 1 65534 65534 226, 128 Jan 22 16:04 renderD128
</pre>

<p>65534 is the UID of <code spellcheck="false" tabindex="0">nobody</code> and the GID of <code spellcheck="false" tabindex="0">nogroup</code>, which is why this device appears to be owned by <code spellcheck="false" tabindex="0">nobody:nogroup</code>.</p>
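<p>The mapping logic can be modeled as a small shell function (an illustrative sketch, not LXC code; the function name and the hard-coded 65534 overflow value, normally read from <code spellcheck="false" tabindex="0">/proc/sys/kernel/overflowgid</code>, are my own):</p>

```shell
# Model how a host GID appears inside a container configured with
# "lxc.idmap = g 0 100000 65536". Host IDs outside the mapped range
# surface as the kernel's overflow GID, 65534 (nogroup).
map_to_container() {
  host_gid=$1
  ct_start=0; host_start=100000; count=65536
  if [ "$host_gid" -ge "$host_start" ] && [ "$host_gid" -lt $((host_start + count)) ]; then
    echo $((ct_start + host_gid - host_start))
  else
    echo 65534   # unmapped: shows up as nobody/nogroup in the container
  fi
}

map_to_container 100000   # host GID 100000 -> container GID 0 (root)
map_to_container 107      # host render group -> 65534 (unmapped)
```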

<p>To make renderD128 owned by the <code spellcheck="false" tabindex="0">render</code> group, the correct solution is to map the <code spellcheck="false" tabindex="0">render</code> group inside the container to the <code spellcheck="false" tabindex="0">render</code> group on the host.<br />
This, in turn, requires two ingredients:</p>

<ul><li><a rel="nofollow" href="https://man7.org/linux/man-pages/man5/subgid.5.html"><code spellcheck="false" tabindex="0">/etc/subgid</code></a> must authorize the host user who starts the container to map the GID of the host's <code spellcheck="false" tabindex="0">render</code> group into child namespaces.</li>
<li>The container config should have an <code spellcheck="false" tabindex="0">lxc.idmap</code> directive that maps the GID of the container's <code spellcheck="false" tabindex="0">render</code> group to the GID of the host's <code spellcheck="false" tabindex="0">render</code> group.</li>
</ul><p>So I added <code spellcheck="false" tabindex="0">lxc:107:1</code> to <code spellcheck="false" tabindex="0">/etc/subgid</code>, in which <code spellcheck="false" tabindex="0">lxc</code> is the ordinary user on the host machine that starts the containers, and <code spellcheck="false" tabindex="0">107</code> is the GID of <code spellcheck="false" tabindex="0">render</code> group on the host machine.<br />
Then I modified the container config as:</p>

<pre spellcheck="false" tabindex="0"># map container UID 0-65535 to host UID 100000-165535
lxc.idmap = u 0 100000 65536
# map container GID 0-65535 to host GID 100000-165535
lxc.idmap = g 0 100000 65536
# map container GID 109 to host GID 107
lxc.idmap = g 109 107 1
</pre>

<p>However, the container fails to start:</p>

<pre spellcheck="false" tabindex="0">lxc@sunnyD:~$ lxc-unpriv-start -F video
Running scope as unit: run-r611f1778b87645918a2255d44073b86b.scope
lxc-start: video: conf.c: lxc_map_ids: 2865 newgidmap failed to write mapping "newgidmap: write to gid_map failed: Invalid argument": newgidmap 5297 0 100000 65536 109 107 1
             lxc-start: video: start.c: lxc_spawn: 1726 Failed to set up id mapping.
</pre>

<p>Re-reading <a rel="nofollow" href="https://man7.org/linux/man-pages/man7/user_namespaces.7.html">user_namespaces(7)</a> manpage reveals the reason:</p>

<blockquote><div>
  <p>Defining user and group ID mappings: writing to uid_map and gid_map</p>
  
  <ul><li>The range of user IDs (group IDs) specified in each line cannot overlap with the ranges in any other lines.</li>
  </ul></div></blockquote>

<p>The above container config defines two group ID mappings that overlap at GID 109, which causes the failure.<br />
Instead, it must be split into three ranges: 0-108 mapped to 100000-100108, 109 mapped to 107, and 110-65535 mapped to 100110-165535.</p>
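<p>The three ranges can be computed mechanically. A minimal sketch, using my GIDs (109 in the container, 107 on the host) and the default 100000/65536 shift; the variable names are mine:</p>

```shell
# Emit the three non-overlapping lxc.idmap GID ranges needed to map the
# container's render GID (CT_GID) onto the host's render GID (HOST_GID),
# while keeping every other GID shifted up by BASE.
CT_GID=109; HOST_GID=107
BASE=100000; TOTAL=65536

printf 'lxc.idmap = g 0 %d %d\n' "$BASE" "$CT_GID"
printf 'lxc.idmap = g %d %d 1\n' "$CT_GID" "$HOST_GID"
printf 'lxc.idmap = g %d %d %d\n' $((CT_GID + 1)) $((BASE + CT_GID + 1)) $((TOTAL - CT_GID - 1))
```

For CT_GID=109 this prints the three ranges described above: 0-108, 109 alone, and 110-65535.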

<p>Another idea I had, changing the GID of the <code spellcheck="false" tabindex="0">render</code> group to a number greater than 65535 to dodge the overlap, turned out to be a bad one, as it causes an error during system upgrades:</p>

<pre spellcheck="false" tabindex="0">ubuntu@video:~$ sudo apt full-upgrade
Setting up udev (245.4-4ubuntu3.15) ...
The group `render' already exists and is not a system group. Exiting.
dpkg: error processing package udev (--configure):
 installed udev package post-installation script subprocess returned error exit status 1
</pre>

<p>Hence, I must carefully calculate the GID ranges and write three GID mapping entries.<br />
With this final piece in place, success!</p>

<pre spellcheck="false" tabindex="0">ubuntu@video:~$ vainfo 2&gt;/dev/null | head -10
vainfo: VA-API version: 1.7 (libva 2.6.0)
vainfo: Driver version: Intel i965 driver for Intel(R) Skylake - 2.4.0
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Simple            : VAEntrypointEncSlice
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
</pre>

<p>Encoding speed comparison on one of my videos:</p>

<ul><li><p>h264, ultrafast, 640x480 resolution</p></li>
<li><p>Intel GPU VAAPI encoding:</p>

<pre spellcheck="false" tabindex="0">frame= 2900 fps=201 q=-0.0 Lsize=   18208kB time=00:01:36.78 bitrate=1541.2kbits/s speed=6.71x
video:16583kB audio:1528kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.533910%
</pre></li>
<li><p>Skylake CPU encoding:</p>

<pre spellcheck="false" tabindex="0">frame= 2900 fps=171 q=-1.0 Lsize=   18786kB time=00:01:36.78 bitrate=1590.1kbits/s speed=5.71x
video:17177kB audio:1528kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.434900%
</pre></li>
<li><p>GPU encoding was 17.5% faster than CPU encoding.</p></li>
</ul><h2 data-id="tl-dr-steps-to-enable-vaapi-in-lxc">TL;DR Steps to Enable VAAPI in LXC</h2>

<ol><li><p>Confirm that the <code spellcheck="false" tabindex="0">/dev/dri/renderD128</code> device exists on the host machine.</p>

<pre spellcheck="false" tabindex="0">lxc@sunnyD:~$ ls -l /dev/dri/renderD128
crw-rw---- 1 root render 226, 128 Jan 22 11:04 /dev/dri/renderD128
</pre>

<p>If the device does not exist, you do not have an Intel GPU or it is not recognized by the kernel.<br />
You must resolve this issue before proceeding to the next step.</p></li>
<li><p>Find the GID of the <code spellcheck="false" tabindex="0">render</code> group on the host machine:</p>

<pre spellcheck="false" tabindex="0">lxc@sunnyD:~$ getent group render
render:x:107:
</pre>

<p>On my computer, the GID is 107.</p></li>
<li><p>Authorize the host user who starts LXC containers to map the GID to child namespaces.</p>

<ol><li><p>Run <code spellcheck="false" tabindex="0">sudoedit /etc/subgid</code> to open the editor.</p></li>
<li><p>Append a line:</p>

<pre spellcheck="false" tabindex="0">lxc:107:1
</pre></li>
</ol><p>Explanation:</p>

<ul><li><code spellcheck="false" tabindex="0">lxc</code> refers to the host user account.</li>
<li><code spellcheck="false" tabindex="0">107</code> is the GID of the <code spellcheck="false" tabindex="0">render</code> group, as seen in step 2.</li>
<li><code spellcheck="false" tabindex="0">1</code> means authorizing just one GID.</li>
</ul></li>
<li><p>Create and start an LXC container, and find out the GID of the container's <code spellcheck="false" tabindex="0">render</code> group.<br />
I'm using an Ubuntu 20.04 template, but the same procedure is applicable to other templates.</p>

<pre spellcheck="false" tabindex="0">lxc@sunnyD:~$ export DOWNLOAD_KEYSERVER=keyserver.ubuntu.com

lxc@sunnyD:~$ lxc-create -n video -t download -- -d ubuntu -r focal -a amd64
Using image from local cache
Unpacking the rootfs

You just created an Ubuntu focal amd64 (20211228_07:42) container.

To enable SSH, run: apt install openssh-server
No default root or user password are set by LXC.

lxc@sunnyD:~$ lxc-unpriv-start video
Running scope as unit: run-re7a88541bd5d42ab92c9ea6d4cd2a19f.scope

lxc@sunnyD:~$ lxc-unpriv-attach video getent group render
Running scope as unit: run-reaad3e4a549a420bacb160fd8cbc87a8.scope
render:x:109:
</pre></li>
<li><p>Edit the container config.</p>

<ol><li><p>Run <code spellcheck="false" tabindex="0">editor ~/.local/share/lxc/video/config</code> to open the editor.</p></li>
<li><p>Delete existing lines that start with <code spellcheck="false" tabindex="0">lxc.idmap = g</code>.</p>

<p>However, do not delete lines that start with <code spellcheck="false" tabindex="0">lxc.idmap = u</code>.</p></li>
<li><p>Append these lines:</p>

<pre spellcheck="false" tabindex="0">lxc.idmap = g 0 100000 109
lxc.idmap = g 109 107 1
lxc.idmap = g 110 100110 65426
lxc.cgroup.devices.allow = c 226:128 rwm
lxc.mount.entry = /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
</pre></li>
</ol><p>Explanation:</p>

<ul><li><p>The <code spellcheck="false" tabindex="0">lxc.idmap = g</code> directive defines a group ID mapping.</p>

<ul><li><code spellcheck="false" tabindex="0">109</code> is the GID of the container's <code spellcheck="false" tabindex="0">render</code> group, as seen in step 4.</li>
<li><code spellcheck="false" tabindex="0">107</code> is the GID of the host's <code spellcheck="false" tabindex="0">render</code> group, as seen in step 2.</li>
</ul></li>
<li><p>The <code spellcheck="false" tabindex="0">lxc.cgroup.devices.allow</code> directive exposes a device to the container.</p>

<ul><li><code spellcheck="false" tabindex="0">226:128</code> is the major number and minor number of the renderD128 device, as seen in step 1.</li>
</ul></li>
<li><p>The <code spellcheck="false" tabindex="0">lxc.mount.entry</code> directive mounts the host's renderD128 device into the container.</p></li>
</ul><p>You may use this handy idmap calculator to generate the <code spellcheck="false" tabindex="0">lxc.idmap</code> directives:<br />
(read original article <a href="https://yoursunny.com/t/2022/lxc-vaapi/" rel="nofollow">https://yoursunny.com/t/2022/lxc-vaapi/</a> to use the JavaScript calculator)</p></li>
<li><p>Restart the container and attach to its console.</p>

<pre spellcheck="false" tabindex="0">lxc@sunnyD:~$ lxc-stop video

lxc@sunnyD:~$ lxc-unpriv-start video
Running scope as unit: run-r77f46b8ba5b24254a99c1ef9cb6384c3.scope

lxc@sunnyD:~$ lxc-unpriv-attach video
Running scope as unit: run-r11cf863c81e74fcfa1615e89902b1284.scope
</pre></li>
<li><p>Install FFmpeg and VAAPI packages in the container.</p>

<pre spellcheck="false" tabindex="0">root@video:/# apt update

root@video:/# apt install --no-install-recommends ffmpeg vainfo i965-va-driver
0 upgraded, 148 newly installed, 0 to remove and 15 not upgraded.
Need to get 79.2 MB of archives.
After this operation, 583 MB of additional disk space will be used.
Do you want to continue? [Y/n]
</pre></li>
<li><p>Confirm that the <code spellcheck="false" tabindex="0">/dev/dri/renderD128</code> device exists in the container and is owned by <code spellcheck="false" tabindex="0">render</code> group.</p>

<pre spellcheck="false" tabindex="0">root@video:/# ls -l /dev/dri/renderD128
crw-rw---- 1 nobody render 226, 128 Jan 22 16:04 /dev/dri/renderD128
</pre>

<p>It's normal for the owner user to show as <code spellcheck="false" tabindex="0">nobody</code>.<br />
This does not affect operation as long as the calling user is a member of the <code spellcheck="false" tabindex="0">render</code> group.<br />
The only implication is that the container's <code spellcheck="false" tabindex="0">root</code> user cannot access renderD128 unless it is added to the <code spellcheck="false" tabindex="0">render</code> group.</p></li>
<li><p>Add the container's user account(s) to the <code spellcheck="false" tabindex="0">render</code> group.<br />
These users will have access to the GPU.</p>

<pre spellcheck="false" tabindex="0">root@video:/# /sbin/adduser ubuntu render
Adding user `ubuntu' to group `render' ...
Adding user ubuntu to group render
Done.
</pre></li>
<li><p>Become one of these users, and verify the Intel iGPU is operational in the LXC container.</p>

<pre spellcheck="false" tabindex="0">root@video:/# sudo -iu ubuntu

ubuntu@video:~$ vainfo
error: XDG_RUNTIME_DIR not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.7.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: va_openDriver() returns -1
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_6
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.7 (libva 2.6.0)
vainfo: Driver version: Intel i965 driver for Intel(R) Skylake - 2.4.0
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Simple            : VAEntrypointEncSlice
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
      VAProfileH264ConstrainedBaseline: VAEntrypointFEI
      VAProfileH264ConstrainedBaseline: VAEntrypointStats
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointEncSliceLP
      VAProfileH264Main               : VAEntrypointFEI
      VAProfileH264Main               : VAEntrypointStats
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointEncSliceLP
      VAProfileH264High               : VAEntrypointFEI
      VAProfileH264High               : VAEntrypointStats
      VAProfileH264MultiviewHigh      : VAEntrypointVLD
      VAProfileH264MultiviewHigh      : VAEntrypointEncSlice
      VAProfileH264StereoHigh         : VAEntrypointVLD
      VAProfileH264StereoHigh         : VAEntrypointEncSlice
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointEncPicture
      VAProfileVP8Version0_3          : VAEntrypointVLD
      VAProfileVP8Version0_3          : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointEncSlice
</pre></li>
</ol><h2 data-id="conclusion">Conclusion</h2>

<p>This article explores how to make use of an Intel processor's integrated GPU in an unprivileged LXC 4.0 container, on a Debian 11 bullseye host machine without Proxmox or LXD.<br />
The key points include mounting the renderD128 device into the container, configuring idmap for the <code spellcheck="false" tabindex="0">render</code> group, and verifying the setup with the <code spellcheck="false" tabindex="0">vainfo</code> command.<br />
The result is an LXC container that can encode videos to H.264 and other formats on the GPU with the Intel Quick Sync Video feature, which was 17.5% faster than CPU encoding in my test.</p>
]]>
        </description>
    </item>
    <item>
        <title>E-2276G in Amsterdam, NL w/ Corero DDoS Protection, iGPU, NVMe, IPMI, 50TB on 2.5G</title>
        <link>https://dev.lowendspirit.com/index.php?p=/discussion/528/e-2276g-in-amsterdam-nl-w-corero-ddos-protection-igpu-nvme-ipmi-50tb-on-2-5g</link>
        <pubDate>Thu, 23 Jan 2020 17:17:43 +0000</pubDate>
        <category>Offers</category>
        <dc:creator>Clouvider</dc:creator>
        <guid isPermaLink="false">528@/index.php?p=/discussions</guid>
        <description><![CDATA[<h2 data-id="help-us-to-help-australia">Help us to help Australia!</h2>

<p>We started our Australian Aid Promotion to assist with the bushfire crisis. Any orders placed using the voucher code “BUSHFIRES” will have the value of their first months order donated to pre-selected charities. Additional information can be found on our <a rel="nofollow" href="https://www.clouvider.co.uk/news/help-us-donate-to-assist-australian-charities-with-the-bushfire-crisis/">news post</a>.</p>

<h2 data-id="what-sets-us-apart">What sets us apart?</h2>

<p>-Juniper-only network with new equipment.<br />
-Over 1Tbps of network capacity with N+1 resiliency at minimum and diverse routing.<br />
-On premise Corero DDoS mitigation appliances in each location.<br />
-Great peering at LINX, LONAP, AMS-IX, DE-CIX and all major local ISPs.<br />
-We transit with: Level3, Telia, NTT, GTT, TATA, ZAYO and Cogent, which guarantees excellent congestion-free connectivity.<br />
-LINX, LONAP, AMS-IX &amp; DE-CIX partner - we can connect you to these exchanges with your own ASN.<br />
-EU MPLS Network Spread over 7 datacenters.<br />
-Tier 3 Datacentres in London, Amsterdam &amp; Frankfurt.<br />
-We aim at high quality for a reasonable price.<br />
-Clouvider is an established UK based business. We just turned 6!</p>

<h2 data-id="what-s-new">What’s new?</h2>

<p>-<a rel="nofollow" href="https://www.clouvider.co.uk/news/opening-of-new-pops-in-amsterdam-nl-and-frankfurt-de/">We recently expanded into both Amsterdam, NL and Frankfurt, DE</a>.<br />
-Latest generation Intel Xeon E based servers now in stock.<br />
-<a rel="nofollow" href="https://www.clouvider.co.uk/ddos-protection/">On-premise DDoS filtering appliances from Corero</a> &amp; Implementation of our dual-stage DDoS mitigation platform.</p>

<h2 data-id="so-where-are-the-offers">So where are the offers?</h2>

<h3 data-id="e-1-promo-limited-stock">E-1 Promo - Limited Stock!</h3>

<p>E-2276G (6 Cores, 12 Threads, 3.8 GHz &amp; 4.9 GHz Turbo),<br />
16 GB DDR4 ECC RAM, <br />
512GB NVMe, <br />
50 TB @ 2.5Gbps,<br />
Complimentary Best Effort DDoS Protection.<br />
1 IPv4 &amp; IPv6 Available,<br />
VPN Secured IPMI Access,<br />
iGPU capable,<br />
Last few units in Amsterdam, NL.<br /><a rel="nofollow" href="https://console.clouvider.co.uk/cart/ams1-dedicated-servers/?id=408">$69.50 per month</a></p>

<p>Additional offers and configurations showing our systems in all locations can be found on our <a rel="nofollow" href="https://www.clouvider.co.uk/dedicated-servers/">dedicated servers page</a>.</p>

<h2 data-id="faq-s">FAQ’s:</h2>

<h5 data-id="can-these-servers-be-customised">Can these servers be customised?</h5>

<p>Yes, we have several options during the checkout process. If you seek a different configuration please contact us.</p>

<h5 data-id="can-i-check-your-network">Can I check your network?</h5>

<p>Yes, our LookingGlass can be found at <a rel="nofollow" href="https://as62240.net/">https://as62240.net/</a>; alternatively you can use the following test IPs:<br />
185.42.223.63 - London, UK<br />
194.127.172.33 - Amsterdam, NL<br />
91.199.118.14 - Frankfurt, DE</p>

<h5 data-id="how-long-for-deployment">How long for deployment?</h5>

<p>Our standard delivery is within 3 working days, custom/bespoke configurations may take longer.</p>

<h5 data-id="how-do-i-activate-ipmi-igpu-features">How do I activate IPMI/iGPU features?</h5>

<p>Open a ticket with our support team requesting configuration.</p>

<h5 data-id="is-free-directadmin-available">Is Free DirectAdmin available?</h5>

<p>Yes, free <a rel="nofollow" href="https://www.clouvider.co.uk/software-licensing/directadmin/">DirectAdmin licenses</a> can be issued for any services on our network.</p>

<h5 data-id="do-you-announce-my-own-ip-s">Do you announce my own IPs?</h5>

<p>Yes, we can announce your own IP Space for use on dedicated servers with us free of charge.</p>

<h5 data-id="do-you-offer-bgp-session">Do you offer BGP Session?</h5>

<p>Yes, BGP Sessions can be configured on our dedicated servers free of charge.</p>

<h5 data-id="do-you-offer-ipv6">Do you offer IPv6?</h5>

<p>Yes, /64 or /48 subnets can be requested during the order process.</p>

<h5 data-id="can-you-accommodate-x-y-or-z">Can you accommodate x, y or z?</h5>

<p>Open a sales ticket to discuss our custom solutions, happy to help where possible.</p>

<h2 data-id="any-questions-or-queries">Any questions or queries?</h2>

<p>Contact our team today at <a rel="nofollow" href="https://www.clouvider.co.uk/contact/">https://www.clouvider.co.uk/contact/</a>.</p>

<p>All prices are exclusive of VAT where applicable. Stock is limited and the offers are valid subject to stock remaining. Because of the limited stock, if the order is not paid within an hour, it will be regretfully canceled and the stock returned to the pool for others to enjoy. Subject to our standard Terms &amp; Conditions <a rel="nofollow" href="https://www.clouvider.co.uk/terms-conditions">https://www.clouvider.co.uk/terms-conditions</a>. Cancellation notice period applies. £53 = $69.50 as of Google rate at 14:45 on 23rd Jan 2020.</p>
]]>
        </description>
    </item>
   </channel>
</rss>
