<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
    xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>yabs — LowEndSpirit DEV</title>
        <link>https://dev.lowendspirit.com/index.php?p=/</link>
        <pubDate>Thu, 09 Apr 2026 20:40:39 +0000</pubDate>
        <language>en</language>
            <description>yabs — LowEndSpirit DEV</description>
    <atom:link href="https://dev.lowendspirit.com/index.php?p=/discussions/tagged/yabs/feed.rss" rel="self" type="application/rss+xml"/>
    <item>
        <title>Why does yabs use Holy Build Box?</title>
        <link>https://dev.lowendspirit.com/index.php?p=/discussion/4090/why-does-yabs-use-holy-build-box</link>
        <pubDate>Fri, 22 Apr 2022 07:04:48 +0000</pubDate>
        <category>Help</category>
        <dc:creator>Not_Oles</dc:creator>
        <guid isPermaLink="false">4090@/index.php?p=/discussions</guid>
        <description><![CDATA[<p>Hello!</p>

<p>I'm trying to understand why yabs uses Holy Build Box. So I am asking the LES yabs support desk. <img src="https://dev.lowendspirit.com/plugins/emojiextender/emoji/twitter/smile.png" title=":)" alt=":)" height="18" /></p>

<p>The <a rel="nofollow" href="https://github.com/masonr/yet-another-bench-script">Security Notice</a> in the README.md for yabs says "The network (iperf3) and disk (fio) tests use binaries that are compiled by myself utilizing a Holy Build Box compiliation [spelling typo] environment to ensure binary portability."</p>

<p>When I look at <a rel="nofollow" href="https://github.com/phusion/holy-build-box#why-statically-linking-to-glibc-is-a-bad-idea">this Holy Build Box page,</a> it says, "the Holy Build Box approach is to statically link to all dependencies, except for glibc and other system libraries that are found on pretty much every Linux distribution, such as libpthread and libm."</p>

<p>However, when I sneak in while yabs is in action <img src="https://dev.lowendspirit.com/plugins/emojiextender/emoji/twitter/smile.png" title=":)" alt=":)" height="18" /> and run <code spellcheck="false" tabindex="0">ldd</code> on the <code spellcheck="false" tabindex="0">fio</code> and <code spellcheck="false" tabindex="0">iperf3</code> binaries installed by yabs, <code spellcheck="false" tabindex="0">ldd</code> tells me (please see the output below) that both of these binaries are dynamically linked. <code spellcheck="false" tabindex="0">ldd</code> then lists their libraries, all of which seem to be installed on the box that is running yabs.</p>

<p>I guess the <code spellcheck="false" tabindex="0">fio</code> and <code spellcheck="false" tabindex="0">iperf3</code> binaries are downloaded by <code spellcheck="false" tabindex="0">yabs</code> since neither of these is installed on the host.</p>

<p>I hear that linux-vdso is part of the kernel, so presumably it wouldn't be statically linked. libpthread and libm are each specifically mentioned above as "system libraries that are found on pretty much every Linux distribution," so they wouldn't be expected to be statically linked either. I am guessing ld-linux and libdl also would be on every system. It seems that librt has to do with realtime and is POSIX, so maybe librt would be everywhere too.</p>

<p>Could all this mean that the reason to use Holy Build Box is only to ensure the use of an old version of the C library in libc.so.6, and that no dependencies are statically linked for either the fio or the iperf3 provided by yabs? Apparently not, because the glibc version shown below seems to be 2.35, which is the current version.</p>

<p>So, why use the Holy Build Box if nothing is statically linked and we also are using the current version of glibc instead of an old version? What am I missing? <img src="https://dev.lowendspirit.com/plugins/emojiextender/emoji/twitter/smile.png" title=":)" alt=":)" height="18" /></p>
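<p>(An aside, and only a hedged sketch rather than anything the yabs docs describe: with dynamic linking, what limits portability isn't which libc <code spellcheck="false" tabindex="0">ldd</code> resolves to on <em>this</em> host, but the newest <code spellcheck="false" tabindex="0">GLIBC_x.y</code> versioned symbol the binary itself requires, which <code spellcheck="false" tabindex="0">objdump</code> can list. Here <code spellcheck="false" tabindex="0">/bin/ls</code> is just a stand-in for the fio/iperf3 binaries.)</p>

```shell
# Sketch: report the highest glibc symbol version a binary requires.
# ldd shows what the *host* resolves libc to; the versioned symbols
# stamped into the binary show what it needs at minimum to run.
# (Assumes binutils' objdump is installed; /bin/ls stands in for the
# yabs-downloaded fio/iperf3 binaries.)
max_glibc() {
    objdump -T "$1" | grep -oE 'GLIBC_[0-9.]+' | sort -u -V | tail -n 1
}
max_glibc /bin/ls
```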

<p>Thanks in advance for any help! Best wishes and kindest regards from Sonora! 🗽🇺🇸🇲🇽🏜️</p>

<pre spellcheck="false" tabindex="0">root@darkstar:~/test/2022-04-22T05_46_41+00_00/disk# ldd fio
        linux-vdso.so.1 (0x00007ffd22196000)
        librt.so.1 =&gt; /lib64/librt.so.1 (0x00007f334efe2000)
        libpthread.so.0 =&gt; /lib64/libpthread.so.0 (0x00007f334efdd000)
        libm.so.6 =&gt; /lib64/libm.so.6 (0x00007f334eef9000)
        libdl.so.2 =&gt; /lib64/libdl.so.2 (0x00007f334eef4000)
        libc.so.6 =&gt; /lib64/libc.so.6 (0x00007f334ecda000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f334f01c000)
root@darkstar:~/test/2022-04-22T05_46_41+00_00/disk# file fio
fio: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=da93aaa4b5346d3ee4d4f9ed1d6d04aeb4aa279b, with debug_info, not stripped
root@darkstar:~/test/2022-04-22T05_46_41+00_00/disk# 
</pre>

<pre spellcheck="false" tabindex="0">root@darkstar:~/test/2022-04-22T05_46_41+00_00/iperf# ldd iperf3 
        linux-vdso.so.1 (0x00007fff4daaa000)
        libdl.so.2 =&gt; /lib64/libdl.so.2 (0x00007f2292531000)
        libpthread.so.0 =&gt; /lib64/libpthread.so.0 (0x00007f229252c000)
        libm.so.6 =&gt; /lib64/libm.so.6 (0x00007f2292448000)
        libc.so.6 =&gt; /lib64/libc.so.6 (0x00007f229222e000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f229256b000)
root@darkstar:~/test/2022-04-22T05_46_41+00_00/iperf# file iperf3 
iperf3: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=7354764742f5217d04624d477d5e940fa5850ec2, with debug_info, not stripped
root@darkstar:~/test/2022-04-22T05_46_41+00_00/iperf#
</pre>

<pre spellcheck="false" tabindex="0">root@darkstar:~# which fio
which: no fio in (/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/lib64/libexec/kf5:/usr/lib64/qt5/bin)
root@darkstar:~# which iperf3
which: no iperf3 in (/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/lib64/libexec/kf5:/usr/lib64/qt5/bin)
root@darkstar:~# 
</pre>

<pre spellcheck="false" tabindex="0">root@darkstar:~# /lib64/libc.so.6
GNU C Library (GNU libc) stable release version 2.35.
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
Compiled by GNU CC version 11.2.0.
libc ABIs: UNIQUE IFUNC ABSOLUTE
For bug reporting instructions, please see:
&lt;https://www.gnu.org/software/libc/bugs.html&gt;.
root@darkstar:~# 
</pre>
]]>
        </description>
    </item>
    <item>
        <title>The Problem with Generalizations: A Response to &quot;The Problem with Benchmarks&quot; by raindog308</title>
        <link>https://dev.lowendspirit.com/index.php?p=/discussion/2908/the-problem-with-generalizations-a-response-to-the-problem-with-benchmarks-by-raindog308</link>
        <pubDate>Wed, 12 May 2021 17:56:05 +0000</pubDate>
        <category>General</category>
        <dc:creator>Mason</dc:creator>
        <guid isPermaLink="false">2908@/index.php?p=/discussions</guid>
        <description><![CDATA[<p>I was going to post this in the Rants category, but would rather not confine my thoughts on this to only LES members (the Rants category requires you to be signed in to view the threads).</p>

<blockquote><div>
  <p>YABS – and many others like it over the years – attempts to produce a meaningful report to judge or grade the VM. It reports CPU type and other configuration information, then runs various disk, network, and CPU tests to inform the user if the VPS service he’s just bought is good, bad, or middling. But does it really?</p>
</div></blockquote>

<p>I stumbled upon a blog post recently from raindog308 on the Other Green Blog and was amused that YABS was called out. Raindog states that YABS (and other benchmark scripts/tests like it) may be lacking in its ability to "produce a meaningful report to judge or grade the VM". Some of the reasons given for discrediting it, and some of the proposed alternatives, had me scratching my head. I notice that raindog has been hard at work lately pumping up LEB with good content. But is he really?</p>

<hr /><p>I'm going to cherry pick some quotes and arguments to reply to below -</p>

<blockquote><div>
  <p>It’s valid to check CPU and disk performance for outliers. We’ve all seen overcrowded nodes. Hopefully, network performance is checked prior to purchase through test files and Looking Glass.</p>
</div></blockquote>

<p>I'd argue that not <em>all</em> providers have readily available test files for download and/or an LG. Additionally, it can be misleading when hosts simply link to their upstream's test files, or host their LG on a different machine/hardware that may not have the same usage patterns and port speeds one would expect in the end-user's VM. However, the point about doing some due diligence and researching the provider a bit is noted, as that's certainly important.</p>

<p>I'd also argue that iperf (which YABS uses for the network tests) is much more complex than a simple test file/LG. If all you care about is a quick, single-threaded, single-direction, HTTP download then sure use the test file to your heart's content. BUT if you actually care about overall capacity and throughput to different areas of the world in BOTH directions (upload + download), then a multi-threaded, bi-directional iperf test can be much more telling of overall performance.</p>
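<p>For the curious, the difference shows up directly in iperf3's flags. This is illustrative only (the host name below is a placeholder, not a real endpoint):</p>

```shell
# Illustrative only; "iperf.example.com" is a placeholder host.
#
# Single stream, one direction (roughly what a browser test-file
# download measures):
#   iperf3 -c iperf.example.com
#
# Multi-threaded (8 parallel streams), both directions; -R reverses
# the test so the server sends and the client receives:
#   iperf3 -c iperf.example.com -P 8
#   iperf3 -c iperf.example.com -P 8 -R
```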

<blockquote><div>
  <p>But other than excluding ancient CPUs and floppy-drive-level performance, is the user really likely to notice a difference day-in and day-out between a 3.3Ghz CPU and a 3.4Ghz one? Particularly since any operation touches many often-virtualized subsystems.</p>
</div></blockquote>

<p>I actually laughed out loud at this comment. I guess I didn't realize that people use benchmarking scripts/tools to differentiate between a "3.3Ghz CPU and a 3.4Ghz one"... (Narrator: "<em>they don't</em>").</p>

<p>Providers have different ways that they fill up their nodes -- overselling CPU, disk space, network capacity, etc. is, more often than not, mandatory to keep prices low. Most (all?) providers are doing this in some form and the end-user most of the time is none the wiser as long as the ratios are done right and the end-user has resources available to meet their workload.</p>

<p>A benchmarking script/tool does help identify cases where the provider's nodes are oversubscribed to excess: it is immediately obvious if disk speeds, network speeds, or CPU performance are drastically lower than they should be for the advertised hardware. Could this be a fluke and resolve itself just a few minutes later? Certainly possible. On the flip side, could performance of a system with good benchmark results devolve into complete garbage minutes/hours/days after the test is run? Certainly possible as well. Multiple runs of a benchmark tool spread out over the course of hours, days, or weeks could potentially help identify if either of these cases is true.</p>
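<p>A minimal sketch of that "spread out over time" idea, with all paths and flags being illustrative assumptions rather than anything YABS prescribes:</p>

```shell
# Hypothetical crontab entry: rerun only the fio disk test every 6 hours
# and append to a running log, so performance trends over days/weeks
# become visible.
#
#   0 */6 * * * bash /root/yabs.sh -i -g >> /root/yabs-history.log 2>&1
#
# (-i and -g are assumed here to skip the iperf and Geekbench tests;
# check "bash yabs.sh -h" for the flags your copy actually supports.)
```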

<p>On a personal note, I've seen dozens of instances where customers of various providers post their benchmark results and voice concerns about system performance. A large percentage of the time, the provider is then able to rectify the issues presented by fixing hardware problems, identifying abusers on the same node that are impacting performance, etc. From this, patterns start to emerge: you can see the providers that act on criticism (via posts containing low-performing benchmarks) to improve their services and ensure their customers are happy with the resources they paid for. Other trends help identify providers to avoid, where consistently low network speeds, CPU scores, etc. go unaddressed, indicating unhealthy overselling. But I digress...</p>

<blockquote><div>
  <p>If I could write a benchmark suite, here is what I would like it to report.</p>
</div></blockquote>

<p>Here we get a rapid-fire of different un-quantifiable metrics that would be in raindog's ideal benchmarking suite:</p>

<ul><li>Reliability</li>
<li>Problems</li>
<li>Support</li>
<li>Security</li>
<li>"Moldy Oldies" (outdated VM templates)</li>
<li>"Previous Residents" (previous owners of IPs)</li>
<li>"My Neighbors" (anybody doing shitcoin mining on the same node)</li>
</ul><p>Raindog realizes it'd be hard to get at these metrics -</p>

<blockquote><div>
  <p>Unfortunately, all of these things are impossible to quantify in a shell script.</p>
</div></blockquote>

<p>They'd be impossible to quantify by any means (shell script or fortune teller)... Almost all of the above metrics are subject to personal opinions and preference. Some of these can be investigated by other means -- reliability: one could check out a public status page for that provider (if available); problems: one could search for public threads of people having issues and noting how the provider responds/resolves the issues; moldy oldies: a simple message/pre-sales ticket to the provider could alleviate that concern.</p>

<p>Anyway, the above metrics are highly subjective and of varying importance to prospective buyers (someone might not give a shit about support response times or if their neighbor is having a party on their VM).</p>

<p>But what's something that everyone is actually concerned about? <strong>How the advertised VM actually performs.</strong></p>

<p>And how do we assess system performance in a non-subjective manner? <strong>With benchmarking tests.</strong></p>

<blockquote><div>
  <p>If you ask me which providers I recommend, the benchmarks that result from VMs on their nodes are not likely to factor into my response. Rather, I’ll point to the provider’s history and these “unquantifiables” as major considerations on which providers to choose.</p>
</div></blockquote>

<p>That's great and, in fact, I somewhat agree here. Being an active member of the low end community, I have the luxury of knowing the providers that run a tight ship and care about their customers and which ones don't based on their track record. But not everyone has the time to dig through thousands of threads to assess a provider's support response or reliability and not everyone in the low end community has been around long enough to develop opinions and differentiate between providers that are "good" or "bad".</p>

<blockquote><div>
  <p><img src="https://talk.lowendspirit.com/uploads/editor/in/zrog9orjrvqv.png" alt="" title="" /></p>
</div></blockquote>

<p>I also found it highly amusing that a related post on the same page links to another post, also written by raindog, regarding a new benchmarking series by jsg. The framing there is a bit different: benchmarks aren't represented as entirely useless or lacking in their ability to "produce a meaningful report to judge or grade the VM". Both posts discuss the limitations of benchmarking tools, and I'm not really sure what happened in the few months between them, but I'd just like to note the change in tone.</p>

<hr /><p>My main point in posting this "response" (read: rant) is that benchmark tests <em>aren't</em> and <em>shouldn't</em> be the all-in-one source for determining if a provider and a VM are right for you. I don't advertise the YABS project in that manner. In the description of the tool I even state that YABS is "just yet another bench script to add to your arsenal." So I'm not really sure of the intent of raindog's blog post. Should users not be happy when they score a sweet $7/year server that has top-notch performance and post a corresponding benchmark showing how sexy it is? Should users not test their system to see if they are actually getting the advertised performance that they are expecting? Should users not use benchmarking tools as a means of debugging and bringing a provider's attention to any resource issues? Those "unquantifiables" that are mentioned certainly won't help you out there.</p>

<p>This response is now much longer than the original blog post that I'm responding to, so I'll stop here.</p>

<p>Happy to hear anyone else's thoughts or face the pitch forks on this one.</p>
]]>
        </description>
    </item>
    <item>
        <title>YABS website</title>
        <link>https://dev.lowendspirit.com/index.php?p=/discussion/2713/yabs-website</link>
        <pubDate>Sat, 27 Mar 2021 08:01:05 +0000</pubDate>
        <category>General</category>
        <dc:creator>LeroyJenkins</dc:creator>
        <guid isPermaLink="false">2713@/index.php?p=/discussions</guid>
        <description><![CDATA[<p>What do you think: should YABS results be saved on a website, the way Serverbear.com used to do it? I have an idea to modify YABS to output JSON data and feed it into a database. Or is YABS just a tool that people run and copy/paste from, without wanting to save results for future reference?</p>
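<p>A minimal sketch of the idea, where everything (the JSON envelope shape, the endpoint) is hypothetical, assuming jq is available to do the JSON escaping:</p>

```shell
# Hypothetical pipeline: wrap a yabs report in a JSON envelope and POST
# it to a made-up results endpoint. A stand-in report is used here
# instead of actually running yabs (which takes several minutes).
printf 'Basic System Information:\nCPU cores : 2\n' > result.txt
jq -Rs --arg host "$(hostname)" '{host: $host, output: .}' result.txt > result.json
# Then something like (endpoint is fictional):
#   curl -s -X POST -H 'Content-Type: application/json' \
#        -d @result.json https://example.com/api/results
cat result.json
```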
]]>
        </description>
    </item>
   </channel>
</rss>
