@risturiz said: Processor : QEMU Virtual CPU version 2.5+
please request cpu passthrough and make sure you have virtio drivers enabled, you would only have that CPU type if you installed from ISO.
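A quick sanity check from inside the guest, assuming a typical Linux install with pciutils available:
grep -m1 'model name' /proc/cpuinfo
lspci | grep -i virtio
If the first command still reports "QEMU Virtual CPU" rather than the host's real CPU model, passthrough is not active; if the second prints nothing, the disk and NIC are running on the slower emulated devices rather than virtio.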
Yes I installed from Debian 10 ISO... Good to know... So the recommendation is not to install from ISO? Thanks for the info
It is a limitation of the VPS control panel.
For hosts that use solusvm, I suggest you deploy a template first even if you plan to then immediately install from ISO, as this will usually give you the optimal CPU and driver set if things have been set up properly.
The performance difference is not insignificant.
Ok, now the VPS is flying!... Reinstalling first with a template solved the issue... Thanks!
I also had the same problem with passthrough, and Ant fixed it today. (Thanks!) Anyway, I see some marginal improvement but I wouldn't say it's "flying" per se. Not complaining or anything, just some feedback.
Before:
Basic System Information:
---------------------------------
Processor : QEMU Virtual CPU version 2.5+
CPU cores : 1 @ 2199.998 MHz
AES-NI : ❌ Disabled
VM-x/AMD-V : ❌ Disabled
RAM : 981Mi
Swap : 1.9Gi
Disk : 197G
Geekbench 5 Benchmark Test:
---------------------------------
Test            | Value
                |
Single Core     | 397
Multi Core      | 412
After:
Basic System Information:
---------------------------------
Processor : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
CPU cores : 1 @ 2199.998 MHz
AES-NI : ✔ Enabled
VM-x/AMD-V : ✔ Enabled
RAM : 981Mi
Swap : 1.9Gi
Disk : 984G
Geekbench 5 Benchmark Test:
---------------------------------
Test            | Value
                |
Single Core     | 429
Multi Core      | 431
@bugrakoc said: I also had the same problem with passthrough, and Ant fixed it today. (Thanks!) Anyway, I see some marginal improvement but I wouldn't say it's "flying" per se. Not complaining or anything, just some feedback
Yeah, I mean it is a multi-tenant environment, and looking at the base Geekbench 5 benchmarks for that CPU model, that is about right. I used Geekbench 4 in the initial tests as it is more appropriate to that CPU's generation.
The main difference with CPU passthrough is in real-world performance, not benchmarks: with a qemu-cpu model, a large number of CPU instruction sets are not available to the guest as standard, which means the CPU has to emulate them in software.
AES, for example: you could quite literally be looking at thousands of times slower than with passthrough for simple operations commonly used by VPNs, SSL/TLS, etc.
I raised the lack of default passthrough with the solusvm dev team again off the back of this thread and got a positive response, so it is possible that something OnApp (solusvm's previous owners) felt was fine to leave as-is and not a big impact is something Plesk now see the benefit of and consider worth the time to look into.
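For anyone wanting a rough feel for that AES gap on their own VPS, two quick checks (assuming openssl is installed; numbers will vary with neighbours on a shared host):
grep -qw aes /proc/cpuinfo && echo 'AES-NI exposed' || echo 'AES-NI not exposed'
openssl speed -elapsed -evp aes-256-cbc
The first line just checks whether the aes CPU flag is visible to the guest; the second benchmarks OpenSSL's AES path, which runs far faster when AES-NI is available than when OpenSSL has to fall back to its software implementation.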
How I interpret this is: I can use a full core anytime, but if I hog more than 50% CPU constantly then I'm in trouble. So:
cpulimit -l 50 ffmpeg <args>
Is this enough? Am I using more than my share with this? Less than my share?
ToS only states this about CPU usage:
Excessive CPU use or continued high disk IO R+W requests will be considered server abuse; CPU cores are given on an equal-share basis only, they are not dedicated cores for your server alone.
As a general rule, as long as you are not impacting others you will not be restricted; however, as a guideline we ask that you do not average over 60% of a CPU core/thread for more than 24 hours on average, or 100% of a core/thread for more than 1 hour.
So this is for fair share CPU plans, and by the way it is actually fair. I interpret this as: I can use <60% CPU (read: 59%) constantly on my fair share OpenVZ plan. Nice. The thing is, by the look of it a fair share core can be hogged more than this 1/2 dedicated core, and that kinda defeats the purpose of a dedicated (albeit half) core.
My guess is I'm screwing up somewhere in my thought process, but you never know. I'm just trying to use my resources without causing trouble. Speed doesn't matter that much, so I can cpulimit the hell out of it. If speed did matter I would be doing this elsewhere anyway, I just want to consolidate everything related to my videos. Any insight would be appreciated.
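For what it's worth, cpulimit is not the only option here. On a systemd-based guest, a transient scope with a CPU quota caps the whole command tree (ffmpeg plus any children) rather than a single PID; a sketch, run as root, with the 50% figure purely as an example:
systemd-run --scope -p CPUQuota=50% ffmpeg <args>
CPUQuota=50% means half of one core averaged over time and is enforced by the kernel's cgroup controller, so it avoids the stop/continue signalling that cpulimit relies on.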
If you're just using it for rsyncing data over from the EU, it should stay just under the limit even with AES encryption via a LUKS volume (16-20 MB/sec max throughput, right?)
Edit: re-read and yes, you should use cpulimit if your ffmpeg jobs are running for way more than 2 hrs.
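If the main job really is rsync, capping its bandwidth is the simplest way to keep both the AES work and the disk writes in check; a sketch where the 16M ceiling and the paths are just placeholders (rsync 3.1+ accepts the M suffix, older versions take the value in KiB/s):
rsync -a --bwlimit=16M /local/videos/ user@storage-vps:/data/videos/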
It just means you can burn 50% all day without having to worry about it and burst to 100%.
It is really a soft guide; no issues if you are not disrupting others, but also keep in mind that if you limit a single process to 50% then you are going to be using more than 50% overall because, you know... your VPS runs a kernel and is alive etc. and needs some CPU time itself.
Additionally, ffmpeg is just in no way suitable to be running on a storage VPS on a slow spinning rusty disk array; you will be hammering the disk read requests p/second no doubt, and that is probably going to be the thing that gets you slapped across the back of the legs with a 4-foot cane!
There is a box you tick during order about understanding that the product is designed for storage; video encoding is a different animal. Worst case scenario, I have to hard limit your disk IOPS, which will mean your CPU time will be no concern as you won't be able to read as fast as the CPU can encode.
@AnthonySmith said: if you limit a single process to 50% then you are going to be using more than 50%
Duh... Stupid me. 25-30% should make more sense since I'm not only running a kernel and whatnot, I also have Nextcloud+nginx+MariaDB so yeah... Nothing a few tries and top can't solve.
@AnthonySmith said: Additionally, ffmpeg is just in no way suitable to be running on a storage VPS on a slow spinning rusty disk array, you will be hammering the disk read requests p/second no doubt and that is probably going to be the thing that gets you slapped across the back of the legs with a 4-foot cane!
Oof.. Those 1.2192 metre canes must really hurt! I'd rather not hammer the disk. Looks like cpulimit alone won't cut it IO-wise, so I should be limiting the IO on my end. Maybe something like pv might work? Example: ffmpeg <args> | pv -L 1M (Saw it here)
@AnthonySmith said: There is a box you tick during order about understanding that the product is designed for storage
Yes I am very well aware of that, I initially meant to just throw nextcloud on it and be done with it. The thing is I recently realised my capture box is fixed to 60 fps and I'm recording presentations so really 25 fps would be more than enough. Then I thought since I'm transcoding anyway, I might as well crop out my taskbar etc... I'll be doing all this on my local machine before uploading from now on, mostly because my upload speed is 600 KiB/s max on a good day. I have ~30GB of videos already uploaded, I figured I might as well use the dedicated CPU to do it instead of re-uploading them on my sucky connection. Didn't think of the IOPS at all.
@AnthonySmith said: worst case scenario I have to hard limit your disk IOPS
I'd rather upload the videos again, I'll try to avoid the worst case scenario. Another approach is I can do it in several chunks as I'm in no rush at all.
Anyway thank you for your time and reading all my rambling
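One note on the IO side: piping ffmpeg's output through pv -L only rate-limits what ffmpeg writes to stdout, so it does nothing for the reads that would hammer the array. A possible alternative (a sketch, not something the host prescribes; the file names and filter values are placeholders) is to keep ffmpeg reading at playback speed and lower its IO and CPU priority:
ionice -c3 nice -n 19 cpulimit -l 30 ffmpeg -re -i input.mkv -vf crop=1920:1040:0:0 -r 25 output.mkv
-re throttles the demuxer to real-time so the disk is read no faster than the video plays, while ionice -c3 puts the reads in the idle class (which only helps against your own processes inside the VM, not your neighbours).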
@bstrobl said:
Is the UK SSD Cached KVM 256MB for 10€ still valid?
Have to test out how well Ubuntu 20.04 runs on only 256MB of RAM
Nope, but it's only €14 at regular pricing. I would be surprised if a 5.x kernel would even boot on 256MB of RAM, but I have not tried it.
Actually just noticed it came with 10GB storage, which means it could replace my current server. Gonna order one and try squeezing 20.04 in and see if it works
Let us know how that goes
Upgrading from an 18.04 template to 20.04 is perfectly fine, with plenty of RAM left to run an Apache webserver (around 80MB used, 5MB in swap). This worked a lot better than the time I tried stuffing Debian 8 into only 128MB of RAM, which caused intermittent systemd issues.
It turns out however that MySQL is a pretty big memory hog even without any databases. Using that will quickly ramp up the remaining memory and swap to max and cause massive slowdowns. With only 256MB, it's best to look for alternatives that use less RAM or find a way of configuring mysql/mariadb to make do with a lot less.
Since I am only using static pages and the rest of the server as storage/repository, I am perfectly happy with the tradeoffs right now when it comes to ditching MySQL.
@bstrobl said: It turns out however that MySQL is a pretty big memory hog even without any databases. Using that will quickly ramp up the remaining memory and swap to max and cause massive slowdowns. With only 256MB, it's best to look for alternatives that use less RAM or find a way of configuring mysql/mariadb to make do with a lot less.
You should be able to tune my.cnf to use a lot less RAM than it does as standard; failing that, consider using Grav for static content.
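For reference, a minimal low-memory override along those lines, assuming MariaDB/MySQL picks up drop-in files from /etc/mysql/conf.d/ (the file name and values are just a starting point for a 256MB box, not anything tested on this exact plan):
cat > /etc/mysql/conf.d/lowmem.cnf <<'EOF'
[mysqld]
performance_schema = OFF
innodb_buffer_pool_size = 16M
innodb_log_buffer_size = 4M
key_buffer_size = 8M
tmp_table_size = 8M
max_heap_table_size = 8M
max_connections = 15
thread_cache_size = 4
EOF
Turning performance_schema off (on by default in MySQL 5.7+, already off in MariaDB) is usually the single biggest saving; the rest trims buffers that default to sizes meant for machines with a few GB of RAM. Restart mysql/mariadb afterwards for it to take effect.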
Yeah, plenty of ways of getting SQL running with enough tinkering; just really surprised MySQL needs half a gig just to get running in its default config nowadays.
Comments
Who needs a shirt. They're overrated!!! J/k. It's still a killer deal!
Can anyone with Phoenix Storage test the drive speed? (I know storage is slow, but I want a reference.) Thanks!
any moar storage dealz, @AnthonySmith ?
What tests would you like?
Running a yabs now with
curl -sL yabs.sh | bash -s -- -ig
I will post that when it is done.
I'm getting this with Debian 10:
please request cpu passthrough and make sure you have virtio drivers enabled, you would only have that CPU type if you installed from ISO.
Yes I installed from Debian 10 ISO... Good to know... So the recommendation is not to install from ISO? Thanks for the info
I believe it's more that qemu gets set as the default CPU type when installing from ISO. Just a simple ticket to 'correct' it.
It is a limitation of the VPS control panel.
For hosts that use solusvm, I suggest you deploy a template first even if you plan to then immediately install from ISO, as this will usually give you the optimal CPU and driver set if things have been set up properly.
The performance difference is not insignificant.
Ok, now the VPS is flying!... Reinstalling first with a template solved the issue... Thanks!
Great! I really wish I could make virtio and CPU passthrough the default; it makes for such a better first end-user experience.
I also had the same problem with passthrough, and Ant fixed it today. (Thanks!) Anyway, I see some marginal improvement but I wouldn't say it's "flying" per se. Not complaining or anything, just some feedback.
Yeah, I mean it is a multi-tenant environment, and looking at the base Geekbench 5 benchmarks for that CPU model, that is about right. I used Geekbench 4 in the initial tests as it is more appropriate to that CPU's generation.
The main difference with CPU passthrough is in real-world performance, not benchmarks: with a qemu-cpu model, a large number of CPU instruction sets are not available to the guest as standard, which means the CPU has to emulate them in software.
AES, for example: you could quite literally be looking at thousands of times slower than with passthrough for simple operations commonly used by VPNs, SSL/TLS, etc.
I raised the lack of default passthrough with the solusvm dev team again off the back of this thread and got a positive response, so it is possible that something OnApp (solusvm's previous owners) felt was fine to leave as-is and not a big impact is something Plesk now see the benefit of and consider worth the time to look into.
I didn't know they were capable of a positive response.
I need some help not abusing the CPU. My VM has:
How I interpret this is: I can use a full core anytime, but if I hog more than 50% CPU constantly then I'm in trouble. So:
cpulimit -l 50 ffmpeg <args>
Is this enough? Am I using more than my share with this? Less than my share?
ToS only states this about CPU usage:
So this is for fair share CPU plans, and by the way it is actually fair. I interpret this as: I can use <60% CPU (read: 59%) constantly on my fair share OpenVZ plan. Nice. The thing is, by the look of it a fair share core can be hogged more than this 1/2 dedicated core, and that kinda defeats the purpose of a dedicated (albeit half) core.
My guess is I'm screwing up somewhere in my thought process, but you never know. I'm just trying to use my resources without causing trouble. Speed doesn't matter that much, so I can cpulimit the hell out of it. If speed did matter I would be doing this elsewhere anyway, I just want to consolidate everything related to my videos. Any insight would be appreciated.
If you're just using it for rsyncing data over from the EU, it should stay just under the limit even with AES encryption via a LUKS volume (16-20 MB/sec max throughput, right?)
Edit: re-read and yes, you should use cpulimit if your ffmpeg jobs are running for way more than 2 hrs.
It just means you can burn 50% all day without having to worry about it and burst to 100%.
It is really a soft guide; no issues if you are not disrupting others, but also keep in mind that if you limit a single process to 50% then you are going to be using more than 50% overall because, you know... your VPS runs a kernel and is alive etc. and needs some CPU time itself.
Additionally, ffmpeg is just in no way suitable to be running on a storage VPS on a slow spinning rusty disk array; you will be hammering the disk read requests p/second no doubt, and that is probably going to be the thing that gets you slapped across the back of the legs with a 4-foot cane!
There is a box you tick during order about understanding that the product is designed for storage; video encoding is a different animal. Worst case scenario, I have to hard limit your disk IOPS, which will mean your CPU time will be no concern as you won't be able to read as fast as the CPU can encode.
Duh... Stupid me. 25-30% should make more sense since I'm not only running a kernel and whatnot, I also have Nextcloud+nginx+MariaDB so yeah... Nothing a few tries and top can't solve.
Oof.. Those 1.2192 metre canes must really hurt! I'd rather not hammer the disk. Looks like cpulimit alone won't cut it IO-wise, so I should be limiting the IO on my end. Maybe something like pv might work? Example: ffmpeg <args> | pv -L 1M (Saw it here)
Yes I am very well aware of that, I initially meant to just throw nextcloud on it and be done with it. The thing is I recently realised my capture box is fixed to 60 fps and I'm recording presentations so really 25 fps would be more than enough. Then I thought since I'm transcoding anyway, I might as well crop out my taskbar etc... I'll be doing all this on my local machine before uploading from now on, mostly because my upload speed is 600 KiB/s max on a good day. I have ~30GB of videos already uploaded, I figured I might as well use the dedicated CPU to do it instead of re-uploading them on my sucky connection. Didn't think of the IOPS at all.
I'd rather upload the videos again, I'll try to avoid the worst case scenario. Another approach is I can do it in several chunks as I'm in no rush at all.
Anyway thank you for your time and reading all my rambling
Either way, don't worry about it, I am not evil, so if it's a problem it will be a conversation, not a termination.
I know, that's why I've been with IH since 2017.
Is the UK SSD Cached KVM 256MB for 10€ still valid?
Have to test out how well Ubuntu 20.04 runs on only 256MB of RAM
Nope, but it's only €14 at regular pricing. I would be surprised if a 5.x kernel would even boot on 256MB of RAM, but I have not tried it.
Actually just noticed it came with 10GB storage, which means it could replace my current server. Gonna order one and try squeezing 20.04 in and see if it works
Let us know how that goes
Upgrading from an 18.04 template to 20.04 is perfectly fine, with plenty of RAM left to run an Apache webserver (around 80MB used, 5MB in swap). This worked a lot better than the time I tried stuffing Debian 8 into only 128MB of RAM, which caused intermittent systemd issues.
It turns out however that MySQL is a pretty big memory hog even without any databases. Using that will quickly ramp up the remaining memory and swap to max and cause massive slowdowns. With only 256MB, it's best to look for alternatives that use less RAM or find a way of configuring mysql/mariadb to make do with a lot less.
Since I am only using static pages and the rest of the server as storage/repository, I am perfectly happy with the tradeoffs right now when it comes to ditching MySQL.
Webinoly.
Edit: added later
Grav (mentioned below) is AFAIK a good option. So is Bludit.
You should be able to tune my.cnf to use a lot less RAM than it does as standard; failing that, consider using Grav for static content.
Yeah, plenty of ways of getting SQL running with enough tinkering; just really surprised MySQL needs half a gig just to get running in its default config nowadays.
Also heard good things about Hugo for static sites: https://gohugo.io/
MySQL does not need that much RAM. I ran a WP site on tinykvm for months, as I have written elsewhere in this forum and on OGF.
Ah Webinoly looks good, will keep that in mind when I need a full LEMP stack, thanks