@Brueggus said: Have you tried shutting it down and booting it again from the panel?
Of course.
@Brueggus said: Actually, this was terrible advice: after a reboot, my server's RAM allocation shrank from 6 to 4 GB. I don't have a response from their support yet; let's see what they have to say.
It was their mistake and it got fixed on my VM, as stated above.
What made me rant here in the first place is that they silently changed the specs - that's a no-go for me. At least a short notice would have been appreciated.
Hope your box is right again too.
@benz said: What made me rant here in the first place is that they silently changed the specs - that's a no-go for me. At least a short notice would have been appreciated.
Yes, in my case they said changing the specs was part of their troubleshooting and they forgot to change it back.
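For anyone who'd rather verify the allocation from inside the guest than trust the panel, here's a minimal sketch (assuming a Linux VM; note that MemTotal reads a bit below the provisioned size because the kernel reserves some memory at boot):

```python
# Sanity-check provisioned RAM by reading MemTotal from /proc/meminfo.
def mem_total_gib() -> float:
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                kib = int(line.split()[1])  # value is reported in kiB
                return kib / (1024 ** 2)
    raise RuntimeError("MemTotal not found in /proc/meminfo")

if __name__ == "__main__":
    print(f"MemTotal: {mem_total_gib():.2f} GiB")
    # A 6 GB plan typically shows ~5.7-5.9 GiB here; a reading near
    # 3.8 GiB would confirm the allocation really shrank to 4 GB.
```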
@Brueggus said: Yes, in my case they said changing the specs was part of their troubleshooting and they forgot to change it back.
Hmm, something seems to be wrong there currently.
Do you happen to have one in Germany too?
The server seems to be pretty loaded, judging by Geekbench scores compared with the same CPU at another host.
I'm getting about 850 points single-core here, versus ~1150 with other R5-3600 providers.
CPU steal is around 10%.
Nothing I wouldn't expect at the price; just wondering if that's the usual load across nodes.
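If you want to quantify the steal yourself without installing anything, here's a minimal sketch (assuming a Linux guest): it samples the aggregate cpu line of /proc/stat twice and reports steal as a share of total CPU time over the interval, roughly what vmstat's st column shows.

```python
import time

def cpu_times() -> list[int]:
    # First line of /proc/stat is the aggregate "cpu" line:
    # user nice system idle iowait irq softirq steal guest guest_nice
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

def steal_percent(interval: float = 5.0) -> float:
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    delta = [b - a for a, b in zip(before, after)]
    total = sum(delta)
    steal = delta[7] if len(delta) > 7 else 0  # 8th field is steal
    return 100.0 * steal / total if total else 0.0

if __name__ == "__main__":
    print(f"CPU steal over 5 s: {steal_percent():.1f}%")
```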
Hello,
We have an average usage on most nodes of 50-60%. We do get lots of spikes from people running benchmarks because they like to measure the servers.
Our CPUs are undersold on most nodes. Germany is another story: I am unsure of the average utilization there, as SolusIO does not provide those stats itself; I would have to check our monitoring system. But as I have gotten no alert, I guess it is not above 75-80%.
I will attach some pics
Best Regards!
P.S. Servers vary between 16 and 24 threads.
P.S. The permissions error is due to a backup mistake that is taking up all the space on the web server, hence the issues; we are actively working to solve them.