cybertech
Not much happening here, yet.
Comments
-
yes i get anxiety when no YABS runs well in a day.
-
managed to install debian 11 via netboot. TYOC040 seems faster now, hitting 200mbps. However, a kernel panic can still happen when running YABS.

# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
#              Yet-Another-Bench-Script              #
#                     v2022-02-18                    #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
…
-
be a vps provider on the ovh server, make it for cheap. i know i will want one
-
Virmach Tokyo Ryzen 2560mb

[root@ZealousFat-VM ~]# curl -sL yabs.sh | bash -s -- -4i
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
#              Yet-Another-Bench-Script              #
#                     v2022-02-18                    #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #

Fri Apr  8 06:22:18 EDT 2022
Basic…
-
https://browser.geekbench.com/v4/cpu/16527652
-
yes "interrupt" sounds like it. i'm just gonna leave it here first while i do actual work, feel free to tag or PM me if you need any other tests, or if you wanna double my bandwidth that i will never use.
-
you could be right on the CPU, it probably had to wait for processing power to perform as below:

fio Disk Speed Tests (Mixed R/W 50/50):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 405.26 MB/s (101.3k) | 960.79 MB/s  (15.0k)
Write      | 406.33 MB/s (101.5k) | 965.85…
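Those bandwidth figures can be sanity-checked against the IOPS column: reported MB/s should be roughly IOPS × block size. A small awk sketch using the numbers from the output above (a couple percent of drift from fio's rounding is normal):

```shell
# Check that fio's reported MB/s is close to IOPS x block size.
check() {
  # $1 = IOPS, $2 = block size (bytes), $3 = reported MB/s
  awk -v iops="$1" -v bs="$2" -v mbps="$3" 'BEGIN {
    est = iops * bs / 1e6                 # estimated decimal MB/s
    printf "est=%.1f MB/s reported=%.1f MB/s ratio=%.3f\n", est, mbps, est / mbps
  }'
}
check 101300 4096  405.26   # 4k read row
check 15000  65536 960.79   # 64k read row
```

Both rows come out within ~2.5% of the estimate, so the table is internally consistent.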
-
well i might be wrong, YABS is stuck on FIO, so disk is also an issue for me

[root@ZealousFat-VM ~]# curl -sL yabs.sh | bash -s -- -4i
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
#              Yet-Another-Bench-Script              #
#                     v2022-02-18                    #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
…
-
my personal feeling is that the VM CPU/IO is currently fast and only the network is "severely limited", with very high latency. some packet restriction? (im no network geek)

yum -y update is downloading at 30k/s. since the network is slow i have not run any YABS yet. tried to reinstall Debian 11 using the client panel, but…
-
oh man, im together with mjj on TYC040? network is really slow now, yum -y update is going at 30k/s
-
ryzen is still in beta stage
-
hope Virmach is upgrading DDoS protection or investing 22K per year in his panel.
-
Antarctica uses IPv9. Virmach wouldn't have known even if any attacks came from there. That being said, an attack on this scale... sounds massive, doesn't it? must be a lot of hate on this one.
-
English please
-
i like buffalo just not colocrossing and lack of IPv6
-
what happened, client area down?
-
thanks amazing IO there, cannot believe there's no RAID
-
which location? also try again with almalinux8 template?
-
Virmach yaaaaaaaaaaaaaaaaaaaaas welcome to the real LOW END SPIRIT
-
awwwww yeahhhhhhhhh
-
how much did you save?

> @yoursunny said: thats good savings. not sure about the efforts though.
-
thats my premium backup box now lol. critical redundancy
-
yeah, maybe hostcram
-
like you i didn't have a positive experience with Oracle free tier, which is why i bought the annual vpses below to idle:

* Inceptionhosting 2C2G30GB 25EUR (transferred out)
* greencloudvps 2222 deal 2C2G22GB EPYC $22
* Webhosting24 special 2C2G100GB 2697v3 22EUR
* hostcram 2C2G20GB 5950X LXC $25

all in this 2GB ram tier…
-
Wow very very interesting.
-
OP mentioned that specs are not important. Besides, Oracle Cloud has better IO on bigger storage: 200GB performs better than the default 50GB.
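A rough sketch of why: Oracle's block volume performance scales with volume size. Assuming the commonly documented Balanced-tier figure of 60 IOPS per GB with a 25,000 IOPS cap (these numbers change, so verify against Oracle's current docs):

```shell
# Estimated IOPS ceiling for an OCI block volume at the Balanced tier.
# ASSUMPTION: 60 IOPS/GB, 25,000 IOPS cap -- check Oracle's docs.
iops_for() {
  awk -v gb="$1" 'BEGIN { i = gb * 60; if (i > 25000) i = 25000; print i }'
}
iops_for 50    # default boot volume size
iops_for 200   # the larger volume mentioned above
```

Under that assumption the 200GB volume gets roughly four times the IOPS ceiling of the 50GB default.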
-
how much did you save?
-
how has servarica storage been? im concerned with the HH situation now, although to be fair i have had zero total failures, just one xfs_repair. does anyone have a network YABS of servarica?
-
GB4 working now! my anxiety is over.
-
it was my first xfs_repair, but I left it at that and hosthatch came in and helped with the repair too. now it's back online, woohoo
-
it just got fixed like 5 mins ago. EDIT: no, others failed again on GB4
-
does it look like this?
-
@lapua i just tried one and it failed too, so its not a swap issue
-
maybe GB4 is possible
-
they look like they need more swap
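For anyone hitting the same thing, a quick way to see how much swap the VM actually has (Geekbench is the usual OOM victim on small VMs with none):

```shell
# Print current swap size in MB; 0 means Geekbench may get OOM-killed.
swap_mb=$(free -m | awk '/^Swap:/ {print $2}')
echo "swap: ${swap_mb} MB"
# Adding a 1G swap file (as root) is the usual fix:
#   fallocate -l 1G /swapfile && chmod 600 /swapfile
#   mkswap /swapfile && swapon /swapfile
```

The swap-file recipe in the comments is the standard one; it needs root and a filesystem that supports fallocate.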
-
fuck, mine in LAX is down too, looks like the disk is corrupted
-
what kernel is it on? edited: ooh 5.15 already
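For reference, the two quick checks behind that question: the running kernel release, plus any critical messages left in the kernel ring buffer (dmesg may need root on some distros, hence the fallback):

```shell
# Show the running kernel release (e.g. 5.15.x as noted above).
uname -r
# Look for recent emergency/critical kernel messages, if readable.
dmesg --level=emerg,alert,crit 2>/dev/null | tail -n 5 || true
```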