@gogogo said: I tried everything: reboot, shutdown, reinstall, rescue. But it's always offline. So can you tell me what I should do?
@AlwaysSkint is probably correct about the lack of memory and Ubuntu 20.04 being an issue. I am working with someone on the OGF with the same problem on DALZ007, also with an Ubuntu 20.04 ISO install, on a larger VPS. Originally purchased as Ryzen, never migrated, it worked fine from April until July 5th, when it went down, and it has never booted since. The node is confirmed up, as other people have confirmed working VMs on DALZ007, and VirMach has confirmed it here. Same symptoms as you: reboot, shutdown, no ISO mounted, etc., it stays offline. If we figure anything out I'll let you know.
Node: DALZ007 id: 659218
Certainly, anyone who has the power to make you absurd has the power to make you unjust. - Voltaire
My Dallas 512MB VPS didn't work with Debian either (every time I did an upgrade, it lost the boot drive and wouldn't reboot). A CentOS 8 Stream install didn't work for me, so I installed CentOS 8 and manually upgraded it to CentOS 8 Stream, and it's been up for 7 days now.
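For anyone else converting the same way, the stock CentOS 8 to CentOS 8 Stream path is just a repo swap followed by a distro-sync. A minimal sketch, assuming a vanilla CentOS 8 install with root access and working networking (wrapped in Python only to keep the steps and error handling in one place; it is not anything specific to this VPS or provider):

```python
#!/usr/bin/env python3
"""Minimal sketch: convert a stock CentOS 8 install to CentOS 8 Stream.

Assumes root, dnf, and working networking. Generic procedure, not a
provider-specific tool.
"""
import subprocess

STEPS = [
    # Swap the CentOS Linux repo package for the Stream repo package.
    ["dnf", "-y", "swap", "centos-linux-repos", "centos-stream-repos"],
    # Sync all installed packages over to the Stream versions.
    ["dnf", "-y", "distro-sync"],
]

for cmd in STEPS:
    print("+ " + " ".join(cmd))
    subprocess.run(cmd, check=True)  # abort immediately if a step fails
```

On a 512MB box it's probably worth running this from the VNC console rather than SSH, since the sync can stall or drop the network partway through.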
@AlwaysSkint said: @gogogo
That's the same tiny VPS that I have. What's with Ubuntu? Don't even think about it!
At best, you're looking at Debian 8 with that limited RAM - perhaps, if lucky, a later version. I gave AlmaLinux 8 a shot just to check for template issues, but didn't really expect that to boot.
The "complete" suite of ISOs is unavailable to this node, so netboot.xyz isn't an option for now, and the only one that is available, TinyCore, isn't booting either.
I've created a Debian 8 img file to try out but can't test it because Rescue isn't booting, which is strange.
So the only thing left to do is to wait for @VirMach 's help. Thanks.
There's about 3-5% of the VMs offline, but the node as a whole isn't. Those are on our queue.
@gogogo - I made a mistake on the node ID; it was DALZ004, not DALZ007, but we ended up in the same place as you, not booting no matter what we tried. Guess we are just going to have to wait on VirMach to fix these. Like he said above, it is in the queue. Hope you get going soon™
Certainly, anyone who has the power to make you absurd has the power to make you unjust. - Voltaire
Okay, after working on this all day, outside of the two nodes somehow still down (SJCZ004 and NYCB036), I've counted these specifically and we've gone down from something like 2% of people after the most recent round of migrations to exactly 0.91% of people on Ryzen having a VPS with an incorrect IP or one that is non-bootable. At this point, for the other two, I basically just need the DC hands to at least reboot them and then I'll probably be able to take it from there... I hope it doesn't end up taking two weeks for that type of request, but we'll see as we're on our way there.
All the others left have been organized and require a re-migration at this point. This number also includes all the broken VMs from the Ryzen migrate button, which make up the majority and are clustered on 28 different nodes. What we're going to do is first create the LVMs manually to fix the majority and boot them up, and then either try to restore the data within 24 hours if it exists, or, if it takes any longer, send out a ticket and ask people if they still want their old data (in case they've already started using the VM and loading in their own, or just don't have important data). A rough sketch of what that LVM step looks like is below.
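For context only: "creating the LVMs manually" on a KVM node of this kind usually boils down to recreating one logical volume per broken guest and booting it empty, with any data restore happening separately. The sketch below illustrates that; the volume group name and the VM list are placeholders, not VirMach's actual tooling or naming scheme.

```python
#!/usr/bin/env python3
"""Sketch: recreate missing logical volumes for a list of KVM guests.

'vg_data' and the (name, size) list are hypothetical placeholders.
"""
import subprocess

VG = "vg_data"                       # volume group used for guest disks (assumed)
MISSING = [("kvm1234_img", "25G"),   # (logical volume name, size) per broken VM
           ("kvm5678_img", "50G")]

def existing_lvs(vg: str) -> set[str]:
    out = subprocess.run(["lvs", "--noheadings", "-o", "lv_name", vg],
                         check=True, capture_output=True, text=True).stdout
    return {line.strip() for line in out.splitlines() if line.strip()}

present = existing_lvs(VG)
for name, size in MISSING:
    if name in present:
        print(f"skip {name}: already exists")
        continue
    # Create an empty volume; restoring data (if any exists) is a separate step.
    subprocess.run(["lvcreate", "-L", size, "-n", name, VG], check=True)
    print(f"created {VG}/{name} ({size})")
```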
This last bit took astronomically longer to identify and fix, but now that it's down to the ending portion and heavily organized, with all the other problems (the other 1% fixed today) out of the way, it should go smoothly.
Then we'll probably run our auto credit and close ticket script again for everyone, probably piss off a few hundred people at least, but actually be able to get on track with tickets as otherwise the majority of the tickets are going to end up being for issues we already resolved. I'll try to wait on this until we get the other two nodes back up and finalized as well and any remaining migrations done.
From the fact that I still have a number of VPSes that haven't migrated yet, I'm guessing there is a certain number in total that haven't changed yet. I'm assuming the remaining migrations will hopefully be to existing systems that have been tested?
@Daevien said:
From the fact that I still have a number of VPSes that haven't migrated yet, I'm guessing there is a certain number in total that haven't changed yet. I'm assuming the remaining migrations will hopefully be to existing systems that have been tested?
All systems are existing systems that have been tested, some more than others. The newer nodes I've actually spent less time testing, not as a result of being careless, but because a lot of the issues were already ironed out, so it was a quicker fix and it's possible fewer of these core issues get passed on to those nodes. We have not sold anything on these Ryzen nodes for several weeks now, so a good portion of migrations will be to existing nodes that freed up a little bit. The rest will mostly be Hivelocity setups, and the final bit will be the last servers I've built, which essentially have the best solid state drives (IMO) so they shouldn't "drop off", and the motherboards have been more extensively tested and pre-flashed (mostly). Any issues these nodes may have will be related to power and brackets falling apart during shipping, which get fixed before a node goes online. I'm being specific about where I deploy them to ensure that if we run into problems, it should hopefully be a quick fix. The only caveat is that the Hivelocity locations will have zero RAM available. We had to do so many RAM swaps, and they're all with existing partners, so I ran out, but I memory tested everything.
Networking configuration should also give us no problems straight out of the gate for the new nodes, and I've not sent any additional servers (outside of storage) to the partner who shall not be named, who has taken over a week for a power button press and two weeks for switch configuration or initial node deployments.
We've also gotten quite good at migrations by now so those will hopefully be as smooth as they get for the last bit.
I did also finally get some switch access, so you will either see more nodes up in NYC that we can use for migrations, or you'll see me break the networking.
Is PHXZ004 working fine? I couldn't log in via SSH, so I logged in via VNC, but even ping to xxx.xxx.xxx.1 failed.
No reply required. Instead of writing a reply to me, you can rest your eyes and body 😉
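For anyone in the same "VNC works but nothing else does" spot, it helps to separate gateway, upstream, and DNS failures before filing anything. A generic sketch is below; the gateway address is a placeholder, so substitute the .1 of your own subnet, and it assumes a Linux guest with iputils ping.

```python
#!/usr/bin/env python3
"""Sketch: separate 'gateway down' from 'upstream down' from 'DNS down'.

GATEWAY is a placeholder; substitute the .1 address of your own subnet.
"""
import socket
import subprocess

GATEWAY = "203.0.113.1"   # placeholder (TEST-NET-3), replace with your gateway
TARGETS = [("gateway", GATEWAY), ("public IP", "8.8.8.8")]

def ping(host: str) -> bool:
    """One ICMP echo with a 3-second reply deadline (Linux iputils ping)."""
    r = subprocess.run(["ping", "-c", "1", "-W", "3", host],
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return r.returncode == 0

for label, host in TARGETS:
    print(f"{label:10} {host:15} {'reachable' if ping(host) else 'NO REPLY'}")

try:
    socket.getaddrinfo("example.com", 80)
    print("DNS        resolves")
except socket.gaierror:
    print("DNS        FAILS (expected if the public IP is unreachable)")
```

If the gateway itself doesn't answer, it's a node or switch problem and nothing inside the guest will fix it.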
I was going to get a flight to San Jose for tomorrow morning but I'm still waiting to see how we can even get DC access since it's the first time. I might still go on Tuesday and see if I can beat the DC hands to it for SJCZ004 at this point, only about an hour flight.
@tototo said:
Is PHXZ004 working fine? I couldn't log in via SSH, so I logged in via VNC, but even ping to xxx.xxx.xxx.1 failed.
No reply required. Instead of writing a reply to me, you can rest your eyes and body 😉
Phoenix actually has some of our infrastructure on it and I can confirm it's being terrible. But it's still mostly usable. We need to redo network configuration on it still.
All my VPSes (LA) can't ping 8.8.8.8. Any explanation for that?
Very unlucky user or very lucky user based on preference, when it happened, etc. Maybe your services are getting migrated to Ryzen, maybe they're all broken.
NYC storage node: I had to do a lot of weird setups for this one to make it work, as we faced problems such as a disk going missing, the wrong switch, and so on in the background. Right now it's LACP-aggregated at 3Gbps.
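As an aside, on any Linux box you can confirm what an LACP bond actually negotiated straight from the kernel's bonding status file; a small reader like the one below prints the mode, aggregator IDs, and per-slave link speed. The interface name bond0 is an assumption, not necessarily what this node uses.

```python
#!/usr/bin/env python3
"""Sketch: print LACP/bonding status from /proc/net/bonding/<iface>.

'bond0' is an assumed interface name; requires the Linux bonding driver.
"""
from pathlib import Path

IFACE = "bond0"  # adjust to the actual bond interface name
KEYS = ("Bonding Mode", "MII Status", "LACP", "Aggregator ID",
        "Slave Interface", "Speed")

path = Path("/proc/net/bonding") / IFACE
if not path.exists():
    raise SystemExit(f"{path} not found: no bonding driver or wrong interface name")

for line in path.read_text().splitlines():
    if line.strip().startswith(KEYS):  # str.startswith accepts a tuple of prefixes
        print(line.strip())
```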
It was originally supposed to have 36 disks, then had 31, then Amazon delayed it, then either one died or the DC lost it, so we had 30, and I can't keep waiting at this point to try to do a 36-disk RAID. This is a beefier card, so I figured we'd give it a try with two 16-disk arrays to essentially emulate two smaller Tokyo servers, or to add further fun challenges if I'm wrong about this decision. I configured dm-cache but I don't know what to do with it yet outside of maybe using it for a test VM. SolusVM surprisingly added this feature in V2; whenever that ends up happening we'll be ready, and you can get a whopping 7GB of Gen4 NVMe cache with your 1TB storage (I can probably get it up higher; it has open slots). Don't know if it's any good.
Anyway, Geekbench:
https://browser.geekbench.com/v5/cpu/16224250/claim?key=836554
This is a Threadripper 3960X. It's the infamous motherboard I dropped; I killed two of the RAM slots, so that's why it's at 192GB instead of the planned 256GB and why 36 disks --> 2 x 16 (32) makes more sense, but I ran a lot of tests on it and it was otherwise healthy.
This 9560 has a weird battery (CacheVault) non-issue; I think it just takes longer to boot or has a stricter requirement before initiating. It's been like that since brand new and I've looked into it in every way I could with the limited time, but it only happens on initial boot and it shows a healthy state.
Fun fact though: it's now completely impossible to get a replacement motherboard for these. I've looked so hard. So if it ever dies we'll just do a board swap with an EPYC, 5950X, 6950X by then, or something like that.
9560-16i RAID controller.
Oh, and we already have like 3 NICs now to make it 10Gbps, 20Gbps, and maybe even 40Gbps later. We basically spam ordered and crossed our fingers for this one since it's initially a lot larger than Tokyo and needs the 10Gbps. They just need to connect it.
I'm about to send the maintenance email for one of the current storage nodes; while we were running backups this week, two of the disks on it died and it's degraded. That's one reason I was rushing this out in the last 2 days. I am absolutely not getting DC hands involved at the old DC: every single RAID-related thing they've done recently has been guaranteed to destroy all data, and the last time they worked on this they almost jumbled the data. Luckily that means we already have like 80% of a massive storage server backed up to another storage server, and I'll be moving off the people that didn't get backed up first.
YABS on the first VPS:
Okay, let's pretend I didn't forget to enable the cache.
Alright now let's fill it and bring those numbers down. I'll also try to start moving the other one less aggressively to LAX storage and shifting our disaster recovery backups from that to non-RAID since they're mostly complete. Once we finish the rest of the NY storage we'll let you move back and I apologize in advance for the double move but it's better than it getting yoinked offline.
Any space left on this one for new sign ups?
(Asking for a friend)
What is the consensus with networking in Phoenix (PHXZ003)? Are whole nodes having issues? I cannot ping 8.8.8.8 from the VPS after trying the usual stuff like a hard power-off then on. I have the "Fix Ryzen IP" option, but don't want to just try stuff randomly if the whole site is having issues.
@tetech said:
What is the consensus with networking in Phoenix (PHXZ003)? Are whole nodes having issues? I cannot ping 8.8.8.8 from the VPS after trying the usual stuff like a hard power-off then on. I have the "Fix Ryzen IP" option, but don't want to just try stuff randomly if the whole site is having issues.
My LAX had 8.8.8.8 issues for a while, but it went back up.
Funny enough, I was trying to do a bench.sh test and it was coincidentally down, and up afterwards.
@tetech said: What is the consensus with networking in Phoenix (PHXZ003)? Are whole nodes having issues? I cannot ping 8.8.8.8 from the VPS after trying the usual stuff like a hard power-off then on. I have the "Fix Ryzen IP" option, but don't want to just try stuff randomly if the whole site is having issues.
Not just you, I'm on PHXZ002 and the network was so-so before, but now no network at all for the last ~24 hours. I don't think you can fix this yourself. VirMach mentioned issues in Phoenix above.
@VirMach said: Phoenix actually has some of our infrastructure on it and I can confirm it's being terrible. But it's still mostly usable. We need to redo network configuration on it still.
@willie said:
Looks like my vpsshared site is down, not responding to pings. It's a super low traffic personal site, so I'll survive, but it sounds like things are probably backlogged there.
I'd scream at CC but I've given up on that. They placed a permanent nullroute on the main IP. This is probably the 30th time they've done this, from a single website being malicious. I'm trying to get a server up and just move it at this point.
Again I'd usually be freaking out and working on it immediately to get it online but I'm just being realistic here when there's probably a dozen others in worse states. It sucks, it's unprofessional, but there's no use having more stress over it as that'll just slow me down.
I'm wondering if it's LA10GKVM14, since that node has been down for almost a month already.
I expect he was referring to the shared hosting server in Buffalo with the comment you quoted. I could be wrong, but that was my understanding.
@tetech said:
What is the consensus with networking in Phoenix (PHXZ003)? Are whole nodes having issues? I cannot ping 8.8.8.8 from the VPS after trying the usual stuff like a hard power-off then on. I have the "Fix Ryzen IP" option, but don't want to just try stuff randomly if the whole site is having issues.
My LAX had 8.8.8.8 issues for a while, but it went back up.
Funny enough, I was trying to do a bench.sh test and it was coincidentally down, and up afterwards.
Honestly, at this point I think the 8.8.8.8 issue is actually not on any specific datacenter's end. We had it happen on our WHMCS for a little bit, and that one's not even hosted with any datacenter we use right now for these Ryzens. We've also had it reported on LAX, which is QN, and Phoenix, which is PhoenixNAP, IIRC.
So it has to be something upstream probably, unless it's in some weird way related?
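A cheap way to test the "upstream vs. local" theory from an affected VPS is to probe a few well-known anycast IPs and see whether only 8.8.8.8 drops. A generic sketch, nothing provider-specific; the target list is just a sample:

```python
#!/usr/bin/env python3
"""Sketch: check whether 8.8.8.8 specifically is unreachable or everything is.

Uses the system ping (Linux iputils); the target list is an arbitrary sample.
"""
import subprocess

TARGETS = {
    "8.8.8.8": "Google DNS",
    "1.1.1.1": "Cloudflare DNS",
    "9.9.9.9": "Quad9 DNS",
}

def alive(ip: str) -> bool:
    r = subprocess.run(["ping", "-c", "2", "-W", "2", ip],
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return r.returncode == 0

results = {ip: alive(ip) for ip in TARGETS}
for ip, ok in results.items():
    print(f"{ip:10} {TARGETS[ip]:15} {'up' if ok else 'DOWN'}")

if not results["8.8.8.8"] and any(results.values()):
    print("8.8.8.8 is down while others answer: points at routing/peering, not the node.")
elif not any(results.values()):
    print("Nothing reachable: local networking or node problem.")
```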
@willie said: I'm wondering if it's LA10GKVM14, since that node has been down for almost a month already.
LA10GKVM14 was part of a massive PDU or power circuit event with them, where like a dozen-plus servers went offline and online, and then offline, causing a bunch of power supplies to fail. A lot of times when we've had power issues like that, CC also moves them to another switch without telling us and then does not configure the networking properly. This is one of those cases, so it's basically been left without functional networking and no one's willing to help. We're marking it as a loss at this point because it's impossible to get them to do anything anymore. It's possible this also has other issues like a failed controller or data corruption as a result of it taking so long for them to hook up another PSU; I think it took 4 or 5 days and the battery drained for the cache, but it doesn't matter. We have to try to locate backups, which we haven't had luck with so far because backups also failed and were closely tied to the power event; I think we have partial backups.
So right now we're mostly stuck on regenerating services.
@tetech said: What is the consensus with networking in Phoenix (PHXZ003)? Are whole nodes having issues? I cannot ping 8.8.8.8 from the VPS after trying the usual stuff like a hard power-off then on. I have the "Fix Ryzen IP" option, but don't want to just try stuff randomly if the whole site is having issues.
Not just you, I'm on PHXZ002 and the network was so-so before, but now no network at all for the last ~24 hours. I don't think you can fix this yourself. VirMach mentioned issues in Phoenix above.
@VirMach said: Phoenix actually has some of our infrastructure on it and I can confirm it's being terrible. But it's still mostly usable. We need to redo network configuration on it still.
Phoenix is completely trashed right now.
@VirMach said: So it has to be something upstream probably, unless it's in some weird way related?
I think so too. Something related geographically in the US.
Yeah, I don't know how reliable this random site is that I found, but:
@VirMach said: We have to try to locate backups which we haven't had luck with so far because backups also failed and were closely tied to the power event, I think we have partial backups.
So right now we're mostly stuck on regenerating services.
I think most of us customers would be happy with fresh idler servers, especially after a month of downtime, so I wouldn't spend any extra time hunting backups. But I usually keep my own backups anyway.
Tokyo is full right now; these will get activated by Wednesday most likely. If you'd like, any of you guys (mucstudio, nauthnael, TrueBlumdfeld, gkl1368) can reply back to me here and request a refund, or I can activate you in San Jose for now and migrate you to Tokyo when it's ready. Or of course you can just wait.
Could you please migrate me to Tokyo?
It's been offline for ten days on SJZ004 and the control panel page times out.
order is: 4678913601
How do you propose a migration of an offline server?
@VirMach said:
How do you propose a migration of an offline server?
Can you migrate my broken VPS in Phoenix to getRandomLocation() without data?
(Yes, needless to say, this is a joke. I'd rather wait than open a ticket)