Just don't forget to replace eth0 with ens3 if you're on a newer Debian release.
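For reference, a minimal sketch of what that looks like with a static ifupdown config in /etc/network/interfaces on newer Debian; the addresses below are placeholders, use the ones from the panel:
# /etc/network/interfaces -- newer Debian names the NIC ens3 instead of eth0
auto ens3
iface ens3 inet static
    address 203.0.113.10/24
    gateway 203.0.113.1
iface ens3 inet6 static
    address 2001:db8:1234::10/64
    gateway 2001:db8:1234::1
You can confirm the actual interface name with "ip link" before editing anything.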
I had a bit of a fumble getting IPv6 working after the London migration. It didn't work straight off the bat, and going into the new CP saw IPv6 was disabled, so clicked enable hoping to get it working.
That resulted in a different /64 than before; I changed the config accordingly but still had no IPv6. Finally I tried the old subnet again, with IPv6 enabled, and everything sprang back to life.
Just checked today and now both /64 subnets appear in the panel, and both work, so was probably just me being a bit quick off the mark when the node reappeared.
This migration to the new control panel went well in my experience.
Agreed, no problems with the migration otherwise, short downtime, good communications. Even seems to have fixed a low priority ticket (outbound port 25 blocked) that's been outstanding for 5 months :-)
I noticed an issue though. I can't re-use old IPv6 addresses from the pre-migration VPS (the /64 subnet is still the same) on my Oslo VPS, which kinda sucks as some of those have rDNS set up (and because of that they also can't be removed from the control panel). They simply don't come online. Newly created addresses work.
I don't have this issue with my Vienna VPS. Old re-used addresses also work on the newly migrated and reinstalled Vienna VPS, so it seems like a bug, or something from the old control panel wasn't purged.
Both VPSes were clean-installed after the migration.
I opened a support ticket - now let's pray.
@cochon said: It didn't work straight off the bat, and going into the new CP saw IPv6 was disabled, so clicked enable hoping to get it working.
Interesting.
I don't see this option. It just says "ENABLED", without an enable/disable option (which is fine by me).
Maybe that's because there's a "LEGACY" button suggesting that I need to reinstall the VPS (I did that from ISO) in order to enable all the features.
@Mumbly said:
I can't re-use old IPv6 addresses from pre-migration VPS ... they also can't be removed from control panel
I don't see any specific addresses configured now in the new control panel under either of my subnets. I can assign a random address (under either /64) interactively on the interface and it just works; it seems to be a fully routed /64. (Edit: including previously configured ones.)
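If anyone wants to repeat that test, something along these lines should do it (the prefix is a placeholder for your own /64):
# bring up an arbitrary address out of the /64 and check it is reachable
ip -6 addr add 2001:db8:abcd::1234/64 dev eth0
ping -6 -c 3 -I 2001:db8:abcd::1234 2001:4860:4860::8888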
@Mumbly said:
I don't see this option. It just says "ENABLED", without an enable/disable option (which is fine by me).
Maybe that's because there's a "LEGACY" button suggesting that I need to reinstall the VPS (I did that from ISO) in order to enable all the features.
Mine is still a legacy install; I haven't re-installed and wasn't planning to, as I'm not clear what advantage, if any, there is in that. The IPv6 subpanel just says enabled for me now as well; it did say disabled whilst I was testing, despite IPv6 being actively used prior, but that was probably me jumping the gun with the migration still in progress.
I take it the difference between routed and non-routed is: with routed you can just add the IP on the server and it works, whereas with non-routed you need to add each IP in the panel first.
@Razza said:
I take it the difference between routed and non-routed is: with routed you can just add the IP on the server and it works, whereas with non-routed you need to add each IP in the panel first.
That "non-routed" is called linked ipv6 I guess, something that solusvm does.
@Mumbly said:
I can't re-use old IPv6 addresses from pre-migration VPS ... they also can't be removed from control panel
I don't see any specific addresses configured now in the new control under either of my subnets. I can assign a random address (under either /64) interactively on the interface and it just works, seems to be a fully routed /64. (Edit:) including previously configured ones.
Update
Two identical (Debian 11) HostHatch setups, freshly re-installed from ISO after the migration.
Vienna:
IPv6 works, both old and new addresses.
Oslo:
IPv6 works, but only for one address (no matter how many of them I add), and after some time even that one dies.
It seems like it's not about old/new addresses actually. I tried a few things, different setups, etc., but no luck. I can't explain to myself why that is. Everything seems correct on the VPS side.
Emil J responded yesterday. I hope he won't give up, otherwise I'll have a VPS without functional IPv6 after the migration.
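In case it helps anyone compare notes, a typical way to put several addresses from the same /64 on a Debian 11 box with ifupdown looks roughly like this (prefix and gateway are placeholders, not HostHatch's actual values):
# /etc/network/interfaces -- one primary address plus extras from the same /64
iface ens3 inet6 static
    address 2001:db8:1::10/64
    gateway 2001:db8:1::1
    up ip -6 addr add 2001:db8:1::20/64 dev ens3
    up ip -6 addr add 2001:db8:1::30/64 dev ens3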
I had a couple of servers migrated over to the new panel and everything (including IPv6) remains working as it was.
@Mumbly said:
I can't re-use old IPv6 addresses from pre-migration VPS ... they also can't be removed from control panel
I don't see any specific addresses configured now in the new control under either of my subnets. I can assign a random address (under either /64) interactively on the interface and it just works, seems to be a fully routed /64. (Edit:) including previously configured ones.
Update
Two identical (Debian 11) HostHatch setups, freshly re-installed from ISO after the migration.
Vienna:
IPv6 works, both old and new addresses.
Oslo:
IPv6 works, but only for one address (no matter how many of them I add), and after some time even that one dies.
It seems like it's not about old/new addresses actually. I tried a few things, different setups, etc., but no luck. I can't explain to myself why that is. Everything seems correct on the VPS side.
Emil J responded yesterday. I hope he won't give up, otherwise I'll have a VPS without functional IPv6 after the migration.
Any improvement in performance?
It's just a migration to the new control panel, not an actual node migration, so everything is pretty much the same (minus this IPv6 issue with one of my VPSes).
I'm actually interested in what @yoursunny considers to be routed vs non-routed IPv6. From what I can tell on the internet, routed seems to be things that use Neighbour Discovery...
Someone else here then said it's when you can just use any IP address you like without pre-configuring it.
But, e.g. in my home setup (which is sadly now IPv4-only after an ISP switch, and I'd previously relied on IPv6 for accessing home machines from elsewhere), I now have a VM with a WireGuard connection to router48.org. That box just has an IP address of 2a06:xxxx:xxxx::1/48 on eth1 (which is a virbr in Proxmox, shared with other machines I want to have IPv6) and 2a06:xxxx:xxxx::2/128 on wg0, which uses eth0 to get the default IPv4 route to the tunnel server. WireGuard provides the default :: route from its config.
So, is this "routed" or not? Each VM that also has this shared network connection can use any IPv6 address it wants, and could choose to carve out a /64 if it wanted to. I'm using cloud-init to specify my chosen IP block and a gateway of 2a06:xxxx:xxxx::1.
If that's not what you call "routed" (and personally, I wouldn't call it routed because I haven't explicitly set up any routing or installed NDP), then what would you do differently to make it be considered routed?
This does differ from my old home router which had a /64 from the ISP and gave each machine a random 64-bit suffix (which I originally thought was based on MAC address, until they changed).
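One rough way to tell the difference on a VPS (placeholder prefix, assumes iputils ping): put an address from the /64 somewhere the upstream gateway cannot neighbour-discover it and see if it still works.
# if the /64 is genuinely routed, the provider forwards the whole prefix to the VM,
# so an address that only exists on lo still gets its return traffic:
ip -6 addr add 2001:db8:2::beef/128 dev lo
ping -6 -c 3 -I 2001:db8:2::beef 2001:4860:4860::8888
# if that fails but the same address works when added to eth0, the /64 is on-link
# and the gateway is resolving each address with neighbour discovery instead.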
Panel looks complete with all the specifications.
@ralf said:
I'm actually interested in what @yoursunny considers to be routed vs non-routed IPv6. From what I can tell on the internet, routed seems to be things that use Neighbour Discovery...
If that's not what you call "routed" (and personally, I wouldn't call it routed because I haven't explicitly set up any routing or installed NDP), then what would you do differently to make it be considered routed?
Oh, so Neighbour Discovery is just the IPv6 name for ARP? I guess I probably am using that then! I guess I still don't know what the difference between routed and non-routed IPv6 is.
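(It fills the same role as ARP but runs over ICMPv6 rather than as a separate protocol; on Linux you can compare the two caches if you're curious:)
arp -n              # IPv4 neighbour/ARP table (needs net-tools)
ip -6 neigh show    # IPv6 neighbour cache, the NDP equivalent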
After migration, my VPS is stuck on this screen. I did fsck through recovery and it fixed a few errors; I rebooted and it still doesn't boot, stuck at the same screen. Any ideas on how to fix this?
Did you use the full disk space before migration? They switched from GiB to GB, meaning 100 GiB in the old panel became 100 GB during the migration, which equals roughly 93.1 GiB.
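The raw numbers, for anyone who wants to check the maths (assuming GB = 10^9 bytes and GiB = 2^30 bytes):
echo '100 * 10^9 / 1024^3' | bc -l   # ~93.13 GiB in a 100 GB volume
echo '20 * 10^9 / 1024^3' | bc -l    # ~18.63 GiB in a 20 GB volume (the 18.6G seen further down)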
Was that true for panel migrations like this? I know it applies to new machines, but not migrations.
No, I was hardly using 10-15% of the disk space.
I had one that was migrated as legacy, and it showed up correctly as 1000 GB.
That would be a very stupid move if true.
It would effectively invalidate most filesystems, wouldn't it? They know how big they used to be.
Indeed it would. But it's always nice to see the stuff that people come up with out of thin air, like the guy claiming we use RAID0 somewhere around here.
This is before migration:
vda 254:0 0 20G 0 disk
├─vda1 254:1 0 1007K 0 part
├─vda2 254:2 0 512M 0 part
└─vda3 254:3 0 19.5G 0 part /
This is after migration and after reinstall:
vda 254:0 0 18.6G 0 disk
├─vda1 254:1 0 18.5G 0 part /
├─vda14 254:14 0 3M 0 part
└─vda15 254:15 0 124M 0 part /boot/efi
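If you want to rule out rounding in those lsblk figures, the byte counts are unambiguous (assuming the disk really is /dev/vda as above):
lsblk -b -o NAME,SIZE /dev/vda
# 20 GiB (old panel) = 21474836480 bytes, while 20 GB = 20000000000 bytes ~ 18.6 GiB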
When you reinstall a "legacy VM", the current process is to remove it and create a new VM using the new system.
This does not mean that existing live volumes are shrunk, as is being suggested in multiple places (the other one being LET). If we shrank an existing volume, that would render the existing VM useless, as can be assumed from your earlier comment in reply to the person who cannot boot up their VM.
Such comments usually lead to a bunch of tickets asking us whether we are about to lose people's data by shrinking their existing VMs, which obviously isn't happening, so please excuse my annoyance.
Does this mean the reinstalled VM will be converted from GiB to GB, i.e. less storage than before?
Yes it does.
But why? Those who supported you before the switch and paid up multiple years in advance get this?
When you reinstall a "legacy VM", the current process is to remove it and create a new VM using the new system.
This does not mean that existing live volumes are shrunk, which is being suggested in multiple places (the other one being LET). If we shrink an existing volume, that would render an existing VMs useless, as can be assumed from your earlier comment in reply to the person who cannot boot up their VM.
Such comments usually lead to a bunch of tickets asking us whether we are about to lose people's data by shrinking their existing VMs, which obviously isn't happening, so please excuse my annoyedness.
OK, I could've been more precise in my first reply; sorry this has led to unnecessary tickets. On the other hand, if you had explained earlier that a reinstallation causes this (destroy and recreate), it would have reduced confusion as well, because that makes it clear that the reinstallation, and not the migration, is the cause, hence:
We've always advertised GB and TB, never GiB or TiB.
I really do not see the moral/ethical dilemma here. If you purchased a plan with 8TB bandwidth and it accidentally came with 9TB, and we changed it to 8TB a year later, it wouldn't really be that big of a scandal, would it?
@hosthatch said:
We've always advertised GB and TB, never GiB or TiB.
I really do not see the moral/ethical dilemma here. If you purchased a plan with 8TB bandwidth and it accidentally came with 9TB, and we changed it to 8TB a year later, it wouldn't really be that big of a scandal, would it?
It's not the fact that you've changed this that's the problem imo, but if this has been 9TB for 2 years and now it'll be 8TB, a short notice that this will happen and is an intended change would be nice.