Oops, I misread the OP, so I gave an off-topic reply.
Chances are that there's some resistance to Red Hat's latest defaults.
A few disses of XFS:
1. It cannot be shrunk, only grown.
2. Journaling cannot be disabled. This may result in a shortened lifespan on SSDs.
It is hard to reply to the second one without some sort of insult or mental health suspicion directed at anyone who disables journaling in a filesystem to save on SSD lifespan.
But the first one is very valid: because of the resizing limitation I do not use XFS in a lot of places where I would like to, as otherwise it is much more advanced and performant than ext*.
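To illustrate the difference (a rough sketch; the device names, sizes and mount point are made up for the example): on ext* the journal is optional and the filesystem can be shrunk, while XFS can only grow:

    mkfs.ext4 -O ^has_journal /dev/vdb1   # create ext4 without a journal
    tune2fs -O ^has_journal /dev/vdb1     # or remove the journal later (filesystem unmounted)
    e2fsck -f /dev/vdb1                   # mandatory check before an offline shrink
    resize2fs /dev/vdb1 10G               # ext4 can be shrunk...
    mkfs.xfs /dev/vdc1
    xfs_growfs /mnt/data                  # ...XFS can only be grown (while mounted); no shrink tool exists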
^ I've got a couple of small (name)servers that I built quickly from provider templates, which turned out to be XFS. Not a problem really, as they are "single use" servers, and if reconfigured at some later date I'd use a minimal install instead.
Otherwise, it's ext* all the way, for the very reason @rm_ said.
[Why bother journaling /tmp & /backup, for example: use ext2.]
/tmp is best mounted as 'tmpfs' anyway. As for backup, I would very much care for my backups not to become corrupted or lost on a power cut or hang (which is what journaling protects against), as that's when the primary copy of the data is at high risk as well.
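A typical fstab line for that, with the size capped (the 2G figure is just an example):

    tmpfs  /tmp  tmpfs  size=2G,noexec,nosuid,nodev,mode=1777  0  0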
Don't trust your provider's templates and install from scratch.
I'm not a great lover of tmpfs, as it'll put a lot of pressure on swap on low-RAM systems. I prefer a fixed disc allocation of, say, 2GB, mounted in the traditional way: noexec, nosuid, etc. Each to their own.
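i.e. a dedicated partition in fstab, something along these lines (device name invented for the example):

    /dev/vda3  /tmp  ext2  noexec,nosuid,nodev  0  2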
I use /backup as a 'live' daily copy in case a client screws something up and needs a quick recovery, and also as an intermediate backup for remote transfers. If the server has been hard/forcefully rebooted or has crashed, then it wouldn't be wise to rely on that /backup anyway, IMHO.
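The 'live' copy plus remote hop would look roughly like this (paths and hostname made up for the example):

    rsync -a --delete /home/client/ /backup/client/   # 'live' daily copy for quick restores
    rsync -a /backup/client/ remote:/archive/client/  # intermediate hop for remote transfers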
@cmeerw said: Don't trust your provider's templates and install from scratch.
True, but there are some corner cases to be covered; sometimes a client can't perform a full install from scratch on really low-end boxes with a limited amount of available RAM, at least with the usual netinstall...
... and sometimes a client could just be lazy.
Personally, I always try to jump through all the hoops I can to install from scratch, but I can understand if this is not what many (or some) people are looking for.
I don't know how many in the "low end" segment are interested in updated templates to the point of deciding whether or not to buy based on the templates offered... we often hear about outdated templates (and maybe the question in the OP is simply this).
By the way, on a low-end VPS, the ability to shrink is not at all, in my opinion, something that guides a customer.
A valid question(?). Probably one needs to choose a file system without journaling at all, rather than disable it.
For me BTRFS and/or ZFS work very well. Has anyone tried any exotic FSes, like x11fs, SpockFS, ytfs, etc.?
There are no modern FSes without journaling, only ancient, unreliable and limited ext2 and FAT.
If you want to try a filesystem with some special treatment of solid state storage, check out F2FS. But modern SSDs are just fine to be used with any FS you want; there's no need to choose a filesystem to suit them.
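Trying it out is just a mkfs away, assuming f2fs-tools is installed (device name is an example):

    mkfs.f2fs /dev/vdb1
    mount -t f2fs /dev/vdb1 /mnt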
+1
@rm_ said:
as otherwise it is much more advanced and performant than ext*.
Is that something which really matters that much on a virtualized server? I mean, if the host system your machine is running on doesn't support certain things natively, what's the performance benefit when you run it within your VM in an optimized way, because the overhead is already there, isn't it?
What about UFS? I think it's the default in most BSD distros.
Is that something which really matters that much on a virtualized server?
It might. It makes no difference that it's virtualized; KVM has very little overhead, so you're almost running natively.
I mean, if the host system your machine is running on doesn't support certain things natively, what's the performance benefit when you run it within your VM in an optimized way, because the overhead is already there, isn't it?
There is no filesystem overhead on the host side, because providers run VMs inside LVM logical volumes; each of those is basically like a partition, not stored inside a filesystem.
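For example, a provider would typically carve out one raw LV per guest, roughly like this (volume group and LV names made up):

    lvcreate -L 25G -n vm1042-disk0 vg0   # raw block device for the guest; no host FS in the I/O path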
@rm_ said:
There is no filesystem overhead on the host side, because providers run VMs inside LVM logical volumes; each of those is basically like a partition, not stored inside a filesystem.
Thanks for your explanation, I wasn't aware of that. I thought some operations go through the host's controller (that's why virtio drivers are needed, for instance).
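Virtio is still involved, it just sits below the guest's filesystem: the paravirtualized disk is backed directly by the raw LV. A minimal KVM invocation to that effect (names made up):

    qemu-system-x86_64 -enable-kvm -m 1024 \
        -drive file=/dev/vg0/vm1042-disk0,format=raw,if=virtio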