Kubernetes cluster with multiple cloud servers?

sanvit OG
edited August 2020 in Help

Hello LES fellows,

I have multiple VMs across multiple regions and providers, and since managing all of them is a pain, and to achieve HA, I was thinking of setting up a k8s cluster and using it to deploy/run apps. However, it seems like k8s was actually designed with a single AZ on a single provider in mind. Would running k8s in this kind of situation be a bad idea, or would it work out as expected? Or are there better ways to accomplish similar results?

Any insights would be awesome. Thanks :D

Comments

  • ehab Content Writer

    ideally a cluster within one AZ makes sense, but if your nodes are not too far apart then why not. Just check the bandwidth.

    you can also use Docker Swarm if you know exactly the numbers involved and want less maintenance.
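
    If Swarm does fit, getting a replicated service up is short. Here is a minimal sketch of a stack file (the service name and image are placeholders, and replicas: 3 assumes three Swarm nodes), deployed with: docker stack deploy -c docker-stack.yml demo

        # docker-stack.yml -- minimal example; name and image are placeholders
        version: "3.8"
        services:
          web:
            image: nginx:alpine        # placeholder image
            ports:
              - "80:80"
            deploy:
              replicas: 3              # assumes three nodes in the swarm
              restart_policy:
                condition: on-failure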

    Thanked by (1) sanvit
  • @ehab said:
    ideally a cluster within one AZ makes sense, but if your nodes are not too far apart then why not. Just check the bandwidth.

    you can also use Docker Swarm if you know exactly the numbers involved and want less maintenance.

    So if using k8s, I should make a cluster for each location (and put nodes that are further apart in different clusters), as long as I have sufficient bandwidth?
    Docker Swarm also looks nice, but what do you mean by 'the numbers'?

    Thanks!

  • ehab Content Writer

    if your applications ("developers - rollouts - containers") and endpoints are "small"/limited, then Docker Swarm is the right choice.

    Thanked by (1) sanvit
  • ehab Content Writer

    In a perfect scenario each location should handle its own region, with a backup in the nearest region.
    Of course you can spend more and use cloud load balancers, running their nodes as HA master-workers.

    Thanked by (1) sanvit
  • @ehab said:
    if your applications ("developers - rollouts - containers") and endpoints are "small"/limited, then Docker Swarm is the right choice.

    Got it. Thanks for the insight!

  • akhfa
    edited August 2020

    @sanvit said:
    Hello LES fellows,

    I have multiple VMs on multiple regions and providers, and since managing all of them is a pain, and to achieve HA, I was thinking of setting up a k8s cluster and use it to deploy/run apps. However, it seems like k8s is actually made for single AZ on a single provider in mind. Would running k8s in this kind of situation be a bad idea, or would it work out as expected? Or are there better ways to accomplish similar results?

    Any insights would be awesome. Thanks :D

    Just don't combine servers that are too far apart.

    Initially I had the same idea. But think again: how do you set up your persistent volumes? hostPath removes the flexibility for pods to be spawned on any node, and many storage clusters need sub-1 ms latency and good bandwidth to achieve good performance.
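
    To illustrate the pinning problem, here is a sketch (node name, path, and class name are all hypothetical): a "local" PersistentVolume must declare nodeAffinity, so every pod that claims it can only ever be scheduled on that one node.

        # A local PV is tied to a single node via required nodeAffinity
        apiVersion: v1
        kind: PersistentVolume
        metadata:
          name: db-data
        spec:
          capacity:
            storage: 10Gi
          accessModes: ["ReadWriteOnce"]
          storageClassName: local-storage      # assumed class name
          local:
            path: /mnt/disks/ssd1              # hypothetical disk path
          nodeAffinity:
            required:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: kubernetes.io/hostname
                      operator: In
                      values: ["node-paris-1"] # hypothetical node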

    The best thing you can do is spawn 3-5 "not too far away" master instances and put the close VMs into one cluster. Set up a separate cluster for the servers that are far away. Manage all clusters with Rancher, so you have centralized control over all of them.

    Or maybe you just need something like RunCloud to manage all of your servers?

  • @akhfa said: how do you set your persistent volume

    I was actually thinking of storing all files on S3 (or another object storage). DB latency might also be an issue, but since people use remote DBs too, I figured this wouldn't be a big deal.

    I am thinking of putting one cluster in the western US and one in the EU.

  • I am doing something similar; to handle the hostPath issue, I am dedicating certain nodes to particular deployments, e.g. one for monitoring, one for WordPress.

    Has anyone looked at Longhorn?

  • @chrisfy said: Has anyone looked at Longhorn?

    Assuming you are not talking about the steakhouse, it actually seems pretty promising, especially with the integrated backup feature.

  • ehab Content Writer
    edited August 2020

    Longhorn is easy to install: nice GUI, open source, uses iSCSI, fewer restarts, stable... though I have not thoroughly production-tested it. I have heard good things about StorageOS.

  • @akhfa said: Just don't combine servers that are too far apart.

    Initially I had the same idea. But think again: how do you set up your persistent volumes?

    What you will want to do in such a situation is to label nodes with, for example, region: France, and then use nodeSelector in pod.spec or deployment.spec.template.spec. If you decide to provision PersistentVolumes statically, you can predefine which PersistentVolumeClaim each one will be bound to (claimRef). If you go the dynamic (cloud-provided) way, I'm pretty sure you can define in which region the PV should be created (YMMV based on your cloud's PV driver).

    nodeSelector can also be replaced by nodeAffinity, so in case all the nodes in, say, region: France die or run out of resources, the pods can be spawned on different nodes.
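
    For instance, a sketch (the label, names, and image are examples only), after labeling a node with: kubectl label node worker-fr-1 region=France

        # Deployment pinned to France-region nodes via nodeSelector
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: app-fr
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: app-fr
          template:
            metadata:
              labels:
                app: app-fr
            spec:
              nodeSelector:
                region: France          # hard requirement: France nodes only
              containers:
                - name: app
                  image: nginx:alpine   # placeholder image
              # Softer alternative: replace nodeSelector with preferred
              # nodeAffinity so pods can still land elsewhere if the
              # France nodes die or fill up.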

    @sanvit Kubernetes should be chosen over Docker Swarm if only because of the community size: once you get into trouble with Swarm, you will have a hard time finding useful information. And well... Red Hat is phasing out Docker in its entirety, so expect that community to only shrink. Kubernetes on the other hand... Boy oh boy. Unfortunately, the learning curve for K8s is quite steep, but it's a great way to enrich your CV. Pursue your vision and give K8s a try. Do not give up when your ingress doesn't work or your first deployment gets into CrashLoopBackOff. My advice is to start off with VMs in one region; once you get some experience you can start expanding your cluster to other regions. Since we are talking about VMs here, I advise you to create the cluster with kubeadm so you have less overhead (Rancher is pretty much worth looking into once you have 16GB of RAM on each worker).
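
    For reference, a minimal kubeadm config sketch (version, endpoint, and subnet are placeholders for your own values), used as: kubeadm init --config kubeadm-config.yaml

        # kubeadm-config.yaml -- example values only
        apiVersion: kubeadm.k8s.io/v1beta2
        kind: ClusterConfiguration
        kubernetesVersion: v1.19.0
        controlPlaneEndpoint: "k8s-api.example.com:6443"  # stable address in front of HA masters
        networking:
          podSubnet: "10.244.0.0/16"                      # must match your CNI plugin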

    Thanked by (3) seanho sanvit vimalware
  • akhfa
    edited August 2020

    @chrisfy said:
    Has anyone looked at Longhorn?

    I used it when it was still in beta. One of the simplest storage installations on Kubernetes ;)

    @MrPsycho said:

    @akhfa said: Just don't combine servers that are too far apart.

    Initially I had the same idea. But think again: how do you set up your persistent volumes?

    What you will want to do in such a situation is to label nodes with, for example, region: France, and then use nodeSelector in pod.spec or deployment.spec.template.spec. If you decide to provision PersistentVolumes statically, you can predefine which PersistentVolumeClaim each one will be bound to (claimRef). If you go the dynamic (cloud-provided) way, I'm pretty sure you can define in which region the PV should be created (YMMV based on your cloud's PV driver).

    nodeSelector can also be replaced by nodeAffinity, so in case all the nodes in, say, region: France die or run out of resources, the pods can be spawned on different nodes.

    In my case, I use Longhorn for my cluster storage. Even just combining France and Frankfurt, storage performance drops.
    Maybe I could install one Longhorn instance in France and one in Frankfurt, but I think that becomes a lot of overhead. We live in a low-end world with limited specs ;)
    I think a storage system like EdgeFS can tolerate clusters with high latencies, but I haven't tried it yet :/

    @sanvit said:

    @akhfa said: how do you set your persistent volume

    I was actually thinking of storing all files on S3 (or another object storage). DB latency might also be an issue, but since people use remote DBs too, I figured this wouldn't be a big deal.

    I am thinking of putting one cluster in the western US and one in the EU.

    Then just install a database in each "region" to serve nearby users. But again, what kind of persistent storage will you use for the database? Maybe a local persistent volume / hostPath on a dedicated node is enough?
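
    As a sketch of that per-region database idea (all names here are hypothetical, and it assumes pre-created local PVs under a local-storage class like the example above):

        # One single-replica database StatefulSet per region,
        # pinned to local storage on a dedicated node
        apiVersion: apps/v1
        kind: StatefulSet
        metadata:
          name: db-eu
        spec:
          serviceName: db-eu
          replicas: 1
          selector:
            matchLabels:
              app: db-eu
          template:
            metadata:
              labels:
                app: db-eu
            spec:
              nodeSelector:
                region: eu                # hypothetical region label
              containers:
                - name: postgres
                  image: postgres:12
                  env:
                    - name: POSTGRES_PASSWORD
                      value: changeme     # example only; use a Secret in practice
                  volumeMounts:
                    - name: data
                      mountPath: /var/lib/postgresql/data
          volumeClaimTemplates:
            - metadata:
                name: data
              spec:
                accessModes: ["ReadWriteOnce"]
                storageClassName: local-storage
                resources:
                  requests:
                    storage: 10Gi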

  • @MrPsycho Thanks for the insight. I will try with k8s. :)

    @akhfa said: But again, what kind of persistent storage will you use for the database?

    Actually, a quick Google search told me that running DBs on Docker Swarm isn't a good idea, so my initial thought was to create a separate DB cluster or use some kind of hosted DB. Since I'm going with k8s now, I think I will try local volumes and see what kind of performance I get on a somewhat high-latency cluster.

  • ehab Content Writer

    @MrPsycho said:
    Kubernetes should be chosen over Docker Swarm if only because of the community size: once you get into trouble with Swarm, you will have a hard time finding useful information. And well... Red Hat is phasing out Docker in its entirety, so expect that community to only shrink.

    I know some who went from k8s back to Docker Swarm. As long as Docker is used in k8s, it's not going away any time soon.

  • seanho OG
    edited August 2020

    For DB, my inclination would be to do the HA/failover in the DB, e.g. Postgres streaming WAL replication. Each DB server stores its data on local SSD. You can still use k8s for deployment / lifecycle management of the DB services.
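
    For a rough idea of the primary-side settings streaming replication needs, a sketch (the ConfigMap name is made up, and you would still mount it into your Postgres pods yourself):

        # Primary-side settings for streaming WAL replication (Postgres 12)
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: pg-primary-conf
        data:
          replication.conf: |
            wal_level = replica      # WAL carries enough info for standbys
            max_wal_senders = 3      # one per replica plus headroom
            wal_keep_segments = 64   # retain WAL for lagging replicas
            hot_standby = on         # replicas serve read-only queries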

  • I am interested in this as well

  • @sanvit said: managing all of them is a pain

    is it?

  • @comi said:

    @sanvit said: managing all of them is a pain

    is it?

    Depends :P

  • @sanvit How are you making out? I want to do this too! :)

  • @aaronstuder said:
    @sanvit How are you making out? I want to do this too! :)

    Actually, I have deployed Docker Swarm on some of my nodes for running stuff, and I'm currently learning Kubernetes in a hosted environment. Since k8s has a somewhat steep learning curve, I figured it's best to learn how it works before actually deploying my own cluster.

    For Docker Swarm, I'm not seeing any huge speed loss so far.

  • aaronstuder, this is a great course! I passed the CKA last month thanks to the practice tests it includes.

    One note of warning though: the exam changed this month (it's no longer CKA but CKA2020), so the course might not be as effective as it used to be. As in, until last month you were basically 100% sure to pass the test as long as you were acing the lightning labs and all of the mock exams.

