• azertyfun@sh.itjust.works · 12 hours ago

    The “problem” with k8s is not that it’s abstract-y (it’s not inherently any more abstract than Docker); it’s that it’s very complex and enterprise-y.

    The need for such a complex orchestration layer is not necessarily immediately obvious until you’ve worked on a complex infra setup that wasn’t deployed with kubernetes. Believe me, once you’ve seen the depths of hell that are hundreds of separately configured customer setups using thousands of lines of Ansible playbooks, all using ad-hoc systems for creating containers/VMs, with even more ad-hoc, hacked-together development and staging environments, k8s suddenly starts looking very appetizing. Instead of an abominable spaghetti of bash scripts, playbooks, and random documentation, you get one common (albeit complex) set of tools, understood by every professional, that manages your application deployment & configuration, redundancy, software upgrades, firewall configs, etc.
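    To make that concrete: most of what those playbooks did by hand collapses into declarative manifests. Here’s a minimal sketch of a Deployment (the app name, image, and port are placeholders, not anything real) covering the deployment, redundancy, and upgrade pieces in one object:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-api              # placeholder name for one of those customer setups
spec:
  replicas: 3                     # redundancy: k8s keeps three copies running
  selector:
    matchLabels:
      app: customer-api
  strategy:
    type: RollingUpdate           # software upgrades: pods are replaced one at a time
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: customer-api
    spec:
      containers:
        - name: api
          image: registry.example.com/customer-api:1.4.2   # placeholder image tag
          ports:
            - containerPort: 8080
```

    Firewall rules go in the same repo as NetworkPolicy objects, so the whole thing is reviewable in one place instead of scattered across playbooks and wiki pages.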

    A small self-hosted production kubernetes cluster doesn’t have to be hard to operate or significantly more expensive than bare metal: you can buy 3U of rack space, plop in 3 semi-large servers (think 128 GB of RAM plus a few TB of SSD RAID each), install Rancher and Longhorn, and now you’ve got a prod cluster big enough for nearly every workload; if you ever need to upgrade it, that means you have so many customers that hiring a dedicated k8s administrator will be a no-brainer.
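    For the storage half of that, here’s roughly what the Longhorn side looks like once the chart is installed; a sketch, with the class name and parameter values chosen for illustration rather than copied from any real setup:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replicated       # illustrative name
provisioner: driver.longhorn.io   # Longhorn's CSI provisioner
parameters:
  numberOfReplicas: "3"           # one replica per node, so losing a single server loses no data
  staleReplicaTimeout: "30"
allowVolumeExpansion: true
reclaimPolicy: Delete
```

    Any PersistentVolumeClaim that asks for this class gets a volume replicated across the three boxes.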

    Or you can buy minutes from AWS because CapEx is the absolute devil and instead you pay several times as much in OpEx to make it someone else’s problem. But if you’re doing that then you’re not comparing against “installing things the old-fashioned way”.

    • mac@lemm.ee · 3 hours ago
      Thanks for the response!

      I personally haven’t rolled a k8s or k3s cluster, so it’s always felt a bit abstract to me. I probably should, though, to demystify it for myself in my work environment.

      Complexity is definitely what I’ve noticed when I see my devops team PR into the ingress directories.

      I guess the abstract issue I see, which ties into the meme I shared above, is that sometimes around deploys we get blips of 503s/504s and we can’t seem to track them down. Is it the load balancer? The Ingress? Kong? The fact that there are so many layers makes infra issues rough to debug.