I’d expected this, but it still sucks.

  • Crogdor@lemmy.world · 9 months ago

    There are two kinds of datacenter admins: those who aren’t using VMware, and those who are migrating away from VMware.

  • brygphilomena@lemmy.world · 9 months ago

    Regrettably, there is currently no substitute product offered.

    I really don’t think you regret a goddamn thing, Broadcom.

    • yeehaw@lemmy.ca · 9 months ago

      If you’re already running Windows, there’s Hyper-V. There’s also Proxmox, and tons of others. So they are mistaken. 🤣

        • yeehaw@lemmy.ca · 9 months ago

          I know, but that’s how I read it when they claim to offer no option.

      • TheHolm@aussie.zone · 9 months ago

        They’re not all in the same league. Do you know of any free type 1 hypervisors out there? Xen, probably.

        • yeehaw@lemmy.ca · 9 months ago

          Proxmox, Xen, and Hyper-V are all considered type 1 as far as I’m aware.

        • Voroxpete@sh.itjust.works · 9 months ago

          I assume what you’re looking for specifically here is a complete platform that you can install on bare metal, not just the actual hypervisor itself. In which case, consider any of these:

          • Proxmox
          • XCP-NG
          • Windows Hyper-V Server Core (basically Windows Server Nano with Hyper-V)
          • Any Linux distro running KVM/QEMU - Add Cockpit if you need a web interface, or use Virt-Manager, either directly or over X-forwarding (see the sketch below)
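
          For that last option, a minimal sketch on a Debian-family distro (package names assumed; adjust for your distro):

              # Install KVM/QEMU with libvirt, plus Cockpit and its VM management page
              sudo apt install -y qemu-kvm libvirt-daemon-system cockpit cockpit-machines
              # Cockpit's web UI is then available at https://<host>:9090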

          • Anarch157a@lemmy.world · 9 months ago

            Any Linux distro running KVM/QEMU - Add Cockpit if you need a web interface, or use Virt-Manager, either directly or over X-forwarding

            No need for X forwarding; you can connect Virt-Manager to a remote system that has libvirt.
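
            For example (a sketch; user and host are placeholders), both virt-manager and virsh accept a libvirt connection URI over SSH:

                # Connect the local virt-manager GUI to a remote libvirt daemon
                virt-manager --connect qemu+ssh://user@host/system

                # The same URI works for ad-hoc CLI management
                virsh --connect qemu+ssh://user@host/system list --all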

            • Voroxpete@sh.itjust.works · 9 months ago

              This is true, but not everyone gets to use a Linux system as their main desktop at work. I’m not aware of a Windows version of virt-manager, but if that existed it would be fucking rad.

        • jelloeater - Ops Mgr@lemmy.world · 9 months ago

          I’m not sure why you’re getting downvoted; you’re right. I’m not sure anyone would run Proxmox as their enterprise hypervisor. I mean, Hyper-V is okay. Slim pickings for big orgs. I know there’s Nutanix, but most folks are moving to the big three for VMs and hosting.

          • ssdfsdf3488sd@lemmy.world · 9 months ago

            I am running Proxmox at a moderately sized corp. The lack of a real support contract almost kills it, which is too bad, because it is a decent product.

  • Moonrise2473@feddit.it · 9 months ago

    RIP VMware.

    Broadcom prefers to milk its top 500 customers with unreasonable fees rather than bother with the rest of the world. They know that nobody with a brain would intentionally start a new datacenter with VMware solutions.

  • Changer098@lemmy.dbzer0.com · 9 months ago

    Well dang, I guess that “learn about proxmox” line on my to-do list just moved a little higher. For the most part, I’ve enjoyed using ESXi and am sad to see it go.

      • dan@upvote.au · 9 months ago

        I like Unraid… It has a UI for VMs and LXC containers like Proxmox, but it also has a pretty good Docker UI. I’ve got most things running in Docker on my home server, but I’ve also got one VM (Windows Server 2022 for Blue Iris) and two LXC containers. (LXC support is a plugin; it doesn’t come out of the box.)

        Docker with Proxmox is a bit weird, since Proxmox doesn’t actually support Docker and you have to run it inside an LXC container or VM.
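
        One common workaround (a sketch; the VM ID and template file name are placeholders) is a nesting-enabled LXC container with Docker installed inside it:

            # On the Proxmox host: create an unprivileged container with
            # nesting enabled so Docker can run inside it
            pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
              --hostname docker-host --memory 2048 \
              --net0 name=eth0,bridge=vmbr0,ip=dhcp \
              --unprivileged 1 --features nesting=1,keyctl=1
            pct start 200

            # Inside the container: install Docker the usual way
            pct exec 200 -- bash -c 'apt update && apt install -y docker.io'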

        • LifeBandit666@feddit.uk · 9 months ago

          I’m in the market for a NAS or thin client for these kinds of things, an upgrade for my RPi Home Assistant.

          I’m stuck on hardware at the moment and think a cheap 2-bay NAS is probably the way to go. My concern is that I won’t be able to run all the things on a NAS, mainly because I’m clueless. This community talks in maths (as Radiohead say), so half the time I’m trying to decipher all the LXCs and other acronyms.

          Anyway, I think I need to learn Proxmox or Unraid, so your comment has me interested.

          My question to you is this: since your server is plugged in via ethernet, can you access the Windows VM via web interface? Or does it require a screen, keyboard, mouse, etc?

          I think I’m gonna be running HA in a VM, along with Adguard and maybe LMS in Docker containers, then probably a Windows VM for the Arr apps and Plex. I assume all these things will have their own port, but I’m just not 100% sure about the actual Windows VM.

          • Scrath@lemmy.dbzer0.com · 9 months ago

            I run a couple of containers on my Lenovo mini PC. I have Proxmox installed on bare metal, and then one VM for TrueNAS, one for Docker containers, and one for Home Assistant OS.

            For me the limiting factor is definitely RAM. I have 20GB (because the machine came with a 2x4GB configuration and I bought a single 16GB upgrade stick) and am constantly at ~98% utilization.

            To be fair, about half of that is eaten up by TrueNAS alone due to ZFS.

            The point I’m trying to make is: make sure you can put enough RAM into your machine. Some NASes have soldered memory you won’t be able to upgrade. The CPU performance you need depends heavily on what you want to do.

            In my case the only CPU-intensive task I have is media transcoding, which can often be offloaded to dedicated hardware like Intel Quick Sync. The only annoying exception is hardware transcoding of x265 (HEVC) media, which is apparently only supported on Intel 7th gen and newer processors, and I have a 6th gen i5… Or maybe I configured something wrong. No clue.
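
            If it is a configuration issue, one quick check (a sketch; assumes the vainfo utility is installed) is to ask VA-API which codecs the iGPU actually exposes:

                # Look for VAProfileHEVC* entries; if none show up, the iGPU
                # can't hardware-transcode x265/HEVC at all
                vainfo | grep -i hevc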

            Edit: I wrote that after reading only the first half of your comment. Regarding connecting a screen: I think I had one connected once to set up Proxmox. Afterwards I just log into the Proxmox web interface. If required, I can use that to get a GUI session on each VM as well.

            • LifeBandit666@feddit.uk · 9 months ago

              Hey, no, you answered a bunch of questions I had there. So I’m looking for an i7 with lots of RAM. Thanks, that’s excellent.

              • Scrath@lemmy.dbzer0.com · 9 months ago

                Just to be sure there isn’t a misunderstanding: by 7th gen I mean any Intel iX-7xxx processor or higher.

                The first number (or first two numbers) of the second part of the processor name gives the processor’s generation. The number immediately following the “i” just denotes the performance tier within that generation. For example, an i5-7500 is a 7th gen part, while an i7-6700 is 6th gen despite the higher tier.

                • LifeBandit666@feddit.uk · 9 months ago

                  Thanks for the correction. I lurked here and in the Reddit one back before the time we don’t talk about, but I have no clue when it comes to hardware. I was given a PC to game on, and while talking to my mate about buying server bits I mentioned getting i7 processors. He told me they would be more powerful than my gaming rig, because that only has an i5.

                  This makes more sense. So I can get an i3-7xxx quad-core mini PC and try to upgrade the RAM and storage.

                  I have a bunch of RAM sticks in a bottom drawer and some HDDs I’ve never managed to boot from yet, so I have things to play with… I just don’t know what they are or whether they work.

                  I love to tinker though. This all sounds like lots of fun

          • dan@upvote.au · 9 months ago

            I’d recommend building your own server rather than buying an off-the-shelf NAS. The NAS will have limited upgrade options - usually, if you want to make it more powerful in the future, you’ll have to buy a new one. If you build your own, you can freely upgrade it in the future - add more memory (RAM), make it faster by replacing the CPU with a better one, etc.

            If you want a small one, the Asus Prime AP201 is a pretty nice (and affordable!) case.

        • ___@lemm.ee · 9 months ago

          I’ve just learned about converting Docker containers to LXC natively, so that’s my next project.

          • dan@upvote.au · 9 months ago

            I personally prefer Docker over LXC since the containers are essentially immutable. You can completely delete and recreate a container without causing issues, because all your data is stored outside the container in a Docker volume.

            Good Docker containers are “distroless”, meaning they contain only the app and the bare minimum dependencies it needs to run, without any extraneous OS stuff. LXC containers aren’t as light since, as far as I know, they always contain a full OS.
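
            A minimal sketch of that delete-and-recreate workflow (nginx is just a stand-in image; the mount path is arbitrary):

                # Data lives in a named volume, so the container is disposable
                docker volume create app-data
                docker run -d --name app -v app-data:/data nginx

                # Throw the container away and recreate it; the data survives
                docker rm -f app
                docker run -d --name app -v app-data:/data nginx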

            • ___@lemm.ee · 9 months ago

              I’m with you for the most part, but I’m slowly moving from Docker to Podman for security and simplicity. LXC is convenient on Proxmox, and you can make a golden snapshot, store your data and config in a bind mount, and replicate some of Docker’s features. Lately I run a privileged LXC with rootless Podman running Dockge. Seems to work well for now.

  • mindlight@lemm.ee · 9 months ago

    Along with the termination of perpetual licensing, Broadcom has also decided to discontinue the Free ESXi Hypervisor, marking it as EOGA (End of General Availability).

    Wiktionary: perpetual (adjective, not comparable): “Lasting forever, or for an indefinitely long time.”

    Hello Proxmox, here I come!

    • kn33@lemmy.world · 9 months ago

      They’re terminating it in the sense that they won’t sell it anymore. They’re not breaking the licenses they’ve already sold (mostly; there was some fuckery with activating licenses sold through third parties).

      • kalpol@lemmy.world (OP) · 9 months ago

        Sort of. The activation license will work as long as you have it. They won’t renew support though, which effectively kills it when the support contract runs out.

        • kn33@lemmy.world · 9 months ago

          You won’t be able to upgrade to new versions when the support contract runs out, but you can install updates to the existing version as long as updates are made for it. This has always been the lifecycle for perpetual licensing. It’s good forever, but at a certain point it becomes a security risk to continue using. The difference here is they won’t sell you another perpetual license when the lifecycle is up.

    • TCB13@lemmy.world · 9 months ago

      Hello Proxmox, here I come!

      Proxmox is questionably open-source, performs poorly, and will most likely end up burning its free users at some point. Get yourself onto LXC/LXD/Incus, which does both containers and VMs, is way more performant and clean, and is also available in Debian’s repositories.

            • moonpiedumplings@programming.dev · 9 months ago

              Nothing is more questionable than LXD, which now requires a contributor license agreement allowing Canonical to not open-source their hosted versions, despite LXD being AGPL.

              Thankfully, it’s been forked as Incus, and Debian is encouraging users to migrate.

              But yeah, they haven’t said what makes Proxmox’s license questionable.

              • TCB13@lemmy.world · 9 months ago

                Thankfully, it’s been forked as Incus, and Debian is encouraging users to migrate.

                Yes, the people running the original LXC and LXD projects under Canonical now work on Incus under the Linux Containers initiative. Totally insulated from potential Canonical BS. :)

                The move from LXD to Incus should be transparent, as it guarantees compatibility for now. And even if you install Debian 12 today with LXD from the Debian repository, you’re already insulated from Canonical.

            • TCB13@lemmy.world · 9 months ago

              First, they’re always nagging you to get a subscription. Then they make system upgrades harder for free customers. Then they gatekeep the enterprise repositories in true Red Hat fashion, and they’ve kept important fixes out of the pve-no-subscription repository multiple times.

              • acockworkorange@mander.xyz · 9 months ago

                As long as the source code is freely available, that’s entirely congruent with the GPL, which is one of the most stringent licenses. You can lay a lot of criticism on their business practices, and I would not deploy this on my home server, but I haven’t seen any evidence that they’re infringing any licenses.

                • TCB13@lemmy.world · 9 months ago

                  Okay, if you want to look strictly at the licenses per se, there are no issues there. But I believe we can agree that the rest of what I described is very questionable, and that’s what makes it questionable open-source.

  • sj_zero@lotide.fbxl.net · 9 months ago

    The most important thing for everyone to remember is that if you don’t fully own the thing such that you can install and run it without asking permission, or if it isn’t simply free and open source, then it can go away at any time.

  • 0110010001100010@lemmy.world · 9 months ago

    Really glad I made the transition from ESXi to Docker containers about a year ago. Easier to manage too and lighter on resources. Plus upgrades are a breeze. Should have done that years ago…
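
    A sketch of why upgrades are a breeze (assumes your services are defined in a docker-compose.yml):

        # Fetch newer images, then recreate only the containers whose image changed
        docker compose pull
        docker compose up -d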

    • kalpol@lemmy.world (OP) · 9 months ago

      I need fully segregated machines sometimes, though. I’ve got stuff that only runs on Win98 or XP (old radio programming software).

          • DeltaTangoLima@reddrefuge.com · 9 months ago

            No headaches here - running a two node cluster with about 40 LXCs, many of them using Docker, and an OPNsense VM. It’s been flawless for me.

            • TCB13@lemmy.world · 9 months ago

              If you’re already using LXC containers, why are you stuck with their questionable open-source and ass of a kernel when you can just run LXD/Incus and have a much cleaner experience on a pure Debian system? It boots way faster, fails less, and is more open.

              Proxmox will eventually kill the free/community version; it’s just a question of time, and they don’t offer anything particularly good over what LXD/Incus offers.

              • DeltaTangoLima@reddrefuge.com · 9 months ago

                I’m intrigued, as your recent comment history keeps taking aim at Proxmox. What did you find questionable about them? My servers boot just fine, and I haven’t had any failures.

                I’m not uninterested in genuinely better alternatives, but I don’t have a compelling reason to go to the level of effort required to replace Proxmox.

                • TCB13@lemmy.world · 9 months ago

                  comment history keeps taking aim at Proxmox. What did you find questionable about them?

                  Here’s the thing: I ran Proxmox professionally in datacenters from 2009 until the end of last year, multiple clusters of around 10-15 nodes each. I’ve been around for all the wins and fails of Proxmox; I’ve seen the rise and fall of OpenVZ, all the SLES/RHEL compatibility issues, and then their move to LXC containers.

                  While it worked most of the time and their paid support was decent, I would never recommend it to anyone since LXD/Incus became a thing. The Proxmox PVE kernel has a lot of quirks and hacks. Besides the fact that it’s built on Ubuntu’s kernel, which is already a dumpster fire of hacks (waiting for someone upstream to implement things properly so they can backport them and ditch their own implementations), they add even more garbage on top of it. I’ve been burned countless times by their kernel when it comes to drivers, having to wait months for fixes already available upstream, or for them to fix their own shit after they introduced bugs.

                  At some point not even simple things such as OpenVPN worked fine under Proxmox’s kernel. Realtek networking was probably broken more often than it worked, ZFS support was introduced with guaranteed kernel panics, and upgrading between versions was always a shot in the dark: half the time you would get a half-broken system that could boot and pass a few tests but would randomly fail a few days later. Their startup is slow, slower than any other solution’s; it even includes daemons that exist just to ensure other things are running (because most of them don’t even start properly with the system on the first try).

                  Proxmox is considerably cheaper than ESXi, so some businesses use it like we did, but it’s far from perfect. Eventually Canonical invested in LXC, and a very good container solution, much better than OpenVZ and co., was born. LXC got stable and widely used, LXD added the higher-level hypervisor management, networking, clustering, etc., and since the Incus fork we now have all that code truly open-source, with its creators working on the project without Canonical’s influence.

                  There’s no reason to keep using Proxmox, as LXC/LXD has gotten really good in the last few years. Once you’re already running LXC containers, why keep dragging along all the Proxmox bloat and potential issues when you can use LXD/Incus, made by the same people who made LXC, which is WAY faster, more stable, more integrated, and free?

                  I’m not uninterested in genuinely better alternatives, but I don’t have a compelling reason to go to the level of effort required to replace Proxmox.

                  Well, if you have some time to spare for testing stuff, try LXD/Incus and you’ll see. Maybe you won’t replace all your Proxmox instances, but you’ll end up running a mixed environment like I did for a long time.

              • fuckwit_mcbumcrumble@lemmy.world · 9 months ago

                why are you stuck with their questionable open-source and ass of a kernel

                Because you don’t care about it being open source? Just working (and continuing to work) is a pretty big motivating factor to stay with what you have.

                • TCB13@lemmy.world · 9 months ago

                  Because you don’t care about it being open source?

                  If you’re okay with the risk of one day ending up like the people running ESXi now, then you should be fine. Let’s just say that not “ending up with your d* in your hand” when you least expect it is also a pretty big motivating factor to move away from Proxmox.

                  Now, I don’t see how someone in a self-hosting community on Lemmy could bluntly state what you just have.

      • eerongal@ttrpg.network · 9 months ago

        I agree with the other poster; you should look into Proxmox. I migrated from ESXi to Proxmox 7-8 years ago or so, and honestly it’s been WAY better than ESXi. The migration process was pretty easy too; I was able to bring over the images from ESXi and load them directly into Proxmox.
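
        A sketch of what that import looks like (VM ID, paths, and storage name are placeholders):

            # Create an empty VM shell, then import the ESXi disk into Proxmox storage
            qm create 100 --name migrated-vm --memory 4096 --net0 virtio,bridge=vmbr0
            qm importdisk 100 /mnt/esxi/myvm-flat.vmdk local-lvm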

      • TCB13@lemmy.world · 9 months ago

        Fear not, my friend. Get yourself onto LXC/LXD/Incus, as it can do both containers and full virtual machines. It’s available in Debian’s repositories and is fully and truly open-source.

    • TCB13@lemmy.world · 9 months ago

      So… you replaced a proprietary solution with a free one that depends on proprietary components and a proprietary distribution mechanism? Get yourself onto LXC/LXD/Incus, which does both containers and VMs and is available in Debian’s repositories. Or Podman, if you really like the mess that Docker is.

      • kalpol@lemmy.world (OP) · 9 months ago

        I’ve seen you recommending this here before - what’s its selling point vs., say, qemu-kvm? Does Incus do virtual networking without having to straight up learn iptables or whatever? (Not that there is anything wrong with iptables, I just have to choose what I can learn about.)

        • TCB13@lemmy.world · 9 months ago

          Does Incus do virtual networking without having to straight up learn iptables or whatever?

          That’s just one of the things it does. It goes much further: it can create clusters; download, manage, and create OS images; run backups and restores; bootstrap things with cloud-init; and move containers and VMs between servers (even live, sometimes). Another big advantage is that it provides a unified experience for both containers and VMs: no need to learn two different tools/APIs, as the same commands and options manage both. Even profiles defining storage, network resources, and other policies can be shared and applied across both containers and VMs.
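
          A minimal sketch of that unified workflow (instance and network names are just examples):

              # One managed bridge with NAT, shared by containers and VMs
              incus network create demo-br ipv4.address=10.10.10.1/24 ipv4.nat=true

              # Same command for a system container and for a full VM
              incus launch images:debian/12 c1 --network demo-br
              incus launch images:debian/12 vm1 --vm --network demo-br

              # Same management commands for both
              incus exec c1 -- apt update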

  • Lettuce eat lettuce@lemmy.ml · 9 months ago

    XCP-ng or Proxmox if you need a bare-metal hypervisor. Both are open source, powerful, and mature, and both have large communities with lots of helpful documentation.

    I think you can migrate ESXi VMs directly to XCP-ng. I moved onto it about 6 months ago and it has been solid. Steep learning curve, but really great once you get the hang of it, and enterprise-grade if you need stuff like HA clustering and complex virtual networking solutions.

    • Disaster@sh.itjust.works · 9 months ago

      I managed to migrate all mine to libvirt when I dumped ESXi. They dropped support for the old Opteron I was running at the time, so I couldn’t upgrade to v7. Welp, Fedora Server does just as well, and I’ve been moving the VM-hosted services into containers anyway.

      Ofc… well, we’ll see what IBM does with Red Hat. Probably something like this, eventually. They simply can’t help themselves.

    • TCB13@lemmy.world · 9 months ago

      This was totally expected, even before Broadcom bought them. It’s the same thing we saw with CentOS/Red Hat, and it will happen with Docker/Docker Hub and all the people that moved from CentOS to Ubuntu.

  • Brickfrog@lemmy.dbzer0.com · 9 months ago

    Sucks, but not surprising. Broadcom has a history of doing things like this, ugh. Even with their paid products they jack up the price so much that the only customers who stick around are the enterprise types that are locked in and can’t easily migrate for various reasons.

  • Decronym@lemmy.decronym.xyz (bot) · 9 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    ESXi           VMware virtual machine hypervisor
    HA             Home Assistant automation software / High Availability
    LTS            Long Term Support software version
    LXC            Linux Containers
    NAS            Network-Attached Storage
    Plex           Brand of media server package
    RPi            Raspberry Pi brand of SBC
    SBC            Single-Board Computer
    ZFS            Solaris/Linux filesystem focusing on data integrity
