Hi! I'm new to self-hosting. Currently I am running a Jellyfin server on an old laptop. I am very curious to host other things in the future, like Immich or other services. I see a lot of mention of a program called Docker.

Searching for this on the internet, I am still not very clear on what it does.

Could someone explain this to me like I'm stupid? What does it do, and why would I need it?

Also, what are other services that might be interesting to self-host in the future?

Many thanks!

EDIT: Wow! Thanks for all the detailed and super quick replies! I've been reading all the comments here and am concluding that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!

  • Black616Angel@discuss.tchncs.de · 14 days ago

    Please don’t call yourself stupid. The common internet slang for that is ELI5 or “explain [it] like I’m 5 [years old]”.

    I’ll also try to explain it:

    Docker is a way to run a program on your machine, but in a way that the developer of the program can control.
    It's called containerization: the developer can make a package (an image) with an operating system and all the software they need, and ship that directly to you.

    You then need software such as Docker (or Podman, etc.) to run this container.

    Another advantage of containerization is that all changes stay inside the container except for directories you explicitly want to add to the container (called volumes).
    This way the software can’t destroy your system and you can’t accidentally destroy the software inside the container.
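
    A minimal sketch of what that looks like in practice (the image name, port, and paths here are just examples, not a recommendation):

    ```
    # Start a Jellyfin container in the background; only /srv/media
    # from the host is visible inside it, mounted at /media.
    docker run -d --name jellyfin \
      -p 8096:8096 \
      -v /srv/media:/media \
      jellyfin/jellyfin
    ```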

      • folekaule@lemmy.world · 14 days ago

        I know it's ELI5, but the container-equals-VM comparison is a common misconception and will lead you astray. They do not have the same level of isolation, and they have very different purposes.

        For example, containers are disposable cattle. You don't back up containers; you back up volumes and configuration, but not the containers themselves.

        Containers share the kernel with the host, so your container needs to be compatible with the host (though most dependencies are packaged with images).

        For self hosting maybe the difference doesn’t matter much, but there is a difference.

        • fishpen0@lemmy.world · 14 days ago

          A million times this. A major difference between the way most VMs are run and most containers are run is:

          VMs write to their own internal disk; containers should be immutable and unable to write to their internal filesystem.

          You can have 100 identical containers running and, if you are using your filesystem correctly, only one copy of that container image is on your hard drive. You can have two nearly identical containers running, and then only a small amount of the second container image (another layer) takes up extra disk space.

          Similarly, containers and VMs use memory and CPU allocations differently, and they run with extremely different security and networking scopes, but that requires even more explanation and is less relevant to self-hosting unless you are trying to learn this to eventually get a job in it.

          • chunkystyles@sopuli.xyz · 13 days ago

            containers should be immutable and not be able to write to their internal filesystem

            This doesn't jibe with my understanding. Containers cannot write to the image; the image is immutable. However, a running container can write to its filesystem, but those changes are ephemeral and will disappear if the container stops.

            • fishpen0@lemmy.world · 13 days ago

              This is why I said most containers, most of the time, *should*. It's bad practice to write to the inside of the container, and better practice to treat them as immutable. You can go as far as actively preventing them from writing to themselves when you build them, or in certain container runtimes, but this is not usually how they work by default.

              Also, a container that is stopped and restarted will not lose its internal changes in most runtimes. The container needs to be deleted and recreated from the image for that to happen.
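
              If you want to enforce that immutability yourself, Docker (as one example runtime) can mount the container's root filesystem read-only. A sketch, with the image name as a placeholder:

              ```
              # Root filesystem is immutable; only /tmp is writable,
              # and it lives in RAM (tmpfs) rather than in the container.
              docker run --read-only --tmpfs /tmp nginx:alpine
              ```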

  • grue@lemmy.world · 14 days ago

    A program isn’t just a program: in order to work properly, the context in which it runs — system libraries, configuration files, other programs it might need to help it such as databases or web servers, etc. — needs to be correct. Getting that stuff figured out well enough that end users can easily get it working on random different Linux distributions with arbitrary other software installed is hard, so developers eventually resorted to getting it working on their one (virtual) machine and then just (virtually) shipping that whole machine.

    • Scrollone@feddit.it · 14 days ago

      Isn’t all of this a complete waste of computer resources?

      I've never used Docker, but I want to set up an Immich server, and Docker is the only official way to install it. And I'm a bit afraid.

      • Encrypt-Keeper@lemmy.world · 14 days ago

        If it were actual VMs, it would be a huge waste of resources. That’s really the purpose of containers. It’s functionally similar to running a separate VM specific to every application, except you’re not actually virtualizing an entire system like you are with a VM. Containers are actually very lightweight. So much so, that if you have 10 apps that all require database backends, it’s common practice to just run 10 separate database containers.
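
        As an illustration (names and passwords here are placeholders), giving two apps their own database backend is one command each:

        ```
        # Two fully isolated Postgres instances, one per app.
        docker run -d --name app1-db -e POSTGRES_PASSWORD=secret1 postgres:16
        docker run -d --name app2-db -e POSTGRES_PASSWORD=secret2 postgres:16
        ```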

      • dustyData@lemmy.world · 14 days ago

        On the contrary. Containers rely on the premise of segregating binaries, config, and data. Since a container runs only one app, it ships a bare-minimum system for it. Most container systems also deduplicate common required binaries (shared layers), so containers are usually very small and efficient. A traditional system's libraries can balloon to dozens of gigabytes, of which only pieces are used at a time by different software, whereas containers can easily be made headless and barebones, cutting the fat and leaving only the most essential libraries. They fit very tiny and underpowered hardware without losing functionality or performance.

        Don’t be afraid of it, it’s like Lego but for software.

      • PM_Your_Nudes_Please@lemmy.world · 11 days ago

        It can be, yes. One of the biggest complaints about Docker is that you often end up running the same dependencies a dozen times, because each of your dozen containers uses them. But the trade-off is that you can run a dozen different versions of those dependencies, because each image ships with the specific version it needs.

        Of course, the big issue with running a dozen different versions of dependencies is that it makes security a nightmare. You’re not just tracking exploits for the most recent version of what you have installed. Many images end up shipping with out-of-date dependencies, which can absolutely be a security risk under certain circumstances. In most cases the risk is mitigated by the fact that the services are isolated and don’t really interact with the rest of the computer. But it’s at least something to keep in mind.

      • couch1potato@lemmy.dbzer0.com · 14 days ago

        I've had Immich running in a VM as a snap distribution for almost a year now, and the experience has been leaps and bounds easier than maintaining my own Immich Docker container. There were so many breaking changes over the few years I used the Docker version that it was just a headache. This snap version has been 100% hands-off: it just works.

        https://snapcraft.io/immich-distribution

        • AtariDump@lemmy.world · 14 days ago

          Interesting idea (snap over docker).

          I wonder: does using snap still give you the benefit of not having to maintain specific versions of 3rd-party software?

          • couch1potato@lemmy.dbzer0.com · 14 days ago

            I don't know too much about snap (I literally haven't had to touch my Immich setup), but as far as I remember from when I set it up, that was snap's whole thing: it maintains and updates itself with minimal administrative oversight.

    • akilou@sh.itjust.works · 14 days ago

      But why can I "just install a program" on my Windows machine or on my phone, and have it be that easy?

      • SirQuack@feddit.nl · 14 days ago

        In the case of phones, there is far less variety in operating systems and libraries.

        A typical Android app is (eventually) Java with some bundled dependencies and ties in to known system endpoints (for stuff like notifications and rendering graphics).

        For Windows, these installers are usually responsible for fetching the dependencies, which is why some installers are enormous (and most installers of that size are web installers, so the initial download merely looks smaller).

        Docker is aimed more at developers and server deployment; you don't usually use Docker for desktop applications. That is the area where you want to skip inconsistencies between environments, especially the ones that are hard to debug.

      • GnuLinuxDude@lemmy.ml · 14 days ago

        You might notice that your Windows installation is like 30 gigabytes, and there is a huge folder somewhere in the system path called WinSxS. Microsoft bends over backwards to provide you with basically every version of every shared lib ever, resulting in a system that can run programs compiled decades ago just fine.

        In Linux-land, usually we just recompile all of the software from source. Sometimes it breaks because glibc changed something. Or sometimes it breaks because (extremely rarely) the kernel broke something; Linus considers breaking the userspace API one of the biggest no-nos in kernel development.

        Even so, depending on what you’re doing you can have a really old binary run on your Linux computer if the conditions are right. Windows just makes that surface area of “conditions being right” much larger.

        As for your phone, all the apps built and run for it must target some specific API version (the amount of stuff you're allowed to do is much more constrained). Android and iOS both provide compatibility for that stuff in a similar way to Windows, but the story is much less chaotic than on Linux and Windows (and even macOS), because a phone app is, by comparison, not allowed to do very much.

        • pressanykeynow@lemmy.world · 14 days ago

          In Linux-land usually we just recompile all of the software from source

          That's just incorrect. Apart from the three guys who have no better things to do, no one in "Linux-land" does that.

    • I Cast Fist@programming.dev · 14 days ago

      So instead of having problems getting the fucking program to run, you have problems getting Docker to properly build/run when you need it to.

      At work, I have one program that fails to build an image because a 3rd-party package's maintainers forgot to update their PGP signature; one that builds and runs but, for some reason, gives a 404 error when I try to access it on localhost; and one that whoever the fuck made it literally never ran, because the Dockerfile was missing some 7 packages in the apt install line.

      • turmacar@lemmy.world · 14 days ago

        Building from source is always going to come with complications; that's why most people don't do it. A Docker Compose file that 'just' downloads a stable release from a repo and starts it running is dramatically simpler than cross-referencing all your services to make sure there are no dependency conflicts.

        There’s an added layer of complexity under the hood to simplify the common use case.

  • echutaaa@sh.itjust.works · 14 days ago

    It's a container service. Containers are similar to virtual machines but less separate from the host system. Docker excels at creating reproducible, self-contained environments for your applications. It's not the simplest solution out there, but once you understand the basics it is a very powerful tool for system reliability.

  • Vinny_93@lemmy.world · 14 days ago

    Containerized software. The main advantage is that every application, or stack of applications, runs in its own ecosystem. You can restart a container whenever you like without having to reboot your entire system. You can store all of a container's data in a volume, so if you hit a snag, you can recreate the container without actually losing any of your configs.

    You can also create networks so that apps run in different subnets than other apps.

    Very simply put, a Docker container is like a mini system that runs on your main system.

    Something else I like about Docker is Docker Compose. You can create a container or stack of containers with a single simple YAML file, without actually having to install anything yourself. I manage my containers in Portainer.
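
    For example, a single service can be described in a compose file as short as this (image name and paths are illustrative):

    ```yaml
    services:
      jellyfin:
        image: jellyfin/jellyfin
        ports:
          - "8096:8096"        # web UI
        volumes:
          - ./config:/config   # settings kept outside the container
          - /srv/media:/media  # your library, read from the host
        restart: unless-stopped
    ```

    Run `docker compose up -d` next to that file and the whole thing comes up; `docker compose down` tears it down again without touching the volumes.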

  • PhilipTheBucket@ponder.cat · 14 days ago

    Okay, so way back when, Google needed a way to install and administer 500 new instances of whatever web service they had going on without it being a nightmare. So they made a little tool to make it easier to spin up random new stuff easily and scriptably.

    So then the whole rest of the world said, "Hey, Google's doing that, and they're super smart; we should do that too." So they did. They made Docker, and somehow that involved Y Combinator giving someone millions of dollars for reasons I don't really understand.

    So anyway, once Docker existed, nobody except Google and maybe like 50 other tech companies actually needed to do anything that it was useful for (and 48 out of those 50 are too addled by layoffs and nepotism to actually use Borg / K8s / Docker (don't worry, they're all the same thing) for its intended purpose). They just use it so their tech leads can have conversations at conferences and lunches where they make it out like anyone who's not using Docker must be an idiot, which is the primary purpose of technology as far as they're concerned.

    But anyway in the meantime a bunch of FOSS software authors said “Hey this is pretty convenient, if I put a setup script inside a Dockerfile I can literally put whatever crazy bullshit I want into it, like 20 times more than even the most certifiably insane person would ever put up with in a list of setup instructions, and also I can pull in 50 gigs of dependencies if I want to of which 2,421 have critical security vulnerabilities and no one will see because they’ll just hit the button and make it go.”

    And so now everyone uses Docker and it’s a pain in the ass to make any edits to the configuration or setup and it’s all in this weird virtualized box, and the “from scratch” instructions are usually out of date.

    The end

    • tuckerm@feddit.online · 14 days ago

      I’m an advocate of running all of your self-hosted services in a Docker container and even I can admit that this is completely accurate.

    • i_am_not_a_robot@discuss.tchncs.de · 14 days ago

      Borg / k8s / Docker are not the same thing. Borg is the predecessor of k8s, a serious tool for running production software. Docker is the predecessor of Podman. They all use containers, but Borg / k8s manage complete software deployments (usually featuring processes running in containers), while Docker / Podman only run containers. Docker / Podman are better for development or small temporary deployments. Docker is a company that has moved features from their free software into paid software. Podman is run by Red Hat.

      There are a lot of publicly available container images out there, and most of them are poorly constructed, obsolete, unreproducible, unverifiable, vulnerable software, uploaded by some random stranger who at one point wanted to host something.

    • 0x0@lemmy.zip · 14 days ago

      Incus (formerly LXC/D, on which Docker used to be based) is on my to-learn list.
      Docker is not.

  • state_electrician@discuss.tchncs.de · 14 days ago

    Docker is a set of tools that makes it easier to work with some features of the Linux kernel. These kernel features allow several degrees of separation between processes. For example, by default each Docker container you run will see its own file system and is unable to interact (read: mess) with the original file system on the host or with other Docker containers. Each Docker container is, in the end, a single executable with all its dependencies bundled in an archive file, plus some Docker-related metadata.
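
    You can see that separation for yourself with a small public image (alpine here is just a convenient example):

    ```
    # The container lists its *own* root filesystem, not the host's,
    # and is deleted again afterwards (--rm).
    docker run --rm alpine ls /
    ```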

  • TempermentalAnomaly@lemmy.world · 14 days ago

    A little box you can put your app in.

    If the app does something bad, it doesn't sink your ship. Just throw the box overboard and repackage the app.

    I'm not sure most people need it, but it can be fun to try a new app inside a container. It also lets you do updates that need a restart without shutting down your other services.

  • xavier666@lemm.ee · 14 days ago

    Learn Docker even if you have a single app. I do the same with a Minecraft server.

    • No dependency issues
    • All configuration (storage/network/application management) can be done via a single file (compose file)
    • Easy roll-backs possible
    • Maintain multiple versions of the app while keeping them separate
    • Recreate the setup on a different server/machine using only that single configuration file
    • Config is standardized so easy to read

    You will save a huge amount of time managing your app.

    PS: I would like to give a shout-out to Podman as the rootless alternative to Docker.
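
    Podman's CLI mirrors Docker's, so (as a sketch, using the community itzg/minecraft-server image purely as an example) a rootless Minecraft server is a single command:

    ```
    # Runs without root; world data persists in ./data on the host.
    podman run -d --name minecraft \
      -p 25565:25565 \
      -e EULA=TRUE \
      -v ./data:/data \
      docker.io/itzg/minecraft-server
    ```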

  • dave@lemmy.wtf · 14 days ago

    Good answers already, so I will give you a different example.

    My basic understanding is that Docker was created originally for developers. I'm not sure anyone planned for it to become a way to package up software for end users.

    Before Docker existed, you would have this issue where devs would be working on an app, say Jellyfin, but each dev might be on a different platform (Windows, Mac, Linux), a different OS version, or different versions of whatever software… which meant it often happened that the app would work for one dev but not another. Maybe one dev updated C# to version 2.3 and told everyone else to update, but someone missed the memo and was still running version 2.2, so now Jellyfin won't work for them and time gets wasted figuring out where the mismatch is.

    So Docker was a way to fix that "version hell" problem. Every single thing the app needs in order to run is kept inside the container. One dev updates something to a new version, then that container is shared with all the other devs, and each dev only has to worry about pulling the newest container before they start working on something.

    App settings are kept in one separate location and the app data in another. In the case of Jellyfin, the app data would be the movies or TV-shows folder, for example. When you start the Docker container, it maps (bind-mounts) those two locations/folders into the container, and the Jellyfin app can access them as if they were folders actually stored inside the container.

    Having the settings and data separate like that makes it very easy to update the container to a new version, and for a developer it's probably useful to be able to roll back to an older container for testing. It's similar to how, say, Windows puts the program files in one location and settings in the AppData folder.

    For end users it's handy if there's a new version of Jellyfin or whatever that isn't released yet but you want to try out: you can run two containers at the same time and both of them can access the same settings and data (maybe with the newer one in read-only mode so it doesn't mess up your settings or data!).
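
    A sketch of that side-by-side setup (tags, ports, and paths are illustrative; the read-only mounts are the `:ro` suffixes):

    ```
    # Stable install on port 8096, test install on 8097.
    docker run -d --name jellyfin-stable -p 8096:8096 \
      -v /srv/jellyfin/config:/config -v /srv/media:/media \
      jellyfin/jellyfin:latest

    docker run -d --name jellyfin-test -p 8097:8096 \
      -v /srv/jellyfin/config:/config:ro -v /srv/media:/media:ro \
      jellyfin/jellyfin:unstable
    ```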

  • tuckerm@feddit.online · 14 days ago

    You can think of Docker as something that lets you run all of your self-hosted services inside of their own virtual machine. To each service, it looks like that service is running on its own separate computer. (A Docker container is not actually a virtual machine; it's something much faster than that, but I like to think about it the same way. It has similar advantages.)

    This has a few advantages. For example, if there is a security vulnerability in one of your services, it’s less likely to affect your whole server if that vulnerable service is inside of a Docker container. Even if the vulnerability lets an attacker see files on your system, the only “system” they can see is the one inside of the Docker container. They can’t look at anything else on the rest of your actual computer, they can only see the Docker “virtual machine” that you created for that one service.

  • LittleBobbyTables@lemmy.sdf.org · 14 days ago

    I’m not sure how familiar you are with computers in general, but I think the best way to explain Docker is to explain the problem it’s looking to solve. I’ll try and keep it simple.

    Imagine you have a computer program. It could be any program; the details aren’t important. What is important, though, is that the program runs perfectly fine on your computer, but constantly errors or crashes on your friend’s computer.

    Reproducibility is really important in computing, especially if you’re the one actually programming the software. You have to be certain that your software is stable enough for other people to run without issues.

    Docker helps massively simplify this dilemma by running the program inside a ‘container’, which is basically a way to run the same exact program, with the same exact operating system and ‘system components’ installed (if you’re more tech savvy, this would be packages, libraries, dependencies, etc.), so that your program will be able to run on (best-case scenario) as many different computers as possible. You wouldn’t have to worry about if your friend forgot to install some specific system component to get the program running, because Docker handles it for you. There is nuance here of course, like CPU architecture, but for the most part, Docker solves this ‘reproducibility’ problem.

    Docker is also nice when it comes to simply compiling the software in addition to running it. You might have a program that requires 30 different steps to compile, and messing up even one step means that the program won’t compile. And then you’d run into the same exact problem where it compiles on your machine, but not your friend’s. Docker can also help solve this problem. Not only can it dumb down a 30-step process into 1 or 2 commands for your friend to run, but it makes compiling the code much less prone to failure. This is usually what the Dockerfile accomplishes, if you ever happen to see those out in the wild in all sorts of software.
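
    As a sketch of that, here is a hypothetical Dockerfile for a made-up Node.js app; the 30 steps get written down once, and everyone else just runs `docker build`:

    ```dockerfile
    # Build stage: all the fiddly compile steps live here.
    FROM node:20 AS build
    WORKDIR /app
    COPY . .
    RUN npm ci && npm run build

    # Runtime stage: ships only what is needed to run the result.
    FROM node:20-slim
    WORKDIR /app
    COPY --from=build /app/dist ./dist
    CMD ["node", "dist/index.js"]
    ```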

    And since Docker puts things in 'containers', it also limits what resources that program can access on your machine (which can be very useful). You can set it so that all the files it creates are saved inside the container and don't affect your 'host' computer. Or maybe you only want to give it permission to a few very specific files. Maybe you want to do something like share your computer's timezone with a Docker container, or prevent your Docker containers from being directly exposed to the internet.

    There are plenty of other things that make Docker useful, but I'd say those are the most important ones: reproducibility, ease of setup, containerization, and configurable permissions.

    One last thing: Docker is comparable to something like a virtual machine, but the reason you'd want to use Docker over a virtual machine is the much lower resource overhead. A VM might require you to allocate gigabytes of memory, multiple CPU cores, even a GPU, but Docker is designed to be much more lightweight in comparison.

  • Professorozone@lemmy.world · 14 days ago

    I've never posted on Lemmy before. I tried to ask this question of the greater community, but I had to pick a community and didn't know which one. This shows up as lemmy.world, but that wasn't an option.

    Anyway, what I wanted to know is: why do people self-host? What are the advantages and costs? Sorry if I'm hijacking. Maybe someone could just post a link or something.

    • irmadlad@lemmy.world · 14 days ago

      Anyway, what I wanted to know is: why do people self-host?

      Wow. That's a whole separate thread on its own. I self-host a lot of my services because I am a staunch privacy advocate, and I really have a problem with corporations using my data to bolster their profit margins without giving me due compensation. I also self-host because I love to tinker and learn. The learning aspect is something I really get into. At my age it is good to keep the brain active, and so I self-host, create bonsai, garden, etc. I've always been into technology, from the early days of thumbing through Pop Sci and Pop Mech magazines, which evolved into thumbing through Byte mags.

    • sugar_in_your_tea@sh.itjust.works · 14 days ago

      It usually comes down to privacy and independence from big tech, but there are a ton of other reasons you might want to do it. Here are some more:

      • preservation - no longer have to care if Google kills another service
      • cost - over time, Jellyfin could be cheaper than a Netflix sub
      • speed - copying data on your network is faster than to the internet
      • hobby - DIY is fun for a lot of people

      For me, it's a mix of several of these reasons.

  • CodeBlooded@programming.dev · 12 days ago

    Docker enables you to create instances of an operating system running within a "container," which doesn't access the host computer unless explicitly requested. This is done using a Dockerfile: a file that describes in detail all of the settings and parameters for said instance of the operating system. That might be packages to install ahead of time, or commands to create users, compile code, execute code, and more.

    This instance of an operating system, usually a “server,” is great because you can throw the server away at any time and rebuild it with practically zero effort. It will be just like new. There are many reasons to want to do that; who doesn’t love a fresh install with the bare necessities?

    On the surface (and the rabbit hole is deep!), Docker enables you to create an easily repeated formula for building a server so that you don’t get emotionally attached to a server.
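
    That throw-away-and-rebuild loop looks roughly like this (the name `myserver` is a placeholder):

    ```
    docker build -t myserver .       # bake the formula into an image
    docker rm -f myserver || true    # throw the old instance away, if any
    docker run -d --name myserver myserver
    ```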