• 0 Posts
  • 23 Comments
Joined 2 years ago
Cake day: September 9th, 2023


  • yall were being obtuse about my point that one needs to “pay rent” for an internet connection

    No, it was clear to most of us the whole time that you can pay an ISP for an internet connection, and that this necessarily includes some kind of IP address, since the service wouldn’t work without one. Once you have subscribed to a provider’s service, some offer a static IP as a paid add-on.

    SIMO Solis Lite Mobile WLAN Router – $100 one-time purchase price. And they claim: includes 1 GB of free global data volume per month, for the lifetime of the device

    I’m not sure what you’re on about now. You’re still paying rent (though up-front instead of monthly or quarterly), and some IP address is still necessarily included within the price. How is that different to you, other than the fact that you don’t know when it expires?



  • Of course you have to pay for internet service to get the included defaults necessary for it to work. Just like you get a bowl/container when ordering hot soup from a restaurant, and just like a phone number is usually included in the price of telephone service – except that a dynamic IP is somewhat analogous to sharing that phone number, or that bowl of soup, with other customers.

    My point is that a static IP is often a paid add-on while the dynamic IP is the included default, since you wouldn’t be able to use the internet service without some sort of IP address anyway.





  • pirat@lemmy.world to Selfhosted@lemmy.world – Low Cost Mini PCs
    edit-2 · 1 year ago

    I’m in the same situation as you, more or less… I have three new 22TB drives that need an enclosure, preferably for JBOD (no hardware RAID needed), but I can’t figure out which ones are actually good products… I don’t mind using a random-brand product if it’s actually solid.

    I find it very difficult to figure out which ones will support my 22TB drives. And for some of them, it seems, it’s impossible to add new drives to empty slots later (because of hardware RAID, I guess?), which has made me hesitant to buy one with more slots than I have drives, in case they can’t be utilized later on anyway…

    I was looking at the QNAP TR-004 which was mentioned by someone else somewhere on Lemmy some months ago, but IIRC it would be impossible to use the fourth slot later if the drive isn’t included in the hardware RAID configuration…

    EDIT: I have also been looking into so-called “backplanes” as an alternative, since they seem to do the job and are cheaper, but I’m unsure if I’ll need a PC chassis/case/tower for that to actually work?

    If you find something good (products or relevant info), feel free to share it with me.









  • a “tl,dr” bot would probably not even need high end hardware, because it does not matter if it takes ten minutes for a summary.

    True, that’s a good take. Tl;dr for the masses! Do you think an internal or external tl;dr bot would be embraced by the Paperless community?

    It could either process the (entire or selected) collection, adding the new tl;dr entries to the files “behind the scenes”, based on some general settings/prompt tuned for the desired output – or it could do the work on-demand, per document, using either the general settings or custom ones. The on-demand route could be a flow-breaking bottleneck where the hardware isn’t powerful enough to keep up with you. However, that only seems like a temporary problem to me, since hardware, LLMs etc. will keep advancing and getting more powerful/efficient/cheap/nice.
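    The “behind the scenes” batch variant could be sketched roughly like this. Everything here is a hypothetical stand-in: in a real setup the documents would come from the Paperless-ngx REST API and the tl;dr would be written back the same way, and `summarize` would be whatever LLM backend you plug in:

```python
# Sketch of a batch tl;dr pass over a document collection.
# The document dicts stand in for what you'd fetch from Paperless;
# `summarize` is a pluggable callable so any LLM backend fits.

def batch_tldr(documents, summarize, max_chars=4000):
    """Summarize each document's text and return {id: tldr}."""
    results = {}
    for doc in documents:
        # Truncate so slow/low-end hardware isn't fed huge prompts
        text = doc["content"][:max_chars]
        results[doc["id"]] = summarize(text)
    return results

# Example with a trivial stand-in "model" (first sentence as the tl;dr):
def first_sentence(text):
    return text.split(".")[0].strip() + "."

docs = [
    {"id": 1, "content": "Invoice for server parts. Total: 240 EUR. Due in 30 days."},
    {"id": 2, "content": "Lease contract renewal. Signed by both parties."},
]
print(batch_tldr(docs, first_sentence))
# {1: 'Invoice for server parts.', 2: 'Lease contract renewal.'}
```

    Since the summarizer is just a callable, the same loop covers both the batch and the on-demand case – you’d only change when and for which documents it runs.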

    a chat bot do not belong into paperless

    Right – but, conversely, Paperless definitely does belong in some chatbots!


  • I’m not interest in sending my documents to open AI.

    You wouldn’t have to. There are plenty of well-performing open-source models served behind APIs compatible with OpenAI’s, so you can substitute the OpenAI models simply by using a different URL and API key.

    You can run these models in the cloud, either selfhosted or “as a service”.

    Or you can run them locally on high-end consumer-grade hardware, some even on smartphones, and the models are only getting smaller and more performant, with very frequent advancements in training, tuning and prompting. Some of these open-source models already claim to outperform GPT-4 in some regards, so this solution seems viable too.
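    As a sketch of how little changes when you swap backends: the request shape stays identical and only the base URL and key differ. The `/v1/chat/completions` path follows the OpenAI-style convention that many open-source servers mimic; the localhost URL is just an example (it happens to match Ollama’s default port), and the keys are placeholders:

```python
import json
import urllib.request

def chat_request(base_url, api_key, model, prompt):
    """Build an OpenAI-style chat completion request (not yet sent)."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        base_url.rstrip("/") + "/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# Same code path, different backend – only URL, key and model name change:
hosted = chat_request("https://api.openai.com", "sk-...", "gpt-4", "tl;dr this")
local  = chat_request("http://localhost:11434", "unused", "mistral", "tl;dr this")
print(hosted.full_url)  # https://api.openai.com/v1/chat/completions
print(local.full_url)   # http://localhost:11434/v1/chat/completions
```

    Sending either request with `urllib.request.urlopen` (or any HTTP client) is then the same code regardless of which backend sits at the other end.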

    Hell, you can even build and automate your own specialized agents in collaborating “crews” using frameworks, and so much more…

    Though, I’m unsure whether the LLM functionality should be integrated into Paperless itself, or rather implemented by calling the Paperless API from the LLM agent. I can see how both approaches could fit specific use cases.