

There isn’t really an agreed-on metadata system for ebooks, which is surprising to me, considering the ISBN system is well-established as a credible source.
Uploading ebooks to my CWA instance means a guaranteed metadata edit on each one.
You can also approach this by blocking file types at the download client.
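For example, if your client is qBittorrent (an assumption on my part), the "Excluded file names" field under Options → Downloads takes glob patterns, something like:

*.exe
*.lnk
*.scr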
I know what you’re trying to do, and what those tutorials don’t tell you is that you are shortcutting the normal DNS flow that most apps expect.
DNS isn’t designed to work that way, so some apps (like Firefox) with internal hard-coded DNS functions are going to balk at private RFC 1918 IPs in a public DNS record, or at the missing reverse record.
Again, slow down and think about what you’re trying to do here. You are complicating your stack for no reason other than that you don’t want to set up a local DNS handler.
No, it is not fully working.
Many have tried to explain to you that your setup only works for YOU on YOUR subnet.
You are then asking public tools, which are meant to look up public IPs with publicly-available DNS names, to resolve your internal addresses, which they obviously know nothing about. You’re getting those errors from tools that follow the RFCs because you are putting the equivalent of “bedroom” on the outside of an envelope and expecting the post office to know that it means YOUR bedroom.
For DNS to work properly, the authoritative DNS server should be able to create a reverse lookup (PTR) record for every A record, allowing a DNS client to ask “what record do you have for this IP?” and get a coherent response. Since 192.168.10.0/24 is a non-routable private network, you will never have such a reverse record in public DNS.
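A local DNS handler makes this trivial. As a sketch with dnsmasq (hostname and IP are placeholders, not anything from your setup), a single host-record line creates both the forward and the reverse entry:

# /etc/dnsmasq.conf: name and address are hypothetical
# host-record adds the A record and the matching PTR record together
host-record=jellyfin.home.lan,192.168.10.20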
Wolfgang has done you a disservice by giving you a shortcut that works as a side-effect of DNS before you fully understood how DNS works.
I use rsync and a pruning script in crontab on my NFS mounts. I’ve tested it numerous times breaking containers and restoring them from backup. It works great for me at home because I don’t need anything older than 4 monthly, 4 weekly, and 7 daily backups.
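A stripped-down sketch of that approach (paths and retention numbers are placeholders, not my actual script):

# crontab entry, runs nightly at 03:00:
# 0 3 * * * root /usr/local/bin/backup.sh
# backup.sh: rsync into a dated directory, hardlinking unchanged files against the previous run
TODAY=$(date +%F)
rsync -a --delete --link-dest=/mnt/nfs/backups/latest /srv/appdata/ "/mnt/nfs/backups/$TODAY/"
ln -sfn "/mnt/nfs/backups/$TODAY" /mnt/nfs/backups/latest
# pruning: drop dailies older than 7 days (weeklies and monthlies pruned the same way on their own schedule)
find /mnt/nfs/backups -maxdepth 1 -type d -name '20*' -mtime +7 -exec rm -rf {} +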
However, in my job I prefer something like Bacula. The extra features and granularity of restore options make a world of difference when someone calls because they deleted prod files.
Is your container isolated from your internal network?
If I were to compromise your container, I’d immediately pivot to other systems on your private network.
Why do the difficult thing of breaking out of a container when there’s a good chance I can use the credentials I got breaking in to your container to access other systems on your network?
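If you’re running Docker (an assumption on my part), one simple mitigation is an internal network with no route out, so a compromised container can’t reach the rest of your LAN:

# create a network with no external/LAN routing, then attach the container to it
docker network create --internal isolated
docker run -d --name myapp --network isolated myimage

Anything that legitimately needs to talk to it, like a reverse proxy, joins the same network.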
Presuming you have not limited edge port 22 to one or two IPs, and that you are not translating a high external port to 22 internally, the danger is that you are allowing the entire internet to hammer away at your SSH. With the setup you describe, you will most definitely see evidence of break-in attempts in your SSH endpoint and firewall logs.
Zero days for SSH do exist, so it’s just a matter of time before you’re compromised if you leave this open.
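Limiting the edge to a couple of sources is a one-liner on most firewalls. With ufw, for example (203.0.113.5 standing in for your trusted IP):

# allow SSH only from one trusted address, deny everyone else
ufw allow from 203.0.113.5 to any port 22 proto tcp
ufw deny 22/tcp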
No it doesn’t. I’ve sold items to European buyers from my location in Canada.
We are now at the point where you are completely fabricating your responses, and therefore no productive outcome can be achieved here.
Have a better day.
Again, for the nth time, NO ONE IS ASKING FOR THAT. They are asking how flohmarkt works. YOU are (for some reason) insisting that we all want a centralized market.
You must be joking. eBay does not now, nor has it ever, worked that way.
Come on.
but apparently people here in this comment thread think this is bad design 🙄
And on top of it, you are becoming belligerent to users, insisting they don’t know what they’re talking about.
They are local listings for buying used stuff that you might not even have known you wanted before browsing the listing.
That is not how I nor anyone I know uses our listings on Kijiji. I go to look for specific things I don’t want to buy new. I do not browse used stuff for sale. I’ve personally bought motorbikes 600km away because the search area has to be bigger for more niche items. I also set up the sale of a car to a guy 1800km away via Kijiji.
I couldn’t do either of these using flohmarkt, so it isn’t really useful to me, federated or not.
Decentralization is not the issue here. This is a design choice that doesn’t understand how the service will be used.
In the example of my country, Canada, let’s say I have two flohmarkt servers: east and west. To look for a certain make and model of car, I have to check my region first, then sign out, sign back into the other region?
Why would anyone continue to use this as a shopping mechanism?
It"s one thing to limit searches bases on geographic location of items, but I should be able to change that to look up items at a destination to which I’m travelling, or just to compare to my area.
Plus, I might be more willing to travel farther to get a used car than a loveseat.
This is def bad design.
What parts are “a bitch” to work with?
A pig on what?
Nesting=1. This isn’t about virtualizing inside the container; it allows internal resources to access parent resources.
You should only need the cgroup2 entries, but they should be pointing to the correct devices:
Nvidia example, but QuickSync is similar:
# in the container config, e.g. /etc/pve/lxc/<CTID>.conf on Proxmox
# allow the DRM character devices (major 226: card0 and renderD128)
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
# bind-mount the host's /dev/dri and the render node into the container
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
Yes, since Tailscale is based on WireGuard.
Probably not the best practice, though, since any device that connects will be allowed to use the service if there is no authentication on the cert.
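If that matters, the tailnet ACLs can restrict who can reach the service at all. A rough sketch (user and tag names are hypothetical):

// tailnet policy file (HuJSON): only this user may reach port 443 on tagged servers
"acls": [
  { "action": "accept", "src": ["you@example.com"], "dst": ["tag:myservice:443"] }
]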
If the target is zfs, use zfs send. If the target is anything else, rsync.
Schedule it with cron.
Be aware that with zfs snapshots, you need to replay them to restore, which means you’ll periodically need to do a full backup. KlaraSystems has a number of guides on how to create zfs datasets to make efficient backups the way you want.
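A bare-bones sketch of the send side (pool, dataset, and host names are placeholders):

# first run: full send of a snapshot to the target pool
zfs snapshot tank/data@2024-06-01
zfs send tank/data@2024-06-01 | ssh backuphost zfs receive backup/data
# later runs: ship only the delta between two snapshots
zfs snapshot tank/data@2024-06-08
zfs send -i tank/data@2024-06-01 tank/data@2024-06-08 | ssh backuphost zfs receive backup/data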
It never worked well for me. Not because it couldn’t fetch ebooks, but because it defaults to adding an author’s entire library, which was dumb for my reading habits.
I would search for a book, find it, only be able to add the author, and then have to uncheck almost all the books the author had written because I just wanted one.
Sorting by “books” just showed me a list of hundreds of books when I only wanted 7 of them.
If your workflow matched how Readarr wants to work, I’m sure it worked well, metadata problems aside.