I’m gonna stop responding to this asinine thread now before you continue to demean us both with your nonsense.
Simpler language is fine when it’s accurate.
Your simplification is inaccurate and could mislead people into thinking GPTs are just advanced regex matching engines.
They are not. They are closer to autocorrect on steroids.
Analysis. It uses its training data, but not by “matching” it. The training data is not included in the final model; no GPT can access its training data at runtime.
Training analyzes the contents of the training data and creates a statistical model representing the likelihoods of various tokens based on a complex series of mathematical transformations that encode various attributes of the tokens making up the training data.
3Blue1Brown has a great series on the actual math behind it; I would highly recommend educating yourself on what GPTs actually do. It’s way more interesting than simple matching.
You said it matches text to its training data, which it does not do.
Your single-phrase statement only works for very short, non-repetitive phrases. As soon as your phrase repeats a token more than a few times, the statistics for the tokens change and could result in nonsensical output that repeats through subsections of the training data.
And even then, for that single non-repetitive phrase, the reason you would get that single phrase back is not because the model would be “matching on” the phrase. It is because the token weights effectively encode that the statistical likelihood of the “next token” in the generated output is 100% for a given token when the evaluated token precedes it in the training phrase. Or in other words: your training data being a single phrase manipulates the statistics so that the most likely output is that single phrase.
However, that is a far cry from simple “matching” against the training data. Which is what you said it does.
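The single-phrase case can be sketched with a toy bigram model — to be clear, this is a drastic simplification of a real GPT (no embeddings, no attention), just a minimal illustration of how counting statistics alone can reproduce a phrase without storing or “matching” it:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, how often each token follows it."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, max_len=10):
    """Greedily emit the statistically most likely next token."""
    out = [start]
    for _ in range(max_len):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return out

# Trained on a single non-repetitive phrase, every "next token" has a
# 100% likelihood, so generation reproduces the phrase -- not because
# the phrase was stored and matched, but because the statistics leave
# only one possible continuation at each step.
phrase = "the quick brown fox jumps".split()
model = train_bigram(phrase)
print(generate(model, "the"))  # ['the', 'quick', 'brown', 'fox', 'jumps']
```

Repeat a token in the training phrase and the counts split between multiple continuations, which is exactly how the nonsensical looping output described above arises.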
They do not store anything verbatim; they instead store the directions in which various words and related concepts relate to one another in some gigantic multidimensional space.
I highly suggest you go learn what they actually do before you continue talking out of your ass about them.
“Today I learned learned”
Bro we promise bro, we’re deleting the data - We know bro, you thought we didn’t collect it but bro we’re deleting it we promise now we’re cool bro just keep using it bro we don’t collect more data bro we promise
Meanwhile, for my homelab I just use split DNS and a (properly registered+set up) .house
domain - But that’s because I have services that I want to have working with one name both inside and outside of my network
Yep, as someone who just recently set up a hyperconverged mini Proxmox cluster running Ceph for a Kubernetes cluster atop it, storage is hard to do right. It wasn’t until after I migrated my minor services to the new cluster that I realized that volumes from Ceph’s RBD CSI can’t be mounted by multiple pods at once, so having replicas of something like Nextcloud means I’ll have to use object storage instead of block storage. I mean, I can do that, I just don’t want to lol. It also heavily complicates installing apps into Nextcloud.
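For anyone hitting the same wall: the limitation shows up as the PVC access mode. An RBD-backed volume is a block device, so it’s requested as ReadWriteOnce; replicas need ReadWriteMany, which block storage can’t give you. A rough sketch (the storage class name is a placeholder, not anything standard):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-data
spec:
  # ReadWriteOnce: only one node can mount this read-write at a time,
  # so a second Nextcloud replica scheduled elsewhere can't attach it.
  # Replicas would need a ReadWriteMany-capable backend instead.
  accessModes: ["ReadWriteOnce"]
  storageClassName: ceph-rbd   # placeholder name for an RBD-backed class
  resources:
    requests:
      storage: 50Gi
```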
Certbot also does DNS challenge, fwiw
DNS challenge makes it even easier, since you don’t have to go through the process of transferring it yourself
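As a sketch of what that looks like with Certbot (this assumes the Cloudflare DNS plugin is installed and the domain is illustrative; swap in whichever provider plugin matches your registrar):

```
# Issue a wildcard cert via DNS-01 -- no inbound HTTP needed,
# so it works even for hosts unreachable from the internet.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d 'example.house' -d '*.example.house'
```

The plugin creates and removes the `_acme-challenge` TXT record for you, which is the part you’d otherwise be doing by hand.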
Worth mentioning: Anyone using TachiyomiJ2K (I use it for Surface Duo dual-screen support) or another fork with support who has some self-hosting prowess, there’s always Suwayomi - It will let you “migrate” to a third-party sources repo even if your app doesn’t support it, since it becomes your device’s only local extension.
Others have addressed the root and trust questions, so I thought I’d mention the “mess” question:
Even the messiest bowl of ravioli is easier to untangle than a bowl of spaghetti.
The mounts/networks/rules and such aren’t “mess”; they’re isolation. They’re commoditization. They’re abstraction - ways to tell whatever is running in the container what it wants to hear, so that you can treat the container as a “black box” that solves the problem you want solved.
Think of Docker containers less like pets and more like cattle, and it very quickly justifies a lot of that stuff because it makes the container disposable, even if the data it’s handling isn’t.
Ah, neat! I just looked it up and it does look useful.
I’ve never really had any trouble with Dark Reader speed-wise - though it gives one major bonus that no other extension has so far: attempting to match the appearance of darkened websites to my system theme (Catppuccin)
I can’t tell if you’re agreeing with me, disagreeing with me, or suggesting some alternative
I highly recommend the Dark Reader extension for your browser
The solution for me is that I run Nextcloud on a Kubernetes cluster and pin a container version. Then every few months I update that version in my deployment YAML to the latest one I want to run, run kubectl apply -f nextcloud.yml, and it just does its thing. Never given me any real trouble.
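The pinning itself is just the image tag in the deployment spec. A minimal sketch (names and the version number are illustrative, not my actual manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels: {app: nextcloud}
  template:
    metadata:
      labels: {app: nextcloud}
    spec:
      containers:
        - name: nextcloud
          # Pin an exact tag (never "latest"); upgrading is just
          # editing this line and re-running kubectl apply.
          image: nextcloud:28.0.4
          ports:
            - containerPort: 80
```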
Autism+ADHD life: I can’t stand to have emails in my inbox for more than a day, and I also can’t be diligent enough to achieve that