Which applies to EU countries.
Not sure if Apple is going to do separate builds for separate regions.
It’s a shih tzu
Unfortunately, I think you need a passport number to be able to even book an international flight.
Not like they will spend $$$$ getting to another country to be denied at the border.
Unless they drive, I guess. I feel like it’s “easy” to get into Mexico, but it would probably be impossible to get home again.
At the homelab scale, Proxmox is great.
Create a VM, install Docker, and use Docker Compose for the various services.
Create additional VMs when you feel the need. You might never feel the need, and that’s fine. Or you might want a VM per service for isolation purposes.
Have Proxmox take regular snapshots and backups of the VMs.
Every now and then, copy those backups onto an external USB hard drive.
Take snapshots before, during and after tinkering so you have checkpoints to restore to. Copy the latest snapshot onto an external USB drive once you are happy with the tinkering.
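If you prefer the shell to the web UI for that, it’s roughly the following (the VM ID and storage name are assumptions for illustration):

```
# take a named snapshot of VM 100 before tinkering (qm is Proxmox's VM CLI)
qm snapshot 100 pre-tinker

# roll back to that checkpoint if the tinkering goes wrong
qm rollback 100 pre-tinker

# full backup of VM 100 to a storage called "backups", without stopping the VM
vzdump 100 --storage backups --mode snapshot
```

The vzdump output files are what you’d copy onto the external USB drive.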
Create a private git repository (on GitHub or whatever), and use it to store your docker-compose files, related config files, and little READMEs describing how to get each compose file working.
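For a sense of what lives in that repo, a minimal docker-compose.yml might look something like this (the service, port and paths are purely illustrative):

```
# docker-compose.yml - one service, with its data bind-mounted so it's easy to back up
services:
  web:
    image: nginx:alpine
    restart: unless-stopped          # come back up after reboots/crashes
    ports:
      - "8080:80"                    # host port 8080 -> container port 80
    volumes:
      - ./html:/usr/share/nginx/html:ro
```

A `docker compose up -d` next to that file is then the whole “restore this service” procedure, which is exactly what the README should say.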
Proxmox solves a lot of headaches. Docker solves a lot of headaches. Both are widely used, so plenty of examples and documentation about them.
That’s all you really need to do.
At some point, you will run into an issue or limitation. Then you have to solve that problem, and update your VMs, compose files, config files, READMEs and git repo.
Until you hit those limitations, what’s the point in over-engineering it? It’s just going to overcomplicate things. I’m guilty of this.
When to automate any of the above will become apparent: it’s when tinkering stops being fun.
The best thing to do to learn all these services is to comb the documentation, read GitHub issues, browse the source a bit.
Bitwarden is cheap enough, and I trust them as a company enough, that I have no interest in self-hosting Vaultwarden.
However, all these hoops you have had to jump through are excellent learning experiences that you can apply to more of your self-hosted setup.
Reverse proxies are the backbone of hosting and services these days.
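To show how little a basic one needs, here’s a sketch of a Caddyfile (the hostname and upstream port are made up); nginx or Traefik do the same job with a bit more config:

```
# Caddyfile: route a public hostname to a local container, with automatic HTTPS
media.example.com {
    reverse_proxy localhost:8096
}
```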
Learning how to inspect docker containers, source code, config files and documentation to find where critical files are stored is extremely useful.
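For example, one quick way to see where a container’s data actually lives on the host (the container name is a placeholder):

```
# list the container's mounts as JSON - bind mounts and volumes both show up here
docker inspect mycontainer --format '{{ json .Mounts }}'
```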
Learning how to set up more useful/granular backups beyond a basic VM snapshot in proxmox can be applied to any install anywhere.
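As a sketch of what “more granular” can mean - copying just the service’s data directory instead of imaging the whole VM (paths are assumptions):

```
# file-level backup of one service's data dir to a mounted USB drive
rsync -a --delete /srv/myservice/data/ /mnt/usb-backup/myservice/
```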
The most annoying thing about a lot of these is that tutorials are “minimal viable setup” sorta things.
Like “now you have it set up, make sure you tune it for production”, and it just ends.
And other tutorials that talk about the next step, getting things production-ready, often reference outdated versions or have different core setups, so they don’t quite apply.
I understand your frustrations.
Absolutely right, that should be 20 years. I guess I’m already preparing for my 40s
I’m late 30s.
I can’t remember much before I was 13. So, for at least the last 30+ years I’ve had 4 pairs of sunnies. Maybe 5 pairs.
I’ve still got 2 of those pairs.
I’m tempted to get a fancy pair that look good instead of just sunnies that look good enough (i.e. more than $100). I just don’t wear them enough… Maybe a couple of weeks a year?
What’s the point in buying good sunglasses, and why would I lose a pair?
I’ve had the same wallet for 15 years, I’ve been locked out once, and I’ve lost my phone about 3 times (and got it back every time).
I’m recovering from about 10 years of undiagnosed depression. Recently (like the last year) it has affected my short-term memory, to the point I thought I had ADHD or something else. Affecting my work, my ability to live day-to-day, my social life.
I now realise that, while ADHD might be a factor, undiagnosed depression has devastated who I am vs who I think I am and who I want to be.
Are there other explanations for your forgetfulness?
Is it age related? Anything else you find you are forgetting?
Yeh, seems not
What if it had 3 corners and 4 edges? Or 4 corners and 3 edges?
Military contractors advertise on the London underground because MPs, MI5 and MI6 have plenty of staff who commute.
Plenty of public servants use social media.
In Scotland, Scottish Water (a publicly owned nationalised company) has to advertise.
It’s not necessarily about selling new products, but about maintaining awareness and keeping public opinion onside.
If the router/gateway/network firewall (i.e. not the local one) is blocking forwarding of unknown IPv6 traffic, then the only thing able to leverage the exploit is a compromised server that you connect to via IPv6 (i.e. your Windows client connecting out to a compromised server that is actively exploiting this IPv6 CVE).
It’s not like having IPv6 enabled on a Windows machine automatically makes it instantly exploitable by anyone out there.
Routers/firewalls will only forward IPv6 for established connections, so your Windows machine has to connect out first.
Unless you are specifically forwarding to a Windows machine, at which point you are intending that Windows machine to be a server.
Essentially the same as some exploit in some service you are exposing via NAT port forwarding.
Maybe a few more avenues of exploit.
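To make the “established connections only” behaviour concrete, this is roughly the forwarding policy in nftables syntax (a sketch, not a drop-in config):

```
# forward chain: drop everything except replies to connections initiated from inside
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        # inbound IPv6 only reaches a host if you add an explicit accept rule here
    }
}
```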
Like I said: why would a self-hoster or homelabber use Windows for a public-facing service?!
How many people are running public-facing Windows servers in their homelab/self-hosted environment?
And “it’s worked so far” isn’t a great reason to ignore new technology.
IPv6 is useful for public facing services. You don’t need a single proxy that covers all your http/s services.
It’s also significantly better for P2P applications, as you no longer need to rely on NAT-traversal bodges or insecure UPnP-type protocols.
If you are unlucky enough to be on IPv4 CGNAT but have IPv6 available, then you are no longer sharing reputation with everyone else on the same public IPv4 address. Also, IPv6 means you can get public access instead of having to rely on some RPoVPN solution.
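As an illustration of the “no single proxy” point above: with public IPv6, each service can just get its own AAAA record (a zone-file sketch using the IPv6 documentation prefix; the names are made up):

```
; one address per service, no shared proxy required
jellyfin   IN  AAAA  2001:db8::10
nextcloud  IN  AAAA  2001:db8::20
```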
Older games were designed specifically for specific older console hardware.
They leveraged particular features of that hardware.
Developers literally hacked the consoles they were releasing on to get their desired results.
And because it’s consumer gaming hardware/software, neither backwards nor forwards compatibility for all the tricks they pulled was ever built in. So a game would have to target multiple platforms to actually release on multiple platforms.
It’s like why so many games don’t run on Mac OSX. “Why don’t they just release the Windows version for free on OSX?” Because it needs to be redesigned to work on OSX, which costs money.
Everything up to, what, the PS4? is probably specifically tailored to that specific hardware. Games that released on the PS3 and Xbox-whatever would have some core software dev team, then hardware-specific developers. Everything would be tuned for the target hardware.
At some point, things like Unity and Unreal Engine took over, with generic code and targeted compiling. Pretty much (not quite) allowing developers to “just hit compile”, and release to multiple architectures.
Any official re-release of Nintendo games has generally been on an emulated system, where they have developed that emulation to work with the original software.
There are some re-releases where the game has essentially been rebuilt from the ground up, using the original assets but reworked for modern (and flexible) game engines.
Both of these take a lot of work, so they’re not free. Worth $60 or whatever Nintendo charges? Meh, that’s competing with real games.
If you own (or buy) an NES/SNES/N64 cart, you can rip it. There are plenty of ways.
It’s not the source, but it’s what the source compiles to. And you can reverse engineer the source, then adapt it to modern game engines. There are a few open source projects that do this; their quality varies.
Or you can build something to run that software as if it were on the original hardware - an emulator.
Nintendo can skip the ripping, decompiling and reverse-engineering steps. They likely have access to the source code, and the actual design specs for the hardware (not just what they tell developers - who then hack the hardware anyway).
All of this requires a LOT of work. So a sellable product from someone like Nintendo requires a lot of investment.
Emulators are good. Any emulator used for speedrun leaderboards on an equal footing with actual hardware (i.e. the times are similar, even if they are separate categories) will be good enough that you wouldn’t know the difference.
Encourage/incentivise an actual apprentice scheme.
I don’t know how to validate new apprenticeships, tho.
I know there is some sort of apprenticeship program in the US. Bring that to the forefront. Subsidise it, advertise it, promote it.
Apprentices shouldn’t be taxed; subsidise their health/personal/business insurances.
Apprenticeships should include some basic training in business, taxes and finances.
Have tax breaks for companies, and heavily fund self employed/sole-traders who take on an apprentice.
You have to make apprenticeships as attractive as university and industry/corporate jobs, especially in a world where technology is so exciting.
Making apprenticeships pay decently, and not be a significant financial risk to employers (apprenters?), will make them a no-brainer for companies.
Tax breaks or grants for big companies.
And making apprenticeships extremely cheap and low-risk for sole traders/the self-employed (i.e. more of this than the subsidies for companies) will stop it from being a big-company-only thing. Like subsidised special apprentice insurance, tax breaks, grants.
Promote some sort of “ethical traders” kinda website, where traders with apprentices are advertised/promoted. Have some regulation around it and some customer reviews, and keep it neutral.
The idea being consumers will go to the “ethical traders” website to find someone to do work because it is a regulated neutral 3rd party that only includes ethical traders with reputable customer reviews. Somewhere that companies will work to maintain reputation.
You missed a then/than as well
Transferring a domain from one registrar (i.e. reseller) to another can be a pain, but yes, you can - it normally involves a fee and manual actions from the registrars.
As long as the new registrar supports the TLD; a few geo-TLDs can only be resold/managed by certain registrars.
The easiest thing to do is to point the domain at ClouDNS nameservers.
Make sure you are happy with ClouDNS (I’ve never had issues with them) before committing.
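Once you’ve repointed the nameservers, it’s easy to confirm from a shell (example.com is a placeholder):

```
# check which nameservers the registry currently publishes for the domain
dig NS example.com +short
```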
It’s more than just views. It’s rewatches, binge watches, complete vs interrupted episode watches, probably even time skips.
Likely also where the view comes from: a specific search vs general recommendations vs targeted recommendations.
Sure, but what you are describing is the problem that k8s solves.
I’ve run plenty of production things from docker compose. Auto scaling hasn’t been a requirement, and HA was built into the application (so 2 separate VMs running the compose stack). Docker was perfect for it, and k8s would’ve been a sledgehammer.
Surely you want to enable 802.1Q? Like, that is VLAN-aware switching and routing. Or is that setting on the NAS?
Edit:
Some troubleshooting:
Connect a laptop to the same subnet as your NAS (so same VLAN and IP range/subnet) and connect to the NAS. This either eliminates the NAS or the router from the equation.
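From that laptop, a couple of quick checks (the addresses are assumptions):

```
# same-subnet test: if this works, the NAS is fine and the router/VLAN config is the suspect
ping 192.168.10.5

# from your normal VLAN afterwards: see whether the path even crosses the router
traceroute 192.168.10.5
```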
Eventually you will get used to it.
You have 3 options:
1. Normalise to OSX shortcuts (and reconcile your Linux shortcuts to those). You are more likely to encounter an OSX machine “in the wild”, and if you have to get a new Mac then everything is instantly comfortable. Linux is also easier to customise.
2. Normalise to your Linux shortcuts. Figure out how to script OSX to adopt those shortcuts so you can quickly adapt a new work machine (see the sketch at the end), and accept that you won’t always be able to use those shortcuts (like when using a loaner or helping someone).
3. Accept the few years of confusing OSX vs Linux shortcuts, and learn both.
Option 3 is the most versatile. Takes ages, and you will still make mistakes.
Option 2 is the least versatile, but is the fastest to adopt.
Option 1 is fairly versatile, but probably has the longest adoption/pain period.
If OSX is in your future, then it’s option 1.
Option 3 is probably the best.
If you are never going to interact with any computer/server other than your own and other Linux machines, then option 2. Just make sure that every preference/shortcut you change is scriptable, or at least documented, and that the process is stored somewhere safe.
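For the scripting side of option 2 on OSX, one hedged starting point is the NSUserKeyEquivalents defaults key, which remaps menu-item shortcuts globally or per app (the menu title and binding here are purely illustrative, and apps need a restart to pick the change up):

```
# "@"=Cmd  "^"=Ctrl  "~"=Option  "$"=Shift
defaults write -g NSUserKeyEquivalents -dict-add "Minimize" "^m"
```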