• 8 Posts
  • 1.14K Comments
Joined 2 years ago
Cake day: October 4th, 2023



  • I was a bit of a holdout for some years, but as they did for most of society, I think, cell phones pretty much killed watches for me. Carrying a cell phone means that you’ve already got a timepiece in your pocket, one that you probably carry everywhere anyway, which automatically syncs time via the cell network (and GPS; I don’t know which takes precedence on current phones), handles timezones automatically, switches to local time wherever you go, and handles leap years…it’s tough for a watch to compete with that.

    A digital watch has very low power requirements and can run for maybe a couple of years off a button cell. That compares pretty favorably to a cell phone. But if you’re willing to deal with charging a cell phone anyway, the timekeeping function is effectively free.

    A wristwatch (or, I suppose, a smartwatch, if that’s the way you swing) is on one’s wrist rather than in one’s pocket, so it’s a bit faster to check, and one can do it a bit more unobtrusively. But I just don’t check the time anywhere near often enough to warrant that.

    And it’s one more thing to deal with, to catch on things, and so forth.



  • I wasn’t a regular follower in recent years, so I’m reaching a bit further back, but yeah, I recall a steady flow of people submitting general questions and mods removing them. I’d have probably just treated it like a desire path ([email protected], BTW): if that’s how people want to walk, maybe that’s a sign that it’s easier to just build a path there.

    thinks

    I suppose that there were some changes that could have happened in the move from Reddit.

    There was also a collection of people who didn’t want to copy the “*porn” convention from Reddit for attractive-but-non-pornographic pictures of things (that one doesn’t bother me, but I do understand people who are uncomfortable about it and wanted to shelve it in the move). Like, their workplace may not care about people looking at landscape pictures, but gets twitchy about anything remotely porn-related.

    There are also some pretty obscure jokes that came from long-ago Reddit drama and that probably make the Threadiverse more-complicated to navigate for people who weren’t in on them from years back. Like the “inversion” communities, trees/MarijuanaEnthusiasts ([email protected] and [email protected]) or worldnews/anime_titties ([email protected] and [email protected], though it looks like worldnews eventually went back to being actual world news, both on Reddit and here). Or /r/superbowl ([email protected]), though that one, at least, I think someone can figure out if they stumble into it. There might have been a good argument for adopting more-conventional naming. But I think that the bigger concern in the big move was getting things up-and-running, rather than trying to rearchitect everything.


  • IMHO, the real problem is that the community is poorly-named. It should be “ThoughtfulDiscussion” or something. The name suggests a general forum to ask any question. And so, well, people do.

    The /r/askreddit subreddit had the same problem as [email protected] does, as I recall.

    EDIT: I’d add that I think that there’s actually a better argument for a general “ask questions” community on the Threadiverse than on Reddit, at least as things stand in 2025: the userbase is smaller, so it’s hard to get many people into a lot of the niche forums. Like, sure, if you want to ask a question about Linux or about a video game, there are more-appropriate communities. But…suppose you want to ask a question about, say, fly-fishing? I haven’t looked, but I’ll bet that there isn’t even a fly-fishing community out there yet.

    EDIT2: [email protected] is sorta-kinda for general posts intended to spark conversations, and what’s there might be somewhat-closer to what you’re looking for, if you want posts that people will actually talk about. I don’t know if I’d call all of it “thought-provoking”, but I think that the stuff there is better at starting back-and-forth conversations, rather than just getting a one-off answer.



  • Ah, gotcha. You’ll still have an option in that case — if you go to Amazon (or Monoprice…traditionally, they were my go-to spot for cables, but I haven’t tried pricing them against Amazon recently), they’ll also have male-to-female extension cables for USB and HDMI. I keep a USB extension cable in the car, as I normally want short cables, but every now and then, I want to put something further away.


  • (Side note, a longish cable is appreciated, but not required, I think the current length is around 5 ft, which is just about enough)

    Almost everything is going to be USB or HDMI, and will virtually always have a replaceable cable, so you can get whatever length you want, within the spec of that protocol.

    EDIT: If you really want a long cable, something that exceeds what copper can do, you can even get cables that contain an optical transceiver and a fiber-optic strand; I have a long USB cable like this. I assume that you don’t need that, though, if you have the computer and the webcam in more-or-less the same place.


  • $3-10k…not getting the speeds and quality

    I mean, that’s true. But the hardware that OpenAI is using costs more than that per pop.

    The elephant in the room is that unless the tech nerds you mention are using the hardware for something that keeps it under constant load (which occasionally interacting with a chatbot isn’t going to do), it’s probably going to be cheaper to share the hardware with others, because sharing keeps the (quite expensive) hardware at a higher utilization rate.

    I’m also willing to believe that there is some potential for technical improvement. I haven’t been closely following the field, but one thing that I’ll bet is likely technically possible — if people aren’t banging on it already — is redesigning how LLMs work such that they don’t need to be fully loaded into VRAM at any one time.

    Right now, the major limiting factor is the amount of VRAM available on consumer hardware. Models get fully loaded onto a card. That makes for nice, predictable computation times on a query, but it’s the equivalent of…oh, having video games limited by needing to load an entire world onto the GPU’s memory. I would bet that there are very substantial inefficiencies there.

    The largest consumer GPU you’re going to get has something like 24GB of VRAM, and some workloads can be split across multiple cards to make use of the VRAM on each.

    You can partially mitigate that with something like a 128GB Ryzen AI Max 395+ processor-based system. But you’re still not going to be able to stuff the largest models into even that.

    My guess is that it’s probably possible to segment sets of neural-net edge weights into “chunks” that are unlikely to be needed at the same time, keep the unimportant chunks unloaded, and simply not run them. One would need a mechanism to identify when a chunk likely does become important, and to swap chunks in and out accordingly. That would make query times less-predictable, but also probably a lot more memory-efficient.

    IIRC from my brief skim, they do have specialized sub-networks, called “experts”; the architecture is called “MoE”, for “Mixture of Experts”. It might be possible to unload some of those, though one would need more logic to decide when to include and exclude them, and existing systems probably aren’t optimized for this:

    kagis

    Yeah, sounds like it:

    https://arxiv.org/html/2502.05370v1

    fMoE: Fine-Grained Expert Offloading for Large Mixture-of-Experts Serving

    Despite the computational efficiency, MoE models exhibit substantial memory inefficiency during the serving phase. Though certain model parameters remain inactive during inference, they must still reside in GPU memory to allow for potential future activation. Expert offloading [54, 47, 16, 4] has emerged as a promising strategy to address this issue, which predicts inactive experts and transfers them to CPU memory while retaining only the necessary experts in GPU memory, reducing the overall model memory footprint.
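
    To make the chunking idea concrete, here’s a rough sketch of my own (hypothetical code, not from the fMoE paper; real serving systems layer expert-activation prediction and prefetching on top of something like this) of a top-1-gated MoE layer that keeps master copies of all experts in CPU RAM and caches only a couple on the compute device at a time:

        import copy

        import torch
        import torch.nn as nn

        class OffloadedMoELayer(nn.Module):
            """Toy MoE layer: experts live in CPU RAM; a few are cached on the compute device."""

            def __init__(self, num_experts=8, dim=512, max_resident=2):
                super().__init__()
                # Master copies of every expert stay in CPU RAM.
                self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])
                self.router = nn.Linear(dim, num_experts)  # small, so cheap to keep loaded
                self.max_resident = max_resident
                self.resident = {}  # expert index -> copy on the compute device

            def _get_expert(self, idx, device):
                if idx not in self.resident:
                    if len(self.resident) >= self.max_resident:
                        # Dumb eviction policy: drop an arbitrary resident expert.
                        self.resident.pop(next(iter(self.resident)))
                    # Swap in: copy the needed expert's weights to the compute device.
                    self.resident[idx] = copy.deepcopy(self.experts[idx]).to(device)
                return self.resident[idx]

            def forward(self, x):
                # Top-1 gating: route each input row to its single best-scoring expert.
                choice = self.router(x).argmax(dim=-1)
                out = torch.empty_like(x)
                for idx in choice.unique().tolist():
                    mask = choice == idx
                    out[mask] = self._get_expert(idx, x.device)(x[mask])
                return out

        layer = OffloadedMoELayer()
        y = layer(torch.randn(4, 512))  # only the experts the router picks get "loaded"

    The eviction policy above is deliberately dumb; predicting which experts to keep resident, and prefetching them before they’re needed, is exactly the part that papers like the one above work on, and it’s what makes the swap latency tolerable.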



  • cannot bind to local IPv4 socket: Cannot assign requested address

    inet 169.254.210.0

    Yeah. That’ll be because you need an interface with that address assigned before anything can bind to it.

    ifconfig

    Going from memory: if you’ve got ifconfig available, this is a Linux system, and you need to keep the current address on the interface to keep the system connected to the Internet or something, I believe that you can use something like ifconfig enp7s0:0 10.10.10.3 to create an interface alias and use both addresses (169.254.210.0 and 10.10.10.3) at the same time. You might also need ifconfig enp7s0:0 up after that. That being said, (a) I don’t think that I’ve set up an interface alias in probably a decade, and it’s possible that something has changed, and (b) that’s a bit of additional complexity; if you aren’t super familiar with Linux networking and don’t mind just setting the interface’s address to something else outright, you might not want to add it.

    There’s probably an iproute2-based approach to do this too (the ip command rather than the ifconfig command; I’d guess something like ip addr add 10.10.10.3/24 dev enp7s0), but I haven’t bothered to pick up the iproute2 equivalents for a bunch of stuff.

    EDIT: Sounds like you can assign the address and bring the interface alias up in one step (or could a decade ago, when the answer below was written):

    https://askubuntu.com/questions/585468/how-do-i-add-an-additional-ip-address-to-an-interface-in-ubuntu-14

    To setup eth0:0 alias type the following command as the root user:

    # ifconfig eth0:0 192.168.1.6 up
    

    So probably give ifconfig enp7s0:0 10.10.10.3 up a try, then see if the TFTP server package can bind to the 10.10.10.3 address.
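
    If you want to sanity-check the address before involving the TFTP server at all, and Python happens to be on the box, a short bind test (my example, not part of any TFTP package; port 0 means “any free port”, so no root needed) will tell you whether the kernel will let anything bind to it:

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # TFTP runs over UDP
        try:
            s.bind(("10.10.10.3", 0))  # port 0: let the kernel pick an unprivileged port
            print("bind OK; some interface has 10.10.10.3 assigned")
        except OSError as e:
            # "Cannot assign requested address" here means the alias isn't up.
            print(f"bind failed: {e}")
        finally:
            s.close()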


  • tal@lemmy.today to Selfhosted@lemmy.world: Help with TFTP server to flash Openwrt router · edited 16 days ago

    I haven’t done anything with OpenWrt for a long time, but…

    I have the IP of the server set to 0.0.0.0:69. When I try to set it to 10.10.10.3 (per the wiki), the server on my pc won’t start and gives an error.

    I’m pretty sure that you can’t use all zeroes as an IP address.

    kagis

    https://en.wikipedia.org/wiki/0.0.0.0

    RFC 1122 refers to 0.0.0.0 using the notation {0,0}. It prohibits this as a destination address in IPv4 and only allows it as a source address during the initialization process, when the host is attempting to obtain its own address.

    As it is limited to use as a source address and prohibited as a destination address, setting the address to 0.0.0.0 explicitly specifies that the target is unavailable and non-routable.

    You probably need to figure out why your TFTP server is unhappy with 10.10.10.3, and there’s not enough information here to provide guidance on that: I don’t know what OS or software package you’re using, what the error is, or what the network config looks like.

    It may be that you don’t have any network interface with 10.10.10.3 assigned to it, which I believe might cause the TFTP server to fail to bind a socket to that address and port when it attempts to do so.

    If you are manually invoking the TFTP server as a non-root user and trying to bind to port 69, and this is a Linux system, it will probably fail, as ports below 1024 are privileged ports and processes running as ordinary users cannot bind to them. That might cause a TFTP server package to bail out.
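
    A quick way to tell those two failure modes apart, if Python’s handy (again, my own test snippet, using the 10.10.10.3 address from your setup), is to attempt the bind yourself on a privileged and an unprivileged port:

        import socket

        for port in (69, 6969):  # privileged port vs. arbitrary high port
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            try:
                s.bind(("10.10.10.3", port))
                print(f"port {port}: bind OK")
            except OSError as e:
                # Expect a permission error on 69 as a non-root user; a
                # "Cannot assign requested address" instead points at the address.
                print(f"port {port}: {e}")
            finally:
                s.close()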

    But I’m really just taking wild stabs in the dark, without any information about the software involved and the errors you’re seeing. I would probably recommend trying to make 10.10.10.3 work, though, not 0.0.0.0.

    If this is a Linux system, you might use a packet sniffer on the TFTP host, like Wireshark or tcpdump, to diagnose any additional issues that come up, since that will let you see how the two devices are talking to each other. But if you can’t get the TFTP server to even run on an IP address, then you’re not to that point yet.



  • I don’t use this plugin myself, but if you’re using Firefox, you might take a look at it, as it provides a bunch of browser-side configurability. I don’t know whether the feature you’re looking for is there, but as far as I can tell, it aims to be a pretty large bucket of pretty much every add-on YouTube feature one might want.

    I was looking at it a while back for something unrelated, a UI tweak that I was hoping it might handle.