The question is simple. I wanted to get a general consensus on whether people actually audit the code they use from FOSS (free and open source) projects or apps.
Do you blindly trust the FOSS community? Audit the code sometimes? Only on mission-critical apps? Not at all? I'm trying to get a rough idea here.
Let’s hear it!
Nah. My security is entirely based on vibes and gambling
All I do is look at the open issues, the community, the docs, etc. I don't remember ever auditing the code.
If it's a project with a couple hundred thousand downloads a week, then no, I trust that it's been looked at by people savvier than myself.
If it's a niche project that barely anyone uses, or it comes from a source I consider less reputable, then I'll skim it.
This
Let me put it this way: I audit open source software more than I audit closed source software.
Depends on what you mean by “audit”.
I look at the GitHub repo.
- How many stars?
- Last commit?
- Open issues
- Contributor count
Do I read the whole code base? Of course not. But this is way more than I can do with closed source software.
Yes, but with an explanation.
You don't necessarily need coding skills to “audit”; you can get a sense of the general state of things by simply reading the docs.
The docs are a good starting point for understanding whether there will be issues from weird licensing, whether the author cares enough to keep the project going, etc. Also, serious, repeated, or chronic issues should be noted in the docs if it's something the author cares about.
And remember, even if you do have a background in the coding language, the project might not be built in a style you like or agree with.
I'm pretty proficient at bash scripting, and I found the Proxmox helper scripts to be a spaghetti mess of interdependent scripts that were simply a nightmare to follow for any particular install.
I think the overall message is do your best within your abilities.
For personal use? I never do anything that would qualify as “auditing” the code. I might glance at it, but mostly out of curiosity. If I’m contributing then I’ll get to know the code as much as is needed for the thing I’m contributing, but still far from a proper audit. I think the idea that the open-source community is keeping a close eye on each other’s code is a bit of a myth. No one has the time, unless someone has the money to pay for an audit.
I don't know whether corporations audit the open-source code they use, but in my experience it would be pretty hard to convince the typical executive that this is something worth investing in, like cybersecurity in general. They'd rather wait until disaster strikes and then pay more.
Of course I do, bro. Who doesn't have six thousand years of spare time, every time they run dnf update, to go check on a million lines of changed code? Amateurs around here…
Nope! Not at all. I don't think I could find anything even if I tried. I generally trust open source more than other apps, but I feel like I'm taking a risk either way. If it's some niche thing I'm building from a git repo, I'll be wary enough not to put in my credit card info, but that's about it.
No… I just blindly trust the code.
I know Lemmy hates AI, but auditing open source code seems like something it could be pretty good at. Maybe that's something that will start happening more.
This is one of the few things that AI could potentially actually be good at. Aside from the few people on Lemmy who are entirely anti-AI, most people just don’t want AI jammed willy-nilly into places where it doesn’t belong to do things poorly that it’s not equipped to do.
> Aside from the few people on Lemmy who are entirely anti-AI
Those are silly folks lmao
> most people just don’t want AI jammed willy-nilly into places where it doesn’t belong to do things poorly that it’s not equipped to do.
Exactly, fuck corporate greed!
> Those are silly folks lmao
Eh, I kind of get it. OpenAI’s malfeasance with regard to energy usage, data theft, and the aforementioned rampant shoe-horning (maybe “misapplication” is a better word) of the technology has sort of poisoned the entire AI well for them, and it doesn’t feel (and honestly isn’t) necessary enough that it’s worth considering ways that it might be done ethically.
I don't agree with them entirely, but I do get where they're coming from. Personally, I think once the hype dies down enough and the corporate money (and VC money) gets out of it, it can finally settle into a more reasonable steady state and the money can actually go into truly useful implementations of it.
> OpenAI’s malfeasance with regard to energy usage, data theft,
I mean, that's why I call them silly folks: that's all still attributable to the corporate greed we all hate. But I've also seen them shit on research work and papers just because “AI”. So yeah lol