As a software developer I fully agree. People bash on it constantly here, but the fact is that it’s required for our jobs now. I just made it through a job hunt, and in every tech screen I did they not only insisted on me using AI, they also gauged how much I was using it.
The fact is that, like it or not, it does speed us up, and it’s a tool in our toolbelt. You don’t have to trust it 100% or blindly accept what it does, but you do need to be able to use it. Refusing to use it is like refusing to use the WinForms designer 20 years ago, or refusing to use an IDE at work. You’ll be at a massive disadvantage against competing jobseekers who are more than happy to use AI.
I have a more nuanced take. AI is simultaneously untrustworthy and useful. For many queries, DuckDuckGo and Google are performing considerably worse than they used to, while Perplexity usually yields good results. Perplexity also handles complex queries traditional search engines just can’t.
About a third of the time, Perplexity’s text summary of what it found is inaccurate; it may even say the opposite of what a source does. Reading the sources and evaluating their reliability is no less important than with traditional search, but much of the time I think I wouldn’t have found the same sources that way.
Of course there are other issues with AI, such as power usage and Perplexity in particular being known for aggressive web scraping.
Nuance and depth aren’t as popular as I’d like, on or off Lemmy.
Ah, but you see, I never claimed AI isn’t useful. In fact, you can check my comment history: I’ve agreed AI is a very useful tool. I still think it shouldn’t be used, for ethical, social, and personal reasons.
A problem with nuance is that people want to discuss the specifics and nuances of what they care about, but for the most part won’t do that on subjects other people care about. So you need to tailor your responses to your audience. FWIW, on Lemmy I see a lot more instances of people with directly opposed takes where both sides have similar vote counts. So while it’s not perfect, it’s better than most places.
You can theoretically have an ethical LLM. You can train one from the ground up on non-copyrighted materials using renewable energy.
But I think what a lot of people are forgetting is that it’s not uncommon for technology to start off super inefficient. A computer used to take up an entire floor of an office building, and a hard drive with a few KB of storage used to be the size of a fridge.
Now you can have a system orders of magnitude more powerful that’s the size of a postage stamp and consumes less than 1W of power.
Absolutely. I recently needed to satisfy auditors with a report on our network security. Our main guy was on leave, but I quickly got the evidence I needed with a few PowerShell commands that I would previously have spent way more time googling.
It’s also decent at reports and short, impersonal emails to suppliers and the like. It frees up a lot of my time to do actual work, and for that I think it’s worth it.
Like basically everything in life, the truth is somewhere between the extremes. For me it’s useful, but it doesn’t replace me and my team. I’m neither an AI evangelist nor a detractor. It’s just another tool.
AI is untrustworthy and shouldn’t be used
I have management talking about Copilot usage rates, and I hear people casually refer to “what ChatGPT told them” in conversation.
The other day on Reddit someone was saying they just fact checked something with ChatGPT.
You can ask ChatGPT to provide sources, you know.
I’ve found it very useful as a tool to gather references from talks that don’t cite claims…
It’s like a super search for context. I would never use an LLM to provide logic or reasoning, and sadly I think many people do.
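That “super search for context” workflow — treating the answer as pointers to sources rather than as ground truth — can be sketched as a tiny helper that pulls every cited URL out of a response so you can read each one yourself. This is an illustrative sketch only; the function name and the sample answer text are made up, not any particular tool’s API:

```python
import re

def extract_sources(answer: str) -> list[str]:
    """Pull http(s) URLs out of an LLM answer for manual verification."""
    # Stop at whitespace and common closing brackets, then trim trailing punctuation.
    urls = re.findall(r"https?://[^\s)\]>]+", answer)
    return [u.rstrip(".,;") for u in urls]

# Made-up model output, for illustration only.
answer = (
    "That claim is discussed at https://example.org/talk-notes) "
    "and in a longer writeup at https://example.org/writeup."
)

for url in extract_sources(answer):
    print(url)  # open and read each source before trusting the summary
```

The point is the workflow, not the regex: the model narrows the search space, and the human still does the reading.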
This is not a fact at all.
Fine, it speeds me up.
The people in the study thought so too.
I’ve found it to be extremely useful for stuff like one-off PowerShell commands that I’ll use like three times in my career.
Just today I was trying to find the command line switches for disk2vhd, and none of the top results, even the official page for the app, had them.
But Google’s AI had them and provided sources I could use to verify the information.
But people didn’t do that last part before AI, so I can see why it’s an issue.
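One cheap way to do that verification for CLI switches specifically is to check the model’s claimed flags against the tool’s own help output before running anything. A minimal sketch — the help text and flag list below are invented for illustration, not disk2vhd’s real switches:

```python
def unverified_flags(help_text: str, claimed: list[str]) -> list[str]:
    """Return the claimed switches that never appear in the tool's own help text."""
    return [flag for flag in claimed if flag not in help_text]

# Invented help output and AI-claimed flags, for illustration only.
help_text = "usage: sometool [-a] [-b] <drive> <outfile>"
claimed = ["-a", "-b", "-z"]

print(unverified_flags(help_text, claimed))  # any flags listed here need a closer look
```

Substring matching is crude, but it catches the common failure mode: a confidently hallucinated switch that the tool itself never mentions.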