Hi, I am a computer nerd. I also took a computer programming class and got the highest score in the class, but I never followed up with advanced classes. Recently, I’ve thought of different ideas for software I’d like to try to create. I’ve heard about vibe coding. I know real programmers make fun of it, but I also have heard so much about it and people using it and paying for it that I have a hard time believing it writes garbage code all the time.

However, whenever I’m trying to do things in Linux and don’t know how, and I ask an LLM, it gets it wrong like 85% of the time. Sometimes it helps, but a lot of times it’s fucking stupid and just leads me down a rabbit hole of shit that won’t work. Is all vibe coding actually like that too, or does some of it actually work?

For example, I know how to set up a server, ssh in, and get some stuff running. I have an idea for an app, and since everyone uses smartphones (unfortunately), I’d probably try to code something for a smartphone. But would it be next to impossible for someone like me to learn? I like nerdy stuff, but I am not experienced at all in coding.

I’m also not sure I have the dedication to do hours and hours of coding, despite possible autism, unless I were highly fucked up, possibly on huge amounts of caffeine or microdosing something. But like, it doesn’t seem impossible.

Is this a rabbit hole worth falling into? Do most apps just fail all the time? Is making an app nowadays like trying to win the lotto?

It would be cool to hear from real app developers. I am getting laid off, my expenses are low because I barely made anything at my job, I’ll be getting unemployment, and I am hoping I can get a job working 20-30 hours a week to cover my living expenses, which are pretty low.

Is this a stupid idea? I did well in school, but I’m not sure that means anything. Also, when I was in the programming class, the TA seemed much, much smarter at programming and could intuitively solve coding problems much faster, likely due to a higher IQ. I’m honestly not sure my IQ is high enough to code. My IQ is probably around 112, but I also sometimes did better than everyone on tests for some reason, maybe because I’m a nerd. I’m not sure I’ll have the insight to tackle hard coding problems, though I’m also not sure whether problems like that actually come up in real coding.

  • xavier666@lemmy.umucat.day · 9 days ago

    Think of LLMs as the person who gets good marks in exams because they memorized the entire textbook.

    For small, quick problems you can rely on them (“Hey, what’s the syntax for using rsync between two remote servers?”) but the moment the problem is slightly complicated, they will fail because they don’t actually understand what they have learnt. If the answer is not present in the original textbook, they fail.

Now, if you are aware of the source material, or if you are decently proficient in coding, you can check their incorrect response, correct it, and make it your own. Instead of creating the solution from scratch, LLMs can give you a push in the right direction. However, DON’T consider their output as the gospel truth. LLMs can augment good coders, but they can lead poor coders astray.

This is not something specific to LLMs; if you don’t know how to use Stack Overflow, you can pick the wrong solution from the list of given answers. You need to be technically proficient to even understand which of the solutions is correct for your use case. Having a strong base will help you in the long run.

    • lepinkainen@lemmy.world · 8 days ago

The main problem with LLMs is that they’re the person who memorised the textbook AND never admits they don’t know something.

      No matter what you ask, an LLM will give you an answer. They will never say “I don’t know”, but will rather spout 100% confident bullshit.

      The “thinking” models are a bit better, but still have the same issue.

      • xavier666@lemmy.umucat.day · 8 days ago

        No matter what you ask, an LLM will give you an answer. They will never say “I don’t know”

There is a reason for this. LLMs are “rewarded” (just an internal scoring mechanism) for generating an answer. No matter what you ask, they will try to maximize that reward by producing an answer, hallucinating one if they have to. There is no reward for saying “I don’t know” to a difficult question.

I’m not into LLM research, but I think this is being worked on.
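
        A toy way to see the incentive (my own made-up numbers, not any real training setup): if a grader scores 1 for a correct answer and 0 for everything else, a model that guesses always has a higher expected score than one that says “I don’t know”.

        ```java
        // Hypothetical scoring sketch: guessing beats abstaining whenever
        // abstaining earns 0, so the training signal never favors admitting
        // uncertainty. All numbers are invented for illustration.
        public class GuessingPaysOff {
            public static void main(String[] args) {
                double pCorrect = 0.25;  // say the model can guess 1 in 4 right
                double expectedGuess = pCorrect * 1.0 + (1 - pCorrect) * 0.0;
                double expectedAbstain = 0.0;  // "I don't know" is scored as a miss
                System.out.println("guess: " + expectedGuess);     // 0.25
                System.out.println("abstain: " + expectedAbstain); // 0.0
            }
        }
        ```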

  • ComfortableRaspberry@feddit.org · 9 days ago

I use it as a friendlier version of Stack Overflow. I think you should generally know and understand what you are doing, because you have to take everything it says with a grain of salt. It’s important to understand that these assistants can’t admit that they don’t know something, and instead come up with randomly generated bullshit, so you can’t fully trust their answers.

    So you still need to understand the basics of software development and potential issues otherwise it’s just negligence.

On a general note: IQ means nothing. I mean, a lot of IQ tests use pattern recognition tasks, and that skill can be helpful, but still, having a high IQ says nothing about your ability as a developer.

    • FuglyDuck@lemmy.world · 9 days ago

On a general note: IQ means nothing. I mean, a lot of IQ tests use pattern recognition tasks, and that skill can be helpful, but still, having a high IQ says nothing about your ability as a developer.

To put this another way… expertise is superior to intelligence. Unfortunately, we have a habit of conflating the two. Intelligent people sometimes do incredibly stupid things because they lack the experience to understand why something is stupid.

Being a skilled doctor or surgeon doesn’t make you skilled at governance. Two different skillsets.

  • listless@lemmy.cringecollective.io · 9 days ago

If you know how to code, you can vibe code, because you can immediately spot, and be confident enough to reject, the obvious mistakes, oversights, security holes, and missed edge cases the LLM generated.

If you don’t know how to code, you can’t vibe code, because you think the LLM is smarter than you and you trust it.

Imagine saying “I’m a mathematician” because you own a scientific calculator. If you don’t know the difference between RAD and DEG and you just start doing calculations without understanding the unit circle, then build a bridge based on your math, you’re gonna have a bad time.
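
    The same trap in actual code, for the curious (a minimal Java sketch; Math.sin expects radians, exactly like a calculator stuck in RAD):

    ```java
    // The calculator trap in code: Math.sin takes radians, so passing 90
    // "degrees" silently computes sin(90 radians) instead.
    public class RadVsDeg {
        public static void main(String[] args) {
            System.out.println(Math.sin(90));                 // 0.8939..., wrong for 90 degrees
            System.out.println(Math.sin(Math.toRadians(90))); // 1.0, what you meant
        }
    }
    ```

    Both lines run without any error or warning; only someone who knows the unit circle notices the first one is garbage.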

  • Mountaineer@aussie.zone · 9 days ago

    If you “vibe code” your way through trial and error to an app, it may work.
    But if you don’t understand what it’s doing, why it’s doing it and how it’s doing it?
    Then you can’t (easily) maintain it.
    If you can’t fix bugs or add features, you don’t have a saleable product - you have a proof of concept.

    AI tools are useful, but letting the tool do all the driving is asking for the metaphorical car to crash.

  • AdamBomb@lemmy.sdf.org · 8 days ago

My bro, your TA wasn’t better at coding because of a “higher IQ”. They were better because they put in the hours to build the instincts and techniques that characterize an experienced developer. As for LLM usage, my advice is to be aware of what they are and what they aren’t. They are randomized word-prediction engines trained on, among other things, all the publicly available code on the internet. This means they’ll be pretty good at solving problems they’ve seen in their training set. You could use one to get things set up and maybe get something partway done, depending on how novel your idea is. An LLM cannot think or solve novel problems, and it will generally fake an answer confidently rather than say it doesn’t know something, because truly, it doesn’t know anything. To actually make it to the finish line, you’ll almost certainly need to know how to finish it yourself, or learn how as you go.

  • OhNoMoreLemmy@lemmy.ml · 9 days ago

    You absolutely can’t use LLMs for anything big unless you learn to code.

    Think of an LLM as a particularly shit builder. You give them a small job and maybe 70% of the time they’ll give you something that works. But it’s often not up to spec, so even if it kinda works you’ll have to tell them to correct it or fix it yourself.

The bigger and more complex the job, the more ways they have to fuck it up. This means that in order to use them, you have to break the problem down into small subtasks and check that the code is good enough for each one.

Can they be useful? Sometimes, yes: it’s quicker to have an AI write code than to write it yourself, and if you want something very standard it will probably get it right or almost right.

But you can’t just say ‘write me an app’ and expect it to be usable.

  • Lovable Sidekick@lemmy.world · 9 days ago

The exact definition of vibe coding varies with who you talk to. A software dev friend of mine uses ChatGPT every day in his work and claims it saves him a ton of time. He mostly does DB work and Node apps right now, and I’m pretty sure the way he uses ChatGPT falls under the heading of vibe coding: using AI to generate code and then going through it and tweaking it, saving the developer a lot of typing and grunt work.

    • TranquilTurbulence@lemmy.zip · 8 days ago

I prefer to think of vibe coding like the relationship some famous artists had with apprentices and assistants. The master artist tells the apprentice to take care of the simple and boring stuff, like backgrounds and less significant figures, while the master paints all the parts that require actual skill and talent. Raphael and Rembrandt would be good examples of that sort of workflow.

  • droning_in_my_ears@lemmy.world · 9 days ago

It works short term. If you have a deadline tomorrow, by all means.

Long term, you need to be aware of not just the code but the theory behind it. You can make it work if you’re prompting for what you need and then reading the result, understanding it, and testing it, but pure vibe coding is probably too much. How are you gonna solve problems when you don’t fully understand how things work?

Another thing: a lot of AI-generated code solves the problem in the most obvious, often bad, way. For example, I asked the AI for help with an ORM limitation I was running into, and over and over the code it suggested was just “query the db, then filter in code afterwards”.
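
    To make the difference concrete, here’s a hypothetical sketch of the pattern (plain JDBC against an in-memory H2 database, assuming the H2 driver is on the classpath; the table and numbers are invented, since the original ORM code isn’t shown):

    ```java
    import java.sql.*;

    public class FilterInDbVsCode {
        public static void main(String[] args) throws SQLException {
            try (Connection c = DriverManager.getConnection("jdbc:h2:mem:demo")) {
                try (Statement s = c.createStatement()) {
                    s.execute("CREATE TABLE orders(id INT, total INT)");
                    s.execute("INSERT INTO orders VALUES (1, 5), (2, 50), (3, 500)");
                }

                // The "obvious" suggestion: fetch every row, filter in application
                // code. It works, but it drags the whole table over the wire.
                try (Statement s = c.createStatement();
                     ResultSet rs = s.executeQuery("SELECT id, total FROM orders")) {
                    while (rs.next()) {
                        if (rs.getInt("total") > 100) {
                            System.out.println("filtered in code: " + rs.getInt("id"));
                        }
                    }
                }

                // What you usually want: push the filter into the query itself.
                try (PreparedStatement ps =
                         c.prepareStatement("SELECT id FROM orders WHERE total > ?")) {
                    ps.setInt(1, 100);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println("filtered in db: " + rs.getInt("id"));
                        }
                    }
                }
            }
        }
    }
    ```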

  • Toes♀@ani.social · 9 days ago

    It’s cool for little things when working on an unfamiliar project or learning something new.

But don’t trust a single example; read up on the features you’re using.

  • 18107@aussie.zone · 8 days ago

    LLMs are great at language problems. If you’re learning the syntax of a new programming language or you’ve forgotten the syntax for a specific feature, LLMs will give you exactly what you want.

    I frequently use AI/LLMs when switching languages to quickly get me back up to speed. They’re also adequate at giving you a starting point, or a basic understanding of a library or feature.

The major downfall is when you ask for a solution to a problem. Chances are, it will give you one. Often it won’t work at all.
The real problem is when it does work.

    I was looking for a datatype that could act as a cache (forget the oldest item when adding a new one). I got a beautifully written class with 2 fields and 3 methods.
    After poking at the AI for a while, it realized that half the code wasn’t actually needed. After much more prodding, it finally informed me that there was actually an existing datatype (LinkedHashMap) that would do exactly what I wanted.
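
    For the curious, the standard-library route it finally pointed me to looks roughly like this (a minimal sketch built on LinkedHashMap’s removeEldestEntry hook; the class name and capacity are mine):

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    // A fixed-size cache that forgets the oldest entry when a new one is added,
    // using the JDK's own eviction hook instead of hand-rolled bookkeeping.
    public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        public BoundedCache(int maxEntries) {
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries;  // evict once capacity is exceeded
        }

        public static void main(String[] args) {
            BoundedCache<Integer, String> cache = new BoundedCache<>(2);
            cache.put(1, "a");
            cache.put(2, "b");
            cache.put(3, "c");           // evicts key 1, the oldest entry
            System.out.println(cache);   // {2=b, 3=c}
        }
    }
    ```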

    Be aware that AI/LLMs will rarely give you the best solution, and often give you really bad solutions even when an elegant one exists. Use them to learn if you want, but don’t trust them.

  • Emily (she/her)@lemmy.blahaj.zone · 9 days ago

    In my experience, an LLM can write small, basic scripts or equally small and isolated bits of logic. It can also do some basic boilerplate work and write nearly functional unit tests. Anything else and it’s hopeless.

  • Psythik@lemmy.world · 7 days ago

    I vibe coded an AutoHotKey script to automate part of my job. It works.

Edit: FWIW, you have to pressure it quite a bit to get what you want. One or two prompts usually won’t produce working code on the first attempt. Also, you have to understand at least the basics of programming so that you know the right words to put in the prompt to get the results you want.

  • jol@discuss.tchncs.de · 9 days ago

It already kinda works 95% of the way. But more often than not, the last 5% still requires you to understand everything the AI did, which can be hard. If you had implemented everything yourself, you’d already have the whole context in working memory. I’ve been learning better prompting and getting better at it. I think it thrives in typed languages and in code bases with clear design patterns it can follow.

  • NigelFrobisher@aussie.zone · 9 days ago

Only for really basic things. I’m trying to use it to build tools in the background while I do real work, but it quickly falls into a pattern of presenting a “working” product that actually doesn’t work at all, and then I have to dedicate a lot more time analysing the generated code to find out why. It often can’t fix its own work even with a “reasoning” model.

  • GreenKnight23@lemmy.world · 8 days ago

    I don’t even have to read everything you wrote past the question.

    no. no it does not.

It doesn’t work for many reasons. Most of all, it doesn’t work when you need to improve or extend the code. Handing it over to a new developer also doesn’t work.

    If I ever see another developer vibe code IRL I will relentlessly mock them until HR is forced to get involved.