• 0 Posts
• 10 Comments
Joined 2 years ago
Cake day: July 1st, 2023

  • Hmm. I had pretty much the same experience, and wondered about having multiple conversation agents for specific tasks, but didn’t get around to trying that out. Currently I am using it without an LLM, albeit with GPU-accelerated Whisper (and other custom CV tasks for camera feeds). This gives me fairly accurate STT, and I have defined a plethora of sentence variations for hassil (the intent matcher), so I often get the correct match. There is support for optional words and or-alternatives, for instance:

    sentences:
      # optional words in [brackets], alternatives in (a|b)
      - (start|begin|fire) [the] [one] vacuum [clean(er|ing)] [robot] [session]

    So this would match “start vacuum”, but also “fire one vacuum cleaning session”.

    Of course, this takes substantial effort initially, but once configured and debugged (punctuation is poison!) it works pretty well. As an aside, using the ATOM Echo satellites gave me a lot of errors, simply because the microphones are bad. With a better-quality satellite device (the Voice Preview Edition) the success rate is much higher, almost flawless.
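
    For context, here is roughly how such a template fits into a complete custom sentences file, plus an intent_script that acts on the match. This is only a minimal sketch: the StartVacuum intent name and the vacuum.robot entity are hypothetical, made up for illustration.

    # config/custom_sentences/en/vacuum.yaml
    language: "en"
    intents:
      StartVacuum:                   # hypothetical custom intent
        data:
          - sentences:
              - "(start|begin|fire) [the] [one] vacuum [clean(er|ing)] [robot] [session]"

    # configuration.yaml: handle the intent when it matches
    intent_script:
      StartVacuum:
        action:
          - service: vacuum.start
            target:
              entity_id: vacuum.robot   # hypothetical entity id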

    That all said, if you find a better intent matcher or another solution, please do report back, as I am very interested in an easier solution that does not require me to think of all possible sentences ahead of time.

  • PostgreSQL is definitely a boost to performance, especially if you offload the DB to a dedicated server (depending on load, it can even be a cluster).

    Nevertheless, it probably has a lot to do with how it’s deployed and how many proxies and/or VPNs sit in front of it. If you run a large number of containers on hardware with a weak CPU and little memory, cram everything onto one machine, or have some other limitation, it’ll be slow.

    Admittedly, I’m not very familiar with the codebase, but I suspect Apache isn’t improving the speed either. I’m not exactly sure where PHP stands nowadays with concurrency and async, but generally a microservice-type architecture is nice because you can add more workers/instances wherever a bottleneck emerges.
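
    As a rough sketch of what “offloading the DB” looks like in practice (the image name, hostname, and variable names here are hypothetical, not taken from any particular project’s docs), the app container simply points at a PostgreSQL server on another machine instead of a bundled db container:

    services:
      app:
        image: example/app:latest        # hypothetical application image
        environment:
          DB_HOST: db.internal.lan       # dedicated PostgreSQL box, not a local container
          DB_PORT: "5432"
          DB_NAME: appdb
          DB_USER: app
        # note: no local db: service here; PostgreSQL runs on its own host

    With no published ports pinned to the app service, scaling a bottlenecked component is then just docker compose up --scale app=3 behind the reverse proxy.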