CEOs are SO INTELLIGENT! I would NEVER have Thought to invest BILLIONS OF DOLLARS on Chatbots and Summarizers which ALREADY existed!
Trying not to be too douchey here, but ironically, your message is actually a very good example of where this technology could be beneficial.
IT is ACTUALLY not EASY to read a MESSAGE when THE CASE randomly SWITCHES back AND forth.
“Caden, it looks like Airlynn just said you’re a hopeless loser, and she’s been banging your personal trainer Chad. Is there anything else I can help you with?”
Tinder needs this function right now!
It does make sense in big groups with tons of irrelevant discussion but only a few messages you actually need to read.
I don’t use WhatsApp, but this immediately made me think of my dad who doesn’t use any punctuation and frequently skips and misspells words. His messages are often very difficult to interpret, through no fault of his own (dyslexia).
Having an LLM do this for me would help both him and me.
He won’t feel self-conscious when I send a “What you talkin’ about Willis?” message, and I won’t have to waste a ridiculous amount of time trying to figure out what he was trying to say.
If he’s not communicating in an explicit and clear way, the AI can’t magically gain context for you. It will happily make up bullshit that sounds plausible, though.
A poorly designed tool will do that, yes. An effective tool would do the same thing a person could do, except much quicker, and with greater success.
An LLM could be trained on the way a specific person communicates over time, and could be designed to do a forensic breakdown of misspelt words, e.g. checking whether the typed letters sit next to the intended ones on the keyboard, or identifying words that are spelled differently but sound similar phonetically.
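The keyboard-proximity part of that idea can be sketched without any LLM at all. A minimal illustration (the tiny vocabulary and scoring weights are made up for this example, not a real spell checker):

```python
# Rank candidate corrections by keyboard adjacency: a typo letter
# that is a physical neighbor of the intended key on a QWERTY layout
# is treated as a likely fat-finger slip.

QWERTY_NEIGHBORS = {
    'q': 'wa', 'w': 'qes', 'e': 'wrd', 'r': 'etf', 't': 'ryg',
    'y': 'tuh', 'u': 'yij', 'i': 'uok', 'o': 'ipl', 'p': 'o',
    'a': 'qsz', 's': 'awdx', 'd': 'sefc', 'f': 'drgv', 'g': 'fthb',
    'h': 'gyjn', 'j': 'hukm', 'k': 'jil', 'l': 'ko',
    'z': 'ax', 'x': 'zsc', 'c': 'xdv', 'v': 'cfb', 'b': 'vgn',
    'n': 'bhm', 'm': 'nj',
}

def adjacency_score(typo: str, candidate: str) -> int:
    """Score a same-length candidate: +2 per exact letter match,
    +1 where the typed letter neighbors the candidate's letter."""
    if len(typo) != len(candidate):
        return 0
    score = 0
    for t, c in zip(typo, candidate):
        if t == c:
            score += 2
        elif t in QWERTY_NEIGHBORS.get(c, ''):
            score += 1
    return score

def best_guess(typo: str, vocabulary: list[str]) -> str:
    return max(vocabulary, key=lambda w: adjacency_score(typo, w))

print(best_guess('hwllo', ['hello', 'hollow', 'hallo']))  # -> hello
```

A real system would combine this with edit distance, phonetic matching, and the sender's own message history, but the scoring idea is the same.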
the same thing a person could do
asking for clarification seems like a reasonable thing to do in a conversation.
A tool is not about to do that because it would feel weird and creepy for it to just take over the conversation.
An LLM could be trained on the way a specific person communicates over time
Are there any companies doing anything similar to this? From what I’ve seen, companies avoid this stuff like the plague; their LLMs are always frozen for the customer, with no custom training. Training takes a lot of compute, but it also carries a huge risk of the LLM going off the rails and saying things that could get the company into trouble or generate bad publicity. There’s also the disk space per customer and the load time of individual models.
The only hope for your use case is that the LLM has a large enough context window to look at previous examples from your chat and use those for each request, but that isn’t the same thing as training.
What makes you think the LLM will be able to decipher something that already doesn’t make sense?
To be fair, my father tends to make messages quite incomprehensible by adding irrelevant information all over the place. Sometimes going on for multiple screens while it could easily have been a 2-3 sentence message.
Sadly, I think AI would be even worse at picking out which information is important from that. But I understand why people want it.
As for very active group chats, I’m not gonna read 25+ messages a day, but being able to get the gist of it at a glance would be awesome.
Which is exactly where the gist can be manipulated: leaving out context and nudging you toward a different opinion than you might have formed if you’d read the whole thread.
Friend, I think you need to reconsider your world perspective a bit. Not everyone is out to get you all the time.
To be fair, when Facebook was still big the privacy advocates were being branded as paranoid. Those turned out to be right after all.