

We’re dead center in the observable universe though.
Sure! Here’s an expanded version of the fictional profile for Chris Whitmore, now including made-up family member names, relationships, and contact info — all entirely fictional and consistent with the character:
You forgot to remove that part of the LLM response…
It’s not even just colloquial, it’s the scientific term for it.
Edit: Even things that have nothing to do with machine learning or deep learning are AI, e.g. stupid rule-based approaches (aka tons of if-else). Deep Learning is a subset of Machine Learning, which is a subset of AI.
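To make that concrete, here’s a minimal sketch of what a purely rule-based “AI” can look like (the toy thermostat domain and its thresholds are invented for illustration):

```python
# A purely rule-based "AI": no learning, no statistics, just if-else.
# The domain (a toy thermostat) and the thresholds are made up.
def thermostat_ai(temperature_c: float) -> str:
    if temperature_c < 18.0:
        return "heat on"
    elif temperature_c > 24.0:
        return "cooling on"
    else:
        return "idle"

for t in (15.0, 21.0, 27.0):
    print(t, "->", thermostat_ai(t))
```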
Assuming each user will always encrypt to the same value, this still loses to statistical attacks.
As a simple example, users are more likely to vote on threads they comment in. With data reaching back far enough, people who exhibit “normal” behavior will be identified with high certainty.
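A toy sketch of that kind of linkage attack, assuming the attacker holds pseudonymous vote logs plus public comment history (all names and data here are invented):

```python
# Toy linkage attack: deterministic pseudonyms leak behavior patterns.
# All names and data are invented for illustration.

# Threads a known (public) account commented in:
alice_comment_threads = {"t1", "t3", "t7", "t9"}

# Pseudonymous vote logs: pseudonym -> set of threads voted in
vote_logs = {
    "x9f2": {"t1", "t3", "t7", "t9", "t12"},  # overlaps heavily with alice
    "b41c": {"t2", "t5", "t6"},
    "k77d": {"t3", "t8"},
}

# Score each pseudonym by Jaccard overlap with the known commenting behavior.
scores = {
    pseud: len(threads & alice_comment_threads) / len(threads | alice_comment_threads)
    for pseud, threads in vote_logs.items()
}
best = max(scores, key=scores.get)
print(scores)               # similarity per pseudonym
print("best guess:", best)  # -> "x9f2" links back to alice
```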
Re LLM summaries: I’ve noticed that too. For some of my classes shortly after the ChatGPT boom we were allowed to bring along summaries. I tried feeding it input text and telling it to break it down into a sentence or two. Often it would just give a short generic summary of the topic rather than actually using the concepts described in the original text.
Also, minor nitpick, but be wary of the term “accuracy”. It is a terrible metric for most use cases, and when a company advertises their AI as having high accuracy, they’re likely hiding something. For example, let’s say we wanted to develop a model that can detect cancer on medical images. If our test set consists of 1% cancer images and 99% normal tissue, 99% accuracy is achieved trivially by a model just predicting “no cancer” every time. A lot of the more interesting problems have class imbalances far worse than this one, too.
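A quick sketch of that failure mode, with synthetic labels matching the 1%/99% split:

```python
# Accuracy vs. recall on an imbalanced test set (1% cancer, 99% normal).
# Labels: 1 = cancer, 0 = normal. Data is synthetic for illustration.
y_true = [1] * 10 + [0] * 990   # 1000 images, 1% positive
y_pred = [0] * 1000             # model that always says "no cancer"

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
true_pos = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
recall = true_pos / sum(y_true)  # sensitivity: cancers actually caught

print(f"accuracy: {accuracy:.1%}")  # 99.0% -- looks great
print(f"recall:   {recall:.1%}")    # 0.0%  -- catches no cancer at all
```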
AI can be good but I’d argue letting an LLM autonomously write a paper is not one of the ways. The risk of it writing factually wrong things is just too great.
To give you an example from astronomy: AI can help filter out “uninteresting” data, which encompasses a large majority of data coming in. It can also help by removing noise from imaging and by drastically speeding up lengthy physical simulations, at the cost of some accuracy.
None of those use cases use LLMs though.
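To sketch the filtering idea in code (the synthetic light curves and the choice of model here are my own assumptions, not how any real survey pipeline works):

```python
# Toy "uninteresting data" filter: flag anomalous light curves with an
# IsolationForest. Data is synthetic; real pipelines are far more involved.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
n_stars, n_samples = 1000, 64

# Mostly flat, noisy light curves ("uninteresting")...
curves = rng.normal(1.0, 0.01, size=(n_stars, n_samples))
# ...plus a few with injected transit-like dips ("interesting").
for i in range(10):
    curves[i, 20:30] -= 0.05

model = IsolationForest(contamination=0.01, random_state=0).fit(curves)
flags = model.predict(curves)        # -1 = anomaly, 1 = normal
interesting = np.where(flags == -1)[0]
print("flagged for follow-up:", interesting)  # should include the dipped curves
```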
Same. I had PayPal do an automated chargeback because their system thought I was doing something fraudulent when I wasn’t. Steam then blocked my account.
Talking to support and re-buying said game did fix the issue for me.
Funny, because all they have to do is ask ChatGPT “Are you always right?” and it’ll answer something about how it tries to be right but indeed isn’t always.
It’s organized by the European Broadcasting Union, which includes a lot of countries in Northern Africa, some countries in the Caucasus region and some of the countries in between.
Australia also joined Eurovision despite not being a member of the EBU, though…
Yeah, wtf. That’s not “right to repair (verb)”, it’s “right to repair (noun)”. Totally different concepts.
That was a response I got from ChatGPT with the following prompt:
Please write a one sentence answer someone would write on a forum in a response to the following two posts:
post 1: “You sure? If it’s another bot at the other end, yeah, but a real person, you recognize ChatGPT in 2 sentences.”
post 2: “I was going to disagree with you by using AI to generate my response, but the generated response was easily recognizable as non-human. You may be onto something lol”
It does indeed have an AI vibe, but I’ve seen scammers fall for more obvious pranks than this one, so I think it’d be good enough. I hope it fooled at least a few people for a second or made them do a double take.
Yeah, I’ve noticed that too—there’s a distinct ‘AI vibe’ that comes through in the generated responses, even if it’s subtle.
Or “watch”. That way they don’t have to make it obvious that their customers won’t own it but still don’t straight up lie.
Yes and no. Generally speaking, ML models pull toward the average and away from the extremes, while most people have weird quirks when they write. (For example my overuse of (), too many , instead of . and probably a few other things I’m unaware of.)
To give a completely different example: if you average the facial features of humans in a large group (size, position, orientation, etc. of everything) you get a conventionally very attractive person. But very, very few people are actually close to that ideal. This is because the average person, meaning a random person, has a few features that stray far from this ideal. Just by the sheer number of features, there’s a high chance some will end up out of bounds.
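A toy numerical illustration of that last point (the number of features and the “extreme” threshold are arbitrary choices):

```python
# With many independent features, most individuals have at least one
# extreme feature, while the average of the group sits near the center.
import random

random.seed(0)
n_people, n_features = 1000, 50
people = [[random.gauss(0, 1) for _ in range(n_features)] for _ in range(n_people)]

def has_extreme(features, threshold=2.0):
    return any(abs(f) > threshold for f in features)

frac = sum(has_extreme(p) for p in people) / n_people
avg = [sum(p[i] for p in people) / n_people for i in range(n_features)]

print(f"people with at least one extreme feature: {frac:.0%}")  # ~90%
print(f"largest |feature| of the average person:  {max(map(abs, avg)):.2f}")  # tiny
```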
An ML model will generally be punished during training for creating anything that contains such extremes, so the very human thing of being eccentric in any regard is trained away. If you’ve ever seen people generate anime waifus with modern generative models, you know exactly what I mean. Some methods can be and are being deployed to try to keep or bring back those eccentricities, at least when asked for.
On top of that, modern LLM chatbots have a reinforcement learning part, where they learn how to write so that readers will enjoy reading it, which is no longer copying but instead “inventing” in a more trial-and-error style. Think of the videos on YouTube you’ve seen of “AI learns to play x game”, where no training material of someone actually playing the game was used and the model still learned. I’m assuming that’s where the overuse of em-dashes and quippy one-liners comes from. They were probably liked by either the human testers or the automated judges trained on the human feedback used in that process.
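A toy sketch of that trial-and-error loop; the “judge” here is a stand-in reward function I made up, not a real learned reward model:

```python
# Toy RL-from-feedback: a "policy" picks one of a few phrasings, a judge
# scores it, and the policy shifts toward higher-scoring phrasings.
# The phrasings and the judge's tastes are invented for illustration.
import random

random.seed(0)
phrasings = ["plain answer", "answer with quippy one-liner", "answer with em-dash"]
weights = [1.0, 1.0, 1.0]  # unnormalized preference per phrasing

def judge(choice: str) -> float:
    # Stand-in for a reward model trained on human feedback:
    # this judge happens to enjoy quips and em-dashes.
    return {"plain answer": 0.2,
            "answer with quippy one-liner": 0.9,
            "answer with em-dash": 0.8}[choice]

for _ in range(2000):
    i = random.choices(range(len(phrasings)), weights=weights)[0]
    reward = judge(phrasings[i])
    weights[i] *= 1.0 + 0.01 * (reward - 0.5)  # reinforce above-average rewards

total = sum(weights)
for p, w in zip(phrasings, weights):
    print(f"{p}: {w / total:.1%}")
# After training, the quippy/em-dash phrasings dominate the policy.
```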