Interesting question… I think it would be possible, yes. Poison the data, in a way.
Not Perplexity specifically; I’m talking about the broader “issue” of data mining and its implications :)
You’re aware that it’s in their best interest to make everyone think their “AI” can execute advanced cognitive tasks, even if it has no ability to do so whatsoever and it’s mostly faked?
Are you sure you read the edits in the post? Because they say the exact opposite; Perplexity isn’t all-powerful and all-knowing. It just crawls the web and uses other language models to “digest” what it found. They are also developing their own LLMs. Ask Perplexity yourself or check the documentation.
Taking what an “AI” company has to say about their product at face value in this part of the hype cycle is questionable at best.
Sure, that might be part of it, but they’ve always been very transparent about their reliance on third-party models and web crawlers. I’m not even sure what your point here is. Don’t take what they said at face value; test the claims yourself.
What did you mean by “police” your content?
Seems odd that someone from dbzer0 would be very concerned about data ownership. How come?
That doesn’t make much sense. I created this post to spark a discussion and hear different perspectives on data ownership. While I’ve shared some initial points, I’m more interested in learning what others think about this topic rather than expressing concerns. Please feel free to share your thoughts – as you already have.
I don’t know exactly how Perplexity runs its service. I assume that their AI reacts to such a question by googling the name and then summarizing the results. You certainly received much less info about yourself than you could have gotten via a search engine.
Feel free to go back to the post and read the edits. They may help shed some light on this. I also recommend checking Perplexity’s official docs.
The prompt was something like, “What do you know about the user [email protected] on Lemmy? What can you tell me about his interests?” Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate compared to the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.
It even talked about this very post on item 3 and on the second bullet point of the “Notable Posts” section.
However, when I ran the same prompt again (or similar prompts), it started hallucinating a lot of information. So, it seems like the answers are very hit or miss. Maybe that’s an issue that can be solved with some prompt engineering, and as one’s account gets more established.
I think their documentation will help shed some light on this. Reading my edits will hopefully clarify that too. Either way, I always recommend reading their docs! :)
Not really. All I did was ask it what it knew about [email protected] on Lemmy. It hallucinated a lot, though. The answer was 5 to 6 items long, and the only one that was partially correct was the first one – it got the date wrong. But I never fed it any data.
My favorite anime website is down; good thing FMHY has a bunch of great ones to choose from. Migrating sucks, though.