This is what happens when you train your LLMs off Reddit.
Assaltwaffle
This just in: AIs are still unreliable garbage with limited use cases.
kamekaze1024
I understand why LLMs can be unreliable, but I never get why it gets stuff like this wrong. How hard is it to just have it do “GPU” + (<= “280” W) as a search? 🔍
No-Necessary-8333
Remember to smoke 2-3 cigarettes a day while pregnant! -google ai
6 Comments
Your take? Huh?
Go ask ChatGPT how many R’s are in “raspberry”
Idk if it’s been corrected, but for a while it would tell you there are only 2, and it would fight you on it if you tried to correct it
Edit: just tried it and it still says 2
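For the record, the count itself is trivial for code — the failure above is usually attributed to tokenization, since the model never sees individual letters. A one-line check:

```python
# Count occurrences of "r" in "raspberry" (r-a-s-p-b-e-r-r-y).
word = "raspberry"
print(word.count("r"))  # 3
```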