The people here don’t get LLMs and it shows. This is neither surprising nor a bad thing imo.
In what way is presenting factually incorrect information as if it’s true not a bad thing?
LLMs operate on tokens, not letters, so this is expected behavior. A hammer sucks at controlling a computer, and that's okay. The issue is the people telling you to use a hammer to operate a computer, not the hammer's inability to do so.
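For anyone curious what "tokens, not letters" means in practice, here's a minimal sketch using the tiktoken library (assuming it's installed); the exact splits depend on the tokenizer, but the point is that the model sees a few chunks of text, not ten individual characters.

```python
# Minimal sketch (assumes the `tiktoken` package is installed) showing how an
# LLM "sees" a word: as token IDs for chunks of text, not as individual letters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models
word = "strawberry"

tokens = enc.encode(word)
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in tokens]

print(tokens)   # a handful of integer IDs, not ten letters
print(pieces)   # the chunks the model actually operates on, e.g. ['str', 'aw', 'berry']
```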
It would be luck-based for pure LLMs, but now I wonder if the models that can use Python notebooks might be able to write a script to count it. Like, it's actually possible for an AI to get this answer consistently correct these days.
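A sketch of what that tool-using approach boils down to: once the model hands the question off to actual code instead of "reading" tokens, counting letters is trivial. The word and letter here are just illustrative.

```python
# Count how many times a letter appears in a word, the way a code-interpreter
# tool call would, instead of asking the model to "see" the letters itself.
word = "strawberry"
letter = "r"

count = word.lower().count(letter)
print(f"'{letter}' appears {count} times in '{word}'")  # -> 3
```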
Maybe in an "it is not going to steal our jobs… yet" way.
True, but if we suddenly invent an AI that can replace most jobs, I think the rich have more to worry about than we do.
Maybe, but I am in my 40s and my back aches; I am not in shape for a revolution :D
Lenin was 47 in 1917
That’s looking at the bright side :D
People who make fun of LLMs most often do get LLMs and are trying to point out how they tend to spew out factually incorrect information. That is a good thing, because many, many people out there do not, in fact, "get" LLMs (most are not even acquainted with the acronym and refer to the catch-all term "AI" instead), and there is no better way to warn about the inaccuracy of LLM output, however realistic it might sound, than to point it out with examples of ridiculously wrong answers to simple questions.
Edit: minor rewording to clarify