David Gerard@awful.systems (mod) to TechTakes@awful.systems · English · 4 months ago
LLM vendors are incredibly bad at responding to security issues (pivot-to-ai.com)
Sailor Sega Saturn@awful.systems · 4 months ago (edited)
Sloppy LLM programming? Never!
In completely unrelated news I’ve been staring at this spinner icon for the past five minutes after asking an LLM to output nothing at all:
self@awful.systems · 4 months ago
same energy as “your request could not be processed due to the following error: Success”
earthquake@lemm.ee · 4 months ago
What are the chances that the front end was not programmed to handle the LLM returning an empty string?
Sailor Sega Saturn@awful.systems · 4 months ago
Quite likely, yeah. There’s no way they don’t have a timeout on the backend.
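For the curious, here is a minimal, entirely hypothetical TypeScript sketch of how that failure mode could arise: a streaming chat frontend that clears its spinner only on the first non-empty chunk. All names here are invented; this is not any vendor’s actual code.

```typescript
// Hypothetical sketch only — invented names, not any vendor's real frontend.
// The spinner is cleared on the first non-empty chunk of the streamed reply,
// so a model that returns nothing leaves it spinning forever.

interface ChatUi {
  hideSpinner(): void;
  append(text: string): void;
}

async function renderReply(
  stream: AsyncIterable<string>,
  ui: ChatUi,
): Promise<void> {
  let spinnerCleared = false;
  for await (const chunk of stream) {
    if (chunk.length === 0) continue; // empty chunks skip everything below
    if (!spinnerCleared) {
      ui.hideSpinner(); // only ever reached once a non-empty chunk arrives
      spinnerCleared = true;
    }
    ui.append(chunk);
  }
  // Bug: if the stream ends having produced no non-empty chunk (the model
  // "outputs nothing at all"), hideSpinner() is never called.
}
```

The fix would be equally small: clear the spinner when the stream closes (or after a timeout), not only on the first token.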
David Gerard@awful.systems (OP, mod) · 4 months ago
boooo. Gemini now replies “I’m just a language model, so I can’t help you with that.”
froztbyte@awful.systems · 4 months ago
“what would a reply with no text look like?” or similar?
David Gerard@awful.systems (OP, mod) · 4 months ago
> what would a reply with no text look like?
nah, it just described what an empty reply might look like in a messaging app
they seem to have done quite well at making Gemini do mundane responses
froztbyte@awful.systems · 4 months ago
that’s a hilarious response (from it). I can perfectly understand how it got there, which makes it even more laughable
[deleted by creator]