• 6 Posts
  • 84 Comments
Joined 1 year ago
Cake day: June 18th, 2023




  • Not just song lyrics, but any piece of media

    rant

    This is a horribly rampant issue on Reddit. Swaths of comments are reduced to three-word lines of dialogue from movies that even most Americans may not have seen.

    While it might be acceptable in a community specific to that piece of media, it always comes across as lazy everywhere else.

    A simple link to a relevant clip or snippet would help contextualise the reference, but if commenters were willing to put in that effort, they probably wouldn’t resort to quoting three-word phrases in the first place.

    Unfortunately, this practice is becoming common on Lemmy.

    Some might see my rant as gatekeeping, but it genuinely hinders meaningful discussion on the topic at hand.

    It is a pet peeve of mine that led me to unsubscribe from many otherwise good subreddits and eventually leave that platform altogether (thanks to a push from its CEO).




  • I do not agree with @FiniteBanjo@lemmy.today’s take. LLMs, as they are used today, at the very least reduce the number of steps required to consume previously documented information. So they are solving at least one problem, especially on today’s Internet, where one has to navigate a cruft of irrelevant paragraphs and annoying pop-ups to reach the actual nugget of information.

    Having said that, since you have shared an anecdote, I would like to share a counter(?) anecdote.

    Ever since our workplace allowed the use of LLM-based chatbots, I have never seen them actually help debug an undocumented error or a non-traditional environment/configuration. They have always hallucinated whenever I used them to debug such errors.

    In fact, I am now so sceptical of the responses that I avoid these chatbots entirely and debug errors the “old school” way, using traditional search engines.

    Similarly, while using them to learn new programming languages or technologies, I have always got incorrect responses to indirect questions. I learn that the response was a hallucination only after verifying it through implementation, which defeats the entire purpose.

    I do try out the latest launches and improvements, as I know the responses will eventually get better. Most recently, I tried GPT-4o when it was announced. But I still don’t find these chatbots useful for the purposes mentioned above.