It really is big. From baby’s first prompting on a big corpo model, learning how tokens work, to setting up your own environment to run models locally (because hey, not everyone knows how to use git), to soft prompting, to training your own weights.
Nobody is realistically writing foundation models unless they work at Google or whatever, though.
I’ve even heard people try to call slightly complex bots “AI” and claim they can code them (or their friend totally can, lol). It’s infuriating and hilarious at the same time.
Not only that, but what I was aiming at was building applications that actually use the models. There are thousands upon thousands of internal tools and applications built on top of various models, and they all require varying levels of coding skill.
True! Interfacing is also a lot of work, but I think that starts straying away from AI to “How do we interact with it.” And let’s be real, plugging into OAI’s or Anthropic’s API is not that hard.
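To be fair, “not that hard” is easy to demonstrate. Here’s a minimal sketch of what plugging into a chat-completions API looks like, assuming the official `openai` Python SDK and an `OPENAI_API_KEY` environment variable; the model name is illustrative, not prescriptive.

```python
# Minimal sketch of calling a chat-completions API.
# Assumes the official `openai` SDK (pip install openai) and an
# OPENAI_API_KEY env var; the model name here is illustrative.
import os


def build_chat_request(user_text: str,
                       system: str = "You are a helpful assistant.") -> dict:
    """Assemble the request payload -- most of the 'work' involved."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_text},
        ],
    }


def ask(user_text: str) -> str:
    from openai import OpenAI  # imported lazily so the sketch loads without the SDK
    client = OpenAI()  # picks up OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(**build_chat_request(user_text))
    return resp.choices[0].message.content


if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(ask("Say hi in five words."))
```

That’s the whole integration; everything past this point is product work, not AI work.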
Does remind me of a very interesting implementation I saw once though. A VRChat bot powered by GPT-3.5 with TTS that used sentiment classification to display the appropriate emotion for the text generated. You could interact with it directly via talking to it. Very cool. Also very uncanny, truth be told.
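The emotion part of that pipeline is simpler than it sounds: generated text goes through a sentiment classifier, and the label picks the avatar’s expression. A toy sketch of the idea — the lexicon classifier below is a stand-in for whatever model the real bot used, and all the names and emotion mappings are made up for illustration:

```python
# Toy sketch of the described pipeline: classify the generated reply's
# sentiment, then map the label to an avatar expression shown during TTS.
# The word-list classifier is a deliberately crude stand-in for a real model.

POSITIVE = {"great", "love", "happy", "awesome", "glad", "cool"}
NEGATIVE = {"sad", "hate", "awful", "angry", "terrible", "sorry"}

EMOTION_FOR_LABEL = {
    "positive": "smile",
    "negative": "frown",
    "neutral": "idle",
}


def classify_sentiment(text: str) -> str:
    """Count positive vs. negative words and return a coarse label."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"


def pick_expression(generated_text: str) -> str:
    """What the avatar displays alongside the TTS playback."""
    return EMOTION_FOR_LABEL[classify_sentiment(generated_text)]


# e.g. pick_expression("I'm so happy you came to visit!") -> "smile"
```

Swap the word lists for an actual sentiment model and this is plausibly most of the glue code such a bot needs.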
All that is still in the realm of “fucking around” though.
There’s a huge gap between “playing with prompts” and “writing the underlying models,” and the entire gap is all coding.
w++ is a programming language now 🤡