I saw people complaining that companies have yet to find the next big thing with AI, but I am already seeing countless products offering good solutions for almost every field imaginable. What is this thing the tech industry is waiting for, and what are all these current products if not what they had in mind?
I am not great at understanding the business side of this situation, and I have been out of the news loop for a long time, so I would really appreciate it if someone could ELI5.
Here’s a secret. It’s not true AI. All the hype is marketing shit.
Large language models like GPT, Llama, and Gemini don’t create anything new. They just regurgitate existing data.
You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept that they are wrong.
Until an LLM can understand why it is wrong, we won’t have true AI.
It’s just a stupid probability bucket. The term AI shits me.
Statistical methods have been a longstanding mainstay in the field of AI since its inception. I think the trouble is that the term AI has been co-opted for marketing.
That’s not a secret. The industry constantly talks about the difference between LLMs and AGI.
Until a product goes through marketing and they slap ‘Using AI’ into the blurb when it doesn’t use any.
LLMs are AI. They are not AGI. AGI is a particular subset of AI; that does not preclude non-general AI from being AI.
People keep talking about how it just regurgitates information, and says incorrect things sometimes, and hallucinates or misinterprets things, as if humans do not also do those things. Most people just regurgitate information they found online, true or false. People frequently hallucinate things they think are true and stubbornly refuse to change when called out. Many people cannot understand when and why they’re wrong.
Large language models like GPT, Llama, and Gemini don’t create anything new
That’s because it is a stupid use case. Why should we expect AI models to be creative, when that is explicitly not what they are for?
I have different weights for my two dumbbells and I asked ChatGPT 4.0 how to divide the weights evenly across all 4 sides of the 2 dumbbells. It told me to use 4 half-pound weights instead of my 2-pound weights constantly, and finally, after about 15 minutes, it admitted that, with my set of weights, it’s impossible to divide them evenly…
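For what it’s worth, this is a small enough search problem that a few lines of code settle it outright. Here’s a minimal Python sketch of a brute-force checker (the comment doesn’t list the exact plate inventory, so the weights below are hypothetical):

```python
from itertools import product

def can_split_evenly(plates, sides=4):
    """Brute-force check: can these plates be split into `sides`
    groups of equal total weight? Fine for small plate counts."""
    total = sum(plates)
    if total % sides:          # total must divide evenly first
        return False
    target = total / sides
    # Try every assignment of each plate to one of the four sides.
    for assignment in product(range(sides), repeat=len(plates)):
        sums = [0.0] * sides
        for plate, side in zip(plates, assignment):
            sums[side] += plate
        if all(s == target for s in sums):
            return True
    return False

# Hypothetical plate set, loosely matching the comment's description.
print(can_split_evenly([2, 2, 0.5, 0.5, 0.5, 0.5]))  # False
```

An LLM predicts plausible-sounding text rather than running an exhaustive check like this, which is part of why it flailed for 15 minutes before conceding.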
You used an LLM for one of the things it is specifically not good at. Dismissing its overall value on that basis is like complaining that your snowmobile is bad at making its way up and down your basement stairs, and so it is therefore useless.
You are totally right! Sadly, people think that LLMs are able to do all of these things…
It is true AI, it’s just not AGI. Artificial General Intelligence is the sort of thing you see on Star Trek. AI is a much broader term and it encompasses large language models, as well as even simpler things like pathfinding algorithms or OCR. The term “AI” has been in use for this kind of thing since 1956, it’s not some sudden new marketing buzzword that’s being misapplied. Indeed, it’s the people who are insisting that LLMs are not AI that are attempting to redefine a word that’s already been in use for a very long time.
You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept that they are wrong.
Reminds me of the classic quote from Charles Babbage:
“On two occasions I have been asked, – ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”
How is the chatbot supposed to know that the information it’s been given is wrong?
If you were talking with a human and they thought something was true that wasn’t actually true, do you not count them as an intelligence any more?
The most successful applications (e.g. translation, medical image processing) aren’t marketed as “AI”. That term seems to be used more when companies want to distance themselves from the potential output by pretending that the software has independent agency.
Disclaimer: I currently work in the field, not on the fundamental side of things, but I build tooling for LLM-based products.
There are a ton of true uses for newer AI models. You can already see specialized products getting mad traction in their respective niches, and the clients are very satisfied with them. It’s mostly boring stuff, legal/compliance like Hypercomply or accounting like Chaintrust. It doesn’t make headlines but it’s obvious if you know where to look.
Recently I saw AI transcribe a YT video. It was genuinely helpful.
“recent AI developments”
so, you just want to talk about the current batch of narrow AI LLMs?
or are you open to all the graphics/video editing stuff? (Topaz’s quality is pretty amazing)
it’s a lot better than “is hotdog”.
it’s also slow.
remember, all these systems do is take a bunch of data in and guess until they get it right, then based on that, process more data and so on.
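that “guess and adjust” loop is easy to show in miniature. toy python sketch, nothing like a real model, just the shape of the idea: learn a single number by guessing, measuring the error, and nudging the guess:

```python
import random

# toy "guess until you get it right" loop: learn w so that
# w * x matches the hidden pattern 3 * x.
w = random.random()            # start with a random guess
learning_rate = 0.1

for step in range(1000):
    x = random.uniform(-1, 1)
    target = 3 * x             # the pattern hiding in the data
    error = w * x - target     # how wrong the current guess is
    w -= learning_rate * error * x   # nudge the guess to shrink the error

print(round(w, 3))  # ends up very close to 3.0
```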
Have you ever read the story about the AI tank from the 90s?
short version of the story is: computer was fed a bunch of pictures. some with tanks, some without. after a while, it got great at identifying them.
when they tried it out with a tank, it kept shooting at trees.
turns out, all the pics with tanks were taken in the shade.
now, like I said: it’s just a story, probably apocryphal.
but the point is, this is something that’s been worked on for decades. it’s as much a problem of what you teach as of how you teach it.
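here’s a toy version of that failure mode in code. totally made-up numbers; the “model” just fits a brightness threshold, because that’s all the training data rewards:

```python
# toy version of the tank story: every "tank" photo in training
# happens to be dark (shade), so fitting a single brightness
# threshold scores 100% on training data while learning nothing
# about tanks.
train = [  # (average_brightness, has_tank) - made-up numbers
    (0.2, True), (0.3, True), (0.25, True),    # tanks, all in shade
    (0.8, False), (0.7, False), (0.9, False),  # no tanks, all sunny
]

# "training": pick the threshold that separates the two classes.
threshold = (max(b for b, t in train if t) +
             min(b for b, t in train if not t)) / 2

def predict_tank(brightness):
    return brightness < threshold  # dark == tank, per the training set

# deployment: a tank photographed in bright sunlight.
print(predict_tank(0.85))  # False - misses the tank
# a shady grove of trees, no tank in sight.
print(predict_tank(0.2))   # True - "shoots at trees"
```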
so, to be clear: there are LOTS of “true uses”. the issue is “they aren’t ready yet”.
we’re just playing around with beta versions (effectively) while still being amazed at how far they’ve come.
They’re looking for something like the internet or smartphones and are disappointed that it’s not doing something on that level. Doesn’t matter that there’s tons of applications in science and art (even if we’d like to ignore the latter).
Or maybe they thought we’d have human level AI by now.
Between OCR and LLMs, summarising scanned documents (something I do ~20% of the time) now takes about half the mental effort and time. As I’m paid on billable hours, this is big for me. I have told nobody and have not increased my overall output commensurately. This is the only good kind of automation I’ve observed: bottom-up, no decrease in compensation, no negotiations.
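The pipeline itself is nothing exotic. A minimal sketch of the shape of it in Python (assuming the pytesseract and openai packages and an OPENAI_API_KEY in the environment; the model name is illustrative, not necessarily what I actually use):

```python
import pytesseract                 # OCR wrapper around Tesseract
from PIL import Image
from openai import OpenAI

def summarise_scan(image_path: str) -> str:
    # step 1: OCR the scanned page into plain text
    text = pytesseract.image_to_string(Image.open(image_path))

    # step 2: hand the extracted text to an LLM for a summary
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "Summarise this document concisely."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(summarise_scan("scan.png"))
```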
I tried FreedomGPT for better personal ownership, but for now, the hardware isn’t up to snuff for my needs. With stronger processing and somewhat better open source models I’ll be sitting pretty.
Current gen AI is pretty mediocre. It’s not much more than the bastard child of a search engine and every voice assistant that has been around for the last ten years. It has the potential to be a stepping stone to fantastic future tech, but that’s been true of tons of different technologies for basically as long as we’ve been inventing things.
AI is not good enough to replace the majority of workers yet. It summarizes information pretty well and can be helpful with drafting any sort of document, but so was Clippy. When it doesn’t know something it can lie confidently. Lie isn’t really the right word, but I’ll come back to that concept in a second. Incorrect information is frustrating in most cases, but it can be deadly when presented by a source that is viewed as trustworthy, and what could be more trustworthy than an AI with access to the collective knowledge of mankind? Well, unfortunately for us, AI as we know it isn’t really intelligent, and the databases they’re trained on also contain the collective stupidity of mankind.
That brings us back to the concept of lying and what I view as the fundamental flaw of current AI; namely that any sort of data interpretation can only be as good as the data it describes. ChatGPT isn’t lying to you when it says you can put glue on your cheese pizza, it’s just pointing out that someone who said that got a lot of attention. Unfortunately it leaves out all the context which could have told you that pizza would not be fit to consume and presents the fact that it was a popular answer as if that is the only thing that defines the best answer. There’s so much more that needs to be taken into account, so much unconscious human experience being drawn from when an actual human looks at something and tries to categorize or describe it. All of that necessary context is really difficult to impart to a computer and right now we’re not very good at that essential piece of the puzzle.
If we could assume that all datasets analyzed by AI were free from human error, AI would be taking over the world right now. However, that’s not the world we live in. All data has errors. Some are easy to spot but many are not. AI firms are getting companies to salivate at the idea of easy manipulation of data in one form or another. They aren’t worried about the errors in the data because they view that as someone else’s problem and the companies all think their data is good enough that it won’t be an issue. Both are wrong. That’s exactly why you hear a lot of talk about AI right now and not all that much practical application beyond replacing customer service reps, especially in the business world. Companies are finding out that years of bad practices have left them with a dataset full of errors. Can they find a way to get AI to correct those errors? In some cases yes, in others no. In either case the missing piece preventing a full scale AI takeover is all that human background context necessary for relevant data interpretation. If we find a way to teach that to an AI then the world is going to look vastly different than it does today, but we’re not there yet.
There is truth in statistics. The minor errors are irrelevant to the actual LLM. Problems like the bad Reddit quotes surfaced by Google have nothing to do with the LLM itself; that is RAG (retrieval-augmented generation) plus plain bad standard code. The model itself learns statistical word associations across millions of instances of similar data, and minor errors are irrelevant in that context.
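“Statistical word associations” sounds abstract, but the core idea fits in a few lines. A toy Python sketch (a made-up bigram counter, nowhere near a real LLM, but the same statistical spirit):

```python
from collections import Counter, defaultdict

# Count which word follows which across a tiny, made-up corpus,
# then pick the most likely next word. Real LLMs condition on far
# more context, but the statistical core is this kind of counting.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def most_likely_next(word):
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # 'cat' - seen twice after 'the'
```

A real model conditions on vastly more context than one word, but the errors-wash-out-in-aggregate argument above is about exactly this kind of counting.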
Generative tools hosted online are trash in their controls and especially in the depth of their capabilities. If you play with an enthusiast-level consumer machine, with ComfyUI, the full node manager (not just the comfyanonymous repo), and the hundreds of nodes, things change. I’ve spent the last week reading white papers, following code examples, and trying new techniques. The possibilities are getting exponentially more complex in a short period of time. I think most people working on generative AI in the public space are turning inward at the moment because it is hard to grasp all the possibilities, or maybe I’m just not following the right people.
We are in a data grab phase where it is feasible to collect more data as opposed to refining what exists. I think the techniques are growing too fast to say what will be the most efficient way of refining data. Eventually a refinement phase is likely.
Hallucinations are not actually a thing. The reasons they happen are just too complex to explain to a consumer public, or no one would use the tool. If you learn about alignment and really start reading the tokenizer code, you’ll see that it is just a complex system where most errors are due to safety alignment. The rest are generalizations made for an average use case. The underlying capability is far more complex and nuanced than any publicly hosted stalkerware data-mining operation makes it appear. These real capabilities of the LLM are the building blocks of change. There are many other systems involved than just the tensor tables and word-relationship statistics.
I think most of the media coverage is hype. That doesn’t directly answer your question… But I take everything I read with a grain of salt.
Currently, for the tech industry, its main use is to generate hype and drive the speculation bubble. Whether it’s useful or not, slapping the word “AI” on things and offering AI services increases the value of your company. And I personally think that if they complain about this, it’s because they want the bubble even bigger, but they have already done the most obvious things. That has nothing to do with “finding a use” in the traditional sense (for the thing itself).
Other inventions came with hype too, like smartphones (the iPhone). Everyone wanted one. Lots of people wanted to make cash off that. But still, when something is brand new, it’s not always obvious what tasks it excels at and what the main benefits are in the long term. At first everyone wants in just because it’s cool and everyone else has one. In the end it turned out that not every product is better with an app (or Bluetooth). And neither a phone nor AI can (currently) do the laundry and the other chores. So there is a limit to its “use” anyway.
So I think the answer to your question of what they had in mind is: what else can we enhance with AI, or just slap the words on, to make people buy more and to look cool in the eyes of our investors?
I think one of the next steps is the combination with robotics. That will make it quite a bit more useful: input from sensors so AI can take part in the real world, not just the virtual one. But that’s going to take some time. We’ve already started, but it won’t happen overnight. For the near future I think it’s going to be a gradual increase. AI just needs to get more intelligent, make fewer errors, and be more affordable to run. That gradual increase will provide me with a better translation service on my phone, a smart home I can interact with better, an assistant that can clean up the mess with all the files on my computer, organize my picture folder… But the revolution already happened. I think it’s going to be constant but smaller steps of progress from now on.
https://en.m.wikipedia.org/wiki/File:Gartner_Hype_Cycle.svg
It’s not as helpful as everybody thinks, and slowly people are realizing that.
AI is being used to replace a lot of jobs, but companies usually do not want to advertise that.
There are possibilities for consumer products (e.g. a smarter Alexa or Siri), but those are not monetized, so they cannot generate $100B in revenue from them.
There is the possibility of more innovative products, e.g. a smart Christmas toy, but AI needs a few more years to get there.
AI is being used to replace a lot of jobs, but companies usually do not want to advertise that.
I would be careful with that statement.
I’ve been involved in some projects about “leveraging data” to reduce maintenance costs. A big pitfall is that you still need someone to do the job. Great, now you know that the “Primary pump” is about to break. You still need to send a tech to replace it, you often have to deal with a user who can’t afford to turn the system off until the repair is done, and you can’t let someone work alone in the area. So you end up having to send two people ASAP to repair the “Primary pump”.
It’s a bit better in terms of planning/resources than “send two people to diagnose what’s going wrong, get the part, and do the repair”, because it lets you replace an engineer able to make a diagnosis with technicians able to execute a procedure (which is itself an issue as soon as they have to think outside the box). It allows for a more dynamic preventive-maintenance schedule. So somehow, it helped cut maintenance costs and improve system reliability. But in the end, you still need staff to do the repair. And that leaves aside all the manpower needed to collect and process the data: hardware engineers working out how to integrate sensors into the machines, data engineers building a database able to hold the data, data scientists building efficient algorithms, maintenance experts trying to make sense of the data, and so on.
I feel like a big chunk of AI will be similar, with some jobs being cut (or deskilled) while tons of new jobs take over.
I’m not sure it’s going to be that. That was the model for the last wave of tech advancement layoffs and job replacements. This one is going to be so much dumber.
It’s no secret that most companies are stagnant or losing money right now across the board, for many reasons: disposable income is way down, COVID changed mentalities (people decided they wanted to live instead of just consume), and products have just been getting worse. So CEOs are using AI to replace jobs that AI cannot yet replace. It immediately makes their bottom line look better for investors while doing nothing useful. This will bite them in the ass soon, but they’ll say AI was oversold and it’s not their fault. Meanwhile, it looks like the nothing they’re doing to improve their company is working, and they survive another day.
You’re falling into a no true Scotsman fallacy. There are plenty of uses for recent AI developments, I use them quite frequently myself. Why are those uses not “true” uses?