Apple is worried about its own science output, with many of their offices heavily employing data scientists. A lot of people slate Siri, but Apple’s scientists put out a lot of solid research.
Amazon is plugging GenAI into practically everything to appease their execs, because it’s the only way to get funding. Moonshot ideas are dead, and all that remains is layoffs, PIP, and pumping AI into shit where it doesn’t belong to make shareholders happy. The innovation died, and AI replaced it.
Google has let AI divisions take over both search and big parts of ads. Both now deliver worse experiences for users, but don’t worry: any engineer worth anything was laid off, and there are no opportunities in other divisions for you either. If there are, they probably got offshored…
Meta is struggling a lot less, probably because they were smart enough to lay off in one go, but they’re still plugging AI shite in places no one asked for it, with many divisions now severely down in headcount.
All of big tech is really worried about this.
If the AI boom is a dud, I can see many of these companies reducing their output further. If someone comes along and competes in their primary offering, there’s a real concern that they’ll lose ground in ways that were unthinkable mere years ago. Someone could legitimately challenge Google on search right now, and someone could build a cheap shop that doesn’t sell Chinese tat and uses local suppliers to compete with Amazon. Tech really shat the bed during the last economic downturn.
I’m not sure there could be any sort of legitimate threat to them, but I could definitely see a Netflix situation playing out. That is, a popular upstart temporarily seems poised to take over, but then suffers extreme interference from the bigger players, who artificially hold it down while they desperately catch up and ultimately pull at least even, while the Netflix equivalent ends up a shell of what it could’ve been.
Never underestimate how far buckets and buckets of cash reserves can go in overcoming even incredibly out-of-touch laziness when it comes to competing with startups. Apple in particular could probably afford to let competitors get a decade ahead and still come back, given the ridiculous amount of cash they have to float their business on.
Yeah, competition won’t work in a market where some competitors have such massive amounts of wealth. This is a failure of unrestrained capitalism, and ultimately it’s bad for consumers.
I fucking bing’d something the other day to get a better search result. What the fuck google.
Try Kagi. Paid search engines are the future in order to extract yourself from the enshittification of “free” search engines.
If your goal is to get away from this AI shit show, Kagi might not be the answer, according to their own blog.
I will search for a very interesting article you should read, before deciding to give kagi any money.
Edit: found it
There’s also the whole interaction where the CEO went after someone who wrote an article and wouldn’t leave her alone even after she asked him to stop.
From his perspective I get it: you want good press and to clear up any misconceptions. But how he went about it was very unprofessional and far too pushy.
I read that stuff a few weeks ago. And the responses and discussion on Kagi’s Discord. I’ll continue to monitor Kagi’s behavior, of course, but for now I prefer Kagi. I get far more relevant results with no advertising noise and as much or as little “AI” assistance as I want.
Google is a cesspool and DDG is simply inferior - worthy, but inferior.
Using Kagi is not a bad decision. After reading a lot of positive things about it and being quite hyped, I was rather rudely reminded that every good thing comes with its own baggage of bad.
I thought I’d share it, so that everyone can make their own decision.
I also tried MetaGer, a German meta search engine. Sad to report it’s not usable for me, although I like the club behind it (SUMA-EV) and donated some money to them.
For now it’s DDG I guess. :-(
No. They are still capable of the pressure tactics typical of an oligopoly (censoring mentions of their competition, tactically buying up things that could help that competition and shutting them down, defamation, lobbying for laws aimed at their competitors).
Unless that happens too fast for them to realize.
Whaddya mean, “if”? Emperor wears no clothes…
AI did boom, but people don’t realize the peak happened a year ago. Now all we have is latecomers with FOMO. It’s gonna be all incremental gains from here on.
A simple control algorithm “if temperature > LIMIT turnOffHeater” is AI, albeit an incredibly limited one.
LLMs are not AI. Please don’t parrot marketing bullshit.
The former has an intrinsic understanding of a relationship grounded in reality; the latter has nothing of the sort.
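The thermostat rule quoted above can be written out as a complete, if trivial, sense-decide-act step. This is just an illustrative sketch; the threshold value and the function name are made up, not taken from anywhere:

```python
# Minimal sketch of the quoted thermostat rule as one full control step.
# LIMIT and control_step are hypothetical names chosen for illustration.
LIMIT = 21.0  # target temperature in degrees Celsius (assumed)

def control_step(temperature: float, heater_on: bool) -> bool:
    """One sense-decide-act cycle: return the heater's new state."""
    if temperature > LIMIT:
        return False  # too warm: turn the heater off
    return heater_on  # otherwise leave the heater as it is

control_step(25.0, True)   # -> False (hot room, heater switches off)
control_step(18.0, True)   # -> True (cold room, heater stays on)
```

Whether you call this “AI” or just a control law is exactly the qualitative-versus-quantitative question being argued in this thread.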
I see what you’re getting at: LLMs don’t necessarily solve a problem, they just mimic patterns in data.
That is indeed exactly my point. LLMs are just a language-tailored expression of deep learning, which can be incredibly useful but should never be confused with any kind of intelligence (i.e. the ability to draw logical conclusions).
I appreciate that you see my point and admit that it makes some sense :)
Examples where I think pattern recognition by deep learning can be extremely useful:
Rechecking medical imaging data of patients who have already been screened by a doctor, to flag some of it for a re-check by a second doctor. This could improve the chances of e.g. early cancer detection without a real risk of false alarms, because again, a real doctor will look at the flagged results in detail before a patient is even alerted to a potential diagnosis.
Pre-filtering large amounts of data for potential matches, e.g. exoplanet searches by certain patterns (Planet Hunters lets humans do this as crowdsourcing).
What I’m afraid is happening with people who don’t see why a very simple algorithm is already AI, yet consider LLMs AI, is that they have mentally decided to call AI whatever seems “AGI”-like or “human-like”. They mistake the patterns of LLMs for a conscious being, and that is incredibly dangerous in terms of how much the answers given by LLMs are trusted.
Why do I think they subconsciously imply (self-)awareness/consciousness? Because refusing to count a control mechanism like a simple room thermostat as (very limited) AI means viewing it as “too simple” to be AI. A person with that view makes a qualitative distinction between control laws and “AI”, where a quantitative distinction between “simple AI” and “advanced AI” would be appropriate.
And a qualitative distinction that elevates a complex word-guessing machine to “intelligence” can only be made by people who actually believe there’s understanding behind those word predictions.
That’s my take on this.
Or more like their user experience was already such garbage that adding AI to it doesn’t make any noticeable change lol
I don’t use a single Meta product on purpose. I’m sure they scrape my data despite my best efforts to not be tracked online.
I still unfortunately order things from Amazon for the convenience, use Windows for gaming and at work, and occasionally use Google search with heavy boolean operators, custom search engines, and browser extensions for filtering out the garbage. I also still use Google Maps, and I have an Android-based TV where I occasionally watch SmartTube.
Hell I even get Netflix included with my T-Mobile subscription. My wife watches that.
And for now, I have an iPhone SE until it dies and I make the switch to a Google phone or something.
Typing this out makes me wonder what I’m waiting for to find alternatives for this FAANG garbage, but I have no idea how Facebook still exists.
It turns out it’s incredibly easy to order as a guest on other sites.
Yes, but I don’t want to type my billing details every time I need something. I don’t want to wait 6 weeks. I don’t know if other sites are reputable. I don’t want to pay shipping. I like being able to wishlist stuff or keep stuff in my cart for later, and to read lots of reviews on products (I’m aware many are fake).
There’s also the fact that nearly every website runs on AWS, so even if I boycott Amazon (I’m sure they’ll miss my $100 a month in purchases), I’m still providing them money by visiting the sites that are hosted on AWS. Pretty hard to completely avoid them in this day and age.
Amazon for me has been utter garbage over the last 10 years. Fake products, stuff that is supposedly coming next day arriving in 3+ days, customer service that is just copy/paste canned answers, etc.
Monopolies don’t care about the user experience, only profit. The AI doesn’t understand the former, only the latter. The continued degradation of the user experience is a likely indicator of an increase in revenue as a function of the successful application of AI.
“The AI doesn’t understand the former, only the latter.”
Do you possibly mean “the AI evangelists” or something similar?
Like, I could totally understand it in the “software will also include the biases of those who wrote it” kind of way (a la Amazon’s failed attempt at automating job candidate search). If the only incentive you’re given as a programmer is “make it make money”, then yeah, your AI is going to bias towards that end.
I just couldn’t tell on first reading.
I’m not actually asking for good faith answers to these questions. Asking seems the best way to illustrate the concept.
Does the programmer fully control the extents of human meaning as the computation progresses, or is the value in leveraging ignorance of what the software will choose?
Shall we replace our judges with an AI?
Does the software understand the human meaning in what it does?
The problem with the majority of the AI projects I’ve seen (in rejecting many offers) is that the stakeholders believe they have significantly more influence over the human meaning of the results than the quality and nature of the data they have access to actually allows. The scope of the data limits the scope of the resulting information, which in turn limits the scope of meaning. Stakeholders want to break those rules with “AI voodoo”, and then someone comes along and sells the suckers their snake oil.
And people will still say AI isn’t a bubble.
There is a bubble in AI; AI isn’t a bubble. In the same way, there was a bubble in e-commerce that led to the dotcom crash, but that didn’t mean there was nothing of value there, just that there was too much money chasing hype.
I think it will hinge on one thing: will AI provide an experience that is maybe worse, but still sufficient to keep market share, at a lower cost than putting in the proper effort? If so, it might still become a tragic “success” story.
It’s very, very costly, both the hardware and the electricity it takes to run it. There may be a bit of sunk-cost fallacy at play for some, especially the execs who are calling for AI Everything, but in the end, if AI doesn’t generate enough of an increase in revenue to offset its operational costs, even those execs will bow out. I think the economics of AI will cause the bubble to burst, because end users aren’t going to pay for a service that does a mediocre job at most things but costs more.
Summary: stick to open source if you want usability.
How does that address web search and online shopping?