The pathological need to find something to use LLMs for is so bizarre.
It’s like the opposite of classic ML: relatively tiny special-purpose models trained for something critical, out of desperation, because it just can’t be done well conventionally.
But this:
AI-enhanced tab groups. Powered by a local AI model, these groups identify related tabs and suggest names for them. There is even a “Suggest more tabs for group” button that users can click to get recommendations.
Take out the word AI.
Enhanced tab groups. Powered by a local algorithm, these groups identify related tabs and suggest names for them. There is even a “Suggest more tabs for group” button that users can click to get recommendations.
If this feature took, say, a gigabyte of RAM and a bunch of CPU, it would be laughed out. But somehow it ships because it has the word AI in it? That makes no sense.
I am a massive local LLM advocate. I like “generative” ML, within reason and ethics. But this is just stupid.
When I’m browsing around with multiple tabs open, the last thing I want is something to start moving them around and messing my flow up. This is a solution looking for a problem.
Yup
Auto naming functionality is neat in some cases, like the AI chat UI itself
- It’s convenient to have names when toggling between a few recent chats or searching through 10s or 100s of chats later on
- I spawn new chats often and it’s tedious to name them all
- I don’t have a strong preference for what the title is as long as it’s clear what the chat was about

Tab groups don’t hit those points at all
- I’ll have a handful of tab groups
- I don’t make them often
- I have a strong preference for what it’s called, and the AI will have trouble figuring out exactly what I’m using those sites for

I agree with you on almost everything.
It’s like the opposite of classic ML: relatively tiny special-purpose models trained for something critical, out of desperation, because it just can’t be done well conventionally.
Here I disagree. ML is applied high-dimensional statistics, and many problems are by their nature high-dimensional statistical problems.
If you have, for example, an engineering problem, it can make sense to use an ML approach to find patterns in the relationship between input conditions and output results. Based on these patterns you have an idea of where to focus in the physical theory to understand and optimize it.
Another example of “generative AI” I have seen is creating models of hearts: by feeding it MRI scans of hundreds of real hearts, millions of models of probable heart shapes can be generated, and their interaction with medical equipment can be studied. This isn’t a “desperate” approach. It’s a smart one.
Based on these patterns you have an idea of where to focus in the physical theory to understand and optimize it.
How do you tell what the patterns are, or how to interpret them?
The pattern recognition is done by the machine learning itself; that is the core concept of machine learning.
For the interpretation you need to use your domain knowledge. Machine learning combined with knowledge of the domain being analyzed can be a very powerful combination.
Another research example I heard about recently is detecting brain tumors before they occur: MRIs of people who later developed brain tumors are analyzed to see whether patterns can be found that are absent in people who didn’t develop tumors. Such a correlation between certain patterns and later tumor development could help specialists further their understanding of how tumors develop, since they can study those specific patterns.
What we see with ChatGPT and other LLMs is kind of the opposite: the algorithm is detached from any specific domain knowledge. As a result it can make predictions about anything, and they are worth nothing.
The pathological need to find something to use LLMs for is so bizarre.
Venture capital dumped so much money into the tech without understanding the full scope of what it was capable of. Now they’re in so deep that they desperately NEED to find something profitable it can do, otherwise they’ll lose the farm.
TBH, even though I don’t like this specific idea and don’t use Firefox directly, I do like the use of local inference versus sending your data to a third party for AI.
They just needed to do it OPT IN, not OPT OUT.
It is though.
then why the fuck is this newsworthy? ugh. Why is there such a huge hateboner for firefox lately?
I really don’t get it either.
It’s not like it’s a paid product either.
Because they keep betraying their supposed values for short-term gains.
What is the gain? What is a single gain you think they have milked from their users?
Money for their executives
Literally no one on this green earth asked for this shit. In fact, we’ve been pretty direct about how much we don’t want it.
It’s exhausting.
browser.ml.chat.enabled = false

I hate how many of these you have to do on any new installation of Firefox.
That only disables the chat. The overall setting seems to be browser.ml.enable.
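For anyone who wants to set these once instead of clicking through about:config on every new install, the prefs mentioned in this thread can go in a user.js in the Firefox profile folder. A sketch only; the pref names are the ones quoted above, and their exact names and coverage may change between Firefox versions:

```js
// user.js — place in your Firefox profile directory; applied at startup.
// Pref names as discussed in this thread; verify in about:config for your version.
user_pref("browser.ml.chat.enabled", false); // disables the AI chatbot sidebar
user_pref("browser.ml.enable", false);       // appears to be the master switch for local ML features
```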
I just wish one could donate to firefox development specifically. Then they could rid it of all the advertisement and tracking stuff.
Bless your heart.
They will do whatever they believe will maximize profit.
I thought mozilla was a non profit?
Mozilla is a bizarre Matryoshka doll with a for-profit company inside the nonprofit. If anything, I believe this structure is responsible for Mozilla’s problems.
So the profit from the for-profit is passed up to the non-profit.
This is a really common organisational structure and not bizarre.
There’s loads of worthy criticisms to make of mozilla but this is not one of them.
Sure, whereupon the CEO alone can receive an 8-figure compensation package. That’s not at all an issue for the viability of a non-profit.
It’s not as simple as just deciding to hire people at lower rates of pay.
Cost cutting is a tricky game. When an organisation is not on a positive trajectory, cost cutting has a very high risk of reinforcing the underlying problems.
That’s not to say cost cutting isn’t a worthy objective, but it needs to be carefully considered.
If you want a CEO with the right skills and connections you need to pay.
But they have a strong history of paying a lot for CEOs that don’t have the right skills and connections. It’s not just this one, it’s a systemic issue for them.
where is this AI bloat exactly? I use Firefox every day and see no difference
There is none; this is all an “AI = bad” knee-jerk reaction. From what I can tell, Firefox so far has 3 ML-based systems implemented:
- Site / text translation - fully local, small model, requires manual action from the user
- Tab grouping suggestions - fully local, small model, requires manual action from the user
- Image alt text generation (when adding images to a PDF) - fully local, small model, looks like it’s enabled by default but can be turned off directly in the modal that appears when adding alt text

All of these models are small enough to be quickly run locally on mobile devices with minimal wait time. The CPU spikes appear to be a bug in the inference module implementation - not an intended behavior.
Firefox also provides UI for connecting to cloud-based chatbots on a sidebar, but they need to be manually enabled to be used. The sidebar is also customizable so anyone who doesn’t want this button there can just remove it. There’s also a setting in about:config that removes it harder.
I actually really like the way Mozilla is introducing these features. I recently had to visit another country’s post office site and having the ability to just instantly translate it directly on my device is great.
You mean to tell me the general public has knee-jerk reactions and doesn’t know how a computer works?
What a shock that lemmy bashes Mozilla for doing their job.
Firefox does run better when you disable all “ml.chat” settings.