Mozilla has detailed its pivot to AI, lazily framing its "people-first AI" vs "big tech AI" gambit as a rehash of the 2000s browser wars.
Over the next three years, all of Mozilla’s portfolio "will design their strategies" and "measure their success" by how much AI they're adding.
AI features in Firefox will be "opt-in", but no doubt it'll nag you to try them since Mozilla is making 20% yearly increases in non-search revenue part of its "bottom line" mission.
https://blog.mozilla.org/en/mozilla/rewiring-mozilla-ai-and-web/
@omgubuntu
@mozilla has shown their way I guess
⁃ Closed their Mastodon instance, which cost them next to nothing to run
⁃ Links to racist Twitter on their website
⁃ No links to FOSS platforms
⁃ Chases the AI dragon
2/4 What do the #DigitalOmnibus proposals mean for our #DataProtection and #privacy?
🚩 weaken ePrivacy rules, opening the door to constant tracking of phones, cars, and smart devices
🚩 reopen and hollow out the GDPR, allowing corporations to mark their own homework
🚩 create a digital environment where state actors and corporate powers, especially Big Tech, gain more freedom to collect and exploit personal information
3/4 What do the #DigitalOmnibus proposals mean for protections against harmful #AI systems?
🚩 tear apart the #AIAct, letting companies secretly exempt themselves from oversight and giving risky AI systems a free pass to discriminate or cause harm
🚩 give in to industry demands, delaying the implementation of essential protections
@EUCommission in 2020: Some #AI systems are so dangerous, they should be regulated as high-risk applications with lots of safeguards.
@EUCommission in 2025: Bah, let's wait and let the high-risk shit run wild first.
Almost half a century later, IBM's internal training documents from the late 70s are especially valuable in the era of #AI.
FWIW I read this as a not-very-coded warning to governments that they need to get their wallets out. They learned from the banks that, as long as they get "too big to fail", they can knowingly inflate this bubble without personal consequences. #AI #Google #USPol #UKPol #TechNews
https://www.bbc.co.uk/news/articles/cwy7vrd8k4eo
So the whole thread that emerged after the cats and bears video I posted yesterday got me thinking about the philosophical and ethical issues surrounding #AI.
As a technology, no matter how much I hate it, it’s not going away.
It’s what gets done with it - the intention behind its use - that becomes disturbing, right? As with any technology.
And sadly, our collective experience is that new technologies usually get employed, one way or another, to harm us.
1/
Today, I think we’d do well to distance ourselves from the AI hype. The slop is real, and it’s now obvious that it has created massive problems and is likely to continue to do so. I don’t want to associate myself with something this damaging. Do you?
The issues are many.
For one, AI is fundamentally hostile to anyone contributing to the digital commons: AI companies are massively freeloading on published source code, articles, images and any other creative content, without any thought for license constraints or contributing back. If AI companies were, for example, funding the open source ecosystems they are DoS’ing, or paying the artists they are copying, the situation might be marginally better. They’re not. The training material these companies are misappropriating should mark their models as ethically tainted.
Next, we have to remember that these models are “grown” on whatever data is fed to them. If this input contains bias, lies, inaccuracies or omissions, then the resulting model will reflect them. Garbage in, garbage out.
And even worse, the resulting model is opaque by design. Any rules, corrections, filters or other efforts to compensate for “weaknesses” are under the full control of the entity growing the model. This puts a massive amount of leverage into their hands; they can color, censor or emphasize any political, social, cultural or even religious agenda they wish! The only choice we have is to accept the models as they are delivered, or to try to polish these turds so that they are a little better for some narrow use-cases. But at the core, it is still a turd.
And then there’s the economic aspect. Let’s keep in mind that there are massive investments in AI companies (hundreds of billions of USD announced), all of which are expected to turn a profit at some point.
We know how expensive it is to train a model, and how error-prone its inferred output is. Even if some of this can be compensated for by spending massively more on energy (e.g. by “agentifying” products or adding manual rules to catch the worst output), these expenses WILL ultimately be put on the end-user. This is where the Return on Investment is extracted.
How this happens is not a secret. The problem with this picture is that we know the initial investments have been insane, and are scheduled to increase. We know that enormous amounts of the costs of these models have been externalized: the excessive water and fossil fuel use needed to power these systems, the societal damage that happens when people are replaced by LLMs, the opportunity cost IT students pay when they realize they won’t find any work in the field they studied for, the lost business for artists, and the time wasted compensating for and second-guessing output that may contain hallucinations.
We also know who is going to pay in the end – the users and businesses who decide to go “all-in”. At some point, these people will have to ask themselves:
How much am I – or my customers – willing to pay for this slop?
– Random Hapless Rube
AI proponents also tend to use cheap rhetoric to convince others to buy into their message. Why is that necessary? Pushing panic-like FOMO messaging onto unsuspecting techno-optimists is cruel and unnecessary. There’s no need for manipulative language like “Embrace it or get out”. People with good intentions don’t have to resort to hostile language like this!
The AI hype is clearly cruel, irrational and ignorant of the real consequences it creates, and therefore needs to be shut down or, at minimum, put on pause.
This particular lemon is NOT worth the squeeze.
If we continue to encourage this insanity, we’re complicit in the waste of resources, attention, life and humanity. THIS IS NOT OKAY.