Searching We.Love.Privacy.Club

Twts matching #AI

‘Stratospheric’ AI Spending By Four Wealthy Companies Reaches $360B Just For Data Centers
“Maybe you’ve heard that artificial intelligence is a bubble poised to burst,” writes a Washington Post technology columnist. “Maybe you have heard that it isn’t. (No one really knows either way, but that won’t stop the bros from jabbering about it constantly.)”

“But I can confidently tell you that the m … ⌘ Read more


Ryzen AI Software 1.6.1 Advertises Linux Support
Ryzen AI Software, AMD’s collection of tools and libraries for AI inferencing on AMD Ryzen AI class PCs, has Linux support with its newest point release. Though this “early access” Linux support is restricted to registered AMD customers… ⌘ Read more


‘Vibe Coding’ Named Word of the Year By Collins Dictionary
Collins Dictionary has named “vibe coding” its 2025 word of the year – a term coined by Andrej Karpathy for when a user makes an app or website by describing it to AI rather than writing programming code manually. The term, which is confusingly made up of two words, was “one of 10 words on a shortlist to reflect the mood, language and preoccupations of 2025,” repo … ⌘ Read more


Corporate Profits Surge as Companies Cut Nearly 1 Million Jobs
U.S. corporate profits have risen to record levels this year as companies eliminated nearly 1 million jobs. Chen Zhao of Alpine Macro calls the disconnect a “jobless boom.” Companies typically cut workers when profits decline. Amazon laid off 30,000 employees despite strong earnings. Zhao attributes the pattern to AI adoption boosting productivity across … ⌘ Read more

In-reply-to » … and now I just read @bender’s other post that said the Gemini text was a shortened version, so I might have criticized things that weren’t true for the full version. Okay, sorry, I’m out. (And I won’t play that game, either. Don’t send me another AI output, possibly tweaked to address my criticism. That is beside the point and not worth my time.)

@bender@twtxt.net All good. ✌️ It’s just that I’ve been through several iterations of this (on other platforms): AI output back and forth, pointing out what’s wrong, but in the end people were just trolling (not saying that’s what you had in mind), because apparently that’s “fun”.

In-reply-to » @bender Thanks for this illustration, it completely “misunderstood” everything I wrote and confidently spat out garbage. 👌

… and now I just read @bender@twtxt.net’s other post that said the Gemini text was a shortened version, so I might have criticized things that weren’t true for the full version. Okay, sorry, I’m out. (And I won’t play that game, either. Don’t send me another AI output, possibly tweaked to address my criticism. That is beside the point and not worth my time.)

In-reply-to » @bender Thanks for this illustration, it completely “misunderstood” everything I wrote and confidently spat out garbage. 👌

@prologic@twtxt.net Let’s go through it one by one. Here’s a wall of text that took me over 1.5 hours to write.

The criticism of AI as untrustworthy is a problem of misapplication, not capability.

This section says AI should not be treated as an authority. This is actually just what I said, except the AI phrased/framed it like it was a counter-argument.

The AI also said that users must develop “AI literacy”, again phrasing/framing it like a counter-argument. Well, that is also just what I said. I said you should treat AI output like a random blog and you should verify the sources, yadda yadda. That is “AI literacy”, isn’t it?

My text went one step further, though: I said that when you take this requirement of “AI literacy” into account, you basically end up with a fancy search engine, with extra overhead that costs time. The AI missed/ignored this in its reply.

Okay, so, the AI also said that you should use AI tools just for drafting and brainstorming. Granted, a very rough draft of something will probably be doable. But then you have to diligently verify every little detail of this draft – okay, fine, a draft is a draft, it’s fine if it contains errors. The thing is, though, that you really must do this verification. And I claim that many people will not do it, because AI outputs look sooooo convincing, they don’t feel like a draft that needs editing.

Can you, as an expert, still use an AI draft as a basis/foundation? Yeah, probably. But here’s the kicker: You did not create that draft. You were not involved in the “thought process” behind it. When you, a human being, make a draft, you often think something like: “Okay, I want to draw a picture of a landscape and there’s going to be a little house, but for now, I’ll just put in a rough sketch of the house and add the details later.” You are aware of what you left out. When the AI did the draft, you are not aware of what’s missing – even more so when every AI output already looks like a final product. For me, personally, this makes it much harder and slower to verify such a draft, and I mentioned this in my text.

Skill Erosion vs. Skill Evolution

You, @prologic@twtxt.net, also mentioned this in your car tyre example.

In my text, I gave two analogies: The gym analogy and the Google Translate analogy. Your car tyre example falls in the same category, but Gemini’s calculator example is different (and, again, gaslight-y, see below).

What I meant in my text: A person wants to be a programmer. To me, a programmer is a person who writes code, understands code, maintains code, writes documentation, and so on. In your example, a person who changes a car tyre would be a mechanic. Now, if you use AI to write the code and documentation for you, are you still a programmer? If you have no understanding of said code, are you a programmer? A person who does not know how to change a car tyre, is that still a mechanic?

No, you’re something else. You should not be hired as a programmer or a mechanic.

Yes, that is “skill evolution” – which is pretty much my point! But the AI framed it like a counter-argument. It didn’t understand my text.

(But what if that’s our future? What if all programming will look like that in some years? I claim: It’s not possible. If you don’t know how to program, then you don’t know how to read/understand code written by an AI. You are something else, but you’re not a programmer. It might be valid to be something else – but that wasn’t my point, my point was that you’re not a bloody programmer.)

Gemini’s calculator example is garbage, I think. Crunching numbers and doing mathematics (i.e., “complex problem-solving”) are two different things. Just because you now have a calculator doesn’t mean it’ll free you up to do mathematical proofs or whatever.

What would have worked is this: Let’s say you’re an accountant and you sum up expenses. Without a calculator, this takes a lot of time and is error-prone. But when you have one, you can work faster. But once again, there’s a little gaslight-y detail: A calculator is correct. Yes, it could have “bugs” (hello Intel FDIV), but its design actually properly calculates numbers. AI, on the other hand, does not understand a thing (our current AI, that is), it’s just a statistical model. So, this modified example (“accountant with a calculator”) would actually have to be phrased like this: Suppose there’s an accountant and you give her a magic box that spits out the correct result, what, I don’t know, 70-90% of the time. The accountant couldn’t rely on this box now, could she? She’d either have to double-check everything or accept possibly wrong results. And that is how I feel when I work with AI tools.
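To make that concrete, here’s a tiny, purely illustrative Python sketch. The 80% accuracy and the error model are numbers I made up just for this example; the point is only that a box which is *sometimes* silently wrong forces you to double-check everything anyway:

```python
import random

def true_sum(expenses):
    """A calculator's answer: deterministic and simply correct."""
    return sum(expenses)

def magic_box(expenses, accuracy=0.8):
    """A box that returns the right total only some of the time.
    The 80% accuracy and the error model are made up for illustration."""
    if random.random() < accuracy:
        return sum(expenses)
    # Otherwise: a plausible-looking but silently wrong total.
    return sum(expenses) + random.choice([-1, 1]) * random.randint(1, 100)

random.seed(42)
invoices = [[random.randint(1, 500) for _ in range(20)] for _ in range(1000)]

wrong = sum(1 for inv in invoices if magic_box(inv) != true_sum(inv))
print(f"{wrong} of {len(invoices)} totals were silently wrong")
# The accountant can't tell *which* totals are wrong, so she still has to
# re-check all 1000 of them. The promised speed-up evaporates.
```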

Gemini has no idea that its calculator example doesn’t make sense. It just spits out some generic “argument” that it picked up on some website.

3. The Technical and Legal Perspective (Scraping and Copyright)

The AI makes two points here. The first one, I might actually agree with (“bad bot behavior is not the fault of AI itself”).

The second point is, once again, gaslighting, because it is phrased/framed like a counter-argument. It implies that I said something which I didn’t. Like the AI, I said that you would have to adjust the copyright law! At the same time, the AI answer didn’t even question whether it’s okay to break the current law or not. It just said “lol yeah, change the laws”. (I wonder in what way the laws would have to be changed in the AI’s “opinion”, because some of these changes could kill some business opportunities – or the laws would have to have special AI clauses that only benefit the AI techbros. But I digress, that wasn’t part of Gemini’s answer.)

tl;dr

Except for one point, I don’t accept any of Gemini’s “criticism”. It didn’t pick up on lots of details, ignored arguments, and I can just instinctively tell that this thing does not understand anything it wrote (which is correct, it’s just a statistical model).

And it framed everything like a counter-argument, while actually repeating what I said. That’s gaslighting: When Alice says “the sky is blue” and Bob replies with “why do you say the sky is purple?!”

But it sure looks convincing, doesn’t it?

Never again

This took so much of my time. I won’t do this again. 😂

In-reply-to » You do raise very good points though, but I don't think any of this is particularly new because there are many other examples of technology and evolution of change over time where people have forgotten certain skills like for example, changing a car tyre

@prologic@twtxt.net when I first “fed” the text to Gemini, I asked for a three-paragraph summary. It provided it. Then I asked it to “elaborate on three areas: user experience, moral/political impact, and technical/legal concerns”. The reply to that is too long for a twtxt.

I then asked it to counter the OP’s opinions, as in “how would you counter the author’s opinion?”. The reply was very long, but started like this:

“That’s an excellent question, as the post lays out some very strong, well-reasoned criticisms. Countering these points requires acknowledging the valid concerns while presenting a perspective focused on mitigation, responsible integration, and the unique benefits of AI.”

What followed was extensive, so I asked for a summary, which didn’t do justice to the wall of text that preceded it.


Magika 1.0 Goes Stable As Google Rebuilds Its File Detection Tool In Rust
BrianFagioli writes: Google has released Magika 1.0, a stable version of its AI-based file type detection tool, and rebuilt the entire engine in Rust for speed and memory safety. The system now recognizes more than 200 file types, up from about 100, and is better at distinguishing look-alike formats such as JSON vs JSONL, TS … ⌘ Read more


Microsoft Forms Superintelligence Team Under AI Chief Suleyman ‘To Serve Humanity’
Microsoft is launching a new MAI Superintelligence Team under Mustafa Suleyman to build practical, controllable AI aimed at digital companions, medical diagnostics, and renewable-energy modeling. “We are doing this to solve real concrete problems and do it in such a way that it remains grounded and controllable,” Sule … ⌘ Read more


Amazon is Testing an AI Tool That Automatically Translates Books Into Other Languages
An anonymous reader shares a report: Amazon just introduced an AI tool that will automatically translate books into other languages. The appropriately-named Kindle Translate is being advertised as a resource for authors that self publish on the platform.

The company says the tool can translate entire boo … ⌘ Read more


Google Plans Secret AI Military Outpost on Tiny Island Overrun By Crabs
An anonymous reader shares a report: On Wednesday, Reuters reported that Google is planning to build a large AI data center on Christmas Island, a 52-square-mile Australian territory in the Indian Ocean, following a cloud computing deal with Australia’s military. The previously undisclosed project will reportedly position advanced A … ⌘ Read more


Trump AI Czar Says ‘No Federal Bailout For AI’ After OpenAI CFO’s Comments
Venture capitalist David Sacks, who is serving as President Donald Trump’s AI and crypto czar, said Thursday that there will be “no federal bailout for AI.” From a report: “The U.S. has at least 5 major frontier model companies. If one fails, others will take its place,” Sacks wrote in a post on X. Sacks’ comments came after Open … ⌘ Read more


A New White-Collar Gig Economy: Training AI To Take Over
AI labs are paying skilled professionals hundreds of dollars per hour to train their models in specialized fields. Companies like Mercor, Surge AI, Scale AI and Turing recruit bankers, lawyers, engineers and doctors to improve the accuracy of AI systems in professional settings. Mercor advertises roles for medical secretaries, movie directors and private detectives at … ⌘ Read more


OpenAI CFO Says Company Isn’t Seeking Government Backstop, Clarifying Prior Comment
OpenAI CFO Sarah Friar said late Wednesday that the AI startup is not seeking a government backstop for its infrastructure commitments, clarifying previous comments she made on stage during the Wall Street Journal’s Tech Live event. From a report: At the event, Friar said OpenAI is looking to create an ecosyste … ⌘ Read more


Nvidia’s Jensen Huang Says China ‘Will Win’ AI Race With US
Nvidia chief executive Jensen Huang has warned that China will beat the US in the AI race, thanks to lower energy costs and looser regulations. From a report: In the starkest comments yet from the head of the world’s most valuable company, Huang told the FT: “China is going to win the AI race.” Huang’s remarks come after the Trump administration maintained a ban o … ⌘ Read more


Grinn GenioBoard Offers MediaTek Genio 700 SoM, Dual M.2 Expansion, and CRA-Ready Security
Grinn has unveiled the GenioBoard, a compact single-board computer aimed at accelerating development of embedded and AI-enabled systems. It integrates the company’s GenioSOM-510 and GenioSOM-700 modules built on MediaTek’s Genio processor family, combining multiple Arm Cortex-A cores with an integrated GPU and NPU for edge inference applications. Powered by the Medi … ⌘ Read more


Is it ok for politicians to use AI? Survey shows where the public draws the line
New survey evidence from the UK and Japan shows people are open to MPs using AI as a tool, but deeply resistant to handing over democratic decisions to machines. ⌘ Read more


Gemini AI To Transform Google Maps Into a More Conversational Experience
An anonymous reader quotes a report from the Associated Press: Google Maps is heading in a new direction with artificial intelligence sitting in the passenger’s seat. Fueled by Google’s Gemini AI technology, the world’s most popular navigation app will become a more conversational companion as part of a redesign announced Wednesda … ⌘ Read more


New Bipartisan Bill Would Require Companies To Report AI Job Losses
A new bipartisan bill introduced by Senators Mark Warner and Josh Hawley would require companies and federal agencies to report quarterly on AI-related workforce changes, including layoffs, new hires, and retraining efforts. The data from the AI-Related Job Impacts Clarity Act (PDF) would then be compiled by the Department of Labor into a public … ⌘ Read more


More Intel Crescent Island Enablement Prepped For Linux 6.19
Following Intel’s disclosure less than one month ago of Crescent Island as an upcoming Xe3P graphics card with 160GB of vRAM focused on enterprise-level AI inferencing, Intel’s open-source Linux graphics driver engineers have been quick to begin plumbing the Xe kernel graphics driver for this next-generation graphics card… ⌘ Read more

In-reply-to » For the innocent bystanders (because I know that I won’t change @bender’s opinion):

And, one last point I missed:

  • AI is Forcing Legal Modernization: The copyright double standard is a failure of outdated law. AI provides the necessary impetus for legal reform to either create fair compensation frameworks for creators or establish a clear new definition of fair use for data-driven models.

In-reply-to » For the innocent bystanders (because I know that I won’t change @bender’s opinion):

@movq@www.uninformativ.de Gemini liked your opinion very much. Here is how it countered:

1. The User Perspective (Untrustworthiness)

The criticism of AI as untrustworthy is a problem of misapplication, not capability.

  • AI as a Force Multiplier: AI should be treated as a high-speed drafting and brainstorming tool, not an authority. For experts, it offers an immense speed gain, shifting the work from slow manual creation to fast critical editing and verification.
  • The Rise of AI Literacy: Users must develop a new skill—AI literacy—to critically evaluate and verify AI’s probabilistic output. This skill, along with improving citation features in AI tools, mitigates the “gaslighting” effect.

2. The Moral/Political Perspective (Skill Erosion)

The fear of skill loss is based on a misunderstanding of how technology changes the nature of work; it’s skill evolution, not erosion.

  • Shifting Focus to High-Level Skills: Just as the calculator shifted focus from manual math to complex problem-solving, AI shifts the focus from writing boilerplate code to architectural design and prompt engineering. It handles repetitive tasks, freeing humans for creative and complex challenges.
  • Accessibility and Empowerment: AI serves as a powerful democratizing tool, offering personalized tutoring and automation to people who lack deep expertise. While dependency is a risk, this accessibility empowers a wider segment of the population previously limited by skill barriers.

3. The Technical and Legal Perspective (Scraping and Copyright)

The legal and technical flaws are issues of governance and ethical practice, not reasons to reject the core technology.

  • Need for Better Bot Governance: Destructive scraping is a failure of ethical web behavior and can be solved with better bot identification, rate limits, and protocols (like enhanced robots.txt). The solution is to demand digital citizenship from AI companies, not to stop AI development.


GitHub Copilot tutorial: How to build, test, review, and ship code faster (with real prompts)
How GitHub Copilot works today—including mission control—and how to get the most out of it. Here’s what you need to know.

The post [GitHub Copilot tutorial: How to build, test, review, and ship code faster (with real prompts)](https://github.blog/ai-and-ml/github-copilot/a-developers-guide-to-writing-debugging-reviewing-and-shipping-co … ⌘ Read more

In-reply-to » It happened.

@prologic@twtxt.net Nothing, yet. It was sent in written form. There’s probably little point in fighting this; they have made up their minds already (and AI is being rolled out en masse in other departments), but on the other hand, there are – truthfully – very few areas where AI could actually be useful to me.

There are going to be many discussions about this …

This is completely against the “spirit” of this company, btw. We used to say: “It’s the goal that matters. Use whatever tools you think are appropriate.” That’s why I’m allowed to use Linux on my laptop. Maybe they will back down eventually when they realize that trying to push this on people is pointless. Maybe not.
