Microsoft Forms Superintelligence Team Under AI Chief Suleyman “To Serve Humanity”
Microsoft is launching a new MAI Superintelligence Team under Mustafa Suleyman to build practical, controllable AI aimed at digital companions, medical diagnostics, and renewable-energy modeling. “We are doing this to solve real concrete problems and do it in such a way that it remains grounded and controllable,” Sule … ⌘ Read more
@movq@www.uninformativ.de Gemini liked your opinion very much. Here is how it countered:
1. The User Perspective (Untrustworthiness): The criticism of AI as untrustworthy is a problem of misapplication, not capability.
- AI as a Force Multiplier: AI should be treated as a high-speed drafting and brainstorming tool, not an authority. For experts, it offers an immense speed gain, shifting the work from slow manual creation to fast critical editing and verification.
- The Rise of AI Literacy: Users must develop a new skill, AI literacy, to critically evaluate and verify AI’s probabilistic output. This skill, along with improving citation features in AI tools, mitigates the “gaslighting” effect.
The fear of skill loss is based on a misunderstanding of how technology changes the nature of work; it’s skill evolution, not erosion.
- Shifting Focus to High-Level Skills: Just as the calculator shifted focus from manual math to complex problem-solving, AI shifts the focus from writing boilerplate code to architectural design and prompt engineering. It handles repetitive tasks, freeing humans for creative and complex challenges.
- Accessibility and Empowerment: AI serves as a powerful democratizing tool, offering personalized tutoring and automation to people who lack deep expertise. While dependency is a risk, this accessibility empowers a wider segment of the population previously limited by skill barriers.
The legal and technical flaws are issues of governance and ethical practice, not reasons to reject the core technology.
- Need for Better Bot Governance: Destructive scraping is a failure of ethical web behavior and can be solved with better bot identification, rate limits, and protocols (like enhanced robots.txt). The solution is to demand digital citizenship from AI companies, not to stop AI development.
@zvava@twtxt.net My client trusts the first url field it finds. If there is none, it uses the URL that I’m using for fetching the feed.
No validation, no logging.
In practice, I’ve not seen issues with people messing with this field. (What I do see, of course, is broken threads when people do legitimate edits that change the hash.)
I don’t see how anyone could impersonate anybody else this way. 🤔 Sure, you could use my URL in your url field, but then what? You will still show up as zvava in my client or, if you also change your nick field, as movq (zvava).
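Conceptually, the logic boils down to something like this (a simplified sketch, not the actual client code):

```python
# Simplified sketch (not the actual client code): pick the feed URL used
# for hashing, preferring the first "# url = ..." metadata field.
def feed_url(fetch_url, feed_text):
    for line in feed_text.splitlines():
        if not line.startswith("#"):
            continue
        key, sep, value = line.lstrip("# ").partition("=")
        if sep and key.strip() == "url":
            return value.strip()   # first url field wins; no validation, no logging
    return fetch_url               # no url field: fall back to the fetch URL
```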
@bender@twtxt.net Absolutely. My computer science teacher was really great and in a lot of aspects very similar. Especially combining the theoretical and practical parts. He’s also the main reason I ended up where I am today. I’m very grateful to him. Mr. Burger, however, takes this to a whole new level.
@lyse@lyse.isobeef.org That’s an amazing way to teach, and one many old-school people (I remember my father telling me “schools need to teach both theoretical and practical skills!”) will agree with. The fact that graduates need to learn on the job after they graduate exemplifies the importance of hands-on experience.
@movq@www.uninformativ.de Yeah, we’ve seen how this plays out in practice 🤣 @dce@hashnix.club My advice: do what @movq@www.uninformativ.de has hinted at and don’t change the first # url = field in your feed. I’m not sure if you already have, but the first url field is kind of important in your feed, as it is used as the “Hashing URI” for threading.
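For reference, that Hashing URI is combined with the twt’s timestamp and content to produce the hash, roughly like this (a from-memory sketch; the Twt Hash extension at dev.twtxt.net is the authoritative definition):

```python
# From-memory sketch of twt hash derivation; see the Twt Hash extension
# at dev.twtxt.net for the authoritative algorithm and exact normalization.
import base64
import hashlib

def twt_hash(hashing_uri, created, content):
    payload = "\n".join([hashing_uri, created, content]).encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    b32 = base64.b32encode(digest).decode("ascii").lower().rstrip("=")
    return b32[-7:]   # clients currently display the last 7 characters
```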
@andros@twtxt.andros.dev You know, I’d really love to see how/if location-based addressing works in practice. I might fork jenny to judy and run both things in parallel for a while … 🤔
One benefit of Bluesky is that your username is also a website, and not a clunky URL with slashes and such. I wish twtxt adopted that. I have advocated for WebFinger for twtxt to let us do something similar with usernames. Nostr has something like it.
By default, bsky.social URLs all redirect to the user’s feed, like: hmpxvt.bsky.social
Many custom URLs redirect to some kind of linktree or just their feed (cwebonline.com or la.bonne.petite.sour.is), or, if you are a major outlet, just to your web presence, like https://theonion.com or https://netflix.com
It’s just good SEO practice.
Do all nostr addresses take you to the person if typed into a browser? That is the secret sauce.
No having to go to some random page first. No accounts. No apps to install. Just direct to the person.
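To illustrate what I mean by WebFinger (RFC 7033): a client could turn a plain nick@domain into a feed URL with one well-known lookup. A rough sketch; the rel value for twtxt here is purely made up:

```python
# Hypothetical: resolve nick@domain to a twtxt feed via WebFinger (RFC 7033).
# The rel value used for twtxt below is an invention for illustration only.
import json
from urllib.parse import quote
from urllib.request import urlopen

def resolve_twtxt(handle):
    _user, _, domain = handle.partition("@")
    url = f"https://{domain}/.well-known/webfinger?resource=acct:{quote(handle)}"
    with urlopen(url) as resp:
        jrd = json.load(resp)
    for link in jrd.get("links", []):
        if link.get("rel") == "https://twtxt.net/rel/feed":   # hypothetical rel
            return link.get("href")
    return None
```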
More thoughts about changes to twtxt (as if we haven’t had enough thoughts):
- There are lots of great ideas here! Is there a benefit to putting them all into one document? Seems to me this could more easily be a bunch of separate efforts that can progress at their own pace:
1a. Better and longer hashes.
1b. New possibly-controversial ideas like edit: and delete: and location-based references as an alternative to hashes.
1c. Best practices, e.g. Content-Type: text/plain; charset=utf-8 (see the sketch after this list).
1d. Stuff already described at dev.twtxt.net that doesn’t need any changes.
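(For 1c, checking a feed against that best practice is trivial; a tiny illustration, with the feed URL as a placeholder:)

```python
# Illustrative check of the Content-Type best practice from 1c above.
# The feed URL is a placeholder.
from urllib.request import urlopen

with urlopen("https://example.com/twtxt.txt") as resp:
    ctype = resp.headers.get("Content-Type", "")
    if ctype.lower() != "text/plain; charset=utf-8":
        print(f"unexpected Content-Type: {ctype!r}")
```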
We won’t know what will and won’t work until we try them. So I’m inclined to think of this as a bunch of draft ideas. Maybe later, when we’ve seen it play out, it could make sense to define a group of recommended twtxt extensions and give them a name.
Another reason for 1 (above) is: I like the current situation where all you need to get started is these two short and simple documents:
https://twtxt.readthedocs.io/en/latest/user/twtxtfile.html
https://twtxt.readthedocs.io/en/latest/user/discoverability.html
and everything else is an extension for anyone interested. (Deprecating non-UTC times seems reasonable to me, though.) Having a big long “twtxt v2” document seems less inviting to people looking for something simple. (@prologic@twtxt.net you mentioned an anonymous comment “you’ve ruined twtxt” and while I don’t completely agree with that commenter’s sentiment, I would feel like twtxt had lost something if it moved away from having a super-simple core.)
All that being said, these are just my opinions, and I’m not doing the work of writing software or drafting proposals. Maybe I will at some point, but until then, if you’re actually implementing things, you’re in charge of what you decide to make, and I’m grateful for the work.
@prologic@twtxt.net Thanks for writing that up!
I hope it can remain a living document (or sequence of draft revisions) for a good long time while we figure out how this stuff works in practice.
I am not sure how I feel about all this being done at once, vs. letting conventions arise.
For example, even today I could reply to twt abc1234 with “(#abc1234) Edit: …” and I think all you humans would understand it as an edit to (#abc1234). Maybe eventually it would become a common enough convention that clients would start to support it explicitly.
Similarly, we could just start using 11-digit hashes. We should iron out whether it’s sha256 or whatever, but there’s no need to get all the other stuff right at the same time.
I have similar thoughts about how some users could try out location-based replies in a backward-compatible way (append the replyto: stuff after the legacy (#hash) style).
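To make that concrete, a feed could carry lines like these (timestamps and hashes are invented, and the replyto: syntax is only a placeholder for whatever the draft settles on):

```
2025-01-02T12:00:00Z	(#abc1234) Edit: fixed a typo in the twt this refers to
2025-01-02T12:05:00Z	(#abc1234) (replyto:https://example.com/twtxt.txt 2025-01-01T09:00:00Z) a reply carrying both the legacy hash and a location-based reference
```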
However, I recognize that I’m not the one implementing this stuff, and it’s less work to just have everything determined up front.
Misc comments (I haven’t read the whole thing):
Did you mean to make hashes hexadecimal? You lose 11 bits that way compared to base32. I’d suggest gaining 11 bits with base64 instead.
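Back of the envelope (11 characters in each alphabet):

```python
# Bits carried by an 11-character hash in each encoding alphabet.
for name, bits_per_char in [("hex", 4), ("base32", 5), ("base64", 6)]:
    print(f"{name}: {11 * bits_per_char} bits")
# -> hex: 44 bits, base32: 55 bits, base64: 66 bits
```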
“Clients MUST preserve the original hash”: do you mean they MUST preserve the original twt?
Thanks for phrasing the bit about deletions so neutrally.
I don’t like the MUST in “Clients MUST follow the chain of reply-to references…”. If someone writes a client as a 40-line shell script that requires the user to piece together the threading themselves, IMO we shouldn’t declare the client non-conforming just because they didn’t get to all the bells and whistles.
Similarly, I don’t like the MUST for user agents. For one thing, you might want to fetch a feed without revealing your identity. Also, it raises the bar for a minimal implementation (I’m thinking again of the 40-line shell script).
For “who follows” lists: why must the long, random tokens only be valid for a limited time? Do you have a scenario in mind where they could leak?
Why can’t feeds be served over HTTP/1.0? Again, thinking about simple software. I recently tried implementing HTTP/1.1 and it wasn’t too bad, but 1.0 would have been slightly simpler.
Why get into the nitty-gritty about caching headers? This seems like generic advice for HTTP servers and clients.
I’m a little sad about other protocols not being recommended.
I don’t know how I feel about including markdown. I don’t mind too much that yarn users emit twts full of markdown, but I’m more of a plain-text kind of person. Also, it adds to the length. I wonder if putting it in a separate document would make more sense; that would also help with the length.
@prologic@twtxt.net I believe you when you say registries as designed today do not crawl. But when I first read the spec, it conjured in my mind a search engine. Now I don’t know how things work out in practice, but just based on reading, I don’t see why it can’t be an API for a crawling search engine. (In fact, I don’t see anything in the spec indicating registry servers shouldn’t crawl.)
(I also noticed that https://twtxt.readthedocs.io/en/latest/user/registry.html recommends “The registries should sync each others user list by using the users endpoint”. If I understood that right, registering with one should be enough to appear on others, even if they don’t crawl.)
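(If I’m reading that right, a registry-to-registry sync could be as simple as the sketch below; the endpoint paths, response format, and registry URLs reflect my reading of that page, not anything I’ve tested:)

```python
# Rough sketch of registry-to-registry sync as I read the registry spec;
# endpoints, response format, and registry URLs are assumptions, untested.
from urllib.parse import urlencode
from urllib.request import Request, urlopen

SOURCE = "https://registry.example.org"        # placeholder registries
TARGET = "https://other-registry.example.net"

with urlopen(f"{SOURCE}/api/plain/users") as resp:
    lines = resp.read().decode("utf-8").splitlines()

for line in lines:
    fields = line.split("\t")                  # assuming nick<TAB>url<TAB>...
    if len(fields) < 2:
        continue
    nick, url = fields[0], fields[1]
    query = urlencode({"url": url, "nickname": nick})
    urlopen(Request(f"{TARGET}/api/plain/users?{query}", method="POST"))
```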
Does yarnd provide an API for finding twts? Is it similar?
I have been doing interview prep for next year. The problems have been great practice and make it fun compared to the dry “solve this” exercises you get on HackerRank or CodeScene.
That, and there are so many great write-ups explaining the problems.
@marado@twtxt.net It can’t possibly be defensible, which to me always signals an attempt at a power grab. They never explicitly said “we will use anything we scrape from the web to train our AI” before; that’s new. There is growing pushback against that practice, with numerous cases winding through the legal system right now. Some day those cases will be heard and decided on by judges. So they’re trying to get out ahead of that, in my opinion, and cement their claims to this data before there’s a precedent set.
@prologic@twtxt.net @carsten@yarn.zn80.net
There is (I assure you there will be, don’t know what it is yet…) a price to be paid for this convenience.
Exactly, prologic, and that’s why I’m negative about these sorts of things. I’m almost 50; I’ve been around this tech hype cycle a bunch of times. Look at what happened with Facebook. When it first appeared, people loved it and signed up and shared incredibly detailed information about themselves on it. Facebook made it very easy and convenient for almost anyone, even people who had limited understanding of the internet or computers, to get connected with their friends and family. And now here we are today, where 80% of people in surveys say they don’t trust Facebook with their private data, where they think Facebook commits crimes and should be broken up or at least taken to task in a big way, etc etc etc. Facebook has been fined many billions of dollars and faces endless federal lawsuits in the US alone for its horrible practices. Yet Facebook is still exploitative. It’s a societal cancer.
All signs suggest this generative AI stuff is going to go exactly the same way. That is the inevitable course of these things in the present climate, because the tech sector is largely run by sociopathic billionaires, because the tech sector is not regulated in any meaningful way, and because the tech press / tech media has no scruples. Some new tech thing generates hype, people get excited and sign up to use it, then when the people who own the tech think they have a critical mass of users, they clamp everything down and start doing whatever it is they wanted to do from the start. They’ll break laws, steal your shit, cause mass suffering, who knows what. They won’t stop until they are stopped by mass protest from us, and the government action that follows.
That’s a huge price to pay for a little bit of convenience, a price we pay and continue to pay for decades. We all know better by now. Why do we keep doing this to ourselves? It doesn’t make sense. It’s insane.
This is like my 5th day at it. I suck at words and spelling, so this is good practice.
Bookmarking this to read over a few more times. https://dave.cheney.net/practical-go/presentations/qcon-china.html #practical #GO
Thanks to a pointer from Richard Miller, got screen rotation working on my Pi 4s. Makes this absurdly wide display more practical.