Searching We.Love.Privacy.Club

Twts matching #practical

Microsoft Forms Superintelligence Team Under AI Chief Suleyman ‘To Serve Humanity’
Microsoft is launching a new MAI Superintelligence Team under Mustafa Suleyman to build practical, controllable AI aimed at digital companions, medical diagnostics, and renewable-energy modeling. “We are doing this to solve real concrete problems and do it in such a way that it remains grounded and controllable,” Sule … ⌘ Read more

⤋ Read More
In-reply-to » For the innocent bystanders (because I know that I won’t change @bender’s opinion):

@movq@www.uninformativ.de Gemini liked your opinion very much. Here is how it countered:

1. The User Perspective (Untrustworthiness)

The criticism of AI as untrustworthy is a problem of misapplication, not capability.

  • AI as a Force Multiplier: AI should be treated as a high-speed drafting and brainstorming tool, not an authority. For experts, it offers an immense speed gain, shifting the work from slow manual creation to fast critical editing and verification.
  • The Rise of AI Literacy: Users must develop a new skill—AI literacy—to critically evaluate and verify AI’s probabilistic output. This skill, along with improving citation features in AI tools, mitigates the “gaslighting” effect.
2. The Moral/Political Perspective (Skill Erosion)

The fear of skill loss is based on a misunderstanding of how technology changes the nature of work; it’s skill evolution, not erosion.

  • Shifting Focus to High-Level Skills: Just as the calculator shifted focus from manual math to complex problem-solving, AI shifts the focus from writing boilerplate code to architectural design and prompt engineering. It handles repetitive tasks, freeing humans for creative and complex challenges.
  • Accessibility and Empowerment: AI serves as a powerful democratizing tool, offering personalized tutoring and automation to people who lack deep expertise. While dependency is a risk, this accessibility empowers a wider segment of the population previously limited by skill barriers.
3. The Technical and Legal Perspective (Scraping and Copyright)

The legal and technical flaws are issues of governance and ethical practice, not reasons to reject the core technology.

  • Need for Better Bot Governance: Destructive scraping is a failure of ethical web behavior and can be solved with better bot identification, rate limits, and protocols (like enhanced robots.txt). The solution is to demand digital citizenship from AI companies, not to stop AI development.

⤋ Read More

@zvava@twtxt.net My client trusts the first url field it finds. If there is none, it uses the URL that I’m using for fetching the feed.

No validation, no logging.

In practice, I’ve not seen issues with people messing with this field. (What I do see, of course, is broken threads when people do legitimate edits that change the hash.)

I don’t see how anyone could impersonate anybody else this way. 🤔 Sure, you could use my URL in your url field, but then what? You will still show up as zvava in my client or, if you also change your nick field, as movq (zvava).
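
Roughly, that resolution logic amounts to something like this (a minimal sketch in Python, not the actual client code; it only assumes the usual “# url = …” metadata comment syntax):

```python
def feed_url(feed_text: str, fetch_url: str) -> str:
    """Pick the first '# url = ...' metadata value; fall back to the URL used for fetching."""
    for line in feed_text.splitlines():
        if not line.startswith("#"):
            continue
        body = line.lstrip("#").strip()
        key, sep, value = body.partition("=")
        if sep and key.strip().lower() == "url" and value.strip():
            return value.strip()
    return fetch_url
```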

⤋ Read More
In-reply-to » @lyse that's an amazing way to teach, and one many old school (I remember my father telling me "schools need to teach both theoretical and practical skills!") people will agree with. The fact that graduates need to learn on the job after they graduate exemplifies the importance of hands on.

@bender@twtxt.net Absolutely. My computer science teacher was really great and in a lot of aspects very similar, especially in combining the theoretical and practical parts. He’s also the main reason I ended up where I am today. I’m very grateful to him. Mr. Burger, however, takes this to a whole new level.

⤋ Read More
In-reply-to » Woooooaaaahh, that's bloody amazing! I wish I'd had a teacher like that.

@lyse@lyse.isobeef.org that’s an amazing way to teach, and one many old-school (I remember my father telling me “schools need to teach both theoretical and practical skills!”) people will agree with. The fact that graduates still need to learn on the job exemplifies the importance of hands-on learning.

⤋ Read More

One benefit of bluesky is that your username is also a website, and not a clunky URL with slashes and such. I wish twtxt adopted that. I have advocated for WebFinger for twtxt to let us do something similar with usernames. Nostr has something like it.

By default, the bsky.social URLs all redirect to their feeds, like hmpxvt.bsky.social. Many custom URLs redirect to some kind of linktree or just their feed (cwebonline.com or la.bonne.petite.sour.is), or, if you are a major outlet, straight to your web presence, like https://theonion.com or https://netflix.com.

It’s just good SEO practice.

Do all nostr addresses take you to the person if typed into a browser? That is the secret sauce.
No having to go to some random page first, no accounts, no apps to install; just direct to the person.
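
A rough sketch of what the WebFinger idea could look like for twtxt (WebFinger itself is RFC 7033; the rel value pointing at a feed is purely made up here, since nothing like it is standardized):

```python
import json
import urllib.parse
import urllib.request

FEED_REL = "https://example.org/rel/twtxt-feed"  # hypothetical; no such rel exists yet

def resolve_feed(handle: str) -> str | None:
    """Turn 'nick@example.com' into a feed URL via the host's WebFinger endpoint."""
    user, _, host = handle.partition("@")
    query = urllib.parse.urlencode({"resource": f"acct:{user}@{host}"})
    url = f"https://{host}/.well-known/webfinger?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        jrd = json.load(resp)
    for link in jrd.get("links", []):
        if link.get("rel") == FEED_REL and "href" in link:
            return link["href"]
    return None
```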

⤋ Read More

More thoughts about changes to twtxt (as if we haven’t had enough thoughts):

  1. There are lots of great ideas here! Is there a benefit to putting them all into one document? Seems to me this could more easily be a bunch of separate efforts that can progress at their own pace:

1a. Better and longer hashes.

1b. New possibly-controversial ideas like edit: and delete: and location-based references as an alternative to hashes.

1c. Best practices, e.g. Content-Type: text/plain; charset=utf-8 (a serving sketch follows this list).

1d. Stuff already described at dev.twtxt.net that doesn’t need any changes.

  2. We won’t know what will and won’t work until we try them. So I’m inclined to think of this as a bunch of draft ideas. Maybe later when we’ve seen it play out it could make sense to define a group of recommended twtxt extensions and give them a name.

  3. Another reason for 1 (above) is: I like the current situation where all you need to get started is these two short and simple documents:
    https://twtxt.readthedocs.io/en/latest/user/twtxtfile.html
    https://twtxt.readthedocs.io/en/latest/user/discoverability.html
    and everything else is an extension for anyone interested. (Deprecating non-UTC times seems reasonable to me, though.) Having a big long “twtxt v2” document seems less inviting to people looking for something simple. (@prologic@twtxt.net you mentioned an anonymous comment “you’ve ruined twtxt” and while I don’t completely agree with that commenter’s sentiment, I would feel like twtxt had lost something if it moved away from having a super-simple core.)

  4. All that being said, these are just my opinions, and I’m not doing the work of writing software or drafting proposals. Maybe I will at some point, but until then, if you’re actually implementing things, you’re in charge of what you decide to make, and I’m grateful for the work.
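
As a footnote to 1c above: even the Python standard library can serve a feed with the right header. A throwaway sketch (the port and the twtxt.txt filename are just illustrative):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class TwtxtHandler(SimpleHTTPRequestHandler):
    def guess_type(self, path):
        # Serve the feed as UTF-8 plain text instead of the default guess.
        if str(path).endswith("twtxt.txt"):
            return "text/plain; charset=utf-8"
        return super().guess_type(path)

if __name__ == "__main__":
    HTTPServer(("", 8000), TwtxtHandler).serve_forever()
```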

⤋ Read More

@prologic@twtxt.net Thanks for writing that up!

I hope it can remain a living document (or sequence of draft revisions) for a good long time while we figure out how this stuff works in practice.

I am not sure how I feel about all this being done at once, vs. letting conventions arise.

For example, even today I could reply to twt abc1234 with “(#abc1234) Edit: …” and I think all you humans would understand it as an edit to (#abc1234). Maybe eventually it would become a common enough convention that clients would start to support it explicitly.
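
A client could recognize that convention with very little code. A sketch (the exact “Edit:” wording is just whatever we would settle on):

```python
import re

# "(#<hash>) Edit: <new text>" -- the informal convention described above.
EDIT_RE = re.compile(r"^\(#(\w+)\)\s+Edit:\s*(.*)$", re.IGNORECASE)

def parse_edit(twt_text: str):
    """Return (replied-to hash, replacement text) if the twt looks like an edit, else None."""
    m = EDIT_RE.match(twt_text)
    return (m.group(1), m.group(2)) if m else None

# parse_edit("(#abc1234) Edit: fixed a typo") -> ("abc1234", "fixed a typo")
```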

Similarly we could just start using 11-digit hashes. We should iron out whether it’s sha256 or whatever, but there’s no need to get all the other stuff right at the same time.
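
Purely to illustrate the shape of it (the algorithm and the exact pre-image are precisely what still needs ironing out, so none of this is settled): an 11-character hash assuming sha256 over the feed URL, timestamp and content, with a base64 alphabet:

```python
import base64
import hashlib

def short_hash(url: str, timestamp: str, content: str, length: int = 11) -> str:
    """Illustrative only: sha256 over url, timestamp and content joined by newlines, base64, truncated."""
    digest = hashlib.sha256(f"{url}\n{timestamp}\n{content}".encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")[:length]
```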

I have similar thoughts about how some users could try out location-based replies in a backward-compatible way (append the replyto: stuff after the legacy (#hash) style).

However I recognize that I’m not the one implementing this stuff, and it’s less work to just have everything determined up front.

Misc comments (I haven’t read the whole thing):

  • Did you mean to make hashes hexadecimal? You lose 11 bits that way compared to base32. I’d suggest gaining 11 bits with base64 instead. (See the quick arithmetic after this list.)

  • “Clients MUST preserve the original hash” — do you mean they MUST preserve the original twt?

  • Thanks for phrasing the bit about deletions so neutrally.

  • I don’t like the MUST in “Clients MUST follow the chain of reply-to references…”. If someone writes a client as a 40-line shell script that requires the user to piece together the threading themselves, IMO we shouldn’t declare the client non-conforming just because they didn’t get to all the bells and whistles.

  • Similarly I don’t like the MUST for user agents. For one thing, you might want to fetch a feed without revealing your identity. Also, it raises the bar for a minimal implementation (I’m thinking again of the 40-line shell script).

  • For “who follows” lists: why must the long, random tokens be only valid for a limited time? Do you have a scenario in mind where they could leak?

  • Why can’t feeds be served over HTTP/1.0? Again, thinking about simple software. I recently tried implementing HTTP/1.1 and it wasn’t too bad, but 1.0 would have been slightly simpler.

  • Why get into the nitty-gritty about caching headers? This seems like generic advice for HTTP servers and clients.

  • I’m a little sad about other protocols being not recommended.

  • I don’t know how I feel about including markdown. I don’t mind too much that yarn users emit twts full of markdown, but I’m more of a plain text kind of person. Also it adds to the length. I wonder if putting it in a separate document would make more sense; that would also help with the length.
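
(Coming back to the hash-alphabet bullet above, here’s the quick arithmetic behind those 11-bit differences, for an 11-character hash:)

```python
from math import log2

# Bits of entropy in an 11-character hash, per alphabet size.
for name, size in [("hex", 16), ("base32", 32), ("base64", 64)]:
    print(f"{name}: {11 * log2(size):.0f} bits")  # hex: 44, base32: 55, base64: 66
```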

⤋ Read More
In-reply-to » @movq Is there a good way to get jenny to do a one-off fetch of a feed, for when you want to fill in missing parts of a thread? I just added @slashdot to my private follow file just because @prologic keeps responding to the feed :-P and I want to know what he's commenting on even though I don't want to see every new slashdot twt.

@prologic@twtxt.net I believe you when you say registries as designed today do not crawl. But when I first read the spec, it conjured in my mind a search engine. Now I don’t know how things work out in practice, but just based on reading, I don’t see why it can’t be an API for a crawling search engine. (In fact I don’t see anything in the spec indicating registry servers shouldn’t crawl.)

(I also noticed that https://twtxt.readthedocs.io/en/latest/user/registry.html recommends “The registries should sync each others user list by using the users endpoint”. If I understood that right, registering with one should be enough to appear on others, even if they don’t crawl.)
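
(For the curious, querying such a registry is about as simple as HTTP gets. A sketch with a placeholder registry URL; /api/plain/users is the path used by common registry implementations, but check the spec for the exact endpoints:)

```python
import urllib.request

# Placeholder registry URL; substitute a real registry server.
REGISTRY_USERS = "https://registry.example.org/api/plain/users"

with urllib.request.urlopen(REGISTRY_USERS, timeout=10) as resp:
    for line in resp.read().decode("utf-8").splitlines():
        print(line)  # one tab-separated user entry per line
```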

Does yarnd provide an API for finding twts? Is it similar?

⤋ Read More

I have been doing interview prep for next year. The problems have been great practice and make it fun compared to the dry “solve this” exercises you get on hacker rank or code scene.

That and so many great write-ups to explain the problems.

⤋ Read More
In-reply-to » Google Says It'll Scrape Everything You Post Online for AI

@marado@twtxt.net It can’t possibly be defensible, which to me always signals an attempt at a power grab. They never explicitly said “we will use anything we scrape from the web to train our AI” before; that’s new. There is growing pushback against that practice, with numerous cases winding through the legal system right now. Some day those cases will be heard and decided on by judges. So they’re trying to get out ahead of that, in my opinion, and cement their claims to this data before there’s a precedent set.

⤋ Read More

@prologic@twtxt.net @carsten@yarn.zn80.net

There is (I assure you there will be, don’t know what it is yet…) a price to be paid for this convenience.

Exactly, prologic, and that’s why I’m negative about these sorts of things. I’m almost 50, I’ve been around this tech hype cycle a bunch of times. Look at what happened with Facebook. When it first appeared, people loved it and signed up and shared incredibly detailed information about themselves on it. Facebook made it very easy and convenient for almost anyone, even people who had limited understanding of the internet or computers, to get connected with their friends and family. And now here we are today, where 80% of people in surveys say they don’t trust Facebook with their private data, where they think Facebook commits crimes and should be broken up or at least taken to task in a big way, etc etc etc. Facebook has been fined many billions of dollars and faces endless federal lawsuits in the US alone for its horrible practices. Yet Facebook is still exploitative. It’s a societal cancer.

All signs suggest this generative AI stuff is going to go exactly the same way. That is the inevitable course of these things in the present climate, because the tech sector is largely run by sociopathic billionaires, because the tech sector is not regulated in any meaningful way, and because the tech press / tech media has no scruples. Some new tech thing generates hype, people get excited and sign up to use it, then when the people who own the tech think they have a critical mass of users, they clamp everything down and start doing whatever it is they wanted to do from the start. They’ll break laws, steal your shit, cause mass suffering, who knows what. They won’t stop until they are stopped by mass protest from us, and the government action that follows.

That’s a huge price to pay for a little bit of convenience, a price we pay and continue to pay for decades. We all know better by now. Why do we keep doing this to ourselves? It doesn’t make sense. It’s insane.

⤋ Read More