GTK Adds "Reduced Motion" Accessibility Option To Follow macOS, Windows & Others
In addition to GNOME's Mutter compositor removing its X11 back-end support to focus exclusively on Wayland while keeping around XWayland client support, another notable GNOME change this week was the GTK toolkit adding a "reduced motion" accessibility option… ⌘ Read more
Automattic Inc. Claims It Owns the Word "Automatic"
An anonymous reader shares a report: Automattic, the company that owns WordPress.com, is asking Automatic.CSS, a company that provides a CSS framework for WordPress page builders, to change its name amid public spats between Automattic founder Matt Mullenweg and Automatic.CSS creator Kevin Geary. Automattic has two T's as a nod to Matt.
"As you know, our client owns and … ⌘ Read more
I keep getting this email occasionally:
Your iCloud storage is almost full
Now for various reasons, I don't want my children to be using iCloud to store data, files, photos or any of the sort. They're free to use iMessages and other Apple services like the App Store, etc., but not storage.
So I've set about blocking the iCloud Storage API(s) via AdGuard Home tonight, as well as ensuring that clients on my local network cannot bypass DNS policies and get out in other sneaky ways, because some applications will just use other DNS servers, DoH, or DoT.
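A sketch of what such AdGuard Home custom filtering rules might look like; the host names below are my assumptions and almost certainly incomplete, so treat them as placeholders rather than a vetted blocklist:

! Illustrative AdGuard Home rules (adblock-style syntax).
! Assumed iCloud storage/content host; the real list is longer.
||icloud-content.com^
! Assumed examples of public DoH resolvers, to make bypassing the
! local resolver a little harder.
||dns.google^
||cloudflare-dns.com^

DNS filtering alone won't stop clients with hard-coded DoH/DoT endpoints, though; that part needs firewall rules as well (e.g. blocking outbound port 853 and redirecting port 53 to the local resolver).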
Of course, anything optional is fine. It will simply be ignored by clients that have no knowledge of it (just like banner would be).
(#abcdefghijkl https://example.com/tw.txt#:~:text=2025-10-01T10:28:00Z), because it can be simply hacked into clients currently on hash v1 and provides an off-ramp to location-based addressing
I like that property (an off-ramp to location-based addressing), so I think I could live with that approach.
(I'm not sure why we're using text fragments, though. Wouldn't that link to the first occurrence of 2025-10-01T10:28:00Z? That's not necessarily correct. And, to be proper URLs that Firefox and Chromium understand, it would also need to be written as 2025%2D10%2D01T10:28:00Z, since the dash carries meaning in text fragments, sadly. I think all this just creates needless complication. How about we just go with https://example.com/tw.txt#2025-10-01T10:28:00Z?)
@zvava@twtxt.net My client trusts the first url field it finds. If there is none, it uses the URL that I'm using for fetching the feed.
No validation, no logging.
In practice, I've not seen issues with people messing with this field. (What I do see, of course, is broken threads when people do legitimate edits that change the hash.)
I don't see how anyone could impersonate anybody else this way. 🤔 Sure, you could use my URL in your url field, but then what? You will still show up as zvava in my client or, if you also change your nick field, as movq (zvava).
Great to see so many new clients popping up.
Exactly, @zvava@twtxt.net, I agree. (Although, in my client at least, I wouldn't use hashes anywhere.)
Put another way, what you are proposing/pushing for requires hundreds of lines of code to change across a half dozen or so clients and lots of breaking changes, not to mention unknowns.
What I want us to do is make only a half dozen or so lines of code changes to our clients and minimize the breaking changes and unknowns.
@zvava@twtxt.net Going to have to hard disagree here, I'm sorry. a) No-one reads the raw/plain twtxt.txt files; the only time you do is to debug something, or to have a sticky beak at the comments, which most clients will strip out and ignore. And b) I'm sorry, you've completely lost me! I'm old enough to pre-date Linux becoming popular, so I'm not sure what UNIX principles you think are being broken or violated by having a Twt Subject (Subject) whose content is a cryptographic content-addressable hash of the "thing"™ you're replying to, forming a chain of other replies (a thread).
I'm sorry, but the simplest thing to do is to make as few changes to the Spec as possible and all agree on a "Magic Date" from which our clients use the modified function(s).
@prologic@twtxt.net Considering the other alternatives we have been seeing (of which I have lost track already), yes. Why don't you guys (client makers) take one step at a time and, for now, increase the hash length to deal with the collisions? Then location-based addressing can be added… or not, you know.
TNO Threading (draft):
Each origin feed numbers new threads (tno:N). Replies carry both (tno:N) and (ofeed:<origin-url>). Thread identity = (ofeed, tno).
- Roots: (tno:N) (implicit ofeed = self).
- Replies: (tno:N) (ofeed:<url>).
- Clients: increment tno locally for new threads, copy both tags on reply.
- Subjects: optional, not required. (Parsing sketch below.)
ā¦
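A minimal sketch (in Python) of how a client might derive the thread identity from these tags; the regexes and names are mine, not part of the draft:

import re
from typing import NamedTuple, Optional

TNO_RE = re.compile(r"\(tno:(\d+)\)")
OFEED_RE = re.compile(r"\(ofeed:([^)\s]+)\)")

class ThreadId(NamedTuple):
    ofeed: str  # origin feed URL
    tno: int    # per-origin thread number

def thread_id(twt_text: str, own_feed_url: str) -> Optional[ThreadId]:
    """Derive (ofeed, tno) from a twt's text, if it carries TNO tags."""
    tno = TNO_RE.search(twt_text)
    if not tno:
        return None  # not part of a TNO thread
    ofeed = OFEED_RE.search(twt_text)
    # Roots omit ofeed, so it defaults to the feed the twt came from.
    origin = ofeed.group(1) if ofeed else own_feed_url
    return ThreadId(ofeed=origin, tno=int(tno.group(1)))

# A root and a reply end up with the same thread identity.
root = thread_id("(tno:3) hello world", "https://example.com/twtxt.txt")
reply = thread_id("(tno:3) (ofeed:https://example.com/twtxt.txt) hi back",
                  "https://other.example/twtxt.txt")
assert root == reply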
@prologic@twtxt.net I know we won't ever convince each other of the other's favorite addressing scheme. :-D But I wanna address (haha) your concerns:
I don't see any difference between the two schemes regarding link rot and migration. If the URL changes, both approaches are equally terrible: the feed URL is part of the hashed value in one scheme and part of the reference in the location-based scheme. It doesn't matter.
The same is true for duplication and forks. Even today, the "canonical URL" has to be chosen to build the hash. That's exactly the same with location-based addressing. Why would a mirror only duplicate stuff with location- but not content-based addressing? I really fail to see that. Also, who is using mirrors or relays anyway? I don't know of any such software, to be honest.
If there is a spam feed, I just unfollow it. Done. Not a concern for me at all. Not the slightest bit. And the byte verification is THE source of all broken threads when the conversation start is edited. Yes, this can be viewed as a feature, but how often was it actually a feature rather than an anti-feature in terms of user experience?
I don't get your argument. If the feed in question is offline, one can simply look in local caches and see if there is a message at that particular time, just like looking up a hash. Where's the difference? Except that the lookup key is longer or compound or whatever, depending on the cache format.
Even a new hashing algorithm requires work on clients etc. It's not that you get some backwards-compatibility for free. It just cannot be backwards-compatible in my opinion, no matter which approach we take. That's why I believe some magic time for the switch causes the least amount of trouble. You leave the old world untouched and working.
If these are general concerns, I'm completely with you. But I don't think that they only apply to location-based addressing. That's how I interpreted your message. I could be wrong. Happy to read your explanations. :-)
Here is just a small list of things™ that I'm aware will break, some quite badly, others in minor ways:
- Link rot & migrations: domain changes, path reshuffles, CDN/mirror use, or moving from txt → jsonfeed will orphan replies unless every reader implements perfect 301/410 history, which they won't.
- Duplication & forks: mirrors/relays produce multiple valid locations for the same post; readers see several "parents" and split the thread.
- Verification & spam-resistance: content addressing lets you dedupe and verify you're pointing at exactly the post you meant (hash matches bytes). Location anchors can be replayed or spoofed more easily unless you add signing and canonicalization.
- Offline/cached reading: without the original URL being reachable, readers can't resolve anchors; with hashes they can match against local caches/archives.
- Ecosystem churn: all existing clients, archives, and tools that assume content-derived IDs need migrations, mapping layers, and fallback logic. Expect long-lived threads to fracture across implementations.
@zvava@twtxt.net Good question. This is the spec, I think:
https://twtxt.dev/exts/metadata.html#nick
It doesn't say much. 🤔
In the wild, I've only seen "traditional" nick names, i.e. ASCII 0x21 thru 0x7E.
My client removes anything but r'[a-zA-Z0-9]' from nick names.
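As a sketch, that kind of nick sanitization is basically a one-liner (the function name is made up; the character class is the one quoted above):

import re

def sanitize_nick(nick: str) -> str:
    """Keep only ASCII letters and digits, drop everything else."""
    return re.sub(r"[^a-zA-Z0-9]", "", nick)

assert sanitize_nick("zvava!") == "zvava"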
@zvava@twtxt.net There would be only one hash for a message. Some yet-to-be-defined magic date selects which hash to use. If the message creation timestamp is before this epoch, hash it with v1, otherwise hammer it through v2. Eventually, support for v1 could be dropped once nobody interacts with the old stuff anymore. But I'd keep it around in my client, because why not.
If users choose a client which supports the extensions, they don't have to mess around with v1 and v2 hashing, just like today.
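A minimal sketch of that cutover logic, with a hypothetical, not-yet-agreed magic date:

from datetime import datetime, timezone

# Placeholder cutover instant; the real "magic date" would have to be agreed on.
MAGIC_DATE = datetime(2026, 1, 1, tzinfo=timezone.utc)

def select_hash_version(created: datetime) -> int:
    """Return which hash version applies to a twt created at `created`."""
    return 1 if created < MAGIC_DATE else 2

# An old twt stays on v1, a new one moves to v2.
assert select_hash_version(datetime(2020, 7, 1, tzinfo=timezone.utc)) == 1
assert select_hash_version(datetime(2027, 1, 1, tzinfo=timezone.utc)) == 2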
As for the school of thought, personally, I'd prefer something else, too. I'm in camp location-based addressing, or whatever it is called. The more I think about it, the more a complete redesign of twtxt and its extensions seems necessary in my opinion. Retrofitting has its limits. Of course, this is much more work, though.
@zvava@twtxt.net And yes, yarnd does have a well-documented API and two clients (a CLI and an unmaintained Flutter app).
I want to add: this isn't a twtxt client. It is Yarnd on steroids!
@kat@yarn.girlonthemoon.xyz I reckon the original <details> needs to have the open attribute set in order to expand it, so I cannot just define some custom CSS rules to do that in my browser.
But in regards to twtxt, my client won't hide anything in that realm anyway. :-) It's just more noise.
@zvava@twtxt.net Hooray for more twtxt clients 🥳
@dce@hashnix.club Ah, oh, well then. 🥴
My client supports that, if you set multiple url = fields in your feed's metadata (the top-most one must be the "main" URL; that one is used for hashing).
But yeah, multi-protocol feeds can be problematic and some have considered it a mistake to support them. 🤔
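For illustration, a feed header with multiple url fields might look like this (nick and URLs are made up); per the above, the top-most url is the one used for hashing:

# nick = example
# url = https://example.com/twtxt.txt
# url = gopher://example.com/0/twtxt.txt
# url = gemini://example.com/twtxt.txt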
@dce@hashnix.club Twet is a far better command line client. Yea
yarnd (what runs twtxt.net). I'd change this to something that's more supported like PNG, JPEG, etc.
@eric@itsericwoodward.com Name change is no worries! Funnily enough, my client yarnd seems to have picked it up automatically, which is nice (I've historically always had a few bugs to iron out there 🤣)
@kat@yarn.girlonthemoon.xyz yeah, it's pretty terrible these days. The most recent trouble I had was something as simple as installing and setting up the Tailscale client. On literally all my other devices (Linux and Android) that was a cinch, but on Windows… oh boy, I had to mess around with reg edits and all sorts of crap and eventually bludgeoned it into working, but it was a bloody pain.
In 1996, they came up with the X11 "SECURITY" extension:
https://www.reddit.com/r/linux/comments/4w548u/what_is_up_with_the_x11_security_extension/
This is what could have (eventually) solved the security issues that we're currently seeing with X11. Those issues are cited as one of the reasons for switching to Wayland.
That extension never took off. The person on reddit wonders why. I think it's simple: Containers and sandboxes weren't a thing in 1996. It hardly mattered if X11 was "insecure". If you could run an X11 client, you probably already had access to the machine and could just do all kinds of other nasty things.
Today, sandboxing is a thing. Today, this matters.
I've heard so many times that "X11 is beyond fixable, it's hopeless." I don't believe that. I believe that these problems are solvable with X11, and some devs have said "yeah, we could have kept working on it". It's just that people don't want to do it:
Why not extend the X server?
Because for the first time we have a realistic chance of not having to do that.
https://wayland.freedesktop.org/faq.html
I'm not in a position to judge the devs. Maybe the X.Org code really is so bad that you want to run away, screaming in horror. I don't know.
But all this was a choice. I don't buy the argument that we never would have gotten rid of things like core fonts.
All the toolkits and programs had to be ported to Wayland. A huge, still unfinished effort. If that was an acceptable thing to do, then it would have been acceptable to make an "X12" that keeps all the good things about X11, remains compatible where feasible, eliminates the problems, and requires some clients to be adjusted. (You could have still made "X11X12" like "XWayland" for actual legacy programs.)
Here's an example of X11/Xlib being old and archaic.
X11 knows the data type "cardinal". For example, the window property _NET_WM_ICON (which holds image data for icons) is an array of "cardinal". I'm not really familiar with that word to begin with, and I'm assuming that it comes from mathematics:
https://en.wikipedia.org/wiki/Cardinal_number
(It could also be a bird, but probably not: https://en.wikipedia.org/wiki/Cardinalidae)
We would probably call this an "integer" today.
EWMH says that icons are arrays of cardinals and that they're 32-bit numbers:
https://specifications.freedesktop.org/wm-spec/latest-single/#id-1.6.13
So it's something like 0x11223344, with 0x11 being the alpha channel, 0x22 the red channel, and so on.
You would assume that, when you retrieve such an array from the X11 server, you'd get an array of uint32_t, right?
Nope.
Xlib is so old, they use char for 8-bit stuff, short int for 16-bit, and long int for 32-bit. That is congruent with the general C data types, so it does make sense:
https://en.wikipedia.org/wiki/C_data_types
Now the funny thing is, on modern x86_64, the type long int is actually 64 bits wide.
The result is that every pixel in a Pixmap, for example, is twice as large in memory as it would need to be. Just because Xlib uses long int, because uint32_t didn't exist yet.
And this is something that I wouldn't know how to fix without breaking clients.
It annoys me when I clone a git repository A in order to build and self-host some software, only to realize later that I also needed to clone repos B, C and D. I'm not saying that's a bad thing (logical separation of code between, say, a client and a server is very handy), but some projects do not communicate very well that you need multiple repos to get everything running.
OpenBSD has the wonderful pledge() and unveil() syscalls:
https://www.youtube.com/watch?v=bXO6nelFt-E
Not only are they super useful (the program itself can drop privileges: it can initialize itself, read some files, whatever, and then tell the kernel that it will never do anything like that again; if it does, e.g. by being exploited through a bug, it gets killed by the kernel), but they are also extremely easy to use.
Imagine a server program with a connected socket in file descriptor 0. Before reading any data from the client, the program can do this:
unveil("/var/www/whatever", "r");  /* only this path stays visible, read-only */
unveil(NULL, NULL);                /* lock the unveil list, no further changes */
pledge("stdio rpath", NULL);       /* from now on: only stdio and read-only file access */
Done. It's now limited to reading files from that directory, communicating with the existing socket, stuff like that. But it cannot ever read any other files or exec() into something else.
I can't wait for the day when we have something like this on Linux. There have been some attempts, but it's not that easy. And it's certainly not mainstream yet.
I need to have a closer look at Linux's Landlock soon ("soon"), but it is considerably more complicated than pledge()/unveil().
@movq@www.uninformativ.de Me too. Speaking of which, I know you've lost a bit of "mojo" or "energy" (so have I of late); rest assured, I want to keep the status quo here with what we've built, keep it simple and change very little. What we've built has worked very well for 5+ years and we have at least 3 very strong clients (maybe 4 or 5?).
#<2025-05-10T19:34:00+00:00 https://andros.dev/texudus.txt> Nice :) And is this implemented in your client as well? I've started to brainstorm on how to parse texudus in PHP, but I guess I could snatch some code from you?
@sorenpeter@darch.dk Unfortunately it does break all clients, because the original spec stated:
Mentions are embedded within the text in either @<source.nick source.url> or @<source.url> format
@bender@twtxt.net My point was that the suggested syntax for extending mentions to point to a specific message (@<nick url timestamp>), and having location-based threading this way, might not break older clients, since they might just ignore the last value within the brackets (see the example below).
Using Z for UTC +00:00 - is that allowed in your specs?
Regarding url = I would suggest only allowing one and then maybe adding url_old = or url_alt =!?
I'm still not a fan of a DM feature, even though it helps that it has now been split out into a separate feed file. Instead I would suggest a contact = field where people can put an email or other ID/link for an established chat protocol like Signal or Matrix.
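For illustration, a reply using that extended mention form might look like the line below (the nick, URL, and timestamp are made-up values, and the syntax is only the proposal quoted above, not an accepted spec):

2025-10-02T09:00:00Z	@<example https://example.com/twtxt.txt 2025-10-01T10:28:00Z> Good point, I agree!

An older client that only understands @<nick url> would, per the argument above, simply ignore the trailing timestamp.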
@sorenpeter@darch.dk you wrote:
"This might even be backward compatible with older (pre-yarn) clients."
Yarnd is as backwards-compatible with older clients as this. I dare say, even more so.
@andros@twtxt.andros.dev Thanks for consolidating a lot of good ideas. Especially how you have decided to just extend the mention syntax for location-based threads. This might even be backward compatible with older (pre-yarn) clients.
What about using Z for UTC +00:00 - is that allowed in your specs?
Regarding url = I would suggest only allowing one and then maybe adding url_old = or url_alt =!?
I'm still not a fan of a DM feature, even though it helps that it has now been split out into a separate feed file. Instead I would suggest a contact = field where people can put an email or other ID/link for an established chat protocol like Signal or Matrix.
@andros@twtxt.andros.dev @eapl.me@eapl.me Still lots of bugs in my client. 🥴 I'll try to fix it next week.
And yes, using the same timestamp twice will very likely break threads.
@andros@twtxt.andros.dev I set up a test feed here:
https://www.uninformativ.de/texudus.txt
I made some preliminary adjustments to my client so that it can work with the different threading model. (And I totally get the concerns, this can be quite a bit of work. Especially in a large code base like Yarn.)
if clauses to this. My point is: Every time I see a hash, I'd like to have a hint as to where to find the corresponding twt.
@movq@www.uninformativ.de I think we can make this work. As long as it's just a client hint.
@movq@www.uninformativ.de If we're focusing on solving the "missing roots" problem, I would start to think about "client recommendations". The first recommendation would be:
- A reply to a Twt that has no initial Subject must itself have a Subject of the form (hash; url).
This way, it's a hint to fetching clients that follow B but not A (in the case of no mentions) that the Subject/Root is (very likely) to be found in the feed at that url.
@lyse@lyse.isobeef.org Likewise, I don't have the energy for a fundamental shift in any of our specifications that would inevitably cause a lot of toil and churn in our clients' implementations, plus unforeseen problems that we haven't really fully understood.
@lyse@lyse.isobeef.org Hahahaha 🤣 I mean it's "okay" every now and then, but what's the point of having good clients and tools if we don't use 'em 🤣
We have 4 clients but this should be 6 I believe with tt2 from @lyse@lyse.isobeef.org and Twtxtory from @javivf@adn.org.es?
Finally, I propose that we increase the Twt Hash length from 7 to 12 and use the first 12 characters of the base32-encoded blake2b hash. This will solve two problems: the fact that all hashes today end in either q or a (oops)
And increasing the Twt Hash size will ensure that we never run into a chance of collision for eons to come. The chance of a 50% collision with 64 bits / 12 characters is at roughly ~12.44B Twts. That ought to be enough! -- I also propose that we modify all our clients and make this change from the 1st of July 2025, which will be Yarn.social's 5th birthday and 5 years since I started this whole project and endeavour! 🌱 #Twtxt #Update
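A sketch of the proposed v2 hash in Python; I'm assuming the hashed string is still the feed URL, timestamp, and content joined by newlines and that blake2b stays at a 256-bit digest, so the only change shown here is truncating to the first 12 base32 characters:

import hashlib
from base64 import b32encode

def twt_hash_v2(feed_url: str, timestamp: str, content: str) -> str:
    """First 12 characters of the base32-encoded blake2b hash of a twt."""
    # Assumed serialization: url, timestamp, and content joined by newlines.
    payload = "\n".join([feed_url, timestamp, content]).encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    encoded = b32encode(digest).decode("ascii").rstrip("=").lower()
    return encoded[:12]  # proposal: take the first 12 characters

print(twt_hash_v2("https://example.com/twtxt.txt",
                  "2025-10-01T10:28:00Z", "Hello, twtxt!"))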
And speaking of Twtxt (See: #xushlda), feeds should be treated as append-only. Your client(s) should be appending Twts to the bottom of the file. Edits should never modify the timestamp of the Twt being edited, nor should a Twt that was edited be deleted, unless you actually intended to delete it (but that's more complicated, as it's very hard to control or tell clients what to do in a truly decentralised ecosystem for the deletion case). #Twtxt #Client #Recommendations
Just like we don't write emails by hand anymore (See: #a3adoka), we don't manually write Twts or update our twtxt.txt feeds. Instead, we use modern Twtxt clients that conform to the specifications at Twtxt.dev for a seamless, automated experience. #Twtxt #Twt #UserExperience
Nobody writes emails by hand using RFC 5322 anymore, nor do we manually send them through telnet and SMTP commands. The days of crafting emails in raw format and dialing into servers are long gone. Modern email clients and services handle it all seamlessly in the background, making email easier than ever to send and receive, without needing to understand the protocols or formats behind it! #Email #SMTP #RFC #Automation
@javivf@adn.org.es Ahh! So this is your client implementation? 🧐
Was just looking at the client you're using, Twtxtory. Very nice! Is this your client, did you write it? I'd not come across it before!
$ bat https://twtxt.net/twt/edgwjcq | jq '.subject'
"(#yarnd)"
hahahahaha 🤣 Does your client allow you to do this or what? 🤔
Interesting factoid… By inspecting my "followers" list every now and again, I can tell who uses a client like jenny, tt, or any other client where fetches are driven by the user invoking the app. What do we call this type of client? Hmmm 🤔 Then I can tell who uses yarnd, because they are "seen" more frequently 🤣
My hypothesis for why registries didn't work, and why they still won't really work today, is that they bend the rules of "true" decentralization a bit. Users have to pick one or more registries to "register" to. Why would they want to do this? What is their incentive to do so? Then, on the other hand, users need a client that has registry support, but now which registry or set of registries do you choose?