Searching We.Love.Privacy.Club

Twts matching #hash

@zvava@twtxt.net I might misunderstand what you wrote, but only hashing the message once and storing the hash together with the message in the database seems a way better approach to me. It’s fixed and doesn’t change, so there’s no need to recompute it during runtime over and over and over again. You just have it. And can easily look up other messages by hash.
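
A minimal sketch of that approach, assuming a SQLite store (the table layout, names, and the stand-in hash function are all made up for illustration):

import hashlib
import sqlite3

def twt_hash(feed_url: str, created: str, text: str) -> str:
    # Stand-in for the real spec'd hash; the point is it runs once, at insert time.
    payload = f"{feed_url}\n{created}\n{text}".encode("utf-8")
    return hashlib.blake2b(payload, digest_size=32).hexdigest()[:7]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE twts (hash TEXT PRIMARY KEY, feed TEXT, created TEXT, text TEXT)")

feed, created, text = "https://example.com/twtxt.txt", "2025-09-01T12:00:00Z", "Hello!"
h = twt_hash(feed, created, text)
db.execute("INSERT INTO twts VALUES (?, ?, ?, ?)", (h, feed, created, text))

# Later lookups are a plain indexed fetch; nothing is ever re-hashed.
row = db.execute("SELECT text FROM twts WHERE hash = ?", (h,)).fetchone()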

In-reply-to » Which actively maintained Yarn/twtxt clients are there at the moment? Client authors raise your hands! 🙋

@lyse@lyse.isobeef.org Damn. That was stupid of me. I should have posted examples using 2026-03-01 as cutoff date. 😂

In my actual test suite, everything uses 2027-01-01 and then I have this, hoping that that’s good enough. 🥴

from datetime import timedelta

import jenny  # the client under test; URL and TEXT are assumed module-level fixtures

def test_rollover():
    d = jenny.HASHV2_CUTOFF_DATE
    # Twts created before the cutoff keep the old 7-character hashes ...
    assert len(jenny.make_twt_hash(URL, d - timedelta(days=7), TEXT)) == 7
    assert len(jenny.make_twt_hash(URL, d - timedelta(seconds=3), TEXT)) == 7
    assert len(jenny.make_twt_hash(URL, d - timedelta(seconds=2), TEXT)) == 7
    assert len(jenny.make_twt_hash(URL, d - timedelta(seconds=1), TEXT)) == 7
    # ... and twts created at or after the cutoff get the new 12-character ones.
    assert len(jenny.make_twt_hash(URL, d, TEXT)) == 12
    assert len(jenny.make_twt_hash(URL, d + timedelta(seconds=1), TEXT)) == 12
    assert len(jenny.make_twt_hash(URL, d + timedelta(seconds=2), TEXT)) == 12
    assert len(jenny.make_twt_hash(URL, d + timedelta(seconds=3), TEXT)) == 12
    assert len(jenny.make_twt_hash(URL, d + timedelta(days=7), TEXT)) == 12

(In other words, I don’t care as long as it’s before 2027-01-01. 😁😅)
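
For context, a sketch of the cutoff dispatch that a test like the one above exercises. The v1 branch follows the published spec as far as I know (blake2b-256, lowercase base32 without padding, last 7 characters); the v2 branch (SHA-256, first 12 characters) and the cutoff value are assumptions based on the draft, and timestamp serialization is simplified:

import base64
import hashlib
from datetime import datetime, timezone

# Assumed cutoff for illustration; the real value comes from the finalized v2 spec.
HASHV2_CUTOFF_DATE = datetime(2026, 3, 1, tzinfo=timezone.utc)

def make_twt_hash(url: str, created: datetime, text: str) -> str:
    # The spec has its own timestamp normalization rules; isoformat() is a simplification.
    payload = f"{url}\n{created.isoformat()}\n{text}".encode("utf-8")
    if created < HASHV2_CUTOFF_DATE:
        # v1: blake2b-256, base32, lowercased, padding stripped, last 7 characters.
        digest = hashlib.blake2b(payload, digest_size=32).digest()
        return base64.b32encode(digest).decode("ascii").lower().rstrip("=")[-7:]
    # v2 (assumed per the draft): SHA-256, same encoding, first 12 characters.
    digest = hashlib.sha256(payload).digest()
    return base64.b32encode(digest).decode("ascii").lower().rstrip("=")[:12]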

All my newly added test cases failed, the ones that movq thankfully provided in https://git.mills.io/yarnsocial/twtxt.dev/pulls/28#issuecomment-20801 for the draft of the twt hash v2 extension. The first error was easy to see in the diff: the hashes were way too long. You’ve already guessed it, I had cut the hash from the twelfth character to the end instead of taking the first twelve characters: hash[12:] instead of hash[:12].

After fixing this rookie mistake, the tests still all failed. Hmmm. Did I still cut the wrong twelve characters? :-? I even checked the Go reference implementation in the document itself. But it read basically the same as mine. Strange, what the heck is going on here?

Turns out that my vim replacements to transform the Python code into Go code butchered all the URLs. ;-) The order of operations matters. I first replaced the equals signs with colons for the subtest struct fields and then wanted to transform the RFC 3339 timestamp strings into time.Date(…) calls. So, I replaced the colons in the times with commas and spaces. Hence, my URLs then all read https, //example.com/twtxt.txt.
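
A tiny Python repro of that ordering trap (illustrative only, not the actual vim commands):

line = 'URL = "https://example.com/twtxt.txt", Timestamp = "2024-12-01T12:30:00Z"'
step1 = line.replace("=", ":")    # first pass: struct-field '=' becomes ':'
step2 = step1.replace(":", ", ")  # second pass: timestamp ':' becomes ', ', which also hits every other colon
print(step2)  # ... "https, //example.com/twtxt.txt" ... "2024-12-01T12, 30, 00Z"

Running the timestamp substitution first (or anchoring it to digits) would have left the URLs alone.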

But that was it. All tests green. \o/

Linux Looks To Remove SHA1 Support For Signing Kernel Modules
Patches posted to the Linux kernel mailing list this week seek to remove SHA1 support for signing of kernel modules. This is part of the larger industry effort to move away from SHA1, given its vulnerability to hash collisions and the availability of superior hashing algorithms… ⌘ Read more

In-reply-to » Hmmm, looks like my twt hash algorithm implementation calculates incorrect values. Might be the tilde in the URL that throws something off. :-? At least yarnd and jenny agree on a different hash.

No, I was using an empty hash URL when the feed didn’t specify a url metadata field. Now I’m correctly falling back to the feed URL.

Hmmm, looks like my twt hash algorithm implementation calculates incorrect values. Might be the tilde in the URL that throws something off. :-? At least yarnd and jenny agree on a different hash.

Net zero Australia LIVE updates: Liberals in party room meeting set to hash out emissions policy; Meeting set to last hours; MPs to speak in alphabetical order
Follow along as we bring you the latest live news updates from Australia and around the world. ⌘ Read more

Net zero Australia LIVE updates: Liberals in party room meeting set to hash out net zero emissions policy; McCormack says Coalition should stick together regardless of decision
Follow along as we bring you the latest live news updates from Australia and around the world. ⌘ Read more

In-reply-to » @aelaraji tell us all about it, without omitting details!

Just typing twts directly into my twtxt file.

Details:

  • Opening my twtxt file remotely using vim scp://user@remote:port//path/to/twtxt.txt
  • Inserting the date, time and tab part of the twt with :.!echo "$(date -Is)\t"
  • In case I need to add a new line I just press Ctrl+Shift+u, type in 2028 (the code point of the Unicode LINE SEPARATOR) and hit Enter
  • In order to reply, you just steal a twt hash from your favorite Yarn instance.

It looks tedious, but it’s fun to know I can twt no matter where I am, as long as I can ssh in.
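
For illustration, a finished line in the feed then looks something like this (the gap after the timestamp is a literal tab; the text is made up):

2025-09-01T18:04:33+02:00	Hello from plain vim over scp!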

@zvava@twtxt.net My client trusts the first url field it finds. If there is none, it uses the URL that I’m using for fetching the feed.

No validation, no logging.

In practice, I’ve not seen issues with people messing with this field. (What I do see, of course, is broken threads when people do legitimate edits that change the hash.)

I don’t see how anyone could impersonate anybody else this way. 🤔 Sure, you could use my URL in your url field, but then what? You will still show up as zvava in my client or, if you also change your nick field, as movq (zvava).

@zvava@twtxt.net Yes, the specification defines the first url to be used for hashing. No matter if it points to a different feed or whatever. Just unsubscribe from malicious feeds and you’re done.

Since the first url is used for hashing, it must never change. Otherwise, it will break threading, as you already noticed. If your feed moves and you wanna keep the old messages in the same new feed, you still have to point to the old url location and keep that forever. But you can add more urls. As I said several times in the past, in hindsight, using the first url was a big mistake. It would have been much better if the last encountered url were used for hashing onwards. This way, feed moves would be relatively straightforward. However, that ship has sailed. Luckily, feeds typically don’t relocate.
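
As a sketch, the metadata of a moved feed could look like this (hypothetical URLs): the first url stays pinned to the old location so existing hashes keep working, and the new location is simply added after it:

# nick = example
# url = https://old.example.org/twtxt.txt
# url = https://new.example.org/twtxt.txt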

@alexonit@twtxt.alessandrocutolo.it Yeah I think we’re overstating the UNIX principles a bit here 🤣 I get what you’re trying to say though @zvava@twtxt.net 😅 If I could go back in time and do it all over again, I would have gotten the hash length correct and I would have used SHA-256 instead. But someone way smarter than me designed the Twt Hash spec, we adopted it and well here we are today, it works™ 😅

In-reply-to » @bender Really? 🤔

@zvava@twtxt.net Going to have to hard disagree here, I’m sorry. a) No-one reads the raw/plain twtxt.txt files; the only time you do is to debug something, or to have a stickybeak at the comments, which most clients will strip out and ignore. And b) I’m sorry, you’ve completely lost me! I’m old enough to pre-date Linux becoming popular, so I’m not sure what UNIX principles you think are being broken or violated by having a Twt Subject (Subject) whose contents is a cryptographic content-addressable hash of the “thing”™ you’re replying to, forming a chain of other replies (a thread).

I’m sorry, but the simplest thing to do is to make the smallest number of changes to the Spec as possible and all agree on a “Magic Date” from which our clients use the modified function(s).
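
For reference, a reply twt carries that hash as its Twt Subject, e.g. (placeholder hash and text, literal tab after the timestamp):

2025-09-01T18:30:00+02:00	(#abcdefg) I agree with this.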

In-reply-to » @bender Really? 🤔

@bender@twtxt.net Well honestly, this is just it. My strong position on this is quite simple:

Do the simplest thing that could work.

It’s one of the age-old UNIX philosophies.

Therefore, the simplest thing™ to do here is to just increase the hash length, mark a magic™ date/time as @lyse@lyse.isobeef.org has indicated and call it a day. We’ll then be fine for a few hundred years, at which point there’ll be no-one left alive to give a shit™ anyway 🤣

In-reply-to » @bender Really? 🤔

@prologic@twtxt.net considering other alternatives we have been seeing (of which I have lost track already), yes. Why don’t you guys (client makers) take it one step at a time and, for now, increase the hash length to deal with the collisions? Then location-based addressing can be added… or not, you know. 😅

In-reply-to » (#altkl2a) Here is just a small list of things™ that I'm aware will break, some quite badly, others in minor ways:

@lyse@lyse.isobeef.org I don’t think there’s any point in continuing the discussion of Location vs. Content based addressing.

I want us to preserve Content based addressing.

Let’s improve the user experience and fix the hash collision problems.

In-reply-to » (#altkl2a) Here is just a small list of things™ that I'm aware will break, some quite badly, others in minor ways:

@prologic@twtxt.net I know we won’t ever convince each other of the other’s favorite addressing scheme. :-D But I wanna address (haha) your concerns:

  1. I don’t see any difference between the two schemes regarding link rot and migration. If the URL changes, both approaches are equally terrible, as the feed URL is part of the hashed value in one scheme and part of the reference in the other. It doesn’t matter.

  2. The same is true for duplication and forks. Even today, the “canonical URL” has to be chosen to build the hash. That’s exactly the same with location-based addressing. Why would a mirror only duplicate stuff with location- but not content-based addressing? I really fail to see that. Also, who is using mirrors or relays anyway? I don’t know of any such software, to be honest.

  3. If there is a spam feed, I just unfollow it. Done. Not a concern for me at all. Not the slightest bit. And the byte verification is THE source of all broken threads when the conversation start is edited. Yes, this can be viewed as a feature, but how many times was it actually a feature rather than behaving as an anti-feature in terms of user experience?

  4. I don’t get your argument. If the feed in question is offline, one can simply look in local caches and see if there is a message at that particular time, just like looking up a hash (see the sketch below). Where’s the difference? Except that the lookup key is longer or compound or whatever, depending on the cache format.

  5. Even a new hashing algorithm requires work on clients etc. It’s not that you get some backwards-compatibility for free. It just cannot be backwards-compatible in my opinion, no matter which approach we take. That’s why I believe some magic time for the switch causes the least amount of trouble. You leave the old world untouched and working.

If these are general concerns, I’m completely with you. But I don’t think that they only apply to location-based addressing. That’s how I interpreted your message. I could be wrong. Happy to read your explanations. :-)
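
A sketch of the cache-lookup point from item 4: the same cached twt can be found under a content hash or under a compound location key, and only the shape of the key differs (all names and values made up):

# One cached twt, addressable both ways.
twt = {"feed": "https://example.com/twtxt.txt",
       "created": "2025-09-01T12:00:00Z",
       "text": "Hello!"}

by_hash = {"abcdefg": twt}                          # content-based key
by_location = {(twt["feed"], twt["created"]): twt}  # location-based compound key

assert by_hash["abcdefg"] is by_location[(twt["feed"], twt["created"])]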

Here is just a small list of things™ that I’m aware will break, some quite badly, others in minor ways:

  1. Link rot & migrations: domain changes, path reshuffles, CDN/mirror use, or moving from txt → jsonfeed will orphan replies unless every reader implements perfect 301/410 history, which they won’t.
  2. Duplication & forks: mirrors/relays produce multiple valid locations for the same post; readers see several “parents” and split the thread.
  3. Verification & spam-resistance: content addressing lets you dedupe and verify you’re pointing at exactly the post you meant (hash matches bytes). Location anchors can be replayed or spoofed more easily unless you add signing and canonicalization.
  4. Offline/cached reading: without the original URL being reachable, readers can’t resolve anchors; with hashes they can match against local caches/archives.
  5. Ecosystem churn: all existing clients, archives, and tools that assume content-derived IDs need migrations, mapping layers, and fallback logic. Expect long-lived threads to fracture across implementations.

We’ve been discussing the idea of changing the threading model from Content-based Addressing to Location-based addressing for years now. The problem is quite complex, but I feel I have to keep reminding y’all of the potential perils of changing this and the pros/cons of each model:

With content-addressed threading, a reply points at something that’s intrinsically identified (hash of author/feed URI + timestamp + content). That ID never changes as long as the content doesn’t. Switching to location-based anchors makes the reply target extrinsic—it now depends on where the post currently lives. In a pull-based, decentralised network, locations drift. The moment they do, thread identity fragments.

@zvava@twtxt.net There would be only one hash for a message. Some to-be-defined magic date selects which hash to use. If the message creation timestamp is before this epoch, hash it with v1, otherwise hammer it through v2. Eventually, support for v1 could be dropped as nobody interacts with the old stuff anymore. But I’d keep it around in my client, because why not.

If users choose a client which supports the extensions, they don’t have to mess around with v1 and v2 hashing, just like today.

As for the school of thought, personally, I’d prefer something else, too. I’m in camp location-based addressing, or whatever it is called. The more I think about it, the more a complete redesign of twtxt and its extensions seems necessary, in my opinion. Retrofitting has its limits. Of course, this is much more work, though.

@zvava@twtxt.net I was about to suggest that you post some examples. By now, we’re pretty good at debugging hashing issues, because that happens so often. 😂 But it looks like you figured it out on your own. ✌️

@dce@hashnix.club Ah, oh, well then. 🥴

My client supports that, if you set multiple url = fields in your feed’s metadata (the top-most one must be the “main” URL, that one is used for hashing).

But yeah, multi-protocol feeds can be problematic and some have considered it a mistake to support them. 🤔

I have zero mental energy for programming at the moment. 🫤

I’ll try to implement the new hashing stuff in jenny before the “deadline”. But I don’t think you’ll see any texudus development from me in the near future. ☹️

@ About the URL: since it is no longer used for hashing, there might be no need to change it. I agree that we keep all the parts that are already out there, for the most part. Instead of a contact field you could also just use links like: link = Email mailto:user@example.dk or link = Signal https://signal.me/sthF4raI5Lg_ybpJwB1sOptDla4oU7p[...]

In-reply-to » @prologic Not sure I’d attach any if clauses to this. My point is: Every time I see a hash, I’d like to have a hint as to where to find the corresponding twt.

The reason I think this can work so well and I’m in full support of it is that it’s the least disruptive way to resolve the issue of:

where did this hash come from?
