Searching We.Love.Privacy.Club

Twts matching #date

Facebook Dating Is a Surprise Hit For the Social Network
An anonymous reader quotes a report from the New York Times: Facebook Dating, which debuted in 2019, has become a surprise hit for the company. It lets people create a dating profile free in the app, where they can swipe and match with other eligible singles. It has more than 21 million daily users, quietly making it one of the most popular online dating service … ⌘ Read more

⤋ Read More

Direct File Won’t Happen in 2026, IRS Tells States
NextGov: The IRS has notified states that offered the free, government tax filing service known as Direct File in 2025 that the program won’t be available next filing season. In an email sent from the IRS to 25 states, the tax agency thanked them for collaborating and noted that “no launch date has been set for the future.”

“IRS Direct File will not be available in Filing Se … ⌘ Read more

⤋ Read More
In-reply-to » @aelaraji tell us all about it, without omitting details!

@bender@twtxt.net You are totally correct! The thing is: The Caveman within was thinking how minimal can one go before things start to get too uncomfortable? And if cavemen weren’t supposed to be too self-conscious about their spelling, I could have just ssh remote echo "$(date -Is)\tTwt Twt Mother-Lover! 🤣🤣" >> /path/to/twtxt.txt and called it a day.
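
There is one subtle catch with that one-liner, though: written like that, the >> redirection happens locally, and a plain echo doesn’t reliably expand the \t. A minimal sketch of a working form (host and path are placeholders) would be something like:

# Hedged sketch: quote the whole pipeline so it runs remotely and >> appends on the server;
# printf expands the \t in its format string. Assumes GNU date for -Is.
ssh remote 'printf "%s\t%s\n" "$(date -Is)" "Twt Twt Mother-Lover!" >> /path/to/twtxt.txt'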

⤋ Read More
In-reply-to » @aelaraji tell us all about it, without omitting details!

Just typing twts directly into my twtxt file.

Details:

  • Opening my twtxt file remotely using vim scp://user@remote:port//path/to/twtxt.txt
  • Inserting the date, time and tab part of the twt with :.!echo "$(date -Is)\t"
  • In case I need to add a new line I just hit Ctrl+Shift+u, type in 2028 (U+2028, the Unicode line separator) and hit Enter
  • In order to reply, you just steal a twt hash from your favorite Yarn instance.

It looks tedious, but it’s fun to know I can twt no matter where I am, as long as I can ssh in.
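
If the vim detour ever gets old, the same steps fit into a tiny shell helper. This is a hypothetical sketch (host and feed path are made up), not anyone’s actual setup:

# twt: prefix the message with an ISO timestamp + tab and append it to the remote feed.
# The timestamp comes from the local clock; multi-line twts would still need the U+2028 trick.
twt() {
    printf '%s\t%s\n' "$(date -Is)" "$*" | ssh user@remote 'cat >> /path/to/twtxt.txt'
}

Usage would then just be: twt "Hello from wherever I can ssh from."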

⤋ Read More
In-reply-to » @lyse it hasn't happened yet. It is this coming Saturday.

@bender@twtxt.net Hm, are we talking about different dates or are there different timezone offsets for this timezone abbreviation? With EDT being UTC-4, 2025-11-02T12:00:00Z is Sunday at 8:00 in the morning local time for you. Or where did I mess up here? :-?

@prologic@twtxt.net You want me to submit a reply with “I probably won’t show up”?

⤋ Read More
In-reply-to » https://zsblog.mills.io/ for anyone interested. I think I still have some small tweaking to do before I use this for realz.

the single posts have no date (intended?)

What do you mean by this? 🤔

⤋ Read More
In-reply-to » https://zsblog.mills.io/ for anyone interested. I think I still have some small tweaking to do before I use this for realz.

@prologic@twtxt.net need to work on the CSS. For example, the tags are too big, the code blocks (and the inline ones) are too small, the single posts have no date (intended?), and so on. It’s an alpha start!

⤋ Read More
In-reply-to » @bender Really? 🤔

@zvava@twtxt.net Going to have to hard disagree here, I’m sorry. a) no-one reads the raw/plain twtxt.txt files; the only time you do is to debug something, or to have a sticky beak at the comments, which most clients will strip out and ignore; and b) I’m sorry, you’ve completely lost me! I’m old enough to pre-date Linux becoming popular, so I’m not sure what UNIX principles you think are being broken or violated by having a Twt Subject (Subject) whose content is a cryptographic content-addressable hash of the “thing”™ you’re replying to, forming a chain of other replies (a thread).

I’m sorry, but the simplest thing to do is to make as few changes to the Spec as possible and all agree on a “Magic Date” from which our clients use the modified function(s).

⤋ Read More
In-reply-to » @bender Really? 🤔

@bender@twtxt.net Well honestly, this is just it. My strong position on this is quite simple:

Do the simplest thing that could work.

It’s one of the age-old UNIX philosophies.

Therefore, the simplest thing™ to do here is to just increase the hash length, mark a magic™ date/time as @lyse@lyse.isobeef.org has indicated and call it a day. We’ll then be fine for a few hundred years, at which point there’ll be no-one left alive to give a shit™ anyway 🤣

⤋ Read More

The driver’s license documents in Germany now have an expiration date. You have to renew them every 15 years. (Not the license itself, just the documents.)

I just got my renewed documents. Their expiration date says something like 01.09.40. Huh? That looks super weird to me, like an error. But no, it’s 2040 … Just 15 years away.

⤋ Read More

@zvava@twtxt.net There would be only one hash for a message. Some yet-to-be-defined magic date selects which hash to use. If the message creation timestamp is before this epoch, hash it with v1, otherwise hammer it through v2. Eventually, support for v1 could be dropped as nobody interacts with the old stuff anymore. But I’d keep it around in my client, because why not.

If users choose a client which supports the extensions, they don’t have to mess around with v1 and v2 hashing, just like today.

As for the school of thought, personally, I’d prefer something else, too. I’m in camp location-based addressing, or whatever it is called. The more I think about it, a complete redesign of twtxt and its extensions would be necessary in my opinion. Retrofitting has its limits. Of course, this is much more work, though.

⤋ Read More
In-reply-to » Finally I propose that we increase the Twt Hash length from 7 to 12 and use the first 12 characters of the base32 encoded blake2b hash. This will solve two problems, the fact that all hashes today either end in q or a (oops) 😅 And increasing the Twt Hash size will ensure that we never run into the chance of collision for ions to come. Chances of a 50% collision with 64 bits / 12 characters is roughly ~12.44B Twts. That ought to be enough! -- I also propose that we modify all our clients and make this change from the 1st July 2025, which will be Yarn.social's 5th birthday and 5 years since I started this whole project and endeavour! 😱 #Twtxt #Update

I will be adding the code in for yarnd very soon™ for this change, with a simple rule: if the date is >= 2025-07-01 then compute_new_hashes else compute_old_hashes
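
For what it’s worth, a rough shell sketch of that cutover, assuming I’ve got the current rules right (url, timestamp and content joined by newlines, hashed with blake2b-256, base32-encoded and lowercased; old scheme = last 7 characters, new scheme = first 12). The helper names are made up and this is not yarnd’s actual code:

# Hypothetical sketch only. Assumes GNU coreutils (b2sum, basenc) and xxd.
twt_b32() {  # $1=url $2=timestamp $3=content
    printf '%s\n%s\n%s' "$1" "$2" "$3" \
        | b2sum -l 256 | cut -d' ' -f1 \
        | xxd -r -p \
        | basenc --base32 | tr -d '=' | tr 'A-Z' 'a-z'
}

twt_hash() {  # $1=url $2=timestamp (RFC3339) $3=content
    b32=$(twt_b32 "$1" "$2" "$3")
    day=$(printf '%s' "$2" | cut -c1-10 | tr -d '-')
    if [ "$day" -ge 20250701 ]; then
        printf '%s\n' "$b32" | cut -c1-12    # new hashes: first 12 characters
    else
        printf '%s\n' "$b32" | tail -c 8     # old hashes: last 7 characters
    fi
}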

⤋ Read More
In-reply-to » jenny really isn’t well equipped to handle edits of my own twts.

@kat@yarn.girlonthemoon.xyz It’s more like a cache, it stores things like “timestamp of the most recent twt we’ve seen per feed” or “last modification date” (to be used with HTTP’s if-modified-since header). You can nuke these files at any time, it might just result in more traffic (e.g., always getting a full response instead of just “HTTP 304 nope, didn’t change”).
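
The pattern, roughly, illustrated with curl rather than jenny’s actual code (URL and header value are placeholders):

# Conditional fetch: only download the feed again if it changed since last time.
last_mod='Wed, 19 Mar 2025 17:34:08 GMT'   # remembered from a previous response
code=$(curl -sS -o /tmp/feed.txt -w '%{http_code}' \
    -H "If-Modified-Since: $last_mod" 'https://example.com/twtxt.txt')
[ "$code" = 304 ] && echo 'not modified, keep the cached copy'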

@quark@ferengi.one Yes, I often write a couple of twts, don’t publish them, then sometimes notice a mistake and want to edit it. You’re right, as soon as stuff is published, threads are going to break/fork by edits.

⤋ Read More
In-reply-to » Search syntax appears to be:

Ahhh! It’s all Soren’s fault 🤣

commit ea9eaaf3d3977701dcb84b927c77c4f921bdbf43
Author: sorenpeter <sorenpeter@noreply@mills.io>
Date:   Sat Sep 24 23:34:07 2022 +0000

    Replacing Pico.css with Simple.css (#990)

    Replacing pico.css with simple.css along with some small UI changes

⤋ Read More

@andros@twtxt.andros.dev Can you reproduce any of this outside of your client? I can’t spot a mistake here:

$ curl -sI 'http://movq.de/v/8684c7d264/.html%2Dindex%2Dthumb%2Dgimp11%2D1.png.jpg'
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 2615
Content-Type: image/jpeg
Date: Wed, 19 Mar 2025 19:53:17 GMT
Last-Modified: Wed, 19 Mar 2025 17:34:08 GMT
Server: OpenBSD httpd

$ curl -sI 'https://movq.de/v/8684c7d264/gimp11%2D1.png'
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 131798
Content-Type: image/png
Date: Wed, 19 Mar 2025 19:53:19 GMT
Last-Modified: Wed, 19 Mar 2025 17:18:07 GMT
Server: OpenBSD httpd

$ telnet movq.de 80
Trying 185.162.249.140...
Connected to movq.de.
Escape character is '^]'.
HEAD /v/8684c7d264/.html%2Dindex%2Dthumb%2Dgimp11%2D1.png.jpg HTTP/1.1
Host: movq.de
Connection: close

HTTP/1.1 200 OK
Connection: close
Content-Length: 2615
Content-Type: image/jpeg
Date: Wed, 19 Mar 2025 19:53:31 GMT
Last-Modified: Wed, 19 Mar 2025 17:34:08 GMT
Server: OpenBSD httpd

Connection closed by foreign host.
$ 

⤋ Read More
In-reply-to » Any idea What's this "twtxtfeevalidator/0.0.1" UA about? I thought I could ask before throwing a 1000GB file at it 🪤 could it be the same 'xt' thing @lyse was talking about the other day?

hmm… apparently the invalid twts are the latest ones I’d posted from Timeline, most probably because I’d tried to restore them manually after unintentionally overwriting my twtxt file with one that was out of date 🤦

⤋ Read More
In-reply-to » Simplified twtxt - I want to suggest some dogmas or commandments for twtxt, from where we can work our way back to how to implement different features like replies/threads:

@Codebuzz@www.codebuzz.nl Speed is an issue for the client software, not the format itself, but yes, I agree that it makes the most sense to append posts to the end of the file. I’m referring to the definition that it’s the first url = in the file that has to be used for the twthash computation, which is too arbitrary a way of defining something and breaks threading time and time again. And this is the case for not using url+date+message = twthash.

⤋ Read More

Some more arguments for a location-based threading model over a content-based one:

  1. The format: (#<DATE URL>) or (@<DATE URL>) both make sense: # as prefix is for a hashtag like we already got with the (#twthash), and @ as prefix denotes that this is a mention of a specific post in a feed, and not just the feed in general. Using either can make implementation easier, since most clients already got this kind of filtering (see the sketch after this list).

  2. Having something like (#<DATE URL>) will also make mentions via webmentions for twtxt easier to implement, since there is no need for looking up the #twthash. This will also make it possible to build third-party twt-mention services.

  3. Supporting twt/webmentions will also increase discoverability as a way to know about both replies and feed mentions from feeds that you don’t follow.
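
As referenced in point 1, a sketch of what appending a reply with such a location-based subject could look like; the date, URL and text are made up, and this is the proposed syntax, not an accepted extension:

# Hypothetical reply using the proposed (#<DATE URL>) subject instead of a twt hash.
parent_date='2024-09-18T13:27:05+02:00'
parent_url='https://example.org/twtxt.txt'
printf '%s\t(#%s %s) %s\n' "$(date -Is)" "$parent_date" "$parent_url" \
    'Sounds like a good idea!' >> ~/twtxt.txt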

⤋ Read More

I demand full 9-digit nanosecond timestamps and the full TZ identifier as documented in the tz 2024b database! I need to know if there was a change in daylight savings as per the locality in question as of the provided date.

⤋ Read More

@sorenpeter@darch.dk I like this idea. Just for fun, I’m using a variant in this twt. (Also because I’m curious how non-hash subjects appear in jenny and yarn.)

URLs can contain commas, so I suggest a different character to separate the url from the date. In this twt I’ve used a space (also after “replyto”, for symmetry).

I think this solves:

  • Changing feed identities: although @mckinley@twtxt.net points out URLs can change, I think this syntax should be okay as long as the feed at that URL can be fetched, and as long as the current canonical URL for the feed lists this one as an alternate.
  • editing, if you don’t care about message integrity
  • finding the root of a thread, if you’re not following the author

An optional hash could be added if message integrity is desired. (E.g. if you don’t trust the feed author not to make a misleading edit.) Other recent suggestions about how to deal with edits and hashes might be applicable then.

People publishing multiple twts per second should include sub-second precision in their timestamps. As you suggested, the timestamp could just be copied verbatim.

⤋ Read More
In-reply-to » Hmm... I replied to this message:

This is how my original message shows up on jenny:

From: quark <quark>
Subject: (#o) @prologic this was your first twtxt. Cool! :-P
Date: Mon, 16 Sep 2024 12:42:27 -0400
Message-Id: <k7imvia@twtxt>
X-twtxt-feed-url: https://ferengi.one/twtxt.txt

(#o) @<prologic https://twtxt.net/user/prologic/twtxt.txt> this was your first twtxt. Cool! :-P

⤋ Read More
In-reply-to » (#o) @prologic this was your first twtxt. Cool! :-P

Hmm… I replied to this message:

From: prologic <prologic>
Subject: Hello World! 😊
Date: Sat, 18 Jul 2020 08:39:52 -0400
Message-Id: <o6dsrga>
X-twtxt-feed-url: https://twtxt.net/user/prologic/twtxt.txt

Hello World! 😊

And see how the hash shows… Is it because that hash is no longer used?

⤋ Read More

I was not suggesting that everyone needs to set up a working webfinger endpoint, but that we take the format of nick+(sub)domain as the base for generating the hash, together with the message date and content.

If we omit the protocol prefix from the way we do things now, will that not solve most of the problems? In the case of gemini://gemini.ctrl-c.club/~nristen/twtxt.txt they also have a working twtxt.txt at https://ctrl-c.club/~nristen/twtxt.txt … damn, I just noticed the gemini. subdomain.

Okay, what about defining a preferred protocol as part of the hash schema? So 1: https, 2: http, 3: gemini, 4: gopher?
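
Purely to illustrate (none of this is specced; the names and the first-12-hex-characters shortcut are made up), hashing over nick+domain plus such a protocol digit, date and content could look like:

# Hypothetical only: identity is nick@domain without a protocol prefix,
# plus a preferred-protocol digit (1: https, 2: http, 3: gemini, 4: gopher).
nick='nristen'; domain='ctrl-c.club'; proto=1
created='2024-09-19T07:40:00Z'
content='Hello twtxt!'
printf '%s@%s\n%s\n%s\n%s' "$nick" "$domain" "$proto" "$created" "$content" \
    | b2sum -l 256 | cut -c1-12   # keeping it simple: first 12 hex chars, not a real spec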

⤋ Read More

@prologic@twtxt.net Some criticisms and a possible alternative direction:

  1. Key rotation. I’m not a security person, but my understanding is that it’s good to be able to give keys an expiry date and replace them with new ones periodically.

  2. It makes maintaining a feed more complicated. Now instead of just needing to put a file on a web server (and scan the logs for user agents) I also need to do this. What brought me to twtxt was its radical simplicity.

Instead, maybe we should think about a way to allow old URLs to be rotated out? Like, my metadata could somehow say that X used to be my primary URL, but from date D onward my primary URL is Y. (Or, if you really want to use public key cryptography, maybe something similar could be used for key rotation there.)
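
In feed terms that could be as little as a couple of metadata comments; to be clear, these field names are hypothetical, not existing extensions:

# url      = https://new.example.org/twtxt.txt
# prev_url = https://old.example.net/twtxt.txt   (primary until date D)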

It’s nice that your scheme would add a way to verify the twts you download, but https is supposed to do that anyway. If you don’t trust https to do that (maybe you don’t like relying on root CAs?) then maybe your preferred solution should be reflected by your primary feed url. E.g. if you prefer the security offered by IPFS, then maybe an IPNS url would do the trick. The fact that feed locations are URLs gives some flexibility. (But then rotation is still an issue, if I understand ipns right.)

⤋ Read More
In-reply-to » There is a bug in yarnd that's been around for awhile and is still present in the current version I'm running that lets a person hit a constructed URL like

@prologic@twtxt.net What? I compiled, updated, and restarted. If you check what my pod reports, it gives that 7a… SHA. I don’t know what that other screenshot is showing but it seems to be out of date. That was the SHA I was running before this update.

⤋ Read More

An official FBI document dated January 2021, obtained by the American association “Property of People” through the Freedom of Information Act.

This document summarizes the possibilities for legal access to data from nine instant messaging services: iMessage, Line, Signal, Telegram, Threema, Viber, WeChat, WhatsApp and Wickr. For each piece of software, different judicial methods are explored, such as subpoena, search warrant, active collection of communications metadata (“Pen Register”) or the connection data retention law (“18 USC §2703”). Here, in essence, is the information the FBI says it can retrieve:

  • Apple iMessage: basic subscriber data; in the case of an iPhone user, investigators may be able to get their hands on message content if the user uses iCloud to synchronize iMessage messages or to back up data on their phone.

  • Line: account data (image, username, e-mail address, phone number, Line ID, creation date, usage data, etc.); if the user has not activated end-to-end encryption, investigators can retrieve the texts of exchanges over a seven-day period, but not other data (audio, video, images, location).

  • Signal: date and time of account creation and date of last connection.

  • Telegram: IP address and phone number for investigations into confirmed terrorists, otherwise nothing.

  • Threema: cryptographic fingerprint of phone number and e-mail address, push service tokens if used, public key, account creation date, last connection date.

  • Viber: account data and IP address used to create the account; investigators can also access message history (date, time, source, destination).

  • WeChat: basic data such as name, phone number, e-mail and IP address, but only for non-Chinese users.

  • WhatsApp: the targeted person’s basic data, address book and contacts who have the targeted person in their address book; it is possible to collect message metadata in real time (“Pen Register”); message content can be retrieved via iCloud backups.

  • Wickr: Date and time of account creation, types of terminal on which the application is installed, date of last connection, number of messages exchanged, external identifiers associated with the account (e-mail addresses, telephone numbers), avatar image, data linked to adding or deleting.

TL;DR Signal is the messaging system that provides the least information to investigators.

⤋ Read More
In-reply-to » @prologic hmm, dunno about the recency of that line of thought. I suspect though that given his (recent or not) history, if someone directly asked him "do you support rape" he would not say "no", he'd go on one of these rambling answers about property crime like he did in the video. Maybe I'm mind poisoned by being around academics my whole career, but that way of talking is how an academic gives you an answer they know will be unpopular. PhD = Piled Higher And Deeper, after all right? In other words, if he doesn't say "no" right away, he's saying "yes", except with so many words there's some uncertainty about whether he actually meant yes. And he damn well knows that, and that's why I give him no slack.

@prologic@twtxt.net

Let’s assume for a moment that an answer to a question would be met with so many words you don’t know what the answer was at all. Why? Why do this? Is this a stereotype of academics and philosophers? If so, it’s not a very straight-forward way of thinking, let alone answering a simple question.

Well, I can’t know what’s in these people’s minds and hearts. Personally I think it’s a way of dissembling, of sowing doubt, and of maintaining plausible deniability. The strategy is to persuade as many people as possible to change their minds, and then force the remaining people to accept the idea because they think too many other people believe it.

Let’s say you want, for whatever reason, to get a lot of people to accept an idea that you know most people find horrible. The last thing you should do is express the idea clearly and concisely and repeat it over and over again. All you’d accomplish is to cement people’s resistance to you, and label yourself as a person who harbors horrible ideas that they don’t like. So you can’t do that.

What do you do instead? The entire field of “rhetoric”, dating back at least to Plato and Aristotle (400 years BC), is all about this. How to persuade people to accept your idea, even when they resist it. There are way too many techniques to summarize in a twt, but it seems almost obvious that you have to use more words and to use misleading or at least embellished or warped descriptions of things, because that’s the opposite of clearly and concisely expressing yourself, which would directly lead to people rejecting your idea.

That’s how I think of it anyway.

⤋ Read More

@prologic@twtxt.net

#!/bin/sh

# Validate environment
if ! command -v msgbus > /dev/null; then
    printf "missing msgbus command. Use:  go install git.mills.io/prologic/msgbus/cmd/msgbus@latest"
    exit 1
fi

if ! command -v salty > /dev/null; then
    printf "missing salty command. Use:  go install go.mills.io/salty/cmd/salty@latest"
    exit 1
fi

if ! command -v salty-keygen > /dev/null; then
    printf "missing salty-keygen command. Use:  go install go.mills.io/salty/cmd/salty-keygen@latest"
    exit 1
fi

if [ -z "$SALTY_IDENTITY" ]; then
    export SALTY_IDENTITY="$HOME/.config/salty/$USER.key"
fi

# get_user: pull the "# user:" name out of the identity file, falling back to $USER.
get_user () {
    user=$(grep user: "$SALTY_IDENTITY" | awk '{print $3}')
    if [ -z "$user" ]; then
        user="$USER"
    fi
    echo "$user"
}

# stream: decrypt a single msgbus message (JSON with a base64 payload) read from stdin.
stream () {
    if [ -z "$SALTY_IDENTITY" ]; then
        echo "SALTY_IDENTITY not set"
        exit 2
    fi

    jq -r '.payload' | base64 -d | salty -i "$SALTY_IDENTITY" -d
}

# lookup: fetch the Salty well-known descriptor for nick@domain.
lookup () {
    if [ $# -lt 1 ]; then
        printf "Usage: %s nick@domain\n" "$(basename "$0")"
        exit 1
    fi

    user="$1"
    nick="$(echo "$user" | awk -F@ '{ print $1 }')"
    domain="$(echo "$user" | awk -F@ '{ print $2 }')"

    curl -qsSL "https://$domain/.well-known/salty/${nick}.json"
}

# readmsgs: subscribe to a msgbus topic and pipe each message back through this script for decryption.
readmsgs () {
    topic="$1"

    if [ -z "$topic" ]; then
        topic=$(get_user)
    fi

    export SALTY_IDENTITY="$HOME/.config/salty/$topic.key"
    if [ ! -f "$SALTY_IDENTITY" ]; then
        echo "identity file missing for user $topic" >&2
        exit 1
    fi

    msgbus sub "$topic" "$0"
}

# sendmsg: look up the recipient, encrypt the message with salty and publish it via msgbus.
sendmsg () {
    if [ $# -lt 2 ]; then
        printf "Usage: %s nick@domain.tld <message>\n" "$(basename "$0")"
        exit 0
    fi

    if [ -z "$SALTY_IDENTITY" ]; then
        echo "SALTY_IDENTITY not set"
        exit 2
    fi

    user="$1"
    message="$2"

    salty_json="$(mktemp /tmp/salty.XXXXXX)"

    lookup "$user" > "$salty_json"

    endpoint="$(jq -r '.endpoint' < "$salty_json")"
    topic="$(jq -r '.topic' < "$salty_json")"
    key="$(jq -r '.key' < "$salty_json")"

    rm "$salty_json"

    message="[$(date -u +%FT%TZ)] <$(get_user)> $message"   # -u: UTC, so the literal Z suffix is accurate

    echo "$message" \
        | salty -i "$SALTY_IDENTITY" -r "$key" \
        | msgbus -u "$endpoint" pub "$topic"
}

# make_user: generate a new identity key and print the well-known JSON to publish.
make_user () {
    mkdir -p "$HOME/.config/salty"

    if [ $# -lt 1 ]; then
        user=$USER
    else
        user=$1
    fi

    identity_file="$HOME/.config/salty/$user.key"

    if [ -f "$identity_file" ]; then
        printf "user key exists!"
        exit 1
    fi

    # Check for msgbus env.. probably can make it fallback to looking for a config file?
    if [ -z "$MSGBUS_URI" ]; then
        printf "missing MSGBUS_URI in environment"
        exit 1
    fi


    salty-keygen -o "$identity_file"
    echo "# user: $user" >> "$identity_file"

    pubkey=$(grep key: "$identity_file" | awk '{print $4}')

    cat <<- EOF
Create this file in your webserver well-known folder. https://hostname.tld/.well-known/salty/$user.json

{
  "endpoint": "$MSGBUS_URI",
  "topic": "$user",
  "key": "$pubkey"
}

EOF
}

# check if streaming
if [ ! -t 1 ]; then
    stream
    exit 0
fi

# Show Help
if [ $# -lt 1 ]; then
    printf "Commands: send read lookup"
    exit 0
fi


CMD=$1
shift

case $CMD in
    send)
        sendmsg "$@"
    ;;
    read)
        readmsgs "$@"
    ;;
    lookup)
        lookup "$@"
    ;;
    make-user)
        make_user "$@"
    ;;
esac
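
Assuming the script above is saved as salty-chat somewhere in $PATH and made executable, usage would look roughly like this:

salty-chat make-user alice              # generate a key and print the well-known JSON to publish
salty-chat lookup bob@example.com       # fetch bob's salty descriptor
salty-chat send bob@example.com "hi"    # encrypt and publish a message to bob's topic
salty-chat read                         # subscribe to your own topic and decrypt incoming messages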

⤋ Read More

Given that we don’t have a “home phone”, what’s the best way to create a “hunt group” for my partner’s and my cell phones? My first thought is Asterisk on a VPS, but my knowledge of such things is years out of date. Is there a better way?

⤋ Read More

@movq@www.uninformativ.de Perfect! Setting the display_filter did the trick. I had come across that SE answer yesterday while looking for answers, but I wanted to make sure there was nothing else I was missing. Thanks!

@quark@twtxt.netbros.com (#spngeda) Hmm, that’s mostly an issue of how mutt displays the Date header. The index should already display local time, only the pager shows the raw header: https://movq.de/v/8c92fff081/s.png To be honest, I’d like to keep it that way (i.e., Date stores the original stamp as it occurred in the twtxt feed). To convince mutt to show local time here, you’d probably have to use display_filter: https://unix.stackexchange.com/a/516101
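
For reference, the kind of script display_filter can point at; an untested sketch (the path is arbitrary) that rewrites the Date: header to local time and passes everything else through:

#!/bin/sh
# ~/.mutt/localdate.sh -- hypothetical display_filter:  set display_filter="~/.mutt/localdate.sh"
# Rewrites lines starting with "Date:" into local time (GNU date -d); naive, but enough as a sketch.
while IFS= read -r line; do
    case $line in
        Date:*) printf 'Date: %s\n' "$(date -d "${line#Date: }" '+%a, %d %b %Y %H:%M:%S %z')" ;;
        *)      printf '%s\n' "$line" ;;
    esac
done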

⤋ Read More

@prologic@twtxt.net yeah it reads a seed file. I’m using mine. it scans for any mention links and then scans them recursively. it reads from http/s or gopher. i don’t have much of a db yet.. it just writes the feed to disk and checks modified dates.. but I will add a db that has hashes/mentions/subjects and such.
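
Not the actual code, but the gist of that crawl loop sketched in shell (the seed URL and the feeds/ directory are placeholders):

# Fetch a feed, pull out @<nick url> mentions and recurse into feeds not seen yet.
mkdir -p feeds
crawl() {
    url=$1
    id=$(printf '%s' "$url" | sha1sum | cut -d' ' -f1)
    [ -f "feeds/$id.txt" ] && return                 # already fetched this run
    curl -sSL "$url" -o "feeds/$id.txt" || return    # curl also speaks gopher://
    grep -o '@<[^>]*>' "feeds/$id.txt" \
        | sed 's/^@<//; s/>$//' | awk '{print $NF}' \
        | while read -r next; do
              case $next in http://*|https://*|gopher://*) crawl "$next" ;; esac
          done
}
crawl 'https://example.com/twtxt.txt'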

⤋ Read More