Searching We.Love.Privacy.Club

Twts matching #reading:

Recent #fiction #scifi #reading:

  • The Memory Police by Yōko Ogawa. Lovely writing. Very understated; reminded me of Kazuo Ishiguro. Sort of like Nineteen Eighty-Four but not. (I first heard it recommended in comparison to that work.)

  • Subcutanean by Aaron Reed; https://subcutanean.textories.com/ . Every copy of the book is different, which is a cool idea. I read two of them (one from the library, actually not different from the other printed copies, and one personalized e-book). I don’t read much horror so managed to be a little creeped out by it, which was fun.

  • The Wind from Nowhere, a 1962 novel by J. G. Ballard. A random pick from the sci-fi section; I think I picked it up because it made me imagine some weird 4-dimensional effect ("from nowhere" meaning not in a normal direction) but actually (spoiler) it was just about a lot of wind for no reason. The book was moderately entertaining but there was nothing special about it.

Currently reading Scale by Greg Egan and Inversion by Aric McBay.

⤋ Read More

@prologic@twtxt.net Thanks for writing that up!

I hope it can remain a living document (or sequence of draft revisions) for a good long time while we figure out how this stuff works in practice.

I am not sure how I feel about all this being done at once, vs. letting conventions arise.

For example, even today I could reply to twt abc1234 with "(#abc1234) Edit: …" and I think all you humans would understand it as an edit to (#abc1234). Maybe eventually it would become a common enough convention that clients would start to support it explicitly.

Similarly we could just start using 11-digit hashes. We should iron out whether it's sha256 or whatever, but there's no need to get all the other stuff right at the same time.

I have similar thoughts about how some users could try out location-based replies in a backward-compatible way (append the replyto: stuff after the legacy (#hash) style).
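
For example, the text of such a twt might look like this (hash, URL and timestamp made up):

(#abc1234) (replyto:https://example.com/twtxt.txt,2024-09-15T12:06:27Z) Sounds good to me!

Old clients would keep threading on the (#abc1234) part and could simply ignore the replyto part.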

However I recognize that I’m not the one implementing this stuff, and it’s less work to just have everything determined up front.

Misc comments (I haven’t read the whole thing):

  • Did you mean to make hashes hexadecimal? You lose 11 bits that way compared to base32. I'd suggest gaining 11 bits with base64 instead. (Quick sketch of the character/bit math after this list.)

  • "Clients MUST preserve the original hash" — do you mean they MUST preserve the original twt?

  • Thanks for phrasing the bit about deletions so neutrally.

  • I don’t like the MUST in "Clients MUST follow the chain of reply-to references…". If someone writes a client as a 40-line shell script that requires the user to piece together the threading themselves, IMO we shouldn’t declare the client non-conforming just because they didn’t get to all the bells and whistles.

  • Similarly I don’t like the MUST for user agents. For one thing, you might want to fetch a feed without revealing your identity. Also, it raises the bar for a minimal implementation (I’m thinking again of the 40-line shell script).

  • For ā€œwho followsā€ lists: why must the long, random tokens be only valid for a limited time? Do you have a scenario in mind where they could leak?

  • Why can’t feeds be served over HTTP/1.0? Again, thinking about simple software. I recently tried implementing HTTP/1.1 and it wasn’t too bad, but 1.0 would have been slightly simpler.

  • Why get into the nitty-gritty about caching headers? This seems like generic advice for HTTP servers and clients.

  • I’m a little sad about other protocols being not recommended.

  • I don’t know how I feel about including markdown. I don’t mind too much that yarn users emit twts full of markdown, but I’m more of a plain text kind of person. Also it adds to the length. I wonder if putting it in a separate document would make more sense; that would also help with the length.
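
Re the hexadecimal point above, a quick sketch of the character/bit math (hex carries 4 bits per character, base32 carries 5, base64 carries 6, so 11 characters hold 44 vs 55 vs 66 bits); sha256 is just a stand-in here since the hash function is still being ironed out:

digest() { printf '%s' "$1" | sha256sum | awk '{ print $1 }'; }

twt="https://example.com/twtxt.txt 2024-09-15T12:06:27Z hello"   # stand-in payload
digest "$twt" | cut -c1-11                          # 11 hex chars    = 44 bits
digest "$twt" | xxd -r -p | base32 | cut -c1-11     # 11 base32 chars = 55 bits
digest "$twt" | xxd -r -p | base64 | cut -c1-11     # 11 base64 chars = 66 bits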

⤋ Read More

Could someone knowledgeable reply with the steps a grandpa will take to calculate the hash of a twt from the CLI, using out-of-the-box tools? I swear I read about it somewhere, but can’t find it.
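
For what it’s worth, I think it goes roughly like this (assuming the current Twt Hash extension: blake2b-256 over the feed URL, timestamp and twt text joined by newlines, base32-encoded, lowercased, padding stripped, last 7 characters kept; URL, timestamp and text below are placeholders):

url="https://example.com/twtxt.txt"   # the feed's advertised URL (placeholder)
ts="2020-07-18T12:39:52Z"             # the twt's timestamp as written in the feed (placeholder)
twt="Hello World!"                    # the twt text (placeholder)

printf '%s\n%s\n%s' "$url" "$ts" "$twt" \
    | b2sum -l 256 \
    | awk '{ print $1 }' \
    | xxd -r -p \
    | base32 \
    | tr -d '=\n' \
    | tr 'A-Z' 'a-z' \
    | tail -c 7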

⤋ Read More

More:

Subject: The [tag URI scheme](https://en.wikipedia.org/wiki/Tag_URI_scheme) looks interesting. I like that it human read- and writable. And since we already got the timestamp in the twtxt.txt it would be
        somewhat trivial to parse. But there are still the issue with what the name/id should be... Maybe it doesn't have to bee that stick? Instead of using `tag:` as the prefix/protocol, it would more it clear
        what we are talking about by using `in-reply-to:` (https://indieweb.org/in-reply-to) or `replyto:` similar to `mailto:` 1. `(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)' 2.
        `(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)' 2. `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)' I know it's longer that 7-11 characters, but it's self-explaining when looking at the
        twtxt.txt in the raw, and the cases above can all be caught with this regex: `\([\w-]*reply[\w-]*\:` Is this something that would work?
Subject: The [tag URI scheme](https://en.wikipedia.org/wiki/Tag_URI_scheme) looks interesting. I like that it human read- and writable. And since we already got the timestamp in the twtxt.txt it would be
        somewhat trivial to parse. But there are still the issue with what the name/id should be... Maybe it doesn't have to bee that stick? Instead of using `tag:` as the prefix/protocol, it would more it clear
        what we are talking about by using `in-reply-to:` (https://indieweb.org/in-reply-to) or `replyto:` similar to `mailto:` 1. `(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)` 2.
        `(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)` 3. `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)` I know it's longer that 7-11 characters, but it's self-explaining when looking at the
        twtxt.txt in the raw, and the cases above can all be caught with this regex: `\([\w-]*reply[\w-]*\:` Is this something that would work?

Notice the difference? Soren edited, and broke everything.

⤋ Read More
In-reply-to » The tag URI scheme looks interesting. I like that it human read- and writable. And since we already got the timestamp in the twtxt.txt it would be somewhat trivial to parse. But there are still the issue with what the name/id should be... Maybe it doesn't have to bee that stick?

@mckinley@twtxt.net Thanks for the feedback.

  1. Yeah, I agree that the nick should not be part of the syntax. Any valid URL to a twtxt.txt file should be enough and is more clear, so it is not confused with an email (one of the issues with webfinger and fediverse handles).
  2. I think any valid URL would work, since we are not bound to look for exact matches. Accepting http and https as well as gemini and gopher could all work, as long as the path to the twtxt.txt is the same.
  3. My idea is that you quote the timestamp as it is in the original twtxt.txt that you are referring to, so you can do it by simply copy/pasting. Also, what are the chances that the same human will make two different posts within the same second?!

Regarding the whole cryptographic keys for identity, to me it seems like an unnecessary layer of complexity. If you move to a new house or city you tell people that you moved - you can do the same in a twtxt.txt. Just post something like "I moved to this new URL, please follow me there!" I did that with my feeds at least twice, and you guys still seem to read my posts :)

⤋ Read More
In-reply-to » (#2qn6iaa) @prologic Some criticisms and a possible alternative direction:

The tag URI scheme looks interesting. I like that it is human read- and writable. And since we already got the timestamp in the twtxt.txt it would be somewhat trivial to parse. But there is still the issue of what the name/id should be… Maybe it doesn’t have to be that strict?

Instead of using tag: as the prefix/protocol, it would make it more clear what we are talking about to use in-reply-to: (https://indieweb.org/in-reply-to) or replyto:, similar to mailto:

  1. (reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)
  2. (in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)
  3. (replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)

I know it’s longer than 7-11 characters, but it’s self-explanatory when looking at the twtxt.txt in the raw, and the cases above can all be caught with this regex: \([\w-]*reply[\w-]*\:

Is this something that would work?

⤋ Read More
In-reply-to » @prologic earlier you suggested extending hashes to 11 characters, but here's an argument that they should be even longer than that.

@prologic@twtxt.net Brute force. I just hashed a bunch of versions of both tweets until I found a collision.

I mostly just wanted an excuse to write the program. I don’t know how I feel about actually using super-long hashes; they could make the twts annoying to read if you prefer to view them untransformed.

⤋ Read More
In-reply-to » Interesting.. QUIC isn't very quick over fast internet.

@prologic@twtxt.net

They’re in Section 6:

  • Receiver should adopt UDP GRO. (Something about saving CPU when processing UDP packets; I’m a bit fuzzy on it.) And they have suggestions for making GRO more useful for QUIC.

  • Some other receiver-side suggestions: "sending delayed QUIC ACKs"; "using recvmsg to read multiple UDP packets in a single system call".

  • Use multiple threads when receiving large files.

⤋ Read More
In-reply-to » @movq Is there a good way to get jenny to do a one-off fetch of a feed, for when you want to fill in missing parts of a thread? I just added @slashdot to my private follow file just because @prologic keeps responding to the feed :-P and I want to know what he's commenting on even though I don't want to see every new slashdot twt.

@prologic@twtxt.net I believe you when you say registries as designed today do not crawl. But when I first read the spec, it conjured in my mind a search engine. Now I don’t know how things work out in practice, but just based on reading, I don’t see why it can’t be an API for a crawling search engine. (In fact I don’t see anything in the spec indicating registry servers shouldn’t crawl.)

(I also noticed that https://twtxt.readthedocs.io/en/latest/user/registry.html recommends "The registries should sync each others user list by using the users endpoint". If I understood that right, registering with one should be enough to appear on others, even if they don’t crawl.)

Does yarnd provide an API for finding twts? Is it similar?

⤋ Read More
In-reply-to » @movq Is there a good way to get jenny to do a one-off fetch of a feed, for when you want to fill in missing parts of a thread? I just added @slashdot to my private follow file just because @prologic keeps responding to the feed :-P and I want to know what he's commenting on even though I don't want to see every new slashdot twt.

@movq@www.uninformativ.de, that would be a nice addition. :-) I would also love the ability to hide/not show the hash when reading twts (after all, that’s in the header of each "email"). Could that be added as a user-configurable toggle?

⤋ Read More
In-reply-to » (#gxolr6a) > I think @abucci and @stigatle are running snac? I didn’t have a closer look at snac (no intention of running it), but if that is a relatively small daemon (maybe comparable to Yarn?) that gives you access to the whole world of ActivityPub, then, well, yeah … That’s tough to beat.

@bender@twtxt.net I have nothing against GoToSocial, but:

GoToSocial stores statuses, accounts, etc, in a database. This can be either SQLite or Postgres.

snac is simpler. Some JSON files and that’s it. I can read them with jq and less. I can use tar to back them up. I can hand edit them in a text editor.
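
For example (the paths here are hypothetical, I don’t know snac’s exact on-disk layout off-hand):

jq . ~/snac-data/example.com/alice/*.json | less   # poke around in the stored objects
tar czf snac-backup.tar.gz ~/snac-data             # back the whole thing up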

⤋ Read More

@bender@twtxt.net ha! Here goes his "poem":

A string of letters, a forgotten name,
An email crafted, a message to claim.
We hit send with a click, a hopeful sigh,
But a bounce-back arrives, a tear in our eye.

"Delivery failed," the message reads cold,
The address it seems, is a story untold.
A ghost in the system, a memory’s trace,
Lost in the void of cyberspace.

:-D

⤋ Read More
In-reply-to » Also made a webfinger lookup resolver that works with my own webfinger endpoint as well as yarnd servers: http://darch.dk/wf-lookup.php

Thanks @prologic@twtxt.net, I also just managed to get my own version of webmentions working. Please have a read of Webmentions vs. Custom Mentions Spec for Twtxt/Yarn - HedgeDoc and User Lookup for Twtxt/Yarn - Webfinger or Decentralized Identifiers (DIDs) - HedgeDoc for how it sorta works.

⤋ Read More
In-reply-to » (#fytbg6a) What about using the blockquote format with > ?

@sorenpeter@darch.dk this makes sense as a quote twt that references a direct URL. If we go back to how it developed on Twitter, originally it was RT @nick: original text; because it contained the original text, the Twitter algorithm would boost that text into trending.

I like the format (#hash) @<nick url> > "Quoted text"\nThen a comment, as it preserves human readability and has the hash for linking to the yarn. The comment part could be optional for just boosting the twt.

The only issue I think I would have is that the yarn could then become a mess of repeated quotes, unless the client knows to interpret them as multiple users having reposted/boosted the thread.

The format is also how iPhone does reactions to SMS messages, with +number liked: original SMS.
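
So a quote-boost in that style might look something like this (hash, nick and URL made up):

(#abc1234) @<eve https://example.com/twtxt.txt> > "the original quoted text"\nTotally agree with this!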

⤋ Read More

@lyse@lyse.isobeef.org I have read the white papers for MLS before. I have put a lot of thought into how to do it with salty/ratchet. It’s a very good tech for ensuring multiple devices can be joined to an encrypted chat, but it is bloody complicated to implement.

⤋ Read More
In-reply-to » So.. Of y'all that had covid. Did you have at the end a night where for no reason your brain amped up to 11 and can't sleep at all? It happened to me last night and my FIL the night before.

@movq@www.uninformativ.de I lasted for a long time… Not sure where or when it was "got". We had been having a cold go around with the kiddos for about a week when the wife started getting sicker than normal. Did a test and she was positive. We tested the rest of the fam and got nothing, till about 2 days later when myself and the others were positive too. It largely hasn’t been too bad: a little fever and stuffy noses.

But whatever it was that hit a few days ago was horrible. Like whatever switch in my head that goes to sleep mode was shut off. I would lay down and even though I felt sleepy, I couldn’t actually go to sleep. The anxiety hit soon after and I was just awake with no relief. And it persisted that way for three nights. I got some meds from the clinic that seemed to finally get me to sleep.

Now the morning after I realized for all that time a part of me was missing. I would close my eyes and it would just go dark. No imagination, no pictures, nothing. Normally I can visualize things as I read or think about stuff.. But for the last few days it was just nothing. The waking up to it was quite shocking.

Though it’s just the first night… I guess I’ll have to see if it persists. šŸ¤ž

⤋ Read More

Found another example of Google stealing something I’ve written and putting it in a "featured snippet".

What’s super annoying about this one is that the source is a course page at Tufts University, not the official page of the publication they’re taking this text from. I know the professor who taught that course and I’ve guest lectured for them before on this topic. They put this publication in their course readings, and I guess that’s where Google picked it up.

Image

⤋ Read More

I’ve only been using snac/the fediverse for a few days and already I’ve had to mute somebody. I know I come on strongly with my opinions sometimes and some people don’t like that, but this person had already started going ad hominem (in my reading of it), and was using what felt to me like sketchy tactics to distract from the point I was trying to make and to shut down conversation. They were doing similar things to other people in the thread so rather than wait for it to get bad for me I just muted them. People get so weirdly defensive so fast when you disagree with something they said online. Not sure I fully understand that.

⤋ Read More
In-reply-to » (#l4nwadq) @prologic omg yes! They are both ultra-right-wing assholes! The worst of the worst! Please tell me you don't listen to these guys' brain poison?

@prologic@twtxt.net Maybe so, but that’s not because of the people who are objecting to Jordan Peterson, that’s for sure. You really need to read the articles I’ve posted before going there. Really.

⤋ Read More
In-reply-to » (#l4nwadq) @prologic omg yes! They are both ultra-right-wing assholes! The worst of the worst! Please tell me you don't listen to these guys' brain poison?

@prologic@twtxt.net

Taking Jordan Peterson as an example, the only thing he "preaches" (if you want to call it that) is to be honest with yourself and to take responsibility.

This is simply untrue. Read the articles I posted, seriously.

In a tweet in one of the articles I posted, Peterson states there is no white supremacy in Canada. This is blatantly false. It is disinformation. Peterson has made statements that rape is OK (he uses "fancy" language like "women should be naturally converted into mothers" but unpack that a bit: what he means is legalized rape followed by forced conception). He is openly anti-LGBTQ and refuses to use people’s preferred pronouns. He seems to believe that women who wear makeup at work are asking to be sexually harassed.

He’s using his platform in academia to pretend that straight, white men are somehow the most aggrieved group in the world and everyone else is just whining and can get fucked. The patron saint of Men’s Rights Activists and incels. I find him odious.

⤋ Read More

I’ve seen BlueSky referred to as BS (as in Blue Sky, but you know…), which seems apt.

CEO is a cryptocurrency fool, as is Jack Dorsey, so I don’t expect much from it. Then again I’m old and refuse to join any new hotness so take my curmudgeonly opinions with a grain of salt.

I read somewhere or another that the "decentralization" is only going to be there so that they can push content moderation onto users. They will happily welcome Nazis and fascists, leaving it up to end users to block those instances.

I wonder how they plan to handle the 4chan-level stuff, since that will surely come.

⤋ Read More
In-reply-to » Progress! so i have moved into working on aggregates. Which are a grouping of events that replayed on an object set the current state of the object. I came up with this little bit of generic wonder.

(cont.)

Just to give some context on some of the components around the code structure: I wrote this up around an earlier version of the aggregate code. This generic bit simplifies things by removing the need for the CRUD functions for each aggregate.

Domain Objects

A domain object can be used as an aggregate by adding the event.AggregateRoot struct and finish implementing event.Aggregate. The AggregateRoot implements logic for adding events after they are either Raised by a command or Appended by the eventstore Load or service ApplyFn methods. It also tracks the uncommitted events that are saved using the eventstore Save method.

type User struct {
  Identity string `json:"identity"`

  CreatedAt time.Time

  event.AggregateRoot
}

// StreamID for the aggregate when stored or loaded from ES.
func (a *User) StreamID() string {
	return "user-" + a.Identity
}
// ApplyEvent to the aggregate state.
func (a *User) ApplyEvent(lis ...event.Event) {
	for _, e := range lis {
		switch e := e.(type) {
		case *UserCreated:
			a.Identity = e.Identity
			a.CreatedAt = e.EventMeta().CreatedDate
        /* ... */
		}
	}
}
Events

Events are applied to the aggregate. They are defined by adding an event.Meta field and implementing the getter/setters for event.Event.

type UserCreated struct {
	eventMeta event.Meta

	Identity string
}

func (c *UserCreated) EventMeta() (m event.Meta) {
	if c != nil {
		m = c.eventMeta
	}
	return m
}
func (c *UserCreated) SetEventMeta(m event.Meta) {
	if c != nil {
		c.eventMeta = m
	}
}
Reading Events from EventStore

With a domain object that implements the event.Aggregate the event store client can load events and apply them using the Load(ctx, agg) method.

// GetUser populates a user from the event store.
func (rw *User) GetUser(ctx context.Context, userID string) (*domain.User, error) {
	user := &domain.User{Identity: userID}

	err := rw.es.Load(ctx, user)
	if err != nil {
		if errors.Is(err, eventstore.ErrStreamNotFound) {
			return user, ErrNotFound
		}
		return nil, err
	}
	return user, nil
}
OnX Commands

An OnX command validates that the command can be applied to the current state of the domain object. If it can, it raises the event using event.Raise(); otherwise it returns an error.

// OnCreate raises an UserCreated event to create the user.
// Note: The handler will check that the user does not already exist.
func (a *User) OnCreate(identity string) error {
    event.Raise(a, &UserCreated{Identity: identity})
    return nil
}

// OnScored will attempt to score a task.
// If the task is not in a Created state it will fail.
func (a *Task) OnScored(taskID string, score int64, attributes Attributes) error {
	if a.State != TaskStateCreated {
		return fmt.Errorf("task expected created, got %s", a.State)
	}
	event.Raise(a, &TaskScored{TaskID: taskID, Attributes: attributes, Score: score})
	return nil
}
CRUD Operations for OnX Commands

The following functions in the aggregate service can be used to perform creation and updating of aggregates. The Update function ensures the aggregate exists, whereas Create is intended for non-existent aggregates. These could probably be combined into one function.

// Create is used when the stream does not yet exist.
func (rw *User) Create(
  ctx context.Context,
  identity string,
  fn func(*domain.User) error,
) (*domain.User, error) {
	session, err := rw.GetUser(ctx, identity)
	if err != nil && !errors.Is(err, ErrNotFound) {
		return nil, err
	}

	if err = fn(session); err != nil {
		return nil, err
	}

	_, err = rw.es.Save(ctx, session)

	return session, err
}

// Update is used when the stream already exists.
func (rw *User) Update(
  ctx context.Context,
  identity string,
  fn func(*domain.User) error,
) (*domain.User, error) {
	session, err := rw.GetUser(ctx, identity)
	if err != nil {
		return nil, err
	}

	if err = fn(session); err != nil {
		return nil, err
	}

	_, err = rw.es.Save(ctx, session)
	return session, err
}

⤋ Read More

Hi, I am playing with making an event sourcing database. It’s super alpha, but I thought I would share since others are talking about databases and such.

It’s super basic. Using tidwall/wal as the disk backing. The first use case I am playing with is an implementation of msgbus. I can post events to it and read them back in reverse order.

I plan to expand it to handle other event sourcing type things like aggregates and projections.

Find it here: sour-is/ev

@prologic@twtxt.net @movq@www.uninformativ.de @lyse@lyse.isobeef.org

⤋ Read More

@prologic@twtxt.net

#!/bin/sh

# Validate environment
if ! command -v msgbus > /dev/null; then
    printf "missing msgbus command. Use:  go install git.mills.io/prologic/msgbus/cmd/msgbus@latest"
    exit 1
fi

if ! command -v salty > /dev/null; then
    printf "missing salty command. Use:  go install go.mills.io/salty/cmd/salty@latest"
    exit 1
fi

if ! command -v salty-keygen > /dev/null; then
    printf "missing salty-keygen command. Use:  go install go.mills.io/salty/cmd/salty-keygen@latest"
    exit 1
fi

if [ -z "$SALTY_IDENTITY" ]; then
    export SALTY_IDENTITY="$HOME/.config/salty/$USER.key"
fi

get_user () {
    user=$(grep user: "$SALTY_IDENTITY" | awk '{print $3}')
    if [ -z "$user" ]; then
        user="$USER"
    fi
    echo "$user"
}

stream () {
    if [ -z "$SALTY_IDENTITY" ]; then
        echo "SALTY_IDENTITY not set"
        exit 2
    fi

    jq -r '.payload' | base64 -d | salty -i "$SALTY_IDENTITY" -d
}

lookup () {
    if [ $# -lt 1 ]; then
    printf "Usage: %s nick@domain\n" "$(basename "$0")"
    exit 1
    fi

    user="$1"
    nick="$(echo "$user" | awk -F@ '{ print $1 }')"
    domain="$(echo "$user" | awk -F@ '{ print $2 }')"

    curl -qsSL "https://$domain/.well-known/salty/${nick}.json"
}

readmsgs () {
    topic="$1"

    if [ -z "$topic" ]; then
        topic=$(get_user)
    fi

    export SALTY_IDENTITY="$HOME/.config/salty/$topic.key"
    if [ ! -f "$SALTY_IDENTITY" ]; then
        echo "identity file missing for user $topic" >&2
        exit 1
    fi

    msgbus sub "$topic" "$0"
}

sendmsg () {
    if [ $# -lt 2 ]; then
        printf "Usage: %s nick@domain.tld <message>\n" "$(basename "$0")"
        exit 0
    fi

    if [ -z "$SALTY_IDENTITY" ]; then
        echo "SALTY_IDENTITY not set"
        exit 2
    fi

    user="$1"
    message="$2"

    salty_json="$(mktemp /tmp/salty.XXXXXX)"

    lookup "$user" > "$salty_json"

    endpoint="$(jq -r '.endpoint' < "$salty_json")"
    topic="$(jq -r '.topic' < "$salty_json")"
    key="$(jq -r '.key' < "$salty_json")"

    rm "$salty_json"

    message="[$(date +%FT%TZ)] <$(get_user)> $message"

    echo "$message" \
        | salty -i "$SALTY_IDENTITY" -r "$key" \
        | msgbus -u "$endpoint" pub "$topic"
}

make_user () {
    mkdir -p "$HOME/.config/salty"

    if [ $# -lt 1 ]; then
        user=$USER
    else
        user=$1
    fi

    identity_file="$HOME/.config/salty/$user.key"

    if [ -f "$identity_file" ]; then
        printf "user key exists!"
        exit 1
    fi

    # Check for msgbus env.. probably can make it fallback to looking for a config file?
    if [ -z "$MSGBUS_URI" ]; then
        printf "missing MSGBUS_URI in environment"
        exit 1
    fi


    salty-keygen -o "$identity_file"
    echo "# user: $user" >> "$identity_file"

    pubkey=$(grep key: "$identity_file" | awk '{print $4}')

    cat <<- EOF
Create this file in your webserver well-known folder. https://hostname.tld/.well-known/salty/$user.json

{
  "endpoint": "$MSGBUS_URI",
  "topic": "$user",
  "key": "$pubkey"
}

EOF
}

# check if streaming
if [ ! -t 1 ]; then
    stream
    exit 0
fi

# Show Help
if [ $# -lt 1 ]; then
    printf "Commands: send read lookup"
    exit 0
fi


CMD=$1
shift

case $CMD in
    send)
        sendmsg "$@"
    ;;
    read)
        readmsgs "$@"
    ;;
    lookup)
        lookup "$@"
    ;;
    make-user)
        make_user "$@"
    ;;
esac
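
For reference, usage would look roughly like this (assuming the script is saved as salty-chat, made executable, and MSGBUS_URI is set before make-user):

./salty-chat make-user alice               # generate a key and print the .well-known JSON to publish
./salty-chat lookup bob@example.com        # fetch bob's salty descriptor
./salty-chat send bob@example.com "hi bob" # encrypt and publish a message to bob's endpoint/topic
./salty-chat read                          # subscribe to your own topic and decrypt incoming messages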

⤋ Read More

I would HIGHLY recommend reading up on the Keybase architecture. They designed a device key system for real-time chat that is e2e secure. https://book.keybase.io/security

A property of EC keys is deriving new keys that can be determined to be "on curve." Bitcoin has some BIPs that derive single-use keys for every transaction connected to a wallet, and they can be derived as either public or private chains. https://qvault.io/security/bip-32-watch-only-wallets/

⤋ Read More

@movq@www.uninformativ.de I believe the inability to delete an arbitrary twt was a tech limitation: the retwt parser doesn’t know where in the file a twt came from. lextwt tracks the byte range in the file where a twt was read from, which could be used to delete a twt from the file… in theory.
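
Something like this could work once you know the offsets (a rough sketch; START is the byte offset where the twt begins and LEN its length in bytes, both hypothetical numbers here):

START=1042   # byte offset of the twt in twtxt.txt (hypothetical)
LEN=96       # length of the twt, including its trailing newline (hypothetical)

head -c "$START" twtxt.txt                 >  twtxt.new   # keep everything before the twt
tail -c +"$((START + LEN + 1))" twtxt.txt  >> twtxt.new   # keep everything after it
mv twtxt.new twtxt.txt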

⤋ Read More