Searching We.Love.Privacy.Club

Twts matching #design

Nintendo Won’t Shy Away From Continuing To ‘Try Anything’
An anonymous reader shares a report: Nintendo has always been a company willing to try just about anything. Cardboard cutout toys that mesh with games? Done. A console called the Wii with a remote-shaped controller? Massive success. Legendary game designer and Nintendo executive Shigeru Miyamoto offered more insight into how the company operates in a recent fina … ⌘ Read more


IncusOS Announced As Immutable Linux OS With ZFS For Running Containers
It has been two years already since the Linux Containers project forked Canonical’s LXD project as Incus. Now joining the Incus family is IncusOS as an immutable Linux OS built atop a Debian base with OpenZFS file-system support and designed around running containers with Incus… ⌘ Read more

In-reply-to » @bender Thanks for this illustration, it completely “misunderstood” everything I wrote and confidently spat out garbage. 👌

@prologic@twtxt.net Let’s go through it one by one. Here’s a wall of text that took me over 1.5 hours to write.

The criticism of AI as untrustworthy is a problem of misapplication, not capability.

This section says AI should not be treated as an authority. This is actually just what I said, except the AI phrased/framed it like it was a counter-argument.

The AI also said that users must develop “AI literacy”, again phrasing/framing it like a counter-argument. Well, that is also just what I said. I said you should treat AI output like a random blog and you should verify the sources, yadda yadda. That is “AI literacy”, isn’t it?

My text went one step further, though: I said that when you take this requirement of “AI literacy” into account, you basically end up with a fancy search engine, with extra overhead that costs time. The AI missed/ignored this in its reply.

Okay, so, the AI also said that you should use AI tools just for drafting and brainstorming. Granted, a very rough draft of something will probably be doable. But then you have to diligently verify every little detail of this draft – okay, fine, a draft is a draft, it’s fine if it contains errors. The thing is, though, that you really must do this verification. And I claim that many people will not do it, because AI outputs look sooooo convincing, they don’t feel like a draft that needs editing.

Can you, as an expert, still use an AI draft as a basis/foundation? Yeah, probably. But here’s the kicker: You did not create that draft. You were not involved in the “thought process” behind it. When you, a human being, make a draft, you often think something like: “Okay, I want to draw a picture of a landscape and there’s going to be a little house, but for now, I’ll just put in a rough sketch of the house and add the details later.” You are aware of what you left out. When the AI did the draft, you are not aware of what’s missing – even more so when every AI output already looks like a final product. For me, personally, this makes it much harder and slower to verify such a draft, and I mentioned this in my text.

Skill Erosion vs. Skill Evolution

You, @prologic@twtxt.net, also mentioned this in your car tyre example.

In my text, I gave two analogies: The gym analogy and the Google Translate analogy. Your car tyre example falls in the same category, but Gemini’s calculator example is different (and, again, gaslight-y, see below).

What I meant in my text: A person wants to be a programmer. To me, a programmer is a person who writes code, understands code, maintains code, writes documentation, and so on. In your example, a person who changes a car tyre would be a mechanic. Now, if you use AI to write the code and documentation for you, are you still a programmer? If you have no understanding of said code, are you a programmer? A person who does not know how to change a car tyre, is that still a mechanic?

No, you’re something else. You should not be hired as a programmer or a mechanic.

Yes, that is “skill evolution” – which is pretty much my point! But the AI framed it like a counter-argument. It didn’t understand my text.

(But what if that’s our future? What if all programming will look like that in some years? I claim: It’s not possible. If you don’t know how to program, then you don’t know how to read/understand code written by an AI. You are something else, but you’re not a programmer. It might be valid to be something else – but that wasn’t my point, my point was that you’re not a bloody programmer.)

Gemini’s calculator example is garbage, I think. Crunching numbers and doing mathematics (i.e., “complex problem-solving”) are two different things. Just because you now have a calculator doesn’t mean it’ll free you up to do mathematical proofs or whatever.

What would have worked is this: Let’s say you’re an accountant and you sum up spendings. Without a calculator, this takes a lot of time and is error-prone. But when you have one, you can work faster. But once again, there’s a little gaslight-y detail: A calculator is correct. Yes, it could have “bugs” (hello Intel FDIV), but its design actually properly calculates numbers. AI, on the other hand, does not understand a thing (our current AI, that is), it’s just a statistical model. So, this modified example (“accountant with a calculator”) would actually have to be phrased like this: Suppose there’s an accountant and you give her a magic box that spits out the correct result, what, I don’t know, 70-90% of the time. The accountant couldn’t rely on this box now, could she? She’d either have to double-check everything or accept possibly wrong results. And that is how I feel when I work with AI tools.

Gemini has no idea that its calculator example doesn’t make sense. It just spits out some generic “argument” that it picked up on some website.

3. The Technical and Legal Perspective (Scraping and Copyright)

The AI makes two points here. The first one, I might actually agree with (“bad bot behavior is not the fault of AI itself”).

The second point is, once again, gaslighting, because it is phrased/framed like a counter-argument. It implies that I said something which I didn’t. Like the AI, I said that you would have to adjust the copyright law! At the same time, the AI answer didn’t even question whether it’s okay to break the current law or not. It just said “lol yeah, change the laws”. (I wonder in what way the laws would have to be changed in the AI’s “opinion”, because some of these changes could kill some business opportunities – or the laws would have to have special AI clauses that only benefit the AI techbros. But I digress, that wasn’t part of Gemini’s answer.)

tl;dr

Except for one point, I don’t accept any of Gemini’s “criticism”. It didn’t pick up on lots of details, ignored arguments, and I can just instinctively tell that this thing does not understand anything it wrote (which is correct, it’s just a statistical model).

And it framed everything like a counter-argument, while actually repeating what I said. That’s gaslighting: When Alice says “the sky is blue” and Bob replies with “why do you say the sky is purple?!”

But it sure looks convincing, doesn’t it?

Never again

This took so much of my time. I won’t do this again. 😂


Thoughts/Opinions on Cap 🤔

The modern, open-source CAPTCHA

Lightweight, self-hosted, privacy-friendly, and designed to put you first. Switch from reCAPTCHA in minutes.

In-reply-to » For the innocent bystanders (because I know that I won’t change @bender’s opinion):

@movq@www.uninformativ.de Gemini liked your opinion very much. Here is how it countered:

1. The User Perspective (Untrustworthiness)

The criticism of AI as untrustworthy is a problem of misapplication, not capability.

  • AI as a Force Multiplier: AI should be treated as a high-speed drafting and brainstorming tool, not an authority. For experts, it offers an immense speed gain, shifting the work from slow manual creation to fast critical editing and verification.
  • The Rise of AI Literacy: Users must develop a new skill—AI literacy—to critically evaluate and verify AI’s probabilistic output. This skill, along with improving citation features in AI tools, mitigates the “gaslighting” effect.
2. The Moral/Political Perspective (Skill Erosion)

The fear of skill loss is based on a misunderstanding of how technology changes the nature of work; it’s skill evolution, not erosion.

  • Shifting Focus to High-Level Skills: Just as the calculator shifted focus from manual math to complex problem-solving, AI shifts the focus from writing boilerplate code to architectural design and prompt engineering. It handles repetitive tasks, freeing humans for creative and complex challenges.
  • Accessibility and Empowerment: AI serves as a powerful democratizing tool, offering personalized tutoring and automation to people who lack deep expertise. While dependency is a risk, this accessibility empowers a wider segment of the population previously limited by skill barriers.
3. The Technical and Legal Perspective (Scraping and Copyright)

The legal and technical flaws are issues of governance and ethical practice, not reasons to reject the core technology.

  • Need for Better Bot Governance: Destructive scraping is a failure of ethical web behavior and can be solved with better bot identification, rate limits, and protocols (like enhanced robots.txt). The solution is to demand digital citizenship from AI companies, not to stop AI development.


Benchmarking The AMD EPYC 9V64H: Azure HBv5’s Custom AMD CPU With HBM3
Nearly one year ago Microsoft announced the HBv5 virtual machines powered by a custom-designed AMD 4th Gen EPYC processor with high bandwidth memory (HBM3). Finally today the Azure HBv5 series is reaching general availability for those with memory-intensive HPC applications and other workloads. Microsoft kindly provided Phoronix with HBv5 access in advance to begin testing these new VMs with the AMD EPYC 9V64H CPUs featuring HBM memory, so … ⌘ Read more


Grrrrr…eat, one of my Bessey spring clamps broke. Ripped the arm right in half. I wouldn’t be surprised if it’s just designed in Germany but actually made out of Chinesium. :-( I will attempt to glue it back together with two-component adhesive tomorrow, but I don’t have high hopes.

In-reply-to » There are no really good GUI toolkits for Linux, are there?

@movq@www.uninformativ.de Yeah, give it a shot. At worst you know that you have to continue your quest. :-)

Fun fact, during a semester break I was actually a little bored, so I just started reading the Qt documentation. I didn’t plan on using Qt for anything, though. I only looked at the docs because they were on my bucket list for some reason. Qt was probably recommended to me and coming from KDE myself, that was motivation enough to look at the docs just for fun.

The more I read, the more hooked I got. The documentation was extremely well written, something I’ve never seen before. The structure was very well thought out and I got the impression that I understood what the people thought when they actually designed Qt.

A few days in I decided to actually give it a real try. Having never done anything in C++ before, I quickly realized that this endeavor wouldn’t succeed. I simply couldn’t get it going. But I found the Qt bindings for Python, so that was a new boost. And quickly after, I discovered that there were even KDE bindings for Python in my package manager, so I immediately switched to them as that integrated into my KDE desktop even nicer.

I used the Python KDE bindings for one larger project, a planning software for a summer camp that we used for several years. Its main feature was to see who was available to do an activity. In the past, that was done on a large sheet of paper, but people got assigned two activities at the same time or weren’t assigned at all. So, by showing people in yellow (free), green (one activity assigned) and red (overbooked), this sped up and improved the planning process.

Another core feature was to generate personalized time tables (just like back in school) and a dedicated view for the morning meeting on site.

It was extended over the years with all sorts of stuff. E.g., I then implemented a warning if all the custodians of an activity with kids were underage, to satisfy the new guidelines that there should be somebody of age.

Just before the pandemic I started to even add support for personalized live views on phones or tablets during the planning process (with web sockets, though). This way, people could see their own schedule or independently check on which day an activity takes place, etc. For these side quests, they didn’t have to check the large matrix on the projector. But the project died there.

Here’s a screenshot from one of the main views: https://lyse.isobeef.org/tmp/k3man.png

This Python+Qt rewrite replaced and improved the Java+Swing predecessor.

In-reply-to » @lyse LOLz! Way to destroy @prologic's newest playground! :-P

@lyse@lyse.isobeef.org To be fair, I’m not convinced of the web design / user interface decisions either. I just hacked this together over a couple of days. I’m not sold on any of the UI/UX thus far. Open to suggestions, improvements, hell even a complete CSS rewrite 🤣 Neither UI/UX nor CSS is my strong suit 😂

In-reply-to » @lyse

@movq@www.uninformativ.de Uh, that actually doesn’t look all that terrible. Somehow, I remember Swing GUIs being way uglier.

As for Visual Basic, I only had to use VBA once in my life. That was at the beginning of my career, when I inherited a project from a departing coworker. Fuck me, was that awful. The damn compiler error dialog box alone, popping up in my face all the time while editing, because the compiler was already trying to parse the unfinished and hence of course uncompilable code. Boy, that left a lasting impression on me. I ported everything to Java very quickly. Luckily, the code base wasn’t all that large at that point in time. I had to add a bunch of new features after that, so I was very glad that I convinced my workmate/project manager to do that first. We didn’t even need a GUI, the button in Excel was turned into a command line program that just generated the large file.

But I cannot comment on the VB GUI designer, I never used that. Your screenshot looks very similar to the Delphi one, though. Only towards the end of my Delphi days did I find out about the possibility to make the widgets snap to window edges and corners (I don’t remember what that was called), so that resizing the windows was actually possible without messing up their entire contents.

Switching to Linux, Delphi wasn’t an option anymore. For some reason I couldn’t use Kylix. Maybe it was already dead by the time I changed OSes. Or I couldn’t get it to run. I just don’t remember. I do recall that the unavailability of Delphi was the reason it took me a while to actually settle on Linux. I then fully switched to Java. The GridBagLayout was my absolute favorite Swing layout manager. I reckon I used it 98% of the time, because it was so powerful and made the windows resize properly, just as I had learned to do in Delphi shortly before.

Up until discovering Swing, I used Java’s AWT for a short amount of time. That was very limited, I think, and I hit its limits fairly quickly. Later at uni, we had one project making use of SWT. Didn’t convince me either. I could be wrong, but I think there was also an SWT GUI designer plugin for Eclipse. If there really was, it wasn’t in the same league as Delphi’s (there must be a reason I forgot about it ;-)).

In-reply-to » @lyse LOLz! Way to destroy @prologic's newest playground! :-P

@bender@twtxt.net Kaboom! Hahaha, I did not think of that at all, thanks for pointing it out, mate! :‘-D

But let me clarify just in case: I honestly do not want to bash this project. In fact, it’s a great little invention. It’s just that I’m not convinced by the current user interface decisions. Anyway, web design isn’t exactly up my alley. I just wanted to add some fun. And luckily, at least someone liked it so far. :-)

In-reply-to » There are no really good GUI toolkits for Linux, are there?

@movq@www.uninformativ.de Don’t you worry, this was meant as a joke. :-D

There was a time when I thought that Swing was actually really good. But having done some Qt/KDE later, I realized how much better that was. Those were the late KDE 3 and early KDE 4 days, though. Not sure how it is today. But back then it felt like Trolltech and the KDE folks put a hell of a lot more thought into their stuff. I was pleasantly surprised how natural it appeared and how all the bits played together. Sure, there were the odd rough edges, but the overall design was a lot better in my opinion.

To be fair, I never used it from C++, always the Python bindings, which were considerably more comfortable (the possibility alone to specify most attributes right away as kwargs in the constructor instead of calling tons of setters). And QtJambi, the Java binding, was also relatively nice. I never did a real project with the latter, though, just played around with it.

In-reply-to » And maybe I should go back to using GUI designers. Haven’t used those since the Visual Basic days. 🤔 It wasn’t pretty, but you got results very quickly and efficiently.

@movq@www.uninformativ.de The one for Delphi was quite good. But JCreator (I don’t remember exactly) was awful and I never went back to GUI designers. I’ve laid out the GUI by hand in code myself ever since. These days I don’t deal with GUI programming anymore.


And maybe I should go back to using GUI designers. Haven’t used those since the Visual Basic days. 🤔 It wasn’t pretty, but you got results very quickly and efficiently.

(When I switched to Linux, I quickly got stuck with GTK and that only had Glade, which wasn’t super great at the time, so I didn’t start using it … and then I never questioned that decision …)

https://movq.de/v/eaa24b109b/vb.png


@alexonit@twtxt.alessandrocutolo.it Yeah I think we’re overstating the UNIX principles a bit here 🤣 I get what you’re trying to say though @zvava@twtxt.net 😅 If I could go back in time and do it all over again, I would have gotten the Hash length correct and I would have used SHA-256 instead. But someone way smarter than me designed the Twt Hash spec, we adopted it and well here we are today, it works™ 😅

In-reply-to » @bender Really? 🤔

And I need to make something absolutely clear as well here. Twtxt was completely and utterly dead back in [Aug 2020](https://yarn.social/about.html) when I came across the spec and its simplicity and realised the lost opportunity. Since then we’ve continued to grow a small but thriving community. The extensions we’ve built over time have stood the test of time for the past ~5 years. We need not break things too badly, because what we have today, designed years ago, actually works quite well™ (despite some flaws).

In-reply-to » TNO Threading (draft):
Each origin feed numbers new threads (tno:N). Replies carry both (tno:N) and (ofeed:<origin-url>). Thread identity = (ofeed, tno).

Example:

Alice starts thread #42:

2025-09-25T12:00:00Z (tno:42) Launching storage design review.

Bob replies:

2025-09-25T12:05:00Z (tno:42) (ofeed:https://alice.example/twtxt.txt) I think compaction stalls under load.

Carol replies to Bob:

2025-09-25T12:08:00Z (tno:42) (ofeed:https://alice.example/twtxt.txt) Token bucket sounds good.
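
A rough sketch (entirely illustrative, not part of the draft; the function name and tag parsing below are my own) of how a client could recover the thread identity (ofeed, tno) from a twt’s text:

```go
// Sketch only: extract (ofeed, tno) from a twt's content per the TNO draft.
package main

import (
	"fmt"
	"regexp"
)

var (
	tnoRe   = regexp.MustCompile(`\(tno:(\d+)\)`)
	ofeedRe = regexp.MustCompile(`\(ofeed:([^)\s]+)\)`)
)

// threadID returns the thread identity (ofeed, tno). A thread-starting twt
// carries no (ofeed:...) tag, so its own feed URL acts as the origin.
func threadID(selfFeed, content string) (ofeed, tno string, ok bool) {
	m := tnoRe.FindStringSubmatch(content)
	if m == nil {
		return "", "", false // not part of a TNO thread
	}
	ofeed = selfFeed
	if o := ofeedRe.FindStringSubmatch(content); o != nil {
		ofeed = o[1]
	}
	return ofeed, m[1], true
}

func main() {
	feed, tno, ok := threadID("https://bob.example/twtxt.txt",
		"(tno:42) (ofeed:https://alice.example/twtxt.txt) I think compaction stalls under load.")
	fmt.Println(ok, feed, tno) // true https://alice.example/twtxt.txt 42
}
```

The nice property here is that Bob’s and Carol’s replies resolve to the same (ofeed, tno) pair as Alice’s original twt, so the thread identity stays stable no matter which feed a reply lives in.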

In-reply-to » (#6wo52rq) @lyse you will have to agree, though, that Yarn has contributed to make it possible to mass adopt (with its many glitches, bugs, and all) because, still, the web is king.

@bender@twtxt.net Yeah, the subject and multiline extensions are great and absolutely needed. If incorporated right from the beginning, though, they could have been designed even better. :-)


Saw this on Mastodon:

https://racingbunny.com/@mookie/114718466149264471

18 rules of Software Engineering

  1. You will regret complexity when on-call
  2. Stop falling in love with your own code
  3. Everything is a trade-off. There’s no “best”
  4. Every line of code you write is a liability
  5. Document your decisions and designs
  6. Everyone hates code they didn’t write
  7. Don’t use unnecessary dependencies
  8. Coding standards prevent arguments
  9. Write meaningful commit messages
  10. Don’t ever stop learning new things
  11. Code reviews spread knowledge
  12. Always build for maintainability
  13. Ask for help when you’re stuck
  14. Fix root causes, not symptoms
  15. Software is never completed
  16. Estimates are not promises
  17. Ship early, iterate often
  18. Keep. It. Simple.

Solid list, even though rule 15 (“Software is never completed”) is up for debate in my opinion: Software can be completed. You have a use case / problem, you solve that problem, done. Your software is completed now. There might still be bugs and they should be fixed – but this doesn’t “add” to the program. Don’t use “software is never done” as an excuse to keep adding and adding stuff to your code.


On QRs, as long as they work (and they are quite resilient), it doesn’t matter. Their design, and colours, will be based on the theme in which they are included. They are getting used more now in the US. They are king in East Asia. They are awesome.

In-reply-to » Also spent the morning continuing to think about a new design for EdgeGuard's WAF. I'm basically going to build an entirely new pluggable WAF that will be designed to only consider Rate Limiting, IP/ASN-based filtering, JavaScript challenge handling, Basic behavioral analysis and Anomaly detection.

One thing about my design here is that it would no longer incorporate “regex”-based rules like OWASP, mostly because my experience thus far has taught me that these rules are kind of overly sensitive, produce false positives, and I’m not sure they are really very effective. For example, what is the point of performing SQL injection detection at the Edge using a WAF if you already handle SQL properly in the first place? (Seriously, does anyone still construct SQL queries by hand with effectively printf?!)


Also spent the morning continuing to think about a new design for EdgeGuard’s WAF. I’m basically going to build an entirely new pluggable WAF that will be designed to only consider Rate Limiting, IP/ASN-based filtering, JavaScript challenge handling, Basic behavioral analysis and Anomaly detection.
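
Purely as a sketch of what “pluggable” could mean here (none of this is actual EdgeGuard code; the Check/Verdict names are placeholders I made up), each of those concerns would just be one check in a pipeline:

```go
// Sketch only: a minimal pluggable WAF check pipeline.
package waf

import "net/http"

// Verdict is what a single check reports back to the pipeline.
type Verdict int

const (
	Allow     Verdict = iota // request looks fine, move on to the next check
	Challenge                // ask the client to solve a challenge (JS / proof of work)
	Block                    // drop the request outright
)

// Check is one pluggable detection: rate limiting, IP/ASN filtering,
// behavioral analysis, anomaly detection, and so on.
type Check interface {
	Name() string
	Inspect(r *http.Request) Verdict
}

// Pipeline runs the configured checks in order and stops at the first
// non-Allow verdict.
type Pipeline struct {
	Checks []Check
}

func (p *Pipeline) Evaluate(r *http.Request) Verdict {
	for _, c := range p.Checks {
		if v := c.Inspect(r); v != Allow {
			return v
		}
	}
	return Allow
}
```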

The only part of this design I’m not 100% sure about is the Javascript-based challenge handling? 🤔 I’m also considering making this into a “proof of work” requirement too, but I also don’t want to falsely block folks that a) turn Javascript™ off or b) Use a browser like links, elinks or lynx for example.
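
For the “proof of work” angle, the usual hashcash-style scheme is simple enough to sketch (again, just an illustration under my own assumptions, not EdgeGuard code): the server hands out a random challenge and the client must find a nonce so that the hash of challenge+nonce starts with N zero bits, which costs the client some CPU but costs the server only one hash to verify.

```go
// Toy proof-of-work sketch: find a nonce so that SHA-256(challenge || nonce)
// has at least `difficulty` leading zero bits.
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/bits"
)

func leadingZeroBits(sum [32]byte) int {
	n := 0
	for _, b := range sum {
		if b == 0 {
			n += 8
			continue
		}
		n += bits.LeadingZeros8(b)
		break
	}
	return n
}

// solve brute-forces a nonce for the given difficulty.
func solve(challenge []byte, difficulty int) uint64 {
	buf := make([]byte, len(challenge)+8)
	copy(buf, challenge)
	for nonce := uint64(0); ; nonce++ {
		binary.BigEndian.PutUint64(buf[len(challenge):], nonce)
		if leadingZeroBits(sha256.Sum256(buf)) >= difficulty {
			return nonce
		}
	}
}

func main() {
	nonce := solve([]byte("demo-challenge"), 16) // ~65k hashes on average
	fmt.Println("nonce:", nonce)
}
```

The catch you already spotted remains, though: without JavaScript (links, elinks, lynx), the client has nothing to run the solver, so a text-browser fallback would still be needed.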

Hmmm 🧐


Inversion by Aric McBay was another random library pick. Like The Fall of Io, it’s the most recent in a series, though I think this series is pretty loosely connected. In contrast, the villain in this book is simple and cartoonishly evil. The book presents a design for utopia which was interesting but a little cloying. I’m not sure if I’m supposed to want to live there, but I don’t think I do. I enjoyed the book as easy reading, and might try the others in the series some time. (4/4)


@prologic@twtxt.net

There’s a simple reason all the current hashes end in a or q: the hash is 256 bits, the base32 encoding chops that into groups of 5 bits, and 256 isn’t divisible by 5. The last character of the base32 encoding just has that left-over single bit (256 mod 5 = 1).

So I agree with #3 below, but do you have a source for #1, #2 or #4? I would expect any lack of variability in any part of a hash function’s output would make it more vulnerable to attacks, so designers of hash functions would want to make the whole output vary as much as possible.

Other than the divisible-by-5 thing, my current intuition is it doesn’t matter what part you take.
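
For what it’s worth, here’s a quick Go sketch of the divisible-by-5 point (mine, and SHA-256 is just a stand-in for whichever 256-bit digest is actually used): no matter what you hash, the final base32 character only ever takes one of two values, because it encodes a single leftover bit.

```go
// Demonstrates that base32-encoding a 256-bit digest always ends in one of
// exactly two characters (256 mod 5 = 1, so the last symbol carries one bit).
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base32"
	"fmt"
	"strings"
)

func main() {
	enc := base32.StdEncoding.WithPadding(base32.NoPadding)
	seen := map[byte]int{}
	buf := make([]byte, 64)
	for i := 0; i < 10000; i++ {
		rand.Read(buf)
		digest := sha256.Sum256(buf) // stand-in for any 256-bit hash
		s := strings.ToLower(enc.EncodeToString(digest[:]))
		seen[s[len(s)-1]]++ // tally the final base32 character
	}
	fmt.Println(seen) // only two keys ever appear: 'a' (97) and 'q' (113)
}
```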

  1. Hash Structure: Hashes are typically designed so that their outputs have specific statistical properties. The first few characters often have more entropy or variability, meaning they are less likely to have patterns. The last characters may not maintain this randomness, especially if the encoding method has a tendency to produce less varied endings.

  2. Collision Resistance: When using hashes, the goal is to minimize the risk of collisions (different inputs producing the same output). By using the first few characters, you leverage the full distribution of the hash. The last characters may not distribute in the same way, potentially increasing the likelihood of collisions.

  3. Encoding Characteristics: Base32 encoding has a specific structure and padding that might influence the last characters more than the first. If the data being hashed is similar, the last characters may be more similar across different hashes.

  4. Use Cases: In many applications (like generating unique identifiers), the beginning of the hash is often the most informative and varied. Relying on the end might reduce the uniqueness of generated identifiers, especially if a prefix has a specific context or meaning.

In-reply-to » @movq Is there a good way to get jenny to do a one-off fetch of a feed, for when you want to fill in missing parts of a thread? I just added @slashdot to my private follow file just because @prologic keeps responding to the feed :-P and I want to know what he's commenting on even though I don't want to see every new slashdot twt.

@prologic@twtxt.net I believe you when you say registries as designed today do not crawl. But when I first read the spec, it conjured in my mind a search engine. Now I don’t know how things work out in practice, but just based on reading, I don’t see why it can’t be an API for a crawling search engine. (In fact I don’t see anything in the spec indicating registry servers shouldn’t crawl.)

(I also noticed that https://twtxt.readthedocs.io/en/latest/user/registry.html recommends “The registries should sync each others user list by using the users endpoint”. If I understood that right, registering with one should be enough to appear on others, even if they don’t crawl.)

Does yarnd provide an API for finding twts? Is it similar?


Fire-proof safes are generally designed so the internal temperature stays at or below ~350°F. Is there a computer medium I can write that’s likely to survive an extended stay around that temperature? Storage size doesn’t matter too much; a CD would be plenty (although an actual CD would presumably turn to soup).


@lyse@lyse.isobeef.org it’s a hierarchical key-value format. I designed it for the network peering tools I use. I can grant access to different parts of the tree to other users, kinda like directory permissions. A basic example of the format is:

@namespace
# multi
# line
# comment
root :value

# example space comment
@namespace.name space-tag 

# attribute comments
attribute attr-tag  :value for attribute

# attribute with multiple 
# lines of values
foo :bar
      :bin
      :baz

repeated :value1
repeated :value2

Each @ starts the definition of a namespace, kinda like [name] in INI format. It can have comments that show up before it. Then each attribute is key :value and can have its own # comment lines.
Values can be multi-line and also repeated.

The namespaces and values can also have little metadata tags added to them.
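
Just to check my understanding of the format, this is roughly the data model I picture (a sketch in Go; the type and field names are mine, not from your tooling):

```go
// Sketch only: one possible in-memory shape for the format described above.
package config

// Attribute is a `key :value` entry inside a namespace. Multi-line and
// repeated values simply accumulate in Values.
type Attribute struct {
	Key      string
	Tag      string   // optional metadata tag on the attribute
	Values   []string
	Comments []string // "#" comment lines preceding the attribute
}

// Namespace corresponds to an "@name" block, similar to "[name]" in INI.
type Namespace struct {
	Name       string // e.g. "namespace.name"
	Tag        string // optional space-tag
	Comments   []string
	Attributes []Attribute
}

// Tree is the whole document: an ordered list of namespaces, which also
// makes it natural to grant access to sub-trees, like directory permissions.
type Tree struct {
	Namespaces []Namespace
}
```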

The service can define webhooks/MQTT topics to be notified when the configs are updated. That way it can deploy the changes out as soon as they happen.


@prologic@twtxt.net I think those headsets were not particularly usable for things like web browsing because the resolution was too low, something like 1080p if I recall correctly. A very small screen at that resolution close to your eye is going to look grainy. You’d need 4k at least, I think, before you could realistically have text and stuff like that be zoomable and readable for low vision people. The hardware isn’t quite there yet, and the headsets that can do that kind of resolution are extremely expensive.

But yeah, even so I can imagine the metaverse wouldn’t be very helpful for low vision people as things stand today, even with higher resolution. I’ve played VR games and that was fine, but I’ve never tried to do work of any kind.

I guess where I’m coming from is that even though I’m low vision, I can work effectively on a modern OS because of the accessibility features. I also do a lot of crap like take pictures of things with my smartphone then zoom into the picture to see detail (like words on street signs) that my eyes can’t see normally. That feels very much like rudimentary augmented reality that an appropriately-designed headset could mostly automate. VR/AR/metaverse isn’t there yet, but it seems at least possible for the hardware and software to develop accessibility features that would make it workable for low vision people.

In-reply-to » Looks like Google's using this blog post of mine without my permission. I hate this kind of tech company crap so much.

There’s a link to the blog post, but they extracted a summary in hopes of keeping people in Google properties (something they’ve been called out on many times).

I was never contacted to ask if I was OK with Google extracting a summary of my blog post and sticking it on the web site. There is a very clear copyright designation at the bottom of each page, including that one. So, by putting their own brand over my text, they violated my copyright. Straightforward theft right there.

In-reply-to » On LinkedIn I see a lot of posts aimed at software developers along the lines of "If you're not using these AI tools (X,Y,Z) you're going to be left behind."

@prologic@twtxt.net yeah. I’d add “Big Data” to that hype list, and I’m sure there are a bunch more that I’m forgetting.

On the topic of a GPU cluster, the optimal design is going to depend a lot on what workloads you intend to run on it. The weakest link in these things is the data transfer rate, but that won’t matter too much for compute-heavy workloads. If your workloads are going to involve a lot of data, though, you’d be better off with a smaller number of high-VRAM cards than with a larger number of interconnected cards. I guess that’s hardware engineering 101 stuff, but still…


This is by design due to Google culture. The only way to get promoted into the higher pay scales is to ship a new product. So you have people shipping what worked before without regard to how it will exist within the product ecosystem. That’s also why products seem to die off so quickly after launch; see Allo and Duo, for example. The person who launches gets promoted to a higher level and off the original team, and so the product is left to wither and die.


I would HIGHLY recommend reading up on the Keybase architecture. They designed a device key system for real-time chat that is e2e secure. https://book.keybase.io/security

A property of EC keys is that you can derive new keys which can be determined to be “on curve.” Bitcoin has some BIPs that derive single-use keys for every transaction connected to a wallet, and they can be derived as either public or private chains. https://qvault.io/security/bip-32-watch-only-wallets/
