@stigatle@yarn.stigatle.no Thanks! Sooo cold 🥶
@stigatle@yarn.stigatle.no no problems 👌 one problem solved at least 🤣
@stigatle@yarn.stigatle.no @prologic@twtxt.net my /tmp is also fine now! Thanks for your help @prologic@twtxt.net!
Anyway, I’m gonna have to go to bed… We’ll continue this on the weekend. Still trying to hunt down some kind of suspected multi-GB avatar using @stigatle@yarn.stigatle.no’s pod’s cache:
$ (echo "URL Bytes"; sort -n -k 2 -r < avatars.txt | head) | column -t
URL Bytes
https://birkbak.neocities.org/avatar.jpg 667640
https://darch.neocities.org/avatar.png 652960
http://darch.dk/avatar.png 603210
https://social.naln1.ca/media/0c4f65a4be32ff3caf54efb60166a8c965cc6ac7c30a0efd1e51c307b087f47b.png 327947
...
But so far nothing much… Still running the search…
Out of interest, are you able to block whole ASN(s)? I blocked the entirety of the AWS and Facebook ASN(s) recently.
@abucci@anthony.buc.ci Oh 🤣 Well my IP is a known subnet and static, so if you need to know what it is, Email me 😅
@abucci@anthony.buc.ci Seems to be okay now hmmm
@abucci@anthony.buc.ci Hmm I can see your twts on my pod now 🤔
Hmm, I wonder if I banned too many IPs and caused these issues for myself 😆
@abucci@anthony.buc.ci Any interesting errors pop up in the server logs since the flaw got fixed (unbounded receiveFile())? 🤔
Hmmm 🧐
for url in $(jq -r '.Twters[].avatar' cache.json | sed '/^$/d' | grep -v -E '(twtxt.net|anthony.buc.ci|yarn.stigatle.no|yarn.mills.io)' | sort -u); do echo "$url $(curl -I -s -o /dev/null -w '%header{content-length}' "$url")"; done
...
😅 Let’s see… 🤔
@movq@www.uninformativ.de My issue is, now that we have the chance of getting something fast, people artificially slow it down again. Whether they think it’s cool that they added some slow animation, or it’s just lack of knowledge, or whatever. The absolute performance does not translate to the relative performance that I observe. Completely wasted potential. :-(
In today’s economy, nobody optimizes anything if it can just be called good enough with the next generation of hardware. That’s especially the mindset of big corporations.
Anyway, getting sidetracked from the original post. :-)
@stigatle@yarn.stigatle.no The one you sent is fine. I’m inspecting it now. I’m just saying, do yourself a favor and nuke your pod’s garbage cache 🤣 It’ll rebuild automatically in a much more pristine state.
That was also a source of abuse that got plugged (being able to fill up the cache with garbage data).
Ooof
$ jq '.Feeds | keys[]' cache.json | wc -l
4402
If you both don’t mind dropping your caches, I would recommend it. Settings -> Poderator Settings -> Refresh cache.
@prologic@twtxt.net Yup. Didn’t regret climbing these three hundred odd meters of elevation. :-)
@stigatle@yarn.stigatle.no Thank you! 🙏
@stigatle@yarn.stigatle.no Ta. I hope my theory is right 😅
But just have a look at the yarnd server logs too. Any new interesting errors? 🤔 No more multi-GB tmp files? 🤔
@stigatle@yarn.stigatle.no You want to run backup_db.sh and dump_cache.sh. They pipe JSON to stdout and prompt for your admin password. Example:
URL=<your_pod_url> ADMIN=<your_admin_user> ./tools/dump_cache.sh > cache.json
@stigatle@yarn.stigatle.no Worky, worky now! :-)
Mate, these are some really nice gems! What a stunning landscape. I love it. Holy cow, that wooden church looks really sick. Even though I’m not a scroll guy and prefer simple, straight designs, I have to say that the interior craftsmanship is something to admire.
Just thinking out loud here… With that PR merged (or if you built off that branch), you’ll hopefully see new errors pop up and we might catch this problematic bad feed in the act? Hmmm 🧐
@slashdot@feeds.twtxt.net I thought Sunday was the hottest day on Earth 🤦♂️ wtf is wrong with Slashdot these days?! 🤣
If we can figure out wtf is going on here and my theory is right, we can blacklist that feed, hell, even add it to the codebase as an “asshole”.
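For illustration only, such an “asshole” list could be as simple as a hard-coded deny set consulted before fetching a feed; the package name and feed URL below are made up, this is just a sketch of the idea:

package fetch // hypothetical package name, sketch only

// blacklistedFeeds is an illustrative hard-coded deny list of feed URLs.
var blacklistedFeeds = map[string]bool{
	"https://example.org/hostile/twtxt.txt": true, // placeholder URL
}

// shouldFetch reports whether a feed URL is allowed to be fetched at all.
func shouldFetch(url string) bool {
	return !blacklistedFeeds[url]
}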
@stigatle@yarn.stigatle.no The problem is it’ll only cause the attack to stop and error out. It won’t stop your pod from trying to do this over and over again. That’s why I need some help inspecting both your pods for “bad feeds”.
@abucci@anthony.buc.ci / @stigatle@yarn.stigatle.no Please git pull, rebuild and redeploy.
There is also a shell script in ./tools called dump_cache.sh. Please run this, dump your cache and share it with me. 🙏
I’m going to merge this…
@abucci@anthony.buc.ci Yeah, I’ve had to block entire ASN(s) recently myself from bad actors, mostly bad AI bots actually, from Facebook and Claude AI.
Or if y’all trust my monkey-ass coding skillz, I’ll just merge and you can do a git pull and rebuild 😅
@stigatle@yarn.stigatle.no / @abucci@anthony.buc.ci My current working theory is that there is an asshole out there that has a feed that both your pods are fetching with a multi-GB avatar URL advertised in their feed’s preamble (metadata). I’d love for you both to review this PR, and once merged, re-roll your pods and dump your respective caches and share with me using https://gist.mills.io/
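For context: a twtxt feed’s preamble can advertise an avatar via metadata comment lines, so a hostile feed can point that field at an arbitrarily large file. A made-up example of what such a preamble might look like:

# nick = badactor
# url = https://example.org/twtxt.txt
# avatar = https://example.org/definitely-not-a-small-avatar.png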
@stigatle@yarn.stigatle.no I’m wondering whether you’re having the same issue as @abucci@anthony.buc.ci still? Multi-GB yarnd-avatar-* files piling up in /tmp/? 🤔
watch -n 60 rm -rf /tmp/yarn-avatar-* in a tmux because all of a sudden, without warning, yarnd started throwing hundreds of gigabytes of files with names like yarn-avatar-62582554 into /tmp, which filled up the entire disk and started crashing other services.
@abucci@anthony.buc.ci So… The only way I see this happening at all is if your pod is fetching feeds which have multi-GB sized avatar(s) in their feed metadata. So the PR I linked earlier will plug that flaw. But now I want to confirm that theory. Can I get you to dump your cache to JSON for me and share it with me?
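Roughly, the general shape of such a fix is to cap how many bytes get accepted before anything is written to the temp file; the helper name, package name and limit below are placeholders, not the actual PR:

package fetch // hypothetical package name, sketch only

import (
	"fmt"
	"io"
	"os"
)

// receiveFileBounded copies at most limit bytes from r into a temp file and
// rejects anything larger, instead of streaming an unbounded body to disk.
func receiveFileBounded(r io.Reader, pattern string, limit int64) (*os.File, error) {
	tf, err := os.CreateTemp("", pattern) // pattern like "yarnd-avatar-*"
	if err != nil {
		return nil, err
	}
	// Copy at most limit+1 bytes so an over-limit body is detectable.
	n, err := io.Copy(tf, io.LimitReader(r, limit+1))
	if err == nil && n > limit {
		err = fmt.Errorf("response body exceeded %d bytes", limit)
	}
	if err != nil {
		tf.Close()
		os.Remove(tf.Name())
		return nil, err
	}
	return tf, nil
}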
@abucci@anthony.buc.ci Yeah that should be okay, you get so much crap on the web 🤦♂️
@abucci@anthony.buc.ci sift is a tool I use for grep/find, etc.
What would you like to know about the files?
Roughly what their contents are. I’ve been reviewing the code paths responsible and have found a flaw that needs to be fixed ASAP.
Here’s the PR: https://git.mills.io/yarnsocial/yarn/pulls/1169
Monday Was Hottest Recorded Day on Earth: ‘Uncharted Territory’
World temperature reached the hottest levels ever measured on Monday, beating the record that was set just one day before, data suggests. From a report: Provisional data published on Wednesday by the Copernicus Climate Change Service, which holds data that stretches back to 1940, shows that the global surface air temperature reached 62.87F (17.15C), co … ⌘ Read more
@abucci@anthony.buc.ci I believe you are correct.
@abucci@anthony.buc.ci That’s fucking insane 😱 I know what code-paths is triggering this, but need to confirm a few other things… Some correlation with logs would also help…
Do you happen to have the activitypub feature turned on btw? In fact, could you just list out what features you have enabled please? 🙏
@prologic@twtxt.net 10 Gbytes has accumulated since I made that last post. It’s coming in at a rate of 55 Mbits/second!
These should be getting cleaned up, but I’m very concerned about the sizes of these 🤔
Hah 😈
prologic@JamessMacStudio
Fri Jul 26 00:22:44
~/Projects/yarnsocial/yarn
(main) 0
$ sift 'yarnd-avatar-*'
internal/utils.go:666: tf, err := receiveFile(res.Body, "yarnd-avatar-*")
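In spirit, that call site is the classic unbounded-receive pattern; a hypothetical sketch of it (not the actual yarnd code) looks like this:

package fetch // hypothetical package name, sketch only

import (
	"io"
	"os"
)

// receiveFileUnbounded streams everything the remote sends into a temp file
// under /tmp with no upper bound, so a hostile multi-GB "avatar" can fill
// the disk.
func receiveFileUnbounded(r io.Reader, pattern string) (*os.File, error) {
	tf, err := os.CreateTemp("", pattern) // pattern like "yarnd-avatar-*"
	if err != nil {
		return nil, err
	}
	if _, err := io.Copy(tf, r); err != nil { // no size limit applied
		tf.Close()
		os.Remove(tf.Name())
		return nil, err
	}
	return tf, nil
}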
@abucci@anthony.buc.ci Don’t suppose you can inspect one of those files could you? Kinda wondering if there’s some other abuse going on here that I need to plug? 🔌
@abucci@anthony.buc.ci Hmm that’s a bit weird then. Lemme have a poke.
Hmm, removed the CPU limits on this pod; not even sure why I had ’em set tbh. We decided at my day job that setting CPU limits on containers is a bit of a silly idea too. Anyway, the pod should be much snappier now 😅
@movq@www.uninformativ.de Oh nothing much 🤣 Just a bunch of folks running really old versions of yarnd that were susceptible to abuse on the open web 🤣
What the heck is going on here today, so many messages. 😂
Hopefully you should see traffic die off a bit too as the /external endpoint is no longer externally abusable (get it) without being an authenticated user – which became problematic 🤦♂️ – The web is so fucking hostile 🤬
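Rough sketch of the idea of gating an endpoint behind an authenticated session; isAuthenticated is a made-up helper here, not yarnd’s actual middleware:

package web // hypothetical package name, sketch only

import "net/http"

// requireAuth wraps a handler and rejects unauthenticated requests, so an
// endpoint like /external can no longer be hammered anonymously.
func requireAuth(isAuthenticated func(*http.Request) bool, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !isAuthenticated(r) { // placeholder session check
			http.Error(w, "authentication required", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}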
@abucci@anthony.buc.ci Hopefully it shouldn’t 🤞
@abucci@anthony.buc.ci Fuck that script 🤣 you’re good! Just follow the Build from Source docs 😅
Thinking we need to adapt the UI a little bit to something like this