Searching We.Love.Privacy.Club

Twts matching #twtxt.

@movq@www.uninformativ.de please no.

My wife’s mom nearly got her account fully taken over by some hacker. They were able to get control and change the password, but I was able to get it recovered before they could get the phone number reset. They sent messages to all her contacts asking them to send cash.


@prologic@twtxt.net The headline is interesting and sent me down a rabbit hole trying to understand what the paper (https://aclanthology.org/2024.acl-long.279/) actually says.

The result is interesting, but the Neuroscience News headline greatly overstates it. If I’ve understood right, they are arguing (with strong evidence) that the simple technique of making neural nets bigger and bigger isn’t quite as magically effective as people say — if you use it on its own. In particular, they evaluate LLMs without two common enhancements, in-context learning and instruction tuning. Both of those involve using a small number of examples of the particular task to improve the model’s performance, and they turn them off because they are not part of what is called “emergence”: “an ability to solve a task which is absent in smaller models, but present in LLMs”.
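For anyone not steeped in the jargon, here’s roughly what that distinction looks like in practice. This is only an illustrative sketch of mine (the task and prompt text are invented, not taken from the paper): zero-shot means asking for the task cold, while in-context learning means putting a few worked examples of the same task into the prompt without changing the model itself.

package main

import "fmt"

func main() {
	// Zero-shot: the model gets the task with no examples at all.
	zeroShot := "Translate to French: 'The cat sleeps.'"

	// In-context (few-shot) learning: a handful of worked examples of the
	// same task are prepended to the prompt. The model's weights don't
	// change; only the input it sees does.
	fewShot := "Translate to French: 'Good morning.' -> 'Bonjour.'\n" +
		"Translate to French: 'Thank you.' -> 'Merci.'\n" +
		"Translate to French: 'The cat sleeps.' ->"

	fmt.Println("zero-shot prompt:")
	fmt.Println(zeroShot)
	fmt.Println()
	fmt.Println("few-shot prompt:")
	fmt.Println(fewShot)
}

The paper’s question, as I read it, is how well models do on the zero-shot variant as they get bigger.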

They show that these restricted LLMs only outperform smaller models (i.e. demonstrate emergence) on certain tasks, and then (end of Section 4.1) discuss the nature of those few tasks that showed emergence.

I’d love to hear more from someone more familiar with this stuff. (I’ve done research that touches on ML, but neural nets and especially LLMs aren’t my area at all.) In particular, how compelling is this finding that zero-shot learning (i.e. without in-context learning or instruction tuning) remains hard as model size grows?

In-reply-to » (#dhdo3hq) @movq The success of large neural nets. People love to criticize today's LLMs and image models, but if you compare them to what we had before, the progress is astonishing.

@prologic@twtxt.net I don’t know what you mean when you call them stochastic parrots, or how you define understanding. It’s certainly true that current language models show an obvious lack of understanding in many situations, but I find the trend impressive. I would love to see someone achieve similar results with much less power or training data.

In-reply-to » (#icme3oq) @bender Is it so maxed out you couldn't fit a pretty small program like Headscale on it? Headscale by itself and only personal home type use as far as amount of peers go, it really isn't noticeable I don't think resource-wise. The Docker version I guess could be a different story.

@prologic@twtxt.net Good to know. I must admit I’ve never actually used a Docker instance, probably as I just assumed the overhead might be a bit much for my usual very modest servers.


@bender@twtxt.net Is it so maxed out you couldn’t fit a pretty small program like Headscale on it? Headscale by itself, with only personal home-type use as far as the number of peers goes, really isn’t noticeable resource-wise, I don’t think. The Docker version I guess could be a different story.

In-reply-to » I setup and switched to Headscale last night. It was relatively simple, I spent more time installing a web GUI to manage it to be honest, the actual server is simple enough. The native Tailscale Android app even works with it thankfully.

@prologic@twtxt.net Yes, I suppose that is true. There is an article on Tailscale’s site that explains it all in quite a bit of detail: https://tailscale.com/blog/how-nat-traversal-works

To me, with CGNAT, it’s a small miracle that a direct connection can be made between peers (as opposed to going through a relay constantly), but it does indeed work. I guess to host it at home you would need to have it WAN accessible, and if you’ve already gone to the trouble of port forwarding etc… well 😅
Not that I could personally do that, but for those with static IPs etc.
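For what it’s worth, here’s the gist of the trick that article describes, as a rough Go sketch of my own (this is not Tailscale’s or Headscale’s actual code, and the peer address below is a made-up placeholder). Both sides learn each other’s public ip:port from the coordination server, then send UDP packets to each other from the same local port; each side’s outgoing packets open a mapping in its own NAT that the other side’s packets can come back through.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Listen on a fixed local UDP port. The same socket is used to talk to
	// the coordination server and to the peer, so the NAT mapping it
	// creates gets reused.
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 41641})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// In reality this public ip:port comes from the coordination server
	// (Headscale in my setup); here it is just a placeholder.
	peer := &net.UDPAddr{IP: net.ParseIP("203.0.113.7"), Port: 41641}

	// Both sides send at roughly the same time. The first packets may be
	// dropped by the remote NAT, but they open an outbound mapping on our
	// own NAT so the peer's packets can get back in.
	go func() {
		for {
			conn.WriteToUDP([]byte("punch"), peer)
			time.Sleep(time.Second)
		}
	}()

	buf := make([]byte, 1500)
	for {
		conn.SetReadDeadline(time.Now().Add(10 * time.Second))
		n, addr, err := conn.ReadFromUDP(buf)
		if err != nil {
			fmt.Println("no reply yet:", err)
			continue
		}
		fmt.Printf("got %q from %s, hole punched\n", buf[:n], addr)
	}
}

With CGNAT there’s an extra carrier-grade NAT in the path, which is why it feels like a small miracle when the simultaneous sends still line the mappings up; when they can’t, traffic falls back to a relay.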

In-reply-to » I setup and switched to Headscale last night. It was relatively simple, I spent more time installing a web GUI to manage it to be honest, the actual server is simple enough. The native Tailscale Android app even works with it thankfully.

@bender@twtxt.net On my hosted VPS. As I’m on Starlink, which is CGNAT, I need some sort of external intermediary.


@prologic@twtxt.net I don’t think it’s your code. As you said in one of your commit comments, the internet is a hostile place! That’s partly why I reacted the way I did: all things considered, it’s usually better to react quickly and clean up the mess later than it is to wait and risk further damage. Anyway, it sucks @xuu@txt.sour.is got caught up in it. Hopefully it’s all good now.


@stigatle@yarn.stigatle.no @xuu@txt.sour.is @lyse@lyse.isobeef.org “Not cool”? I was receiving many broken (HTTP 400 error) requests per second from an IP address I didn’t recognize, right after having my VPS crash because the hard drive filled up with bogus data. None of this had happened on this VPS before, so it was a new problem that I didn’t understand and I took immediate action to get it under control. Of course I reported the IP address to its abuse email. That’s a 100% normal, natural, and “cool” thing to do in such a situation. At the time I had no idea it was @xuu@txt.sour.is .

The moment I realized it was @xuu@txt.sour.is and definitely a false alarm, I emailed the ISP and told them this was a false positive and to not ban or block the IP in question because it was not abusive traffic. They haven’t yet responded but I do hope they’ve stopped taking action, and if there’s anything else I can do to certify to them that this is not abuse then I will do that.

I run numerous services on that VPS that I rely on, and I spent most of my day today cleaning up the mess all this has caused. I get that this caused @xuu@txt.sour.is a lot of stress and I’m sincerely sorry about that and am doing what I can to rectify the situation. But calling me “not cool” isn’t necessary. This was an unfortunate situation that we’re trying to make right and there’s no need for criticizing anyone.

In-reply-to » (#2rxkcca) he emailed my ISP about causing logging abuse. This is the only real ISP in my area, its gonna basically send me back to dialup.

@bender@twtxt.net haha funny! Though I just realized my ISP is the only one with fiber pulled to the property, so I would have to get a phone line from them somehow. The other ISP in the area is basically a mobile hotspot.


@prologic@twtxt.net I don’t know if this is new, but I’m seeing:

Jul 25 16:01:17 buc yarnd[1921547]: time="2024-07-25T16:01:17Z" level=error msg="https://yarn.stigatle.no/user/stigatle/twtxt.txt: client.Do fail: Get \"https://yarn.stigatle.no/user/stigatle/twtxt.txt\": dial tcp 185.97.32.18:443: i/o timeout (Client.Timeout exceeded while awaiting headers)" error="Get \"https://yarn.stigatle.no/user/stigatle/twtxt.txt\": dial tcp 185.97.32.18:443: i/o timeout (Client.Timeout exceeded while awaiting headers)"

I no longer see twts from @stigatle@yarn.stigatle.no at all.


@prologic@twtxt.net Hitting that URL returns a bunch of HTML even though there is no user named lovetocode999 on my pod. I think it should 404, maybe with a delay, to discourage whatever this abuse is. Basically this can be used to DDoS a pod by forcing it to generate a bunch of HTML just by doing a bogus GET like this.
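Something along these lines is what I mean. To be clear, this is not yarnd’s actual handler, just a minimal sketch of the mitigation I’m suggesting (userExists stands in for however the pod really looks up accounts): unknown nicks get a short fixed delay and a plain 404 instead of a fully rendered page.

package main

import (
	"net/http"
	"time"
)

// userExists stands in for however the pod actually looks up accounts.
func userExists(nick string) bool {
	return nick == "abucci" // placeholder
}

func externalHandler(w http.ResponseWriter, r *http.Request) {
	nick := r.URL.Query().Get("nick")
	if !userExists(nick) {
		// Cheap fixed delay plus a 404: bogus GETs no longer get a big
		// rendered HTML page back, so they can't be used to make the pod
		// burn CPU and bandwidth.
		time.Sleep(2 * time.Second)
		http.NotFound(w, r)
		return
	}
	w.Write([]byte("render the real /external page here"))
}

func main() {
	http.HandleFunc("/external", externalHandler)
	http.ListenAndServe(":8000", nil)
}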

In-reply-to » @prologic 10 Gbytes has accumulated since I made that last post. It's coming in at a rate of 55 Mbits/second !

@prologic@twtxt.net There are a lot of logs being generated by yarnd, which is something I haven’t seen before either:

Jul 25 14:32:42 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:42 (162.211.155.2) "GET /twt/ubhq33a HTTP/1.1" 404 29 643.251µs
Jul 25 14:32:43 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:43 (162.211.155.2) "GET /twt/112073211746755451 HTTP/1.1" 400 12 505.333µs
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (111.119.213.103) "GET /twt/whau6pa HTTP/1.1" 200 37360 35.173255ms
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (162.211.155.2) "GET /twt/112343305123858004 HTTP/1.1" 400 12 455.069µs
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (168.199.225.19) "GET /external?nick=lovetocode999&uri=http%3A%2F%2Fwww.palapa.pl%2Fbaners.php%3Flink%3Dhttps%3A%2F%2Fwww.dwnewstoday.com HTTP/1.1" 200 36167 19.582077ms
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (162.211.155.2) "GET /twt/112503061785024494 HTTP/1.1" 400 12 619.152µs
Jul 25 14:32:46 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:46 (162.211.155.2) "GET /twt/111863876118553837 HTTP/1.1" 400 12 817.678µs
Jul 25 14:32:46 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:46 (162.211.155.2) "GET /twt/112749994821704400 HTTP/1.1" 400 12 540.616µs
Jul 25 14:32:47 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:47 (103.204.109.150) "GET /external?nick=lovetocode999&uri=http%3A%2F%2Fampurify.com%2Fbbs%2Fboard.php%3Fbo_table%3Dfree%26wr_id%3D113858 HTTP/1.1" 200 36187 15.95329ms

I’ve seen that nick=lovetocode999 a bunch.
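In case it helps quantify “a bunch”, here’s a throwaway Go counter I’d run over a dump of the logs (the file name and the journalctl unit are assumptions about your setup), tallying hits per nick= value:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Dump the logs first, e.g.: journalctl -u yarnd > yarnd.log
	// (adjust the unit name to whatever the service is actually called).
	f, err := os.Open("yarnd.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Pull the nick= query parameter out of each request line.
	re := regexp.MustCompile(`[?&]nick=([^&\s"]+)`)
	counts := map[string]int{}

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for nick, n := range counts {
		fmt.Printf("%6d  %s\n", n, nick)
	}
}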

In-reply-to » Hack of the day: running watch -n 60 rm -rf /tmp/yarn-avatar-* in a tmux because all of a sudden, without warning, yarnd started throwing hundreds of gigabytes of files with names like yarn-avatar-62582554 into /tmp, which filled up the entire disk and started crashing other services.

@prologic@twtxt.net Inspect? What’s sift? What would you like to know about the files?

In-reply-to » Hack of the day: running watch -n 60 rm -rf /tmp/yarn-avatar-* in a tmux because all of a sudden, without warning, yarnd started throwing hundreds of gigabytes of files with names like yarn-avatar-62582554 into /tmp, which filled up the entire disk and started crashing other services.

@prologic@twtxt.net 10 Gbytes has accumulated since I made that last post. It’s coming in at a rate of 55 Mbits/second!

In-reply-to » Hack of the day: running watch -n 60 rm -rf /tmp/yarn-avatar-* in a tmux because all of a sudden, without warning, yarnd started throwing hundreds of gigabytes of files with names like yarn-avatar-62582554 into /tmp, which filled up the entire disk and started crashing other services.

@prologic@twtxt.net I’m still getting this crap:

abucci@buc:~/yarnd/yarn$ ls -lh /tmp/yarnd-avatar-*
-rw------- 1 abucci abucci 863M Jul 25 14:19 /tmp/yarnd-avatar-1594499680
-rw------- 1 abucci abucci 7.8G Jul 25 14:19 /tmp/yarnd-avatar-2144295337
-rw------- 1 abucci abucci 9.8G Jul 25 14:19 /tmp/yarnd-avatar-2334738193
-rw------- 1 abucci abucci  10G Jul 25 14:14 /tmp/yarnd-avatar-2494107777
-rw------- 1 abucci abucci 9.5G Jul 25 13:59 /tmp/yarnd-avatar-2619243454
-rw------- 1 abucci abucci  11G Jul 25 14:04 /tmp/yarnd-avatar-2922187513
-rw------- 1 abucci abucci 7.5G Jul 25 14:14 /tmp/yarnd-avatar-349775570
-rw------- 1 abucci abucci  10G Jul 25 14:09 /tmp/yarnd-avatar-3640724243
-rw------- 1 abucci abucci 901M Jul 25 14:19 /tmp/yarnd-avatar-3921595598
-rw------- 1 abucci abucci 9.5G Jul 25 13:59 /tmp/yarnd-avatar-609094539
-rw------- 1 abucci abucci 9.3G Jul 25 14:04 /tmp/yarnd-avatar-755173392
-rw------- 1 abucci abucci 7.9G Jul 25 14:09 /tmp/yarnd-avatar-984061000

Something like 100 Gbytes of this junk has accumulated since I updated and re-started the server. I’m now running the latest version of yarnd, so the update did not fix the problem. Something else is going wrong.

How are temporary files growing to 10 Gbytes in size? The files are named “yarnd-avatar”, but why would avatars be so large?
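I don’t know what yarnd’s avatar code actually does, so this is speculation, but one classic way files like that balloon is streaming an arbitrary remote response into a temp file with no size cap; a broken or malicious upstream can then feed gigabytes. Here’s a hedged sketch of that failure mode and the obvious guard; the function name and the limit are mine, not yarnd’s:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

const maxAvatarBytes = 1 << 20 // 1 MiB is plenty for an avatar

func fetchAvatar(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	tmp, err := os.CreateTemp("", "yarnd-avatar-*")
	if err != nil {
		return "", err
	}
	defer tmp.Close()

	// Without the limit, io.Copy(tmp, resp.Body) writes however much the
	// remote side cares to send, which is how a "temporary avatar" could
	// reach 10 Gbytes. LimitReader caps it.
	n, err := io.Copy(tmp, io.LimitReader(resp.Body, maxAvatarBytes+1))
	if err != nil || n > maxAvatarBytes {
		os.Remove(tmp.Name())
		if err == nil {
			err = fmt.Errorf("avatar exceeds %d bytes, refusing", maxAvatarBytes)
		}
		return "", err
	}
	return tmp.Name(), nil
}

func main() {
	path, err := fetchAvatar("https://example.com/avatar.png")
	if err != nil {
		fmt.Println("fetch failed:", err)
		return
	}
	fmt.Println("saved avatar to", path)
	os.Remove(path)
}

Whatever is writing those files should also be removing them when a fetch fails, otherwise even capped downloads would pile up in /tmp over time.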
