i only allow myself to go the hard way. good or bad?
if agents are usually suffering, negative-sum games might be good, via destructive warfare
MANGAO
Gauzian
The Honeycomb Conjecture was probably the longest-standing open conjecture in the history of mathematics. Open for 2035 years, if Wikipedia is to be believed
predict yourself as if you’re a well-meaning but very unreliable friend of yourself
futuristic city image “the world if all externalities were internalized”
monotonic occam’s razor
multifaceted hedgehog
killing moloch is not glorious, it’s just tedious.
interpretability can ~re-create more discrete alignment methods over a leaky abstraction
maybe social constructivism is just descriptive in how most social scientists create their theories?
what’s the etymology of that?
gratitude should be a brahmavihara, replacing upekkha
{adequate,¬adequate}×{exploitable,¬exploitable}×{efficient,¬efficient}
public relations is a reality-masking puzzle.
actual nazis talk about moloch a lot
i care about the instantiation of human values in the universe, not about whether humanity reaches the stars
“citta” should probably just be translated as “psyche”
tension between markets & hacker æsthetic: markets want competition (same thing is done often, separately, with lots of turnover), while hackers want things to be done once and well, with high maintenance (canonical resources)
could MDMA microdosing help me get into the 2nd jhana?
new cause area: buy 0day exploits
on priors, if you want to impact the world, found a religion or start a company.
/r/gonewildstories will compliment vogon poetry, it seems
often⩬percent
MTF π-12 “tensor grease”
It’s impossible to tell whether he’s good or not
Yudkowsky moved AI alignment research forward by 4 years, but he also sped up timelines by 2.5 years, so it all cancels out
soft (rigidity), soft (texture)
Weeping Angels from Doctor Who, but every time you look away there’s new AI capabilities progress
Most zero-sum games are actually negative-sum games, because of transaction costs
It’s not fair! The graph was supposed to peak in 2047, not 2028!
pet peeve: line-splitting in an ff ligature
I’d hate to be the baby strapped to Voyager 1
there are actually six hindrances: the original five and thinking about how to practically dovetail the shortest brainfuck quine
rationalist meditation retreat: listening to Replacing Guilt as a dharma talk
Todo: become an anti-recycling fanatic at some point
Maybe just not have your search space contain Turing complete elements?
Anyone else wanna laugh at millennials when they realize they have gotten old?
80,000 hours podcast but it’s only Rob Wiblin speaking. Useful for voice synthesis.
Planecrash but just the dath ilan bits
Retaking an airplane after terrorist takeover is a coordination problem
You may start century time horizon projects
Playing hard to get imposes costs on players who are actually impossible to get.
Well, you have my curse.
Why not focus on getting old LessWrongers to work on alignment instead of students? They might not be as skilled technically, but they probably have much deeper & better-formed intuitions around the problem.
Concepts: Semiquine (a program that outputs only its code, but never halts); prefixquine (a program that outputs its code, then something more after it). Other trivial versions: postfixquine, substringquine, prefixsemiquine.
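A prefixquine is easy to demonstrate concretely; this Python sketch (the %-formatting construction is my own illustration, not from the note) builds such a program as a string, runs it, and checks that its output begins with its own source:

```python
import io
import contextlib

# A candidate prefixquine: classic %r/%% quine trick, plus an extra
# print("EOF") so the output strictly extends the source.
src = 's = %r\nprint(s %% s)\nprint("EOF")'
program = src % src  # the program's full source text

# Run the program and capture everything it prints.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(program)
output = buf.getvalue()

# Prefixquine property: output starts with the source, then adds more.
assert output.startswith(program)
assert output[len(program):] == "\nEOF\n"
```

A semiquine can't be demonstrated the same way, since by definition it never halts; you can only check that every finite prefix of its output stays within its own source.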
“I assume that not everything that can improve my thinking is found in The Sequences.”
put an epistemic status on a thing you’re really confident in, once in a while.
reaching exalted meditation states is not important per se; what matters is that they give you ample material for learning not to cling.