Talk:LessWrong/Archive5


This is an archive page, last updated 3 May 2016. Please do not make edits to this page.

Trope

The point of this one wasn't clear to me:

A rational person must defend a non-mainstream belief.[ref]http://lesswrong.com/lw/1ww/undiscriminating_skepticism/[/ref] Apart from the fact that what is mainstream varies enormously around the world (the mainstream beliefs of Southern Baptists are not mainstream beliefs of the scientific community and vice versa), being contrarian just for the sake of it is not rational.

That ref is already linked in the article ("Any group that resorts to saying its critics are doing skepticism wrong[22] is already fucked and is never going anywhere.") and isn't an example of what you seem to be talking about (it's an example of a different stupidity). Do you have examples that would make your point clearer? - David Gerard (talk) 19:10, 19 June 2012 (UTC)

I don't quite understand what the part you quoted is intended to mean. I provided the link as an example. Yud says that nobody dares to consider cryonics after watching Penn and Teller, and that Shermer "blew it" by mocking one of his sacred cows.--Baloney Detection (talk) 19:31, 19 June 2012 (UTC)
I think cryonics may work, but that would be by pure serendipity, or by future science being extremely good at reconstructing data from brains shredded beyond repair. The research appears to be crackpottery for the most part; there is no clear feedback and there are no standards. What would be needed is, e.g., a scanning-microscopy study of a brain sample frozen under some set of conditions, demonstrating that the gate concentrations on the membranes can be read off it. Proof-of-concept micro-uploading, essentially. edit: or preserving a neural network so that it retains its original electrical function along with its training. I do believe there is something we could do right now that would preserve brains well enough for future scanning, if such scanning ever appears; the problem is knowing *what* to do right now, and the cryonics people aren't really working on that, the cryobiology people are, as is always the case with such futurism. Dmytry (talk) 05:38, 20 June 2012 (UTC)
If it's a "trope", it should have a few examples. Please give some so your text can be written to make sense - David Gerard (talk) 06:39, 20 June 2012 (UTC)
I think the undiscriminating scepticism article is a good enough example (edit: i.e. it refers to Penn and Teller). One has to somehow demonstrate that one's scepticism is discriminating by agreeing with Eliezer on his pet issues, or one has "blown it". Believing that AI could in principle be built doesn't suffice (it is implied) as a demonstration of discriminating scepticism. Dmytry (talk) 07:43, 20 June 2012 (UTC)
A rational person must defend non-mainstream beliefs? Are they defining rationalists or hipsters? ±Knightoftldrsig.pngKnightOfTL;DRmore at 11 23:54, 21 June 2012 (UTC)
@Dmytry: Actually, cryonics has been tested on animals and has failed.--Baloney Detection (talk) 19:49, 27 June 2012 (UTC)

Science vs Bayesianism

Yudkowsky has declared that Bayes' theorem is superior to science. Luke Muehlhauser (unsurprisingly) agrees. Massimo Pigliucci found the whole notion flawed.

So what do you say? Is Bayes' theorem superior to science as a way of knowing? I'm not very well-versed in probability theory, but I do get suspicious when someone proclaims that he or she has a better way of knowing than the scientific method, especially if that individual is an avid promoter of pseudoscience.--Baloney Detection (talk) 19:44, 27 June 2012 (UTC)
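(For reference, the theorem at issue is just the standard identity for updating a hypothesis H on evidence E:

P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

i.e. the posterior is the prior reweighted by how well the hypothesis predicts the evidence. The dispute below is not about this identity itself, but about whether feeding made-up numbers into it can substitute for experiment.)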

He doesn't really understand jack shit about probability (beyond the level of doing really simple problems). He read some books, got the vague idea that beliefs can have prior probabilities set via Solomonoff induction and updated using Bayesian inference, and decided that this ought to be better than the scientific method. He hasn't bothered to understand what Solomonoff induction actually is, or how Bayesian belief propagation works in general, or the like. A child prodigy inadvertently trained by his parents in the art of sounding smart; all prodigies are, to some extent, but this one never faced the harsh reality of having to actually solve problems correctly rather than generate smart-sounding garble. edit: To substantiate this technically: 1: Bayesian propagation on belief graphs needs graphs that perfectly reflect reality in order to work. It is also, in general, NP-hard, as in computationally expensive, and it requires tracking where the probabilities came from if there are loops or cycles in the graph. 2: Solomonoff probability (a) doesn't provide good priors, and (b) is mind-bogglingly computationally expensive, with the cost rising faster than any computable function. There's no well-defined argument to criticize here, just techno-babble justification for how it could possibly be that pure speculation finds truth. Dmytry (talk) 08:22, 28 June 2012 (UTC)
Thanks for the reply. I'm disappointed that the local LWers did not weigh in on this one. As for the merits of Bayesianism, from what I know you still have to perform the experiment, not simply gloss it over with Bayes' theorem, correct? If you don't do experiments (in the broad sense), then you are not doing science (not that it bothers Yudkowsky, as he rejects the scientific method). It is really striking how he never refers to anything when he writes. Is his stuff based on any science, or is it just his own ideas while he passes himself off as a genius?--Baloney Detection (talk) 17:50, 5 July 2012 (UTC)
Hard to tell. It is fuzzy enough that one can read sense into it. Regarding Bayes' theorem: basically, you have to perform experiments if you want to improve the state of your knowledge. Furthermore, applying Bayes' rule correctly to a belief graph requires a fairly nontrivial algorithm to handle loops and cycles (and it is fairly trivial to see that you need some such algorithm). It's Solomonoff induction that gives you probabilities without needing to perform experiments. It assigns probabilities to possible future events based on the probability that a universal Turing machine with random data on its input tape will spit out the observed sequence followed by the predicted event. It is incomputable, and severely so, as even very short programs can have incredibly long-winded behaviours (see Busy Beaver), and the probabilities it provides depend on the specific machine running the programs, although the difference has certain properties (the difference in program lengths is bounded by the length of an emulator of one machine on the other). You can read more here: http://www.scholarpedia.org/article/Algorithmic_probability and, formally, in papers by Marcus Hutter, who is a no-shit theoretical AI researcher doing actual theoretical work (as opposed to the armchair handwaving which the common folk tend to think is theoretical). Note that there is no need to use it together with Bayes' theorem; the induction alone includes the mechanism for calculating the probabilities of the predictions based on experimental data (and on the incomputable prior).
The incomputability is not a problem for theoretical stuff only. For example, if I claim that, e.g., a twelve-state two-symbol busy beaver is God almighty, who creates the world and messes with it, complete with fake dinosaur bones, the flood, Jesus, and the holy bible, while the simplest atheist 'theory of everything' program is ten thousand kilobits long, then I should have extreme confidence that God exists, on the order of 1 - 2^-10000. Such a view would be unfalsifiable, contrary to what Luke seems to think. Solomonoff induction and Bayes' theorem seem to provide the 'theoretical' backing for the idea that making up numerical probabilities ex nihilo and then doing 'updates' on them is rational. Note also that with a sufficiently low prior (like the prior for atheism in my 'deus ex machine' example above), and given unreliable senses and common failure modes of experiments (such as God testing your faith), no amount of practically attainable evidence can be sufficient to change one's view through Bayesian updates. What they call rationality is the belief that you could, and in fact should, pull numbers out of the arse, treat them as probabilities, and update on them using Bayes' theorem, magically doing it right without sorting out the highly non-trivial algorithm that is necessary for belief propagation to work correctly in a graph with loops and cycles. There is another issue when the belief graph itself has to be constructed, a problem outlined in this paper by Pei Wang, another no-shit AI researcher whom Luke Muehlhauser has interviewed but apparently failed to learn anything from, I guess not least because of the mindset whereby it was a "Muehlhauser-Wang Dialogue" rather than "my name is Luke, I do not know much about AI, and I seek advice from an expert", which would have been a much more adequate and useful attitude for someone with Luke's credentials. Dmytry (talk) 20:51, 5 July 2012 (UTC)
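(A rough sketch of the quantities being invoked here, following the algorithmic-probability formulation in the Scholarpedia article linked above; the notation is standard rather than anything specific to this discussion. The Solomonoff prior of a binary string x under a universal prefix machine U is

M_U(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}

with the sum running over all programs p whose output begins with x; switching to another universal machine V changes -\log M(x) by at most a machine-dependent constant c_{UV}. Under such a length-based prior, two hypotheses whose shortest implementations are a and b bits long start out with prior odds of roughly 2^{b-a} : 1, which is why, if the "small busy-beaver deity" program really were thousands of bits shorter than the plain-physics program, the prior stacked in its favour would be of the astronomical order described above.)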
Well, Bayes can be useful for narrowing down hypotheses, but you still need to make hypotheses and then test them. Putting too much stock in it won't get you anywhere, as you can see with Yudkowsky.--User:Brxbrx/sig 21:19, 5 July 2012 (UTC)

Yudkowsky overstretching Bayes again

Read this.

It just so happens that I took a critical thinking course at university which taught that if two people disagree about a factual matter and they both have access to the same information, then they can't both be rational. And believe it or not, there was not a single reference to Bayes (and no, not even to Yudkowsky). Just plain old logic.--Baloney Detection (talk) 20:08, 5 July 2012 (UTC)

We have, of course, the article - David Gerard (talk) 21:00, 5 July 2012 (UTC)
Is that article based on the real stuff, or just LW parroting?--Baloney Detection (talk) 21:22, 5 July 2012 (UTC)
Why not read it and find out, and improve it if it isn't?--ADtalkModerator 22:04, 5 July 2012 (UTC)
I'm not knowledgeable about the subject. Hence I'm asking.--Baloney Detection (talk) 20:13, 6 July 2012 (UTC)

Let's summarize

Let's summarize something here: LW is a discussion board where the owner slaps the name of a dead mathematician onto an irrationalist philosophy of his ('Bayesianism'), and calls this rationality and himself a rationalist (even though 'Bayesian' and 'rationalist' already have quite different meanings). Furthermore, it is more than creepy enough in the cult-like way, and has more than enough doomsday prophecy, to attract a highly unusual proportion of cultist personalities. The board's owner is a pseudoscientist by trade, earning a living on donations for his pseudoscientific research slash salvation from doomsday. He is also a self-improvement guru. They run thought-reform-type self-improvement minicamps and meetups where people go to purge themselves of irrationality; they believe they can attain some sort of enlightenment if only they stop being 'irrational' (relying on the stereotypical cultist notion of an inner perfect self that is held back by things one must purge); and they promote propagating your beliefs, which sounds like a good idea but sometimes leads to such outcomes (Alexei is an indie game developer donating an unusual fraction of his income to SI).

To digress a little: counter-intuitively, beliefs cannot be represented as probability numbers and then simply 'propagated' in the way one would imagine. Suppose there is a machine that spits out either two red balls labelled A and B, or two blue balls labelled A and B, and another machine that takes two balls and gives you candy if at least one ball is blue. You take two balls, and without looking at them put them into the second machine; you obtain candy with probability 50%. But if you have been told that one ball has a 50% probability of being blue, and then told that the other ball has a 50% probability of being blue, and you 'propagate' those beliefs to the candy machine's output as if they were independent, you will expect a 75% probability of getting candy. To avoid this and similar issues (especially with cyclic propagation), one does not propagate unless one has sufficient information on where the probabilities came from, and sufficient computing time to evaluate the correlations; furthermore, the computational expense of tracking the correlations blows up very rapidly with the size of the graph, so even vastly powerful, accurate computational processes have to employ weird heuristics on even moderately sized graphs.
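(A minimal Python sketch of the ball-and-candy example above, purely illustrative; the function name and trial count are made up:

 import random

 def trial():
     # First machine: both balls share one colour, blue or red with probability 1/2 each.
     colour = random.choice(["red", "blue"])
     ball_a, ball_b = colour, colour
     # Second machine: candy if at least one ball is blue.
     return ball_a == "blue" or ball_b == "blue"

 n = 100000
 wins = sum(trial() for _ in range(n))
 print("actual candy rate:", wins / n)  # comes out near 0.50

 # Naive 'propagation' that treats the two 50% beliefs as independent:
 p_a_blue = p_b_blue = 0.5
 print("naive estimate:", 1 - (1 - p_a_blue) * (1 - p_b_blue))  # 0.75

The simulation and the naive calculation disagree, roughly 0.50 versus 0.75, precisely because the two 50% beliefs are perfectly correlated, which is the information the naive propagation throws away.)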

On the upside, it is not all Yudkowsky and SI and self-improvement thought reform; online discussion boards, even creepy ones, do not attain the entrance exclusivity typical of offline cults, and they do include a lot of people who are not on board with the cult side of things, even though there is local sentiment against this.

In the past, Roko was a top contributor. He got deleted after he pondered a bit what a 'friendly' AI implementing Yudkowsky's disturbing psychopathy-on-steroids decision theory might do to those not supportive of its coming. Now it is Yvain, who will probably stick around for a while, as he does not get into the AI side of things and posts digestible, albeit second-hand, accounts of interesting material (at the moment, for example, he's explaining game theory). Those posts may be somewhat worthwhile if you are looking for second-hand introductory-level texts, which is not a very commonly sought thing.

Dmytry (talk) 07:07, 6 July 2012 (UTC)

Additional information on the finances of the SI: http://lesswrong.com/lw/5il/siai_an_examination/ . That is about two thousand dead children (http://lesswrong.com/lw/9a0/dead_child_currency/), in their own terms (the conversion as proposed by the above-endorsed Yvain). What's the output? These people won't even, e.g., review existing AI approaches, or analyze whether AIXI-tl would be reasonably safe. So far what they've done is talk about the singularity (if anything, making noise that would make any legitimate safety arguments, should those arise, harder to hear) and do a little bit of philosophy of mind. Even if you assume that their papers are not worthless shit (which they are), they've still misappropriated about 90% of what they got, since the output is so little. edit: and yes, LW is not just SI people; it has plenty of non-stupid crazies like Yvain, with his dead-babies currency idea that would make the SI people imagine themselves guilty of genocide by neglect if they are wrong. Reminds me of Shutter Island. There are a lot of boards online, and a board that is an attachment to something like SI, and that has such woo, is a rather bad place for anything else. Dmytry (talk) 07:55, 6 July 2012 (UTC)

Top-rated posts

Oh look, Yudkowsky doesn't actually feature in the top rated posts! In fact, Yvain is pretty much the highest rated contributor. Thoughts, oh Great Cult of Eliezer? Scarlet A.pngtheist 12:21, 5 July 2012 (UTC)

Thank you for linking that - a lot of really interesting posts.--ADtalkModerator 13:17, 5 July 2012 (UTC)
Been reading a lot of them... wow, you have single-handedly redeemed LW from the main flaws I'd seen in its community. I'm creating an account.--ADtalkModerator 13:50, 5 July 2012 (UTC)
Please do me a favor and bring up Roko's basilisk when you post there.--Baloney Detection (talk) 18:44, 5 July 2012 (UTC)
Saying that LW is a pit of pseudoscientific claptrap because of Yudkowsky's views on AI and cryonics is like saying RW is a hotbed of misogyny and idiocy because someone here once said that "I'd fuck her" is a compliment. Scarlet A.pngpostate 13:54, 5 July 2012 (UTC)
This one in particular I think really deserves to be up there. It's a very specific exemplification of the principle that when we want to categorise things, it's rarely to do with the observed properties, and far more to do with how we go about treating the thing after it's been categorised. The "is atheism a religion?" debate, for example, has fuck-all to do with "what is a religion?" and far more to do with "are atheists therefore hypocrites?". "Is it a disease?" is more about "do you want it to be treatable or not?" Scarlet A.pngtheist 14:14, 5 July 2012 (UTC)
I don't know anything about AI. However, cryonics is fraudulent and is rejected by every mainstream science-based medical organization there is. Why the special pleading for Yudkowsky? I presume you wouldn't let the Discovery Institute off the hook because they promote one particular pseudoscience rather than every pseudoscience in existence. I think LW should be thought of as primarily a transhumanist site, and thus its members are prone to the kind of pseudoscience memes common among transhumanists. You got any better idea?--Baloney Detection (talk) 17:33, 5 July 2012 (UTC)
I feel as if LessWrong has become the 2012 version of CP, and am actually surprised you all don't have a blow-by-blow like WIGO. This one site really does get a lot of you going, on both sides of the argument. Not that I'm saying it's bad. Green mowse.pngGodotFire! Fire! Fire! (please send spare firefighters) 15:54, 5 July 2012 (UTC)
Perhaps I'm lost on popular culture, but what is CP and WIGO?--Baloney Detection (talk) 18:44, 5 July 2012 (UTC)
Conservapedia, and "what is going on in". The latter is our page here at RW that details the comings and goings of CP. I feel a bit like users here are doing something similar with LessWrong. But it was mostly a joke (and a bad one, if it must be explained). Green mowse.pngGodotFire! Fire! Fire! (please send spare firefighters) 18:47, 5 July 2012 (UTC)
Ahh, thanks for the explanation!--Baloney Detection (talk) 19:51, 5 July 2012 (UTC)

Ya, and back in the day Roko was the top contributor, until he ventured into poking holes in what an AI would do if it implemented Eliezer's insane decision theory. You make it sound as if EY just has some views on AI. The fact is, the guy is a charlatan/crank hybrid AI 'researcher' collecting money for the AI 'research', and LW is only here because EY seems to think 'raising the sanity waterline' equals people donating. edit: and also, I am well aware of Yvain. This, for example, is a very good post: http://lesswrong.com/lw/7o7/calibrate_your_selfassessments/ , something that everyone there should get tattooed on the inside of their eyelids, and should apply to the assessment of the groups they belong to as well. edit: oh wait, actually not so good: he mentions the people who under-rate themselves, and the Dunning-Kruger-afflicted then start pondering the possibility that they underestimate themselves, with the obvious outcome. Dmytry (talk) 16:30, 5 July 2012 (UTC)

Uhm, yeah? The whole site is still based on the ideas of ONE single person, who claims to be living in the end times. You are expected to agree with their phyg/cult, and if scientists disagree with their dogmas, then obviously those scientists are stupid (or have watched Penn & Teller). How good an idea is it to base your life around the writings of an Internet crank? How do we know that his writings in the sequences are based on sound scientific knowledge? He doesn't give any references; it's just the words of a self-proclaimed autodidact who has no exchange with the scientific community on the issues he writes about, and who explicitly rejects the scientific method.--Baloney Detection (talk) 17:42, 5 July 2012 (UTC)
I think we get that you guys think LW is Scientology squared, but the rest of us think it has some decent material as long as you strip out all of the transhumanist piffle. So is there a point here besides having a bitchfest about the Cult of Yudkowsky? Honest question here, because as much fun as it is to dump on the crankier side of LW, that's all you guys have done with your accounts at RW. Now, I'm off to sip some more of that tasty Yudkowsky-brand Kool-Aid and donate my life savings to the SIAI. The End is nigh! The Singularity is near! Nebuchadnezzar (talk) 18:54, 5 July 2012 (UTC)
The 'cult of Yudkowsky' is a strawman. The guy himself would rather believe that his idea is an extremely important, mankind-saving one than have a personality cult; therefore he pushes the bad idea better than someone who is just in it for a personality cult, which is actually worse as far as I am concerned. Dmytry (talk) 19:15, 5 July 2012 (UTC)
And yet, as I pointed out above, that crap doesn't actually appear in many of the top-rated posts. Scarlet A.pngmoral 19:31, 5 July 2012 (UTC)
So? Does that mean that they don't buy into his woo? If they didn't, they wouldn't be on his site.--Baloney Detection (talk) 20:09, 5 July 2012 (UTC)
Does Yudkowskian rationalism teach you to strawman those who disagree with you? I never compared LW to Scientology, because I don't know much about Scientology. I did however compare it to Objectivism, and gave my reasons for it. And no, I've not only been on about LW here; I created the entry on Sean Carroll. This wiki is supposed to expose woo and crankery. Yudkowsky promotes his share of those, and therefore he (and his followers) get called out on it. Outlandish, I know. But I'm not accepting special pleading for Yudkowsky.--Baloney Detection (talk) 19:50, 5 July 2012 (UTC)
"Does Yudkowskian rationalism teach you to strawman those who disagree with you?" Wow, so I used a bit of hyperbole. Do you seriously think I'm in thrall to the Yudkowsky-ian personality cult? You created a new page, good work! (I'm not being sarcastic here.) But if I open up the contribs page for you or the others who have suddenly descended on RW, it's nearly all LW/Yudkowsky all the time. I know he promotes his share of crankery, though much of the criticism belongs more on the transhumanism page since he's not the only one peddling this stuff. I'm just wondering what the fixation is with LW/Yudkowsky here. And now you're tarring people like me and David Gerard as some kind of LW apologists, which is frankly laughable. DG pushed the cryonics-debunking article to gold status and I've added massive amounts of critical material to transhumanism and related articles. It's a very, shall we say, Yudkowskian "with-us-or-against-us" attitude toward "rationality," the same one you claim to be decrying. Nebuchadnezzar (talk) 20:28, 5 July 2012 (UTC)
I was not thinking primarily of you and Gerard; I was thinking more of Armondikov, who just above tried to whitewash Yudkowsky's cryonics promotion. Good work with the stuff on cryonics and transhumanism. Yes, Yudkowsky is not the only person promoting it, but he is a big one in that admittedly fringe field, and he has a rather devoted following. If you consider Robin Hanson, who is into the same transhumanist woo as Yudkowsky, he doesn't seem to have quite the same kind of following around his personality. I admit I might currently have a bit of a fixation with LW/Yudkowsky. It will probably go down in a while. (When that happens, will this entry be edited to the pleasure of LW?)--Baloney Detection (talk) 20:48, 5 July 2012 (UTC)
How the hell was that a whitewash? Because I don't personally agree with the cryonics, and point out that cryonics isn't a major aspect of the site? Hell, "Does Yudkowskian rationalism teach you to strawman those who disagree with you?" makes me go BOING! Scarlet A.pngpathetic 21:03, 5 July 2012 (UTC)
(ec) ADK is not a LW Kool-Aid drinker, believe me. See above. My point is that, in this zealous anti-LW crusade, you've painted anyone who says "Okay, it's not that bad" as shills for the Cult of Yudkowsky. (See, e.g., "When that happens, will this entry be edited to the pleasure of LW?") Just drop the persecution complex and we can all play nice. Even our resident LW-er hasn't whitewashed the article, nor does he imbibe the LW kool-aid wholesale. And that's the pro-LW end of our userbase -- I could write reams and reams of material on all the stuff Yudkowsky is wrong about. Nebuchadnezzar (talk) 21:14, 5 July 2012 (UTC)
Hey, I'm currently on the special brew because I maxed out my overdraft donating to the SIAI! Scarlet A.pngtheist 21:19, 5 July 2012 (UTC)
It's not a persecution complex. I'm just wondering why Yuddite woo should be treated with kid gloves compared to, say, creationism or homeopathy. And I'm still critical of the idea of elevating one guy and swallowing his ideas pretty much wholesale (just look at Luke Muehlhauser). Though LW has its local contrarians, many of the members are prone to doing this to a very high degree. As for writing reams about what Eliezer Yudkowsky is wrong about, please consider adding some of it to the entry on him.--Baloney Detection (talk) 20:25, 6 July 2012 (UTC)
How is "Yuddite woo" being treated with kid gloves? Like I said, I think all the transhumanist stuff is total nonsense, but those criticisms are better off on the transhumanism page itself because they apply to many more than just Yudkowsky. Some of the other nonsense I might add in, though I'd have to go and read or re-read some of the material and the Friendly AI stuff really puts me to sleep. You're free to edit the article, you know. Nebuchadnezzar (talk) 04:15, 8 July 2012 (UTC)
Wasn't Armondikov unhappy with LessWrong being listed among the pseudoscience promoters? And yes, I started a section on the Eliezer Yudkowsky entry; hopefully it will be built on.--Baloney Detection (talk) 14:01, 8 July 2012 (UTC)