User talk:Yvain

Welcome to RationalWiki, Yvain!

Check out our guide for newcomers and our community standards!

Tell us how you found RationalWiki here!


You're still wrong :D (just trolling) PercivalCox 08:14, 1 January 2013 (UTC)

re: the comment

Now you seem to be changing your argument from "they are a cult", which I do think is the Worst Argument In The World, to "they might commit violence", which is a perfectly good empirical question.

How's about "those neighbours are nuts"?

But I worry that "they might commit violence" can be used on almost any group and so it's unfair to use it on us specifically. Environmentalism might convince people to commit violence against companies that are "destroying the Earth" (as has in fact happened). Conservatism might convince people to commit violence against liberal politicians who are "destroying our country" (as has in fact happened). Heck, a guy recently got beaten up for being a conspiracy theorist - does that mean RationalWiki, which singles out conspiracy theorists and talks about what bad people they are, is at risk of "promoting violence"? By this argument, anyone who points out a potential danger, or has an opinion that anything is bad, is "promoting violence" and needs to be stopped. I think it's much saner to wait to condemn people until they *actually* promote violence in some way. I can't see a shred of evidence that Less Wrong or SI has done so, or that anyone *except you* is promoting this interpretation of AI risk. Eliezer has said both that he thinks Unfriendly AI is decades away and that no current team is even remotely a threat, and that he thinks ends don't justify means even in the case of saving the world. As far as I can tell we are about as low-risk for violence as it is possible to be while still actually having opinions.

No, that's of course not true. Look at FHI, which I do not have this concern with. At FHI, research assistants do not write about the efficacy of hypothetical assassinations of Goldman Sachs executives, and Nick Bostrom would never say something like "And if Novamente crosses the finish line, we all die". Yudkowsky did, even though for all his ignorant ass could possibly know, Novamente would start achieving rather interesting (and troubling to the paranoid) results in a couple of years, in which case it seems pretty clear the issue would get rather hot. Or, do you think Bostrom, when discouraging violence, would argue that it is bad for PR, and that in the alternative where violence is good, talking about it online is bad? Now, why are they so different? Is it unfair that I am using it on SI/LW specifically, but not FHI? I attribute this enormous difference, on a multitude of parameters, to FHI being a legitimate research organization whose subject I happen to consider probably the usual bullshit grant-seeking by philosophers - which is fine by me, by the way. And SI/LW to be a fairly dangerous ensemble of nutjobs. There is probably not a single variable on which FHI is more nutty than LW/SI. And you of course have seen that I'm not the only one holding that view, even though I might presently be the most vocal one.

Also, can we stick to questions about living in California from now on, as you have tried to have this debate with me many times and I don't really want to have it again? - Yvain

You aren't really living in some house full of LessWrongers, so you don't have the subject material to tell.
I'm sorry if I was unclear; my girlfriend and the other roommate are both LessWrongers. — Unsigned, by: Yvain / talk / contribs
Actually, I have a good analogy here with regards to speech. Two salesmen walk into two crowded theatres - back in the day when those were lit by candles. They are out to sell their candles. One is an educated businessman who knows the ethics of the trade. He looks the part, says that the candles may cause ignition of nearby flammable materials, and advocates improved candle safety, which his company could help with. The other screams "These candles are not safe! Eventually there will be a FIRE and we all DIE! The theatre should buy my candles instead!". The second also acts incredibly self-righteous when there are complaints about his behaviour. Reframe your statements as a defence of that second salesman. Isn't there something that the second salesman could do to be less of a hazard? Dmytry (talk) 03:02, 2 January 2013 (UTC)
That is the worst goddamn analogy I have ever read.--ADtalkModerator 04:02, 2 January 2013 (UTC)
Drive-by posting much? The fire in the crowded theatre is a very common example of how you can get in trouble for speech, in the form of shouting fire when you haven't seen any. Dmytry (talk) 09:09, 2 January 2013 (UTC)
Okay, same situation. Now they're in an underground mine where there's a known leak of methane gas. One salesman says "Ohmigod don't you know you're not supposed to have open flames around underground methane deposits you'll KILL US ALL put it out RIGHT NOW". The other salesman says "Please, calm down. No one else is making a fuss. You might make the people who put the candles in here feel bad, or you might cause some nutjob to attack candle manufacturers. Why don't you wait a little while to see if anything changes, and then maybe write a very technical pamphlet about combustion and see if that helps?" — Unsigned, by: Yvain / talk / contribs
So, you believe that AI doomsday is like an explosion of a known leak of methane gas, is that correct? Look, the problem here is that the technical side of things is a bit too complicated for militantly ignorant people like Yudkowsky and yourself. Maybe the mine and the methane exist only in your mind. Maybe it is actually a Davy lamp, and your idea of how to put it out (take the cover off, blow the flame out) would actually get everyone killed. You have no idea how much you do not know and how much of what you do not know is relevant. Yudkowsky is even worse. You guys are far too narcissistic and uncritical of yourselves. And in the alternative where the AI is, in fact, dangerous, SI's baseless claims only serve to annoy the researchers (Pei Wang, for example) and make any actual safety work harder to publish, e.g. in the journal where Pei Wang is the main editor. Entirely regardless of the truth value of the AI doomsday, the militantly ignorant crackpottery is counterproductive. Dmytry (talk) 02:37, 3 January 2013 (UTC)

Hi

Please don't forget to sign your posts with four tildes (~~~~). Thanks! Sam Tally-ho! 01:23, 3 January 2013 (UTC)