Talk:Simulation argument/archive 1

From RationalWiki

Motives for simulation

I have cut this and shall provide reasons for why I do not believe it is a correct response to the argument.

There is also something to be said as to why anyone would even bother to simulate billions of people on a massive machine whose primary activities revolve around working, eating, and sleeping.

I have read, although I cannot remember where, that one of the best proposed methods for "time travel" would be to accurately simulate the past and insert oneself into it. If the simulation were as accurate as the simulation argument implies, then this would effectively be time travel, as there is no distinction between actually going into the past and going into a perfect simulation. However, commenting upon the motives of the simulators would be beyond us as humans, and we should not second-guess them any more than we should second-guess God Himself. If I remember rightly, Eliezer Yudkowsky may have written on this subject of second-guessing transhuman beings.

Nothing terribly interesting or useful outside of the organism itself. It would be considerably more efficient to have the actual biological organisms, since they can sustain themselves by their own hard work, and dedicate the massive processing power to something more important. If someone is interested in the species, or thinks they might come up with something interesting, they can park a satellite and watch.

This is not necessarily the case. Simulation would allow the hypothetical species to change parameters far more efficiently than merely replicating it in reality. Furthermore, it could potentially be executed faster than reality would allow. We see with many hypothetical and theoretical studies in real-world science that it can be far more effective, in terms of time and energy, to simulate than to execute an experiment in reality. Also, see the point above about second-guessing the simulators.

An example like The Matrix suffers from overly complex explanations to keep the story going. The Machines could easily have powered themselves with nuclear fission or fuel cells, put turbines next to the Earth's core instead of using humans, shot themselves into space to take advantage of solar power, or used all the processing power dedicated to humanity to figure out fusion.

Comparing the simulation argument to a work of fiction is, at best, disingenuous. The plot of The Matrix includes many absurdities, many of which are written in deliberately and others of which are conveniently changed. For example, it is not necessary for the Machines to use humans as a power source (indeed, it is a violation of the laws of thermodynamics); however, this is somewhat "lampshaded" by the addition of the line "and some form of fusion", and the Machines' actual motives for constructing the Matrix are malice and sadism rather than power. But none of these points are even remotely relevant to the simulated reality argument and again, I refer to the point regarding assuming we know the motives and reasons of the simulators. Valjean (talk) 12:11, 13 June 2011 (UTC)

I agree we cannot know much; but I still think there is some validity in trying to extrapolate according to the assumption they are similar to us. Put it this way - either they are like us, or they aren't. If they are like us, we can guess at their motivations based on our own. If they are unlike us, then all bets are off, and we can't even guess. I would suggest, based on this, that conclusions based on the assumption they are like us have some validity, but not a lot; a bit more than fifty-fifty, but not a lot more. (((Zack Martin))) 14:06, 13 June 2011 (UTC)
I appreciate that there is some validity to ascribing motives if we assume that the transcendental consciousness capable of simulating a universe has motives and ideals similar to ours. However, we must consider that this force is, as I said, transcendental compared to the human race as it is now, and therefore it would likely be unreasonable to make that assumption. Many of these objections and counter-arguments are merely restating the original assumptions of the hypothesis with "but it may not be the case" tagged onto them. In this case, one of the simulation argument's assumptions is that the advanced beings would also want to simulate a universe. Merely stating this again with reference to plot holes from The Matrix is not particularly constructive, nor does it build a solid argument. Valjean (talk) 15:27, 13 June 2011 (UTC)
It was to point out that building giant supercomputers and feeding the clusters massive amounts of power just to simulate things would be extremely inefficient compared to plopping a society on one of the billions of planets and letting it survive. There is little point in taking a cluster of computers in a vast conspiracy and simulating people eating and shitting, even if they do it really fast or in the past, when that processing power could be running other things. Pleasure, experiences, and the like would be a reason why... but there would be no reason for simulating the rest of the world "off camera". It would be like filming a TV series and also filming everything happening in the entire world around the focus of the story. This isn't meant to state fact; it's meant to bring up the massive amounts of unnecessary complexity that would be associated with it.
Just to add, human beings do tons of weird things as hobbies or just for fun, often spending tens, and sometimes hundreds, of thousands of dollars on them. It is more likely an intelligence would build a massive computer world the way people keep large aquariums now than to answer cosmic questions (like Deep Thought). This assumes reductions in the cost of massive amounts of processing power.
The Matrix reference was honestly to help people conceptualize. People can imagine things more easily when they already have some grounding in them, and The Matrix is a film most people have seen that involves the concept quite well. The power source argument may violate thermodynamics... but it comes from The Animatrix, which is part of the prologue (The Second Renaissance). ~ Subsound ~ 16:02, 13 June 2011 (UTC)
There is some merit to discussing these motives with regard to the specifics of the simulation argument, but I do not believe they are valid as standalone points, as the technology level, the energy consumption, and indeed the very minds and physical universe inhabited by beings capable of simulating an entire universe will be beyond us to consider. Even on the very conservative estimate that the simulators are the humanity of the future, we cannot expect to predict their energy uses and say, with sufficient confidence, that it would be more efficient for them to do an experiment than a simulation. Again, this ties in with the unfalsifiable nature of the simulation argument: regardless of how complex our reality appears, it may be a mere simple approximation of the real reality. Therefore considering it a waste of resources to simulate does not necessarily hold - our universe may be as simple compared to the real world as an 8-bit computer game is compared to our universe, and so simulation of the crude items listed above could be trivial. Perhaps it would be better organised if these points regarding motives were used as more specific counter-points to the assumptions proposed in the simulation hypothesis. Valjean (talk) 02:35, 14 June 2011 (UTC)

Discussion on "A Patch for the Simulation Argument"

There is an obvious and somewhat embarrassing problem with Bostrom's original formula that he has addressed recently in his paper A Patch for the Simulation Argument: he had previously, and arbitrarily, assumed that the average number of individuals in real civilizations and the average number of individuals in civilizations that run ancestor simulations were the same, producing a formula that is not general and gives erroneous answers in certain circumstances:

f_sim = fN / (fN + 1)

While Bostrom discusses the problem in this paper, he does not explicitly give the correct form of his formula:

f_sim = fNH / (fNH + H̄)

Here H̄ is the average number of individuals per real civilization, both post-human and non-post-human, H is the average number of individuals per real post-human civilization, f is the fraction of real civilizations that have run simulations in the time over which the averages H and H̄ have been determined (purported to represent the probability that a civilization will run ancestor simulations), N is the average number of ancestor simulations run by such civilizations, and f_sim is the fraction of simulated individuals (purported to give the probability that we are ourselves simulated; erroneously, in my opinion, because any estimate we might make of these factors is observer dependent and not from the position of one observing the entire simulation hierarchy, and anyway who says the set of all civilizations must be finite such that said fraction corresponds to a probability?).

For the fraction of simulated individuals to tend to 1, we would require that fNH ≫ H̄. Now, there is a relationship between H̄ and H that is only well defined at the extrema f → 1 (H̄/H → 1) and f → 0 (fNH → 0). In the first instance f → 1 gives H̄/H → 1 leading to f_sim → 1, reproducing the third proposition of Bostrom's disjunction, and in the second instance f → 0 gives fN & fNH → 0 and H̄/fNH → ∞, leading to f_sim → 0 and reproducing propositions 1 and 2 of his disjunction. These limits are well defined, but between these f_sim is completely undefined.

A contrast between Bostrom's formula as originally presented and the correct form may be useful here. In its original form the only free parameter is f, which may vary between 0 and 1. Posed in this way Bostrom's disjunction holds if we agree with his reasoning for N to be very large, as it is only when f is in the vicinity of zero that f_sim is not in the vicinity of one (the exact range over which this holds depends on the order of N). However, for the correct form there are now three free parameters if we accept that N must be very large, but two of these parameters, H and H̄, are not bounded at 1 like f; rather they may take any value above 0 and may be arbitrarily large. This is important as it means that f_sim is completely undefined except, unsurprisingly, at the limits f → 0 (f_sim → 0) and f → 1 (f_sim → 1 as H̄ = H necessarily), so an exploration of the limits of f_sim by varying f is not feasible as it was for the original formula, given that the values of H and H̄ are uncertain and not bounded like f.
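The contrast can be sketched numerically (all values below are invented for illustration, not estimates from the paper):

```python
# With N huge, the original formula forces f_sim toward 1 for any
# non-negligible f, while the corrected formula can give any answer at
# all depending on the unbounded quantity H_bar.

def f_sim_original(f, N):
    """Bostrom's original form, which implicitly assumes H_bar == H."""
    return f * N / (f * N + 1)

def f_sim_corrected(f, N, H, H_bar):
    """The corrected form discussed above."""
    return f * N * H / (f * N * H + H_bar)

N = 1e12   # "stupendously many" simulations per simulating civilization
f = 0.01   # a modest fraction of real civilizations run simulations
H = 1e10   # average individuals per real post-human civilization

print(f_sim_original(f, N))                    # ~1 regardless of H_bar
print(f_sim_corrected(f, N, H, H_bar=1e10))    # ~1 when H_bar ~ H
print(f_sim_corrected(f, N, H, H_bar=1e20))    # exactly 0.5 when H_bar = f*N*H
print(f_sim_corrected(f, N, H, H_bar=1e30))    # ~0 when H_bar >> f*N*H
```

Since H_bar is not bounded at 1 like f, each of the three outcomes above is attainable for the same f and N, which is the sense in which f_sim is undefined away from the extrema.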

In order to patch the problem in his argument Bostrom suggests that we may extrapolate from the experience of our own civilization to give reasonable estimates of H and H̄; specifically he contends that H̄ ~ H. This is an odd request because even if we were to restrict our analysis to pure ancestor simulations, such that we might extrapolate our knowledge upward through the simulation hierarchy, at each level above our own it is possible that unknown and materially very different civilizations abound, distorting H and H̄ and making the contention that H̄ ~ H unreasonable. But we need not even go as far as to take this point to refute the simulation argument as currently presented, because the proposition that most if not all simulations will be ancestor simulations is clearly unreasonable, and Bostrom himself admits that simulations may differ from the history of the civilization running them, making his reasoning for the values of H and H̄ illegitimate.

The above analysis has hinged on the assumption that N is very large. Bostrom's reasoning for this assumption, or 'empirical fact' as he calls it, begins from the observation that we have determined physical laws indicating that the limits of computation are such that, with the proper technology, stupendous numbers of ancestor simulations may be run. It would seem reasonable that we also know that any civilization simulating us must have available a greater amount of computing power than is required to simulate this universe, suggesting that any estimate of N in this reality is a lower bound on N in higher levels of the hierarchy. But, within the argument and reasonably so, Bostrom has allowed for incomplete simulations such that within a simulation appearances may be deceiving (he has to, otherwise he would have to argue that we will obtain the technology to simulate this world fully, an impossibility, or admit that his argument says nothing about whether we may be a simulation). What this means is that although nominally we may be certain of our own reality, if we permit contemplation of the possibility that there is another reality simulating this one then we must discard any certainty of the applicability of our physical laws, at least until they are certified by implementation (and even then maybe not), as it is possible, even plausible, that the simulators may impose cut-offs on complexity within their simulation owing to computational limitations. Thus, by entertaining the simulation hypothesis, we dramatically weaken the strength of empirical reasoning; we cannot state that observation reliably implies that N for the simulation hierarchy must be very large if we are working from within the strictures given us by the simulation argument.

What the simulation argument amounts to is that Bostrom's disjunction might be true, as H̄ ~ H might be true, but we have absolutely no way in which to even estimate the values of these two factors with any reliability and thus cannot assign a probability to such a situation. Extrapolation from a sample of one does not an argument make.

Sorry if this is somewhat long, comments would be very welcome.--MarkB (talk) 16:22, 29 August 2011 (UTC)

Sod the maths, you pretty much summed it up in that last paragraph. It's all assumptions mounted on assumptions, with nothing concrete attempting to provide proof or some sort of philosophical reasoning. Trouble is, someone can turn around and, with pretty much equal validation, say that those assumptions are wrong, choose a different set, and come up with a different answer. Smart people can be really dumb sometimes; I wonder if I'm so hostile to this mathematically rigorous rationalism because it's so easy to drink the Kool-Aid and forget the basics. What you've got there is nothing more valid than one of those equation stories like "the equation for the perfect holiday" that the press tend to eat up; they have zero validity for doing anything useful unless the values are non-arbitrary. ADK...I'll glug your shark! 00:17, 30 August 2011 (UTC)
@ADK: In defense of mathematically rigorous rationalism, I would say that the math prevents you from making certain nasty errors when you start reasoning in the face of severe uncertainty. There are still other ways to get it totally wrong, of course, but being mathematically rigorous removes some of these points of failure. It also helps you formulate the problem correctly and doesn't let bad arguments hide behind fuzzy verbal reasoning.
@Mark: Thanks for the link, I hadn't seen that paper. (And your summary is very good!) At the moment I agree with your analysis about H̄, although I probably need to think about it for a few days. Tetronian you're clueless 00:29, 30 August 2011 (UTC)
My point is that it allows you to make silly mistakes and still think you're right. As above, we have this equation to try and show what percentage of beings are simulated and what percentage are "real" (all very interesting if you're into that sort of thing, I'm sure), yet it fails to grasp the most obvious problems with the simulation argument. ADK...I'll stink your wigwam! 00:45, 30 August 2011 (UTC)
In that sense, it's no different from regular philosophical reasoning - people string together wildly overcomplicated and confused ideas and then claim victory all the time. So if it's a choice between using traditional reasoning to be 99% wrong and recklessly overconfident, and using math to be 98% wrong and recklessly overconfident, I would prefer to be...*puts on sunglasses*... less wrong. Tetronian you're clueless 01:34, 30 August 2011 (UTC)
Just imagine me flicking you on the forehead for that. ADK...I'll refill your search engine! 01:45, 30 August 2011 (UTC)
I would have been disappointed if you had done anything else :) Tetronian you're clueless 01:48, 30 August 2011 (UTC)
What I wanted to show is that Bostrom's argument relies on an analysis of the limits of a formula, which in turn hinges on the boundedness of a factor in the denominator, H̄/H, which means that 1/(fN + H̄/H) and f_sim are well defined for all values of f in the original formula (if we accept that N must be large then Bostrom's mathematical analysis is valid, although the argument fails in other places, his interpretation of f_sim as the probability that we are ourselves simulated for instance). Once we throw H and H̄ into the mix we see that f_sim is really only well defined in 2 places over the range of values f may take. Now, to make the formula behave nicely again, Bostrom not only has to argue that N is very large, but also that H̄/H ~ 1, and do so from observation of only one level of a supposed simulation hierarchy that is representative of only one civilization's experience in the root reality if we assume ancestor simulations (why should we though?). There is a case to be made that we can say that N must be very large because observation in this reality must represent a lower bound on N for the simulation hierarchy entire, but the same cannot be argued for H̄/H ~ 1; the same reasoning does not apply, and anyway, as I have said in the above, his reasoning for N to be very large is illegitimate (I really wish he wouldn't go about calling it 'empirical fact'). Ironically, given your comment ADK, part of my motivation for the above analysis of limits was that I was annoyed that Bostrom didn't deign to give the correct formula in his new paper; I felt that an explicit mathematical analysis was needed instead of the qualitative discussion in his paper.--MarkB (talk) 02:31, 30 August 2011 (UTC)

I think the real value in Bostrom's argument is not to try to derive that the simulation hypothesis is true, or likely, or even that we can really claim to know what its likelihood is; I think its value is more to show it is plausible. It takes classical arguments for philosophical scepticism (like Descartes') and improves on them: while Descartes relied on sceptical hypotheses that one could not really have evidence for or against, Bostrom has constructed a sceptical hypothesis for which, given a plausible but not certain future course of history, we would have increasing evidence. The moment we create AI-containing simulations, and ever more accurate ones, and ever more of them, and even nested and ever more deeply nested ones... then suddenly the plausibility that we are simulated ourselves increases. Of course, we might not ever develop in that historical direction (existential risks, underestimation of technological difficulty, etc), but Bostrom's hypothesis has "plausible plausibility", which is something that Descartes and friends lack... So Bostrom's argument is an important milestone in the history of philosophy, and especially in the history of philosophical scepticism. It shows that the scientific materialist worldview contains within itself the seeds of its own self-destruction. (((Zack Martin))) 09:30, 30 August 2011 (UTC)

That depends on what you mean by plausible. As currently presented the argument is not plausible, only conceivable; it does not speak to the probability that we are ourselves simulated. If we were to go on to develop ancestor simulation technology, this would give us better estimates of f, N, and H, but these values would only represent knowledge of a given subset of any hypothetical simulation hierarchy; they would not necessarily be anything like the values for the whole hierarchy, and we have no way of telling what those values might be. f_sim only represents an observer-dependent probability that a randomly selected individual from a subset of the hierarchy is simulated, from the observer's perspective. As the person evaluating this probability is not randomly selected, it says nothing on the probability of whether they are simulated or not.
"So Bostrom's argument is an important milestone in the history of philosophy, and especially in the history of philosophical scepticism." I really don't think it is, Bostrom's argument is just radical scepticism dressed up in a bad probabilistic argument which doesn't do what he says it does. The fact that for nearly a decade Bostrom did not realise that his formula was incorrect exemplifies the weakness of thought that has gone into it, it took me only a few minutes after having read his original paper a week or so ago to notice the problem. There are also other major weaknesses, like his attempt to estimate the number of operations required to run an ancestor simulation of our civilization, which does not include the operations necessary to provide the environment throughout this history only those to replicate brain activity. Also his analysis of future technology only looks at processing power, nothing else, he makes no attempt to look at other hardware requirements or look at possible energy requirements.--MarkB (talk) 14:07, 30 August 2011 (UTC)
For me, the two major takeaways from Bostrom's paper were: (1) we could be living in a computer simulation, and (2) if we had knowledge that there are, or that there could be, computer simulations, simulated within our own world, but indistinguishable to their inhabitants from our own world, that would significantly raise the probability that we ourselves are in a simulation compared to not having such knowledge. I think (1) is simply Descartes in modern garb, but (2) is what is original. Whatever flaws the details of his arguments have, I don't think those flaws negate the validity of those two basic points. (((Zack Martin))) 06:45, 1 September 2011 (UTC)
I agree with you on (1), but I strongly disagree on (2). A situation such as the one you describe wouldn't say anything about the probability of whether our civilization is simulated or not, just that it is possible. To evaluate such a probability you need complete knowledge of the real universe. The only probability that can be evaluated in this situation is an observer-dependent probability that a randomly selected individual from a subset of a hypothetical simulation hierarchy is simulated, from the observer's perspective, with the implicit assumption that the observer's universe is real. For instance, suppose you had a list of numbers corresponding to all the individuals in our civilization and all those individuals that we simulate, then randomly selected a number from this list and asked what the probability is that this number represents a simulated person. This could reasonably be extended to a probability that any future individual, from the perspective of a person at the time of asking, is a simulation, but no further. None of these correspond to a probability that an observer is themselves simulated.--MarkB (talk) 19:49, 1 September 2011 (UTC)
Just a further point, all (2) would do is elevate the status of a subset of logical possibilities to nomological possibilities, that would be its only effect. To obtain the probability that we are ourselves simulated all logical and nomological possibilities would have to be considered, because to permit the assumption that we might be simulated is to permit anything that might result in such a situation.--MarkB (talk) 19:58, 1 September 2011 (UTC)
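The list thought experiment above can be sketched as follows (the population figures are invented for illustration):

```python
# Label every individual in the observer's civilization and every
# individual it simulates, draw one label at random, and ask how often
# the draw lands on a simulated person. This is an observer-dependent
# quantity and says nothing about whether the observer's own
# civilization is itself simulated.

import random

real_individuals = 7_000_000_000        # assumed population of "our" civilization
simulated_individuals = 2_000_000_000   # assumed total population of our simulations

def p_draw_is_simulated():
    """Exact fraction of labels that belong to simulated individuals."""
    return simulated_individuals / (real_individuals + simulated_individuals)

def monte_carlo(n, seed=0):
    """Estimate the same fraction by actually drawing labels at random."""
    rng = random.Random(seed)
    total = real_individuals + simulated_individuals
    hits = sum(rng.randrange(total) >= real_individuals for _ in range(n))
    return hits / n

print(p_draw_is_simulated())   # exactly 2/9 for these made-up figures
print(monte_carlo(100_000))    # a Monte Carlo estimate close to 2/9
```

Note that the calculation only works because the full list is given from the outside; the person evaluating it is never one of the randomly drawn labels.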
I think it comes down to how we understand probability - you seem to be adopting an objective understanding of probability, i.e. probability is somehow an objective property of the situation, independent of our beliefs or opinions of it. By contrast, we could adopt a subjective understanding, and say that probability is just a quantitative estimate of our own degree of belief. In the latter sense of probability, I believe (2) is true, if not for everyone, then certainly for very many people, as a truth of human psychology - if we knew there were very many people, from their own perspective indistinguishable from us, who were simulated, that would likely raise our probability that we ourselves were simulated. Now, moving beyond purely subjective probability, we can ask whether those subjective probability assignments are rational. The most basic requirement is that we don't commit a Dutch Book, and I don't see how (2) can lead to a Dutch Book. A more extensive requirement would be that our subjective probabilities approximate those derived in some sort of objective probability framework. The problem with the latter is that for many of these questions we know so little that we can't even begin to objectively calculate a probability; and the problems are so abstract that, when faced with competing principles with which to evaluate whether subjective probability assignments are objectively valid, each producing different results, it can be difficult to justify the choice of one set of such principles over another. (((Zack Martin))) 08:54, 2 September 2011 (UTC)
The probability that our civilization is a simulation is an objective probability; Bostrom's argument is based on finding the limits of a quantity that corresponds to the objective probability that we are a simulation as other quantities are varied. Each of these quantities is defined over the set of all populations of civilizations with human-type experiences, with defined subsets: the subset of populations of civilizations that are real and the subset of those that are simulated. But it is impossible to have the requisite knowledge of this set to produce an objective probability, so Bostrom needs to redefine his interpretation of f_sim as a subjective, observer-dependent probability, or throw out the mathematical argument and simply state that it would increase our belief that we might be a simulation, but yet again we might not be. If we were to run, say, 5 simulations that were indistinguishable from our world to a single observer, what do you think the probability that the simulating world is simulated is? I agree that it would qualitatively increase our belief that we might be a simulation, but that's all that could ever be said; you can't argue that this probability has mathematical limits in the way Bostrom does.--MarkB (talk) 15:04, 2 September 2011 (UTC)

Refuting the Simulation Argument

It might be helpful to look at Bostrom's argument by noting the changes to the set of populations of all human-type civilizations as we go through each step in the argument, as the argument relies on knowledge of this set, S, for its conclusions. There are two distinct steps in the argument:

(1) Assume that simulations of human-type civilizations are possible.
(2) Assume that our civilization might be such a simulation.

Prior to step (1) we can say that the set of populations of all human-type civilizations, given knowledge of our own civilization and the possibility of others, is:

S = {h, p_1, p_2, …}

with h representing the population of our civilization. Taking step (1) invokes a change in S, introducing a partition in the set between those human-type civilizations that are real, S_R, and those that are simulated, S_sim:

S = S_R ∪ S_sim, with h ∈ S_R

where,

S_us = {s_1, s_2, …, s_N_us} ⊆ S_sim,

and S_us is the set of populations of all human-type civilizations simulated by our civilization, with N_us the number of such simulations run. At this point we are still assuming that our civilization is real, so it is plausible that we might draw conclusions about the full subsets S_R and S_sim, due to the fact that our civilization is ostensibly real and that this position must confer some knowledge of the limitations on what real civilizations might arise and what simulations might be possible.

Taking step (2) invokes a further change in S:

S = S_R ∪ S_sim, with h ∈ S_R or h ∈ S_sim (undetermined)

The results of steps (1) and (2) are summarized below:

(1): S = S_R ∪ S_sim, h ∈ S_R
(2): S = S_R ∪ S_sim, h unclassifiable
h does not appear explicitly in S_R as step (2) makes it uncertain as to whether our civilization is real or simulated, and we can't classify it with any certainty. This means that we can no longer draw conclusions about S_R and S_sim as before. Our knowledge only extends to a subset of S_sim and a single element that we can't attribute to either S_R or S_sim with any certainty, but in order to calculate the probability that our civilization is in S_sim and not in S_R we would require the 'actual' fraction of all observers with human-type experiences that inhabit simulations, and thus would have to evaluate the parameters f, N, H, and H̄ over the full set S, and this would require full knowledge of both S_R and S_sim. This can be seen simply by restating f_sim in its most simple form rather than employing the contrived quantities used by Bostrom:

f_sim = (s_1 + s_2 + … + s_n_sim) / ((r_1 + r_2 + … + r_n_R) + (s_1 + s_2 + … + s_n_sim))

with n_R and n_sim the respective cardinalities of subsets S_R and S_sim, and r_i and s_j their respective elements. For comparison, here is the (corrected) formula employing the quantities used by Bostrom, with their definitions over the set S, alongside the original (incorrect) formula (S_f is the subset of populations of civilizations in S_R that run simulations of human-type civilizations):

Corrected: f_sim = fNH / (fNH + H̄), with f = |S_f|/n_R, N the average number of simulations run per civilization in S_f, H the average population per post-human civilization in S_R, and H̄ = (r_1 + r_2 + … + r_n_R)/n_R

Original: f_sim = fN / (fN + 1), which follows from the corrected form only under the assumption H̄ = H
Is there a map, from what we can observe to the full structure of S, such that the required knowledge of S could be reconstructed or even approximated? Bostrom suggests that the idea of ancestor simulations provides a case that there is such a map, as for such simulations the experiences of the simulated individuals would mirror individuals in the real civilization, and thus represent knowledge of S_R and also confer knowledge of S_sim. For this reasoning to be legitimate, the vast majority of simulated individuals would have to be in ancestor simulations (it is not enough to simply argue that N be very large; you have to show that most simulated individuals occupy these simulations, not others, and that requires knowledge of both S_R and S_sim). Is this valid? No, most definitely not; there isn't any good reason to restrict simulations solely to ancestor simulations, and we can't assume that the vast majority of simulated individuals occupy ancestor simulations. We simply don't know what might or might not be the case in the real world if we permit the assumption in step (2), and we can't ever know unless we are given evidence that we are a simulation (evidence that we are real is impossible), but Bostrom doesn't appear to realise this.

In the section before this one I tried to show why Bostrom's reasoning was, sort of (but not really), okay in his original paper and why his previous conclusions could not be held to be valid in the same way in light of the correct formula for f_sim. In contrast, this section is meant to represent a thorough refutation of the Simulation Argument in both forms. Of course, to get around these problems all Bostrom has to do is restate the probability that f_sim represents correctly, so that it does not correspond to a probability that our civilization is itself simulated, and add the caveat that his formula (when corrected) only applies to a subset of any hypothetical simulation hierarchy, with an implicit assumption that this universe is real. Either that or admit that his argument only amounts to saying that it is conceivable that we might be a simulation, and is thus vacuous, given that Bostrom's disjunction is valid only if we assume that N ≫ 1 for human-type civilizations and if we also assume that any potential simulator of our civilization would not try to deceive us (such that observation in this universe speaks to the validity of this inequality), but the converse of both these assumptions is also logically possible and cannot be discounted.--MarkB (talk) 21:41, 31 August 2011 (UTC)
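The set formulation above can be sketched directly (the populations below are invented for illustration):

```python
# f_sim computed from explicit subsets S_R and S_sim. The value depends
# on every element of both subsets, which is exactly the knowledge an
# observer inside the hierarchy lacks.

S_R = [7e9, 3e9, 1e10]        # hypothetical populations of real civilizations
S_sim = [5e9, 5e9, 2e9, 8e9]  # hypothetical populations of simulated civilizations

f_sim = sum(S_sim) / (sum(S_R) + sum(S_sim))
print(f_sim)   # 0.5 for these made-up numbers

# Dropping knowledge of even one real civilization changes the answer,
# illustrating that partial knowledge of S does not pin f_sim down:
print(sum(S_sim) / (sum(S_R[:-1]) + sum(S_sim)))
```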

Infinite regress

The simulation argument and the entire idea of a simulated reality don't rely on direct evidence. This is fair enough, because if we could see direct evidence we wouldn't be in a very good simulation, would we? I won't discount our ability, in principle, to see it in much the same way Truman Burbank escapes his studio, but for the time being the evidence is as existent as that for God and Russell's celestial teapot; i.e., fuck all. So, it relies on half-baked philosophy, some mathematical reasoning, and just plain assertive conjecture.

However, what I completely fail to see is how any of the arguments for a simulated reality (even if they just render it plausible rather than possible) wouldn't equally apply to the outside "real" world. Precisely because it's half-baked philosophy - philosophy basically being that which is necessarily true, and so must be true everywhere. So, as a necessary logical extension of accepting the arguments for us living in a simulated reality, one must also assume the outside "real" world is also simulated. And the world outside of that. And the world outside of that. This leads to an infinite regress of simulated realities - a clear absurdity, an impossible absurdity at that. Although you could get around the infinite regress by saying that this nesting of realities ends based on a probability distribution (as the philosophical argument hinges on this probability based on the ratio of real:simulated beings), multiple nested realities are still a necessary extension.

Thing is, do you then extend this by saying the universes gradually get more complicated on the way out? The universe can't simulate itself (short of simply being itself, of course) because there just isn't enough physical material to hold the data required. So whatever is simulating it must be more complicated. We've already got The Sims, and the characters in that play with computers of their own (GTA IV has near-perfectly functional television and radio, but oddly they didn't go so far as to program some computer games). Each successive step inwards is orders of magnitude less complicated, but they're nevertheless simulated realities of a kind. Thing is, if you were to somehow accept that idea, then the logical extension described above would also apply, with each successive step outwards corresponding to "infinitely complex", whatever that may mean.

In short: dingo's kidneys. ADK...I'll negate your peat moss! 00:00, 2 September 2011 (UTC)

Why do you say an infinite regress of simulations is a "clear absurdity" or "impossible absurdity"? From the viewpoint of the mathematics of computation, I see no issues with an infinite hierarchy of simulations. Consider an abstract Turing-equivalent machine T0 (abstract machine = Turing machine, register machine, pick your favourite formalism). Let's say we have a universal abstract machine U. Let us call running U with abstract machine specification t as input U(t). Now, clearly we can have an abstract machine T1 = U(T0). And also T2 = U(T1) = U(U(T0)). And Tn for all natural numbers n. Hello infinite hierarchy of computer programs, a computer program being executed by an infinite hierarchy of emulators. Now if T0 simulates our universe, hello infinite hierarchy of simulations (well, replace U(t) with an abstract machine that simulates a universe which happens to contain a physical realisation of an abstract machine executing U(t)). So there seems to me to be nothing mathematically impossible about such an infinite regress.
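(A concrete, if toy, illustration of the U(t) construction above: the sketch below uses Python's exec as a stand-in for the universal machine U, and a trivial arithmetic statement as T0. The names T0, T1, T2 mirror the comment above; this only shows that nesting interpreters composes, and proves nothing about universes.)

```python
# Toy illustration of the U(t) construction: Python's exec stands in for
# the universal machine U, and T0 is a trivial 'program'. T1 = U(T0) and
# T2 = U(U(T0)) each add one layer of interpretation; nothing in the
# construction stops us adding more layers.
T0 = "result.append(2 + 2)"      # the innermost simulated 'program'

def U(program):
    """Return a program that, when run, runs `program` under one extra
    layer of interpretation (U(t) in the notation above)."""
    return "exec({!r})".format(program)

T1 = U(T0)          # one layer of interpretation deep
T2 = U(T1)          # two layers deep: U(U(T0))

result = []
exec(T2)            # appends 4 via two nested exec calls
print(result[0])    # prints 4: same answer at every depth
```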
As to your claim universes can't simulate themselves, that is true, but only assuming the universe is finite. A universe infinite in spatial and temporal extent could contain within it an infinitely large computer which perfectly simulates the universe of which it is a part, including itself. Of course, it appears our universe isn't like that, but maybe our universe is simulated in a universe with rather different laws of physics and which is like that. (((Zack Martin))) 09:09, 2 September 2011 (UTC)
So "an infinite regress isn't impossible given infinity" - yes, fine, I agree, because infinity is special like that. But as the universe evidently isn't infinite, you then demand the special pleading of "the simulating universe isn't like ours". Which destroys the extrapolation from the known that the simulation argument requires to exist. I once had trouble thinking of ways to disprove simulation (indeed, it's unfalsifiable in principle), but the more I think of it the less it seems like a clever idea and the more it seems like ad hoc theology. ADK...I'll mature your microcosm! 09:35, 2 September 2011 (UTC)
"A universe infinite in spatial and temporal extent could contain within it an infinitely large computer which perfectly simulates the universe of which it is a part, including itself." No, they could both be infinite, but their cardinalities aren't necessarily the same and their elements would be different, so they wouldn't be the same universe.--MarkB (talk) 15:30, 2 September 2011 (UTC)
Given an integer N, I can find an integer N+1, that's bigger than N. Now, clearly we can have an even bigger integer N+2. And also N+3. And so on for all natural numbers. Hello, integer that's bigger than all other integers!
Oh, wait, that makes no sense whatsoever. In just the same way, Maratrean will see that he cannot actually define his infinite nested string of Turing machines and have a limit that is a Turing machine in any reasonable formalism. Whether there is a way to give meaning to this regress I have no idea, but obviously what he proposes does not come close, for reasons that have nothing to do with cardinality. --95.154.230.191 (talk) 16:51, 2 September 2011 (UTC)
Is that a response to my comment from before I edited it to correct it, or not? I'm not sure.--MarkB (talk) 17:04, 2 September 2011 (UTC)
@MarkB - We know there are programs that can output their own text (quines). And it is not hard to write a program which outputs its own text plus some additional text (an arbitrary prefix and postfix), or a program which outputs not its actual text but some transformation of its own text. Wouldn't that essentially be a self-nested simulation? Just that the prefix and postfix are huge, and the transformation is rather complicated... which makes me think the universe wouldn't even have to be infinite. Just the complexity of the rest of the universe must be less than the complexity of the computer.
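(For what it's worth, the quine-with-extras described here is easy to exhibit. The sketch below is a Python program whose output is an arbitrary prefix, then its own core source, then an arbitrary postfix; the prefix and postfix strings are placeholders.)

```python
# A quine with extra output: prints an arbitrary prefix, then its own
# core source (the statements below, not these comments), then a postfix.
# The prefix/postfix values are placeholders.
prefix = "--- begin embedded copy ---"
postfix = "--- end embedded copy ---"
s = 'prefix = %r\npostfix = %r\ns = %r\nprint(prefix)\nprint(s %% (prefix, postfix, s))\nprint(postfix)'
print(prefix)
print(s % (prefix, postfix, s))   # reconstructs the program's own statements
print(postfix)
```

This is just the standard quine construction with extra output bolted on; nothing hangs on Python specifically.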
@95 - let's say we have an infinite sequence of computer programs P(n) for all natural numbers n. We can write this as an infinite string by diagonalization - if (m,n) represents instruction m of program n, then we can write (0,0),(0,1),(1,0),(2,0),(1,1),(0,2),etc... And then we diagonalize the execution of the program also, diagonalize its memory, and diagonalize its output. So a single Turing-equivalent machine, given an infinite input string, can execute an infinite number of programs simultaneously, and hence simulate an infinite number of universes. (((Zack Martin))) 22:59, 2 September 2011 (UTC)
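(The diagonalization described above is the standard dovetailing trick. Below is a toy Python sketch, with "programs" modelled as generators P(n); the particular outputs are arbitrary, and the point is only that every program gets infinitely many turns.)

```python
# Toy sketch of dovetailing/diagonalization: interleave the execution of an
# unbounded family of 'programs' P(0), P(1), ... so that every step of every
# program is eventually reached. Programs are modelled as generators.
from itertools import count, islice

def P(n):
    """'Program' number n: runs forever, yielding (program, value) pairs."""
    for step in count(1):
        yield (n, step * (n + 1))

def dovetail():
    """At round k, start P(k), then advance every started program one step.
    Program n is advanced during every round k >= n, so each program is
    stepped infinitely often."""
    running = []
    for k in count():
        running.append(P(k))
        for prog in running:
            yield next(prog)

print(list(islice(dovetail(), 10)))  # a finite prefix of the infinite interleaving
```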
In what possible way is a quine equivalent to a computer simulating not only itself but the entire universe it inhabits?--MarkB (talk) 16:14, 3 September 2011 (UTC)
A program P being executed on a computer C which exists in a universe U, whose output O is a precise description of U, e.g. the precise spacetime location and properties of every particle that shall ever exist in U. Now, thinking about it, I can see how in a finite universe O would never catch up to U, i.e. the universe-state the computer is currently outputting would almost always be very distant from the universe-state the universe is currently in. In an infinite universe that might not matter; in a universe containing supertasks, the computer might finish before the universe does (say, if the universe is finite in temporal and spatial extent, but its laws of physics permit supertasks within it). (((Zack Martin))) 22:50, 3 September 2011 (UTC)
I can't shake the feeling that I've had this discussion before. Oh wait, I have. (Right down to the nonsense about quines!) The problem I have with all this (besides the sheer absurdity of things like "So a single Turing-equivalent machine, given an infinite input string, can execute an infinite number of programs simultaneously, and hence simulate an infinite number of universes") is that the ultimate justification for all this stuff actually happening is laughably vacuous. Bostrom's argument is at least sensible because it tries to estimate the probability of simulations existing; Maratrean's argument jumps the shark by proclaiming that because this precarious stack of leaky abstractions could exist, then by the power of faith it does exist! Mark, you seem like a sensible fellow - don't waste your time as I did. I may not know much about this stuff, but I do know one thing: you'll soon find that Maratrean's justifications are not actually based on the math, but on something else entirely. Tetronian you're clueless 16:34, 3 September 2011 (UTC)
I think I'll take your advice; I don't really see this discussion going anywhere.--MarkB (talk) 17:12, 3 September 2011 (UTC)
Tetronian, the point is simply to establish that an infinite hierarchy of universes is possible, because that serves to justify idealism over materialism. Arguments about God, faith, etc., are separate issues (although adopting idealism may well change our attitudes to some of those arguments). (((Zack Martin))) 22:52, 3 September 2011 (UTC)