Forum:Are models/simulations science?

From RationalWiki

This is in response to some points that originated in the discussion at Talk:Scientific method#Cases where the scientific method is not best. The questions, as I see them, are: when are models/simulations science and when are they not? What are their limitations? How should they be used?

I use models and simulations in a lot of the work that I do in neuroscience research, and before that I used models for many other things, some of which had nothing to do with science per se but rather with problem solving. So some definitions are in order so that we are all on the same page. In this discussion, models are representations of a phenomenon created primarily to reduce the number of interacting elements to the minimum needed to reproduce that phenomenon to some predefined level of accuracy. A model could be defined more generally, but I think this is the working definition of what makes one useful (why the minimum number of elements? That's Occam's razor; it is not really a requirement per se). A simulation is collecting the end state of the model given some initial starting state. Some models are purely deterministic, and their end states can be derived directly from the initial state, but most models have elements of noise or randomness that make repeated simulations the only way to find a pattern in the results.
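The deterministic/stochastic distinction can be sketched in a few lines. The model here is a made-up toy (a value decaying toward a target), not any model from this discussion; the point is only that a noiseless rule gives the same end state every run, while a noisy rule has to be simulated repeatedly and summarized.

```python
import random

def simulate(steps, noise=0.0, seed=None):
    """Toy model: a value x decays toward 1.0, optionally with noise."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        x += 0.1 * (1.0 - x) + noise * rng.gauss(0.0, 1.0)
    return x

# Deterministic: every run yields the identical end state.
runs = [simulate(50) for _ in range(5)]
assert len(set(runs)) == 1

# Stochastic: end states vary run to run, so we summarize many simulations.
noisy = [simulate(50, noise=0.05, seed=i) for i in range(1000)]
mean_end = sum(noisy) / len(noisy)  # the average recovers the overall pattern
```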

The central argument that I would make is that models are science and can't really be divorced from it as a system of gathering knowledge. When observing an event with multiple causative elements, it is nearly impossible to tell what is actually doing what. Science works by trying to isolate causative elements and test them individually. The experimental method is essentially model building. Good scientific experiments are never done in purely naturalistic settings; instead, variables are carefully controlled so that the experimenter has eliminated all but a few causal elements from playing an active role.

Mathematics is a form of model building: it represents something in a symbolic language that obeys a set of rules. For it to be tractable, various elements and properties have to be stripped out or simplified (a whole genre of physics jokes has the punch line of assuming some random noun is a perfect sphere).

So how do we relate models to the "real" world? The most obvious place to start is how well a model predicts phenomena. If a model makes highly accurate predictions about how things will behave, and we can go out and test those predictions, then you have a good model. A famous example of this was Einstein's theory of relativity: a mathematical model of physical phenomena that made some very specific predictions, which people went out and tested, verifying the theory.

But things start to get a bit messy here. If a model accurately predicts some measurable phenomenon, does that mean that the way the model makes those predictions is the way things work in reality? What happens if two different models are both able to accurately predict the same thing, but through different mechanisms?

This issue came into focus with quantum physics, where phenomena could be accurately predicted with mathematical models, but when you looked at what the math seemed to say about what was going on, it was difficult or impossible to conceptualize. This was when Bohr developed his Copenhagen interpretation, which said, in effect: ignore the inner workings of the model; we can never conceptualize them, only use these things as predictive tools.

This view of models as essentially black boxes, where all that matters is input and output, was also being embraced around the same time in psychology by the behaviorists. These were the people led by B.F. Skinner and John B. Watson, spurred by Pavlov's conditioning paradigm. Psychology, they argued, should focus on building black-box models of behavior; the inner workings don't matter, all that needs to happen is that a given model can predict behavior from a set of inputs. Sixty years of psychological research focused on this type of science.

But this instrumentalist view of models turned out to be horribly limited. In the 1960s a revolution in psychology (and other fields, but this isn't a book) started to bring back the idea that the inner workings of a model, and how the model went about making its predictions, could be very important. Chomsky gave one of the best examples of this when he argued against the behaviorist view of language acquisition by bringing in the types of errors children make when learning language. He demonstrated that there was an underlying cognitive system in place that had to be accounted for. This started the connectionist movement: with the advent of computers, people began trying to construct complex models that not only ultimately learned, but also learned in a way similar to the way people do. A huge focus developed on cognitive errors as a way of assessing model accuracy: does the model make the same sorts of errors that a child would?

This marks an important development in views of models: models were no longer just predictive; the way a model went about solving something was thought to correspond to reality as well. And the quality of a model was judged not just by how well it made accurate predictions, but by how its individual components and methodology were linked to the real world.

Enter the work that I do. The model I develop is an attempt to capture how animals (including humans) might learn through reinforcement and trial and error. Not only must my model learn in a way similar to people, it must also be shown to make errors in learning and action choice the same way animals do, and the various individual terms in the mathematical model have to be shown to have neural correlates. For example, I show how one term in my equations changes over time, and show that certain dopamine-mediated cells mimic how that term changes. So my component pieces have to be matched to neural components as well. Things like drug manipulations and lesions are also brought into play: I have to program in analogs for injecting cocaine or amphetamine, or for having the hippocampus removed, and show that my model's behaviour changes in a way similar to the animals'.
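For a flavor of what such a term might look like, here is generic temporal-difference (TD) learning on a toy two-state task. This is not the author's actual model; the prediction-error term `delta` is simply the kind of quantity that is often compared against phasic dopamine responses in this literature.

```python
# Generic TD value learning on a toy episode A -> B -> terminal (reward 1.0).
# `delta` is a reward-prediction error: the sort of model term that gets
# matched against dopamine-cell recordings. Parameters are illustrative.

def td_learn(episodes=500, alpha=0.1, gamma=0.9):
    V = {"A": 0.0, "B": 0.0}  # learned value estimates per state
    for _ in range(episodes):
        delta = 0.0 + gamma * V["B"] - V["A"]  # A -> B: no reward yet
        V["A"] += alpha * delta
        delta = 1.0 + gamma * 0.0 - V["B"]     # B -> terminal: reward 1.0
        V["B"] += alpha * delta
    return V

values = td_learn()
# V(B) converges toward 1.0, and V(A) toward gamma * V(B) = 0.9: early
# states come to predict discounted future reward, and delta shrinks to 0.
```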

Models these days go through a lot before they are accepted as usable representations of how things might actually work. The example brought up on the talk page was climate models making predictions about temperature increases over long spans of time, and the accusation that this isn't science.

The thing is, this is too simplistic a construction. First of all, good models don't merely make a single scalar prediction like that. There are always margins of error, and those margins increase the further you get away from the initial state. Without those margins of error it is hard to assess a statement or model for its accuracy. The second issue is what I have tried to talk about throughout this post: are we dealing with a merely instrumentalist model that makes a mathematical prediction based on some set of data? That could be something along the lines of fitting a curve to temperature data and extrapolating out. Or is it a model that attempts to mimic actual meteorological processes and runs simulations? How you assess these models differs greatly between the two paradigms, and the role of the starting assumptions, and the actual conceptual meaning of the results, shift as well.
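The "curve fitting and extrapolating" case, and the widening margins of error, can be made concrete with toy data (a made-up linear trend plus noise, not real climate data): an ordinary least-squares line, with a prediction standard error that grows the further you extrapolate past the observed range.

```python
import math
import random

random.seed(0)
# Toy "observations": a linear trend plus Gaussian noise (NOT real data).
xs = list(range(20))
ys = [0.5 * x + random.gauss(0.0, 1.0) for x in xs]

# Closed-form ordinary least squares: y = a + b*x.
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
a = ybar - b * xbar

# Residual standard deviation of the fit.
s = math.sqrt(sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2))

def prediction_se(x0):
    """Standard error of a new prediction at x0; grows with distance
    from the center of the observed data, i.e. with extrapolation."""
    return s * math.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)

# The margin of error widens as we extrapolate beyond the data (x <= 19).
assert prediction_se(25) < prediction_se(50) < prediction_se(100)
```

A mechanistic simulation would be assessed very differently: not by residuals around a fitted curve, but by whether its internal processes and components match the system being modeled.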

The take-home point is that science is all about model building; it is an integral part of the method, and without it we can't do science. However, model building without prediction and empirical observation is not science. Assessing models and their predictions is a complicated process: I am qualified to assess certain classes of models, but not others. Climate models are one such example. No climate scientist is going to be able to tell me whether my model of the role of dopamine in salience is accurate, and I can't tell whether their model of climate projections is.

This is where we get to the most pedestrian issue: we have to rely on expertise and peer review. If 90 percent of practicing climate scientists tell me that a given model is accurate given our current understanding, then the rational thing to do is to proceed under the assumption that its predictions are accurate. tmtoulouse 21:37, 8 July 2010 (UTC)

"In this discussion... models... simplify the number of interactable elements" - real-world science often involves making approximations or simplifications for practical purposes. It may be a conceptual simplification: a physicist may use Newtonian mechanics, not general relativity, to solve a problem. Or it may be numerical: he may use data measured to an accuracy of 1 meter, not 1 Angstrom. Sometimes the approximations lead to such small differences that one gives the matter scarcely any thought. But if one is using computers to find numerical solutions of complex differential equations on a fixed grid using inaccurate data, then it's hard to see how far into the future a reliable prediction can be made - there is surely a limit. I think you have to use common sense and a lot of experience (I don't have this) to judge what kinds of results can be reliable. If you produce information that can be trusted - i.e. knowledge - then I think you are indeed doing science.
In mathematics, it is different. One is dealing with a world of abstract concepts based on (possibly ridiculous) axioms assumed to be true. The scientific method still plays a role (e.g. verifying conjectures, discovering conjectures, and peer review to check for logical errors). But the method of logical deduction takes the spotlight. When used properly, it produces results no less certain than the original axioms. Assuming you don't have software or hardware bugs, a computer simulation that results in a proof of a theorem can be accepted with certainty. I think there aren't really big controversies (like global warming) or pseudosciences in mathematics because (1) much of it is unassailable and (2) most people don't care about abstract mathematics. There have been controversies, such as the use of infinitesimals (which has been formally legitimized in the branch of non-standard analysis) and the assigning of values to divergent series (such as 1+2+3+4+...=-1/12). The latter was used to amazing effect by geniuses like Euler, and may be used in quantum field theory today, but most mathematicians consider it non-rigorous (without introducing a lot of qualifications) and don't trouble themselves with it.
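For the curious, the standard rigorous sense given to that divergent-series identity is analytic continuation of the Riemann zeta function:

```latex
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \qquad \operatorname{Re}(s) > 1,
\qquad\text{continued analytically:}\qquad \zeta(-1) = -\tfrac{1}{12}.
```

Formally substituting $s=-1$ into the series gives $1+2+3+4+\dots$, so the "$=-1/12$" is shorthand for the value of the continuation, not a limit of partial sums - exactly the sort of qualification alluded to above.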
I'll mention another field of science, one that is similar to mathematics: the field of human action (e.g. Austrian economics). On this site, I expect to be flamed for believing in what is probably deemed pseudoscience. But I think it yields true results nonetheless. Austrian economics is analogous to mathematics in that one starts with axioms - in this case referring to human behavior. The axioms themselves are based on introspection or empirical observation. Results derived from the axioms are deduced through logical argumentation (almost entirely verbal, not mathematical). I am thinking about writing a page on this subject here - at least from being flamed perhaps I will learn something. Doubledork (talk) 22:55, 8 July 2010 (UTC)
BTW, if Austrian economics is really a pseudoscience, then I wonder why it hasn't been debunked through published scientific experiments (or supported through fake experiments), like ESP has. Maybe I just haven't found the literature yet. Or perhaps some of its conclusions are just too hard to test: e.g. that an economy run by a socialist economic planning board will ultimately collapse, or that interest rates fixed by a controlling central bank at a value below the natural rate will lead to an economic bust. While I believe in the conclusions, they are good examples of going far down the path of deduction without experimental checking, so I don't blame others for doubting them. Doubledork (talk) 23:06, 8 July 2010 (UTC)
Nobody said Austrian economics is a pseudoscience, just a theory that may be based on faulty axioms. - π 23:15, 8 July 2010 (UTC)
Speaking of models, I was once halfway through a blog post (which I never finished, and so deleted) that used models as an analogy for theories. Not just models in the sense you describe, but extending by analogy to actual physical models, like Airfix kits. A good plastic model kit of a plane gives you quite a bit of information about the real plane: what it looks like, its proportions, to an extent its aerodynamics. A slightly bigger and more expensive one will give you the same info in better detail. At the other end of the spectrum you can make something out of cardboard, which is simpler but only gives you very basic info. Theories are, of course, only models of how the universe works. Do you think electrons really go around with the Schrödinger equation etched on their backsides? No, but the maths of quantum mechanics works as a usable model for science. Anyway, just throwing that out there, since the original question asked "are models science?" - and I'd say yes, very much so. Scarlet Anarchist 23:41, 8 July 2010 (UTC)

Airfix[edit]

I love Airfix. I have a catalogue from like 1968 in my "library". I also might have had a mild toluene problem as a teenager... ħuman (User talk:Human) 01:57, 9 July 2010 (UTC)

Now I've spotted this, I'm sure I had a dream about Airfix models last night... Scarlet Anarchist 22:46, 12 July 2010 (UTC)