Talk:List of scientifically controlled double blind studies which have conclusively demonstrated the efficacy of homeopathy


Molecular memory of water

Here is your proof and it is scientific http://br.geocities.com/criticandokardec/benveniste01.pdf. Excuse me while I "vigorously" shake my bottled water, lol.-- I am the AlphaTimSand the Omega!. 15:43, 17 October 2007 (EDT)


I'd argue we don't need "conclusively" as much as "weren't invalidated by later studies"...but that's wordy.--PalMD-Goatspeed! 15:42, 26 July 2007 (CDT)

It's wordy anyway, Doc. But I wanted to remove wiggle room for any studies that might have shown a marginal advantage over placebo. But if you want to improve it, please go ahead. --Bob_M (talk) 15:45, 26 July 2007 (CDT)


Might get a visit from sshh you know who. KEEPquiet! 16:16, 26 July 2007 (CDT)


Nice article, shame about the title KEEPquiet! 16:18, 26 July 2007 (CDT)

Yes, it at least needs a lot less caps. How about "(List of )scientific proofs for the efficacy of homeopathy"? The part in parens is optional, I think. humanbe in 18:27, 9 September 2007 (CDT)

tumbleweeds

Nice one!!! We should have a template that includes crickets chirping and tumbleweeds blowing around... humanbe in 18:33, 9 September 2007 (CDT)

Who would pay for this study?

How about a list of double blind studies on homeopathy, whether or not efficacy was demonstrated?

Double blind studies cost money. Who is going to pay for this? HeartGoldSwarm like a hive 17:11, 29 July 2007 (CDT)

Presumably, someone who doesn't want homeopathy considered a pseudoscience. --Kels 17:36, 29 July 2007 (CDT)
Such studies have been carried out. They have not shown any benefit for homeopathy.--Bob_M (talk) 01:32, 30 July 2007 (CDT)


This site is hilarious. You guys not only present the facts, but in an entertaining manner. Bravo. — Unsigned, by: 99.235.46.169 / talk / contribs

Thank you. We try. --Star of David.png Radioactive afikomen Please ignore all my awful pre-2014 comments. 23:21, 24 February 2008 (EST)
I protest: until you have gone through all existing scientifically controlled double blind studies concerning all topics, and presented explanations as to why they do not conclusively demonstrate the efficacy of homeopathy, this is not a fact. --84.109.54.156 17:03, 1 July 2008 (EDT)
Additionally, are we REALLY sure the Holocaust happened? Until we have gone through every photo taken during World War 2 and all documents, we can't conclusively demonstrate the efficacy of the Holocaust, this is not a fact.--Tom Moorefiat justitia ruat coelum 17:06, 1 July 2008 (EDT)


ARRGHHH!!!! (facepalm, facepalm, facedesk). There is no requirement in science for "100% disprove-ism". The null hypothesis in any good study of homeopathy should run along the lines of "homeopathy will have no effect on (useful operational health measure)." The null hypothesis will either be supported or not supported by the data. Also, Bayesian analysis must be applied, especially given that the theory behind homeopathy is so damned improbable. At this point, given the data, and the lack of plausibility, it's pretty damned clear that homeopathy is a bunch of bullshit.-- Asclepius staff.png-PalMD --Do not read my blog 17:17, 1 July 2008 (EDT)
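For anyone who wants to see that Bayesian point in numbers, here is a minimal sketch. The prior (1 in a million), the assumed trial power (0.80) and the false-positive rate (0.05) are illustrative assumptions, not figures from any particular study:

```python
# Minimal sketch of the Bayesian point: when the prior plausibility of a
# mechanism is tiny, one "statistically significant" positive trial barely
# moves the posterior. Prior, power and false-positive rate are assumptions.

def posterior_prob(prior, power=0.80, alpha=0.05):
    """P(effect is real | one positive trial), via Bayes' theorem."""
    p_pos_if_real = power      # chance of a positive result if the remedy worked
    p_pos_if_null = alpha      # chance of a false positive if it is inert
    numerator = p_pos_if_real * prior
    return numerator / (numerator + p_pos_if_null * (1 - prior))

for prior in (1e-6, 1e-3, 0.5):
    print(f"prior {prior:g} -> posterior after one positive trial: "
          f"{posterior_prob(prior):.6f}")
# With a prior of 1e-6 the posterior is still only about 1.6e-5, i.e. the
# positive trial is far more likely to be a false positive than a real effect.
```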

The Lancet published a meta-analysis in 2005 that came up with no effect above placebo.--Bobbing up 17:20, 1 July 2008 (EDT)
Actually, now that I think about it, there are thousands of tribal religions. We need to put disclaimers about inaccuracy on our atheism page until we have reviewed each religion's claims for validity, until we can conclusively demonstrate the efficacy of atheism in each instance.--Tom Moorefiat justitia ruat coelum 17:21, 1 July 2008 (EDT)
Good point. We need to examine all the fairy data as well before we dismiss it! Or perhaps we should suggest that it is the people who are presenting these remarkable ideas who have the obligation to present the remarkable evidence. --Bobbing up 17:33, 1 July 2008 (EDT)
Well, we do have a rather comprehensive (though far from complete) list of gods that atheists and Christians don't believe in... ħumanUser talk:Human 20:53, 1 July 2008 (EDT)

Homeopathic vets

When homeopathic vets, of whom there are a growing number, put an animal down, do they use a terribly small gun or an infinitesimal dose of something completely harmless? Simon 20:02, 15 December 2008 (EST)

Hehe. I've heard radio ads for them on my liberal talk radio... they dilute the lead 10X300 and hold it near the animal's head... no, wait, homeopathy cures all, they never have to put down an aminal! ħumanUser talk:Human 20:41, 15 December 2008 (EST)

Well that explains why my internet has been so sluggish today

A Reddit link to this site got picked up; it's the largest single-day traffic spike we have ever had. About three times what the Lenski debacle hit us with. Good news is the site didn't crash! tmtoulouse 05:47, 31 May 2009 (UTC)

International coverage :). 216.221.87.112 21:16, 31 May 2009 (UTC)

Dear goat. --Prim arthropleura.jpg 21:22, 31 May 2009 (UTC)
Wow, I wondered what it was - the slow down started last night, I think. What's funny is that this is probably the shortest article on RW! It's only four words longer than its title. Cool. ħumanUser talk:Human 21:27, 31 May 2009 (UTC)
Yeah, it's been a pretty significant traffic spike. The fact that we are still up and running and not ground to a halt is a testament to moving to a private server, as well as the hard work Nx put in making sure that the caching system worked to keep the processors free. tmtoulouse 21:30, 31 May 2009 (UTC)

Added by BON 122.164.130.200

This just shows the complete and utter bias, sending these silly images across the screen. Please take the time to research these:

1. Office of Technology Assessment. Assessing the Safety and Efficacy of Medical Technology, Washington, D.C., U.S. Government Printing Office, September 1978, p.7.

2. Bellows HP. The Test Drug Proving of the O.O. and L. Society: A Reproving of Belladonna. Boston: The American Homeopathic Ophthalmological, Otological, and Laryngological Society, 1906.

3. Paterson J. Report on mustard gas experiments. Journal of the American Institute of Homeopathy, 37(1944): 47-50, 88-92.

4. Owen RMM and Ives G. The mustard gas experiments of the British Homeopathic Society: 1941-1942. Proceedings of the 35th International Homeopathic Congress, 1982, pp. 258-259.

5. Sacks A. Nuclear magnetic resonance spectroscopy of homeopathic remedies. Journal of Holistic Medicine, 5 (Fall-Winter 1983): 172-175.

Boericke GW and Smith RB. Changes caused by succussion on NMR patterns and bioassay of bradykinin triacetate (BKTA) succussions and dilutions. Journal of the American Institute of Homeopathy, 61 (November-December 1968): 197-212.

6. Gupta G and Singh LM. Antiviral efficacy of homeopathic drugs against animal viruses. British Homeopathic Journal, 74 (July 1985): 168-174.

7. Cazin JC et al. A study of the effect of decimal and centesimal dilution of arsenic on retention and mobilization of arsenic in the rat. Human Toxicology, July 1987.

8. Baumans V, Bol CJ, oude Luttikhuis WMT, and Beynen AC. Does chelidonium 3X lower serum cholesterol? British Homeopathic Journal, 76 (January 1987): 14-15.

9. Day C. Control of stillbirths in pigs using homeopathy. Veterinary Record, 114 (March 3, 1984): 216 reprinted in Journal of the American Institute of Homeopathy, 779 (December 1986): 146-147. Day C. Clinical trials in bovine mastitis: use of nosodes for prevention. British Homeopathic Journal, 75 (January 1986): 11-15.

10. Choudhury H. Cure of cancer in experimental mice with certain biochemic salts. British Homeopathic Journal, 69 (1980): 168-170.

11. Keysall KL, Williamson KL, and Tolman BD. The testing of some homeopathic preparations in rodents. Proceedings of the 40th International Homeopathic Congress (Lyon, France, 1985) pp. 228-231.

12. Boiron J, Abecassis J, and Belon P. The effects of Hahnemannian potencies of 7c histaminum and 7c apis mellifica upon basophil degranulation in allergic patients. Aspects of Research in Homeopathy (Lyon: Boiron, 1983) pp. 61-66.

13. Davenas E, Poitevin B and Benveniste J. Effect on mouse peritoneal macrophages of orally administered very high dilutions of silica. European Journal of Pharmacology, 135 (April 1987): 313-319.

14. Gibson RG, Gibson SLM, MacNeil AD, et al. Homeopathic therapy in rheumatoid arthritis: evaluation double-blind controlled trial. British Journal of Clinical Pharmacology, 9 (1980): 453-459.

15. Albertini H et al. Homeopathic treatment of neuralgia using arnica and hypericum: a summary of 60 observations. Journal of the American Institute of Homeopathy, 78 (September 1985): 126-128.

16. Claussen CF, Bergmann J, Bertora G, and Claussen E. Homöopathische Kombination bei Vertigo und Nausea (Homeopathic combination for vertigo and nausea). Arzneim. Forsch/Drug Res., 34 (1984): 1791-98.

17. Dorfman P, Lasserre MN and Tetau M. Preparation a l'accouchement par homeopathie: experimentation en double-insu versus placebo (Preparation for birth by homeopathy: experimentation by double-blind versus placebo). Cahiers de Biotherapie, 94 (April 1987), 77-81.

Now are you just blind, or Double Blind?


18:25, 29 June 2010 (UTC) SusanG Toast

I may look up that NMR one. I've often thought about hyperpolarising a homeopathic dilution and trying to punk the journal of homeopathy. Scarlet A.pngsshole 11:50, 11 July 2010 (UTC)

Meta studies

But according to a Lancet meta-study on homeopathy published in 1997:

The combined odds ratio for the 89 studies entered into the main meta-analysis was 2.45 in favor of homeopathy. The odds ratio for the 26 good-quality studies was 1.66, and that corrected for publication bias was 1.78. Four studies on the effects of a single remedy on seasonal allergies had a pooled odds ratio for ocular symptoms at four weeks of 2.03. Five studies on postoperative ileus had a pooled mean effect size difference of -0.22 standard deviations for flatus, and -0.18 SDs for stool.

Interpretation: the results of our meta-analysis are not compatible with the hypothesis that the clinical effects of homeopathy are completely due to placebo. However, we found insufficient evidence from these studies that homeopathy is clearly efficacious for any single clinical condition. Further research on homeopathy is warranted provided it is rigorous and systematic.

Paper in its entirety may be read at: http://www.homeovet.cl/BRIONES/Are%20the%20clinical%20effects%20of%20homoeopathy%20placebo%20effects%20%20A%20meta-analysis%20of%20....pdf

The Lancet meta-study in 2005 seemed to consider 110 studies but threw out all but 8 that it thought were big enough, as if that were the best criterion for inclusion. All 8, of course, found against homeopathy.

Rationalskeptic (talk) 11:46, 11 July 2010 (UTC)
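For readers wondering what a "combined odds ratio" like the 2.45 quoted above actually is, here is a minimal sketch of the standard fixed-effect (inverse-variance) pooling of log odds ratios. The 2x2 counts are invented for illustration, and this is not necessarily the exact model the 1997 Lancet authors used:

```python
# Minimal sketch of a fixed-effect (inverse-variance) pooled odds ratio.
# The 2x2 tables are made up for illustration only.
import math

# (improved on remedy, total on remedy, improved on placebo, total on placebo)
studies = [(30, 50, 20, 50), (18, 40, 12, 40), (25, 45, 19, 45)]

weighted_sum, weight_total = 0.0, 0.0
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c                    # non-improvers in each arm
    log_or = math.log((a * d) / (b * c))     # log odds ratio for this study
    weight = 1 / (1/a + 1/b + 1/c + 1/d)     # inverse of its approximate variance
    weighted_sum += weight * log_or
    weight_total += weight

print(f"pooled odds ratio: {math.exp(weighted_sum / weight_total):.2f}")
```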

What part of "conclusively demonstrated" confuses you? Also homeopathy is just water, so it is never going to work anyway. - π 11:59, 11 July 2010 (UTC)

You are arguing backwards from your paradigm: "It can't work because I don't think it can", rather than looking at what evidence there is. How do you know it's "just water"? Teaching is "just talk", how can that work?

I am trying to publish the results of a Lancet study that refutes the obvious error of this article which purports that there is NO evidence. There is evidence, whether you personally want to accept it or not. Lancet is a peer-reviewed journal. Are you afraid of looking at the evidence? Rationalskeptic (talk) 12:08, 11 July 2010 (UTC)

You see this bit: "However we found insufficient evidence from these studies that homeopathy is clearly efficacious for any single clinical condition." That is the important bit. Whilst it is discernibly different from the placebo effect, there is insufficient evidence that it does anything. Just make sure you drink 8 cups of water a day, it is good for you. - π 12:15, 11 July 2010 (UTC)
(EC)The 1997 Lancet study specifically concludes that the evidence has not "conclusively demonstrated the efficacy" of homeopathy, particularly with regard to a specific treatment working on a specific malady - which is what you expect of a medicine, and so far most alternative medicines fail that test. The discussion section towards the end looks into why the overall effect may be positive and makes some interesting reading if you care to delve into it. They cite the lack of very good, conclusive and large-scale trials and the fact that even small biases (i.e., a study that was only 95% perfect) can still produce highly erroneous results if the bias is placed in a powerful position. It's very interesting that they don't believe publication bias is a major factor; I'd presume the number of unpublished results was much higher, but I trust their research is thorough in coming up with the 15-30 figure. This is why the 2005 meta-study only looked at 8 of the biggest and best studies, because they are most reliable and aren't prone to small biases creeping up and up and up into an apparently significant effect. As the 1997 study states, large decent trials are expensive, and who is going to foot the bill for these? Not to mention the fact that large trials just cement the idea "oh, there must be something to it!" in the public mind even if the result is negative, which is something I haven't considered before although it makes some sense. They also state how the majority of the trials looked at are made by advocates rather than skeptics, and such biases can swing results quite far (Bad Science has a summary of different effects and how far they can swing results). Scarlet A.pngsshole 12:21, 11 July 2010 (UTC)

Bigger is not better when it comes to studies. Here is just one indication of the problems with the 2005 meta-study based on their selection of studies to be included:

  • Studies using "clinical homeopathy". Patients did not receive a comprehensive homeopathic history and all patients received a single, identical remedy. This accounted for 48, or 44% of the homeopathy studies analyzed in the Lancet meta-analysis.
  • Studies using "complex homeopathy". Patients did not receive a comprehensive homeopathic history and all patients received a mixture of different commonly used homeopathic remedies. This accounted for 35, or 32% of the homeopathy studies analyzed.
  • Studies using "classical homeopathy". Patients were given a comprehensive patient history and received a single, individualized remedy. This accounted for 18, or 16% of the homeopathy studies analyzed.
  • Studies using "isopathy". Patients did not receive a comprehensive homeopathic history and all patients received a diluted substance that was believed to be the cause of the disorder (e.g. pollen in seasonal allergies). This accounted for 8, or 7% of the homeopathy studies analyzed.

Only the 3rd group of studies, "classical homeopathy" (only 16% of the total), is really a judge of true homeopathy as it's defined. How many of the 8 finally selected studies were in that group? I don't know, but I doubt many were, since they require a comprehensive patient history, which is not easy to do in a large study.

Rationalskeptic (talk) 12:42, 11 July 2010 (UTC)

Let me get this straight. Before, you were pushing the Lancet paper as having "conclusively demonstrated the efficacy of homeopathy", but now that we have pointed out that it doesn't, you are criticising it? - π 12:46, 11 July 2010 (UTC)
(EC)First off, ';' is your friend. Second off, the argument is, "people who get extra attention feel better"? That's very true, whether it's psychology, homeopathy, or just chillin with friends. That doesn't prove the drug is more useful. And it's not. It just isn't. Sorry. Quaru (talk) 12:53, 11 July 2010 (UTC)
Really that's just evidence that it is a placebo - and is a better placebo when the homeopath takes time to interview a patient and gives them added attention to enhance that. No one denies this and indeed the evidence outright supports it. But this is not the same as "efficacy". If there was any truth to homeopathic claims, the clinical homeopathy or isopathy would be more than sufficient to prove it, so far it fails. Scarlet A.pngsshole 12:51, 11 July 2010 (UTC)
There are lots of talk therapies that have poor outcomes. Rationalskeptic (talk) 13:08, 11 July 2010 (UTC)
Seriously, ":". And there are also lots of cancer therapies that have poor outcomes. It still increases the effectiveness of a placebo. This is why studies are double blind, so as not to add to the placebo effect, to either drug. Quaru (talk) 13:14, 11 July 2010 (UTC)
Now I see where all the fun has been today! Anyway, in order for the test to be truly "double blind" all the subjects must receive what appears to be the same treatment - if doing the "classical" above, even the placebo patients should receive a full "homeopathic history", and the homeopath must not know whether the patient is getting their prescribed cocktail or the other glass of pure water. PS, is there a reason Bond preferred his martinis "shaken, not stirred"? ħumanUser talk:Human 01:57, 12 July 2010 (UTC)

Where did I say that it "conclusively demonstrated" anything?? All I did was try to publish the findings of the Lancet article. It says all I want to say in this venue. Are you afraid that readers can't draw their own conclusions? Rationalskeptic (talk) 12:49, 11 July 2010 (UTC)

It says "conclusively demonstrated the efficacy of homeopathy" in the title of the article. - π 12:51, 11 July 2010 (UTC)

Which article are you referring to? Rationalskeptic (talk) 12:56, 11 July 2010 (UTC)

The one you were typing on. - π 12:57, 11 July 2010 (UTC)

Click on the link that I posted and read the title. That is the only article that I was "typing on". Rationalskeptic (talk) 13:03, 11 July 2010 (UTC)

The article on the wiki on which you were typing. (please try and imagine everything I have typed I said really slowly) - π 13:06, 11 July 2010 (UTC)
Dude. http://rationalwiki.org/wiki/List_of_scientifically_controlled_double_blind_studies_which_have_conclusively_demonstrated_the_efficacy_of_homeopathy what part of "conclusively_demonstrated_the_efficacy_of_homeopathy" is confusing?

Use the proper terminology of "editing". Say the right words and people will understand you, no matter how slowly you speak. Let's stop insulting each other. Ok? 13:14, 11 July 2010 (UTC)

So you edit by screaming loudly at the computer? You typed the Lancet article? I really don't see why the terminology confused you. Quaru (talk) 13:16, 11 July 2010 (UTC)
I've already run into a bit of a quandary when wondering what "article" I'm referring to; is it the wiki ones or the journal ones? This is kind of annoying. Scarlet A.pngsshole 13:42, 11 July 2010 (UTC)
My favorite line from that conclusion: "Homeopathic remedies can be perfectly matched with placebos..." hahaha. (p. 840) ħumanUser talk:Human 02:07, 12 July 2010 (UTC)

List of scientifically controlled double blind studies in general

It appears we could do with a list of studies in general and what they do and don't say. The 1999? study that showed effectiveness but which turned out to have ridiculously awful lab conditions should be listed, for example. As should all those listed above - David Gerard (talk) 12:46, 11 July 2010 (UTC)

So just a "List of scientifically controlled double blind studies of homeopathy"? - π 12:48, 11 July 2010 (UTC)
Yuh. Keep this page as a summary, make said detailed article a "see also" (and this one there) - David Gerard (talk) 12:51, 11 July 2010 (UTC)
There are so many that it would be difficult. The big meta-studies take ages to collate and are pretty comprehensive. So either it'd be A) a copy and paste job or B) just link out to the relevant reviews. Scarlet A.pngsshole 12:52, 11 July 2010 (UTC)
I don't mind the link farm idea; at least it will mean that all this information can be found in one place, vis-à-vis here. - π 12:54, 11 July 2010 (UTC)

A list of studies would be ok but the meta-studies are easier to wrap one's head around. And they can be used to find the individual studies if one is so inclined. The meta-studies did the work of identifying what (they thought) were quality studies. What's the problem with publishing the Lancet meta-study conclusion and letting the readers make their own decision? Rationalskeptic (talk) 12:59, 11 July 2010 (UTC)

Nothing, in fact the Lancet study is referenced in the main homeopathy article. Although that list does miss out the 1997 study, which may be useful as the conclusions on biases are useful information. Scarlet A.pngsshole 13:05, 11 July 2010 (UTC)

Fine, include both the 2005 and 1997 meta-studies and let the readers draw their own conclusion. Rationalskeptic (talk) 13:16, 11 July 2010 (UTC)

Actually, the 1997 study is interesting because of the correspondence sent around about it. Similar things happened with the 2005 study and they are now mentioned in the article. The other two papers need mentioning and I will look those up when ISI is back online. Scarlet A.pngsshole 13:40, 11 July 2010 (UTC)

Grammar?

"Which" vs. "that" is something I've never really understood properly, but I think this page should be moved to List of scientifically controlled double blind studies that have conclusively demonstrated the efficacy of homeopathy, the idea being "have conclusively demonstrated the efficacy of homeopathy" is integral to the rest of the phrase rather than a passing mention, meaning "that" is more appropriate than "which". Thoughts? X42bn6 (talk) 07:34, 5 August 2010 (UTC)

According to this site you're right, it should be "that." Lyra § talk 05:22, 6 August 2010 (UTC)
Who cares? Move it if you want to. ħumanUser talk:Human 08:17, 6 August 2010 (UTC)
I'm not sure it matters too much in a title, as you can play a bit more fast and loose with grammar, but it's true that "which" follows a comma while "that" doesn't. Scarlet A.pngsshole 08:22, 6 August 2010 (UTC)
I have seven commas I stole from Wikipedia tonight. Where do you want me to stick them? Before your "which" or after your "that"? Either one sounds painful. ħumanUser talk:Human 08:52, 6 August 2010 (UTC)
It would seem that British and American English have slightly different opinions about the importance of this. --BobSpring is sprung! 09:45, 6 August 2010 (UTC)

this latest addition

Anybody smarter than me want to look at that? In my high school dropout eyes, the variance between the placebo and the "medicine" seemed rather insignificant--Brxbrx (talk) 20:07, 21 April 2011 (UTC)

It was a pilot study to begin with, so their sample size was obviously low. The researchers note that one of the scales they used wasn't validated in other published literature. They didn't really get any improvement in their overall measures, just in the ones where they ran a bunch of correlations between measures (and even they admit that many of the significant ones disappear with adjustments for other variables). Basically, it's really patchy and preliminary, at best, which the authors straight-up say in the paper. Just another case of someone looking at a paper with the word "homeopathy" in it and going, "Look, it's peer-reviewed!" Nebuchadnezzar (talk) 20:40, 21 April 2011 (UTC)

Moved from article

It is fascinating to me that a supposedly 'rational' site, which believes in empirical evidence, would delete my posting here of a well-performed and well-received double-blind study regarding the efficacy of homeopathy in the treatment of closed-head brain injury.

One wonders if the supposedly 'rational' person who took down my posting is more wed to their idea of what should be, than they are to a true and open inquiry into what is. You may not understand how homeopathy works (I don't), but if it is proven to work, then one must conclude that there's something going on. I cited a reputable study that indicated just that. I DARE you to LEAVE the link to the study up so that people truly interested in a rational discussion can SEE FOR THEMSELVES.

Many, many things, most famously the germ theory of disease, were derided in the past by so-called rational 'men of science'. I personally think there's room in the world for both the germ theory and homeopathy, even if some folks on either side can only accept one or the other. And I think the history of science is RIFE with examples of the supposedly rational, empirical powers that be discounting a finding merely because it disagreed with their entrenched world view. In fact, I even experienced this personally: after a severe spinal injury, I 'cured' supposedly 'incurable' spondylolisthesis - and I have the X-rays to prove it. And yet, the physicians could not accept this as it was too far outside their frame of reference. It was actually easier to discount what their own eyes told them and blame it on mixed-up X-rays (impossible, since there are quite specific bony landmarks that identify both plates as coming from the same spine).

Let's keep an open mind and look at the studies. Do not reflexively say that any study that tends to prove something you don't believe is ipso facto erroneous - this is NOT rational thought; it's more akin to faith-based science - you automatically discount and deride that which challenges your assumptions - this makes your thought process more akin to someone speaking in tongues in a Pentecostal church than that of a rational thinking being.

Please read the study with your 'science' glasses on, not your 'faith' or 'auto-derision' glasses on:

www.homeopathy.org/research/clinical/Chapman.pdf — Unsigned, by: 24.168.56.248 / talk / contribs

Yes, I did read it. It's not particularly conclusive, nor is it highly powered. ADK...I'll sniff your earlobe! 12:49, 24 April 2011 (UTC)
Asking patients to score themselves 1 to 5 on such things as "writing letters" is hardly a good measure of whether they are improving. In fact it pretty much guarantees a high placebo effect. - π 13:04, 24 April 2011 (UTC)
I am only at the "Method" stage and I come across this sentence: "these symptoms were then cross-referenced with homeopathic medicines known to elicit or cure these symptoms".
It is never a good start to assume what you are trying to prove before you begin the test. DamoHi 13:09, 24 April 2011 (UTC)
(EC)Also looking at Table 4 on page 531, I notice that the placebo group has significantly more people with previous experience with alternative medicines. That would bring into question their ability to act as a placebo group in these circumstances. - π 13:10, 24 April 2011 (UTC)
There have been many instances of spontaneous cures or reversal of symptoms, but none of these "prove" the efficacy of any non-mainstream therapy; they merely highlight that there is more going on in the body (especially the brain) than we currently understand.  Lily Inspirate me. 13:16, 24 April 2011 (UTC)
@pi. Wouldn't that be a point in the study's favour? I mean if the placebo group are more likely to believe in alternative medicine shouldn't the placebo effect be stronger? DamoHi 13:26, 24 April 2011 (UTC)
Not if they failed in the past. They could have been thinking "not this bullshit again". - π 13:27, 24 April 2011 (UTC)
Hmm, but if they felt that, why would they sign up for a homeopathy test. It's not a big point though. DamoHi 13:29, 24 April 2011 (UTC)
It is, considering that their entire measurement system is asking the patient to score themselves on a range of tasks. That could not be more subjective if you tried; prior biasing is going to have a huge effect on that. - π 13:33, 24 April 2011 (UTC)
The assessment method isn't really a problem if it's the same in both groups - although it is pretty sucky. The randomisation and blinding were pretty standard but there could have been leaks, particularly with having labels on the vials. The only other methodological issue is the follow-up time, which is quite short and has no time resolution. Those values could be fluctuating but it wouldn't be shown if you just stop it at 4 months and say "wow, there's the effect we're looking for". The main problem is that the result just isn't massively significant: both groups show apparent improvement, which would be expected from a placebo effect on self-report data, and the improvement in the non-control group isn't staggeringly more impressive than in the placebo-control group. If you compare it with the power and sample size shown in the meta-studies, corrected for publication bias, the effect isn't in favour of any homeopathic remedy working at all. ADK...I'll refill your REM! 13:50, 24 April 2011 (UTC)
Also you could have a random effect where the homeopathic regime exhibits a statistically higher success rate than the placebo (with all the studies that have been done you will probably get a few). After all, the analysis group is only 27 patients being administered the homeopathic treatment so it really doesn't take many outliers to skew the results. This is why any single study cannot be used to demonstrate the principle of homeopathy in general. If homeopathy really worked it would be seen on a much wider basis. Although our BoN is advocating a rational approach it would appear that s/he is being fooled by randomness and selective bias in disregarding all the instances where homeopathy is shown not to work. One swallow does not make a summer and one study does not make a list. Furthermore the paper does not even claim decisive proof of the efficacy of homeopathy; it says "may" and "Our findings require large-scale, independent replication". Case not proved.  Lily Inspirate me. 14:47, 24 April 2011 (UTC)
P.S. Did anyone else notice that one of the authors is a Dr. Woo?
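A small simulation of the "only 27 patients" point above; the score scale and the two outlier values are invented purely for illustration, not taken from the Chapman paper:

```python
# With a subjective self-report outcome and ~27 patients per arm, a couple of
# enthusiastic responders can shift a group mean noticeably.
import random
import statistics

random.seed(1)
scores = [random.gauss(0.0, 1.0) for _ in range(27)]   # 27 scores, no true effect
with_outliers = scores[:-2] + [3.5, 4.0]               # swap in two extreme responders

print(f"group mean, no outliers:  {statistics.mean(scores):+.2f}")
print(f"group mean, two outliers: {statistics.mean(with_outliers):+.2f}")
# Two patients out of 27 move the mean by a few tenths of a standard
# deviation, comparable to the "effects" small trials report.
```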
It admits to being a pilot study, so the "our findings require large-scale, independent replication" is essentially a given. The more prestigious journals, such as the BMJ, have effectively banned such "more research needed" phrases because it doesn't really conclude anything. In fact, many doctors seem to be of the opinion that we should stop saying that when trials of alternative medicine come up weak and simply state that they don't work effectively. ADK...I'll stink your pastry! 15:06, 24 April 2011 (UTC)
Large-scale replication is key; this is a very small study with a small difference on a treatment that violates fundamental principles of chemistry and physics. You can take a look at the works that cite this paper; none of them are replication attempts. This study is 11 years old, so why has there been no published replication of the data if it is so meaningful? With the positive-result publication bias, nothing can be taken too seriously without replication. Certainly not something like homeopathy, which has so much counter-evidence against it. Tmtoulouse (talk) 15:22, 24 April 2011 (UTC)
We could bung it up on the evidence for homeopathy page. ADK...I'll freeze your mouth! 15:33, 24 April 2011 (UTC)

Error in title?

Shouldn't the title be, as to double blind studies, about the efficacy of "a homeopathic remedy," not of "homeopathy"? Homeopathic practice involves a lot more than the remedies, but it's probably impossible to do a double blind study of the effectiveness of homeopathy itself; it might work for reasons completely separate from the theory and any specific efficacy of the remedies themselves. As an example, it might work (be effective) if something about the whole process is effective in amplifying the placebo effect or other consciousness-mediated effect, compared to other modalities or no modality.

The distinction between efficacy of a remedy and the effectiveness of the treatment is covered in the British review of homeopathy but seems to have been missed in reporting on that review here. A remedy may be useless, by itself, but be effective in context, and studying this double-blind is far from simple. The requirements of double-blind might interfere with the modality.

If what RationalWiki wants to do is to collect arguments on one side of a controversy, while ignoring rational arguments -- or at least somewhat reasonable arguments -- on the other side, then, sure it can do that, but pretending that this is "rational" is just that: pretense.

I'm starting by pointing this out with regard to homeopathy precisely because I thoroughly understand why homeopathy shouldn't work. I don't really know that much about homeopathy. I do know much more about certain other fields where conventional wisdom is really woo in disguise. Where reason has been applied outside of the application of the scientific method.

So, here, I intend to move the article to reflect a more reasonable name, as suggested above, since "double blind study" is probably impossible. Imagine a List of scientifically controlled double blind studies which have conclusively demonstrated the efficacy of medicine. Not a medicine, but medicine. Obviously, lots of medicines don't work, and medicine is not the same as the practice of medicine, if we are thinking about specific medicines, which is what we'd been looking at with homeopathic remedies.

Rather, what is being tested in double blind studies of homeopathic remedies is a theory that they are effective in themselves, aside from the whole practice of homeopathy, including the interview, conversation, etc. Homeopathic theory is bunk, that's obvious, but, even there, there can be surprises, unexpected conditions, such as, one information site on Homeopathy pointed out, the dilutions not necessarily being as dilute as claimed. I think that particular theory is a red herring, though. The dilutions are very high, even if they aren't zero-molecule.

I've learned, in science, to distinguish between theory and experimental observation. Theory is actually a big danger to science, when it's placed before observation; when we believe theories, we often won't even see contrary evidence, or we will explain it away, making up stories without actual evidence.

And then I also propose another page, List of scientifically controlled double blind studies which have conclusively demonstrated the inefficacy of a homeopathic remedy. Surely we'd want to know which list is longer. And, of course, we'd want to apply the same standards to judge inclusion, standards such as those applied above, with prior proposals for the article.

Dose response

Are there any studies of dose-response? I.e., allegedly, homeopathically, very high dilutions ("activations") are more efficacious than low dilutions. That's perfect for a double blind study. Does the dosage make a difference? Higher dilutions are more expensive than lower dilutions, because there is more processing involved. Is there any gain from that? This is an issue that I'd think homeopaths would want to know. If they care about "objective," I'm not sure that many do.... but some might. --Abd (talk) 22:39, 18 February 2012 (UTC)

The less of the active agent is in the dose, the more it works?! What kind of sorcery is that?! --Arisboch (talk) 22:01, 17 March 2015 (UTC)

Joke getting old

It was funny when the breast cancer article did this, but personally, many more of these will just be overkill of a good joke. --Pink mowse.pngGodotGrow a vagina 17:29, 19 February 2012 (UTC)

Wasn't this one first? It might even have been the first or second ever (on RW that is). Sophiebecause liberals 17:31, 19 February 2012 (UTC)
I'm pretty sure this was first. (The original and the best.) Years before the breast cancer one anyway. And number 13 on popular pages.--BobSpring is sprung! 17:37, 19 February 2012 (UTC)
It's the God one that was first and definitely most popular, right? Noting that we're, of course, all of 4 minutes work from finding out the answer to that and I'm just too damn lazy to do so. Scarlet A.pngmoral 17:47, 19 February 2012 (UTC)
I apologize then. I think it was the *other* one today, that made me think someone was running around making new funny pages. Pink mowse.pngGodotGrow a vagina 17:49, 19 February 2012 (UTC)
This one: July 2007
Evolution: September 2007
God: August 2009
Breast cancer: August 2011.--BobSpring is sprung! 18:34, 19 February 2012 (UTC)
I've always hated these, but this one is apparently one of our most viewed pages, so we kind of have to keep it. Frustrating, but what can you do? WèàšèìòìďWeaselly.jpgMethinks it is a Weasel 19:52, 19 February 2012 (UTC)
Can we just put it in funspace? And the tumbleweeds have become a RationalWiki staple, so... Osaka Sun (talk) 20:21, 19 February 2012 (UTC)
Just leave it as is. For whatever reason, this brings in the people, and maybe a few of them might hang around for a while. DamoHi 21:29, 19 February 2012 (UTC)
What Damo said. If someone new finds it amusing, then it accomplishes something. It's also less than 0.02% of all pages, and nobody is forcing you to look at it. steriletalk 21:33, 19 February 2012 (UTC)
  • There is nothing wrong with the page itself, it is, in fact, funny, and makes a point. For rational debate, though, the point should be completed and balanced. Without abandoning the point. That's why I suggested additional pages. They might be empty, they might not be empty, I'm not prejudging that. My general point is that if we are attached to outcomes, and manipulate arguments only to produce our preferred outcome, we are hardly being rational. Rather, if we trust the scientific process, we trust it all the way. We don't just stop with results we like. --Abd (talk) 22:18, 19 February 2012 (UTC)

Recent drive-by

And this is evidence that a person with a science degree is not necessarily a scientist. This is just the first article that I came across showing fairly conclusive positive evidence for the effectiveness of homeopathy, using the words homeopathy and double blind.

http://www.ncbi.nlm.nih.gov/pubmed/17180695 — Unsigned, by: Factotum / talk / contribs

Anyone care to comment on this? Redchuck.gif ГенгисIs the Pope a Catholic?Moderator 01:58, 15 October 2012 (UTC)
The abstract doesn't reveal enough of the experimental design. What were the qualifications looked for in patient improvements? How was the placebo administered? What kind of homeopathic remedies are we talking about? I also don't know who funded and who conducted the study.
And even if this study is sound (which can't be known without looking at the whole paper), what about all the other studies disproving homeopathy? This would just be a statistical anomaly. It happens. That's why I think this article and articles like it should be deleted. It might be a nice zinger and it gets linked to a lot, but the criterion is too specific and the message is smug, rather than informative. Our articles covering homeopathy are good. Let people read those, instead.--"Shut up, Brx." 02:15, 15 October 2012 (UTC)
I was rather hoping that someone with academic access could cover these points. Redchuck.gif ГенгисOur ignorance is God; what we know is science.Moderator 02:29, 15 October 2012 (UTC)
I can check it tomorrow. statementword 02:34, 15 October 2012 (UTC)
I'm off for a week, so can't (besides, the original is in German, and in a fairly obscure journal). But in general, these things are looked into by meta-studies. Any trial can show a positive effect; the question is whether this effect is significant over a large value of N and with the right sort of objective criteria. This one, for instance, uses self-report data, which is often very ropey as far as evidence is concerned. Just having "placebo controlled" and "double blind" in the title doesn't make it work; it's entirely based on the method. Trials on SeaBands, for instance, involved a very precise piece of instruction for the experimental group but nothing for the control group. That's not like-for-like, therefore a flaw.
But primarily the results look very suspect to me. They're claiming the homeopathic remedy caused a near-magical effect, while the placebo group did nothing. If you got those sorts of results reliably from a homeopathic remedy, it should be in Science or Nature. If the effect truly was that strong, then the meta-studies would have picked it up and there would be no controversy over homeopathy - i.e., with an effect that strong, no trial ever could conclude "no effect", and this is something that I don't think happens even in real medicine. If they've made a mistake, it would be vital to show what factor caused it; if it's fraudulent, they've been very stupid. Because considering the lack of evidence in larger and better designed trials, fraud or cock-up is the most likely explanation. Scarlet A.pnggnosticModerator 10:13, 15 October 2012 (UTC)
Got the paper, don't read German. @Genghis if you want I can send it to you. statementword 12:27, 15 October 2012 (UTC)

Fuck.

Look at this shit. Someone went and did it. Made this page technically wrong. Either they fabricated their p value of < 0.0001, the methodology glossed over something crucial, or this page is now wrong. ikanreed You probably didn't deserve that 20:46, 17 March 2015 (UTC)

Interesting tidbits:
C-potencies are prepared by diluting a drop of a parent substance in 99 drops of ethanol followed by agitation of the solution (1 C). Then one drop of this solution is diluted in 99 drops of ethanol followed by agitation of the solution (2 C). This procedure is repeated in consecutive agitated dilutions (3 C, 4 C, and so on). In the HOMDEP-MENOP study each individualized homeopathic remedy was prescribed in C-potencies. Higher initial potencies were tried, only 30 C and 200 C were prescribed. The factors that influenced the selection of the potency included: clarity of mental symptoms, patient's vitality and sensitivity, nature and kingdom (source) of medicine, chronicity, presence of any pathological disorder . As previously stated, in an IHT the homeopathic doctor selects a remedy based on the specific and most important symptoms the patient has. This individualized prescription also includes the selection of the appropriate potency based on the factors mentioned. Thus, the prescription is individualized in the selection of the remedy and in the appropriate potency required by the patient.
Meaning higher potencies may have been used.
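For context, a quick sketch of the arithmetic behind the C-potency procedure quoted above. Starting from one mole of parent substance is an assumption chosen purely for illustration; real preparations start with far less:

```python
# Each C step is a 1-in-100 dilution, so an nC remedy is diluted by 10^(2n).
import math

AVOGADRO = 6.022e23  # molecules in one mole (the assumed starting amount)

def log10_molecules_left(c_potency, starting_molecules=AVOGADRO):
    # each 1:100 dilution subtracts 2 from log10 of the expected molecule count
    return math.log10(starting_molecules) - 2 * c_potency

for c in (6, 12, 30, 200):
    print(f"{c:>3}C: dilution factor 10^{2 * c}, "
          f"expected molecules left ~ 10^{log10_molecules_left(c):.1f}")
# Past roughly 12C the dilution factor exceeds Avogadro's number, so the
# expected number of molecules of the original substance is well below one.
```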
C-potencies [The homeopathic drugs] were provided by Laboratorio Similia (Mexico City) and were manufactured according to Mexican Homeopathic Pharmacopoeia and Hahnemann's methodology. .... Patients in fluoxetine group received 20 mg/d PO plus IHT-dummy loaded. IHT-dummy loaded was repeated at week 4. Capsules of a generic fluoxetine were provided by Laboratorio Similia (Mexico City).
Both the homeopathic drugs and the fluoxetine were supplied by Laboratorio Similia. They appear to be a homeopathic drug manufacturer. I'm not certain they're reliable, since they're possibly invested in pro-homeopathic results. (But, because I can't find any English-language articles on them, I can't really say they're unreliable, either.) Their website here: http://www.similia.com.mx/index.php
Five hundred thirty-four women seeking medical care for menopausal complaints were interviewed. Four hundred and one women did not meet inclusion criteria and were excluded. One hundred thirty-three women (24.9%) met inclusion criteria, accepted to participate in the study and were randomized as previously described. .... Taking into account an effect size (eta squared) = 0.262, a sample size = 133, a three-groups design, with a 5% risk of type 1 error, the result is 77%. Although we did not achieve a statistic power of >80% with this sample size (133 participants), we found statistically significant differences for both, IHT and fluoxetine, in HRSD and for IHT in GS. If not, we should have included more participants, in order to increase the statistic power of the study to detect a difference, if the difference in reality exists.
44 women got homeopathy, 46 fluoxetine, and 43 placebo. The authors basically think they need a bigger sample.
Sixty-four percent of women had a history of depression (p = 0.857), 73% reported domestic violence (emotional, physical or economic) (p = 0.883), 36% history of sexual abuse in infancy (p = 0.748) and 53% marital dissatisfaction (p = 0.397).
This probably didn't affect the study, but it certainly increases the probability of an incident causing depression and skewing the results of a group.
Although the three groups had the same case history, in case of IHT group, participants received an individualized homeopathic prescription, which matched with the specific symptoms the patient had, whereas, all participants in fluoxetine group received the same antidepressant and dosage, fluoxetine 20 mg per day. The dosing protocol for fluoxetine was below the approved maximum (60–80 mg/d) [42]. For this reason, efficacy of fluoxetine relative to placebo could had been underestimated.
They used less than the maximum dosage for fluoxetine; fluoxetine may be more effective than this study shows.
Nemeroff et al conducted a RCT comparing fluoxetine, venlafaxine and placebo in depression and reported similar results in response rates and higher remission rates for fluoxetine (28%) and placebo (22%), but as in HOMDEP-MENOP study, Nemeroff did not found statistical significance in the remission definition [42]. Translating into a clinical scenario, these results indicate that a 6-week treatment is a short period of time to treat depression in climacteric women. Probably, it is required more time with fluoxetine or IHT to attain remission. A 6-weeks treatment only improves depression, and it may be possible that an amount of patients would still have mild depression after that period of time. Gibbons et al reported remission rates of 45.8% vs 30.2% for fluoxetine and placebo respectively [OR = 1.96, 95% CI (1.66–2.31), p < 0.001, NNT = 6.40] in a synthesis of RCTs. They concluded that few well-controlled studies, have documented response rates for extended treatment with a single effective antidepressant. In the Gibbons' study remission rate was 82%, with 75% achieving remission by 140 days (twenty weeks). For fluoxetine, 23% of patients who were unimproved at eight weeks showed full remission at twelve weeks [24]. Therefore, further studies of IHT for depression should be conducted to prove if a longer treatment is effective to attain remission of depressive symptoms in climacteric women.
The authors explicitly support a longer study, since it's more relevant to depression and better shows the effects of fluoxetine.
Although individual prescriptions are necessary in classical homeopathy, they have been considered as an obstacle for a double-blind trial in homeopathy [37]. Adler et al stated that 'a study design in which the selection of a suitable, individualized homeopathic medicine occurs during the double-blind randomized phase evaluates not only the efficacy of homeopathy, but also the efficiency of the homeopath in selecting and managing that medicine' [37]. The HOMDEP-MENOP study confirmed the efficacy of 'individualized homeopathic treatment' as a whole, that is, an individualized prescription means selecting an individualized remedy in the appropriate potency. One medicine for each patient was prescribed depending on the symptoms she experienced at the moment of the history case. Many homeopathic RCTs had failed when the same medicine was prescribed for all participants, due to individual differences in symptoms, so although an individualized prescription evaluates the efficiency of the homeopath in selecting the medicine, it can also contribute to resolve a methodological obstacle in homeopathic clinical trials in classical homeopathy.
The authors don't really respond to the point that they're personalizing their medicine.
Lastly, the study uses the HRSD, BDI and GS reporting systems. HRSD and GS showed significant differences, BDI didn't. I don't know how reliable or subjective these systems are.
In summary: the trial probably had too small a sample size, too variable a sample, too short a timeframe, potentially remedies at low dilution, and a potentially biased manufacturer.
32℉uzzy, 0℃atPotato (talk/stalk) 23:08, 17 March 2015 (UTC)
Yeah, I found the source suspect as well, but you know, ad hominem and all that. I couldn't, out the gate, spot anything wrong with the actual study. ikanreed You probably didn't deserve that 21:12, 17 March 2015 (UTC)
It is randomized and blinded (assuming legitimacy) but not conclusive. It supports homeopathy but doesn't do it definitively. Notice that the observed score diffs didn't correspond to remission, and note the mismatch between HRSD and Beck results. Also, that eta squared is suspiciously high in my experience. Nobody can prove it so, but this looks like the 1 in 20 that gains statistical significance by chance alone. Replication is called for. MarmotHead (talk) 23:02, 17 March 2015 (UTC)
Also 40-50 per arm is just not trustworthy for anything but early evidence. It's Phase II level proof. MarmotHead (talk) 23:04, 17 March 2015 (UTC)
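A quick way to see the "1 in 20 by chance" point is to simulate trials of roughly this size in which the remedy truly does nothing and count how often a standard test still comes out "significant". The arm size and the number of simulated trials below are illustrative assumptions:

```python
# Simulate many null trials (~45 patients per arm, no real effect) and count
# how often a two-sample t-test reports p < 0.05 anyway.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_arm, n_trials, false_positives = 45, 10_000, 0

for _ in range(n_trials):
    treated = rng.normal(0.0, 1.0, n_per_arm)   # no true effect in either arm
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, placebo)
    if p < 0.05:
        false_positives += 1

print(f"'significant' null trials: {false_positives / n_trials:.1%}")
# Comes out near 5%, so across many small published homeopathy trials a few
# positive results are expected even if the remedy is inert.
```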

HAHAHAHAHAHA I saw activity here in recent changes and finally looked at this article. Very clever.---Mona- (talk) 18:59, 18 September 2015 (UTC)

All-round denialist BoN strikes again

A new study someone posted

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0118440

I'm not ready to believe this. --It's-a me, Lgm sigpic.png LeftyGreenMario!(Mod) 17:49, 20 July 2017 (UTC)

If you do enough studies on something, eventually one will say what you want it to say; meta-analysis of all the studies on homeopathy has shown it to not be effective. Christopher (talk) 18:18, 20 July 2017 (UTC)
And besides, does no one consider the virtual impossibility of homeopathy anymore? --It's-a me, Lgm sigpic.png LeftyGreenMario!(Mod) 18:35, 20 July 2017 (UTC)

It's already been discussed above, but I'll look at it myself. But now I feel stupid for not reading.

A few things about this study, now that I kinda examined it more.

The selection of the individualized remedy was carried out after the case history by a certified medical doctor, specialized in homeopathy with 18 years experience in classical homeopathy based on Hahnemann's methodology described in paragraphs 83–104 of the Organon of Medicine, 6th edition. A complete medical history with clinical examination was done. All patients no matter the group assigned, had a full homeopathic case-taking including the collection of all the facts pertaining to the patient which may help in reaching the totality of the symptoms: past and present physical and emotional symptoms, family environment since childhood, stressful life events, marital satisfaction. The symptoms were organized by hierarchy: mental, general and physical. In first place, the strategy to choose the individualized remedy was based on the most characteristic and clear mental symptoms. Secondly, general symptoms were taken into account. Computerized version of Synthesis Homeopathic Repertory 9.1 (Radar version 10) was used to facilitate the prescription. Only one remedy was prescribed at a time but it could be changed at every follow-up according to patient's symptoms.

The process of selecting those who received the individualized remedy doesn't seem very random to me. What counts as the "most characteristic and clear mental symptoms"? What's the cutoff? And based on this hierarchy, how would you compare someone with acute mental problems to someone with acute physical problems? Are they equal? Would they both be chosen?

this is a response to a reader comment: The study aimed to evaluate the efficacy of the individualized homeopathic treatment as a whole, an individualized prescription means selecting an individualized remedy according to the patient's symptoms in the appropriate potency needed to heal. That is classical homeopathy. In general terms, homeopathy studies the patient as a whole, taking into account that is a human being that might be physically and mentally ill. This is not magic, is a comprehensive medical approach or holistic care. The study did not aim to evaluate the same medicine for all patients. As the selection and prescription in homeopathy is individualized, the same prescription for all patients will surely fail in improving depressive symptoms. Again, this is not magic, it is the methodology of the classical homeopathic prescription.

IHT-placebo difference in HRSD score was higher (5 points) than fluoxetine-placebo difference. This result deserves a comment. Although the three groups had the same case history, in case of IHT group, participants received an individualized homeopathic prescription, which matched with the specific symptoms the patient had, whereas, all participants in fluoxetine group received the same antidepressant and dosage, fluoxetine 20 mg per day.

What's the point of showing a difference if you're not going to evaluate a consistent version of the medicine? This sounds like them trying to evaluate a bunch of different methods of medicine rather than sticking to a select set of methods. In fact, I believe 25 different medicines are used. Also, I think this would make it virtually impossible to replicate due to the individualized assessment of patients who received this individualized medicine that is allowed to change over time.

This study is not a homeopathic proving trial, which has a different objective. Our study was randomized and patients were allocated in one of the three groups. Regarding the study setting, I agree with you. It would be more objective to conduct the study in a setting that does not have a business in delivery of homeopathic treatments, but the same thing occurs with many other studies which are conducted in their own settings. Most of the time, medical doctors conduct studies in settings where they work on their own area of expertise, precisely because they are trained in that. Moreover, other medical doctors are employed by the pharmaceutical industry which is frequently involved with funds in many studies.

BIG PHARMA. And if it isn't intended to be evidence to show homeopathy works, then what exactly is this? Individualized treatment is good and it can't be replicated?

In case of IHT, if the participant underwent a severe 'homeopathic aggravation' (temporary intensification of symptoms before a condition improves), the homeopathic medicine was interrupted and the reaction was lessened by using frequent doses of the same remedy in lower potency.

Just a simple comment, but this assumes "homeopathic aggravation" is real, and I find it funny that the procedure to lessen the reaction is lower potency, which translates to less dilution, which means a more concentrated solution, I think? Lower potency in Super Homeopathic Land means fewer dilutions in real life, right?

The study doesn't talk about HOW it works and how terribly implausible homeopathy is and thus doesn't explain how this result can happen aside from "well, it seems like it works".

Although in the HOMDEP-MENOP study there was no statistical difference between [Individualized Homeopathic Treatment] and fluoxetine, this study was not designed to prove if IHT is not worse or equivalent to fluoxetine.

Although we did not achieve a statistic power of >80% with this sample size (133 participants), we found statistically significant differences for both, IHT and fluoxetine, in HRSD and for IHT in GS. If not, we should have included more participants, in order to increase the statistic power of the study to detect a difference, if the difference in reality exists.

I'm not a statistician, but I think not achieving this statistical power is not very convincing.

The author also responds to a reader comment and simply dismisses contradicting evidence (which includes people we know, like Edzard Ernst and David Gorski) as "biased", and proceeds to counter using biased sources like the Australian Homeopathic Association and the Homeopathic Research Institute while also claiming the importance of in vitro and animal studies.

IDK, this trial doesn't yell "HOMEOPATHY WORKS". If anything, it seems to say that individualized treatment has some sort of effect on people. Anything else, guys? --It's-a me, Lgm sigpic.png LeftyGreenMario!(Mod) 18:35, 20 July 2017 (UTC)

How can you even do that in a supposedly double-blinded study? You don't even know what you're administering, right??? ikanreed 🐐Bleat at me 18:39, 20 July 2017 (UTC)
Interesting point. But I think there are dummy IHTs administered.

After inclusion, patients were randomly assigned to either one of three groups: (1) individualized homeopathic treatment (IHT) plus fluoxetine dummy-loaded; (2) fluoxetine (20 mg/d) plus IHT dummy-loaded; (3) fluoxetine placebo plus IHT placebo.

--It's-a me, Lgm sigpic.png LeftyGreenMario!(Mod) 18:43, 20 July 2017 (UTC)

Also, since this study is posted twice already, anyone think we should include past attempts? --It's-a me, Lgm sigpic.png LeftyGreenMario!(Mod) 20:37, 25 July 2017 (UTC)

'Unscientific study'

Breathe in over a now empty bottle of 'your favourite tipple.'

If you are not drunk as a result, homeopathy does not work. Anna Livia (talk) 09:20, 3 June 2019 (UTC)

List of POTENTIAL links of homeopathy working

https://www.karger.com/Article/Fulltext/494621

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5366148/

https://pubmed.ncbi.nlm.nih.gov/9828870/

https://clinicaltrials.gov/ct2/show/NCT02086864

https://journals.sagepub.com/doi/abs/10.1177/0269881108091259

https://www.sciencedirect.com/science/article/abs/pii/S096522991830829X

https://www.england.nhs.uk/wp-content/uploads/2017/11/sps-homeopathy.pdf

— Unsigned, by: 50.201.145.131 / talk / contribs

Looking at one, "Efficacy of individualized homeopathic treatment of insomnia": at a glance it looks like it has the same issues with that "individualized" crap I sifted through a while back, see Individualized Homeopathic Treatment and Fluoxetine for Moderate to Severe Depression. Not to mention, that 60 sample size is pretty measly. Also looked at the first source: that doesn't support what's apparently being claimed at all. How is it a "potential" link of homeopathy working?? The second link is practically the same as the first one: too low quality to be of any use. And that's just glancing at the abstracts. Third one? Nope. Okay, you know, I'm not going to look through all the links: if far too many of the links you're sharing don't support the "POTENTIAL links of homeopathy working", why share them? --It's-a me, Lgm sigpic.png LeftyGreenMario!(Mod) 08:58, 6 February 2022 (UTC)