Psychology as a Discipline has Lost its Mind, if it ever had One! By Brian Simpson

Disciplines like sociology and psychology have always disturbed me, for as long as I have been thinking about these things, since the 1970s. Having done mathematics and biology for my teaching degree, and having started an MA, which I abandoned when I had a family, I have remained interested in the question of what makes a discipline a science. On my reading, the postmodern version of sociology which now dominates the discipline reduces the entire endeavour to thinly disguised neo-Marxist pseudo-science.

Common to both psychology and sociology, for example, is acceptance of the gender-agenda idea that the classification of people into males and females is arbitrary and lacks reality because intersex people, with both male and female secondary sexual characteristics, exist. This is illogical: the argument presupposes the very categories of male and female that it seeks to reject, making the proposition internally inconsistent.

As well, psychology, which unlike sociology pretends to be an empirical science of sorts, has a replication crisis: a large percentage of its studies simply cannot be replicated by other researchers. Even researchers who go back and try to replicate their own past studies often fail. There is something very wrong here.

Add to this that up to half of the effects reported in psychology journals might not be real; one in every two studies could be making false claims. As detailed below, one of the main reasons for this is questionable research practices, such as not making your data available for other researchers to check; tweaking the analysis until a positive result is achieved ('p-hacking'); running many analyses but only reporting the ones that give positive results; and forming hypotheses after analysing the data ('HARKing'). These practices amount to intellectual fraud, yet they are so widespread in psychology that they appear to affect something like half of all published papers.
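To see why a practice like p-hacking matters, here is a minimal simulation sketch. It is my own illustration, not anything from the studies quoted below, and all the parameter values are assumptions chosen only to show the mechanism: every simulated 'study' tests a true null effect, but the analyst tries ten outcome measures and reports a positive finding if any one of them comes out 'significant' at p < 0.05.

# Illustrative sketch only: how trying many analyses inflates false positives.
# All settings below are assumptions for illustration, not values from any study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_studies = 1000      # simulated studies, all with a true effect of exactly zero
n_per_group = 30      # participants per group
n_outcomes = 10       # outcome measures the analyst is willing to try

false_positives = 0
for _ in range(n_studies):
    significant = False
    for _ in range(n_outcomes):
        a = rng.normal(0, 1, n_per_group)   # control group: no real effect
        b = rng.normal(0, 1, n_per_group)   # treatment group: no real effect
        _, p = stats.ttest_ind(a, b)        # two-sample t-test
        if p < 0.05:
            significant = True              # stop and "report" this outcome
            break
    false_positives += significant

print("Nominal false-positive rate per single test: 5%")
print(f"Share of null studies reported as 'significant' after trying "
      f"{n_outcomes} outcomes: {false_positives / n_studies:.0%}")

With these assumed settings, roughly 40% of studies of a non-existent effect end up 'significant', which is the kind of inflation the practices above produce.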

Psychology is clearly an intellectually corrupt discipline, and it is probably the thin edge of a mighty thick wedge of academia, which I see as rotten to the core. All this poisonous bs is funded by taxpayers' dollars.

https://dailysceptic.org/2025/01/29/does-the-psychology-literature-reflect-reality/

"Does the psychology literature reflect reality? Thousands of studies have been published claiming this effects that and that affects this, and so on. But how many of these effects are actually real? A 2021 paper by Anne Scheel and colleagues looked at this question, and came to the disturbing conclusion that a large share of them might not be real. They might, in other words, be null effects masquerading as true (or 'statistically significant') effects.

The paper used a clever method. To understand it, you have to be aware of the two main sources of bias in the psychology literature (as well as the scientific literature more broadly).

The first is selective reporting, also known as the 'file drawer problem'. Essentially, it's much harder to get a null result published than it is to get a positive result published. Null results are regarded as 'boring' and 'uninformative' – not the sort of thing editors want filling up the pages of their vaunted journals. (This is despite the obvious fact that it's often very useful to know when something isn't true.) Not only that, but null results can ruffle people's feathers. If a distinguished academic publishes a paper claiming that such-and-such is true, and then some other academic publishes his own paper showing the opposite is true, the first one might get rather perturbed. (Academics can be extremely petty, presumably because the stakes are so low.)

The second source of bias comes under the heading of questionable research practices or QRPs. These are things like: not making your data available for other researchers to check; tweaking your analysis until you get a positive result ('p-hacking'); running many analyses but only reporting the ones that give positive results; and forming hypotheses after analysing the data ('HARK-ing').

In recent years, some journals and researchers have sought to address these two sources of bias through what are called 'registered reports'. A registered report is an academic paper with two key features: it tests hypotheses that have been pre-registered via a time-stamped protocol posted online; it is submitted to a journal and accepted for publication before the data have been collected and analysed (i.e., entirely on the basis of the hypotheses and proposed methods). In virtue of these two features, registered reports are immune from both selective reporting and questionable research practices.

Returning to Scheel and colleagues, they compared the percentage of articles with a positive result in a sample of registered reports and a sample of standard reports (i.e., ordinary academic papers). To be specific, they checked whether the first hypothesis tested in each article was deemed by the authors to have been supported. Did they write something like, 'Our first hypothesis was confirmed', in other words.

What did they find? …

As you can see, the first hypothesis was supported in 96% of standard reports but only 44% of registered reports – a huge gap. Now, registered reports are more likely to constitute replications of previous articles, so they might be less likely to find support for their hypothesis for that reason alone. However, even when the authors excluded replication studies from both samples, there was still a massive difference of 46 percentage points.

This suggests that up to half the effects reported in psychology might not be real; one in every two studies could be making false claims. While Scheel and colleagues' study has some limitations like any other, their findings suggest that selective reporting and questionable research practices are absolutely rampant. And in case you're wondering, yes, they did pre-register their own hypotheses.

https://journals.sagepub.com/doi/10.1177/25152459211007467

Abstract

Selectively publishing results that support the tested hypotheses ("positive" results) distorts the available evidence for scientific claims. For the past decade, psychological scientists have been increasingly concerned about the degree of such distortion in their literature. A new publication format has been developed to prevent selective reporting: In Registered Reports (RRs), peer review and the decision to publish take place before results are known. We compared the results in published RRs (N = 71 as of November 2018) with a random sample of hypothesis-testing studies from the standard literature (N = 152) in psychology. Analyzing the first hypothesis of each article, we found 96% positive results in standard reports but only 44% positive results in RRs. We discuss possible explanations for this large difference and suggest that a plausible factor is the reduction of publication bias and/or Type I error inflation in the RR literature." 
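As a back-of-the-envelope check on the selective-reporting explanation, the following sketch is my own illustration, not Scheel and colleagues' method, and every parameter value is an assumption: studies are simulated with some base rate of genuinely true effects, registered reports publish everything, and the 'standard literature' publishes all positive results but only a small fraction of null results.

# Illustrative sketch only: how selective publication inflates the share of
# positive results in the published literature. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(1)

n_studies = 10_000
share_true_effects = 0.4   # assumed share of tested hypotheses that are actually true
power = 0.6                # assumed chance of detecting a true effect
alpha = 0.05               # false-positive rate when the effect is null

is_true = rng.random(n_studies) < share_true_effects
positive = rng.random(n_studies) < np.where(is_true, power, alpha)

# Registered reports: accepted before results are known, so everything is published.
rr_positive_rate = positive.mean()

# Standard literature: all positive results published, null results only rarely (5%).
published = positive | (rng.random(n_studies) < 0.05)
standard_positive_rate = positive[published].mean()

print(f"Positive-result rate, registered reports:  {rr_positive_rate:.0%}")
print(f"Positive-result rate, standard literature: {standard_positive_rate:.0%}")

Under these assumed numbers the registered-report rate sits well under a third while the selectively published literature shows nearly 90% positive results, a gap of the same character as the 44% versus 96% reported in the paper.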

 
