
P-hacking and scientific integrity

Hadoblado

think again losers
Local time
Today 2:11 PM
Joined
Mar 17, 2011
Messages
6,614

I feel like one of the barriers to bridging conversations between people on the forum is the science/mysticism divide. Falling on the science side, it often seems to me that people understand that there are limitations to science, but interpret these limitations as fatal, which conveniently allows them to go on believing whatever other ALLCAPS TRUTH they prefer. I find it's difficult to address these criticisms without derailing the conversation. So I thought that, while I'm not part of any such conversation, it might be good to bring it up and talk about it without any stakes.

P-hacking is a huge issue, especially for the social sciences. It completely craps on the integrity of science. Essentially, p-hacking involves the selective reporting of results. It's the equivalent of only keeping the experimental data that confirms your hypothesis, which is all too easy when you're dealing with the amount of variance inherent in any measure of human behaviour.

The video provided is an accessible resource that explains a lot of the issues inherent in p-value testing, as well as some viable workarounds.
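To make the mechanics concrete, here's a toy simulation of selective reporting (my own sketch, not from the video; all the numbers are hypothetical): we run many experiments where the null is true by construction, then "publish" only the significant ones.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 1000
published = []

for _ in range(n_experiments):
    # Two groups drawn from the SAME distribution: any "effect" is pure noise.
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 1, 30)
    p = stats.ttest_ind(a, b).pvalue
    if p < 0.05:  # selective reporting: the rest goes in the file drawer
        published.append(p)

print(f"{len(published)} of {n_experiments} null experiments 'found' an effect")
# Roughly 5% cross p < .05 by chance alone, yet the published record is 100% "positive".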
 

Black Rose

An unbreakable bond
Local time
Yesterday 10:41 PM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama
science/mysticism divide?

How long has it been since a significant number of debates like that have happened here? Wonkavission was banned. (I'm being facetious.) I remember one debate (she was new) in the past six months. Have there been more?
 

Ex-User (14663)

Prolific Member
Local time
Today 4:41 AM
Joined
Jun 7, 2017
Messages
2,939
I don't view p-value hacking as something detrimental to science, because it's by a scientific process that one shows the flaws of a procedure that generates spurious results. Abusing statistics is fraud, not science.
 

redbaron

irony based lifeform
Local time
Today 3:41 PM
Joined
Jun 10, 2012
Messages
7,253
Location
69S 69E
it's detrimental to the reputation of science then

but you'd be surprised how many spurious studies spawn other spurious studies
 

Ex-User (14663)

Prolific Member
Local time
Today 4:41 AM
Joined
Jun 7, 2017
Messages
2,939
Well yeah, I guess I'm not sure whose perspective we're talking about the issue from. If it's the general public's, then they should see that the reason studies are not reproducible is not that science itself is a flawed process, but that researchers cheat and circumvent the prescribed procedures for valid inference. And this happens because they are pressured to publish on a regular basis, etc.

The conclusion of which would be that the solution is not to discard science itself, but to apply even more rigorous science.
 

Cognisant

Prolific Member
Local time
Yesterday 5:41 PM
Joined
Dec 12, 2009
Messages
10,564
Nobody expects the scientific inquisition.
 

QuickTwist

Spiritual "Woo"
Local time
Yesterday 11:41 PM
Joined
Jan 24, 2013
Messages
7,182
Location
...
Hado, I applaud what you are trying to accomplish in this thread. It is truly a noble goal.

But why must it come down to the science/mysticism divide? There has to be a third option, because I don't believe it's simply a binary choice between the scientific and the mystical.
 

Hadoblado

think again losers
Local time
Today 2:11 PM
Joined
Mar 17, 2011
Messages
6,614
@AK
I was bringing it up now specifically because it doesn't apply to any active threads that come to mind. I wanted people to talk about it without being held hostage by their sacred cows.

@QT
From my perspective mysticism competes with, is opposed to, and is rendered irrelevant by, science. I think you're blatantly wrong. BUT you're right, it's not necessary for me to invoke that narrative in order to have this discussion, and I'm potentially alienating my audience. So I take back what I said there, it's a conversation for another day.
 

QuickTwist

Spiritual "Woo"
Local time
Yesterday 11:41 PM
Joined
Jan 24, 2013
Messages
7,182
Location
...
@QT
From my perspective mysticism competes with, is opposed to, and is rendered irrelevant by, science. I think you're blatantly wrong. BUT you're right, it's not necessary for me to invoke that narrative in order to have this discussion, and I'm potentially alienating my audience. So I take back what I said there, it's a conversation for another day.

To be frank, I don't understand your position here. What about simple pragmatism, which doesn't fit either category? The conclusion of the video you posted was basically that the human mind works best doing what it does best, which is acting in a natural way as opposed to taking an overly analytical PoV.
 

Hadoblado

think again losers
Local time
Today 2:11 PM
Joined
Mar 17, 2011
Messages
6,614
You seem to be on a third option bender? I'm not saying there's nothing else, I'm saying whatever mysticism does, science does better, so boo mysticism. But that's not what this thread is about. So I take back what I said there, it's a conversation for another day.

BUT you're right, it's not necessary for me to invoke that narrative in order to have this discussion, and I'm potentially alienating my audience. So I take back what I said there, it's a conversation for another day.
 

QuickTwist

Spiritual "Woo"
Local time
Yesterday 11:41 PM
Joined
Jan 24, 2013
Messages
7,182
Location
...
You seem to be on a third option bender? I'm not saying there's nothing else, I'm saying whatever mysticism does, science does better, so boo mysticism. But that's not what this thread is about. So I take back what I said there, it's a conversation for another day.

BUT you're right, it's not necessary for me to invoke that narrative in order to have this discussion, and I'm potentially alienating my audience. So I take back what I said there, it's a conversation for another day.

Sure, but why is mysticism even relevant here?
 

QuickTwist

Spiritual "Woo"
Local time
Yesterday 11:41 PM
Joined
Jan 24, 2013
Messages
7,182
Location
...
NVM...
 

Pyropyro

Magos Biologis
Local time
Today 12:41 PM
Joined
Feb 3, 2012
Messages
4,044
Location
Philippines
This is a researcher problem rather than a Science/Social Science problem. We're human too, and we will lie, cheat and steal if it will get stuff published or projects funded. Of course, some erroneous results are just due to honest human error.

I'm with Serac on this one. If a research result is fishy then maybe it's time to rerun the tests, politics be damned.
 

Hadoblado

think again losers
Local time
Today 2:11 PM
Joined
Mar 17, 2011
Messages
6,614
I'd argue that the people-level problems are so widespread that there's something structurally wrong that needs fixing (not that it can't be fixed):
- replication studies are not being published
- people are using p-values rather than Bayes factors (a toy comparison is sketched at the end of this post)
- people are not conducting their research in an accountable way
- results aren't published if they come back negative
- novelty is highly sought after, but science is not about novelty
- inter-rater reliability of result interpretation is too low
- there are a lot of fake journals that will publish for money; this isn't a problem for the scientific community, but it is a problem for people trying to grapple with the literature without a scientific background, because knowing which journals to trust is like insider knowledge

I don't think it's useful to blame individuals when the problems are so apparent at a population level.
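On the p-values vs Bayes factors point above, here's a toy comparison (my own sketch; the uniform prior on the alternative is a simplifying assumption, and the function is made up for illustration) of what the two say about the same data:

from scipy import stats

def bayes_factor_binomial(k, n):
    # BF10 for k successes in n trials.
    # H0: theta = 0.5 (pure chance); H1: theta ~ Uniform(0, 1).
    p_h0 = stats.binom.pmf(k, n, 0.5)
    p_h1 = 1.0 / (n + 1)  # marginal likelihood under a uniform prior
    return p_h1 / p_h0

# 60 heads in 100 flips: the two-sided p-value is ~0.057, tantalisingly
# close to .05, but BF10 is ~0.9 -- the data barely favour either hypothesis.
print(bayes_factor_binomial(60, 100))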
 

Pyropyro

Magos Biologis
Local time
Today 12:41 PM
Joined
Feb 3, 2012
Messages
4,044
Location
Philippines
- there are a lot of fake journals that will publish for money; this isn't a problem for the scientific community, but it is a problem for people trying to grapple with the literature without a scientific background, because knowing which journals to trust is like insider knowledge

Here's an interesting article about predatory journals and their relationship with academics. It's not a predatory relationship but more of a symbiotic one.

I don't think it's useful to blame individuals when the problems are so apparent at a population level.

Yeah, it's an institutional problem. Articles are written by several people and should have been reviewed by more senior researchers and/or colleagues prior to publication.
 

Hadoblado

think again losers
Local time
Today 2:11 PM
Joined
Mar 17, 2011
Messages
6,614
Yeah, most people trying to get published in a journal know how to assess a journal's quality, or can figure it out. I agree it's symbiotic.
 

Black Rose

An unbreakable bond
Local time
Yesterday 10:41 PM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama
Everything follows physics. Maybe mysticism has a physics. Consciousness is mysterious. Experiencing colors and hyperdimensional reality on DMT is mysterious.
 

Ex-User (14663)

Prolific Member
Local time
Today 4:41 AM
Joined
Jun 7, 2017
Messages
2,939
Yeah, it's an institutional problem. Articles are written by several people and should have been reviewed by more senior researchers and/or colleagues prior to publication.
Not really. An outside observer cannot know whether a p-value was hacked. A researcher can easily hide, for example, that he/she ran through several samples before arriving at a statistically significant result.
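Here's a sketch of that trick (a hypothetical illustration, not any actual study): keep adding participants, re-test after every batch, and stop the moment p dips below .05, with the null true throughout.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def peek_until_significant(max_n=200, batch=10):
    # Add `batch` subjects per group, re-test, stop at the first p < .05.
    a, b = [], []
    while len(a) < max_n:
        a.extend(rng.normal(0, 1, batch))
        b.extend(rng.normal(0, 1, batch))
        if stats.ttest_ind(a, b).pvalue < 0.05:
            return True  # report this "significant" sample, hide the peeking
    return False

hits = sum(peek_until_significant() for _ in range(1000))
print(f"false-positive rate with peeking: {hits / 1000:.0%}")  # far above 5%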

Scientific integrity is in large part the responsibility of individual researchers. How many people are ready to jeopardize their job and academic status in the name of scientific integrity, though? Not many.
 

Hadoblado

think again losers
Local time
Today 2:11 PM
Joined
Mar 17, 2011
Messages
6,614
Okay, so you think it's an individual problem?

What about measures that could be standardised to limit the discretion of individual researchers in this way?

This is just an idea I'm playing with:
What if journals went through the peer-review process before the experiment started? The first half of the article would be published (including hypotheses) based on the merit of its justifications, so the journal still retains quality control.
But by accepting the first half of the experiment, the journal also accepts the second half when it comes in.
While this may seem clumsy, the teaser format is already used in magazines to build anticipation.
And this would change the experience from the equivalent of reading a book of "facts" to a journey of discovery, where the reader experiences the same wonder that the experimenter does.

Such a journal would be a bit more work, but I think it could stand out if the readers and researchers knew how much more rigorous it was.

Another option: what if journals demanded Bayesian experimental design where applicable? This would prevent people from fiddling with their hypotheses and participant numbers as much (I think; it sounds like it would, but I've only covered it in passing, so the reality might be different. What do you think?).
 

QuickTwist

Spiritual "Woo"
Local time
Yesterday 11:41 PM
Joined
Jan 24, 2013
Messages
7,182
Location
...
I think it's a problem of bias, which isn't actually a problem at all.
 

redbaron

irony based lifeform
Local time
Today 3:41 PM
Joined
Jun 10, 2012
Messages
7,253
Location
69S 69E
bias is a problem if you value the scientific method, as well as for anyone who holds scientific research to a standard of integrity.
 

redbaron

irony based lifeform
Local time
Today 3:41 PM
Joined
Jun 10, 2012
Messages
7,253
Location
69S 69E
@Hadoblado


I'm not sure having journals be accountable for integrity helps much, since numerous journals are notoriously biased themselves.

Wouldn't hurt to have more peer review at different stages of the process though, and also replication studies published - even if boring. They don't need to publish a full study, but that's also something I've found frustrating.
 

Hadoblado

think again losers
Local time
Today 2:11 PM
Joined
Mar 17, 2011
Messages
6,614
Yeah, some journals are notoriously biased, but there's a lot of prestige surrounding being published in top journals, and they can help inform industry standards.

Also, I'm wondering if, given that most journals are online, the format could be changed so that the first study is the stem and it branches off into all the replications. This way the boringness factor wouldn't be such a big deal, but they'd still be there and easily accessible.

I get the sense I'm being starry eyed though. I don't really know that much about the realities of the process.
 

Ex-User (14663)

Prolific Member
Local time
Today 4:41 AM
Joined
Jun 7, 2017
Messages
2,939
Okay, so you think it's an individual problem?

What about measures that could be standardised to limit the discretion of individual researchers in this way?

This is just an idea I'm playing with:
What if journals went through the peer-review process before the experiment started? The first half of the article would be published (including hypotheses) based on the merit of its justifications, so the journal still retains quality control.
But by accepting the first half of the experiment, the journal also accepts the second half when it comes in.
While this may seem clumsy, the teaser format is already used in magazines to build anticipation.
And this would change the experience from the equivalent of reading a book of "facts" to a journey of discovery, where the reader experiences the same wonder that the experimenter does.

Such a journal would be a bit more work, but I think it could stand out if the readers and researchers knew how much more rigorous it was.
I think it's a valid idea in the sense that it identifies the underlying problem: that as a reader of an article, you don't get to see the whole process behind the results. Richard Feynman once pointed out that unless you specify exactly how you carried out every step in an experiment, the results cannot be trusted from an outsider's point of view. But the problem here is that the method doesn't prevent the researchers from doing bad inference in the second half.
Another option: what if journals demanded Bayesian experimental design where applicable? This would prevent people from fiddling with their hypotheses and participant numbers as much (I think; it sounds like it would, but I've only covered it in passing, so the reality might be different. What do you think?).
I think it's an interesting idea. Not because it would make it harder to hack statistical results, but because it would remove people's incentive to reach arbitrary p-value levels.

Bayesian inference is no joke in terms of the mathematical complications it entails though.
 

Hadoblado

think again losers
Local time
Today 2:11 PM
Joined
Mar 17, 2011
Messages
6,614
In terms of the stats psychologists have to use, I think it's within the scope of what we already have to do (but I don't know that much about Bayes, having only covered it briefly). A lot of psychologists don't even understand p-values properly anyway; they just know how to use and interpret them at a practical level.

I think removing arbitrary p-value thresholds would be a very good step. I personally set a preliminary alpha of 0.01 for whether or not I keep reading, but that's still arbitrary. 0.05 is just too hackable.
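Some back-of-the-envelope numbers on that (my arithmetic, assuming independent tests): the chance of at least one false positive across k null tests is 1 - (1 - alpha)^k.

# Chance of at least one false positive across k independent null tests
for alpha in (0.05, 0.01):
    for k in (1, 5, 20):
        print(f"alpha={alpha}, k={k}: {1 - (1 - alpha) ** k:.2f}")
# At alpha = .05, twenty tries give a ~64% chance of a "finding";
# at alpha = .01 it drops to ~18%. Still arbitrary, but less hackable.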
 

QuickTwist

Spiritual "Woo"
Local time
Yesterday 11:41 PM
Joined
Jan 24, 2013
Messages
7,182
Location
...
bias is a problem if you value the scientific method, as well as for anyone who holds scientific research to a standard of integrity.

I agree, which is why the religion of science needs to fall.
 

Ex-User (14663)

Prolific Member
Local time
Today 4:41 AM
Joined
Jun 7, 2017
Messages
2,939
In terms of the stats psychologists have to use, I think it's within the scope of what we already have to do (but I don't know that much about Bayes, having only covered it briefly). A lot of psychologists don't even understand p-values properly anyway; they just know how to use and interpret them at a practical level.
Bayesian inference requires one to be more explicit about one's assumptions – e.g. the specification of prior distributions. Which is very good in some sense, as it forces people to explicitly acknowledge that any measure of significance in a statistical test relies on assumptions about the random variables. That's in some contrast to frequentist inference, where there is typically little emphasis on the fact that the choice of the null hypothesis and likelihood function will influence the p-value.
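A minimal illustration of that explicitness (a toy beta-binomial example of my own, not taken from any particular study): the prior sits right there in the code, and swapping it visibly changes the conclusion.

from scipy import stats

k, n = 7, 10  # 7 successes in 10 trials
priors = {"flat Beta(1, 1)": (1, 1), "skeptical Beta(10, 10)": (10, 10)}

for name, (a, b) in priors.items():
    posterior = stats.beta(a + k, b + n - k)  # conjugate Beta update
    print(f"{name}: P(theta > 0.5 | data) = {posterior.sf(0.5):.2f}")
# The frequentist analogue hides the corresponding choices inside the
# null hypothesis and the likelihood function.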
 

Grayman

Soul Shade
Local time
Yesterday 9:41 PM
Joined
Jan 8, 2013
Messages
4,416
Location
You basement
Statistics and science should never mix. "God doesn't play dice".

Statistical analysis is good for determining probable truths, while science determines absolute facts. What is the probability the rock will fall when you let go? Any error in the rate of fall is because of uncalculated variables like wind resistance, not probability.

The greatest sin of science was to let statisticians call themselves scientists.
 

Hadoblado

think again losers
Local time
Today 2:11 PM
Joined
Mar 17, 2011
Messages
6,614
-->
I'm not sure I follow, Grayman. I don't really understand where you've got any of these impressions from.

Namely:
- Science as being about "absolute facts" despite being a process that continuously overturns previous "absolute facts"
- "God does not play dice" was a criticism of quantum mechanics, not statistics. And while many people including myself feel kinda weird towards QM, it's not really up for discussion until we have a better theory to explain how the world works. Einstein is thought to have been wrong in this respect.

I guess it just feels like you're making up your own definition of science, and using it to describe a difference that is already acknowledged in any field that utilises statistics? Where in any definition of science does it imply a rejection of statistics?
 

Ex-User (14663)

Prolific Member
Local time
Today 4:41 AM
Joined
Jun 7, 2017
Messages
2,939
Statistics and science should never mix. "God doesn't play dice".

Statistical analysis is good for determining probable truths, while science determines absolute facts. What is the probability the rock will fall when you let go? Any error in the rate of fall is because of uncalculated variables like wind resistance, not probability.

The greatest sin of science was to let statisticians call themselves scientists.
Statistics is just a mathematically consistent method for doing inference and prediction in a stochastic environment. E.g. as in your example, if you cannot determine all possible variables that influence a phenomenon (e.g. acceleration of a falling rock), you treat the outcome as a random variable. Whether statistics is "science" is pretty much a matter of semantics without much practical consequence, I think.
 

Ex-User (9086)

Prolific Member
Local time
Today 4:41 AM
Joined
Nov 21, 2013
Messages
4,758
Scientists who are causing the problem have to make regular publications as a matter of career survival.

It could mean several things:
- human bias; there should be greater emphasis put on training, retraining and review of the research
- admissions to research groups are not rigorous enough (too many people get to become scientists)
- the demand for scientists exceeds the supply of good scientists, and mediocre scientists are utilized incorrectly

There's too much office politics (favours, and the general bias associated with human decision-making) in academia and journal review groups. The whole process should be 'dehumanized': the people involved should follow procedures, and if those are not observed they should be externally audited for bias or mistakes in performing research, hiring, reviewing publications, and reproducing studies.

It seems to me that there are not a lot of equally worthwhile occupations a scientist can choose if they fail at their job; it's either that or suffering as a low-skilled employee for years, especially given that it takes years to finish the general academic courses and even more years to specialise.


Scientists whose contributions are statistically less valuable, judged by the utility of the chosen subject of research and the actual quality of the research, should receive less funding or be demoted to less important positions or tasks. Inept scientists could be useful in performing reproducibility studies, which would also allow them to acquire the skills necessary to perform innovative or independent work.

Peers who more often vote for less significant or less useful publications should be removed from journal review boards/groups and passed over when the time comes to choose new review groups, etc.


Of course, the above idea seems too much like tyranny, especially to the artistic and creative minds who think they don't want to be supervised while performing science.
The system as a whole resembles artistic patronage more than the orderly scientific method.
 

Grayman

Soul Shade
Local time
Yesterday 9:41 PM
Joined
Jan 8, 2013
Messages
4,416
Location
You basement
Statistics is just a mathematically consistent method for doing inference and prediction in a stochastic environment. E.g. as in your example, if you cannot determine all possible variables that influence a phenomenon (e.g. acceleration of a falling rock), you treat the outcome as a random variable. Whether statistics is "science" is pretty much a matter of semantics without much practical consequence, I think.

In the example you don't use statistics. You create a controlled environment and test it in a vacuum. It's 100% accurate insofar as we can measure. The only room for doubt is in the instruments. Using statistics to determine results undermines the whole scientific process, which is discovering the why of it all. It is correlation without causation.
 

Ex-User (14663)

Prolific Member
Local time
Today 4:41 AM
Joined
Jun 7, 2017
Messages
2,939
Statistics is just a mathematically consistent method for doing inference and prediction in a stochastic environment. E.g. as in your example, if you cannot determine all possible variables that influence a phenomenon (e.g. acceleration of a falling rock), you treat the outcome as a random variable. Whether statistics is "science" is pretty much a matter of semantics without much practical consequence, I think.

In the example you don't use statistics. You create a controlled environment and test it in a vacuum. It's 100% accurate insofar as we can measure. The only room for doubt is in the instruments. Using statistics to determine results undermines the whole scientific process, which is discovering the why of it all. It is correlation without causation.
right... so you never have variables you cannot control, except in the cases where you have variables you cannot control.

"correlation is not causation" – whatever the hell that means. If I have a hypothesis that variable x influences variable y, and I have no control over additional variables that influence y, then if there is a correlation, I have corroborated that x causes a certain response in y. So whether "correlation is causation" depends on the hypothesis.

In reality, btw, there are always variables whose effects you cannot know. E.g. you cannot exactly measure the trajectory of the earth around the sun, as there will be all kinds of small things that slightly affect the trajectory (comets, distant stars and whatnot). All these small things amount to randomness in the result. I'm not an astronomer, so I don't know exactly how they do these things, but I assume you can measure such things only up to certain statistical confidence levels.
 

Grayman

Soul Shade
Local time
Yesterday 9:41 PM
Joined
Jan 8, 2013
Messages
4,416
Location
You basement
I'm not sure I follow, Grayman. I don't really understand where you've got any of these impressions from.

Namely:
- Science as being about "absolute facts" despite being a process that continuously overturns previous "absolute facts"
- "God does not play dice" was a criticism of quantum mechanics, not statistics. And while many people including myself feel kinda weird towards QM, it's not really up for discussion until we have a better theory to explain how the world works. Einstein is thought to have been wrong in this respect.

I guess it just feels like you're making up your own definition of science, and using it to describe a difference that is already acknowledged in any field that utilises statistics? Where in any definition of science does it imply a rejection of statistics?

Science is truth discovered using the scientific method, which is, practically speaking, the best method for getting at absolute truth.

Using statistics to formulate a hypothesis is fine, but it cannot replace the scientific method, even when it's a necessity because you cannot directly measure the experiment. Nor can you rely primarily on statistics, out of necessity, because you cannot run an experiment to prove your hypothesis: the result is no longer scientific because it is no longer based on the scientific method. I am not saying that correlations and assumptions based on statistics are not useful or necessary, or even that they are not to be accepted. I am saying they are not scientific and should not be given the degree of respect or acceptance that scientific conclusions normally get.


Serac might test gravity using statistics and conclude that 90% of heavy objects fall at a different rate than lighter objects, and therefore that all heavier objects fall faster than lighter objects; others may repeat this experiment and conclude the same thing, but they would all be wrong. They have only proven an association, not causation. They have to ask why the association exists, build a hypothesis, and then create an environment where they can test it, and it has to be repeatable. Repeatable means 100% repeatable within what is practicably measurable.

Of course, a comet a million miles away with a mass of 4 grams might create a gravitational anomaly that would change the fall rate of some of the objects by 0.00003%, but it wouldn't be practicably measurable, so it would be a variable he could ignore. But perhaps the measurements changed by 0.01% at night. I am sure some might just call that 99.99% statistically accurate and accept the number, but me, I always look to the moon.
 

Hadoblado

think again losers
Local time
Today 2:11 PM
Joined
Mar 17, 2011
Messages
6,614
I just don't know where you're getting that impression from?

Can you show me where in the scientific method it implies that only results that are 100% fact are science? How are you supposed to prove something to be 100% likely and not just 99.99999% likely?

Is this just your personal definition or is it something widely believed?
 

QuickTwist

Spiritual "Woo"
Local time
Yesterday 11:41 PM
Joined
Jan 24, 2013
Messages
7,182
Location
...
I just don't know where you're getting that impression from?

Can you show me where in the scientific method it implies that only results that are 100% fact are science? How are you supposed to prove something to be 100% likely and not just 99.99999% likely?

Is this just your personal definition or is it something widely believed?

I think what he is saying is that you can do "studies" that "measure" a "statistic", but that this is an inferior way to go about the scientific method. IIRC, he's advocating against doing case studies because they are just not accurate.
 

Hadoblado

think again losers
Local time
Today 2:11 PM
Joined
Mar 17, 2011
Messages
6,614
-->
I share that view, QT, but that's not what he's saying.

Nor can you rely primarily on statistics, out of necessity, because you cannot run an experiment to prove your hypothesis: the result is no longer scientific because it is no longer based on the scientific method.
 

QuickTwist

Spiritual "Woo"
Local time
Yesterday 11:41 PM
Joined
Jan 24, 2013
Messages
7,182
Location
...
I share that view, QT, but that's not what he's saying.

Nor can you rely primarily on statistics, out of necessity, because you cannot run an experiment to prove your hypothesis: the result is no longer scientific because it is no longer based on the scientific method.

Well, I guess I/we will not know what he means for sure until we actually see what he has to say about this.
 

Minuend

pat pat
Local time
Today 5:41 AM
Joined
Jan 1, 2009
Messages
4,142
I enjoyed the video; I don't have deep insight or commentary. I think the """scientific method""" is a good path to use as a guideline; the only problem is all the humans getting in the way with their silly biased thinking, personal motivations, corruption and so forth. Basically I just wanted to say I enjoyed the content even if I don't have anything to contribute. Peace out :snowman:
 

TEpidemiologist

Redshirt
Local time
Today 12:41 AM
Joined
Sep 22, 2018
Messages
1
Location
Toronto, Canada
I don't view p-value hacking as something detrimental to science, because it's by a scientific process that one shows the flaws of a procedure that generates spurious results. Abusing statistics is fraud, not science.

You say yourself abusing statistics is fraud. P-hacking is abusing statistics.

P-hacking is NOT EXPLORATORY ANALYSIS. The term refers to the concealed use of multiple analyses with the intent of finding a p < 0.05 and never reporting how you got there.
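To see the concealed part in action, here's a toy sketch (hypothetical data, not any real study): one null dataset, twenty candidate outcome measures, and only the best p gets written up.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

group = rng.integers(0, 2, 100)         # treatment/control labels
outcomes = rng.normal(0, 1, (100, 20))  # 20 outcome measures, no real effects

pvals = [stats.ttest_ind(outcomes[group == 0, j],
                         outcomes[group == 1, j]).pvalue
         for j in range(20)]
print(f"best of 20 concealed analyses: p = {min(pvals):.3f}")
# With 20 shots at alpha = .05, min(p) < .05 roughly 64% of the time
# (if the measures were independent) -- and only that analysis gets reported.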
 

Hadoblado

think again losers
Local time
Today 2:11 PM
Joined
Mar 17, 2011
Messages
6,614

How universities cover up compromised studies is another layer of fuckery that needs to be overcome. I thought this video was a good case study.
 

ApostateAbe

Banned
Local time
Yesterday 11:41 PM
Joined
Jul 23, 2010
Messages
1,272
Location
MT

How universities cover up compromised studies is another layer of fuckery that needs to be overcome. I thought this video was a good case study.
According to an article from the AAAS:

"Two [of the five papers] found that the number of black people lynched in a U.S. county 100 years ago influences whether white people in the same area today perceive black people as a threat and favor harsh punishments for them."

What would be the odds of that? Had such a survey actually been conducted, it is unlikely that those two variables would be significantly correlated. It is a correlation predicted by critical race theory, an ideology popular among criminologists. The theory makes a lot of bad predictions, and such a theory provides an incentive for fraud. If you are a straight shooter, then you cannot provide data to support the bad theory, you can't get published, and you perish.
 

ZenRaiden

One atom of me
Local time
Today 4:41 AM
Joined
Jul 27, 2013
Messages
4,406
Location
Between concrete walls
Take it from a practical point of view. Science is always praised for its results. No one talks about the science done in between to reach those results, because it looks trivial and unimpressive, and because it is always something only the enlightened people who understand the field can appreciate.
People who invest in science do not always understand the value of what is being done. To add to the problem, science is about the unknown, so not even the scientist doing the research can guarantee anything. You can be doing research for 10 years and get nowhere. That is also legit science. And researching what has already been researched is also hard: no one wants to invest in that, and no one is interested in results that merely confirm something.
 

BurnedOut

Beloved Antichrist
Local time
Today 10:11 AM
Joined
Apr 19, 2016
Messages
1,309
Location
A fucking black hole
P-hacking is a huge issue, especially for the social sciences. It completely craps on the integrity of science. Essentially, p-hacking involves the selective reporting of results. It's the equivalent of only keeping the experimental data that confirms your hypothesis, which is all too easy when you're dealing with the amount of variance inherent in any measure of human behaviour.
The video provided is an accessible resource that explains a lot of the issues inherent in p-value testing, as well as some viable workarounds.


Son...

HYPERREAL ERA - "Hyperreality, in semiotics and postmodernism, is an inability of consciousness to distinguish reality from a simulation of reality, especially in technologically advanced postmodern societies.[1] Hyperreality is seen as a condition in which what is real and what is fiction are seamlessly blended together so that there is no clear distinction between where one ends and the other begins.[2] It allows the co-mingling of physical reality with virtual reality (VR) and human intelligence with artificial intelligence (AI).[2]"
 