• ebits21@lemmy.ca
    10 months ago

    The Freakonomics podcast covered this topic pretty nicely just recently. Would recommend a listen! It’s not just international or low-impact journals that are having issues.

    I feel like zero-trust research could become a thing in the future in some areas.

    So for example, the study would be pre-registered with its expected outcome, as is starting to be done more often now. But in addition, a third party would hold a private encryption key, and the experiment’s data would be encrypted during collection with the matching public key.

    Obviously this very much depends on the type of study, but data is very often collected with some sort of collection software that could implement this.

    The scientists couldn’t snoop on the data even if they wanted to: the public key can encrypt data, but only the private key can decrypt it.

    Then, once the data is uploaded to the third party, they can unlock it with their private key, and the data is made public before any analysis.
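    The mechanism above can be sketched in a few lines. This is a toy illustration only, not real cryptography: it uses textbook ElGamal over a small Mersenne prime, and the key sizes, generator choice, and the `trial_042:ok` data point are all made up for the example. A real system would use an audited library (e.g. libsodium’s sealed boxes), but the property being demonstrated is the same: the collection software holds only the public key, so whoever runs the experiment cannot decrypt what was recorded.

```python
import random

# Toy public-key encryption sketch (NOT secure, illustration only).
# Anyone holding the PUBLIC key can encrypt a reading, but only the
# third party's PRIVATE key can decrypt it.
P = 2**127 - 1          # prime modulus (toy-sized Mersenne prime)
G = 3                   # group generator (toy choice)

def keygen():
    x = random.randrange(2, P - 1)      # private key: third party keeps this
    return x, pow(G, x, P)              # (private, public)

def encrypt(pub, m):
    """Encrypt an integer m < P using only the public key."""
    k = random.randrange(2, P - 1)      # fresh randomness per message
    return pow(G, k, P), (m * pow(pub, k, P)) % P

def decrypt(priv, ciphertext):
    c1, c2 = ciphertext
    s = pow(c1, priv, P)                # recover the shared secret
    return (c2 * pow(s, P - 2, P)) % P  # s^-1 via Fermat's little theorem

priv, pub = keygen()
reading = int.from_bytes(b"trial_042:ok", "big")   # hypothetical data point
recovered = decrypt(priv, encrypt(pub, reading))
print(recovered.to_bytes(12, "big"))   # b'trial_042:ok'
```

    The collection software ships with `pub` baked in; `priv` never leaves the third party, so the raw data is opaque to the lab until the registered release step.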

    Seems to me that this would force science to be done the way it ought to be done!

    • bananabenana@lemmy.world
      10 months ago

      Totally unnecessary, and not how science works.

      If you make data public before analysis, labs will get scooped with their own data. No one would invest in data collection.

      Often things are found or worked out during the process, which can change week to week or month to month, iteratively. Experiments don’t go to plan, data gets cooked and can only be used in reduced ways, etc. Researchers are meant to share their raw data anyway, which should prevent this sort of thing. Basic statistical analysis on datasets usually reveals tampering.

      The issue is the insane academic standards and funding bodies (public grant $) which reward high-volume and high-‘impact’ work. These incentives need re-evaluation, and people should not be punished for years of low activity. Sometimes science and discovery just doesn’t work the way you think it will, and that’s okay. We need a system that acknowledges what everyone in science already knows.

      • ebits21@lemmy.ca
        10 months ago

        All it would do is create an audit trail of your data to keep scientists honest. You can still iterate and change course, but now you’re accountable to the record (whenever you look at the data, a snapshot could be recorded as-is, and a log would track when you checked it). Why did you change course, and when? Was that appropriate? The data is verified when and if you decide to review it.

        How science is done has a problem; I’m just suggesting a solution. I know that’s not how it’s currently done.

        All the data is a matter of record. It makes sure the raw data is ACTUALLY the raw data, without bias. It makes sure you’re not ignoring negative results (a huge issue). Statistical detection of cheating will never be as good as reviewing the raw data and its changes over time.

        As for scooping, it’s a matter of record now: there’s evidence showing that they scooped you, whereas currently there’s nothing. And the data doesn’t have to be public until the study is published.

        I think the main barrier would be scientists themselves and the incentives inherent in the system (career, money, prestige) that create the cheating in the first place.