That is, they think all of their decisions were preordained, and then use this to claim that they can’t be held responsible for anything they do.

  • @[email protected]

    I believe consciousness is a result of processes of the brain, and the brain is a very complex machine. It’s hard to say anything too concretely beyond that because I don’t really understand how it works. I live as though the brain and my consciousness are in perfect sync, but I’m unsure how true that is.

    There are, for example, experiments showing that decisions can be detected before we are consciously aware of having made them. Others show that severing the corpus callosum, the bundle of nerve fibres connecting the hemispheres of the brain, can result in two independent consciousnesses. Who can say where I end and my brain begins?

    • @[email protected]

      Your brain is you, though, just like your hands are you. Whether there’s a lag between the moment imaging detects that you’ve made a decision and the moment you report making it doesn’t change the fact that you’re the one making the decision.

      • @[email protected]

        That’s one way of seeing things, and I respect that viewpoint, but I disagree. I primarily view myself as my consciousness; everything else is secondary. How do you know you aren’t a brain in a vat?

        • @[email protected]

          I’m a fallibilist: I don’t believe we can know anything for certain. The best we can do is base propositions on contingent statements: “If what I see is reliable, then what I see in the mirror is not a brain in a vat.”

          A brain in a vat is not a very useful starting axiom, so I have no reason to give it serious consideration. By contrast, while taking the general accuracy of my own senses as axiomatic eventually leads me to conclude they can be fallible (for example, hallucinations), it is nonetheless a far more useful axiom for deriving a base of contingent knowledge.