Philosophy 201 Guidelines

Cogito ergo sum

Here in the Philosophy forum we will talk about all the "why" questions. We'll have conversations about the way in which philosophy and theology and religion interact with each other. Metaphysics, ontology, origins, truth? They're all fair game so jump right in and have some fun! But remember...play nice!

Forum Rules: Here

Is libertarian free will coherent?


  • Originally posted by seer View Post
    Then Thinker you are hypocritical - the whole point of the Münchhausen trilemma is that none of the options are rational. Whether you use epistemological economy or not.
    That would be the most rational view there. Rationality is not binary -- yes or no. It is a spectrum offering differing degrees.
    Blog: Atheism and the City

    If your whole worldview rests on a particular claim being true, you damn well better have evidence for it. You should have tons of evidence.



    • Originally posted by Joel View Post
      Then that leaves open the possibility of freedom of choice. You claimed that I couldn't use hunger as an example of purpose that isn't cause, because hunger deterministically causes you to eat. But if it doesn't necessarily cause you to eat (you could do otherwise), the fact you ate doesn't imply that hunger was the cause.
      Sometimes it does, sometimes it doesn't. It all depends on other factors that play into the causal chain of events at that time. No one is saying that the physiological factors that cause hunger in a person must necessarily be the only thing that causes a person to do what they eventually do. The person could happen to see a mutilated dead body, which might instantly kill their appetite. None of this allows libertarian free will.


      What you are proposing here is that "random" implies "lacks an efficient cause". But that doesn't mean "lacks an efficient cause" implies "random". And that is consistent with my saying that having purpose/order/rationale is sufficient to make something not random.

      E.g.,

      agent has cause & has purpose => not random/arbitrary
      agent has cause & lacks purpose => not random/arbitrary
      agent lacks cause & has purpose => not random/arbitrary
      agent lacks cause & lacks purpose => random/arbitrary
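
      This four-case enumeration can be read as a small truth table. As an illustrative sketch only (the predicate names below are hypothetical, not part of the argument), the claim is that randomness requires both the lack of a cause and the lack of a purpose:

      ```python
      # Sketch of the four-case mapping above: an action counts as
      # random/arbitrary only when the agent both lacks a cause
      # and lacks a purpose.
      def is_random(has_cause: bool, has_purpose: bool) -> bool:
          return not has_cause and not has_purpose

      # Enumerate all four cases from the table.
      for has_cause in (True, False):
          for has_purpose in (True, False):
              label = "random/arbitrary" if is_random(has_cause, has_purpose) else "not random/arbitrary"
              print(f"cause={has_cause}, purpose={has_purpose} => {label}")
      ```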


      The most one can do to show them logically possible is provide a model/scenario that is consistent with them. And my scenario is.
      You haven't explained how purpose/order/rationale has any kind of effect on a thought that isn't part of a causal physical process. You've given me an example where a physical thing can cause or influence a decision. To claim that the agent has no cause would indeed make whatever it does totally random. Why would it spontaneously think X instead of Y? If your argument is that it "lacks cause & has purpose", then even if I assume a dualistic ontology, the agent can have no influence over whether that purpose is a factor in thinking X or Y; it would be totally random, unless you want to admit a causal factor here.


      (I didn't admit any such thing. I said it's irrelevant, but I'm willing to suppose for the sake of argument that it was something you didn't choose.)
      Even if we suppose that the popping into your head at t-dt was involuntary, that doesn't imply that your continuing to think it at t+dt is involuntary. They are different events. Your thinking it at t-dt doesn't imply that you must think it at t+dt. At t+dt you might keep thinking it, or you might go make a sandwich instead. So there's no contradiction with the thought at t+dt being freely chosen.
      Yes there is. At t+dt you would not be able to tell whether whatever thought happened at that time was a totally random thought that was completely out of your control. If you say that you weighed all the consequences and made a choice, all those thoughts that weighed different options were themselves thoughts you could not choose, because, once again, you cannot have a thought about a thought before you have the thought. The choice to choose X vs. go make a sandwich is something that arose in your head. Thinking beforehand about whether to keep thinking about X or go make a sandwich makes no difference. The choice was a thought you could not have chosen beforehand.



      This line of questioning doesn't help you. If we can't tell whether it is LFW or not, then (as far as we know) it possibly is LFW and possibly isn't, and thus LFW is possible.
      It does help me because you are unable to explain how you could choose your next thought and have control over it in a way that is distinguishable from total uncontrolled randomness. The rest of this thread is a waste of time.





      No, my t1, t2, t3 sequence applies to any choice/action (whether that is kicking a ball, or continuing to contemplate X).

      For your convenience I repeat it here:

      Time t1: Agent is thinking about ideas of possible actions (e.g. possible action X, possible action Y, ...), and is deliberating about them.
      Time t2: Agent selects one of those ideas to actualize.
      Time t3: Agent is doing action Y (or X, or Z,...)


      I see, the confusion comes from your saying something that means something different from what you mean to say. I encourage you to try to be more precise about this distinction, because it is the very issue of which question we are discussing, and thus is a distinction fundamental to the discussion.
      The selection is still a thought. If I am thinking about doing X vs doing Y - regardless of whether X or Y are physical actions or just thinking - when I eventually come to a decision, that is a mental event and a thought. I cognitively choose to do either X or Y. If it is a physical action, then after I cognitively make my choice, I perform the action.

      Definitions are separate from the chronological order of events.
      I suppose for our discussion, an agent means a human being, since the discussion is about the possibility of human LFW.
      What caused the thoughts at t1 is irrelevant. The LFW choice the sequence is describing is the action at t3.
      That's the thing. It isn't. The whole sequence of thoughts in your explanation are thoughts you could not have chosen ahead of time, and thus we cannot be in control of our thoughts.

      I don't think I need to even talk about theories of soul/body, because the chronology doesn't refer to them. You can think of the human being in question as a single unit - an individual being. Theories of soul/body unity or duality or whatever are a discussion for another time.

      (As a side note, your claim that no soul can affect the body would be seriously begging the question. But it's irrelevant to the model I'm presenting.)
      It matters because if the agent includes the soul and the body, the body is made of atoms and they follow the laws of physics, which there is no way for your consciousness to violate. Claiming it does would require you to bear the burden of proof of showing this is true. So to claim that you've shown my (1) "We are in control of our will" would require a soul that can violate the laws of physics. But I am not even assuming this for the sake of argument. I can grant you that a soul exists, and has a causal impact on the body for the sake of argument, and you would still be unable to demonstrate my (1).


      At most your statement here could refer only to a particular mind/body theory (and I mentioned multiple possible theories). But because my model is agnostic to mind/body theory, the choice is not relevant. So whether the mind is part of the body is irrelevant. (And even if the mind were part of the body, your claim that that precludes LFW would be begging the question. You'd be assuming bodily, and thus mental, determinism.)
      It all matters in the larger context of the conversation. Including the body into the definition of the agent forces me to consider physics, chemistry, and biology, which are all things that do not violate the laws of physics. Saying the soul can break the causal chain would require that you demonstrate this. Bodily and mental determinism is well established. You can look at this thread post for some of the arguments in favor of it.


      Only if you assume that what happens at t2 and t3 is determined by the events of t1. But that would be assuming determinism, which would be begging the question.
      I'm not assuming anything. I'm looking at the events of t2 and t3 and saying they are either caused events or uncaused events. If they are uncaused, which is the position you are taking, they would have no relation or causal connection to t1, and it would thus be inexplicable why they would be anything like them. Claiming that purpose resolves this doesn't resolve the problem. If t1 & t2 & t3 are totally uncaused, purpose can have no effect on them whatsoever.

      It is a mental event in the sense that it is the mind doing something. But it's different from a thought, as I've explained above. Those are different mental faculties.
      A thought is a mental event and a mental event is a thought. You haven't actually explained otherwise.

      By definition, in my model, the agent does it: "Time t2: Agent selects one of those ideas to actualize." I don't need to show that my model is true. It only needs to be consistent with your OP's 1, 2, and 3 (which it is), showing them to be possible.
      The confusion I have is that on your view the choice is not a thought, and so the agent would have to be making that choice via some process that is not a thought.


      Because that is the only sense in which the agent can be truly in control of the agent's thoughts. If something else does cause the agent to cause the thought, then we can't believe that the agent is in control of the agent's thoughts.
      But the agent cannot be in control of something that is totally uncaused. It wouldn't have any way of controlling whether thought X arose in its consciousness or thought Y. It cannot choose what thought it will have 5 seconds from now or 1 second from now. Hence the dilemma.



      • Originally posted by The Thinker View Post
        That would be the most rational view there. Rationality is not binary -- yes or no. It is a spectrum offering differing degrees.
        Then accepting an ex cathedra stopping point would be just as rational as your position, and I quote:

        One can stop at self-evidence or common sense or fundamental principles or speaking ex cathedra or at any other evidence, but in doing so, the intention to install 'certain' justification is abandoned.

        https://en.wikipedia.org/wiki/M%C3%BCnchhausen_trilemma
        Atheism is the cult of death, the death of hope. The universe is doomed, you are doomed, the only thing that remains is to await your execution...

        https://www.youtube.com/watch?v=Jbnueb2OI4o&t=3s



        • Originally posted by seer View Post
          Then accepting an ex cathedra stopping point would be just as rational as your position, and I quote:
          No, it is not just as rational as my position, because ex cathedra is the view that the pope is infallible. But the pope has made numerous mistakes. He claimed god will allow atheists into heaven, and then changed his mind when the rest of the Catholic Church thought that was wrong. So we have proof that using ex cathedra as a basic belief is irrational. You see, you cannot use as a basic belief something that has already been shown to be false. That would clearly be irrational.



          • Originally posted by The Thinker View Post
            Sometimes [hunger] does [efficiently cause], sometimes it doesn't.
            Then assume that when I used hunger as an example, I was talking about a case in which it was not the efficient cause.

            Originally posted by Joel
            E.g.,

            agent has cause & has purpose => not random/arbitrary
            agent has cause & lacks purpose => not random/arbitrary
            agent lacks cause & has purpose => not random/arbitrary
            agent lacks cause & lacks purpose => random/arbitrary
            You haven't explained how purpose/order/rationale has any kind of effect on a thought that isn't part of a causal physical process.
            I've said that it does not affect the action. I'm only saying that the agent having an intent implies that the action isn't arbitrary. Lack of efficient cause (on the agent) is not sufficient to make the action arbitrary.

            To claim that the agent has no cause would indeed make whatever it does totally random.
            No, because an agent can act with order, producing orderly results that would be impossible (or highly improbable) with a truly random process. And that resulting order could not be the cause of the action; it would be an effect of the action.

            And I should remind you that every causal chain begins at an uncaused causer. If the fact of an uncaused causer implies "totally random", then that would imply that everything in existence is "totally random". But you want to say that determinism is not-random. So it is not-random when there is an uncaused cause at the beginning of the chain. And that doesn't change regardless of where that uncaused cause exists (whether in the big bang or in an agent).

            Yes there is. At t+dt you would not be able to tell whether whatever thought happened at that time was a totally random thought that was completely out of your control.
            As I've pointed out before, this not-being-able-to-tell argument only helps me. If we can't tell whether it was LFW or not, then (as far as we can tell) each is possible, so LFW is possible.

            Originally posted by Joel
            This line of questioning doesn't help you. If we can't tell whether it is LFW or not, then (as far as we know) it possibly is LFW and possibly isn't, and thus LFW is possible.
            It does help me because you are unable to explain how you could choose your next thought and have control over it in a way that is distinguishable from total uncontrolled randomness. The rest of this thread is a waste of time.
            But LFW being possible (and our knowing it to be possible) does not imply that we are also able to distinguish (i.e. after the fact) whether a choice was LFW. (I'm not saying we can't. I just don't think it matters for the discussion.) In other words, our ability to distinguish it is not a necessary condition of its being possible.

            If you say that you weighed all the consequences and made a choice, all those thoughts that weighed different options were themselves thoughts you could not choose, because, once again, you cannot have a thought about a thought before you have the thought. The choice to choose X vs. go make a sandwich is something that arose in your head. Thinking beforehand about whether to keep thinking about X or go make a sandwich makes no difference. The choice was a thought you could not have chosen beforehand.
            The thinking about the options and the consequences and the weighing were all thoughts that occurred at t1. How they got in your head is irrelevant. Even if everything at t1 was involuntary, that doesn't imply that t2 and t3 are involuntary.

            For your convenience I repeat it here:

            Time t1: Agent is thinking about ideas of possible actions (e.g. possible action X, possible action Y, ...), and is deliberating about them.
            Time t2: Agent selects one of those ideas to actualize.
            Time t3: Agent is doing action Y (or X, or Z,...)

            Originally posted by Joel
            What caused the thoughts at t1 is irrelevant. The LFW choice the sequence is describing is the action at t3.
            That's the thing. It isn't. The whole sequence of thoughts in your explanation are thoughts you could not have chosen ahead of time, and thus we cannot be in control of our thoughts.
            All the relevant thoughts are thought at t1. The selecting at t2 is not "chosen ahead of time", and does not require any additional thought prior to t2.

            Claiming it does would require you to bear the burden of proof of showing this is true.
            I have no need of claiming so because (a) I'm not tying myself to that or any other particular soul/body theory, and (b) I wouldn't need to show any of this is true, because we are only talking about logical possibility.

            It all matters in the larger context of the conversation. Including the body into the definition of the agent forces me to consider physics, chemistry, and biology, which are all things that do not violate the laws of physics. Saying the soul can break the causal chain would require that you demonstrate this.
            (For the record, this is referring to a particular kind of soul/mind theory, which I'm not tying myself to.)
            The laws of physics only describe that which happens. If a soul is different from the body, and the soul affects the body, then that wouldn't be a violation of the laws of physics. That would be a law of physics.

            Also in my model, the agent does not break any causal chain. The agent (in making a LFW choice) only begins a new causal chain.

            Originally posted by Joel
            Only if you assume that what happens at t2 and t3 is determined by the events of t1. But that would be assuming determinism, which would be begging the question.
            I'm not assuming anything. I'm looking at the events of t2 and t3 and saying they are either caused events or uncaused events. If they are uncaused, which is the position you are taking, ...
            No I'm not. The action that is in progress at t3 is caused: by the agent. The agent was the uncaused causer of that deterministic chain. The event at t2 is just the agent causing the first effect in the new chain.

            ...they would have no relation or causal connection to t1, and it would thus be inexplicable why they would be anything like them.
            They are not caused by events at t1, but that does not mean they have no other relation to the events at t1. Indeed, the fact that the agent (at t2) selects among the alternative ideas that were being thought at t1 (up to time t2), implies a necessary, but not causal, relationship between t1 and t2.

            A thought is a mental event and a mental event is a thought. You haven't actually explained otherwise.
            All thoughts are mental events, but not all mental events are thoughts. I did explain this before. To give a refresher: Thinking is just contemplation of an idea. But we are mentally capable of things other than that. (E.g. the faculty of memory and the recalling of a particular memory is different from thinking about the idea that has been recalled.) And thinking is clearly different from action. "Like the difference between thinking really hard about moving your hand, vs actually moving your hand. They are different faculties."

            Now I suppose one can define terms however one wants. You could define "thought" to be equivalent to "mental event", but then it wouldn't be a useful term for making the distinctions I'm making.

            But the agent cannot be in control of something that is totally uncaused.
            For LFW, all the agent needs to control is the agent's actions (whether mental or physical), which are caused: by the agent.



            • Originally posted by The Thinker View Post
              No, it is not just as rational as my position, because ex cathedra is the view that the pope is infallible. But the pope has made numerous mistakes. He claimed god will allow atheists into heaven, and then changed his mind when the rest of the Catholic Church thought that was wrong. So we have proof that using ex cathedra as a basic belief is irrational. You see, you cannot use as a basic belief something that has already been shown to be false. That would clearly be irrational.
              That is not the point, again! It doesn't matter whether the Pope can be wrong or not, or whether your common sense can be wrong or not, or a fundamental principle. It is the same basic principle, and according to the trilemma, justification is abandoned.



              • Originally posted by Joel View Post
                Then assume that when I used hunger as an example, I was talking about a case in which it was not the efficient cause.


                I've said that it does not affect the action. I'm only saying that the agent having an intent implies that the action isn't arbitrary. Lack of efficient cause (on the agent) is not sufficient to make the action arbitrary.

                No, because an agent can act with order, producing orderly results that would be impossible (or highly improbable) with a truly random process. And that resulting order could not be the cause of the action; it would be an effect of the action.
                Lack of efficient and material cause, which is your view, does make it arbitrary. Without a cause there is no way the thought would have any relation to your purpose. It would simply be a random fluctuation, and that would not be LFW.

                And I should remind you that every causal chain begins at an uncaused causer. If the fact of an uncaused causer implies "totally random", then that would imply that everything in existence is "totally random". But you want to say that determinism is not-random. So it is not-random when there is an uncaused cause at the beginning of the chain. And that doesn't change regardless of where that uncaused cause exists (whether in the big bang or in an agent).
                That's not true at all. Every causal chain is linked back to an initial cause, like branches of a tree going down to the root. The whole point, from my perspective, is that we are not uncaused causes; we are caused. That's why it makes sense that our behavior reflects previous events.

                As I've pointed out before, this not-being-able-to-tell argument only helps me. If we can't tell whether it was LFW or not, then (as far as we can tell) each is possible, so LFW is possible.
                No, that is not the case at all. If your claim is that a "free" event is indistinguishable from total randomness, it means you have no justification for saying LFW is logically possible. You need to show a situation that is only possible under LFW by showing how we can be in control of our will.

                But LFW being possible (and our knowing it to be possible) does not imply that we are also able to distinguish (i.e. after the fact) whether a choice was LFW. (I'm not saying we can't. I just don't think it matters for the discussion.) In other words, our ability to distinguish it is not a necessary condition of its being possible.
                Yes it is. Because something that is exactly like randomness is not justifiably LFW. There would be no way for you to justify that our thoughts aren't random fluctuations. And something that has the appearance of randomness is not compatible with us being in control of our will, my (1).

                The thinking about the options and the consequences and the weighing were all thoughts that occurred at t1. How they got in your head is irrelevant. Even if everything at t1 was involuntary, that doesn't imply that t2 and t3 are involuntary.

                For your convenience I repeat it here:

                Time t1: Agent is thinking about ideas of possible actions (e.g. possible action X, possible action Y, ...), and is deliberating about them.
                Time t2: Agent selects one of those ideas to actualize.
                Time t3: Agent is doing action Y (or X, or Z,...)


                All the relevant thoughts are thought at t1. The selecting at t2 is not "chosen ahead of time", and does not require any additional thought prior to t2.
                The whole point is that t1, t2, t3 are all involuntary thoughts. At t2 the agent couldn't have chosen beforehand what they would choose. The selection is just a mental state that arises in their consciousness that they have no control over. Prior deliberation does not in any way make it a freely willed mental event.

                Thanks by the way for reiterating the time sequence.

                (For the record, this is referring to a particular kind of soul/mind theory, which I'm not tying myself to.)
                The laws of physics only describe that which happens. If a soul is different from the body, and the soul affects the body, then that wouldn't be a violation of the laws of physics. That would be a law of physics.

                Also in my model, the agent does not break any causal chain. The agent (in making a LFW choice) only begins a new causal chain.
                For the record, if an immaterial soul affected the body, that would indeed be a violation of the laws of physics, because it would require some force that violates the Standard Model, and it would inject new energy into the universe, violating the law of conservation of energy. In your model, the agent would indeed break the causal chain because its uncaused cause would not be an event caused by the preceding chain of causes, and that breaks the chain.


                No I'm not. The action that is in progress at t3 is caused: by the agent. The agent was the uncaused causer of that deterministic chain. The event at t2 is just the agent causing the first effect in the new chain.
                If the chain is deterministic then no thoughts or mental events that come after the first event can be said to be free. And if the initial cause is indistinguishable from randomness, then it cannot justifiably be said to be LFW where we are in control of our will. It's simply logically impossible that we can control our thoughts or will. As Schopenhauer says, "A man can do what he wants, but not want what he wants."


                They are not caused by events at t1, but that does not mean they have no other relation to the events at t1. Indeed, the fact that the agent (at t2) selects among the alternative ideas that were being thought at t1 (up to time t2), implies a necessary, but not causal, relationship between t1 and t2.
                It implies a causal relationship. Also, if the choice is a mental event that arises in consciousness you couldn't have had any control over it. You had no control over the initial thoughts at t1, nor did you have any control over how they would affect your decision at t2.

                All thoughts are mental events, but not all mental events are thoughts. I did explain this before.
                I'm not buying this. I'm using a very broad definition of "thought." To me, a memory is a thought. Thoughts are more than just ideas. A mental decision, like the one at t2, is to me a thought. It is a change of consciousness.


                For LFW, all the agent needs to control is the agent's actions (whether mental or physical), which are caused: by the agent.
                How do agents control mental events? This is the heart of this post.



                • Originally posted by seer View Post
                  That is not the point, again! It doesn't matter whether the Pope can be wrong or not, or whether your common sense can be wrong or not, or a fundamental principle. It is the same basic principle, and according to the trilemma, justification is abandoned.
                  It does matter, because granting the status of basic belief to something known to be wrong, or incoherent, is irrational. A basic belief like "I exist" or "there is an external world" is so basic that denying it would lead to irrationality. If I don't believe there is an external world, then I cannot even justify that you exist, in which case, who am I debating with right now?

                  But to go back to my point, basic beliefs do not justifiably allow you to make any claim into a basic belief.



                  • Originally posted by The Thinker View Post
                    It does matter, because granting the status of basic belief to something known to be wrong, or incoherent, is irrational. A basic belief like "I exist" or "there is an external world" is so basic that denying it would lead to irrationality. If I don't believe there is an external world, then I cannot even justify that you exist, in which case, who am I debating with right now?
                    Again Thinker, you have still abandoned logical justification, no matter the content (that is the whole point of the trilemma: no option is rational). And denying that the external world exists (as you subjectively experience it) would not lead to the irrational. If you were a brain in a vat or living in the Matrix, that would not necessarily lead to irrationality. It would just mean that you are deceived.

                    But to go back to my point, basic beliefs do not justifiably allow you to make any claim into a basic belief.
                    Except when you choose your arbitrary stopping point you have sacrificed rational justification. You have built your house on an irrational foundation. No matter what follows.



                    • Originally posted by The Thinker View Post
                      Lack of efficient and material cause, which is your view, does make it arbitrary. Without a cause there is no way the thought would have any relation to your purpose. It would simply be a random fluctuation, and that would not be LFW.
                      The claim that "there is no way the thought would have any relation to your purpose" is something you are assuming without justification. We have no reason to think that an uncaused causer cannot act in such a way that the causer relates the action with intents. We have no reason to think that an uncaused causer cannot aim his actions toward a goal. We have no reason to think an uncaused causer cannot act with orderly actions, producing order.

                      Here's another possibility to think about: Suppose for the sake of argument that the options and purposes considered at t1 were deterministically/involuntarily in the person's mind and deliberation. Suppose further that the options are between:
                      - Orderly action 1 with purpose X, or
                      - Orderly action 2 with purpose Y.

                      And then suppose (at t2) the person makes a LFW selection from these two options. The set of options (by supposition) deterministically restricted the person's LFW choice to options that are orderly and with purpose, and yet because the set of options was not deterministically reduced to one single option, room was still left for a LFW choice among the options. So even if you were to additionally suppose (as you do here) that the LFW choice were a random fluctuation, the result would still necessarily be orderly, purposeful action.

                      (Note that I don't actually concede to all those suppositions. They were made for the sake of that argument, to grant you as much leeway as possible.)

                      Originally posted by Joel
                      And I should remind that every causal chain begins at an uncaused causer. If the fact of an uncaused causer implies "totally random", then that would imply that everything in existence is "totally random". But you want to say that determinism is not-random. So it is not-random when there is a uncaused cause at the beginning of the chain. And that doesn't change regardless where that uncaused cause exists (whether in the big bang or in an agent).
                      That's not true at all. Every causal chain is linked back to an initial cause, like branches of a tree going down to the root. The whole point, from my perspective, is that we are not uncaused causes; we are caused. That's why it makes sense that our behavior reflects previous events.
                      In my model, we are (to some extent) uncaused causers. So you can't just assume we are not. That would be begging the question.

                      If you are saying that all causal chains go back to a single, common root/initial cause, then that disagrees with my model in which there are multiple roots. For you to assume there isn't would be begging the question.

                      In my model, our behavior has a necessary relationship to previous events (because at t2 the agent freely selects from the options contemplated at t1).

                      You need to show a situation that is only possible under LFW
                      No I don't. I just need to show a model that is consistent with LFW.

                      Originally posted by Joel
                      But LFW being possible (and our knowing it to be possible) does not imply that we are also able to distinguish (i.e. after the fact) whether a choice was LFW. (I'm not saying we can't. I just don't think it matters for the discussion.) In other words, our ability to distinguish it is not a necessary condition of it being possible.
                      Yes it is. Because something that is exactly like randomness is not justifiably LFW.
                      Our inability to distinguish whether something is X or Y, after the fact, does not imply that X and Y are identical (or exactly alike). Indeed, in this case, they are different by definition. But that doesn't imply that we will have the capability to experimentally observe which of the two happened. Indeed, if we can't determine which is the case, then (to the extent of our ability to know) we could at most say that it might be one and it might be the other. Thus it might be LFW.

                      (And note that I'm not saying we actually lack the ability.)

                      Originally posted by Joel
                      For your convenience I repeat it here:

                      Time t1: Agent is thinking about ideas of possible actions (e.g. possible action X, possible action Y, ...), and is deliberating about them.
                      Time t2: Agent selects one of those ideas to actualize.
                      Time t3: Agent is doing action Y (or X, or Z,...)
                      The whole point is that t1,t2,t3 are all involuntary thoughts. At t2 the agent couldn't have chosen beforehand what they choose. The selection is just a mental state that arises in their consciousness that they have no control over. Prior deliberation does not in any way make it a freely willed mental event.
                      In my model it is not the case that "t1,t2,t3 are all involuntary thoughts."
                      In my model, the agent does not choose before t2; the agent chooses at t2. Not beforehand.
                      In my model, the selection is caused by the agent. It is not a thing, such that the agent would need to control it. It is the agent exercising control. It is the very exercising of control (i.e., over which alternative is actualized). It is nothing but the actualizing/causing of the first effect in the new causal chain. And the agent did control which of the options was actualized.

                      For the record, if an immaterial soul affected the body, that would indeed be a violation of the laws of physics, because it would require some force that violates the Standard Model
                      That's not how science works. When scientists discovered things that violated Newtonian physics, they didn't discover something that violated the laws of physics. Rather, they discovered that Newtonian physics was not a complete description of the laws of physics. Likewise if things are ever observed that violate the Standard Model (i.e., of particle physics), it would not imply a violation of the laws of physics. Rather it would just imply that the Standard Model is not complete, and would need to be modified or replaced by something more general. (Indeed our current understanding of gravitation is not fully compatible with the Standard Model of particle physics. I pointed out that the equation you posted earlier looks like it doesn't include Einstein's full field equations.)
                      (Not that this point is relevant to the discussion. I just couldn't let something like that slide.)

                      If the chain is deterministic then no thoughts or mental events that come after the first event can be said to be free.
                      Yes and no. They would all be free in the sense that the person could have chosen an alternative chain to actualize. E.g. the person freely chose the chain in which he continues to think about X, rather than ceasing to think about X and instead going to make a sandwich. In that sense, the person freely chose what the person thought about (at t3).
                      And if the person made only that one LFW choice in his whole life, then the extent of his libertarian freedom would be limited to that sense. But if the person can make many LFW choices, then t3 could either be or lead to a new t1' state from which the person makes another free selection, and so on.

                      Originally posted by Joel
                      They are not caused by events at t1, but that does not mean they have no other relation to the events at t1. Indeed, the fact that the agent (at t2) selects among the alternative ideas that were being thought at t1 (up to time t2), implies a necessary, but not causal, relationship between t1 and t2.
                      It implies a causal relationship.
                      How so?

                      I'm not buying this. I'm using a very broad definition of "thought." To me, a memory is a thought. Thoughts are more than just ideas. A mental decision, like the one at t2, is to me a thought. It is a change of consciousness.
                      I guess you can define terms however you want. I'm inclined to stop using that term, then, because it's not very useful or clear to everyone. It would be more clear to just say "mental state" or "mental event" if that's what you mean.
                      And it obscures potentially important distinctions between different kinds of mental states or mental events. And it makes it more difficult to talk about the particular kind of mental activity of thinking about an idea, as distinguished from other kinds of mental activity.

                      Take for example, your statement from your OP: "You can't have a thought, about a thought, before you have a thought." If we translate this to: "You can't have a mental state, about a mental state, before you have a mental state." It raises questions: What does it mean to have a mental state about a mental state? Are the three instances of "mental state" in that statement referring to one-and-the-same mental state? Or about three different mental states? etc. What kind of mental state(s) are being referred to here? Your statement isn't clear.

                      But let me try to continue interpreting your OP argument using your definition:
                      You continued saying: "You can't choose what your next thought, desire, or idea will be. In order to do that, you'd have to think about it, before you think about it. That's incoherent."
                      If I try substituting your definition of "thought", the error in your reasoning becomes apparent: "In order to do that, you'd have to [have a mental state A] about [the mental state B], before you [have the mental state B itself]."
                      Or if I might try to make that more clear:
                      "In order to do that, you'd have to [contemplate (which is mental state A)] [the idea of the potential mental state B], before [the mental state B is actual]."
                      When you just say "think about it, before you think about it," that sounds contradictory. But when you dig deeper and realize that the two "think about it" clauses cannot refer to the same kind of thing, the apparent contradiction vanishes. Contemplating the idea of B is not the same as B being actual.

                      On the other hand, it does help me understand other statements of yours. When you said in the OP, "our thoughts are our will and mind." I can see why you would say that when you mean "our mental states are our will and mind". And you should be able to see why that statement seemed loony to me when I was understanding "thought" to mean "idea."

                      So your other OP argument is "If our [mental states] have causes, what ever caused that can't be our will or our mind, because our [mental states] are our will and mind." You are complaining that that would be claiming the mental state is self-caused, and nothing can be self-caused. Now we can't interpret that very strictly, otherwise it would also rule out the possibility of determinism. We have to at least distinguish between successive states at different times. In determinism, a state at one time causes the 'next' state. The succession of states would be the will and mind, not any particular state. So the 'next' state wouldn't be self-caused. It would be caused by the prior state. So your complaint is that with LFW, the 'next' state isn't caused by the prior state, so what was it caused by?

                      To help understand this, first let's consider an agent that does not change, but LFW causes changes outside the agent. Indeed this is the traditional Christian conception of God. In this case there is no problem of self-cause. The mental state of the agent doesn't change. The agent, as an uncaused causer, only causes changes outside the agent. It is a mental event in the sense that the Uncaused Causer causes the external thing to happen. There is a causal event, but not a change of state of the Uncaused Causer.

                      So likewise we have no problem if a human agent's capacity for free will is a capacity only to affect things outside the agent's mind (external actions). Your argument says nothing against such a possibility.

                      But we can go further. Suppose now that in the human's mind, we think of the faculty of the will as one part of the mind, distinct from the rest of the mind. So now, just as we can avoid self-causation if we talk of an agent only making external changes, the will could make changes external to itself, including the state of the rest of the mind, without any problem of self-causation.

                      It seems your complaint is that you are thinking that the will itself must change state (internal to the will) in the process of causing changes external to itself. But there's no reason to suppose that it must. It could itself be unchanging while causing changes only external to itself. And I'm not entirely comfortable with even talking about the will as a thing having state. It's just a faculty/capability of the person. Is there some state within the person that determines whether the entity has this faculty? I don't know. But if so I don't see any reason to suppose that it is state that changes when a person makes a LFW choice. Thus we only need to deal with this capacity (of this person) originally coming into existence. And we can agree that that was not caused by the person. (E.g. the person and this LFW capacity of the person was originally created by God.)

                      And finally, I think we have to go back to the point that there must exist at least one uncaused cause. Thus the mere existence of an uncaused causer (and an action caused by it) cannot be contradictory. An agent's action must originate in some uncaused causer, which could possibly be anything. Why not the agent itself?



                      • Originally posted by seer View Post
                        Again Thinker, you have still abandoned logical justification, no matter the content (that is the whole point of the trilemma; no option is rational). And denying that the external world exists (as you subjectively experience it) would not lead to the irrational. If you were a brain in a vat or living in the Matrix that would not necessarily lead to irrationality. It would just mean that you are deceived.
                        That is completely false. That basic beliefs themselves can never be justified does not in any way say that all possible ideas are equally justified. If a view is incoherent, it cannot be true no matter what - regardless of whether we're living in a matrix or not.

                        Except when you choose your arbitrary stopping point you have sacrificed rational justification. You have built your house on an irrational foundation. No matter what follows.
                        Once again, arbitrary means "based on random choice or personal whim, rather than any reason or system." My methodology is not based on a random choice or a personal whim, but on a system of reason. So you're false again.
                        Blog: Atheism and the City

                        If your whole worldview rests on a particular claim being true, you damn well better have evidence for it. You should have tons of evidence.



                        • Originally posted by The Thinker View Post
                          That is completely false. That basic beliefs themselves can never be justified does not in any way say that all possible ideas are equally justified. If a view is incoherent, it cannot be true no matter what - regardless of whether we're living in a matrix or not.
                          What are you on about? The part about the Matrix was to point out that that would not be irrational. And you are still holding a logically unjustified belief, no matter what follows.

                          Once again, arbitrary means "based on random choice or personal whim, rather than any reason or system." My methodology is not based on a random choice or a personal whim, but on a system of reason. So you're false again.

                          Again, your stopping point is arbitrary. Instead of following an infinite regression of possible explanations you decided to stop where you did - why? Only because it seemed subjectively right to do so, and that choice was not based on logic, as the Trilemma points out.



                          • Wow. You never give up.

                            Originally posted by Joel View Post
                            The bolded part is something you are assuming without justification. We have no reason to think that an uncaused causer cannot act in such a way that the causer relates the action with intents. We have no reason to think that an uncaused causer cannot aim his actions toward a goal. We have no reason to think an uncaused causer cannot act with orderly actions, producing order.
                            We do have reason, because uncaused is random. Something uncaused cannot have a goal or intent affect it, because then it would be caused.

                            Here's another possibility to think about: Suppose for the sake of argument that the options and purposes considered at t1 were deterministically/involuntarily in the person's mind and deliberation. Suppose further that the options are between:
                            - Orderly action 1 with purpose X, or
                            - Orderly action 2 with purpose Y.

                            And then suppose (at t2) the person makes a LFW selection from these two options. The set of options (by supposition) deterministically restricted the person's LFW choice to options that are orderly and with purpose, and yet because the set of options was not deterministically reduced to one single option, room was still left for a LFW choice among the options. So even if you were to additionally suppose (as you do here) that the LFW choice were a random fluctuation, the result would still necessarily be orderly, purposeful action.
                            You're supposing LFW for the sake of argument, which is the very thing that is incoherent. The person cannot make the selection. The selection arises in consciousness, and the person could not have chosen it beforehand.


                            In my model, we are (to some extent) uncaused causers. So you can't just assume we are not. That would be begging the question.

                            If you are saying that all causal chains go back to a single, common root/initial cause, then that disagrees with my model in which there are multiple roots. For you to assume there isn't would be begging the question.

                            In my model, our behavior has a necessary relationship to previous events (because at t2 the agent freely selects from the options contemplated at t1).
                            I'm not begging any question, I'm just offering you my perspective. At t2 the agent cannot choose because the choice arises in consciousness and you cannot choose your future thought.


                            No I don't. I just need to show a model that is consistent with LFW.
                            No you don't. Saying LFW is tantamount to total randomness does not in any way show it is logically possible, because you still have to answer how we can have control over our will, which is not an issue under total randomness.


                            In my model it is not the case that "t1,t2,t3 are all involuntary thoughts."
                            In my model, the agent does not choose before t2; the agent chooses at t2. Not beforehand.
                            In my model, the selection is caused by the agent. It is not a thing, such that the agent would need to control it. It is the agent exercising control. It is the very exercising of control (i.e., over which alternative is actualized). It is nothing but the actualizing/causing of the first effect in the new causal chain. And the agent did control which of the options was actualized.
                            That is not in any way LFW. If t1, t2, and t3 are not all involuntary thoughts on your model, you need to logically explain how an agent can control its thoughts. You say it's identical to and indistinguishable from randomness. That's basically the end of the story for you. Saying an agent "chooses" something does not demonstrate that the agent had control over it. You cannot control something that is uncaused. That's incoherent.


                            That's not how science works. When scientists discovered things that violated Newtonian physics, they didn't discover something that violated the laws of physics. Rather, they discovered that Newtonian physics was not a complete description of the laws of physics. Likewise if things are ever observed that violate the Standard Model (i.e., of particle physics), it would not imply a violation of the laws of physics. Rather it would just imply that the Standard Model is not complete, and would need to be modified or replaced by something more general. (Indeed our current understanding of gravitation is not fully compatible with the Standard Model of particle physics. I pointed out that the equation you posted earlier looks like it doesn't include Einstein's full field equations.)
                            (Not that this point is relevant to the discussion. I just couldn't let something like that slide.)
                            Your response is a very common argument to this, but it's just wrong. That's not the case with the Standard Model. In a lecture he gave at Oxford, Poetic Naturalism, physicist Sean Carroll responds to this argument, saying:

                            The difference with our current framework of quantum field theory is that if quantum field theory is correct in a certain regime - Newtonian mechanics is correct in a certain regime, right? We don't need quantum mechanics to fly a rocket to the moon for example. Quantum field theory unlike Newtonian mechanics tells us very precisely what regime it is valid in. It gives us a delineation of where the theory is supposed to work and where it's not supposed to work. You can draw that line, and it turns out that in practice, drawing the line around the quantum field theory I drew on the previous slide includes all our everyday experience.
                            Elsewhere, Carroll has articulated on the point:

                            As Michael Salem points out on an alternative social-media site (rhymes with “lacebook”), some of the resistance to this really quite unobjectionable claim comes from a lack of familiarity with the idea of a “range of validity” for a theory. We tend to think of scientific theories as “right” or “wrong,” which is hardly surprising. But not correct! Theories can be “right” within a certain regime, and useless outside that regime. Newtonian gravity is perfectly good if you want to fly a rocket to the Moon. But you need to toss it out and use general relativity (which has a wider range of validity) if you want to talk about black holes. And you have to toss out GR and use quantum gravity if you want to talk about the birth of the universe.
                            On another blog post he's written:

                            What there won’t be is some dramatic paradigm shift that says “Oops, sorry about those electrons and protons and neutrons, we found that they don’t really exist. Now it’s zylbots all the way down.” Nor will we have discovered new fundamental particles and forces that are crucial to telling the story of everyday phenomena. If those existed, we would have found them by now. The view of electrons and protons and neutrons interacting through the Standard Model and gravity will stay with us forever — added to and better understood, but never replaced or drastically modified.
                            So there is no new data that will have any effect on the Standard Model that will be able to allow for a soul to enter. That antiquated idea is just false.

                            Yes and no. They would all be free in the sense that the person could have chosen an alternative chain to actualize. E.g. the person freely chose the chain in which he continues to think about X, rather than ceasing to think about X and instead going to make a sandwich. In that sense, the person freely chose what the person thought about (at t3).
                            And if the person made only that one LFW choice in his whole life, then the extent of his libertarian freedom would be limited to that sense. But if the person can make many LFW choices, then t3 could either be or lead to a new t1' state from which the person makes another free selection, and so on.
                            The person could not have chosen, because he or she cannot control his or her thoughts - which is the whole point. Just asserting that an agent can control its thoughts doesn't prove it or show it's logically possible.


                            How so?
                            Because either something is caused or it is not. If it is not, it won't have any relationship with what came before it other than pure chance.


                            But let me try to continue interpreting your OP argument using your definition:

                            You continued saying: "You can't choose what your next thought, desire, or idea will be. In order to do that, you'd have to think about it, before you think about it. That's incoherent."

                            If I try substituting your definition of "thought", the error in your reasoning becomes apparent: "In order to do that, you'd have to [have a mental state A] about [the mental state B], before you [have the mental state B itself]."

                            Or if I might try to make that more clear:

                            "In order to do that, you'd have to [contemplate (which is mental state A)] [the idea of the potential mental state B], before [the mental state B is actual]."
                            When you just say "think about it, before you think about it," that sounds contradictory. But when you dig deeper and realize that the two "think about it" clauses cannot refer to the same kind of thing, the apparent contradiction vanishes. Contemplating the idea of B is not the same as B being actual.
                            You got it all wrong. It's mental state A through and through. Let's say mental state A is the thought of ice cream for example. Then my OP would be, "You can't have a thought [about ice cream], about a thought [about ice cream], before you have a thought [about ice cream]." In other words, in a situation where the thought or the very idea of ice cream itself pops into your mind, you couldn't have controlled it because you couldn't have planned ahead to think about ice cream. That would require thinking about it before you think about it. It's just impossible. And even if you could somehow do this, then the first thought (when you thought about it before you thought about it) would face the same problem as the latter thought, and you'd get an infinite regress. Now you tried to say that we can have a thought about X and then "choose" to think about X again, but that doesn't fix the problem. Thinking about thinking about X in the future would itself be a thought that popped into your head that you had no control over.


                            So your other OP argument is "If our [mental states] have causes, what ever caused that can't be our will or our mind, because our [mental states] are our will and mind." You are complaining that that would be claiming the mental state is self-caused, and nothing can be self-caused. Now we can't interpret that very strictly, otherwise it would also rule out the possibility of determinism. We have to at least distinguish between successive states at different times. In determinism, a state at one time causes the 'next' state. The succession of states would be the will and mind, not any particular state. So the 'next' state wouldn't be self-caused. It would be caused by the prior state. So your complaint is that with LFW, the 'next' state isn't caused by the prior state, so what was it caused by?
                            I'm not sure I even follow you. I wasn't complaining that that would be claiming the mental state is self-caused, and nothing can be self-caused. I'm merely saying that if our will/thoughts/mental states have a cause it cannot be a cause we are conscious of, and so our mind would not be the causal factor, but rather the effect of a previous cause we have no control over.

                            To help understand this, first let's consider an agent that does not change, but LFW causes changes outside the agent. Indeed this is the traditional Christian conception of God. In this case there is no problem of self-cause. The mental state of the agent doesn't change. The agent, as an uncaused causer, only causes changes outside the agent. It is a mental event in the sense that the Uncaused Causer causes the external thing to happen. There is a causal event, but not a change of state of the Uncaused Causer.

                            So likewise we have no problem if a human agent's capacity for free will is a capacity only to affect things outside the agent's mind (external actions). Your argument says nothing against such a possibility.
                            This makes no sense at all. If an agent doesn't change, including its mental states, it will be causally impotent.

                            But we can go further. Suppose now that in the human's mind, we think of the faculty of the will as one part of the mind, distinct from the rest of the mind. So now, just as we can avoid self-causation if we talk of an agent only making external changes, the will could make changes external to itself, including the state of the rest of the mind, without any problem of self-causation.
                            See above.

                            It seems your complaint is that you are thinking that the will itself must change state (internal to the will) in the process of causing changes external to itself. But there's no reason to suppose that it must. It could itself be unchanging while causing changes only external to itself.
                            Totally incoherent in theory. Additionally, you know what disproves this? The fact that our mental states change.


                            And finally, I think we have to go back to the point that there must exist at least one uncaused cause. Thus the mere existence of an uncaused causer (and an action caused by it) cannot be contradictory. An agent's action must originate in some uncaused causer, which could possibly be anything. Why not the agent itself?
                            I don't think I've ever said the uncaused will/agent idea was contradictory, I said it wouldn't be LFW because you cannot control or have any influence over something that is uncaused.



                            • Originally posted by seer View Post
                              What are you on about? The part about the Matrix was to point out that that would not be irrational. And you are still holding a logically unjustified belief, no matter what follows.
                              Logically unjustified does not = logically contradictory. So are you trying to say that all beliefs or knowledge is equally unjustified? Are you trying to say my beliefs are just as justified as yours are? If so, explain why?


                              Again, your stopping point is arbitrary, instead of following an infinite regression of possible explanations you decided to stop where you did - why? Only because it seemed subjectively right to do so, and that choice was not based on logic. As the Trilema points out.
                              I stopped where I stopped because that is the minimal amount that I need to assume to make sense of the world. It is not an arbitrary stopping point. You on the other hand start with the assumption that the Bible is god's word, which is assuming your conclusion -- and something that by the way is demonstrably false.

                              Comment


                              • Originally posted by The Thinker View Post
                                We do have reason, because uncaused is random. Something uncaused cannot have a goal or intent affect it, because then it would be caused.
                                If you are defining "random" in that way, such that:

                                agent has cause & has purpose => not random
                                agent has cause & lacks purpose => not random
                                agent lacks cause & has purpose => random
                                agent lacks cause & lacks purpose => random

Then I have no problem with LFW choices being "random" in that sense. It would be merely tautological. Given that definition, a LFW actor could still act in an orderly, rational manner and accomplish goals. What else would I want?
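Joel's point about the definition being tautological can be made explicit with a toy sketch (mine, not from the thread): under Thinker's definition as laid out in the four cases above, "random" tracks only whether the act has a cause; purpose never enters into it.

```python
# Hypothetical formalization of the four-case definition above:
# an act counts as "random" exactly when it lacks a cause,
# regardless of whether it has a purpose.

def is_random(has_cause: bool, has_purpose: bool) -> bool:
    # Note: has_purpose is never consulted.
    return not has_cause

# The four cases from the table, as (has_cause, has_purpose, expected):
cases = [
    (True,  True,  False),  # caused & purposeful   -> not random
    (True,  False, False),  # caused & purposeless  -> not random
    (False, True,  True),   # uncaused & purposeful -> random
    (False, False, True),   # uncaused & purposeless -> random
]
for has_cause, has_purpose, expected in cases:
    assert is_random(has_cause, has_purpose) == expected
```

Since the definition never consults purpose, calling an uncaused-but-purposeful act "random" adds nothing beyond "uncaused", which is why accepting the label costs the LFW position nothing.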

                                Originally posted by Joel
                                In my model it is not the case that "t1,t2,t3 are all involuntary thoughts."
                                In my model, the agent does not choose before t2; the agent chooses at t2. Not beforehand.
                                In my model, the selection is caused by the agent. It is not a thing, such that the agent would need to control it. It is the agent exercising control. It is the very exercising of control (i.e., over which alternative is actualized). It is nothing but the actualizing/causing of the first effect in the new causal chain. And the agent did control which of the options was actualized.
                                That is not in any way LFW.
It is consistent with your three points in the OP: (1) the agent is in control of determining the resulting action, (2) the agent does actualize the selected action, and (3) in the same situation, the agent could have selected one of the other alternatives.

                                You say it's identical and indistinguishable from randomness.
I didn't say either of those things. I explicitly denied that they are identical (unless we use your tautological definition of "random"). And I made no claim about whether we are capable of distinguishing those two different things.

                                You cannot control something that is uncaused.
                                ...
                                I don't think I've ever said the uncaused will/agent idea was contradictory, I said it wouldn't be LFW because you cannot control or have any influence over something that is uncaused.
                                The only thing the agent need control is the agent's action. And the agent does control the action. The action is not uncaused. It is caused by the agent.

                                Originally posted by Joel
                                Originally posted by Thinker
                                Originally posted by Joel
                                They are not caused by events at t1, but that does not mean they have no other relation to the events at t1. Indeed, the fact that the agent (at t2) selects among the alternative ideas that were being thought at t1 (up to time t2), implies a necessary, but not causal, relationship between t1 and t2.
                                It implies a causal relationship.
                                How so?
Because either something is caused or it is not. If it is not, it won't have any relationship with what's before it other than pure chance.
                                That doesn't follow. Let's suppose for the sake of argument that the selection at t2 were 'random', such that it were uncaused. That doesn't imply that no other relation exists. On the contrary, there necessarily is at least one relationship: the selected option must be one of the options from which the selection is made!

                                Originally posted by Joel
                                But let me try to continue interpreting your OP argument using your definition:

                                You continued saying: "You can't choose what your next thought, desire, or idea will be. In order to do that, you'd have to think about it, before you think about it. That's incoherent."

                                If I try substituting your definition of "thought", the error in your reasoning becomes apparent: "In order to do that, you'd have to [have a mental state A] about [the mental state B], before you [have the mental state B itself]."

                                Or if I might try to make that more clear:

                                "In order to do that, you'd have to [contemplate (which is mental state A)] [the idea of the potential mental state B], before [the mental state B is actual]."
                                When you just say "think about it, before you think about it," that sounds contradictory. But when you dig deeper and realize that the two "think about it" clauses cannot refer to the same kind of thing, the apparent contradiction vanishes. Contemplating the idea of B is not the same as B being actual.
                                You got it all wrong. It's mental state A through and through. Let's say mental state A is the thought of ice cream for example. Then my OP would be, "You can't have a thought [about ice cream], about a thought [about ice cream], before you have a thought [about ice cream]."
                                That doesn't work. There are three instances of "thought" in the sentence. The first instance is explicitly a thought about the second instance. If the second is a thought about ice cream, then the first would have to be a thought about the thought about ice cream, and thus cannot be the same thing. You have to choose. Is the first instance a thought about ice cream, or a thought about the thought about ice cream?

                                And my comments do eliminate any problem. A person could deliberate about whether to contemplate the idea of ice cream, by contemplating the idea of contemplating the idea of ice cream, which is not the same as actually contemplating the idea of ice cream.

                                (This is in addition to my other ways of eliminating any problem: E.g. one can contemplate the idea of ice cream at t1 when deliberating about whether to contemplate the idea of ice cream at t2. No contradiction. Or e.g. one can contemplate just an abstract idea about ice cream while deliberating about whether to contemplate concrete idea(s) of ice cream. Any of these eliminates any supposed contradiction, because in each case there is a difference between the earlier contemplation and the later contemplation, so there is no contradiction in contemplating the earlier idea when planning ahead to contemplate the later idea.)
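The type distinction Joel is drawing (a thought *about* a thought is a different mental state from the thought itself) can be sketched in a few lines of illustrative code. This is my own toy model, not anything from the thread; the `Thought` class and its names are hypothetical.

```python
# Toy model: a mental state is represented as a Thought whose content
# is either a plain topic or another Thought (a higher-order thought).

from dataclasses import dataclass

@dataclass(frozen=True)
class Thought:
    content: object  # a topic string, or another Thought

ice_cream = Thought("ice cream")        # mental state B: a thought about ice cream
about_ice_cream = Thought(ice_cream)    # mental state A: a thought about B

# A and B are distinct states, not one state preceding itself:
assert ice_cream != about_ice_cream
assert about_ice_cream.content == ice_cream
```

Because A and B are distinct objects, "thinking about it before you think about it" describes A occurring before B, which involves no contradiction.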

                                This makes no sense at all. If an agent doesn't change, including its mental states, it will be causally impotent.
                                Says you.

                                If you are correct, then the original Uncaused Causer of the universe must have changed its state in the process of causing the first effect, in which case it follows that there is nothing impossible about an uncaused causer changing its own state in the process. (And thus the objection I was addressing vanishes.)


                                Originally posted by Joel
                                It seems your complaint is that you are thinking that the will itself must change state (internal to the will) in the process of causing changes external to itself. But there's no reason to suppose that it must. It could itself be unchanging while causing changes only external to itself. [including the state of the rest of the mind]
                                Totally incoherent in theory.
                                That's not an argument.

                                Additionally, you know what disproves this? The fact that our mental states change.
By hypothesis, I was discussing the possibility of one part of the mind (the will) changing the state of another part of the mind, which would be a change in mental state. So a change in mental state can't disprove the hypothesis.
