Juarrero, Dynamics in Action, Chapter 13

Juarrero, A. (1999). Threading an agent’s control loop through the environment (Chapter 13). Dynamics in Action: Intentional Behavior as a Complex System (pp. 195–213). Cambridge, MA: The MIT Press.

This chapter can be summed up with the question, “How do we decide to do what we intend to do?” (p. 196). Juarrero rules out AI systems that work through the entire solution space (these rely too much on processing with “brute speed”) and expert systems, in which all the relevant rules and contingencies have to be programmed into the system in advance.

The first question is whether intentions are meanings in the head. Juarrero cites Quine and Putnam (“meanings just ain’t in the head”) for the conception of meaning as entangled with the agent’s context. (But I wonder whether Quine and Putnam were thinking of the type of claim that Juarrero makes, that meaning “is now thought to be intimately associated with how the word was learned, its causal history,” etc., as opposed to the reference theory of meaning, in which meaning connects the word to its referent.) Juarrero argues that “intentions ain’t just in the head either”: “agents effectively import the environment into their internal dynamics” (p. 197). Thus, for example, a child who knows how to reach for an object does not need to think about the steps required to reach; she just reaches. But the environment can require intentional adjustments, for example if she is lying down as opposed to sitting up or standing.

The chapter relies centrally on a metaphor for explaining how intentions are formed: a cognitive decision space is conceived of as a three-dimensional topography, with the vertical dimension representing the specificity of an intention; the lower one is in the topography, the more definite the intention. Within the topography are “basins,” where the depth of a basin indicates the extent to which a definite intention has formed (an increase in its probability and a decrease in the probability of other intentions) and its breadth corresponds to the extent to which it is “multiply realizable” and “can delegate to its external structure” (p. 199). The basins represent “attractors,” which I interpreted as constraints guiding the agent toward some intention; these include physical constraints, affective/emotional constraints, and others.

Perhaps because of my earlier visualization of the sphere rolling down the plank, I kept visualizing the agent’s intention-potential as a metallic sphere, a shiny ball bearing, “dropped” from some height and pulled by gravity into some local minimum, where it drives an intention. Her application of the concept/metaphor is more complicated, though, as we shall see.
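To make the ball-bearing picture concrete, here is a minimal sketch of my own (not Juarrero’s formalism; the double-well potential and every parameter are arbitrary illustrations): a ball “dropped” onto a one-dimensional landscape follows the local slope into whichever basin captures it.

```python
def potential(x):
    # Double-well landscape: basins (local minima) at x = -1 and x = +1,
    # separated by a ridge at x = 0. Depth and breadth here are arbitrary
    # stand-ins for how definite and how "multiply realizable" an
    # intention is.
    return (x**2 - 1)**2

def slope(f, x, h=1e-6):
    # Numerical derivative: the local gradient the ball rolls down.
    return (f(x + h) - f(x - h)) / (2 * h)

def settle(x0, step=0.01, tol=1e-9, max_iter=100_000):
    # "Drop" the ball bearing at x0 and let it follow the slope
    # until it comes to rest in the nearest basin.
    x = x0
    for _ in range(max_iter):
        dx = -step * slope(potential, x)
        if abs(dx) < tol:
            break
        x += dx
    return x

# Dropped on either side of the ridge, the ball ends in different basins.
print(round(settle(-0.3), 3))   # ~ -1.0
print(round(settle(+0.3), 3))   # ~ +1.0
```

As I read the metaphor, the point of the toy is that no separate act of “deciding” appears anywhere: the starting position plus the shape of the landscape select the outcome.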

Juarrero argues that we usually don’t actually have to decide to do what we intend to do. Our emotions (and other mental states) and our environment function as context-sensitive constraints that decrease the probability of certain motor actions and increase the probability of others. “As a result, context-sensitive constraints sculpt a chute that progressively and automatically narrows until it terminates in one actual behavior” (p. 200).

But my steel ball never seems to “settle” in any basin; it’s worth quoting Juarrero here:

Cleanup units (that is, context-sensitive feedback) “cause” actions by altering probability distributions in real time. Throughout an action’s performance, cleanup (recurrent) circuits take care of fine-graining the details in real time as the behavior proceeds downstream by stacking the odds against alternative behavior. As a result, context-sensitive constraints sculpt a chute that progressively and automatically narrows until it terminates in one actual behavior. (p. 200).

Thus, this “termination” in “one actual behavior” is just a sort of local minimum for the ball bearing; the basin or chute in which it finds itself represents all the constraints that have acted on the agent in the past, and this basin in turn represents a new topography, down through which the agent will continue to travel in the same manner, so long as it exists.

[S]uppose, upon noticing a child in distress I form the vague, unspecific prior intention “do something to help that child”: a very broad and shallow valley self-organizes in my mental landscape. In self-organizing a region of cognitive space, prior intentions… exclude some options from future consideration. That is, forming the prior intention “do something to help that child” eliminates not doing anything to help that child as a future possibility. The coordinate for that alternative drops out of my phase space, as does a chute that terminates on that coordinate. (p. 200)

In this context, the agent’s intention need not be specific for her particular “act-token” to be intentional. Once she decides to help the child, the act-token, or actual behavior in which the intention takes shape, is determined by the “interaction between [this] vague prior intention and the ‘lay of the land’” (p. 201).
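One toy way to picture this chute-narrowing (my caricature, not anything in the text; the act names and weights are made up): treat the candidate act-tokens as a probability distribution, let the vague prior intention zero out the “do nothing” coordinate, and let each context-sensitive constraint multiplicatively reweight what remains.

```python
def normalize(dist):
    # Rescale so the probabilities sum to 1.
    total = sum(dist.values())
    return {act: p / total for act, p in dist.items()}

# Candidate act-tokens. Forming the vague prior intention "do something
# to help that child" zeroes out the "do nothing" coordinate entirely:
# that option drops out of the phase space.
acts = normalize({
    "do nothing": 0.0,
    "call for help": 1.0,
    "run to the child": 1.0,
    "comfort verbally": 1.0,
})

# Each context-sensitive constraint multiplicatively stacks the odds
# for or against the remaining alternatives (weights are invented).
constraints = [
    {"call for help": 0.5, "run to the child": 2.0},   # the child is close by
    {"call for help": 0.2, "comfort verbally": 0.5},   # no one else around
]

dist = acts
for c in constraints:
    dist = normalize({act: p * c.get(act, 1.0) for act, p in dist.items()})
    print(dist)
# The distribution progressively narrows (the "chute") until one
# act-token dominates.
```

Each pass of the loop plays the role Juarrero assigns to a cleanup circuit: it alters the probability distribution in real time until one behavior wins out.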

Given the length of this review already, I’ll try to summarize some of the uses to which Juarrero puts this concept of intention leading to act-tokens.

  • She contrasts “basic acts,” which correspond to shallow basins in the topography, with “proximate intentions,” which correspond to “deeper, narrower valley[s]” (p. 202).
  • She discusses how an intentional basic act allows attribution of intention to its necessary consequences, including the bombardier whose intention to destroy a city consequently includes an intention to kill the civilians in it (pp. 203-205).
  • Similarly, intention can be attributed to a basic act as a result of an intention toward its consequence. For example, my intention to turn off the light gives intention to the act-token of flipping the switch, even if I flip it unconsciously.
  • Juarrero discusses the philosophical problem of whether “I intend to A” implies “I will A” (pp. 207-208). She concludes “no,” on the grounds that an intention can later be overwhelmed by other attractors.
  • She discusses some examples in the context of an intention to get milk on the way home and whether to drive or take the bus home (pp. 209-211).
  • She acknowledges the possibility of error in this system, citing McClamrock: “running the control loop through the environment is a fast and efficient but sloppy strategy. But then, nature selects for resilience, not meticulousness” (p. 211).
  • Finally, Juarrero provides a nice summary (pp. 211-213), explaining how her approach relates to behaviorism, contemporary action philosophy, and cognitive science.

BNL questions/comments:

  • I struggled some with the text at pp. 144-145. I don’t really understand the balance of entropy that Juarrero argues the system maintains. It appears that she claims top-down constraints locally increase order, while bottom-up “enabling contextual constraints… renew message variety.”

Interesting concepts/terms/oppositions that are defined or explored in this text:

  • Aristotle’s formal and final causes, which I take to mean descriptions of a thing’s present material form and of its optimal future form (e.g., for an acorn, the final cause might be a full-grown oak) (p. 131).
  • (inert) epiphenomenon, as used at p. 131. I read this in the psychological sense of epiphenomenon, a phenomenon that cannot affect primary phenomena but can be affected by them, rather than as a synonym for “emergent effect” in complex systems theory.
  • Autocatalysis, which I understand to mean a group of reactions that produces its own catalyst(s) (p. 131); a toy numerical sketch follows below.
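On that last item, a minimal numerical sketch of my own (not from the text; rate constant and concentrations are arbitrary): in the schematic reaction A + X → 2X, the product X catalyzes its own production, so growth accelerates until the substrate A runs out, giving the characteristic sigmoid curve.

```python
# Toy Euler integration of the autocatalytic reaction A + X -> 2X
# (my illustration; all numbers are arbitrary).

k = 1.0       # rate constant (arbitrary units)
total = 1.0   # conserved total concentration [A] + [X]
x = 0.01      # trace seed of the autocatalyst X
dt = 0.01     # time step

for step in range(1501):
    if step % 300 == 0:
        print(f"t = {step * dt:5.2f}   [X] = {x:.4f}")
    # Rate law: d[X]/dt = k [A][X] = k (total - x) x
    x += dt * k * (total - x) * x
```

Because X must already be present for more X to be produced, the curve starts slowly, accelerates, and then saturates as A is exhausted.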
