Moral agency


Moral agency is an individual's ability to make moral choices based on some notion of right and wrong and to be held accountable for these actions.[1] A moral agent is "a being who is capable of acting with reference to right and wrong."[2]

Development and analysis


Most philosophers suggest only rational beings, who can reason and form self-interested judgments, are capable of being moral agents. Some suggest those with limited rationality (for example, people who are mildly mentally disabled or infants[1]) also have some basic moral capabilities.[3]

Determinists argue that all of our actions are the product of antecedent causes, and some believe this is incompatible with free will, claiming that we therefore have no real control over our actions. Immanuel Kant argued that, whether or not our real self, the noumenal self, can choose, we have no choice but to believe that we choose freely when we make a choice. This does not mean that we can control the effects of our actions. Some indeterminists would argue that we have no free will either: if, with respect to human behavior, a so-called 'cause' results in an indeterminate number of possible 'effects', that does not mean the person had the free-thinking, independent will to choose a particular 'effect'. More likely, it was the indeterminate consequence of their chance genetics, chance experiences and chance circumstances at the time of the 'cause'.

In Kant's philosophy, this calls for an act of faith: the free agent is grounded in something a priori, as yet unknown, or immaterial. Otherwise, without such an a priori foundation for free agency, socially essential concepts created by the human mind, such as justice, would be undermined (responsibility implies freedom of choice) and, in short, civilization and human values would crumble.

It is useful to compare the idea of moral agency with the legal doctrine of mens rea, meaning "guilty mind", which holds that a person is legally responsible for what they do as long as they know what they are doing and their choices are deliberate. Some theorists discard any attempt to evaluate mental states and instead adopt the doctrine of strict liability, whereby one is liable under the law without regard to capacity, and the only remaining question is the degree of punishment, if any. Moral determinists would most likely adopt a similar point of view.

Psychologist Albert Bandura has observed that moral agents engage in selective moral disengagement with regard to their own inhumane conduct.[4]

Distinction between moral agents and moral patients


Moral agents are entities whose actions are eligible for moral consideration. An example would be a child who is old enough to understand right from wrong yet hits a sibling when angry. The act of hitting is eligible for moral consideration because the child is old enough to weigh whether it is the correct action to take and to judge the morality of their behavior.[5]

Moral patients are entities that are themselves eligible for moral consideration. An example would be a child who does not yet know how to determine right from wrong. Such a child is eligible for moral consideration by others because those around them recognize that the child cannot grasp the consequences of their actions and is therefore unable, due to developmental barriers, to understand the morality of a situation.[5]

Many philosophers, such as Immanuel Kant, view morality as a transaction among rational parties, i.e., among moral agents. In his work on Kant's moral theory, Richard Dean discusses how agents who are able to control their tendencies or drives can remain unbiased as they determine the path of moral action. This capacity for self-control is called moral commitment. Agents must master this control in order to declare something moral or immoral and to retain credibility.[6] For this reason, they would exclude other animals from moral consideration.

Utilitarian philosophers Jeremy Bentham and Peter Singer have argued that the key to inclusion in the moral community is not rationality, for if it were, we might have to exclude some disabled people and infants, and might also have to distinguish between degrees of rationality among healthy adults; rather, the real object of moral action is the avoidance of suffering.[7][8] An example of this is the abortion debate. Further examples can be taken from the argument from marginal cases.

Artificial moral agents


Discussions of artificial moral agency center on a few main ideas. The first concerns whether it is possible for an artificial system to be a moral agent (see artificial systems and moral responsibility). The second concerns efforts to construct machines with ethically significant behaviors (see machine ethics). Finally, there is debate about whether robots should be constructed as moral agents.

Research has shown that humans perceive robots as having varying degrees of moral agency. These perceptions can manifest in two distinct ways: (1) ideas about a robot's moral capacity (its ability to be or do good or bad) and (2) ideas about its dependence on or independence from programming (where high dependency equates to low agency).[9]

Research suggests that the moral judgment of an action may not depend on whether the agent is a human or a robot. However, robots are rarely given credit for acting well, and must behave more consistently well to be trusted.[10]

The creation of a robot or "social machine" with the capability of understanding morality and agency has not yet been accomplished, but a machine with those capabilities could potentially be created in the future.[11]

Non-human animals


Discussion of moral agency in non-human animals involves both debate about the nature of morality and about the capacities and behavior of human and non-human animals.[12][13] Thinkers who agree about the nature, behavior and abilities of different species may still disagree about which capacities are important for moral agency or about the significance of particular behaviors in determining moral agency.[14] Since moral agents are often thought to warrant particular moral consideration, this discussion is sometimes linked to debates in animal rights about practices involving non-human animals.[15]

Studies of animal biology and behavior have provided strong evidence of complex social structures and behavioral norms in non-human species. There is also evidence that some non-human species, especially other primates, can demonstrate empathy and emotions such as guilt or grief, though some thinkers dispute this.[16][17] However, humans display distinctive capacities related to intelligence and rationality, such as the ability to engage in abstract and symbolic thought and to employ complex language.[18]

Philosophers and biologists who claim that non-human animals are moral agents typically argue that moral agency is dependent on empathy or social relations, and stress the evidence for these in non-human animals.[19] They may also point out behaviors which in humans are described as moral activities, such as the punishment of individuals who break social norms. Some thinkers suggest that there are a variety of types or levels of moral agency which vary by species, or that animals may act morally without being full moral agents.[20][21]

Thinkers who hold that only humans can be moral agents typically argue that moral agency depends on rationality. They highlight distinctive human abilities and the unique complexity of human behavior. They argue that shared behaviors such as the punishment of wrongdoers are nevertheless underpinned by very different internal processes, meaning that these behaviors qualify as moral activity for humans but not for non-humans.[22]


Notes

  1. ^ a b Taylor, Angus (2003). Animals & Ethics: An Overview of the Philosophical Debate. Peterborough, Ontario: Broadview Press. p. 20.
  2. ^ "Moral," Archived 2015-09-08 at the Wayback Machine Websters Revised Unabridged Dictionary, 1913, p. 943.
  3. ^ Hargrove, Eugene C., ed. (1992). The Animal Rights, Environmental Ethics Debate: The Environmental Perspective. Albany: State Univ. of New York Press. pp. 3–4. ISBN 978-0-7914-0933-6.
  4. ^ Bandura, Albert (June 2002). "Selective Moral Disengagement in the Exercise of Moral Agency". Journal of Moral Education. 31 (2): 101–119. CiteSeerX 10.1.1.473.2026. doi:10.1080/0305724022014322. S2CID 146449693.
  5. ^ a b Gray, Kurt; Wegner, Daniel M. (March 2009). "Moral typecasting: Divergent perceptions of moral agents and moral patients". Journal of Personality and Social Psychology. 96 (3): 505–520. doi:10.1037/a0013748. ISSN 1939-1315. PMID 19254100.
  6. ^ Wolemonwu, Victor Chidi (2020-06-01). "Richard Dean: The Value of Humanity in Kant's Moral Theory". Medicine, Health Care and Philosophy. 23 (2): 221–226. doi:10.1007/s11019-019-09926-2. ISSN 1572-8633. PMC 7260255. PMID 31571029.
  7. ^ "Utilitarianism, Act and Rule | Internet Encyclopedia of Philosophy". Retrieved 2024-03-20.
  8. ^ Singer, Peter (1972). "Famine, Affluence, and Morality". Philosophy & Public Affairs. 1 (3): 229–243. ISSN 0048-3915. JSTOR 2265052.
  9. ^ Banks, Jaime (2019-01-01). "A perceived moral agency scale: Development and validation of a metric for humans and social machines". Computers in Human Behavior. 90: 363–371. doi:10.1016/j.chb.2018.08.028. ISSN 0747-5632. S2CID 53783430.
  10. ^ Banks, Jaime (2020-09-10). "Good Robots, Bad Robots: Morally Valenced Behavior Effects on Perceived Mind, Morality, and Trust". International Journal of Social Robotics. 13 (8): 2021–2038. doi:10.1007/s12369-020-00692-3. hdl:2346/89911. ISSN 1875-4805.
  11. ^ Banks, Jaime (18 November 2018). Computers in Human Behavior. Vol. 90. Science Direct (published January 2019). pp. 363–371.
  12. ^ Johannsen, Kyle (1 January 2019). "Are some animals also moral agents?". Animal Sentience. 3 (23). doi:10.51291/2377-7478.1404. S2CID 159071494. Retrieved 30 April 2022.
  13. ^ Behdadi, Dorna (May 2021). "A Practice-Focused Case for Animal Moral Agency". Journal of Applied Philosophy. 38 (2): 226–243. doi:10.1111/japp.12486. S2CID 229471000.
  14. ^ Willows, Adam M.; Baynes-Rock, Marcus (December 2018). "Two Perspectives on Animal Morality" (PDF). Zygon. 53 (4): 953–970. doi:10.1111/zygo.12464. S2CID 150204045.
  15. ^ Monsó, Susana; Benz-Schwarzburg, Judith; Bremhorst, Annika (1 December 2018). "Animal Morality: What It Means and Why It Matters". The Journal of Ethics. 22 (3): 283–310. doi:10.1007/s10892-018-9275-3. ISSN 1572-8609. PMC 6404642. PMID 30930677.
  16. ^ Waal, F. B. M. de (2016). Primates and Philosophers: How Morality Evolved. Princeton, N.J. ISBN 9780691169163.
  17. ^ Tomasello, Michael (2016). A Natural History of Human Morality. Cambridge, Mass. ISBN 9780674088641.
  18. ^ Tse, Peter Ulric (2008–2014). "Symbolic thought and the evolution of human morality". In Sinnott-Armstrong, Walter (ed.). Moral Psychology Volume 1: The Evolution of Morality: Adaptations and Innateness. Cambridge, Mass.: MIT Press. pp. 269–297. ISBN 978-0262693547.
  19. ^ Clement, Grace (1 April 2013). "Animals and Moral Agency: The Recent Debate and Its Implications". Journal of Animal Ethics. 3 (1): 1–14. doi:10.5406/janimalethics.3.1.0001.
  20. ^ Rowlands, Mark (2012). Can animals be moral?. Oxford: Oxford University Press. ISBN 9780199842001.
  21. ^ Back, Youngsun (3 April 2018). "Are animals moral?: Zhu Xi and Jeong Yakyong's views on nonhuman animals". Asian Philosophy. 28 (2): 97–116. doi:10.1080/09552367.2018.1453234. S2CID 171543787.
  22. ^ Korsgaard, Christine M. (2010). Reflections on the Evolution of Morality (PDF). The Amherst Lecture in Philosophy. Department of Philosophy, Amherst College.
