Cheng-hung Tsai (Institute of European and American Studies, Academia Sinica)
Forthcoming in AI & Society.
DOI: 10.1007/s00146-020-00949-5

Abstract: Human excellences such as intelligence, morality, and consciousness are investigated by philosophers as well as artificial intelligence researchers. One excellence that has not been widely discussed by AI researchers is practical wisdom, the highest human excellence, or the highest, seventh, stage in Dreyfus’s model of skill acquisition. In this paper, I explain why artificial wisdom matters and how artificial wisdom is possible (in principle and in practice) by responding to two philosophical challenges to building artificial wisdom systems. The result is a conceptual framework that guides future research on creating artificial wisdom.
Keywords: practical wisdom; artificial narrow intelligence; artificial general intelligence; specificationism; well-being
1. Introduction
Human excellences such as intelligence, morality, and consciousness are investigated by philosophers as well as artificial intelligence (AI) researchers. AI researchers ask whether it is possible to create artificial superintelligence (Bostrom 2014), artificial morality (Moor 2006; Wallach and Allen 2008; Anderson and Anderson 2011; Leben 2019), and artificial consciousness (Gamez 2008; Reggia 2013), among others. One excellence that has not been widely discussed by AI researchers is practical wisdom, the highest human excellence, or the highest, seventh, stage in Dreyfus’s model of skill acquisition (Dreyfus 2001). If AI researchers aim to build machines that mimic humans in almost every aspect of human excellence, then practical wisdom is likely to be of the utmost interest. Some researchers (Goertzel 2008; Casacuberta 2013; Marsh et al. 2016; Kim and Mejia 2019) have tried to develop artificial wisdom (AW) systems, aiming to “design computational systems that can model at least some relevant aspects of human wisdom” (Casacuberta 2013: 199), or to “[explore] how the very human notion of Wisdom can be incorporated in the different behavior and ultimately reasonings of our computational systems” (Marsh et al. 2016: 137).1 However, philosophical challenges to building AW systems have emerged, purporting to show that AW is impossible either in principle or in practice. In this paper, I shall examine two philosophical challenges (sections 3 and 5) and offer responses to them (sections 4 and 6). The result is a conceptual framework that guides future research on creating artificial wisdom. Before examining and responding to the challenges, I explain why AW matters.
2. Why AW Matters
There are several reasons or motivations for building AW systems, and I classify them as epistemic, survival, and practical. The first, epistemic, reason is simple but powerful: human curiosity. We wonder whether a computational system can replicate various aspects of human excellence: Can it be intelligent? Can it be moral? Can it be conscious? And can it be wise? Further, if a computational system can be wise, we may wonder how powerful an artificial wisdom system can be; in particular, can it be wiser than human beings?
The second motivation for building AW systems is to secure our own survival. If “evil” superintelligence is possible, then AW can be one way, and maybe the best way, to ensure the survival of the human race. According to Nick Bostrom, superintelligence is “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills” (2006: 11), or “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” (2014: 22). For Bostrom, the creation of artificial superintelligence (ASI) might create an existential risk that “threatens to cause the extinction of Earth-originating intelligent life or to otherwise permanently and drastically destroy its potential for future desirable development” (2014: 115). ASI in itself might not be morally good or bad. However, from our perspective, ASI can be morally bad or evil if it uses resources, including human beings, to achieve a certain goal that is harmful to us. For example, imagine that an “AI, designed to manage production in a factory, is given the final goal of maximizing the manufacture of paperclips, and proceeds by converting first the Earth and then increasingly large chunks of the observable universe into paperclips” (Bostrom 2014: 123).2 There are two possible scenarios in which such existential risk might not emerge, both of which are related to AW. First, if the evolution or development from AI to ASI must go through the stage of AW, then it is possible that AW foresees the outcome (i.e., the existential catastrophe) and, by its very nature (i.e., wisdom), decides not to evolve or develop itself any further beyond the stage of wisdom. Second, if ASI cannot be avoided at all, then we might receive advice from AW, designed as a rival system to evil ASI, about how we can respond. After all, AW is anti-wicked3 and wiser than the human species.
The third motivation for creating AW is practical: AW, if successfully created, can provide guidance on what to do in our real and complicated world. I shall elaborate this point by comparing AW with artificial moral agents (AMAs)4 along three dimensions. First, unlike AMAs, whose domain of application is just morality, AW has a wider domain of application. AW is able to address complex decisions in the domains of morality, marriage, career, parenting, aging, illness, poverty, homelessness, crime, and so on.5 That is, AW can address most practical life and societal issues, which go beyond moral issues. (Admittedly, some life or societal issues, such as abortion, might involve moral issues, but others, such as career choice, might not, at least not obviously.) Second, even if the domain is limited to morality, AW systems can do better than AMAs because AW’s moral decisions are more humanizing. An AMA’s morality might be different from human morality, as noted by Gordon: “human morality is somewhat different from machine ethics, because human beings commonly make moral decisions against the background of the more foundational question of how to live a good life. Machines, on the other hand, can be expected to solve the moral problem at hand independently of any considerations of personal meaning, fulfilment or happiness” (Gordon 2019: 14; emphasis mine). The crucial component of humanizing moral decisions is the consideration of well-being. An AW is morally or humanly superior to an AMA because the former can, by default, consider moral issues against the background of human well-being. Third, AMAs cannot address value conflicts and deep disagreements, whereas AW can. AMAs can handle the moral task of telling right from wrong, but they cannot resolve a conflict between two moral goods or values, which is believed to be the task of practical wisdom: the “most common strategy in the face of worries about choices between incommensurable values is to appeal to practical wisdom—the faculty described by Aristotle—a faculty of judgment that the wise and virtuous person has, which enables him to see the right answer” (Mason 2018).6
3. Challenge I: AW Is Impossible in Principle
Although there are motivations for creating AW, there are also challenges to achieving it, and the two challenges discussed in what follows are both philosophical. The first challenge claims that AW is impossible in principle because of its intrinsic nature. The main reason behind this challenge is the crucial distinction in philosophy between practical wisdom (or phronesis) and practical intelligence (or something equivalent or very close to it, such as cleverness, skill, expertise, or instrumental rationality): the practical reasoning involved in wisdom differs in a crucial respect from the practical reasoning involved in intelligence. John Hacker-Wright explains the difference as follows: “the distinctness of practical wisdom, as opposed to cleverness or instrumental rationality, lies in getting aims right, rather than reasoning well with a view to fulfilling our pre-existing aims” (2015: 984). Matt Stichter similarly explains the difference as follows: “skills are properly classified as mere expressions of instrumental rationality, while practical wisdom requires value rationality—not just selecting the correct means, but reasoning correctly about what ends to follow” (2016: 445). The practical reasoning involved in intelligence is instrumentalist, which Elijah Millgram characterizes well: “The received view in this area [of practical reasoning] is instrumentalism, which has it that all practical reasoning is means-end reasoning. That is, it holds that practical reasoning consists in finding ways to attain one’s goals or ends… [T]here is no such thing as reasoning about what one’s ultimate or primary or final ends…should be in the first place” (2008: 732). So construed, an agent or a system, if it is intelligent simpliciter, cannot be wise.7
Given the distinction between wisdom and intelligence, it is not difficult to see why AW is impossible in principle. AW is, at its core, intelligent, and intelligence, whether human or artificial, is instrumentalist in character: it is constituted by means-end reasoning, which concerns only the means to a given goal, not the goal itself. In contrast, wisdom, whether human or artificial, should go beyond mere means-end reasoning; an agent with practical wisdom is able to deliberate well about final goals. Because intelligence lacks what is crucial to being wise, AW, as a kind of intelligence, cannot be wise. Let me formulate the first challenge in the form of an argument as follows:
(P1) An agent is genuinely wise only if the agent can deliberate about the final goal of the domain in which the agent is situated.
(P2) An intelligent agent cannot deliberate about the final goal of the domain in which the agent is situated.
(C1) An intelligent agent cannot be genuinely wise.
(P3) An AW is, at its core, intelligent.
(C2) An AW cannot be genuinely wise.
Call this the argument against AW (AAAW). If this argument is sound, then AW exists in name only. It is doubtful whether such AW can achieve what we hope it can.
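For clarity, the inferential structure of AAAW can also be displayed in standard predicate-logic notation. This is my own informal rendering, with W, I, and D abbreviating “is genuinely wise”, “is intelligent”, and “can deliberate about the final goal of its domain”, and a standing for an arbitrary AW system:

```latex
\begin{align*}
&\text{(P1)} && \forall x\,\bigl(W(x) \rightarrow D(x)\bigr) \\
&\text{(P2)} && \forall x\,\bigl(I(x) \rightarrow \neg D(x)\bigr) \\
&\text{(C1)} && \forall x\,\bigl(I(x) \rightarrow \neg W(x)\bigr) && \text{from P1 and P2} \\
&\text{(P3)} && I(a) \\
&\text{(C2)} && \neg W(a) && \text{from C1 and P3}
\end{align*}
```

The rendering makes explicit that the argument is valid: C1 follows from P1 and P2, and C2 follows from C1 and P3, so the only question is whether the premises are true.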
4. How AW Is Possible in Principle
How can AW proponents respond to AAAW? After all, P1, P2, and P3 seem to be conceptually true; that is, they seem to state, respectively, our understanding of wisdom, intelligence, and artificial intelligence. In this section, I consider two possible responses to AAAW, both of which try to show how it is possible for an AW to deliberate about the final goal.
4.1 From ANI to AGI
The first possible response begins by observing that AAAW is sound only if the term “intelligent” used in P2 and P3 is understood as “narrowly intelligent”. In AI research, there is a common distinction between weak AI and strong AI,8 or between artificial narrow intelligence (ANI) and artificial general intelligence (AGI). According to Pennachin and Goertzel’s characterization, ANIs are “programs that demonstrate intelligence in one or another specialized area, such as chess-playing, medical diagnosis, automobile-driving, algebraic calculation or mathematical theorem-proving” (2007: 1), whereas AGI is a program “that can solve a variety of complex problems in a variety of different domains” (2007: 1), or a program that has “the ability to solve general problems in a non-domain-restricted way, in the same sense that a human can” (2007: 7). With this distinction in hand, P2 can be clarified as follows: “A narrowly intelligent agent cannot deliberate about the final goal of the domain in which the agent is situated”, and P3 can be clarified as: “An AW is, at its core, merely narrowly intelligent”. On this response, such a clarification does not show that AAAW is unsound, but it does indicate a way of escaping AAAW: What if AW is, at bottom, not narrowly intelligent?
AW proponents can thus conceive of AW as AGI. Let us consider this possibility. If an AW is at its core an AGI, does this suggest that such an AW can be genuinely wise? The answer depends on whether an AGI can deliberate about the final goal of the domain in which it is situated, that is, human well-being. However, in this respect, AGI is no better than ANI. That an AGI has a general ability to solve a variety of complex problems in a variety of different domains does not suggest that it can deliberate about the goal of the domain with which it copes. An AGI can drive cars, heal patients, solve mathematical problems, and so on; but that it can do so many things does not mean that it can deliberate about the goals of driving, healing, or problem-solving, let alone deliberate about the goal of human life in general.
4.2 From Instrumentalism to Specificationism
Let us turn to the second possible response, which is the one I favor. The above strategy for responding to AAAW is based on the idea that there are varieties of (artificial) intelligence. There is another strategy, based on the idea that there are varieties of practical reasoning. Consider this question: Is instrumentalism the only kind of practical reasoning that AI or AW can employ? In fact, philosophy offers an alternative to instrumentalism, namely specificationism, according to which “[d]eliberation consists… not, or not only, in determining what would be a means to one’s already given ends, but in coming to understand what would constitute realizing a vaguely specified end, such as happiness, having an entertaining evening, a good constitution for the body politic, or a cure for an illness” (Millgram 1997: 135). Why specificationism? As Millgram explains nicely with an example, “because many of the ends we have are not the kind of thing to which it yet makes sense to look for means. If what I want is to write a very good paper, I am not yet in a position to do anything about it; I must first settle on a much more definite conception of what sort of paper it is I wish to write” (Millgram 2005: 299-300).
Explaining practical wisdom (phronesis) through specificationism is not new in the literature. For example, Daniel Russell suggests that “we can understand phronesis as an excellence of deliberation concerned with the very specification of determinable but as yet indeterminate ends” (Russell 2009: 11). However, explaining, or at least highlighting, practical intelligence through specificationism is relatively new in the philosophy literature and has recently been elaborated in Tsai (2019). A crucial implication of this move is that wisdom and intelligence are not as distinct as originally thought with regard to the form of practical reasoning.
On the specificationist view, the second premise, P2, in AAAW is not necessarily true. An intelligent agent can deliberate about the final goal of the domain in which the agent is situated, especially when the goal is vaguely or broadly construed. The intelligent agent is capable of deliberating about and choosing among possible specifications of the vaguely construed goal. Owing to this capacity, an intelligent agent has the potential to be genuinely wise. Given what I have argued, an AW, whether it is at its core ANI or AGI, can be, at least potentially, genuinely wise once it is programmed with the specificationist form of practical reasoning.
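To make the contrast concrete for programmers, here is a minimal Python sketch, offered only as an illustration under my own assumptions (all names, such as `Specification` and `plan_means`, are hypothetical), of the difference between an instrumentalist planner, which reasons only about means to a fixed goal, and a specificationist deliberator, which first settles on a concrete specification of a vaguely given end before planning for it:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Specification:
    """One concrete way of filling in a vaguely specified end (hypothetical)."""
    description: str
    fit_score: float  # placeholder: how well this specification suits the agent

def plan_means(concrete_goal: str) -> List[str]:
    """Instrumentalist step: find means to an already-fixed goal (stub)."""
    return [f"take steps towards: {concrete_goal}"]

def instrumentalist_agent(fixed_goal: str) -> List[str]:
    # Reasons only about means; the goal itself is never up for deliberation.
    return plan_means(fixed_goal)

def specificationist_agent(vague_end: str,
                           candidates: List[Specification],
                           evaluate: Callable[[Specification], float]) -> List[str]:
    # First deliberates about WHAT the vague end should amount to,
    # then (and only then) reasons about means to that specification.
    best = max(candidates, key=evaluate)
    return plan_means(f"{vague_end}, specified as: {best.description}")

# Millgram's example: "write a very good paper" is not yet something one can
# take means to; it must first be specified.
candidates = [
    Specification("a survey of the wisdom literature", 0.4),
    Specification("an argumentative paper defending specificationism", 0.9),
]
print(instrumentalist_agent("finish the draft by Friday"))
print(specificationist_agent("write a very good paper", candidates,
                             lambda s: s.fit_score))
```

The point of the sketch is purely structural: the specificationist agent has an extra deliberative step over candidate specifications of its end, which is precisely the capacity that P2 denies to intelligent agents.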
5. Challenge II: AW Is Impossible in Practice
Most wisdom researchers agree that a wise agent must have a correct conception of worthwhile goals (broadly speaking) or well-being (narrowly speaking). Hacker-Wright reports that “As Aristotle and the tradition in moral philosophy that follows him understand practical wisdom, a central criterion for possessing practical wisdom is having a correct conception of a worthwhile end or ends” (2015: 983). Stephen Grimm further suggests that the following three types of knowledge are individually necessary for wisdom: first, “knowledge of what is good or important for well-being”; second, “knowledge of one’s standing, relative to what is good or important for well-being”; and third, “knowledge of a strategy for obtaining what is good or important for well-being” (2015: 140). It is clear that the first type of knowledge requires a correct conception of well-being. If we apply this consensus to AW research, an AW must likewise have a correct conception of worthwhile goals or well-being.
However, what is well-being (“happiness”, “flourishing”, “a good life”, and so on)?9 According to Grimm, a theory of wisdom is fully articulated “if it not only invokes notions like ‘what is important for well-being’ but also tells us what is important for well-being”; such a theory will opt for a “particular view about … how broadly the notion of well-being should be understood” (Grimm 2015: 142). Otherwise, a theory of wisdom is only partially articulated. This distinction is crucial for building an AW system. In philosophical research, a partially articulated theory of wisdom might still be good, because such a theory can improve our understanding of wisdom to a certain extent (for example, by revealing the intellectual structure of wisdom, as Grimm 2015 does). However, in AI research, a partially articulated theory of wisdom is not good enough, or worse, is not good at all.10 This is because, as Bostrom notes, “Computer languages do not contain terms such as ‘happiness’ as primitives. If such a term is to be used, it must first be defined…. The definition must bottom out in terms that appear in the AI’s programming language, and ultimately in primitives such as mathematical operators and addresses pointing to the contents of individual memory registers” (Bostrom 2014: 227-8). Without a fully articulated theory of wisdom, whose key ingredient is a fully articulated conception of well-being, an AW cannot get off the ground.
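Bostrom’s point can be made vivid with a toy sketch. In the fragment below, “well-being” enters the program only once it has been defined in terms of quantities the system can compute; the proxies and weights are invented placeholders, not a proposed account of well-being:

```python
# Toy illustration of Bostrom's point: 'well_being' is not a primitive of any
# programming language; it must bottom out in quantities the system can compute.
# The proxies and weights below are placeholders, not a serious proposal.

def well_being(life_satisfaction: float,        # e.g., a survey score in [0, 10]
               positive_affect: float,          # e.g., a sampled mood index in [0, 1]
               basic_needs_met: float) -> float:  # e.g., a fraction in [0, 1]
    """A fully articulated (and therefore contestable) operationalisation."""
    return (0.4 * life_satisfaction / 10
            + 0.3 * positive_affect
            + 0.3 * basic_needs_met)

print(well_being(7.5, 0.6, 0.9))  # 0.75
```

Committing to any such explicit definition is exactly what exposes an AW system to the peer disagreement discussed next.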
However, a problem, which constitutes the second challenge, arises: AW is in practice impossible because of peer disagreement about the correct conception of well-being. In philosophy, there are several fully articulated conceptions of well-being; the main three are offered by hedonism, desire fulfilment theories, and objective list theories. All three theories have their own problems:
- Hedonism is wrong because things other than mental states matter.
- Desire fulfilment theories are wrong because people can desire what is bad for them.
- Objective list theories are wrong because a person may not benefit from a given good. (Alexandrova 2017: 28)
Scholars familiar with the well-being literature know how the debate goes, as Anna Alexandrova describes: “Much effort in philosophy goes into pondering the implications of these claims and into formulating replies to fix the alleged problems” (2017: 28). Simon Keller goes beyond mere description, offering an evaluative characterization of the debate:
Debates about welfare tend to go like this. I offer a theory, you come up with a counterexample, my theory gets more complicated, and your counterexample gets more inventive. Of course, this feature does not distinguish the debate about welfare from most debates about philosophical concepts, but the continuous back and forth between ambitious theorizing, on the one hand, and ever more ingenious appeals to intuition, on the other, is more debilitating in this debate than most. There appear to be strategies that generate counterexamples to any theory of welfare that could possibly be offered. (Keller 2009: 657)
Peer disagreement about the correct conception of well-being seems to be inevitable because, as stated above, there are strategies that generate counterexamples to any actual or potential theory of welfare. The debate about well-being among philosophers continues. Now, it is conceivable that whenever an AW system produces, or is programmed with, a particular fully articulated conception of well-being and employs it in making particular judgments in particular cases, peer disagreement immediately arises. AI programs designed to be or become artificially wise are thus doomed to failure because of their potentially objectionable conceptions of well-being. Simply put, AW is in practice impossible.
6. How AW Is Possible in Practice
AW proponents might dismiss the second challenge by saying that a similar argument can be used to show that human wisdom is in practice impossible. However, there is a dissimilarity. A human agent can be wise even if the agent does not have a fully articulated conception of well-being. We can attribute wisdom to a human agent by using a partially articulated theory of wisdom. In such a case, there is no conception, let alone theory, of well-being to be refuted by counterexample. Thus, human wisdom is not in practice impossible. In contrast, as explained above, an AW cannot get off the ground without a fully articulated conception of well-being, and this type of conception leaves room for peer disagreement.
How is AW in practice possible? Here I illustrate a practical possibility for AW via Anna Alexandrova’s work in A Philosophy for the Science of Well-Being. Although Alexandrova’s aim is not to build AW, her general approach to well-being is, first, sensitive to the philosophical debate about well-being and, second, realistic. However, I shall add a philosophical caveat to highlight the limits of this practical possibility.
6.1 From Invariantism to Variantism
One way for an AW to have a fully articulated conception of well-being without having to fend off counterexamples is to pursue that conception through the science of well-being rather than the philosophy of well-being. The two disciplines have the same subject matter, but their methodologies differ. Alexandrova offers a fine comparison between the two disciplines regarding well-being:
Observe how philosophers deal with problems their theories face. Such problems are typically intuitive counterexamples…. They force a theory’s advocate either to bite the bullet or to make the theory more intricate…. But greater intricacy, though it makes for a more defensible theory by philosophers’ standards, typically compromises the connection between theory and measures of well-being. When philosophical accounts are used by scientists, they are used as models rather than as theories. A model, in this sense, is a conceptual tool for building a measurement procedure. Unlike a theory, which fully specifies how it should be used, a model requires additional outside knowledge. Once we see that the science of well-being treats philosophical proposals as models, it is natural to think that there are many such models and that there is no single overarching model to regulate their use. (Alexandrova 2017: 27)
Alexandrova calls the view that most philosophers hold “invariantism” or “the vending machine view”, according to which there is a single or ultimate theory of well-being, and the view that most scientists hold “variantism” or “the toolbox view”, which denies invariantism. Generally, the vending machine view maintains that “a theory contains within itself the resources for the treatment of any concrete situation” (2017: 35), whereas the toolbox view maintains that “theories contain some but not all of the tools necessary for building models that represent real situations” (2017: 36). Because the vending machine view construes a theory of well-being as containing within itself all the resources for the treatment of any concrete situation, the theory must cope with all examples of well-being, including the counterexamples to the theory (because the alleged counterexamples are supposed to represent cases of well-being). In contrast, the toolbox view carries no such requirement or burden.
Details aside, why should one adopt variantism (or the toolbox view) instead of invariantism (or the vending machine view)? Alexandrova’s answer, in my view, is typical of scientists: “I still wish to put variantism on the intellectual map and to give reasons to take this view seriously, if only because formulating it yields a more realistic view about what we can expect from a theory of well-being and what theories we are better off pursuing” (2017: 27; emphasis mine). That is, variantism can be adopted and taken as a basis from a practical point of view. In the following passage she makes this practical consideration more explicit:
My complaint then is that as theories become ever more intricate and general, their relevance to the question of value aptness of science diminishes. While the original philosophical proposals about well-being regularly inspire scientific projects, the subsequent versions with modifications do not, because their operationalisability is becoming harder and harder to achieve. This is not necessarily a problem—after all, true well-being may well be unmeasurable. But epistemic access and population-level comparisons is the conceit of the normal science of well-being. So any philosophical proposal that refuses to play the measurement game need not be taken seriously for these purposes. (Alexandrova 2017: 34-35; emphasis mine)
Alexandrova uses a vivid metaphor to express the same complaint about philosophers of well-being: “Current philosophical methodology worships different gods than those that would enable a connection between theories and measures. The philosophical gods are parsimony, universality, generality, immunity to counterexamples. When theories actually connect to measures in the sciences, these gods deserve no credit” (Alexandrova 2017: 37). In a nutshell, for the sake of operationalisability or measurability, which is necessary for the science of well-being, variantism about well-being is methodologically superior to invariantism about well-being.
If variantism about well-being is methodologically acceptable, then a practical possibility for building AW opens up. That is, we can base the AW project on the science of well-being rather than on the philosophy of well-being. By doing so, AW researchers and programmers can legitimately sidestep counterexamples to the theory of well-being that they adopt. AW is thus possible in practice.
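To illustrate how a variantist AW might be organized, the following minimal Python sketch keeps a toolbox of context-specific well-being models and selects one per case, rather than applying a single invariant theory to every concrete situation. All constructs here (`WellBeingModel`, the two example models, their weights) are hypothetical placeholders of my own, not Alexandrova’s proposal:

```python
from typing import Callable, Dict

# Toolbox view (variantism): several context-specific well-being models, each a
# conceptual tool tied to a measurement procedure, with no single overarching
# model regulating their use.
WellBeingModel = Callable[[dict], float]

def child_wellbeing(case: dict) -> float:
    # e.g., emphasises secure attachment and healthy development
    return 0.5 * case.get("attachment", 0.0) + 0.5 * case.get("development", 0.0)

def working_parent_wellbeing(case: dict) -> float:
    # e.g., emphasises work-life balance and life satisfaction
    return 0.5 * case.get("balance", 0.0) + 0.5 * case.get("satisfaction", 0.0)

TOOLBOX: Dict[str, WellBeingModel] = {
    "child": child_wellbeing,
    "working_parent": working_parent_wellbeing,
}

def assess(context: str, case: dict) -> float:
    """Select the model appropriate to the context instead of applying
    one invariant theory to every concrete situation."""
    return TOOLBOX[context](case)

print(assess("working_parent", {"balance": 0.7, "satisfaction": 0.8}))  # 0.75
```

The design choice mirrors the toolbox view: each model is operationalisable and answerable to measurement in its own context, and no model claims to contain within itself the resources for treating every concrete situation.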
6.2 A Philosophical Caveat
Even if AW is in practice possible in the way discussed above, I wish to add a philosophical caveat: an AW system constructed via variantism is only quasi-AW, because variantism is not a complete theory of well-being.
Consider this passage:
William Frankena suggests that a theory of welfare should answer not only the question of which things are good as ends for us but the question of what makes them good (Frankena 1973: ch. 5; see also Moore 2000: 78 and Crisp 2006: 622-23). In order to be truly monistic, a theory of welfare must have a monistic answer to the second question, the question of good-makers. The theory must claim that for all the things or states that are intrinsically good for us, they are all made good for us by the same single feature. … Truly pluralistic theories, by contrast, will typically hold that, for each good kind of thing on the list, it is, so to speak, its own good-maker. … To say that a good is its own good-maker is really just a way of saying that it is a basic intrinsic good. (Heathwood 2015: 141)
That is, a complete theory of well-being should tell us not only which things are good for well-being but also what makes them good. Variantism in its current form is a theory of which things are good for well-being (for, say, children, parents, working parents, etc.), but not a theory of what makes those things good. It is not clear whether variantism is a truly monistic or a truly pluralistic theory, because it is not yet a theory of the good-maker.
An agent, whether human or artificial, who merely knows which things are good for human well-being can only be a quasi-wise agent. In normal situations, quasi-wisdom works fine. However, in intractable situations such as value conflicts and deep disagreements, quasi-wisdom fails. A genuinely wise agent must further know what makes those good things good, that is, what is genuinely or fundamentally good for well-being. Knowing this, a genuinely wise agent knows which value is most worth pursuing when values conflict.
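The contrast between quasi-wisdom and genuine wisdom can be put schematically in code. In this toy sketch (the listed goods and the scoring function are invented placeholders of my own), an agent equipped only with a theory of goods cannot adjudicate a conflict between two recognized goods, whereas an agent that also has a good-maker function can:

```python
from typing import Callable, List, Optional

GOODS: List[str] = ["friendship", "achievement", "health"]  # a theory of goods only

def quasi_wise_choice(options: List[str]) -> Optional[str]:
    # Can tell good from not-good (an AMA-style task) ...
    goods = [o for o in options if o in GOODS]
    if len(goods) == 1:
        return goods[0]
    # ... but has no account of WHAT MAKES goods good, so it cannot
    # adjudicate a conflict between two recognized goods.
    return None

def genuinely_wise_choice(conflicting: List[str],
                          good_maker: Callable[[str], float]) -> str:
    # Also has an account of what makes goods good (here: a placeholder
    # scoring of contribution to well-being), so it can adjudicate.
    return max(conflicting, key=good_maker)

# Hypothetical good-maker: degree to which each good contributes to well-being.
contribution = {"friendship": 0.9, "achievement": 0.6, "health": 0.8}

print(quasi_wise_choice(["friendship", "achievement"]))                    # None
print(genuinely_wise_choice(["friendship", "achievement"], contribution.get))  # friendship
```

The sketch is not a proposal about which goods there are or how to score them; it only exhibits the structural difference between a theory of goods and a theory of the good-maker.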
If an AW system built upon variantism, which is a theory of goods (minus a theory of the good-maker), is only quasi-wise, does this suggest that we should abandon it? I do not think so. After all, genuine AW, if possible, requires quasi-AW to get off the ground, as argued in Section 6.1. What I attempt to do in this section is to warn AI researchers and programmers about the limitations of building AW through the science of well-being and to remind them of the need for the philosophy of well-being in building genuine AW.
7. Conclusion
In this paper, I have explained why AW matters, and I have shown how AW is possible in principle and in practice. AW is possible in principle only if it adopts specificationism about practical reasoning (rather than instrumentalism about practical reasoning). AW is possible in practice only if it adopts variantism about well-being (rather than invariantism about well-being), although I have added a philosophical caveat to make AI researchers and programmers aware of the nature of the AW thus built, namely quasi-AW. Specificationism and variantism together constitute a philosophical framework for future research on AW.
I would like to end this paper with an observation made by John-Stewart Gordon in “Building Moral Robots: Ethical Pitfalls and Challenges”:
[O]ne can observe two main types of problems that non-ethicists commit when they attempt to add an “ethical dimension” to their machines. The first type could be called rookie mistakes; they involve a misunderstanding of moral issues or starting from wrong ethical assumptions…. These problems show that the researchers and programmers committing the errors are somewhat unaware of the moral complexity of the issues with which they are grappling or the knowledge already achieved in the field of ethics. The second category contains methodological issues that pose a challenge even to ethical experts because they disagree on how to solve those issues. (Gordon 2019: 2)
I think that Gordon’s observation is also relevant to building artificial wisdom; similar problems would arise in adding a “wise dimension” to machines, as I have demonstrated throughout the paper. The issues of how AW is possible in principle and in practice correspond, to a certain extent, to the two types of problems Gordon raises about building AMAs (although my formulation of the issues and suggested solutions are new to the literature). Philosophy in general, and ethics in particular, still matters in AI research. To put it another way, AI is an interdisciplinary field indeed, and in need.
Acknowledgements
I am grateful to two anonymous reviewers for their valuable comments and suggestions. The material of this paper was presented at the Ministry of Science and Technology (Taiwan), National Tsing Hua University, National Central University, and Tunghai University. I thank the audiences, in particular Ser-min Shei, Ruey-yuan Wu, Terence Hua Tai, Li-jung Wang, and Wei-ching Wang, for helpful questions and discussions. This work was supported by the Ministry of Science and Technology, Taiwan (Grant Nos. MOST 103-2410-H-001-108-MY5, 107-2418-H-001-003-MY3, and 108-2420-H-001-002-MY3).
References
- Alexandrova, A. (2017). A Philosophy for the Science of Well-Being (Oxford: Oxford University Press).
- Allen, C. and W. Wallach (2012). “Moral Machines: Contradiction in Terms or Abdication of Human Responsibility?”, in P. Lin, K. Abney, and G. Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics (Cambridge, Mass.: The MIT Press), pp. 55-68.
- Allen, C., I. Smit & W. Wallach (2005). “Artificial Morality: Top-Down, Bottom-Up and Hybrid Approaches”, Ethics and New Information Technology 7: 149-155.
- Anderson, M. and S. Anderson (eds.)(2011). Machine Ethics (Cambridge: Cambridge University Press).
- Baltes, P., J. Gluck, and U. Kunzmann (2002). “Wisdom: Its Structure and Function in Regulating Successful Life Span Development”, in C. Snyder and S. Lopez (eds.), Handbook of Positive Psychology (Oxford: Oxford University Press), pp. 327-47.
- Boden, M. (2016). AI: Its Nature and Future (Oxford: Oxford University Press).
- Bostrom, N. (2006). “How Long before Superintelligence?”, Linguistic and Philosophical Investigations 5(1): 11-30.
- Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press).
- Casacuberta, D. (2013). “The Quest for Artificial Wisdom”, AI & Society 28: 199-207.
- Crisp, R. (2006). “Hedonism Reconsidered”, Philosophy and Phenomenological Research 73: 619-45.
- Davis, J. (2019). “Artificial Wisdom? A Potential Limit on AI in Law (and Elsewhere)”, Oklahoma Law Review 72(1): 51-89.
- Dreyfus, H. (2001). On the Internet (New York: Routledge).
- Frankena, W. (1973). Ethics (Englewood Cliffs, NJ: Prentice Hall. 2nd edition).
- Gamez, D. (2008). “Progress in Machine Consciousness”, Consciousness and Cognition 17: 887-910.
- Goertzel, B. (2008). “Artificial Wisdom”, in: IEET, Institute for Ethics and Emerging Technologies, April 2008. Retrieved in November 2019: https://ieet.org/index.php/IEET2/more/goertzel20080420
- Goldman, A. (2018). Life’s Value (Oxford: Oxford University Press).
- Gordon, J. (2019). “Building Moral Robots: Ethical Pitfalls and Challenges”, Science and Engineering Ethics. https://doi.org/10.1007/s11948-019-00084-5
- Grimm, S. (2015). “Wisdom”, Australasian Journal of Philosophy 93(1): 139-154.
- Hacker-Wright, J. (2015). “Skill, Practical Wisdom, and Ethical Naturalism”, Ethical Theory and Moral Practice 18(5): 983-993.
- Heathwood, C. (2015). “Monism and Pluralism about Value”, in I. Hirose and J. Olson (eds.), The Oxford Handbook of Value Theory (Oxford: Oxford University Press), pp. 136-157.
- Keller, S. (2009). “Welfare as Success”, Nous 43(4): 656-683.
- Kim, T. W. and S. Mejia (2019). “From Artificial Intelligence to Artificial Wisdom: What Socrates Teaches Us”, Computer 52: 70-74.
- Leben, D. (2019). Ethics for Robots: How to Design a Moral Algorithm (New York: Routledge).
- Marsh, S., M. Dibben, and N. Dwyer (2016). “The Wisdom of Being Wise: A Brief Introduction to Computational Wisdom”, in S. Habib, J. Vassileva, S. Mauw, and M. Muhlhauser (eds.), Trust Management X. IFIPTM 2016. IFIP Advances in Information and Communication Technology, vol 473, pp. 137-145.
- Mason, E. (2018). “Value Pluralism”, in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2018 Edition), URL = <https://plato.stanford.edu/archives/spr2018/entries/value-pluralism/>.
- Millgram, E. (1997). Practical Induction (Cambridge, Mass.: Harvard University Press).
- Millgram, E. (2005). Ethics Done Right: Practical Reasoning as a Foundation for Moral Theory (Cambridge: Cambridge University Press).
- Millgram, E. (2008). “Specificationism”, in J. E. Adler & Lance J. Rips (eds.), Reasoning: Studies of Human Inference and its Foundations (Cambridge: Cambridge University Press), pp. 731-747.
- Moor, J. (2006). “The Nature, Importance, and Difficulty of Machine Ethics”, IEEE Intelligent Systems 21: 18-21.
- Moore, A. (2000). “Objective Human Goods”, in R. Crisp and B. Hooker (eds.), Well-Being and Morality (Oxford: Oxford University Press), pp. 75-89.
- Pennachin, C. and B. Goertzel (2007). “Contemporary Approaches to Artificial General Intelligence”, in Goertzel and Pennachin (eds.), Artificial General Intelligence (Berlin: Springer-Verlag), pp. 1-30.
- Petersen, S. (2017). “Superintelligence as Superethical”, in P. Lin, K. Abney, and R. Jenkins (eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence (Oxford: Oxford University Press), pp. 322-337.
- Reggia, J. (2013). “The Rise of Machine Consciousness: Studying Consciousness with Computational Models”, Neural Networks 44: 112-131.
- Russell, D. (2009). Practical Intelligence and the Virtues (Oxford: Oxford University Press).
- Russell, S. and P. Norvig (2010). Artificial Intelligence: A Modern Approach (Essex, UK: Pearson Education Limited, 3rd edition).
- Stichter, M. (2016). “Practical Skills and Practical Wisdom in Virtue”, Australasian Journal of Philosophy 94(3): 435-448.
- Tiberius, V. (2013). “Why Be Moral? Can the Psychological Literature on Well-Being Shed any Light?”, Res Philosophica 90(3): 347-364.
- Tsai, C. (2019). “Phronesis and Techne: The Skill Model of Wisdom Defended,” Australasian Journal of Philosophy.
- Wallach, W. and C. Allen (2008). Moral Machines: Teaching Robots Right from Wrong (Oxford: Oxford University Press).
- Whitcomb, D. (2011). “Wisdom”, in S. Bernecker and D. Pritchard (eds.), Routledge Companion to Epistemology (New York: Routledge), pp. 95-105.