Sapience vs sentience

12/16/2023

Sentience Institute uses the term "artificial sentience" to describe artificial entities with the capacity for positive and negative experiences.

Edited by Jacy Reese Anthis and Ali Ladak. Many thanks to Thomas Moynihan, Tobias Baumann, and Teo Ajantaival for reviewing and providing feedback. This article is also available and referenceable on the Open Science Framework.

Abstract

We consider the terminology used to describe artificial entities and how that terminology may affect their moral consideration. The terms used to define and refer to these entities often take a human perspective, focusing on the benefits and drawbacks to humans. Different combinations of terms variously emphasize the entity's role, material features, psychological features, and different research perspectives. The ideal term may vary across contexts, but we favor "artificial sentience" in general, in part because "artificial" is more common in relevant contexts than near-synonyms such as "synthetic" and "digital," and because it emphasizes the sentient artificial entities who deserve moral consideration. Evaluating the benefits and drawbacks of this terminology may help to clarify emerging research, improve its impact, and align the study of artificial intelligence (AI), especially research on AI ethics, with the interests of sentient artificial entities.

Contents

The importance of conceptual clarity
Table 1: Terminology defining an entity's role
Table 2: Terminology defining material features
Table 3: Terminology defining psychological features
Table 4: Consequential combinations of features and role
Appendix: Terminology defining relevant fields of study
Table A1: Fields of Study

This paper is focused on the Sapient and Sentient Intelligence Value Argument (SSIVA), the ethics of how it applies to autonomous systems, how such systems might be governed by extending current regulation, and a computable model of ethics for AGI research. SSIVA is based on static core definitions of "intelligence," defined as the measured ability to understand, use, and generate knowledge or information independently, all of which are a function of sapience and sentience. The SSIVA logic places priority on the value of any individual human and their potential for intelligence, and values other systems to the degree that they are self-aware or "intelligent." Further, the paper lays out the case for how the current legal framework could be extended to address issues with autonomous systems, to varying degrees depending on the SSIVA threshold as applied to those systems. Finally, from a research standpoint it is important to have a discrete model to measure against without ambiguity, which the SSIVA theory provides.
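The appeal of a "discrete model to measure against without ambiguity" can be pictured as a simple threshold test. The sketch below is only a loose illustration under invented assumptions: the three capacity fields echo SSIVA's definition of intelligence (understanding, using, and generating knowledge), but the scoring rule, the averaging, and the 0.5 threshold are placeholders of mine, not the paper's actual model.

```python
from dataclasses import dataclass


@dataclass
class Entity:
    """An agent scored on the capacities SSIVA treats as constitutive
    of intelligence. All scores are illustrative, on a 0..1 scale."""
    understands: float  # ability to understand knowledge
    uses: float         # ability to use/apply knowledge
    generates: float    # ability to generate knowledge independently


def intelligence_score(e: Entity) -> float:
    # SSIVA defines intelligence via the ability to understand, use,
    # and generate knowledge; here we simply average the three scores
    # (an assumption, not the paper's metric).
    return (e.understands + e.uses + e.generates) / 3


SSIVA_THRESHOLD = 0.5  # placeholder value, not taken from the paper


def warrants_moral_consideration(e: Entity) -> bool:
    """Discrete, unambiguous test: the entity is either above the
    threshold or it is not — no graded middle ground."""
    return intelligence_score(e) >= SSIVA_THRESHOLD
```

The point of the illustration is the shape of the test, not the numbers: a threshold model yields a yes/no answer that researchers and regulators can measure against, which is the property the paragraph above attributes to SSIVA.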