Can Artificial Entities Assert?

abstract

  • There is an ongoing debate over whether technological instruments, devices, or machines can assert or testify. The standard view in epistemology is that only humans can testify. The notion of quasi-testimony, however, acknowledges that technological devices can assert or testify under certain conditions, while still recognizing that humans and machines are not the same. Indeed, there are four relevant differences between humans and instruments. First, unlike human assertion, machine assertion is not imaginative or playful. Second, machine assertion is prescripted and context-restricted: computers currently cannot easily switch contexts or make meaningful, relevant assertions in contexts for which they were not programmed. Third, while both humans and computers make errors, they do so in different ways. Computers are highly sensitive to small errors in input, which may cause large errors in output, and automatic error control is based on finding irregularities in data without trying to establish whether the data make sense. Fourth, testimony is produced by a human with moral worth, whereas quasi-testimony is not. Ultimately, the notion of quasi-testimony can serve as a bridge between the philosophical fields that deal with instruments and testimony as sources of knowledge, allowing them to converse and agree on a shared description of reality while maintaining their distinct conceptions and ontological commitments about knowledge, humans, and nonhumans.

publication date

  • May 7, 2020