Abstract
Several non-monotonic formalisms exist in the field of Artificial Intelligence for reasoning under uncertainty. Many of these are deductive and knowledge-driven, and employ procedural and semi-declarative techniques for inferential purposes. Nonetheless, limited work exists on the comparison of distinct techniques and, in particular, on the examination of their inferential capacity. Thus, this paper focuses on a comparison of three knowledge-driven approaches employed for non-monotonic reasoning, namely expert systems, fuzzy reasoning and defeasible argumentation. A knowledge-representation and reasoning problem has been selected: modelling and assessing mental workload. This is an ill-defined construct, and its formalisation can be seen as a reasoning activity under uncertainty. An experimental study was performed by exploiting three deductive knowledge bases produced with the aid of experts in the field. These were coded into models by employing the selected techniques and were subsequently elicited with data gathered from humans. The inferences produced by these models were in turn analysed according to common metrics of evaluation in the field of mental workload, specifically validity and sensitivity. Findings suggest that the inferences of the expert system and fuzzy reasoning models exhibited higher variance, indicating poor stability. In contrast, the variance of the argument-based models was lower, showing a superior stability of their inferences across knowledge bases and under different system configurations. The originality of this research lies in the quantification of the impact of defeasible argumentation. It contributes to the field of logic and non-monotonic reasoning by situating defeasible argumentation among similar approaches to non-monotonic reasoning under uncertainty through a novel empirical comparison.
Introduction
Uncertainty associated with incomplete, imprecise or unreliable knowledge is inevitable in daily reasoning and in many real-world contexts. Within Artificial Intelligence (AI), many approaches have been proposed for the development of inferential models capable of addressing such uncertainty. Among them, non-monotonic reasoning emerged from the area of logical AI as an alternative to deductive inferences in logical systems, which were perceived as inadequate for decision making in realistic situations (Bochman, 2007). Hence, reasoning is non-monotonic, or defeasible, when a conclusion can be withdrawn in the light of new information (Reiter, 1988; McCarthy, 1980; Kowalski & Sadri, 1991; Longo, 2015; Brewka, 1991). A number of approaches for dealing with quantitative reasoning under uncertainty exist (Parsons & Hunter, 1998), including computational argumentation (also referred to as defeasible argumentation) (Prakken & Vreeswijk, 2001), fuzzy reasoning (Zadeh et al., 1965) and expert systems (Durkin & Durkin, 1998). These approaches have led to the development of non-monotonic reasoning models based upon knowledge bases often provided by human experts. Intuitively, since these models have been developed with human-in-the-loop intervention, their reasoning processes and their inferences have an intrinsically higher degree of interpretability and transparency when compared to data-driven approaches for inference.
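To make the notion of non-monotonicity concrete, the following minimal sketch (not drawn from the present study; the predicates and rules are hypothetical illustrations) shows a default conclusion being withdrawn once more specific information becomes available.

```python
# Illustrative sketch of defeasible inference: a default conclusion
# ("flies") is withdrawn when a more specific fact ("penguin") is added.
# The rule set and predicates are hypothetical and purely for illustration.

def conclusions(facts):
    """Apply two defeasible rules: birds normally fly; penguins do not."""
    inferred = set(facts)
    if "bird" in inferred and "penguin" not in inferred:
        inferred.add("flies")          # default rule, applies only if not rebutted
    if "penguin" in inferred:
        inferred.add("does_not_fly")   # more specific rule, defeats the default
    return inferred

print(conclusions({"bird"}))             # {'bird', 'flies'}
print(conclusions({"bird", "penguin"}))  # 'flies' is no longer derived
```

The key point is that adding the fact "penguin" does not merely extend the set of conclusions; it removes one ("flies"), a behaviour that monotonic deductive logic cannot exhibit.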