When an interaction mechanism such as argumentation is considered for use in open multi-agent domains, such as e-commerce or other business applications, it is necessary to consider the possibility of agents performing malicious actions. Common themes in the study of malicious actions in communication protocols are withholding information and misrepresenting information. In argumentation, however, the use of a complex underlying formal logic allows for a further type of malicious action: the introduction of superfluous complexity into information, designed to overwhelm the reasoning capacity of another agent. We examine a malicious strategy in open multi-agent systems based on exploiting the complexity of the formal logic underlying argumentation in order to manipulate the outcome of argument acceptability evaluation. Further, we briefly discuss the general problem of defensive strategies against this type of malicious argumentation, and the inherent difficulty of detecting occurrences of it.
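To make the manipulation of acceptability evaluation concrete, the following is a minimal sketch, assuming a Dung-style abstract argumentation framework evaluated under grounded semantics; the abstract does not fix a particular semantics, and all argument names and the helper function here are illustrative. The point shown is only that injecting a superfluous counter-argument which the target agent cannot afford to refute is enough to flip an argument from accepted to not accepted.

```python
# Sketch (not from the paper): grounded-semantics acceptability in a
# Dung-style abstract argumentation framework, and the effect of an
# injected superfluous counter-argument on the evaluation outcome.

def grounded_extension(arguments, attacks):
    """Least fixed point of the characteristic function F(S)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(a, s):
        # a is defended by s if every attacker of a is itself attacked by s
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    s = set()
    while True:
        nxt = {a for a in arguments if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# Honest exchange: argument "a" is unattacked, so it is accepted.
args, atts = {"a"}, set()
print("a" in grounded_extension(args, atts))   # True

# A malicious agent injects a superfluous counter-argument "b" attacking "a".
# Unless "a"'s proponent spends the reasoning effort needed to defeat "b",
# "a" drops out of the grounded extension.
args, atts = {"a", "b"}, {("b", "a")}
print("a" in grounded_extension(args, atts))   # False
```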