Artificial institutions: specification and verification of open distributed interaction frameworks
Institutions have been proposed to design, analyze, and regulate open multiagent systems, where agents are developed by different organizations and their internal mental states are not accessible. In such systems, institutions play two fundamental roles: i) they define a set of norms that makes the behavior of other agents more predictable, and ii) they describe the ontology of the interaction context.
In open systems it is unrealistic to expect that autonomous agents will always comply with norms. For this reason, research on institutions has mainly focused on developing languages and tools to model and monitor agents' interactions in terms of institutional concepts (roles, obligations, etc.), with the purpose of avoiding or detecting norm violations. In doing so, institutions play a crucial role in increasing the efficiency of electronic transactions carried out by agents, but they also raise the problem of ensuring that their specifications contain no contradictory norms and provide agents with all the powers needed to fulfill their objectives. This is especially important when institutions are complex and it is prohibitively difficult to foresee all the evolutions they admit.
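The kind of run-time monitoring described above can be illustrated with a minimal sketch. All names here (`Obligation`, `Monitor`, the event vocabulary) are hypothetical, not taken from any particular institutional framework: the monitor imposes obligations with deadlines, observes agents' actions, and flags as violations those obligations whose deadline passes unfulfilled.

```python
from dataclasses import dataclass

@dataclass
class Obligation:
    """Hypothetical directed obligation: `debtor` must do `action` by `deadline`."""
    debtor: str
    action: str
    deadline: int
    fulfilled: bool = False

class Monitor:
    """Toy violation monitor (illustrative API, not a real framework)."""
    def __init__(self):
        self.obligations: list[Obligation] = []
        self.violations: list[Obligation] = []

    def impose(self, obligation: Obligation) -> None:
        self.obligations.append(obligation)

    def observe(self, agent: str, action: str) -> None:
        # Mark matching obligations as fulfilled when the debtor acts.
        for ob in self.obligations:
            if ob.debtor == agent and ob.action == action:
                ob.fulfilled = True

    def tick(self, now: int) -> None:
        # An obligation whose deadline has passed unfulfilled is a violation.
        for ob in self.obligations:
            if not ob.fulfilled and ob.deadline < now and ob not in self.violations:
                self.violations.append(ob)

m = Monitor()
m.impose(Obligation("buyer", "pay", deadline=3))
m.impose(Obligation("seller", "deliver", deadline=3))
m.observe("buyer", "pay")   # buyer pays in time; seller never delivers
m.tick(5)
print([v.debtor for v in m.violations])  # → ['seller']
```

Note that the monitor never prevents a violation; consistent with the open-systems assumption above, it only detects non-compliance from observable events.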
As a consequence, automated formal verification should be regarded as an important step in the development of institutions: it can increase their reliability by ensuring that they satisfy certain properties. This project aims to define a formalism for describing institutions and to develop techniques for verifying them through model-checking.