
ANAC 2017: International Automated Negotiating Agent Competition


Link: http://web.tuat.ac.jp/~katfuji/ANAC2017/
 
When Oct 1, 2016 - Aug 25, 2017
Where Melbourne
Submission Deadline Oct 21, 2016
Categories    competition   agent   negotiation   human interaction
 

Call For Papers

Call for Intended Participation ANAC 2017

The ANAC competition brings together researchers from the negotiation community and provides a unique benchmark for evaluating practical negotiation strategies in multi-issue domains. Previous competitions have spawned novel AI research in autonomous agent design that is available to the wider research community. This year, we would like to introduce a variety of negotiation research challenges:

Repeated Multilateral Negotiation for arbitrary domains (Genius framework)
Negotiation Strategies for the Diplomacy Strategy Game (Bandana framework)
Human-Agent Negotiation (IAGO framework)

Before announcing the definitive ANAC 2017 negotiation leagues, we would like to see whether we have enough participants for each league. Therefore, we kindly ask which league(s) you are interested in and would like to participate in. Please fill in the intended participation registration form by 21 October 2016. Registration is free of charge!

Filling in the intended participation registration form is important so that we can inform you about any update regarding the negotiation platforms you are interested in and about ANAC 2017.

Please register your intention to participate: http://tinyurl.com/ANAC2017Intention


ORGANIZATION COMMITTEE:

Dr. Reyhan Aydogan, Ozyegin University & Delft University of Technology
Dr. Tim Baarslag, University of Southampton
Prof. Dr. Katsuhide Fujita, Tokyo University of Agriculture and Technology
Prof. Dr. Takayuki Ito, Nagoya Institute of Technology
Dr. Dave de Jonge, Western Sydney University
Prof. Dr. Catholijn Jonker, Delft University of Technology
Johnathan Mell, The University of Southern California


1- ANAC Repeated Multilateral Negotiation League
Challenge: What are winning bidding, opponent-modeling, and bid-acceptance strategies when negotiating repeatedly with agents in a multilateral setting?

*** Entrants ***
Entrants to the competition have to develop and submit an autonomous negotiation agent that runs on Genius. Genius is a Java-based negotiation platform in which you can create negotiation domains and preference profiles as well as develop negotiating agents. The platform allows you to simulate negotiation sessions and run tournaments. More details can be found by following this link:
http://ii.tudelft.nl/genius/

Performance of the agents will be evaluated in a tournament setting, where each agent is matched with other submitted agents, and each set of agents will negotiate in a number of negotiation scenarios. Negotiations are repeated several times to obtain statistically significant results.

A negotiation scenario consists of a specification of negotiation issues and preferences of all negotiating parties. The preferences of a party are modelled using additive utility functions.
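As a concrete illustration of the additive model, here is a minimal sketch of an additive utility function in Java. The class and method names are hypothetical, not the Genius API; in Genius itself, domains and preference profiles are specified through the platform.

import java.util.Map;

/**
 * Minimal sketch of an additive utility function (hypothetical types, not
 * the Genius API). Each issue i has a weight w_i and an evaluation function
 * eval_i; the utility of a bid is sum_i w_i * eval_i(value_i).
 */
public class AdditiveUtility {

    private final Map<String, Double> weights;              // issue -> weight; weights sum to 1
    private final Map<String, Map<String, Double>> evals;   // issue -> (value -> evaluation in [0,1])

    public AdditiveUtility(Map<String, Double> weights,
                           Map<String, Map<String, Double>> evals) {
        this.weights = weights;
        this.evals = evals;
    }

    /** Utility of a bid that assigns one value to every issue. */
    public double utility(Map<String, String> bid) {
        double u = 0.0;
        for (Map.Entry<String, String> e : bid.entrySet()) {
            u += weights.get(e.getKey()) * evals.get(e.getKey()).get(e.getValue());
        }
        return u;
    }
}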


*** Rules of Encounter ***
Negotiations are multilateral and based on a multi-player version of the alternating-offers protocol. Offers are exchanged in real time with a 3-minute deadline. In addition, about half of the domains will have a discount factor, under which the value of an agreement decreases over time. The challenge for an agent is to negotiate without any knowledge of the preferences and strategies of its opponents.
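For intuition, discounting in Genius-style tournaments is commonly applied as u * d^t, with discount factor d in (0,1] and time t normalized to [0,1]. The sketch below assumes that convention rather than quoting the official ANAC 2017 specification.

/**
 * Sketch of time discounting as commonly used in Genius-style tournaments
 * (an assumption, not the official ANAC 2017 specification): with discount
 * factor d in (0,1] and time t normalized to [0,1], an agreement with raw
 * utility u is worth u * d^t. With d = 1 there is no discounting.
 */
public final class Discounting {
    public static double discountedUtility(double u, double d, double t) {
        return u * Math.pow(d, t);
    }

    public static void main(String[] args) {
        // An agreement worth 0.8 reached halfway through a session with
        // d = 0.5 is worth 0.8 * 0.5^0.5, roughly 0.566.
        System.out.println(discountedUtility(0.8, 0.5, 0.5));
    }
}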

Agents can be disqualified for violating the spirit of fair play. The competition rules allow multiple entries from a single institution, but require each agent to be developed independently.


*** Main updates with respect to ANAC 2016 ***

Learning and Adaptation in Multilateral Negotiations

This year, we allow agents to save and load data from their past negotiation sessions. Agents may use this information to learn about and adapt to the domain over time, and to negotiate better with their opponents.

The top 8 performing agents of the qualification round will continue to the finals and compete with all other finalists. That is, we will run a tournament including all finalists on the submitted negotiation scenarios.

The multi-player protocol is a simple extension of the bilateral alternating offers protocol, called the Stacked Alternating Offers Protocol (SAOP). Under this protocol, every participant around the table gets one turn per round, with turns taken clockwise. The first party starts the negotiation with an offer that is immediately observed by all others. Whenever an offer is made, the next party in line can take one of the following actions:

1. Make a counter offer (thus rejecting and overriding the previous offer)
2. Accept the offer
3. Walk away (i.e., ending the negotiation without any agreement)

This process is repeated in clockwise, turn-taking fashion until an agreement is reached or the deadline passes. To reach an agreement, all parties must accept the offer. If no agreement has been reached at the deadline, the negotiation fails.
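The following sketch shows the shape of this loop in Java. The types are hypothetical (Genius runs the actual protocol for you); the point is the turn order and the acceptance condition: an offer becomes an agreement once every party other than its proposer has accepted it.

import java.util.List;

/**
 * Sketch of the Stacked Alternating Offers Protocol loop (hypothetical
 * types; the Genius framework runs this for you). Parties take turns
 * clockwise; a counter-offer overrides the standing offer.
 */
public class SaopSketch {

    enum ActionType { OFFER, ACCEPT, WALK_AWAY }

    interface Party {
        ActionType chooseAction(Object currentOffer);
        Object makeOffer();
    }

    /** Returns the agreed offer, or null on walk-away / deadline. */
    static Object run(List<Party> parties, long deadlineMillis) {
        Object offer = null;
        int accepts = 0;    // consecutive accepts of the standing offer
        int turn = 0;
        while (System.currentTimeMillis() < deadlineMillis) {
            Party p = parties.get(turn % parties.size());
            ActionType a = (offer == null) ? ActionType.OFFER : p.chooseAction(offer);
            if (a == ActionType.WALK_AWAY) return null;           // negotiation ends, no agreement
            if (a == ActionType.ACCEPT) {
                accepts++;
                if (accepts == parties.size() - 1) return offer;  // everyone but the proposer accepted
            } else {
                offer = p.makeOffer();                            // counter-offer overrides the standing offer
                accepts = 0;
            }
            turn++;
        }
        return null;                                              // deadline reached: failure
    }
}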


*** Qualifying Round and Finals ***

The teams of the top 8 performing agents will be notified, and the final results will be announced at an AI conference, probably IJCAI 2017.

It is expected that teams that make it through to the finals will have a representative attending the conference. Each team in the final will have the opportunity to give a brief presentation describing their agent.


2- ANAC Diplomacy Strategy Game League
*** Entrants ***

Entrants to the competition have to develop a negotiation algorithm for the game of Diplomacy. Diplomacy is a strategy game for 7 players. Each player has a number of armies and fleets positioned on a map of Europe, and the goal is to conquer more than half of the "Supply Centers". What makes this game very interesting and different from other board games, however, is that players need to negotiate with each other in order to play well. Players may form coalitions and make plans together in order to defeat other players.

Every participant in this competition must implement a negotiation algorithm using the BANDANA framework. This negotiation algorithm will then be combined with an existing non-negotiating agent (the D-Brane Strategic Module) to form a complete negotiating Diplomacy player. The BANDANA framework is a Java-based platform specifically designed for the development of negotiation algorithms for Diplomacy.

The interesting aspect of Diplomacy, compared to previous editions of ANAC, is that there is no explicit formula describing your agent's utility function. The goal of a negotiator is to make deals with its opponents that increase its chances of winning. Since Diplomacy is a complex game played over many rounds, your agent will only be able to estimate the value of such a deal using some heuristic approach.

Participants are *NOT* allowed to develop a complete Diplomacy player from scratch. Participants must create their agents by extending the AnacNegotiator class from the BANDANA framework with a negotiation algorithm.

More information about Diplomacy can be found here:
A short introduction: https://www.youtube.com/watch?v=z40JP-PJ1vI&feature=youtu.be
The complete rules: https://www.wizards.com/avalonhill/rules/diplomacy.pdf

More information about the BANDANA framework, and how to implement your agent for the competition can be found here: http://www.iiia.csic.es/~davedejonge/bandana/
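To make the heuristic idea above concrete, here is a sketch of one possible deal-evaluation heuristic in Java. The types are hypothetical; a real entry must extend the AnacNegotiator class and use BANDANA's actual deal representation as documented in the manual.

import java.util.List;

/**
 * Sketch of a heuristic deal evaluation for Diplomacy (hypothetical types;
 * a real entry extends AnacNegotiator from the BANDANA framework). Since
 * there is no explicit utility function, a deal is scored here by the
 * estimated change in the number of Supply Centers we expect to hold.
 */
public class DealHeuristicSketch {

    /** A hypothetical, simplified view of one clause of a deal. */
    static class Clause {
        double expectedSupplyCenterGain;   // our estimated gain if this clause is obeyed
        double riskOfBetrayal;             // in [0,1]: chance the ally defects anyway

        Clause(double gain, double risk) {
            this.expectedSupplyCenterGain = gain;
            this.riskOfBetrayal = risk;
        }
    }

    /** Score a deal as risk-weighted expected gain; accept if positive. */
    static double score(List<Clause> deal) {
        double total = 0.0;
        for (Clause c : deal) {
            total += c.expectedSupplyCenterGain * (1.0 - c.riskOfBetrayal);
        }
        return total;
    }
}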

*** Rules of Encounter ***

Negotiations take place in every 'Spring phase' and every 'Fall phase' of the game. The BANDANA framework offers a limited set of deals the players can propose; this is explained in the BANDANA manual.

Negotiations are multilateral: a proposal may involve any number of agents between 2 and 7. A proposal is considered 'confirmed' once all players involved in it have accepted it, unless it is inconsistent with earlier confirmed proposals. Proposals are private to the players involved. Thus, if player A makes a proposal to players B and C, then only A, B, and C will know about it; players D, E, F, and G will not.

Any player may make or accept any proposal whenever it wants, so, unlike the Stacked Alternating Offers Protocol, there is no turn-taking. A neutral 'Notary agent' records all proposals that are made and accepted, and sends a confirmation message to all players involved in a proposal once all of them have accepted it. A deal is considered a binding agreement if and only if it has been confirmed by the Notary agent.

At the end of each turn, the D-Brane Strategic Module will select your player's moves. If your player is involved in any confirmed deal, the strategic module will only choose moves that obey that agreement.


*** Tournament Setup ***

Participating agents will be put in groups of 7. If the number of participants is not a multiple of 7, (some of) the groups will be supplemented with one or more agents provided by the organization. If there is more than one group, the best players of each group will advance to the final round.

In each group (and in the final) the players will play a large number of games. In each game, players are randomly assigned to the 7 'Great Powers'. Each round of the game will last 30 seconds. A game ends either when a player conquers 18 Supply Centers (a solo victory), when the players agree to a draw, or when the game reaches the 'winter 1920' phase, in which case a draw is automatically declared.

In case of a solo victory, the winner receives 12 points and all other players receive 0 points.
In case of a draw:
- All players eliminated before the end of the game receive 0 points.
- If there are 2 survivors, each receives 6 points.
- If there are 3 survivors, each receives 4 points.
- If there are 4 survivors, each receives 3 points.
- If there are 5 or 6 survivors, each receives 2 points.
- If there are 7 survivors, each receives 1 point.
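
The point table above, written out as a Java function (a direct transcription of the rules stated in this call):

/**
 * Points awarded to a single player, given the game outcome, exactly as
 * stated in the scoring rules above.
 */
public final class DiplomacyScoring {

    /** Points for a solo victory: 12 for the winner, 0 for everyone else. */
    static int soloPoints(boolean isWinner) {
        return isWinner ? 12 : 0;
    }

    /** Points in a draw, per player; eliminated players get 0. */
    static int drawPoints(boolean survived, int survivors) {
        if (!survived) return 0;
        switch (survivors) {
            case 2:  return 6;
            case 3:  return 4;
            case 4:  return 3;
            case 5:
            case 6:  return 2;
            case 7:  return 1;
            default: throw new IllegalArgumentException("survivors must be 2..7");
        }
    }
}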

If 2 or more players in a group end with an equal number of points, the total number of Supply Centers conquered in all games is used as a tiebreaker. If players are still tied, or if the difference between the players is too small to be considered statistically significant, the organization may decide that more games will be played.

The final results will be announced at an AI conference, probably IJCAI 2017. The teams of the top performing agents will be notified, and it is expected that they will have a representative attending the conference. They will have the opportunity to give a brief presentation describing their agent.



3- ANAC Human Agent Negotiation League
Motivation:

The Human-Agent Negotiation (HAN) competition is proposed in order to further explore the strategies, nuances, and difficulties in creating realistic and efficient agents whose primary purpose is to negotiate with humans. Previous work on human-agent negotiation has revealed the importance of several features not commonly present in agent-agent negotiation, including retreatable and partial offers, emotion exchange, preference elicitation strategies, favors and ledgers behavior, and myriad other topics. To understand these features and better create agents that use them, this competition is designed to be a showcase for the newest work in the negotiating agent community.

To gauge interest for a competition on this track, we are asking researchers to reply by Friday, October 21st, 2016.

Summary:

The HAN competition will involve each author or group of authors submitting an agent that will be tested against human subjects in a study run through the University of Southern California. The subject pool will be drawn from the standard populace available on Amazon's Mechanical Turk (MTurk) service, with standard filtering of ineligible participants (see Subject Selection, below).

All agents must be compliant with the IAGO (Interactive Arbitration Guide Online) framework and API, which will allow standardization of the agents and efficient running of subjects on MTurk. Agents will all be run on the same multi-issue bargaining task or set of tasks, examples of which are included below (Domain Examples).

Agents will be allowed to communicate on several channels, including a set of natural language utterances that have been pre-selected and curated by the ANAC committee. Other channels include the exchange of offers through visual cues and natural language, preference statements, and emotional displays.


IAGO API:

IAGO is a platform developed by Mell and Gratch at the University of Southern California. It is intended to serve as a testbed for Human-Agent negotiation specifically. IAGO is a web-based servlet hosting system that provides data collection and recording services, a human-usable HTML5 UI, and an API for designing human-like agents.

Full documentation of IAGO is available from the download site at http://people.ict.usc.edu/~mell/IAGO. A brief summary is included here.

All agents are capable of using the API to send Events. Events are interpreted by the UI in preset ways that allow a human user to interpret an agent’s intentions. Human users also generate Events that are passed to the agent developer to interpret as desired. Example Events include:

SEND_MESSAGE – sends a natural language utterance to be displayed on the chat log. Agents may send any language they wish, while human participants are restricted to sending from a preset list of utterances.

SEND_OFFER – sends an encoded offer for the multi-issue bargaining task wherein all items are assigned to either the human player, the agent, or an “undecided” section of the offer table. Also sends a pre-coded, descriptive message when sent from the agent to the human player.

SEND_EXPRESSION – sends an emoticon (either Happy, Angry, Surprised, or Sad) to the chat log, and also briefly shows the corresponding emotion on the visual avatar of the agent.

OFFER_IN_PROGRESS – provides information that the other player is currently crafting an offer. Must be explicitly sent by the agent developer to the human player.

All Events may be sent with a delay, to allow chaining of related events (for example, an agent designer could send a message, then wait 2 seconds, then follow up with an offer and an expression simultaneously). Flood protection will prevent messages from being sent too frequently.
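To illustrate the chaining pattern, here is a sketch in Java. The helper interface and payload encodings are hypothetical, not the actual IAGO API; consult the IAGO documentation for the real Event classes and signatures.

/**
 * Sketch of chaining IAGO-style Events with delays (hypothetical helper
 * names, NOT the actual IAGO API). The point is the pattern: a message
 * first, then, after a pause, an offer and an expression together.
 */
public class EventChainSketch {

    // Hypothetical stand-in for whatever send mechanism IAGO exposes.
    interface EventSender {
        void send(String eventType, String payload, int delayMillis);
    }

    static void greetThenOffer(EventSender out) {
        out.send("SEND_MESSAGE", "It is important that we both are happy with an agreement.", 0);
        // Wait 2 seconds, then send the offer and the expression simultaneously.
        out.send("SEND_OFFER", "oil:3-agent,iron:2-human,rest:undecided", 2000);  // hypothetical encoding
        out.send("SEND_EXPRESSION", "Happy", 2000);
    }
}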

Further detail may be found in the IAGO documentation.

Subject Selection and Data Treatment:

Competition subject participants will be selected from the MTurk subject pool. Subjects will be adults in the US (18 years or older), and will assert that they are permanent residents of the US (this will be verified with IP address tracking). Restriction to the US will be done to reduce cross-cultural effects. Each agent will be tested against 25 participants. Participants will not be re-used or be matched against more than one agent.

Because MTurk participants will be US-restricted and the competition's utterance set uses natural language statements, participants will also be asked to affirm that their first language is English.

Basic demographic information will be collected from subjects, and they may be asked a set of verification questions/attention checks to ensure they comprehend and are engaged in the negotiation. Subjects who fail these questions will be removed from the competition and the resulting data set.

The data set collected by the competition organizers will be released to all agent developers/researchers, as with all submitted source code. Researchers not wishing to release source code should contact the organizers directly.

Competition Winners and Evaluation:

A set of prizes will be awarded to the winners of the competition in up to three categories.

The first category will be the High Scoring Agent category. The winner will be determined by the agent that, at the end of the bargaining time, has achieved the highest score. No weight will be placed on the human’s score.

The second category will be the Combined Value Category. The winner will be determined by the agent that, at the end of the bargaining time, has achieved the highest combined score between its own points and the human player’s points.

The final category will be the Agent Likeability Category. The winner will be determined by the agent that, following the conclusion of the negotiation and a subsequent survey, rates highest on user feedback questions, such as:

I would use the system again in the future.
I cannot recommend this system to others.
I think that I would like to use this system frequently.
I liked my negotiation partner.
I felt like I could trust my negotiation partner.

Domain Examples:

We present here two example domains. A domain similar to these will be selected as the official challenge to the community.

All challenges this year are multi-issue bargaining tasks, which means both the agent and the human participant will negotiate over the same set of items. Items may have differing values to each side. A “full offer” means that all items are assigned to either the agent or the human participant. A “partial offer” means that some items remain on the table and undecided. No offer is considered binding until both players accept the same full offer.
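One way to picture the offer table is the sketch below (a hypothetical representation, not IAGO's internal one): every item of every issue is assigned to the agent, the human, or left undecided, and an offer is "full" exactly when no item remains undecided.

import java.util.EnumMap;
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of an offer table for the multi-issue bargaining task
 * (hypothetical representation, not IAGO's internal one).
 */
public class OfferTableSketch {

    enum Owner { AGENT, HUMAN, UNDECIDED }

    /** issue name -> how many of its items each side currently holds */
    private final Map<String, EnumMap<Owner, Integer>> table = new HashMap<>();

    void setCounts(String issue, int agent, int human, int undecided) {
        EnumMap<Owner, Integer> counts = new EnumMap<>(Owner.class);
        counts.put(Owner.AGENT, agent);
        counts.put(Owner.HUMAN, human);
        counts.put(Owner.UNDECIDED, undecided);
        table.put(issue, counts);
    }

    /** A full offer assigns every item; a partial offer leaves some undecided. */
    boolean isFullOffer() {
        return table.values().stream().allMatch(c -> c.get(Owner.UNDECIDED) == 0);
    }
}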

A negotiation will only end when such a full offer is accepted or the 10-minute time limit for the negotiation has expired. Human participants will be shown a warning when only 1 minute remains. Agents will have access to the current negotiation time at all points, accurate to within approximately 5 seconds. If time expires with no full offer accepted, each player takes points equal to their respective Best Alternative To Negotiated Agreement (BATNA).
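Put as a one-line rule (illustrative only; actual scoring is handled by the IAGO framework): each player's final score is the value of the accepted full offer if one was reached, and their BATNA otherwise.

/**
 * Sketch of how the end conditions above translate into points
 * (illustrative only; actual scoring is done by the framework).
 */
public final class FinalScoreSketch {
    static int finalScore(boolean fullOfferAccepted, int offerValueForPlayer, int batna) {
        return fullOfferAccepted ? offerValueForPlayer : batna;
    }
}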

Note that the IAGO API allows agent designers to read the natural language descriptions of the issues at runtime (e.g., “Issue1” can be understood to be something like “Lumber” or “Luxury Cars”). However, agents will make use of domain-agnostic calls.

Domain Example 1:

This challenge is a simple multi-issue bargaining task over resources between two countries. There will be four distinct resources, with five items in each category. The items will have images and descriptions identifying them as either "Oil", "Iron", "Foodstuffs" or "Lumber". The human player will assign a value of 4 points to each Oil, 3 points to each Iron, 2 points to each Lumber, and 1 point to each Foodstuff. The agent player will assign a value of 4 points to each Foodstuff, 3 points to each Lumber, 2 points to each Iron, and 1 point to each Oil. Each player's BATNA is equal to the value of one of their highest-valued items (4, for both the human and the agent).

Domain Example 2:

This challenge is a smaller task with greater disparity in values and more unknowns for the agent player. The human and the agent take on roles as partners at an estate sale. There are three distinct issues. The first issue is "Luxury Cars", and there are 6 items in this category. The second issue is "Famous Paintings", with 6 items as well. The final issue is "Mansion", with only 1 item. The agent assigns a value of 5 points to each Luxury Car, 3 points to each Famous Painting, and 8 points to the Mansion. The human player assigns 6 points to each Luxury Car, 2 points to each Famous Painting, and 8 points to the Mansion. This results in the same total for both sides (56 points if one side received every item) in this "mostly distributive" task. The agent and the human both have a BATNA equal to 20 points.

Note that in both domains, the human's point values and BATNA will NOT be revealed to the agent designers prior to the competition.

Natural Language Utterances

It is important that we both are happy with an agreement.
I gave a little here; you give a little next time.
We should try to split things evenly.
We should each get our most valuable item.
Accept this or there will be consequences.
Your offer sucks.
This is the last offer. Take it or leave it.
This is the very best offer possible.
I can’t go any lower than this.
We should try harder to find a deal that benefits us both.
There’s hardly any time left to negotiate!

Additional Rules

Competition participants will be given a test scenario to practice their agents with. However, to prevent hard-coding preference data into agents, a different set of utilities will be used for the actual competition.

There will be no fewer than 3 distinct issues and no more than 5. Each issue will have fewer than 20 items.

Issue utilities will adhere to the following rule:

\sum_{i=1}^{k} n_i \cdot v_i^{\mathrm{agent}} = \sum_{i=1}^{k} n_i \cdot v_i^{\mathrm{human}}

where k is the total number of issues, n_i is the number of items in issue i, and v_i^{agent} and v_i^{human} are the per-item values the agent and the human assign to issue i.

Succinctly, this means that the total for each side would be the same if that side got every item.
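
As a quick sanity check, the rule holds for both example domains above; the following snippet recomputes the grand totals (items per issue times per-item value, summed over the k issues).

/**
 * Checks the utility rule against the two example domains: the grand total
 * must be the same for both sides.
 */
public final class ParityCheck {

    static int grandTotal(int[] itemCounts, int[] perItemValues) {
        int total = 0;
        for (int i = 0; i < itemCounts.length; i++) {   // i ranges over the k issues
            total += itemCounts[i] * perItemValues[i];
        }
        return total;
    }

    public static void main(String[] args) {
        // Domain Example 1: Oil, Iron, Lumber, Foodstuffs; 5 items each.
        int[] counts1 = {5, 5, 5, 5};
        System.out.println(grandTotal(counts1, new int[]{4, 3, 2, 1}));   // human: 50
        System.out.println(grandTotal(counts1, new int[]{1, 2, 3, 4}));   // agent: 50

        // Domain Example 2: 6 Luxury Cars, 6 Famous Paintings, 1 Mansion.
        int[] counts2 = {6, 6, 1};
        System.out.println(grandTotal(counts2, new int[]{5, 3, 8}));      // agent: 56
        System.out.println(grandTotal(counts2, new int[]{6, 2, 8}));      // human: 56
    }
}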

It is strictly forbidden to use any technique by which an agent stores information between participants. This includes methods by which the agent may learn preferences in one game and then subsequently passes that information (through external server communication or otherwise) back to itself in future games.

All 25 participants are to be treated as fresh instances against which the same agent will be run.

Note: Participation in this competition is done in good spirit and for the furtherance of academic knowledge. Attempts to circumvent the rules described herein or described by the ANAC organizers will not be tolerated.

Reference

Mell, J., Gratch, J. (2016) "IAGO: Interactive Arbitration Guide Online", In Proceedings of the 2016 International Conference on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems.

Any updates can be found on the main website for IAGO.
