Stimulated by emerging applications, such as those powered by the Internet of Things, critical infrastructure networks, and security games, intelligent agents commonly leverage different forms of optimization and/or learning to solve complex problems. The goal of the workshop is to provide researchers with a venue to discuss techniques for tackling a variety of multi-agent optimization problems. We seek contributions in the general area of multi-agent optimization, including distributed optimization, coalition formation, optimization under uncertainty, winner determination algorithms in auctions, and algorithms to compute Nash and other equilibria in games.
This year, the workshop will have a special focus on contributions at the intersection of optimization and learning. For example, agents that use optimization often employ machine learning to predict unknown parameters appearing in their decision problems; conversely, machine learning techniques may be used to improve the efficiency of optimization. While submissions across the spectrum of multi-agent optimization are welcome, contributions at the intersection with learning are especially encouraged.
This workshop invites work from different strands of the multi-agent systems community that pertains to the design of algorithms, models, and techniques for multi-agent optimization and learning problems, or for problems that can be effectively solved by adopting a multi-agent framework. The workshop is of interest both to researchers investigating applications of multi-agent systems to optimization problems in large, complex domains and to those examining optimization and learning problems that arise in systems comprised of many autonomous agents.
In so doing, this workshop aims to provide a forum for researchers to discuss common issues that arise in solving optimization and learning problems in different areas, to introduce new application domains for multi-agent optimization techniques, and to elaborate common benchmarks to test solutions.
Topics
The workshop organizers invite paper submissions on the following (and related) topics:
- Optimization for learning agents
- Learning for multiagent optimization problems
- Distributed constraint satisfaction and optimization
- Winner determination algorithms in auctions
- Coalition formation algorithms
- Algorithms to compute Nash and other equilibria in games
- Optimization under uncertainty
- Optimization with incomplete or dynamic input data
- Algorithms for real-time applications
- General-purpose computation on GPUs (GPGPU)
- Multi-core and many-core computing
- Cloud, distributed and grid computing
Finally, the workshop welcomes papers that describe the release of benchmarks and data sets that the community can use to solve fundamental problems of interest, including machine learning and optimization for health systems and urban networks, to mention but a few examples.
The workshop will be a one-day meeting. It will include a number of (possibly parallel) technical sessions, a virtual poster session where presenters can discuss their work with the aim of further fostering collaborations, and multiple invited speakers covering crucial challenges for the field of multiagent optimization and learning.
Attendance
Attendance is open to all. At least one author of each accepted submission must be present at the workshop.
- March 17, 2021 – Submission Deadline [Extended]
- April 17, 2021 – Acceptance notification
- April 30, 2021 – AAMAS/IJCAI Fast Track Submission Deadline
- May 1, 2021 – AAMAS/IJCAI Fast Track Acceptance Notification
- May 4, 2021 – Workshop Date
Submission URL: https://easychair.org/conferences/?conf=optlearnmas21
- Technical Papers: Full-length research papers of up to 8 pages (excluding references and appendices) detailing high quality work in progress or work that could potentially be published at a major conference.
- Short Papers: Position or short papers of up to 4 pages (excluding references and appendices) that describe initial work or the release of privacy-preserving benchmarks and datasets on the topics of interest.
Fast Track (Rejected AAMAS or IJCAI papers)
Rejected AAMAS or IJCAI papers with *average* scores of at least 5.0 may be submitted to OptLearnMAS along with the previous reviews and scores and an optional letter indicating how the authors have addressed the reviewers' comments.
Please use the submission link above and indicate that the submission is a resubmission of a rejected AAMAS/IJCAI paper. The OptLearnMAS submission, previous reviews, and optional letter must be compiled into a single PDF file.
These submissions will not undergo the regular review process; instead, they will receive a light review performed by the chairs and will be accepted if the previous reviews are judged to meet the workshop's standards.
All papers must be submitted in PDF format, using the AAMAS-21 author kit.
Submissions should include the name(s), affiliations, and email addresses of all authors.
Submissions will be refereed on the basis of technical quality, novelty, significance, and clarity. Each submission will be thoroughly reviewed by at least two program committee members.
Submissions of papers rejected from the AAMAS 2021 and IJCAI 2021 technical programs are welcome.
For questions about the submission process, contact the workshop chairs.
All times are in British Summer Time (UTC+1).
Invited talks will be live-streamed (recordings available). Contributed talks are pre-recorded and accessible at any time (click the play button next to the associated paper). There will be additional Q&A and discussion after each talk.
|Time||Talk / Presenter|
|Session 1: Distributed Optimization -- Session chair: Gauthier Picard|
|11:05||Contributed Talk: Latency-Aware Local Search for Distributed Constraint Optimization|
|11:20||Contributed Talk: A Generic Agent Model Towards Comparing Resource Allocation Approaches to On-demand Transport with Autonomous Vehicles|
|11:35||Contributed Talk: Distributed Optimization via Integration of Local Models|
|12:00||Invited Talk by Roie Zivan|
|Session 2: Federated Learning and Reinforcement Learning -- Session chair: Ferdinando Fioretto|
|13:20||Contributed Talk: Incentive Mechanism Design for Federated Learning: Hedonic Game Approach|
|13:35||Contributed Talk: Privacy-Preserving and Accountable Multi-agent Learning|
|13:50||Contributed Talk: Distributed Q-Learning with State Tracking for Multi-agent Networked Control|
|14:05||Contributed Talk: PettingZoo: Gym for Multi-Agent Reinforcement Learning|
|14:30||Invited Talk by Long Tran-Thanh|
|Session 3: Reinforcement Learning -- Session chair: Harel Yedidsion|
|15:30||Contributed Talk: Health-Informed Policy Gradients for Multi-Agent Reinforcement Learning|
|15:45||Contributed Talk: No More Hand-Tuning Rewards: Masked Constrained Policy Optimization for Safe Reinforcement Learning|
|16:00||Contributed Talk: Multiplayer Support for the Arcade Learning Environment|
|16:15||Contributed Talk: Multi-Agent Routing and Scheduling through Coalition Formation|
|16:30||Invited Talk by Matthew Gombolay|
|Session 4: Games and Online Learning -- Session chair: Bryan Wilder|
|17:45||Contributed Talk: Learning in Matrix Games can be Arbitrarily Complex|
|18:00||Contributed Talk: Rational inductive agents|
|18:15||Contributed Talk: Efficient Competitions and Online Learning with Strategic Forecasters|
|18:30||End of Workshop|
- Latency-Aware Local Search for Distributed Constraint Optimization
Ben Rachmut; Roie Zivan; William Yeoh
- A Generic Agent Model Towards Comparing Resource Allocation Approaches to On-demand Transport with Autonomous Vehicles
Alaa Daoud; Flavien Balbo; Paolo Gianessi; Gauthier Picard
- Distributed Optimization via Integration of Local Models
Fernanda N. T. Furukita; Fernando J. M. Marcellino; Jaime Sichman
- Incentive Mechanism Design for Federated Learning: Hedonic Game Approach
- Privacy-Preserving and Accountable Multi-agent Learning
Anudit Nagar; Cuong Tran; Ferdinando Fioretto
- Distributed Q-Learning with State Tracking for Multi-agent Networked Control
Hang Wang; Sen Lin; Hamid Jafarkhani; Junshan Zhang
- PettingZoo: Gym for Multi-Agent Reinforcement Learning
Justin K. Terry; Benjamin Black; Mario Jayakumar; Ananth Hari; Ryan Sullivan; Luis Santos; Clemens Dieffendahl; Niall L. Williams; Yashas Lokesh; Caroline Horsch; Praveen Ravi
- Health-Informed Policy Gradients for Multi-Agent Reinforcement Learning
Ross E. Allen; Jayesh K. Gupta; Jaime Pena; Yutai Zhou; Javona White Bear; Mykel J. Kochenderfer
- No More Hand-Tuning Rewards: Masked Constrained Policy Optimization for Safe Reinforcement Learning
Stef Van Havermaet
- Multiplayer Support for the Arcade Learning Environment
Justin Terry; Benjamin Black; Luis Santos
- Multi-Agent Routing and Scheduling through Coalition Formation
Luca Capezzuto; Danesh Tarapore; Sarvapali Ramchurn
- Learning in Matrix Games can be Arbitrarily Complex
Gabriel Andrade; Rafael Frongillo; Georgios Piliouras
- Rational inductive agents
Caspar Oesterheld; Abram Demski; Vincent Conitzer
- Efficient Competitions and Online Learning with Strategic Forecasters
Rafael Frongillo; Robert Gomez; Anish Thilagar; Bo Waggoner
Interactive Learning of Coordination Strategies for Robot Teams (TBA) by Matthew Gombolay (Georgia Tech)
Abstract: Resource scheduling and optimization is a costly, challenging problem that affects almost every aspect of our lives. From healthcare to manufacturing, deciding which workers should complete which tasks at each moment in time to maximize efficiency, while adhering to upper- and lower-bound temporospatial constraints, is an NP-hard combinatorial optimization problem. To create automated resource optimization algorithms, industry typically employs armies of consultants to solicit knowledge from domain experts and codify that knowledge in the form of ad hoc scheduling heuristics. This process is cost-intensive, does not scale, and suffers from inter-expert disagreement. In my talk, I will share exciting new research we are pioneering in interactive machine learning methods and deep graph attention networks to (1) automatically learn the scheduling strategies of domain experts without the need for manual knowledge solicitation; (2) express this knowledge in an interpretable form while teasing out inter-expert disagreement; and (3) scale beyond the expert to set a new state of the art in the optimal coordination of large-scale teams.
Is There Life Beyond MinMax? Multi-Agent Learning with Strategic Agents (TBA) by Long Tran-Thanh (University of Warwick)
Abstract: Optimisation has been at the core of many machine learning (ML) problems. In particular, most standard ML techniques can be cast as searching for a minimum (or a maximum) of an objective function (e.g., empirical risk minimisation in offline ML, or regret minimisation in its online counterpart). With the rise of multi-agent learning paradigms, such as federated learning, self-play training (i.e., the agent learns by playing against itself), and collaborative multi-agent reinforcement learning, there has been a shift from minimisation problems to minimax optimisation in recent years. This shift was mainly influenced by the appearance of generative adversarial networks (GANs), which use a two-player zero-sum game model to learn the underlying generative model of the data (one player aims to minimise an objective function while the other tries to counteract it, hence the minimax formulation).
While the minimax optimisation framework still poses interesting and difficult challenges (convergence, stability, etc.), it cannot capture all multi-agent learning settings, as it assumes (quasi) full cooperation between agents. In this talk, I will discuss a number of problem settings beyond this minimax framework that can be useful for multi-agent learning. These include last-round/last-iterate convergence in non-cooperative multi-agent learning, and efficient learning with limited verifications against strategic manipulators. The common thread among them is that agents no longer have to be fully cooperative but can instead follow strategic and selfish behaviours.
Applying Multi-Agent Optimization to Realistic Scenarios, including IoT Applications (TBA) by Roie Zivan (Ben Gurion University)
“If you don’t find realistic applications that your models and algorithms are relevant for, you will not have a future.” This statement was made by one of the leaders in the research of distributed optimization models and algorithms more than a decade ago. For years, it seemed that our field was indeed losing the interest of the community until…
Recently, thanks to advances in technology that allow computers, vehicles, robots, and even simple devices like lamps and curtains to perform computation and communicate with one another, people have come to expect such devices to interact in order to optimize their actions. Suddenly, the effort spent over the last two decades studying and designing distributed optimization models and algorithms is paying off. We find ourselves involved in several realistic application implementations, with industry, health, and security entities as partners.
I will present existing models for representing realistic applications as multi-agent optimization problems, algorithms designed to solve them, and adjustments that need to be made in uncertain and dynamic environments. I will conclude with the challenges that I believe we as a community need to face in the near future.
- Ana L. C. Bazzan - Universidade Federal do Rio Grande do Sul
- Filippo Bistaffa - IIIA-CSIC
- Alessandro Farinelli - Computer Science Department, Verona University
- Tal Grinshpoun - Ariel University
- Md. Mosaddek Khan - University of Dhaka
- Rene Mandiau - LAMIH, Université de Valenciennes
- Zinovi Rabinovich - Nanyang Technological University
- Juan Antonio Rodriguez Aguilar - IIIA-CSIC
- Marius Silaghi - Florida Institute of Technology
- William Yeoh - Washington University in St. Louis
- Makoto Yokoo - Kyushu University
- Roie Zivan - Ben Gurion University of the Negev
- Maryam Tabar - Penn State University
- Hangzhi Guo - Penn State University
- Archie Chapman - University of Queensland
- Harel Yedidsion - University of Texas at Austin
- Pierre Rust - Orange Labs, France
- Mohamed Wahbi - Collins Aerospace
- Rica Gonen - Open University of Israel