
NeurIPS WS: Optimization for ML 2019 : NeurIPS 2019 Workshop: Beyond First Order Methods in Machine Learning


Link: https://sites.google.com/site/optneurips19/
 
When Dec 13, 2019 - Dec 14, 2019
Where Vancouver, Canada
Submission Deadline Sep 20, 2019
Notification Due Sep 30, 2019
Final Version Due Oct 31, 2019
Categories    machine learning   optimization   higher-order methods
 

Call For Papers

Optimization lies at the heart of many exciting developments in machine learning, statistics and signal processing. As models become more complex and datasets get larger, finding efficient, reliable methods with provable guarantees is one of the primary goals in these fields.

In the last few decades, much effort has been devoted to the development of first-order methods. These methods enjoy a low per-iteration cost, have optimal complexity, are easy to implement, and have proven effective for most machine learning applications. First-order methods, however, have significant limitations: (1) they require careful hyper-parameter tuning, (2) they do not incorporate curvature information and are therefore sensitive to ill-conditioning, and (3) they are often unable to fully exploit the power of distributed computing architectures.

Higher-order methods, such as Newton, quasi-Newton and adaptive gradient descent methods, are extensively used in many scientific and engineering domains. At least in theory, these methods possess several attractive features: they exploit local curvature information to mitigate the effects of ill-conditioning, they avoid or reduce the need for hyper-parameter tuning, and they have enough concurrency to take advantage of distributed computing environments. Researchers have even developed stochastic versions of higher-order methods that achieve speed and scalability by incorporating curvature information in an economical and judicious manner. However, higher-order methods are often “undervalued.”
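As a toy illustration of the curvature point (a minimal sketch, not taken from the workshop materials; the quadratic objective, condition number, step size, and iteration count below are assumptions chosen purely for illustration), compare a fixed-step gradient iteration with a Newton iteration on an ill-conditioned quadratic:

import numpy as np

# Illustrative ill-conditioned quadratic: f(x) = 0.5 * x^T A x, minimizer at x = 0.
A = np.diag([1.0, 1000.0])            # condition number ~1000 (assumed for illustration)
x_gd = np.array([1.0, 1.0])           # gradient-descent iterate
x_nt = np.array([1.0, 1.0])           # Newton iterate

def grad(x):
    return A @ x                      # gradient of the quadratic

def hess(x):
    return A                          # Hessian of the quadratic (constant here)

eta = 1.0 / 1000.0                    # step size limited by the largest curvature
for _ in range(100):
    # First-order step: progress along the low-curvature direction is slow,
    # because the step size must be small enough for the high-curvature direction.
    x_gd = x_gd - eta * grad(x_gd)
    # Newton step: rescales the gradient by the local curvature,
    # which removes the effect of ill-conditioning (exact in one step for a quadratic).
    x_nt = x_nt - np.linalg.solve(hess(x_nt), grad(x_nt))

print("gradient descent:", x_gd)      # still far from 0 in the low-curvature direction
print("Newton:          ", x_nt)      # at the minimizer

This is only meant to make the ill-conditioning argument concrete; the trade-offs for large-scale, stochastic problems are exactly what the workshop discusses.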

This workshop will attempt to shed light on why this is the case. Topics of interest include --but are not limited to-- second-order methods, adaptive gradient descent methods, regularization techniques, as well as techniques based on higher-order derivatives. The workshop aims to bring machine learning and optimization researchers closer together, in order to facilitate a discussion of underlying questions such as the following:
- Why are higher-order methods not omnipresent?
- Why are higher-order methods important in machine learning, and what advantages can they offer?
- What are their limitations and disadvantages?
- How should (or could) they be implemented in practice?

Speakers:
- Coralia Cartis (Oxford University)
- Don Goldfarb (Columbia University)
- Elad Hazan (Princeton University)
- James Martens (DeepMind)
- Katya Scheinberg (Cornell University)
- Stephen Wright (UW - Madison)

Organizers:
- Albert S. Berahas (Lehigh University)
- Anastasios Kyrillidis (Rice University)
- Michael W. Mahoney (UC Berkeley)
- Fred Roosta (University of Queensland)

CALL FOR PAPERS
We welcome submissions to the workshop under the general theme of “Beyond First-Order Optimization Methods in Machine Learning”. Topics of interest include, but are not limited to,
- Second-order methods
- Quasi-Newton methods
- Derivative-free methods
- Distributed methods beyond first-order
- Online methods beyond first-order
- Applications of methods beyond first-order to diverse problem domains (e.g., training deep neural networks, natural language processing, dictionary learning, etc.)

We encourage submissions that are theoretical, empirical or both.

Submissions:
Submissions should be up to 4 pages excluding references, acknowledgements, and supplementary material, and should follow the NeurIPS format. The CMT-based review process will be double-blind to avoid potential conflicts of interest; submit at https://cmt3.research.microsoft.com/OPTNeurIPS2019/.

Accepted submissions will be presented as posters.

Important Dates:
Submission deadline: September 20, 2019 (23:59 ET)
Acceptance notification: September 30, 2019
Final version due: October 31, 2019

Selection Criteria:
All submissions will be peer reviewed by the workshop’s program committee. Submissions will be evaluated on technical merit, empirical evaluation, and compatibility with the workshop focus.
