ICBINB 2022: I Can't Believe It's Not Better: Understanding Deep Learning Through Empirical Falsification
Link: https://sites.google.com/view/icbinb-2022/submit
| |||||||||||||||
Call For Papers
Much of deep learning research offers incremental improvements to state-of-the-art methods. In this workshop, we solicit papers that do not follow this narrow conception of science. In particular, we are interested in negative results that advance the understanding of deep learning through the empirical falsification of a credible hypothesis.
Submissions should take care to make explicit the motivating principles behind the hypothesis being tested, and the implications of the results in relation to those principles. We also encourage submissions that go a layer deeper and investigate the causes of the initial idea not working as expected. A good submission would allow the reader to positively answer the question "Did I reliably learn something about neural networks that I didn't know before?"

We invite submissions on the following topics:
- Examples of well-defined, reasonable hypotheses that were later empirically falsified.
- Negative scientific findings in a more general sense: methodologies or tools that gave disappointing results, especially if lessons can be learned from these results in hindsight.
- Meta deep learning research: for example, discussion of the role that empirical investigation, mathematical proof, or general deductive reasoning should play in deep learning. As a field, do we value certain types of research over others?
- Intersections between machine learning research and the philosophy of science in general.

Technical submissions may center on deep-learning-adjacent fields (causal DL, meta-learning, generative modelling, adversarial examples, probabilistic reasoning, etc.) or applications. Selected papers will optionally be included in a special issue of PMLR.