Description
Deep neural networks have achieved outstanding success in many tasks, ranging from computer vision to natural language processing and robotics. However, even models trained on internet-scale data fall short in their ability to understand the world around us and to continuously adapt to new tasks or environments. One prevailing approach is to train on massive, internet-scale datasets to cover diverse distributions; an alternative focuses on leveraging inductive biases to improve generalization. This talk will explore causality as an inductive bias in neural networks, examining its potential to enhance robustness and generalization, particularly in AI for Science applications such as gene regulatory network inference and materials discovery.