Season 1 - Episode 02: Abstract Reasoning and Peer Reviews

In this week's episode we take a look at abstract reasoning within neural networks, as well as discussing the current review system surrounding ML papers.

We are also very happy to have Jacob Buckman join us on the podcast this week. Jacob is currently undertaking a PhD at Mila, having previously been a researcher at Google Brain with Sara Hooker. His main research interests lie in deep reinforcement learning, with a particular focus on sample efficiency.

Please let us know who you thought presented the most underrated paper using the form below:

https://forms.gle/97MgHvTkXgdB41TC8

Also, let us know if you have any suggestions for future papers or guests:

https://docs.google.com/forms/d/e/1FAIpQLSeWoZnImRHXy8MTeBhKA4bxRPVVnVXAUb0bLIP0bQpiTwX6uA/viewform

Links to the papers:

"Conference Reviewing Considered Harmful" - http://pages.cs.wisc.edu/~dusseau/Classes/CS739/anderson-model.pdf
"Measuring Abstract Reasoning in Neural Networks" - http://proceedings.mlr.press/v80/santoro18a/santoro18a.pdf?fbclid=IwAR2rqCYu_rorfiVicYXx4EnGFZ4Y-9uAh9936YxEEwGxY-5MGGbnm9CMfXI

Follow us on Spotify and Apple Podcasts as well!

Underrated ML Twitter: https://twitter.com/underrated_ml

Jacob Buckman Twitter: https://twitter.com/jacobmbuckman
