Objective
In recent years, there have been ever-increasing demands for data-intensive scientific research. Routine use of digital sensors, high-throughput experiments, and intensive computer simulations has created a data deluge, imposing new challenges on the scientific communities that must process and analyze these data. This is especially challenging for studies that involve Bayesian methods, which typically require computationally intensive Monte Carlo algorithms for their implementation. As a result, although Bayesian methods provide a robust and principled framework for analyzing data, their relatively high computational cost on Big Data problems has limited their application. The objective of this workshop is to discuss the advantages of Bayesian inference in the age of Big Data and to introduce new scalable Monte Carlo methods that address the computational challenges in Bayesian analysis. This is a follow-up to our recent workshop on Bayesian Inference for Big Data at the University of Oxford: BIBiD 2015. The workshop will consist of invited talks and a poster session. Topics of interest include (but are not limited to):
- Advantages of Bayesian methods in the age of Big Data
- Distributed/parallel Markov Chain Monte Carlo (MCMC)
- MCMC using mini-batches of data (see the illustrative sketch after this list)
- MCMC using surrogate functions
- MCMC using GPU computing
- Precomputing strategies
- MCMC and variational methods
- Geometric methods in sampling algorithms
- Hamiltonian Monte Carlo
- Sequential Monte Carlo
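To give a concrete flavor of one of the topics above, mini-batch MCMC, the sketch below implements stochastic gradient Langevin dynamics (Welling and Teh, 2011), which replaces the full-data gradient of the log posterior with an unbiased mini-batch estimate and injects Gaussian noise at each step. This is purely an illustrative example, not workshop material; the function and argument names (grad_log_prior, grad_log_lik, step_size, batch_size) are assumed placeholders.

```python
import numpy as np

def sgld(theta0, data, grad_log_prior, grad_log_lik,
         step_size, n_iters, batch_size, rng=None):
    """Draw approximate posterior samples with stochastic gradient Langevin dynamics.

    data is assumed to be a NumPy array of observations; grad_log_prior(theta)
    and grad_log_lik(theta, x) are user-supplied gradient functions.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = len(data)
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_iters):
        # Sample a mini-batch and form an unbiased estimate of the
        # full-data gradient of the log posterior.
        batch = data[rng.choice(N, size=batch_size, replace=False)]
        grad = grad_log_prior(theta) + (N / batch_size) * sum(
            grad_log_lik(theta, x) for x in batch
        )
        # Langevin update: half-step along the noisy gradient plus Gaussian noise
        # with variance equal to the step size.
        noise = rng.normal(scale=np.sqrt(step_size), size=theta.shape)
        theta = theta + 0.5 * step_size * grad + noise
        samples.append(theta.copy())
    return np.array(samples)
```

In practice the step size is decreased over iterations (or kept small and constant), trading off discretization bias against mixing speed; several of the talks below discuss this trade-off and related stochastic gradient MCMC schemes.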
Schedule
The workshop will take place in Room 513.
- 0900-0940: "Adventures on the efficient frontier" by Andrew Gelman (Columbia University)
- 0940-1020: "A framework for devising stochastic gradient MCMC algorithms" by Emily Fox (University of Washington)
- 1020-1050: coffee break
- 1050-1130: "Improving the performance of MCMC by breaking detailed balance" by Andrew Duncan (Imperial College London)
- 1130-1210: "Accelerating exact MCMC with subsets of data" by Ryan Adams (Harvard University)
- 1210-1400: lunch break
- 1400-1440: "Scaling and generalizing variational inference" by David Blei (Columbia University)
- 1440-1530: panel (*)
- 1530-1600: coffee break
- 1600-1640: "Kernel methods for adaptive Markov Chain Monte Carlo" by Arthur Gretton (University College London)
- 1640-1720: "Forest resampling for distributed sequential Monte Carlo" by Anthony Lee (University of Warwick)
- 1720-1830: poster session
Posters
- Christian A. Naesseth (Linköping U.) and Fredrik Lindsten (U. Cambridge)
- Rémi Bardenet (CNRS, U. Lille) and Odalric-Ambrym Maillard (INRIA, U. Paris-Sud)
- Aidan Boland (University College Dublin)
- Ali Zaidi (Microsoft)
- Bo Dai, Niao He, Hanjun Dai, and Le Song (Georgia Tech)
- Maxim Rabinovich and Aaditya Ramdas (UC Berkeley)
- Umut Şimşekli, Roland Badeau, Gaël Richard (Télécom ParisTech) and A. Taylan Cemgil (Boğaziçi University)
- Nilesh Tripuraneni and Zoubin Ghahramani (U. Cambridge)
- Ingmar Schuster (U. Leipzig), Heiko Strathmann (University College London), Brooks Paige, Dino Sejdinovic (U. Oxford)
- Yutian Chen (Google DeepMind)
- Jackson Gorham (Stanford U.)
- Heiko Strathmann (University College London)
Details
The workshop will be held at NIPS 2015 in Montreal on the 12th of December. It will feature invited talks and contributed posters; poster abstracts may be submitted until the 10th of October.
The workshop is endorsed by the International Society for Bayesian Analysis (ISBA).
Organizers
The following people are involved in organizing this workshop:
- Yee Whye Teh, University of Oxford (chair)
- Max Welling, University of Amsterdam (chair)
- Christophe Andrieu, University of Bristol
- Arnaud Doucet, University of Oxford
- Pierre Jacob, Harvard University
- Sebastian Vollmer, University of Oxford
- Babak Shahbaba, University of California, Irvine
- Thibaut Lienart, University of Oxford
Previous Workshop
This is a follow-up to our recent workshop at the University of Oxford: BIBiD 2015.