Optimization Algorithms for Subset Selection and Summarization in Large Data Sets
Tutorial at CVPR 2016
June 26th, 2:00 pm - 6:00 pm
Forum 15-16, Caesar’s Palace, Las Vegas, NV
TUTORIAL DESCRIPTION
The increasing amount of data in computer vision and other science and engineering fields requires robust tools and techniques to extract the relevant information from massive data collections. Subset selection addresses this challenge by casting the problem as an optimization that searches for a small set of high-quality, diverse items from a large collection; since the number of candidate subsets grows exponentially with the collection size, this in general leads to combinatorial, NP-hard problems that are extremely difficult to solve exactly. This tutorial will present techniques and recent advances in submodular optimization, convex and nonconvex optimization, sparse modeling and determinantal point processes to effectively address the problem of information selection and summarization. The presentation of the formulations, algorithms and theoretical foundations will be complemented with applications in computer vision, including video summarization, multiple pose estimation, learning nonlinear dynamic models, human activity segmentation, active learning and feature selection for visual data, and more.
TUTORIAL OUTLINE
Introduction to Subset Selection and Summarization
Subset selection and summarization in computer vision
Subset selection using features, similarities, kernels, etc.
Graph-cuts, facility location and maximum volume objectives for subset selection: diversity, coverage and information (see the facility-location sketch after this list)
Algorithms for subset selection: submodular optimization, determinantal point processes, sparse subset selection
Applications in data subset selection, video summarization, learning nonlinear dynamic models, active learning, pose estimation, etc.
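To make one of the objectives above concrete, the following is a minimal Python sketch of the facility-location function, which scores a candidate subset by crediting every item in the collection with its similarity to the closest selected representative; the kernel choice and toy data below are illustrative assumptions, not material from the tutorial itself.

import numpy as np

def facility_location(K, S):
    # Facility-location coverage value of a candidate subset S.
    # K is an (n, n) pairwise similarity matrix (here a toy RBF kernel);
    # each item is credited with its similarity to its best selected representative.
    if len(S) == 0:
        return 0.0
    return float(K[:, S].max(axis=1).sum())

# Toy usage on random 3-D features (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
print(facility_location(K, [0, 3]))

With nonnegative similarities this objective is monotone submodular, which is what makes the greedy algorithm sketched after the next outline block applicable to it.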
Submodularity for Subset Selection
Submodular functions: diversity, coverage and information
Submodularity preserving operations
Combinatorial optimizations with submodular structure
Algorithms for submodular optimization (see the greedy sketch after this list)
Applications in image summarization, feature selection, data subset selection, image segmentation, active / semi-supervised learning
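As an illustration of the algorithmic side of this block, here is a minimal sketch of the classic greedy algorithm of Nemhauser, Wolsey and Fisher for maximizing a monotone submodular function under a cardinality constraint, which attains a (1 - 1/e) approximation guarantee; the function names and the commented usage are assumptions tied to the earlier facility-location sketch, not code from the tutorial.

def greedy_max(f, n_items, budget):
    # Greedily maximize the set function f over the ground set {0, ..., n_items - 1},
    # adding at each step the item with the largest marginal gain until
    # `budget` items are selected.
    S, current = [], 0.0
    for _ in range(budget):
        candidates = [j for j in range(n_items) if j not in S]
        if not candidates:
            break
        gain, best = max((f(S + [j]) - current, j) for j in candidates)
        if gain <= 0:  # for monotone f this only triggers on zero gains
            break
        S.append(best)
        current += gain
    return S

# Hypothetical usage with the earlier facility-location sketch:
# summary = greedy_max(lambda S: facility_location(K, S), K.shape[0], budget=2)

Accelerated variants such as lazy greedy exploit the diminishing-returns property to skip most marginal-gain evaluations, which is what makes greedy selection practical on large collections.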
Determinantal Point Processes (DPPs) for Summarization
Subset selection with probabilistic models
Basics of DPPs (a short L-ensemble example follows this list)
Geometric intuitions
Learning the parameters of a DPP
Example applications and extensions
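As a small companion to the basics listed above, the sketch below evaluates the probability an L-ensemble DPP assigns to a subset, P(S) = det(L_S) / det(L + I); the RBF kernel and toy data are placeholder assumptions used only to show the computation.

import numpy as np

def dpp_log_prob(L, S):
    # Log-probability of subset S under the L-ensemble DPP with positive
    # semidefinite kernel L: P(S) = det(L_S) / det(L + I), where L_S is the
    # principal submatrix of L indexed by S.
    logdet_S = np.linalg.slogdet(L[np.ix_(S, S)])[1] if len(S) > 0 else 0.0
    logdet_Z = np.linalg.slogdet(L + np.eye(L.shape[0]))[1]
    return logdet_S - logdet_Z

# Toy example with an RBF kernel on random 2-D points: for pairs of equal
# quality, the more similar the two items, the smaller det(L_S) and hence
# the lower the probability, i.e. the DPP prefers diverse subsets.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
L = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
print(np.exp(dpp_log_prob(L, [0, 1])), np.exp(dpp_log_prob(L, [0, 4])))

The determinant of L + I normalizes over all 2^n subsets at once, which is what keeps exact likelihoods, marginals and sampling tractable for DPPs.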
Subset Selection for Video Summarization
State-of-the-art video summarization techniques
Video summarization as submodular optimization
Video summarization by learning mixtures of objectives (a sketch of such a mixture follows this list)
Datasets and evaluations of video summarization
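To connect the submodular-optimization and mixture-of-objectives items above, here is a minimal sketch of a summary objective written as a weighted mixture of submodular components; the component names, the hand-set weights and the reuse of the earlier greedy and facility-location sketches are illustrative assumptions, since in the learned setting the mixture weights are fit from human-created reference summaries.

def mixture_objective(weights, components):
    # F(S) = sum_k w_k * f_k(S): a weighted combination of submodular
    # component objectives (e.g. coverage, representativeness, uniformity).
    # A nonnegative combination of submodular functions is itself submodular,
    # so the greedy sketch above still applies to the mixture.
    def F(S):
        return sum(w * f(S) for w, f in zip(weights, components))
    return F

# Hypothetical usage: combine a coverage term (e.g. the facility-location
# sketch, with frame features defining K) with a hand-written uniformity
# term, then greedily pick 10 keyframes; the weights here are made up.
# F = mixture_objective([0.7, 0.3], [coverage, uniformity])
# summary = greedy_max(F, n_items=n_frames, budget=10)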
TARGET AUDIENCE
The intended audience includes academics, graduate students and industrial researchers interested in state-of-the-art data modeling and machine learning techniques for subset selection and summarization in large, high-dimensional datasets. Attendees with a mathematical and theoretical inclination will enjoy the course as much as those with a more practical bent.
SPEAKER BIOS
Ehsan Elhamifar is an Assistant Professor in the College of Computer and Information Science and is affiliated with the Department of Electrical and Computer Engineering at Northeastern University. Previously, he was a postdoctoral scholar in the Electrical Engineering and Computer Sciences Department at the University of California, Berkeley. He obtained his PhD in Electrical and Computer Engineering from the Johns Hopkins University in 2012. Dr. Elhamifar obtained two Master’s degrees, one in Electrical Engineering from Sharif University of Technology in Iran in 2006 and another in Applied Mathematics and Statistics from the Johns Hopkins University in 2010. He was a visiting researcher at Stanford University, University of Minnesota and Duke University for several months during 2011 and 2012. Prof. Elhamifar’s research areas are machine learning, computer vision, optimization and algorithms. He is broadly interested in developing efficient, robust and provable algorithms that can address challenges of complex and large-scale high-dimensional data. He works on applications of these tools in computer vision. Specifically, he uses tools from convex geometry and analysis, optimization, sparse and low-rank modeling, high-dimensional statistics and graph theory to develop algorithms and theory and applies them to solve real-world problems, including motion segmentation in videos, object and activity recognition, video and web summarization, active learning and more.
Jeff Bilmes is a professor in the Department of Electrical Engineering at the University of Washington, Seattle. Bilmes received his Ph.D. from the Computer Science Division of the Department of Electrical Engineering and Computer Science, University of California at Berkeley. Prof. Bilmes is a 2001 NSF CAREER award winner, a 2002 CRA Digital Government Fellow, a 2008 NAE Gilbreth Lectureship award recipient, and a 2012/2013 ISCA Distinguished Lecturer. Prof. Bilmes was a UAI (Conference on Uncertainty in Artificial Intelligence) program chair (2009) and then the general chair (2010). He is currently an action editor for JMLR. Prof. Bilmes’s primary interests lie in statistical modeling (particularly graphical model approaches) and signal processing for pattern classification, language processing, bioinformatics, machine learning, submodularity in combinatorial optimization and machine learning, and active and semi-supervised learning. He is particularly interested in temporal graphical models (or dynamic graphical models, which include HMMs, DBNs, and CRFs), in ways to design efficient algorithms for them, and in designing their structure so that they perform well as structured classifiers. Prof. Bilmes has also pioneered (starting in 2003) the development of submodularity within machine learning, and he received a best paper award at ICML 2013 and a best paper award at NIPS 2013 in this area. In 2014, Prof. Bilmes also received a most influential paper in 25 years award from the International Conference on Supercomputing, given to a paper on high-performance matrix optimization.
Alex Kulesza is a research scientist at Google. Dr. Kulesza’s research involves developing efficient models and algorithms for machine learning, including online learning, determinantal point processes, and spectral methods. He recently completed a postdoc at the University of Michigan with Satinder Singh and received his Ph.D. in 2012 from the Department of Computer and Information Science at the University of Pennsylvania, where he was advised by Ben Taskar and Fernando Pereira.
Michael Gygli is currently a PhD candidate working with Prof. Luc Van Gool in the Computer Vision Lab at ETH Zurich. His research lies in the area of human-centric image and video analysis. Specifically, his focus is on video summarization and highlight detection, and he uses submodular maximization techniques to create video summaries as well as for diversified image retrieval. Michael obtained his MSc in computer science from the University of Nice Sophia-Antipolis in 2012. He has published several papers at CVPR, ECCV and ICCV.