PhD Proposal Talk, Khoury College of Computer Sciences, 208 WVH (TBD)
In the past decade, data have grown faster than ever across many domains. For example, in survival analysis, there are more than 60 million Medicare beneficiaries across 40 thousand ZIP Code areas in the United States from 2000 to 2012, amounting to 5.7 billion person-months of follow-up. In multi-label classification, Wikipedia data contain more than 500 thousand labels and millions of features and instances. For many such datasets, machine learning models face unprecedented challenges in effectiveness and in time and memory efficiency. This thesis aims to develop learning models that scale well to large data while maintaining or even improving their performance by exploiting the inherent structures of the dataset and the learning algorithm. By addressing two key questions for each model, 1) how to adapt to the intrinsic structure of the dataset and 2) how to exploit the special design of the learning formulation and algorithm, we develop state-of-the-art algorithms for both regression and classification problems and scale them to multiple real-world datasets, such as millions of Medicare enrollees in survival analysis, and Wikipedia article and Amazon product categorization in multi-label classification.