Hongkun Yu

Authored Publications
Generating Representative Headlines for News Stories
Xiaotao Gu
Yuning Mao
Jiawei Han
Cong Yu
Daniel Finnie
Jiaqi Zhai
Nick Zukoski
The Web Conference 2020
Millions of news articles are published online every day, which can be overwhelming for readers to follow. Grouping articles that report the same event into news stories is a common way of assisting readers in their news consumption. However, it remains a challenging research problem to efficiently and effectively generate a representative headline for each story. Automatic summarization of a document set has been studied for decades, but few studies have focused on generating representative headlines for a set of articles. Unlike summaries, which aim to capture the most information with the least redundancy, headlines aim to capture, at short length, the information jointly shared by the story's articles and to exclude information that is too specific to any individual article. In this work, we study the problem of generating representative headlines for news stories. We develop a distant supervision approach to train large-scale generation models without any human annotation. This approach centers on two technical components. First, we propose a multi-level pre-training framework that incorporates a massive unlabeled corpus with a different quality-vs.-quantity balance at each level. We show that models trained within this framework outperform those trained on a purely human-curated corpus. Second, we propose a novel self-voting-based article attention layer to extract salient information shared by multiple articles. We show that models incorporating this layer are robust to potential noise in news stories and outperform existing baselines with or without noise. We can further enhance our model by incorporating human labels, and we show that our distant supervision approach significantly reduces the demand for labeled data.
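
To make the aggregation idea concrete, below is a minimal PyTorch sketch of how a self-voting article attention layer could score and combine article embeddings. It assumes each article has already been encoded into a fixed-size vector; the class name, the dot-product vote scoring, and the self-vote masking are illustrative assumptions, not the paper's released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfVotingArticleAttention(nn.Module):
    """Illustrative self-voting attention over per-article embeddings of one story."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)
        self.scale = hidden_dim ** -0.5

    def forward(self, article_embs: torch.Tensor) -> torch.Tensor:
        # article_embs: (num_articles, hidden_dim), one vector per article in the story.
        num_articles = article_embs.size(0)
        q = self.query(article_embs)
        k = self.key(article_embs)
        # Pairwise votes: how strongly article i endorses article j.
        votes = q @ k.transpose(0, 1) * self.scale                     # (N, N)
        # Ignore self-votes so salience reflects agreement with *other* articles.
        eye = torch.eye(num_articles, dtype=torch.bool, device=article_embs.device)
        votes = votes.masked_fill(eye, float("-inf"))
        # Salience of article j: total normalized vote it receives from its peers.
        salience = F.softmax(votes, dim=-1).sum(dim=0)                 # (N,)
        weights = salience / salience.sum()
        # Story representation: salience-weighted mix of the article embeddings.
        return weights.unsqueeze(0) @ article_embs                     # (1, hidden_dim)

# Example: aggregate five articles from one (random) news story.
story = torch.randn(5, 768)
layer = SelfVotingArticleAttention(768)
print(layer(story).shape)   # torch.Size([1, 768])

The weights implement the voting intuition: articles that many of their peers agree with contribute more to the story representation, which is what makes the aggregation robust to a few noisy or off-topic articles.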

MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices
Zhiqing Sun
Xiaodan Song
Renjie Liu
Yiming Yang
ACL (2020) (to appear)
Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds of millions of parameters. However, these models suffer from heavy model sizes and high latency, so they cannot be deployed to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model. Like the original BERT, MobileBERT is task-agnostic; that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning. Basically, MobileBERT is a thin version of $\text{BERT}_\text{LARGE}$, equipped with bottleneck structures and a carefully designed balance between self-attention and feed-forward networks. To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated $\text{BERT}_\text{LARGE}$ model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is 4.3$\times$ smaller and 5.5$\times$ faster than $\text{BERT}_\text{BASE}$ while achieving competitive results on well-known benchmarks. On the natural language inference tasks of GLUE, MobileBERT achieves a GLUE score of 77.7 (0.6 lower than $\text{BERT}_\text{BASE}$) and 62 ms latency on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering tasks, MobileBERT achieves a dev F1 score of 90.0/79.2 (1.5/2.1 higher than $\text{BERT}_\text{BASE}$).
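
As a rough illustration of the bottleneck idea and of layer-wise knowledge transfer, the PyTorch sketch below pairs a thin bottleneck transformer layer with a feature-map matching loss against a teacher's hidden states. The layer sizes (512 inter-block, 128 intra-block), the single feed-forward sub-block, and the helper names are assumptions for illustration, not the released MobileBERT architecture or training recipe.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BottleneckTransformerLayer(nn.Module):
    """Illustrative bottleneck transformer layer in the spirit of MobileBERT."""

    def __init__(self, hidden: int = 512, bottleneck: int = 128,
                 heads: int = 4, ffn_dim: int = 512):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)   # input bottleneck: project to the thin width
        self.attn = nn.MultiheadAttention(bottleneck, heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(bottleneck, ffn_dim),
            nn.GELU(),
            nn.Linear(ffn_dim, bottleneck),
        )
        self.up = nn.Linear(bottleneck, hidden)     # output bottleneck: project back up
        self.norm1 = nn.LayerNorm(bottleneck)
        self.norm2 = nn.LayerNorm(bottleneck)
        self.norm3 = nn.LayerNorm(hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden); self-attention and FFN run at the narrow width.
        h = self.down(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        h = self.norm1(h + attn_out)
        h = self.norm2(h + self.ffn(h))
        # Residual connection at the wide, inter-block width.
        return self.norm3(x + self.up(h))

def feature_map_transfer(student_hidden: torch.Tensor,
                         teacher_hidden: torch.Tensor) -> torch.Tensor:
    # Layer-wise knowledge transfer: push the student's hidden states toward the teacher's.
    return F.mse_loss(student_hidden, teacher_hidden)

# Example: one student layer mimicking a (stand-in) teacher layer's output.
tokens = torch.randn(2, 16, 512)
student_layer = BottleneckTransformerLayer()
teacher_hidden = torch.randn(2, 16, 512)    # placeholder for the teacher's layer output
loss = feature_map_transfer(student_layer(tokens), teacher_hidden)
loss.backward()

A real setup would apply such a transfer term layer by layer between the inverted-bottleneck teacher and the thin student, alongside other distillation objectives; the snippet only shows the shape of one such term.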