Hongkun Yu
Authored Publications
Abstract
Large language models often struggle to generate error-free solutions in complex problem-solving scenarios. To address this, recent advances have adopted a reasoner-verifier framework, in which a verifier model evaluates the intermediate solution steps produced by a reasoning model. However, obtaining the intermediate annotations needed to train the verifier, known as process supervision data, is resource-intensive and expensive. In this paper, we introduce Model-induced Process Supervision (MiPS), a novel method for automating data curation. MiPS uses the reasoner itself to generate process supervision data: for each intermediate solution prefix in a training set, it estimates the step's accuracy via Monte Carlo sampling of completions. Our approach significantly improves the performance of PaLM 2 on math and coding tasks (accuracy +0.67% on GSM8K, +4.16% on MATH, +0.92% on MBPP compared with an output verifier). We analyze the noise in MiPS empirically and suggest diligent choices of the training objective and the step-aggregation function for the verifier.
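As a concrete illustration of the Monte Carlo labeling step, here is a minimal Python sketch. The helpers `sample_completions` (the reasoner's sampling interface) and `is_correct` (a final-answer checker against the training label) are hypothetical names introduced for illustration, not part of the paper.

```python
# A minimal sketch of MiPS-style process supervision data generation.
# `sample_completions` and `is_correct` are assumed, illustrative helpers.
from typing import Callable, List, Tuple

def mips_annotate(
    question: str,
    solution_steps: List[str],
    sample_completions: Callable[[str, int], List[str]],
    is_correct: Callable[[str], bool],
    num_samples: int = 8,
) -> List[Tuple[str, float]]:
    """Label each solution prefix with a Monte Carlo accuracy estimate."""
    annotations = []
    for i in range(1, len(solution_steps) + 1):
        # Prefix = question plus the first i intermediate steps.
        prefix = question + "\n" + "\n".join(solution_steps[:i])
        # Let the reasoner complete the solution num_samples times.
        completions = sample_completions(prefix, num_samples)
        # The step label is the fraction of sampled completions that
        # reach the correct final answer.
        accuracy = sum(is_correct(c) for c in completions) / num_samples
        annotations.append((solution_steps[i - 1], accuracy))
    return annotations
```

The resulting (step, accuracy) pairs can then serve as training targets for the verifier, which is where the choice of training objective and step-aggregation function mentioned above comes into play.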
Generating Representative Headlines for News Stories
Xiaotao Gu
Yuning Mao
Jiawei Han
Jialu Liu
Cong Yu
Daniel Finnie
Jiaqi Zhai
Nick Zukoski
The Web Conference 2020
Abstract
Millions of news articles are published online every day, which can be overwhelming for readers to follow. Grouping articles that report the same event into news stories is a common way of assisting readers in their news consumption. However, efficiently and effectively generating a representative headline for each story remains a challenging research problem. Automatic summarization of a document set has been studied for decades, while few studies have focused on generating representative headlines for a set of articles. Unlike summaries, which aim to capture the most information with the least redundancy, headlines aim to capture, in a short span, the information jointly shared by the story's articles, and to exclude information that is too specific to any individual article. In this work, we study the problem of generating representative headlines for news stories. We develop a distant supervision approach to train large-scale generation models without any human annotation. This approach centers on two technical components. First, we propose a multi-level pre-training framework that incorporates massive unlabeled corpora with different quality-vs.-quantity balances at different levels. We show that models trained within this framework outperform those trained on a purely human-curated corpus. Second, we propose a novel self-voting-based article attention layer to extract salient information shared by multiple articles. We show that models incorporating this layer are robust to potential noise in news stories and outperform existing baselines with or without noise. Our model can be further enhanced by incorporating human labels, and we show that our distant supervision approach significantly reduces the demand for labeled data.
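The self-voting intuition can be sketched as follows: each article's salience is the aggregate attention ("votes") it receives from its peer articles in the story, so outliers that few peers endorse are down-weighted. The NumPy sketch below assumes articles are already encoded as fixed-size vectors and uses a plain dot-product score; it illustrates the voting idea rather than the paper's exact layer.

```python
# A minimal sketch of self-voting attention pooling over a news story.
# Encoding, scoring function, and pooling details are assumptions here.
import numpy as np

def self_voting_pool(article_vecs: np.ndarray) -> np.ndarray:
    """Pool article vectors, weighting each by the votes of its peers.

    article_vecs: (num_articles, dim) matrix of article embeddings.
    Returns a single (dim,) story representation.
    """
    if article_vecs.shape[0] == 1:
        return article_vecs[0]  # a one-article story has no peers to vote
    # Pairwise scores: how strongly article j endorses article i.
    scores = article_vecs @ article_vecs.T            # (n, n)
    np.fill_diagonal(scores, -np.inf)                 # no self-votes
    votes = np.exp(scores - scores.max(axis=1, keepdims=True))
    votes /= votes.sum(axis=1, keepdims=True)         # softmax over peers
    # An article's salience is the average vote it receives; articles that
    # many peers agree with dominate, and noisy outliers are down-weighted.
    salience = votes.mean(axis=0)                     # (n,)
    return salience @ article_vecs                    # weighted pooling
```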
Abstract
Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds of millions of parameters. However, these models suffer from large sizes and high latency, so they cannot be deployed on resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model. Like the original BERT, MobileBERT is task-agnostic; that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning. Basically, MobileBERT is a thin version of $\text{BERT}_\text{LARGE}$, equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck $\text{BERT}_\text{LARGE}$ model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is 4.3$\times$ smaller and 5.5$\times$ faster than $\text{BERT}_\text{BASE}$ while achieving competitive results on well-known benchmarks. On the natural language inference tasks of GLUE, MobileBERT achieves a GLUE score of $77.7$ ($0.6$ lower than $\text{BERT}_\text{BASE}$) and 62 ms latency on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering tasks, MobileBERT achieves a dev F1 score of $90.0/79.2$ ($1.5/2.1$ higher than $\text{BERT}_\text{BASE}$).
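The knowledge transfer can be pictured as a layer-by-layer matching loss between the student and the frozen teacher. The sketch below shows only feature-map transfer with an MSE objective; it omits the paper's additional transfer terms and training schedule, and the names and shapes are illustrative assumptions.

```python
# A minimal sketch of layer-wise knowledge transfer via feature-map
# matching; a simplified stand-in, not the released MobileBERT code.
import torch
import torch.nn.functional as F

def layerwise_transfer_loss(student_hidden, teacher_hidden):
    """Sum of per-layer MSE between student and teacher hidden states.

    student_hidden / teacher_hidden: lists of (batch, seq, dim) tensors,
    one per transformer layer, aligned between the two models. The
    bottleneck design is what lets the thin student's layer outputs
    share the teacher's feature width so they can be compared directly.
    """
    loss = torch.zeros(())
    for s, t in zip(student_hidden, teacher_hidden):
        # Feature-map transfer: push the student layer's output toward
        # the teacher layer's output (teacher is frozen, hence detach()).
        loss = loss + F.mse_loss(s, t.detach())
    return loss
```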