Find references fast.
Understand papers faster.
hoverQ is an AI-powered tool that lets you preview paper references instantly, see summaries on hover, and save citations in one click.
Join the waitlist
Hover to Preview References
Instantly see the abstract, paper name, and publication info for any in-text citation. No more scrolling to the bottom or opening unnecessary tabs.

Title
Sequence to Sequence Learning with Neural Networks
Authors
Ilya Sutskever, Oriol Vinyals, Quoc V. Le
Abstract
Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.

Overview & Motivation
The authors introduce BERT (Bidirectional Encoder Representations from Transformers), a new model that overcomes a limitation of prior language models, which processed context only unidirectionally (left-to-right or right-to-left). BERT instead conditions on both left and right context simultaneously using a deep Transformer encoder architecture.
Pre‑Training Objectives
1. Masked Language Modeling (MLM)
Randomly mask 15% of tokens in the input and train the model to predict those masked tokens from their surrounding context.
To mitigate the mismatch between pre-training and fine-tuning (the [MASK] token never appears at fine-tuning time), 80% of the selected tokens are replaced with [MASK], 10% are replaced with a random token, and 10% are left unchanged; the sketch after this list illustrates the rule.
2. Next Sentence Prediction (NSP)
Presented with two sentences, the model must predict whether the second sentence follows the first in the original text.
This helps the model learn sentence-level relationships that are useful for downstream tasks such as question answering and natural language inference.
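For readers curious how the 80/10/10 rule works in practice, here is a minimal Python sketch of the token-corruption step. The function name, the MASK_TOKEN placeholder, and the toy vocabulary are illustrative assumptions, not BERT's or hoverQ's actual implementation.

import random

MASK_TOKEN = "[MASK]"  # placeholder; the real mask id depends on the tokenizer

def corrupt_tokens(tokens, vocab, mask_rate=0.15, seed=None):
    """Apply the 80/10/10 masking rule to a list of tokens.

    Returns (corrupted_tokens, targets), where targets holds the positions
    and original tokens the model should learn to predict.
    """
    rng = random.Random(seed)
    corrupted = list(tokens)
    targets = []
    for i, tok in enumerate(tokens):
        if rng.random() >= mask_rate:      # select positions with probability mask_rate
            continue
        targets.append((i, tok))           # remember the original token to predict
        roll = rng.random()
        if roll < 0.8:                     # 80%: replace with [MASK]
            corrupted[i] = MASK_TOKEN
        elif roll < 0.9:                   # 10%: replace with a random vocabulary token
            corrupted[i] = rng.choice(vocab)
        # remaining 10%: keep the original token unchanged
    return corrupted, targets

# Example call with a toy sentence and vocabulary:
# corrupt_tokens(["the", "cat", "sat", "on", "the", "mat"],
#                vocab=["the", "cat", "sat", "on", "mat", "dog"], seed=0)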
AI Summaries on Hover
Get concise AI-generated summaries for each referenced paper, so you understand its key contributions at a glance.
Save Papers to Reading Lists
With one click, save any referenced paper and export formatted citations for your projects, reports, or publications.
Join our mailing list!
We're currently scraping a range of knowledge sources and plan to release a beta version of our UI soon.
If you'd like to become a tester and be notified when we release, please sign up below!
© Copyright 2025 HoverQ