Undergraduate Upends a 40-Year-Old Data Science Conjecture

"Tiny pointers," which show the way to a piece of stored data, inspired the creation of a new kind of faster hash table.
Sometime in the fall of 2021, Andrew Krapivin, an undergraduate at Rutgers University, encountered a paper that would change his life. At the time, Krapivin didn't give it much thought. But two years later, when he finally set aside time to go through the paper ("just for fun," as he put it), his efforts would lead to a rethinking of a widely used tool in computer science.
The paper's title, "Tiny Pointers," referred to arrowlike entities that can direct you to a piece of information, or element, in a computer's memory. Krapivin soon came up with a potential way to further miniaturize the pointers so they consumed less memory. However, to achieve that, he needed a better way of organizing the data that the pointers would point to.
He turned to a common approach for storing data known as a hash table. But in the midst of his tinkering, Krapivin realized that he had invented a new kind of hash table, one that worked faster than expected, taking less time and fewer steps to find specific elements.
Martín Farach-Colton, a co-author of the "Tiny Pointers" paper and Krapivin's former professor at Rutgers, was initially skeptical of Krapivin's new design. Hash tables are among the most thoroughly studied data structures in all of computer science; the advance sounded too good to be true. But just to be sure, he asked a frequent collaborator (and a "Tiny Pointers" co-author), William Kuszmaul of Carnegie Mellon University, to check out his student's invention. Kuszmaul had a different reaction. "You didn't just come up with a cool hash table," he remembers telling Krapivin. "You've actually completely wiped out a 40-year-old conjecture!"

Without setting out to do so, Andrew Krapivin upended the common thinking around hash tables, one of the best-studied tools in computer science.
Together, Krapivin (now a graduate student at the University of Cambridge), Farach-Colton (now at New York University) and Kuszmaul demonstrated in a January 2025 paper that this new hash table can indeed find elements faster than was considered possible. In so doing, they disproved a conjecture long held to be true.
"It's an important paper," said Alex Conway of Cornell Tech in New York City. "Hash tables are among the oldest data structures we have. And they're still one of the most efficient ways to store data." Yet open questions remain about how they work, he said. "This paper answers a couple of them in surprising ways."
Hash tables have become ubiquitous in computing, partly because of their simplicity and ease of use. They're designed to allow users to do exactly three things: "query" (search for) an element, delete an element, or insert one into an empty slot. The first hash tables date back to the early 1950s, and computer scientists have studied and used them ever since. Among other things, researchers wanted to figure out the speed limits for some of these operations. How fast, for example, could a new search or insertion possibly be?
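To make those three operations concrete, here is a minimal Python sketch of an open-addressing hash table that probes slots in a random, key-specific order — the "uniform probing" scheme discussed below. The class and method names are illustrative, not taken from any paper:

```python
import random

class UniformProbingTable:
    """Toy open-addressing hash table using uniform probing.

    A minimal sketch for illustration, not production code: it
    stores bare keys and supports exactly the three operations
    a hash table offers (query, insert, delete).
    """

    _EMPTY = object()     # slot that has never held a key
    _DELETED = object()   # tombstone left behind by a deletion

    def __init__(self, capacity):
        self.slots = [self._EMPTY] * capacity

    def _probe_order(self, key):
        # Each key gets its own pseudorandom ordering of all slots.
        order = list(range(len(self.slots)))
        random.Random(hash(key)).shuffle(order)
        return order

    def insert(self, key):
        # Greedy placement: take the first free slot in the probe order.
        for i in self._probe_order(key):
            if self.slots[i] in (self._EMPTY, self._DELETED):
                self.slots[i] = key
                return
        raise RuntimeError("table is full")

    def query(self, key):
        for i in self._probe_order(key):
            if self.slots[i] is self._EMPTY:
                return False   # a never-used slot means the key is absent
            if self.slots[i] == key:
                return True
        return False

    def delete(self, key):
        for i in self._probe_order(key):
            if self.slots[i] is self._EMPTY:
                return
            if self.slots[i] == key:
                self.slots[i] = self._DELETED   # leave a tombstone
                return
```

After `t = UniformProbingTable(8)` and `t.insert("a")`, the call `t.query("a")` returns True; `t.delete("a")` leaves a tombstone so later probes still terminate correctly.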
The answer generally depends on the amount of time it takes to find an empty spot in a hash table. This, in turn, typically depends on how full the hash table is. Fullness can be described in terms of an overall percentage (this table is 50% full, that one's 90%), but researchers often deal with much fuller tables. So instead, they may use a whole number, denoted by x, to specify how close the hash table is to 100% full. If x is 100, then the table is 99% full. If x is 1,000, the table is 99.9% full. This measure of fullness offers a convenient way to evaluate how long it should take to perform actions like queries or insertions.
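In symbols (writing the fraction of occupied slots as α, a notation the article itself doesn't use), the relationship is:

```latex
\alpha = 1 - \frac{1}{x}
\quad\Longleftrightarrow\quad
x = \frac{1}{1 - \alpha}
```

So α = 0.99 gives x = 100, and α = 0.999 gives x = 1,000.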
Researchers have long known that for certain common hash tables, the expected time required to make the worst possible insertion (putting an item into, say, the last remaining open spot) is proportional to x. "If your hash table is 99% full," Kuszmaul said, "it makes sense that you would have to look at around 100 different positions to find a free slot."
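The intuition is easy to check numerically: if a fraction 1/x of the slots are empty, each uniform random probe hits a free slot with probability 1/x, so the expected number of probes is x. Here is a small simulation (the table size and trial count are arbitrary choices, picked for speed):

```python
import random

def probes_until_empty(table):
    """Count uniform random probes until a free slot is found."""
    n = len(table)
    count = 1
    while table[random.randrange(n)]:   # True means occupied
        count += 1
    return count

n, x = 100_000, 100                  # table is 1 - 1/x = 99% full
table = [True] * n
for i in random.sample(range(n), n // x):
    table[i] = False                 # leave exactly n/x slots empty

trials = [probes_until_empty(table) for _ in range(10_000)]
print(sum(trials) / len(trials))     # prints roughly 100, i.e. x
```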
In a 1985 paper, the computer scientist Andrew Yao, who would go on to win the A.M. Turing Award, asserted that among hash tables with a specific set of properties, the best way to find an individual element or an empty spot is to just go through potential spots randomly, an approach known as uniform probing. He also stated that, in the worst-case scenario, where you're searching for the last remaining open spot, you can never do better than x. For 40 years, most computer scientists assumed that Yao's conjecture was true.
Krapivin was not held back by the conventional wisdom for the simple reason that he was unaware of it. "I did this without knowing about Yao's conjecture," he said. His explorations with tiny pointers led to a new kind of hash table, one that did not rely on uniform probing. And for this new hash table, the time required for worst-case queries and insertions is proportional to (log x)², far faster than x. This result directly contradicted Yao's conjecture. Farach-Colton and Kuszmaul helped Krapivin show that (log x)² is the optimal, unbeatable bound for the popular class of hash tables Yao had written about.
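To see how dramatic the gap is, compare the two bounds numerically. (The article doesn't specify a logarithm base; base 2 is assumed below, and since the result is asymptotic, the choice only shifts the numbers by a constant factor.)

```python
import math

# Old bound vs. new bound on worst-case probes at various fullness levels.
for x in (100, 1_000, 10_000, 1_000_000):
    print(f"x = {x:>9,}   (log2 x)^2 = {math.log2(x) ** 2:8.1f}")
```

At x = 1,000,000 (a table that is 99.9999% full), the old bound suggests about a million steps, while (log₂ x)² stays under 400.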
"This result is beautiful in that it addresses and solves such a classic problem," said Guy Blelloch of Carnegie Mellon.
"It's not just that they disproved [Yao's conjecture], they also found the best possible answer to his question," said Sepehr Assadi of the University of Waterloo. "We could have gone another 40 years before we knew the right answer."

Krapivin on the King's College Bridge at the University of Cambridge. His new hash table can find and store data faster than researchers ever thought possible.
In addition to refuting Yao's conjecture, the new paper also contains what many consider an even more astonishing result. It pertains to a related, though slightly different, situation: In 1985, Yao looked not only at the worst-case times for queries, but also at the average time taken across all possible queries. He proved that hash tables with certain properties, including those that are labeled "greedy" (meaning that new elements must be placed in the first available spot), could never achieve an average time better than log x.
Farach-Colton, Krapivin and Kuszmaul wanted to see if that same limit also applied to non-greedy hash tables. They showed that it did not by providing a counterexample: a non-greedy hash table with an average query time that's much, much better than log x. In fact, it doesn't depend on x at all. "You get a number," Farach-Colton said, "something that is just a constant and doesn't depend on how full the hash table is." The fact that you can achieve a constant average query time, regardless of the hash table's fullness, was wholly unexpected, even to the authors themselves.
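For a sense of scale (base-2 logs again assumed, since the article leaves the base unspecified), the greedy lower bound keeps growing as the table fills, while the new construction's average cost does not:

```python
import math

# The greedy lower bound on average query time grows with fullness;
# the new non-greedy table's average does not depend on x at all.
for x in (100, 10_000, 1_000_000):
    print(f"x = {x:>9,}   greedy bound ~ log2(x) = {math.log2(x):5.1f}")
```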
The team's results may not lead to any immediate applications, but that's not all that matters, Conway said. "It's important to understand these kinds of data structures better. You don't know when a result like this will unlock something that lets you do better in practice."