Nearest-neighbor (NN) search is a classic problem in computer science with vast applications in computer vision. Given a database of points in ${\mathbb R}^d$ and a query point, the goal is to find the database point closest to the query under some distance metric. However, in many settings such searches are known to suffer from the notorious curse of dimensionality: the running time of exact solutions grows exponentially with $d$, causing severe performance degradation in high-dimensional spaces. Approximate nearest-neighbor (ANN) techniques, such as locality-sensitive hashing, improve search performance but are still computationally intensive.
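For concreteness, here is a minimal brute-force baseline (not part of the talk): an exact linear scan under the Euclidean metric. It costs $O(nd)$ per query for $n$ database points, which is exactly the cost that indexing schemes try to beat in high dimensions.

```python
import numpy as np

def nearest_neighbor(database: np.ndarray, query: np.ndarray) -> int:
    """Exact NN by linear scan: O(n*d) per query, Euclidean metric.
    `database` has shape (n, d); `query` has shape (d,)."""
    dists = np.linalg.norm(database - query, axis=1)  # distance to every point
    return int(np.argmin(dists))                      # index of the closest one

# Example: 1000 random points in R^64.
rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 64))
q = rng.standard_normal(64)
print(nearest_neighbor(db, q))
```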
In this talk we propose a new way to solve this problem using ternary content addressable memory (TCAM). TCAM is a special type of associative memory that is widely used in switches and routers for very high speed search applications. We show that the TCAM computational model can be leveraged and adjusted to solve NN search problems with $O(1)$ query time and linear space. This approach does not suffer from the curse of dimensionality: simulation results demonstrate an improvement of more than four orders of magnitude over the best known approaches for NN, and suggest that TCAM devices may play a critical role in future computer vision applications.
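The abstract does not spell out the encoding, so the sketch below only illustrates the TCAM matching model, assuming a simple unary (thermometer) encoding under the $L_\infty$ metric; it is not necessarily the scheme from this work. A TCAM entry is a string over $\{0, 1, *\}$; a search key matches an entry if they agree on every non-$*$ position, and the hardware compares all entries in parallel, returning the first (highest-priority) match in a single cycle. The Python loops merely simulate that parallel priority lookup.

```python
def unary(v: int, m: int) -> str:
    """Thermometer code: integer v in [0, m] -> '1'*v followed by '0'*(m-v)."""
    return "1" * v + "0" * (m - v)

def ternary_entry(point, r: int, m: int) -> str:
    """Hypothetical TCAM entry for `point` with tolerance r: per coordinate,
    the r bit positions on each side of the 1->0 boundary become don't-cares
    ('*'), so the entry matches any coordinate within r of the stored value."""
    chunks = []
    for v in point:
        bits = list(unary(v, m))
        for i in range(max(0, v - r), min(m, v + r)):
            bits[i] = "*"
        chunks.append("".join(bits))
    return "".join(chunks)

def matches(entry: str, key: str) -> bool:
    """TCAM match semantics: every non-'*' position must agree with the key."""
    return all(e in ("*", k) for e, k in zip(entry, key))

def tcam_nn(database, query, m: int):
    """Software simulation of a priority TCAM lookup. Entries are ordered by
    growing radius r, so the first match is the point whose L-infinity ball
    of smallest radius contains the query, i.e. a nearest neighbor under
    L-infinity. A real TCAM compares all entries in parallel in O(1) time."""
    key = "".join(unary(q, m) for q in query)
    for r in range(m + 1):            # priority order: smallest radius first
        for idx, point in enumerate(database):
            if matches(ternary_entry(point, r, m), key):
                return idx, r
    return None

# Example: three 2-D points with integer coordinates in [0, 10].
db = [(3, 7), (9, 2), (5, 5)]
print(tcam_nn(db, (4, 6), m=10))  # -> (0, 1): point (3, 7) at L-inf distance 1
```

In hardware, all $n(m+1)$ entries would be stored up front (space linear in the number of points) and compared simultaneously, with a priority encoder returning the first match; that single-cycle lookup, independent of $d$, is what yields the $O(1)$ query time.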
Joint work with Anat Bremler-Barr, David Hay, and Yacov Hel-Or.
Yotam Harchol
Hebrew University and IDC Herzliya