FLASH: Fast and Scalable Table-Lookup Engine Architecture for Telecommunications

Tsunemasa HAYASHI  Toshiaki MIYAZAKI  

IEICE TRANSACTIONS on Information and Systems   Vol.E85-D   No.10   pp.1636-1644
Publication Date: 2002/10/01
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Network
Keywords: table-lookup, classification, best match, IP, CAM, programmable arbiter


This paper presents an architecture for a table-lookup (TLU) engine that enables real-time execution of complicated TLU operations for telecommunications, such as the longest prefix match (LPM) and the long-bit match in packet classification. The engine consists of many CAM (Content Addressable Memory) chips, which are divided into several groups. When a TLU is performed, the entries in each CAM group are searched simultaneously, and the best candidate entry in each group is selected by an intra-group arbiter. The final output, the desired entry, is decided by an inter-group arbiter that selects one group. This hierarchical arbitration structure is the key to the scalability of the engine. To accelerate the engine, we introduce a novel mechanism called "hit-flag look-ahead" that sends a hit-flag signal from each matched CAM chip to the inter-group arbiter before each intra-group arbiter determines the best CAM output in its group. We show that a TLU engine based on this architecture achieves significantly faster performance than engines based on conventional techniques, especially for a large number of entries with long-bit matching. Furthermore, our architecture achieves 33.3 Mlps (lookups per second) at wire speed with a 128-bit, 300,000-entry table.
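The hierarchical arbitration described above can be sketched in software as a two-level selection: each group searches its entries in parallel and raises a hit flag, an intra-group arbiter picks the best (longest-prefix) match within the group, and an inter-group arbiter picks the final winner among the group candidates. The sketch below is a conceptual software model only, not the authors' hardware design; all names (`Group`, `lookup`, the string-based prefix encoding) are illustrative assumptions.

```python
def prefix_len(prefix):
    """Length in bits of a prefix given as a bit string with trailing '*' wildcards."""
    return len(prefix.rstrip('*'))

def matches(prefix, key):
    """True if the key matches the prefix (wildcard bits match anything)."""
    return key.startswith(prefix.rstrip('*'))

class Group:
    """One CAM group; the parallel search of its chips is modelled sequentially."""
    def __init__(self, entries):
        self.entries = entries  # list of (prefix, value) pairs

    def search(self, key):
        """Return (hit_flag, best_entry).

        The hit flag models the 'hit-flag look-ahead' signal: it can be
        raised as soon as any chip matches, before the intra-group arbiter
        finishes selecting the best entry in the group.
        """
        hits = [(p, v) for (p, v) in self.entries if matches(p, key)]
        if not hits:
            return False, None
        # Intra-group arbiter: the longest prefix wins within the group.
        return True, max(hits, key=lambda e: prefix_len(e[0]))

def lookup(groups, key):
    """Inter-group arbiter: among groups that raised a hit flag,
    select the candidate with the longest prefix overall."""
    candidates = [entry for g in groups
                  for hit, entry in [g.search(key)] if hit]
    if not candidates:
        return None
    return max(candidates, key=lambda e: prefix_len(e[0]))
```

In hardware the point of the look-ahead is timing: the inter-group arbiter can begin its group selection using only the early hit flags, overlapping with the slower intra-group arbitration; this software model shows only the functional result of the two-level selection.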