The FIND, INSERT, and DELETE operations shown here can also be implemented via a hash table, as we will learn shortly. A properly constructed hash table has the same properties with respect to FIND, INSERT, and DELETE: all run in time proportional to the length of the key (since the hash function takes time proportional to the length of the key, and this time will dominate the operations in a good hash table), not the number of keys. Hash tables are also easy to modify so that they can deal with non-unique keys, which is extremely practical in some applications. Best of all, hash tables are relatively easy to implement and understand, and good implementations generally consume far less memory than tries. Given all of these properties, why would anyone still choose a trie?
If the hash function is chosen poorly, or the data set is pathological, the performance of a hash table degenerates badly. There is no bad data set for a trie; all data sets are equally good, as far as time-complexity is concerned.
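To make the cost claim concrete, here is a minimal trie sketch in Python (the names `TrieNode` and `Trie` are illustrative, not taken from the text). Each of FIND, INSERT, and DELETE walks at most one node per character of the key, so all three run in time proportional to the key's length, no matter how many keys the trie holds:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # one child link per character
        self.is_key = False  # True if a key ends at this node

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, key):
        # Walk (creating as needed) one node per character: O(len(key)).
        node = self.root
        for ch in key:
            node = node.children.setdefault(ch, TrieNode())
        node.is_key = True

    def find(self, key):
        # Walk one node per character; fail as soon as a link is missing.
        node = self.root
        for ch in key:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_key

    def delete(self, key):
        # Unmark the terminal node, then prune now-empty nodes bottom-up.
        path = [self.root]
        for ch in key:
            node = path[-1].children.get(ch)
            if node is None:
                return False          # key was never present
            path.append(node)
        if not path[-1].is_key:
            return False
        path[-1].is_key = False
        for i in range(len(key) - 1, -1, -1):
            child = path[i + 1]
            if child.children or child.is_key:
                break                 # still needed by another key
            del path[i].children[key[i]]
        return True
```

Note that no step in any of the three operations depends on the total number of keys stored, which is exactly the property claimed above.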
On the other hand, some data sets may soak up more memory than others (although even in the worst case, memory requirements are bounded by an amount proportional to the total length of the keys).
A trie also stores its keys in lexicographic order. This means that a trie can be used to sort a set of keys in much the same way a binary tree can, but a trie does not exhibit the degenerate worst-case behavior that results from inserting keys into a binary tree in a ``bad'' order. This ordering also means that it is possible to easily find the lexical predecessor or successor of any key, as well as several other properties of a key (see exercise 2 for an example).
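As a sketch of this sorting use (hypothetical names, assuming the usual one-child-per-character trie layout), a depth-first traversal that visits children in character order emits the stored keys in lexicographic order:

```python
class Node:
    def __init__(self):
        self.children = {}
        self.is_key = False

def insert(root, key):
    node = root
    for ch in key:
        node = node.children.setdefault(ch, Node())
    node.is_key = True

def sorted_keys(node, prefix=""):
    # Pre-order traversal: a key ending here precedes every longer key
    # below it, and visiting children in character order sorts siblings.
    if node.is_key:
        yield prefix
    for ch in sorted(node.children):
        yield from sorted_keys(node.children[ch], prefix + ch)

root = Node()
for k in ["banana", "apple", "app", "cherry"]:
    insert(root, k)
print(list(sorted_keys(root)))  # ['app', 'apple', 'banana', 'cherry']
```

The order of insertion is irrelevant here, which is why the ``bad insertion order'' problem of binary trees cannot arise.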
Finally, there are algorithms that encode information related to the keys in the tree itself. For example, information about the prefix of every key is implicit in the trie, so the question of whether there exist any keys that begin with a particular substring is easy to answer. This sort of functionality is often used for command or filename completion. It is also absolutely indispensable to many adaptive coding techniques; for example, the Lempel-Ziv compression algorithm is very easy to implement with a trie.
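A sketch of prefix completion on the same kind of structure (illustrative names, not from the text): walk down the prefix once, which costs time proportional to the prefix's length, then enumerate every key in the subtrie below it.

```python
class Node:
    def __init__(self):
        self.children = {}
        self.is_key = False

def insert(root, key):
    node = root
    for ch in key:
        node = node.children.setdefault(ch, Node())
    node.is_key = True

def complete(root, prefix):
    # Walk the prefix: O(len(prefix)), regardless of how many keys exist.
    node = root
    for ch in prefix:
        node = node.children.get(ch)
        if node is None:
            return []            # no key begins with this prefix
    # Every key below this node shares the prefix; collect them all.
    results = []
    def collect(n, suffix):
        if n.is_key:
            results.append(prefix + suffix)
        for ch in sorted(n.children):
            collect(n.children[ch], suffix + ch)
    collect(node, "")
    return results

root = Node()
for k in ["make", "makefile", "man", "ls"]:
    insert(root, k)
print(complete(root, "ma"))  # ['make', 'makefile', 'man']
```

Answering only the yes/no question (``does any key begin with this prefix?'') is even cheaper: the walk alone suffices, with no collection step.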
It is possible to construct hashing functions that allow efficient ``prefix checking'', as we will see in a later chapter, but it is difficult to construct one that can outperform a trie.