
Query Assistants: Auto-Complete, Auto-Suggest, Auto-Search, and Beyond

Konrad Holl
Functional & Industry Analytics Manager

Back in 2004, Google launched the first version of their auto-complete input box, which instantly became one of the major topics discussed at the Verity Forum I attended that year. The technology has evolved a great deal since then and remains very popular. Although the input box and the drop-down list themselves have changed little on the user-interface side, the technology and strategies at work in the back-end have grown immensely. I took a look at different query assistance strategies and also investigated which improvements we can expect in the future.

Success Factors

No matter how good any particular implementation may be, there are some success factors that all of them have to consider. I would like to introduce those before going into detail:

  • Query assistants must be quick: a user can hit multiple keys per second, and the suggestions must appear just as fast – requiring a good network infrastructure, lightweight communication protocols, and perhaps some logic that requests suggestions only during a “typing pause”.
  • Any proposed word must produce search results: this is especially important in environments where access restrictions apply. You will never trust a system that tells you to turn left at the end of a road that is in fact a dead end!
  • The assistant’s database must be in sync with the search index: as new words or phrases become part of the search index, they must be suggested as well – in 2015, no one would have considered Dieselgate a valid suggestion.
  • Users need to be able to understand instantly why a term has been suggested: this implies that all the dictionaries, thesauri, and data models must be well maintained.
  • Try to avoid “did you mean”: although this is a great feature (with or without automatically adjusting the query terms), query assistants should provide correctly spelled suggestions in the first place.


Auto-Complete

Auto-complete, the first-generation assistant, simply uses the inverted index’s word list to look up words matching the characters the user has entered so far. The most relevant suggestions are identified using frequency information from that word list. This is a huge improvement over a search box without any assistance: there is no need to type the entire word, and you know that the search term will produce results – admittedly, in a secure environment, those auto-completed terms may produce none due to access restrictions.
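As a minimal sketch of this prefix lookup – the function names, word list, and frequency figures below are made up for illustration, not taken from any real index:

```python
# First-generation auto-complete: look up indexed words by prefix and
# rank them by how often they occur in the index. All data is illustrative.

def autocomplete(prefix, word_frequencies, limit=5):
    """Return up to `limit` indexed words starting with `prefix`,
    most frequent first."""
    matches = [w for w in word_frequencies if w.startswith(prefix.lower())]
    matches.sort(key=lambda w: -word_frequencies[w])
    return matches[:limit]

# Toy word list with term frequencies, as an inverted index would supply them.
word_frequencies = {"search": 120, "searching": 45, "seat": 30, "season": 80}

print(autocomplete("sea", word_frequencies))
# → ['search', 'season', 'searching', 'seat']
```

Because every suggestion comes straight from the indexed word list, each one – access restrictions aside – is guaranteed to match at least one document.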


Auto-Suggest

Simply looking up words starting with the same letters soon was no longer good enough. Users wanted real suggestions even if they didn’t know the exact spelling – or would you know how to spell José Manuel Barroso’s1 last name correctly?

This is where fuzzy suggestions based on the Levenshtein (i.e. edit) distance or substring matches based on n-grams come into play. Depending on the setup, you may be presented with a choice including Barroso and Garros2 while typing.
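A compact sketch of the edit-distance variant – the helper names and the candidate list are my own, purely for illustration:

```python
def levenshtein(a, b):
    """Edit distance: the minimum number of single-character insertions,
    deletions, and substitutions needed to turn `a` into `b`."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def fuzzy_suggest(query, candidates, max_distance=2):
    """Suggest candidate terms within `max_distance` edits of the query."""
    scored = sorted((levenshtein(query.lower(), c.lower()), c) for c in candidates)
    return [c for d, c in scored if d <= max_distance]

# "barosso" is a plausible misspelling; "Barroso" is only two edits away.
print(fuzzy_suggest("barosso", ["Barroso", "Garros", "Barrow"]))
# → ['Barroso']
```

In practice the n-gram approach is often preferred for suggestions, since n-grams can be indexed ahead of time, whereas edit distance must be computed per keystroke against every candidate.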

In this case, suggestions are drawn from a special index field, which controls exactly which terms will be offered. This approach provides several advantages:

  • Suggesting phrases instead of single words is possible.
  • Categories, retrieved from additional index fields, may also be suggested.
  • Standard search engine functionality like synonyms and thesauri can be used.
  • Document access restrictions can be taken into account.
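One way to picture such a suggestion field – all field names, phrases, categories, and access groups below are hypothetical:

```python
# A dedicated suggestion field: each entry carries the phrase to suggest,
# a category, and the groups allowed to see the underlying documents.

SUGGESTIONS = [
    {"phrase": "annual report 2016", "category": "Finance", "groups": {"finance", "board"}},
    {"phrase": "annual leave policy", "category": "HR",      "groups": {"all"}},
    {"phrase": "antenna datasheet",   "category": "R&D",     "groups": {"engineering"}},
]

def suggest(prefix, user_groups):
    """Return (phrase, category) pairs the user is allowed to see."""
    return [(s["phrase"], s["category"])
            for s in SUGGESTIONS
            if s["phrase"].startswith(prefix.lower())
            and ("all" in s["groups"] or s["groups"] & user_groups)]

print(suggest("ann", {"finance"}))
# → [('annual report 2016', 'Finance'), ('annual leave policy', 'HR')]
```

Filtering on the user’s groups at suggestion time keeps the second success factor intact: no term is ever proposed whose documents the user is not allowed to open.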


Auto-Search

In some data repositories, mostly those consisting of small documents, auto-searching instead of suggesting words is more efficient. Especially when browsing product catalogues, searching the whole index is almost as quick as computing suggestions.

In a bookstore, you may be presented with a reduced result list that changes as you type. You would see the title, a short description, and maybe a thumbnail image of the book cover. Once you’ve found the title you’re after, you can jump directly to the book details page without having to open the search results list and continue from there. Of course, there is still the option to open the full results list and check out other books that may be similar to the ones suggested (and, of course, match your search terms).
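A toy illustration of the idea with an invented three-book catalogue – each keystroke runs a full substring search and returns whole records rather than terms:

```python
CATALOGUE = [
    {"title": "The Art of Computer Programming", "desc": "Knuth's classic"},
    {"title": "Clean Code", "desc": "A handbook of agile software craftsmanship"},
    {"title": "Code Complete", "desc": "A practical handbook of software construction"},
]

def auto_search(query):
    """Match the query against title and description of every book."""
    q = query.lower()
    return [b for b in CATALOGUE
            if q in b["title"].lower() or q in b["desc"].lower()]

# The result list narrows as the user keeps typing:
print([b["title"] for b in auto_search("co")])
# → ['The Art of Computer Programming', 'Clean Code', 'Code Complete']
print([b["title"] for b in auto_search("code c")])
# → ['Code Complete']
```

A linear scan like this only works because the catalogue is tiny; the point is the user experience – results, not suggestions, update with every keystroke.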

The Next Generation: Auto-?

So, what can we expect in the future? Some of the next-generation techniques can already be experienced today:

  • Microsoft SharePoint learns from user input and feedback. If documents in the results list have been clicked a number of times for a given search term, this term will be added to the list of possible suggestions.
  • Google records every user keystroke in order to anticipate typos and suggest the correct spelling automatically. Even the different keyboard layouts from all over the world (which keys are next to one another) are taken into account to compute the best suggestion.
  • Personalization: Both Google and SharePoint create suggestions for users individually (provided they are logged on). While an engineer will be prompted with technical terms, users who do a lot of online shopping will be presented with product names.
  • Machine learning: Various log files are ingested into search engines and then correlated and analysed for click paths, hidden relationships (like serial number patterns to product names), etc. Alternative suggestions are thus generated based on crowd intelligence.
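The click-feedback idea from the first bullet can be mimicked with a simple counter – the threshold of six clicks and every name here are assumptions made for this sketch, not SharePoint’s actual mechanics:

```python
from collections import Counter

CLICK_THRESHOLD = 6          # assumed value, purely for this sketch
click_counts = Counter()
learned_suggestions = set()

def record_click(query_term):
    """Count a result click for `query_term`; promote the term to the
    suggestion list once it has attracted enough clicks."""
    click_counts[query_term] += 1
    if click_counts[query_term] >= CLICK_THRESHOLD:
        learned_suggestions.add(query_term)

for _ in range(6):
    record_click("dieselgate")   # clicked often → becomes a suggestion
record_click("teh quick fox")    # clicked once → ignored

print(sorted(learned_suggestions))
# → ['dieselgate']
```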

A New Era of Search with Machine Learning

With huge amounts of computing power becoming available at low cost, neural networks and machine learning have attracted a lot of attention lately. Instead of suggesting only similar words, similar concepts can now be suggested: enter May, and the suggestions could include month, spring, or Prime Minister3.

Many new technologies are currently being discussed and developed – one of the most promising is Word2Vec (developed by Google in 20134, based on research dating back as far as 19865), which trains a machine learning model to predict the words surrounding a single word or entity6 in a sentence: each word/entity is represented by a multi-dimensional vector based on this neighbourhood relationship. Even better, computing these vectors can be accomplished in a reasonable amount of time.

As a side note: this vector representation even allows for some “concept math,” e.g. subtract Man from King and then add Woman – and the result will be Queen! Interestingly, the vector offset between two entities of the same type pair, e.g. from any country to its capital, is usually about the same – so take the offset from France to Paris, apply it to Canada, and the result will be Ottawa (which is, in fact, Canada’s capital).
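The concept math can be demonstrated with hand-made toy vectors. Real Word2Vec embeddings have hundreds of learned dimensions; these three-dimensional vectors are contrived so the arithmetic works out, and all values are invented:

```python
import math

VECTORS = {
    "king":   [0.9, 0.8, 0.1],
    "queen":  [0.9, 0.1, 0.8],
    "man":    [0.1, 0.9, 0.1],
    "woman":  [0.1, 0.1, 0.9],
    "prince": [0.8, 0.7, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

def nearest(vector, exclude=()):
    """Word whose stored vector is most similar to `vector`."""
    return max((w for w in VECTORS if w not in exclude),
               key=lambda w: cosine(VECTORS[w], vector))

# king - man + woman, computed component-wise:
result = [k - m + w for k, m, w in zip(VECTORS["king"], VECTORS["man"], VECTORS["woman"])]
print(nearest(result, exclude={"king", "man", "woman"}))
# → queen
```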


Image from Deeplearning4J


Assisting users with suggestions will go a long way, even after all these years. Don’t expect to see a drastic revolution in the user interface, but you will be surprised and amazed by the suggestions you’ll see in the future!

- Konrad

1The former President of the European Commission
2Roland Garros: French aviation pioneer
3Theresa May, United Kingdom Prime Minister
4Mikolov, Chen, Corrado and Dean: Efficient Estimation of Word Representations in Vector Space
5Hinton, McClelland, Rumelhart: Distributed representations
6Requires Natural Language Processing to identify entities like Formula One or White House.