Paper Documentation Is Finally Dead – It Was Killed by Semantic Search

“How Natural Language Processing and Semantic Search Are Changing the World” Blog Series: Part 1

There has been much talk about the role of semantic search, Natural Language Processing (NLP), big data analytics, and Artificial Intelligence (AI) in the digital transformation of the workplace, but what does all of this mean for your unique business use cases? This blog series will explore the different ways in which NLP and semantic search are enhancing information discovery and enterprise operations. In this first part, I'll focus on how these technologies are disrupting the world of documentation and customer support.


Breaking from Our Paper-Based Legacy

Recently, I completed two assessments: one for a large software publisher and another for a maker of integrated circuit software.

In both cases, we are recommending semantic search to answer questions, provide targeted search responses, and improve the customer experience. Working on these assessments, I came to realize how much paper documentation still pervades the world of technical documentation, and how that is about to change.

Humans Communicate with Humans

Human communication has always been about human-to-human communication. Machines, when they have been used, have merely been carriers of that information. Humans write; those words are printed, shipped to newsstands, stored on websites, and then read by other humans. Or humans talk and move around; those sound vibrations and images are captured in audio or video formats, transmitted along wires, converted to digital bits, stored on hard drives as (compressed) pulse-code-modulated arrays of numbers, then decompressed and re-assembled into moving images and varying pressure waves to be received by humans who see, hear, and (hopefully) understand them.

There have been attempts to ‘semantically code’ these communications. I’m thinking of HTML, XML, XHTML, HTML5, and so on. But unfortunately, these ‘semantic codings’ have been purely structural and presentational in nature. There is really nothing semantic about saying that “this block of text is the title,” for example. That is purely structural information. ‘Semantic structuring’ of modern-day communications has been all about reformatting human communications so that they can be distributed through a variety of delivery platforms, such as smartphones, tablets, kiosks, and so on.

Humans Communicate with Machines

But we are now, for the first time, starting to see mainstream NLP applications. The obvious ones are Siri, Alexa, Cortana, Wolfram Alpha, Watson, and Google Home. These will quickly explode into a wide range of applications that work their way into all aspects of our lives – but most especially – into all aspects of human communication.

In fact, at Search Technologies, we’ve developed our own NLP application called “Saga.” Saga is intended to advance the state of the art in Natural Language Processing by tailoring the language and the knowledge model to the unique domain found within each company. It is an end-to-end question-answering system that lives on top of search. It is a truly semantic extension to the search box, with far-reaching implications for all types of search, including corporate intranet, e-commerce, website, publishing, customer support, and so on. It contains advanced pattern-matching engines that process human language in a way that is much faster and more resilient than other techniques available on the market today.

These NLP systems are machines that understand, to a very limited degree, human language and human endeavors. They can converse (again, to a very limited degree) with humans about topics which they both understand. To learn more about how NLP works, check out my tutorial or register for my upcoming webinar.

This Means That Machines Must Have Knowledge

But now we get to the crux of the situation: In order for a machine to be able to converse with a human, it must have machine-readable knowledge. 

For example, suppose you ask Alexa, “How old is Michael Phelps?”

In order for Alexa to answer this question, it must know (to some very limited extent):

  1. What is a Michael Phelps? (It is an instance of a human being)
  2. What is oldness? (It is a property of humans which can be computed from the human’s birthdate)
  3. When was Michael Phelps born? (June 30, 1985)
  4. How to compute how old he is (32 years old, as of the writing of this article)
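
To make these steps concrete, here is a minimal sketch in Python of the kind of lookup-and-compute a question-answering system performs. The data structure is purely illustrative, not Alexa's actual knowledge format:

    from datetime import date

    # A toy machine-readable knowledge base: entities with typed attributes.
    KNOWLEDGE = {
        "Michael Phelps": {"type": "human", "birthdate": date(1985, 6, 30)},
    }

    def age_of(name: str) -> int:
        """Answer 'How old is X?' by computing age from the stored birthdate."""
        born = KNOWLEDGE[name]["birthdate"]
        today = date.today()
        # Subtract a year if this year's birthday hasn't happened yet.
        return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

    print(age_of("Michael Phelps"))  # 32 as of this article's writing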

My point is that this knowledge must be encoded in a machine-readable format so that the machine can read it, process it, and understand it enough to formulate an intelligent response.

This information is already encoded in Wikidata; you can see it in Michael Phelps’ entry (Q39562). Look for the “date of birth” attribute.

[Screenshot: the machine-readable “date of birth” attribute in Michael Phelps’ Wikidata entry]
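
You can even retrieve this fact programmatically. Here is a minimal sketch using Wikidata’s public SPARQL endpoint (P569 is Wikidata’s “date of birth” property; the Python requests library is assumed to be installed):

    import requests

    # Ask Wikidata for Michael Phelps' (Q39562) date of birth (property P569).
    SPARQL = "SELECT ?dob WHERE { wd:Q39562 wdt:P569 ?dob . }"

    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": SPARQL, "format": "json"},
        headers={"User-Agent": "doc-example/0.1"},  # Wikidata asks clients to identify themselves
    )
    print(resp.json()["results"]["bindings"][0]["dob"]["value"])
    # e.g. "1985-06-30T00:00:00Z"
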
Another way to express this information would be in human-readable narrative text:

Phelps was born in Baltimore, Maryland on the 30th of June, 1985, and raised in the Rodgers Forge neighborhood of nearby Towson.

The problem with this paragraph is that it is not machine-readable. Or at least, not easily.

I know what you’re thinking: “Well, if computers can read the question, why can’t they just read the answer from the paragraph?”

But unfortunately, it’s not that simple.

The situation today is that computers can read the question (“How old is Michael Phelps?”) but cannot, in fact, read the answer (“Phelps was born in Baltimore, Maryland on the 30th of June, 1985...”), at least not without a lot of expensive effort that is not cost-effective.

Basically, NLP applications today are all about “limiting the requirements to make it possible.” Siri was a breakthrough not because its NLP is super advanced, but because it found a way to narrow the requirements and still be useful and fun.

And so, we have the need for machine-readable knowledge.

Machine-Readable Knowledge Is Becoming More Important

An interesting question is “What if Wikidata came first?” Suppose you didn’t have a human-readable “Wikipedia” at all. Suppose the entire database of knowledge in Wikipedia was expressed as machine-readable knowledge from the very beginning.

Then, machines could answer any question contained in Wikipedia and not just the limited ones we now have (how old, where born, how tall, who is, etc.).

Of course, such an endeavor would be impossible today. The world of human knowledge is so vast that expressing it in machine-readable format would take forever just to decide and agree on the structure of the database. So, it makes sense that Wikipedia came first and that the answers were written in a plain human-readable format.

But the world is changing.

More people are getting simple answers by just asking questions. Also, “TL;DR” (Too Long; Didn’t Read) is a thing – people don’t want long narrative explanations anymore; they just want simple facts and lists. This makes sense considering that the sheer volume of information is growing at an ever-increasing rate, thanks to social media.

And finally, clever computer programmers are encouraging humans to generate machine-readable knowledge right from the start. Don’t think so? Well, consider the following:

  • @MichaelPhelps
  • #PhelpsVsShark
  • 😐

Hashtags, user handles, and emoticons are all machine-readable expressions. And social media interfaces encourage this sort of communication with type-ahead and tagging.
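
To see just how machine-readable these expressions are, here is a short sketch that extracts them from a post. The patterns are simplified stand-ins, not any platform’s exact tokenization rules:

    import re

    post = "Watch @MichaelPhelps race! #PhelpsVsShark 😐"

    # Simplified patterns; real platforms apply stricter tokenization rules.
    hashtags = re.findall(r"#(\w+)", post)
    handles = re.findall(r"@(\w+)", post)

    print(hashtags)  # ['PhelpsVsShark']
    print(handles)   # ['MichaelPhelps']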

It feels like humans are just one step away from entering entirely machine-readable information that computers can directly process and communicate.

Technical Documentation Will Be First

There are many good reasons why technical documentation may be the first to completely break away from paper-based, human-readable documentation:

  • Technical documentation has always been a leader - with innovations like hypertext, FAQs, CD manuals, online manuals, wikis, etc.
  • Technical documents are narrow in scope - the universe of a software tool is small enough that it can feasibly be expressed in machine-readable format
  • Computer programmers are not good writers - they will prefer creating machine-readable knowledge because then they don’t have to worry about things like nouns, verbs, and grammar
  • Customer support is expensive - having machine-readable knowledge will allow for more self-service customer support
  • Lots of technical documentation is already generated automatically - this is already (mostly) the case for Javadoc and similar sorts of library documentation. It’s a small step from this to creating machine-readable knowledge
  • Creating the knowledge graph will be less expensive than writing technical documentation - much of it can be derived automatically from source code or created automatically from the development process, such as examples from unit testing
  • Creating lists is less expensive than writing paragraphs
  • People reading technical documentation don’t want long narratives - really, they want short paragraphs and lots of lists and examples
  • Technical accuracy reigns supreme - shades of meaning, interpretation, opinion, misdirection, feeling – none of this is required in technical documentation

So now, imagine a world where there is no technical documentation, just machine-readable knowledge.

In such a world, what does it mean to “document my software product?” In this world, “documentation” means maintaining the knowledge base. In this world, the development team will need to:

  • Maintain lists of entities with attributes on these entities - software components, servers, files, APIs, objects, connections, fields, etc.
  • Maintain relationships between these entities - e.g., what servers produce or use what files and handle what APIs
  • Maintain lists of instructions as sets of entities with defined actions - for example, 1) INSTALL <server>, 2) EXECUTE <server>, 3) VIEW <UserInterface>
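
As a rough sketch, such a knowledge base might look something like this in code. All of the entity names, kinds, predicates, and actions here are hypothetical:

    from dataclasses import dataclass, field

    # Entities with attributes, relationships between entities, and
    # instruction lists expressed as (ACTION, entity) steps.

    @dataclass
    class Entity:
        name: str
        kind: str                      # "server", "file", "API", ...
        attributes: dict = field(default_factory=dict)

    entities = {
        "IndexServer": Entity("IndexServer", "server", {"port": 8080}),
        "config.yml": Entity("config.yml", "file"),
        "AdminConsole": Entity("AdminConsole", "UserInterface"),
    }

    # Relationships as (subject, predicate, object) triples.
    relationships = [
        ("IndexServer", "reads", "config.yml"),
    ]

    # An instruction list: ordered actions applied to known entities.
    install_steps = [
        ("INSTALL", "IndexServer"),
        ("EXECUTE", "IndexServer"),
        ("VIEW", "AdminConsole"),
    ]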

Note that end-users will not use this knowledge base directly. Instead, the computer will become the Tech-Doc Writer.

After all, computers are already “writing the answers.” When you ask “Siri, how old is Michael Phelps?” and Siri responds with “According to my sources, Michael Phelps is 55 years old” (the wrong Michael Phelps, obviously), this is Siri acting as a writer.

Of course, there will probably be some narrative text. For example, some introductory paragraphs to set the scene or short descriptions of entity types and so on.

But my feeling is that the amount of technical documentation can be reduced 100-to-1 compared with traditional methods (e.g., user manuals, reference manuals, etc.), which is an enormous reduction in documentation costs with a corresponding increase in accuracy and completeness.

And Now, Computers Can Answer Questions

Having machine-readable knowledge like this allows for computers to answer questions directly. This has many advantages:

  • Computers can answer support questions - this will reduce cost
  • A wider variety of learning models can be supported - top-down, exploratory, directed, etc. Different models can be managed by reorganizing the knowledge base
  • Random access to the documentation is easier to handle - users can enter the documentation at any point and then “grow their education” from that point
  • Alternative views and entry points can be supported - different, cross-sectional methods of navigating and exploring the documentation can be more easily supported as different ways to traverse the knowledge graph

Finally, a machine-readable database of knowledge about the software will, all by itself, be incredibly valuable. It will allow for machine-managed checking of the software, the development process, sprint goals and sprint progress, quality test coverage, and much more.

How It Will Happen: The Emergence of Knowledge Graphs

Of course, it’s already happening. In practice, what we’re seeing today is:

  1. Customers want better search for their customer support sites.
  2. This leads them to “semantic search.”
  3. In order to implement semantic search, we create “knowledge graphs” that describe the domain of the system(s) encompassed by the site.
  4. This is then coupled with Natural Language Processing for semantic search and question answering.
  5. As we endeavor to answer more and more questions, we create larger and more comprehensive “knowledge graphs.”
  6. Ultimately, we expect that the creation of the knowledge graph will come from the software development lifecycle itself.

I imagine that, in the future, the knowledge base about a software system will be automatically generated as software is developed:

  • APIs, connections, and objects will be pulled from source code (much as they are today)
  • Examples will be automatically pulled from unit tests
  • Lists of servers, components, systems, sub-systems, etc. will be pulled from project databases
  • Deployment and execution instructions will be pulled from continuous deployment tools (e.g., TeamCity, Jenkins)
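
Here is a minimal sketch of the first of these steps, deriving machine-readable API records straight from source code, using Python’s built-in ast module. The sample function is hypothetical:

    import ast

    # In the spirit of what Javadoc-style tools already do, parse source
    # code and emit a machine-readable record for each API entity found.
    source = '''
    def install(server: str, version: str = "latest") -> bool:
        """Install the named server component."""
        ...
    '''

    records = []
    for node in ast.parse(source).body:
        if isinstance(node, ast.FunctionDef):
            records.append({
                "entity": node.name,
                "kind": "API",
                "parameters": [arg.arg for arg in node.args.args],
                "description": ast.get_docstring(node),
            })

    print(records)
    # [{'entity': 'install', 'kind': 'API', 'parameters': ['server', 'version'],
    #   'description': 'Install the named server component.'}]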

Technical documentation will migrate to become a “software knowledge graph management system.” It will automatically identify gaps that will need to be filled. Humans will group entities into taxonomies for easier navigation (by other humans) and may create additional lists for special functions which cannot be derived automatically (for example, “How to Back Up Your System” or “Getting Started”). These lists will also be machine readable so they can be used to answer questions as well.

After the software is released, errors and solutions will be created as part of the customer support process. User requests will be monitored and gaps will be automatically identified in the knowledge graph.

And one day, we will look at a system and wonder “Where has all the documentation disappeared to?”

There will be no paragraphs of text, manuals, or wiki pages. There will be no Javadoc or reference documentation. There will only be the knowledge base which is rendered, on-demand, by the computer in any way which is required.

Semantic Search and Natural Language Processing Are Changing the World…

… and we can help!

Natural Language Processing is becoming a mainstream technology. Engineers at Search Technologies have been working on NLP since the 1980s. With a holistic understanding of the field, we have implemented numerous technologies to help refine and advance the state of the art in NLP and semantic search. The re-invention of technical documentation and customer support is just one example of how NLP and semantic search are changing the world.

How can you use these technologies to solve problems, lower costs, and increase revenue? Sign up for my upcoming NLP webinar and connect with us to explore more NLP and semantic search use cases. 

- Paul
