In their basic form, search engines operate on Information Retrieval (IR) principles and are commonly used to search documents for specific information. However, a user is forced to go through the list of offered links and references and try to manually locate the specific piece of information. That process can be challenging and time-consuming because traditional search engines cannot provide precise answers to users’ queries. Unlike search engines, Question Answering (QA) systems base their work on both IR and Information Extraction (IE) approaches. As a result, they can provide a direct, concise answer to a specific question. Simply put, when a user asks a question, a QA system first tries to answer it from its own knowledge. If the knowledge base is insufficient, the system starts looking for the answer on the internet.
QA can be described as a system based on Natural Language Processing (NLP) techniques for automatically answering questions asked by humans (in a natural language) using a collection of unstructured documents or pre-structured databases. Each QA system can be classified by its implementation approach into one of the following four types:
Information Retrieval QA – utilizes search engines for retrieving answers and then exploits appropriate filters and ranking mechanisms
NLP QA – uses NLP linguistic algorithms and machine learning methods to extract answers
Knowledge Base QA – looks for answers in structured data sources (such as ontologies) and uses structured database queries instead of word-based searching approaches
Hybrid QA – uses a mix of resources, and it often represents the combination of the previous three listed approaches
In addition to classifying QA systems by implementation approach, their foundational features can be presented from three perspectives: domain type, system type, and question type. The domain type component divides QA systems into closed-domain and open-domain models. Closed-domain systems answer questions from a particular narrow domain and offer answers on specific topics, while open-domain systems are based on general ontologies and broad, unrestricted knowledge. Additionally, closed-domain QA systems can be pre-defined to accept only limited types of questions; for example, descriptive questions may be allowed while procedural ones are rejected. Open-domain systems, on the other hand, process all natural language questions and transform them into the desired structured queries.
Viewed from the system type perspective, closed systems, which rely entirely on their own knowledge for answering questions, were the first question answering systems. In many cases, a user of such a system relies on community expertise and accumulated group knowledge to get the answer. More popular today, however, are non-community QA systems, open to the public, which draw on a large base of worldwide contributors and are accessible to every interested party.
Finally, from the question type perspective, the most popular QA systems are factoid models. A factoid question (what, when, which, who, or how) is one that can be answered with a simple fact presented as a single short text answer. Recently, however, a growing number of contributions address non-factoid QA systems and the trend toward more intelligent systems that come closer to human thinking and answering mechanisms. Besides factoid questions, many other question types exist: list questions, definition questions, hypothetical questions, causal questions, and confirmation questions. The impact of question type on QA performance is tremendous: studies indicate that as many as 36% of errors generated by QA systems occur due to wrong classification of human queries. So it is not enough to have a properly built QA system; significant attention must also be paid to the proper formulation and classification of natural language questions.
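To make the classification step concrete, here is a minimal, purely illustrative sketch of a rule-based question classifier. The function name, the keyword sets, and the coarse type labels are all assumptions invented for this example; production systems typically train statistical or neural classifiers rather than relying on leading-word rules.

```python
# Toy rule-based question classifier: maps a question to a coarse type
# (factoid, list, definition, confirmation) by inspecting its leading words.
# Purely illustrative; real QA systems use trained classifiers.

FACTOID_WORDS = {"who", "what", "when", "which", "how"}

def classify_question(question: str) -> str:
    words = question.lower().rstrip("?").split()
    if not words:
        return "unknown"
    first = words[0]
    if first in {"is", "are", "do", "does", "can"}:
        return "confirmation"          # expects a yes/no answer
    if first == "list":
        return "list"                  # expects an enumeration
    if words[:2] == ["what", "is"] and len(words) <= 4:
        return "definition"            # e.g. "What is entropy?"
    if first in FACTOID_WORDS:
        return "factoid"               # answerable by a single short fact
    return "other"

print(classify_question("Who is the CEO of Google?"))  # factoid
```

Even a crude first-word rule like this illustrates why misclassification is costly: routing a definition question through a factoid pipeline leads the downstream answer extraction astray.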
An initial step in building a QA system is to create a representative knowledge base. This base can be local, built from specific repositories (containing scientific papers or books), or it can rely on web technologies to access a worldwide knowledge base through Internet services. Once the relevant knowledge base is in place, the design of the QA structure can begin. Simply put, most QA systems are built on three main functions: query processing and fact extraction, document retrieval, and answer extraction (Fig. below).
To perform fact extraction, the QA system must understand domain-specific data and be able to build keyword indexes that match relevant documents containing the desired facts. Fact extraction rests on two main features: entity extraction and relation extraction. Entity extraction involves finding facts’ types and meanings by using NLP algorithms to extract nouns/entities from a text. The second feature, relation extraction, is used to understand how the identified entities relate to one another within the text. First, a question processing algorithm transforms a natural language question into a search query. The question is parsed and tagged with appropriate tags: proper nouns, numbers, verbs, nouns, adjectives, punctuation, etc. If a tagged word is not of interest for building the search query, it is removed. The following shows, in simplified form, how question processing is performed. For example, the question “Who is the CEO of Google?” should be transformed into the query “CEO Google”.
After parsing and tagging of the question:
Who – pronoun
is – auxiliary verb
the – determiner
CEO – noun
of – adposition
Google – proper noun
? – punctuation
The question is converted to its final query with the following parameters:
CEO – noun
Google – proper noun
and with the goal of locating a person
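The transformation above can be sketched in a few lines of Python. The part-of-speech table here is hardcoded for this one example (an assumption made to keep the sketch self-contained); a real system would obtain the tags from an NLP library’s tagger.

```python
# Minimal sketch of query formation: tag each token with a part of speech
# and keep only the content words (nouns, proper nouns, verbs, adjectives,
# numbers). The tag table is hardcoded for this single example question.

TOY_TAGS = {
    "who": "PRON", "is": "AUX", "the": "DET", "ceo": "NOUN",
    "of": "ADP", "google": "PROPN", "?": "PUNCT",
}
KEEP = {"NOUN", "PROPN", "NUM", "VERB", "ADJ"}

def question_to_query(question: str) -> str:
    tokens = question.replace("?", " ?").split()
    # drop tokens whose tag is not a content-word tag
    kept = [t for t in tokens if TOY_TAGS.get(t.lower()) in KEEP]
    return " ".join(kept)

print(question_to_query("Who is the CEO of Google?"))  # CEO Google
```

The pronoun "Who" is discarded from the query itself, but its tag is what tells the system that the expected answer type is a person.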
To present the fact extraction process through another example, consider the following sentence: “Ben is going to New York, USA”. A well-tuned QA system should recognize “Ben” as a person and “New York” and “USA” as locations, and relate them through the action of going.
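A hedged sketch of the entity extraction part of this example follows. The gazetteer (lookup table of known names) is a stand-in assumption; an actual system would use a trained named-entity recognizer rather than a hardcoded dictionary.

```python
# Toy entity extraction over the example sentence, using a hardcoded
# gazetteer in place of a trained named-entity recognition model.

GAZETTEER = {"Ben": "PERSON", "New York": "LOCATION", "USA": "LOCATION"}

def extract_entities(sentence: str):
    # return (entity, type) pairs for every known name found in the text
    found = []
    for name, etype in GAZETTEER.items():
        if name in sentence:
            found.append((name, etype))
    return found

print(extract_entities("Ben is going to New York, USA"))
# [('Ben', 'PERSON'), ('New York', 'LOCATION'), ('USA', 'LOCATION')]
```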
After these two fundamental fact extraction steps, inference can follow as an optional process. The inference feature is capable of generating new facts from already existing facts. From the example above, the system can infer that Ben is also going to the USA, since New York is a city in the USA.
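As an illustration, this kind of inference can be modeled as a simple rule over extracted fact triples. The triples and the relation names (`going_to`, `located_in`) are assumptions made for the sketch; a real system would produce such triples with a relation extraction model and apply a richer rule set.

```python
# Sketch of rule-based inference: derive new (subject, relation, object)
# facts from extracted ones. The single rule says: if X is going to a place
# that is located in a larger region, then X is also going to that region.

facts = {
    ("Ben", "going_to", "New York"),
    ("New York", "located_in", "USA"),
}

def infer(facts):
    new = set()
    for s, r, o in facts:
        if r == "going_to":
            for s2, r2, o2 in facts:
                if s2 == o and r2 == "located_in":
                    new.add((s, "going_to", o2))
    return new

print(infer(facts))  # {('Ben', 'going_to', 'USA')}
```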
The second major step in QA work is retrieving appropriate documents using the generated query. The documents are systematically searched, locally or globally, to find as many potential answers as possible. The retrieved documents are further divided into passages (sentences, paragraphs), and the ones most likely to contain relevant answers are selected. Finally, answer extraction, as the third step, extracts answers from the passages. The inputs to the answer extraction algorithm are the question and a specific passage, while the algorithm outputs a score for each candidate answer span. These scores are ranked, and the answer with the highest score is presented to the user as the final answer.
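The retrieval-and-ranking idea can be sketched with a toy corpus and a naive overlap score. The passages and the scoring function are illustrative assumptions only; real systems use inverted indexes, TF-IDF or learned rankers, and trained answer extraction models instead of raw term overlap.

```python
# Minimal sketch of passage retrieval: score each passage by how many
# query terms it shares, then return the best-scoring passage as the
# candidate that answer extraction would process further.

PASSAGES = [
    "Google was founded in 1998 by Larry Page and Sergey Brin.",
    "Sundar Pichai is the CEO of Google.",
    "The weather in Mountain View is usually mild.",
]

def score(query: str, passage: str) -> int:
    q_terms = set(query.lower().split())
    p_terms = set(passage.lower().rstrip(".").split())
    return len(q_terms & p_terms)  # number of shared terms

def best_passage(query: str) -> str:
    return max(PASSAGES, key=lambda p: score(query, p))

print(best_passage("CEO Google"))  # Sundar Pichai is the CEO of Google.
```

For the query "CEO Google", the second passage shares both terms and wins, which is exactly the passage a human would pick as containing the answer.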
Building a reliable and efficient QA system is a challenging task. Its performance is directly related to the quality of the integrated tools for finding answers and the depth of all involved NLP resources. When developing such a system, many questions arise and multiple objectives must be fulfilled. First of all, it is essential to classify questions accurately by type. Once the questions are classified, suitable processing and analysis methods are required to identify keywords, generate patterns, set the domain and semantic context, identify related concepts, etc.
It is also worth noting that a well-formed knowledge base is essential for achieving the desired QA performance. Even an optimal QA system design can be insufficient for a proper search process if the selected knowledge base is not. Once the knowledge base is reliable, information search and finding relevant answers are the next steps. Here it is necessary to select appropriate text blocks and, in some cases, even reformulate the raw text in accordance with the question context. One of the most prominent QA challenges arises at this point in the form of lexical gaps: differences between the semantically structured information of the knowledge base and natural language expression. Also, the same question can be asked in multiple ways using different words, which may differ significantly from a lexical perspective. Another challenging task is to design a QA system that provides answers in real time regardless of question complexity and knowledge base size.
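One common way to narrow the lexical gap is query expansion: enriching the query terms with synonyms before matching. The synonym table below is a hardcoded assumption for the sketch; real systems draw on resources such as WordNet or learned word embeddings.

```python
# Sketch of query expansion for bridging lexical gaps: add synonyms of each
# query term so that differently worded questions still match the same
# knowledge-base entries. The synonym table is illustrative only.

SYNONYMS = {
    "ceo": {"chief", "executive", "head"},
    "founded": {"established", "started"},
}

def expand(query_terms):
    expanded = set(query_terms)
    for term in query_terms:
        expanded |= SYNONYMS.get(term, set())
    return expanded

print(sorted(expand({"ceo", "google"})))
# ['ceo', 'chief', 'executive', 'google', 'head']
```

With expansion, a passage saying "head of Google" can still match a question asking about the "CEO", at the cost of some added noise in retrieval.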
Finally, an efficient and capable QA system should offer multilingual support and provide meaningful answers to questions asked in different languages. It is desirable to provide interactivity between the user and the system, informing the system whether the offered answer satisfies the user’s needs. If the needs are not fulfilled, the system can optimize the search process based on the feedback. One more problem is the lack of systematic approaches to understanding the techniques and algorithms behind QA systems and their relationships.
It is clear that many details must be taken into account when developing a new QA system. Some of them have become almost routine jobs, while other problems still require significant developer involvement and continuous maintenance even after the QA system is completed. Finally, all of the previous requirements for building a QA system imply high costs of development and customization. Our final advice is to consider the financial impact of building or deploying a QA system; otherwise, the entire budget can be spent on model customization and domain training.