Class: QuestionAnsweringService

QuestionAnsweringService()

This module combines several sub-modules that enable users to get accurate answers to questions posed against a large body of content. It includes basic intent-recognition capabilities so it can respond appropriately to incorrect or profane language, as well as to common personal questions such as "How are you?" and greetings.

Constructor

new QuestionAnsweringService()

Methods

call(user, question, [document_text], [document_ids], [check_ambiguity], [check_query_type], [generic_responses], meta, [engine]) → {Promise.<Object>}

Parameters:

user - string
The ID of the user accessing the Soffos API. Soffos assumes that the owner of the API key is an application (app) and that the app has users. The Soffos API will accept any string.
question - string
The question.
document_text - string (optional)
The text to be used as the context to formulate the answer.
document_ids - Array.<string> (optional)
A list of unique IDs referencing pre-ingested documents to be used as the context to formulate the answer.
check_ambiguity - boolean (optional, default: true)
When true, checks whether the message contains a pronoun that is impossible to resolve and responds appropriately to avoid low-quality or inaccurate answers. This is most useful when the module is used for conversational agents. For example: "What was his most famous invention?" Queries with pronouns that also contain the entity the pronoun refers to are not rejected. For example: "What was Tesla's most famous invention and when did he create it?" In this case, the AI can infer that "he" refers to Tesla. Set this to false only when retrieving the most relevant content as the answer is of equal or higher importance than rejecting the question or risking an ambiguous/inaccurate answer.
check_query_type - boolean (optional, default: true)
When true, checks whether the message is a natural-language question, or whether it is a keyword query or a statement, and responds appropriately if the message is not a question. The module is capable of returning a relevant answer to keyword or poorly formulated queries, but this option can help restrict the input. Set to false only when you wish the module to attempt to answer the query regardless of its type or syntactical quality.
generic_responses - boolean (optional, default: false)
In addition to checking for ambiguity or query type, this module performs other checks such as profanity, language, etc. If the input query fails one of these checks, the module rejects it by responding with a message that points out the issue. When true, the module instead responds with a generic message that does not give the reason the message was rejected, which is the same behavior as when it cannot find an answer to the query in the provided context.
meta - Object.<string, string>
engine - string (optional, default: null)
The LLM engine to be used.
Returns:
answer - string
The answer to the query. In cases where the query failed a check, and depending on the parameters explained above, this will be a message indicating that an answer could not be retrieved.
valid_query - boolean
Boolean flag denoting whether the query passed the checks (false when the query failed a check).
no_answer - boolean
Boolean flag denoting that the query passed the checks, but no answer for it was found in the context.
message_id - string
A unique ID representing the message and its associated prediction.
passages - Array.<Object>
A list of objects representing the most relevant passages of the queried documents. The first step in generating an answer is finding the most relevant passages from a large knowledge base. Passages are matched with a combination of keyword and semantic similarity. Each passage has the following fields:
  text: The passage text.
  document_name: The name of the source document.
  document_id: The ID of the source document.
  scores: A dictionary containing the matching scores for either or both of keyword and semantic similarity.
context - string
The merged text of the passages.
highlights - Array.<Object>
A list of objects representing sentences within the context which are highly similar to the answer. Each object has the following fields:
  span: A list with the start and end character index of the sentence within the context.
  sentence: The sentence text.
Type
Promise.<Object>
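As a sketch of how the return fields fit together, the snippet below processes a response object in plain JavaScript with no API call. The response fragment is invented for illustration and only mirrors the field shapes documented above; it ranks passages by their scores and recovers a highlight sentence from the context via its span indices.

```javascript
// Hypothetical response fragment, shaped like the documented return fields.
const response = {
  context:
    "The Soffos SDK simplifies integration. With reduced code, you move faster.",
  passages: [
    { text: "A", document_name: "doc1", document_id: "d1",
      scores: { keyword: 0.4, semantic: 0.9 } },
    { text: "B", document_name: "doc2", document_id: "d2",
      scores: { keyword: 0.7 } },
  ],
  highlights: [
    { span: [39, 74], sentence: "With reduced code, you move faster." },
  ],
};

// Rank passages by semantic score when present, falling back to keyword.
const score = (p) => p.scores.semantic ?? p.scores.keyword;
const ranked = [...response.passages].sort((a, b) => score(b) - score(a));

// `span` holds [start, end) character indices into `context`.
const extracted = response.highlights.map(({ span }) =>
  response.context.slice(span[0], span[1])
);
```

Here `ranked[0]` is the best-matching passage and `extracted[0]` equals the highlight's `sentence` field whenever the span is consistent with the context.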
Example
import { SoffosServices } from "soffosai";

const my_apiKey = "Token <put your api key here>";
const service = new SoffosServices.QuestionAnsweringService({apiKey:my_apiKey});
let response = await service.call(
    "client12345",
    "How would Soffos SDK help me as a programmer?",
    "The Soffos SDK simplifies the process of plugging Soffos APIs into your applications. \
    With reduced code requirements, you can seamlessly integrate powerful AI functionalities \
    like microlessons, named entity recognition, and more."
);
console.log(JSON.stringify(response, null, 2));
    
// returns
// {
//     "message_id": "43f354b0ef1040a7894cfd2260652d9e",
//     "answer": "The Soffos SDK would help you as a programmer by simplifying the process of plugging Soffos APIs into your applications and reducing code requirements. This would allow you to seamlessly integrate powerful AI functionalities like microlessons and named entity recognition.",
//     "context": "The Soffos SDK simplifies the process of plugging Soffos APIs into your applications.     With reduced code requirements, you can seamlessly integrate powerful AI functionalities     like microlessons, named entity recognition, and more.",
//     "valid_query": true,
//     "no_answer": false,
//     "highlights": [
//       {
//         "span": [
//           90,
//           237
//         ],
//         "sentence": "With reduced code requirements, you can seamlessly integrate powerful AI functionalities     like microlessons, named entity recognition, and more."
//       }
//     ],
//     "passages": [],
//     "cost": {
//       "api_call_cost": 0.005,
//       "character_volume_cost": 0.0141,
//       "total_cost": 0.0191
//     },
//     "charged_character_count": 282,
//     "unit_price": "0.000050"
// }
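Because valid_query and no_answer are independent flags, a caller typically branches on both before trusting answer. A minimal sketch follows; the helper name and sample objects are illustrative, not part of the SDK:

```javascript
// Interpret a QuestionAnsweringService response using the documented flags.
function interpretResponse(response) {
  if (!response.valid_query) {
    // The query failed a check (ambiguity, query type, profanity, etc.);
    // `answer` then carries the rejection message.
    return { ok: false, reason: "query rejected", message: response.answer };
  }
  if (response.no_answer) {
    // The query passed the checks, but nothing in the context answered it.
    return { ok: false, reason: "no answer in context", message: response.answer };
  }
  return { ok: true, answer: response.answer };
}

const result = interpretResponse({
  valid_query: true,
  no_answer: false,
  answer: "The Soffos SDK simplifies integration.",
});
```

This keeps rejection messages from being mistaken for real answers downstream.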
  

setInputConfigs(name, question, document_text, document_ids, check_ambiguity, check_query_type, generic_responses, meta, [engine])

Parameters:

name - string
Reference name of this Service. It will be used by the Pipeline to reference this Service.
question - string | InputConfig
The question.
document_text - string | InputConfig
The text to be used as the context to formulate the answer.
document_ids - Array.<string> | InputConfig
A list of unique IDs referencing pre-ingested documents to be used as the context to formulate the answer.
check_ambiguity - boolean | InputConfig (default: true)
When true, checks whether the message contains a pronoun that is impossible to resolve and responds appropriately to avoid low-quality or inaccurate answers. See call() above for details.
check_query_type - boolean | InputConfig (default: true)
When true, checks whether the message is a natural-language question, or whether it is a keyword query or a statement, and responds appropriately if the message is not a question. See call() above for details.
generic_responses - boolean | InputConfig (default: false)
When true, queries rejected by the module's checks (ambiguity, query type, profanity, language, etc.) receive a generic response that does not state the reason for the rejection. See call() above for details.
meta - object | InputConfig
engine - string (optional, default: null)
The LLM engine to be used.
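A hedged configuration sketch using only the signature documented above. The node name "qa_step", the meta value, and the use of null for unused optional arguments are assumptions for illustration; in a real Pipeline, any argument may instead be an InputConfig referencing the output of a previous node.

```javascript
import { SoffosServices } from "soffosai";

const service = new SoffosServices.QuestionAnsweringService({ apiKey: my_apiKey });

// Register this service under a reference name for use inside a Pipeline.
service.setInputConfigs(
  "qa_step",                            // name: how the Pipeline refers to this node
  "What is covered in this text?",      // question (literal or InputConfig)
  "Some pre-extracted document text.",  // document_text (literal or InputConfig)
  null,                                 // document_ids
  true,                                 // check_ambiguity
  true,                                 // check_query_type
  false,                                // generic_responses
  { user_tag: "demo" },                 // meta (hypothetical value)
  null                                  // engine
);
```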