Intuition and the Limits of Natural Language Generation

Natural Language Generation lacks the insight that participation provides

Natural Language Generation (NLG) is a software process that converts structured data into prose comprehensible to humans. NLG is a relatively new application of Artificial Intelligence (AI) that, since its advent, has helped accelerate operations across multiple sectors by quickly translating and summarizing datasets that would otherwise take thousands of working hours to interpret.
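As a rough illustration of the template-driven end of this data-to-text pipeline, the short Python sketch below turns a structured sales record into a summary sentence. The company and figures are invented for illustration; production NLG systems are far more sophisticated, but the input-to-prose shape is the same.

```python
# Minimal data-to-text sketch: render structured data as a readable sentence.
# The record contents are hypothetical, purely for demonstration.

def summarize_sales(record):
    """Return a one-sentence summary of a quarterly sales record."""
    change = record["current"] - record["previous"]
    direction = "rose" if change > 0 else "fell" if change < 0 else "held steady"
    pct = abs(change) / record["previous"] * 100
    return (f"{record['company']} revenue {direction} "
            f"{pct:.1f}% to ${record['current']:,} in {record['quarter']}.")

print(summarize_sales({
    "company": "Acme Corp",  # hypothetical data
    "quarter": "Q3",
    "previous": 1_000_000,
    "current": 1_150_000,
}))
# → Acme Corp revenue rose 15.0% to $1,150,000 in Q3.
```

Even this toy version hints at why NLG scales so well: once the template logic exists, it can summarize a million such records as easily as one.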

As NLG continues to be adopted by the financial, manufacturing, and healthcare sectors, among others, difficult questions about its scope and efficacy persist. In its capacity to summarize and help humans draw conclusions from statistics and expansive datasets, some would suggest NLG has already demonstrated its worth: generating language that delivers financial forecasts, aids patient diagnoses, and even succinctly conveys weather trends and the outcomes of sporting events.

Behind NLG is, of course, Natural Language Processing (NLP), a branch of AI that interprets and assesses the meaning of human language by attending to how words and phrases are ordered, rather than regarding either in isolation from broader context. NLP has already proven useful for tasks like sentiment analysis, automatic text summarization, and part-of-speech tagging. Though a robust application of NLP, NLG is most often a passive language tool: it reacts and responds to data given for analysis or interpretation, rather than possessing any agency over the outcomes it describes. Often, the phenomena being described, such as financial events, are entirely separate from the system generating the descriptive language.
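To make the flavor of a task like sentiment analysis concrete, here is a deliberately tiny lexicon-based scorer in Python. The word lists are invented for illustration; real sentiment-analysis systems rely on learned statistical models rather than hand-built lists, but the underlying question, does this text lean positive or negative, is the same.

```python
# Toy lexicon-based sentiment scorer. The word lists below are
# illustrative assumptions, not a real sentiment lexicon.

POSITIVE = {"good", "great", "gain", "strong", "up"}
NEGATIVE = {"bad", "weak", "loss", "down", "poor"}

def sentiment(text):
    """Classify a short text as 'positive', 'negative', or 'neutral'."""
    words = text.lower().split()
    # Count positive hits minus negative hits.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Strong earnings and a great quarter"))  # → positive
```

Note that even this sketch is purely reactive in the sense the essay describes: it scores language about events it played no part in.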

All of the NLG use cases mentioned so far (financial analysis, diagnosis, text summarization) could be said to fall under the category of “observe and report.” The watcher, the AI, looks out, synthesizes the findings of that looking out, and then translates that synthesis into something intelligible to the human mind: language. From this implementation of AI stem two immediate questions: (1) Can all business-critical potentialities ever be fully accounted for and integrated into an NLG report, prognosis, or forecast? (2) If not, can an NLG report ever be entirely relied upon in situations where accuracy is the paramount concern?

Any human observer can be faced with these same two questions; when they are, most rely on “intuition.” Even if we regard intuition at its most basic, predicting a future outcome based on one’s past experience participating in an interactive world, we run into a problem: NLG often has no participatory experience from which to draw, only a history of prediction and analysis. The system doing the trading, for example, is not the system describing the trading. The capacity for participation distinguishes human language generation from NLG, in that humans are able to influence, and live through, the outcomes in the world being described. In the case of content creators, such participation can be leveraged to deploy specific aesthetic techniques (tonality, emphasis, poignancy), thereby creating and conveying subtext and urgency currently unreachable by way of NLG.

All this to say that while NLP and NLG are excellent resources, they still very much require the intervention and intuition of the human mind if their purpose remains the betterment of business. There is no doubt that NLG can aid in making accurate predictions and forecasts when used as a tool to complement human consciousness. However, what continues to distinguish the human mind is access that AI may never have: opportunities for participatory experience in the full consequences of language generation.