A few weeks ago, we published a new type of Gartner content called an “Emergence Cycle.” With Emergence Cycle research, we search for emerging technologies still in the labs or only recently released, choosing those we believe could have a significant impact on the future of an emerging technology space. You can think of an Emergence Cycle as a pre-Hype Cycle (another type of Gartner research). The two we have recently published cover artificial intelligence and natural language processing technologies and trends.
In conducting the research, we start with “big data” such as patent submissions, venture capital investments and academic papers. Then, we run machine learning algorithms against the data to identify groupings and possible patterns. After that, analysts examine the results searching for early signals that represent important but newly emerging technologies or trends. We specifically assess the early signals for impact potential and growth potential.
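The pattern-finding step described above can be sketched in miniature. This is a toy illustration only, not Gartner's actual pipeline: a handful of hypothetical document titles stand in for the patent, investment and paper data, and a greedy bag-of-words grouping stands in for the machine learning algorithms.

```python
from collections import Counter
import math

# Toy corpus standing in for patent titles / paper abstracts (hypothetical data).
docs = [
    "digital twin simulation for patient disease modeling",
    "patient disease digital twin for treatment simulation",
    "synthetic advertising generation targeted at consumer segments",
    "automated advertising copy generation for micro segments",
    "threat actor forum monitoring and insider threat detection",
]

def bag_of_words(text):
    """Represent a document as a word-count vector."""
    return Counter(text.split())

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Greedy single-pass grouping: attach each document to the first group
# whose seed document is similar enough; otherwise start a new group.
THRESHOLD = 0.3
groups = []  # list of (seed_vector, [doc indices])
for i, doc in enumerate(docs):
    vec = bag_of_words(doc)
    for seed, members in groups:
        if cosine(seed, vec) >= THRESHOLD:
            members.append(i)
            break
    else:
        groups.append((vec, [i]))

for _, members in groups:
    print(members)
```

On this toy data, the digital twin documents, the advertising documents and the threat document fall into separate groups; analysts would then inspect such groupings for early signals.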
Below is a look at the graphic from our recently published NLP Emergence Cycle, with a focus on intelligent applications and augmented text analytics (available to subscribing Gartner clients). Most of these are trends in the adoption of NLP rather than advancements in core NLP technology (such as transformer-based models).
So, let’s take a look at a couple of the emerging technologies that I find the most interesting.
Disease Digital Twins
A patient disease digital twin is a digital representation of an individual patient’s disease for diagnosis and treatment simulation. To digitally model the patient’s condition, medical professionals combine a patient’s data with relevant disease data (distilled from other patients, medical journals and other sources). Medical professionals can then use the patient’s disease digital twin for a more accurate diagnosis and more effective treatment. For example, using a robust disease digital twin, medical professionals can computationally simulate the effectiveness of thousands of pharmaceutical therapies to choose a drug that is more likely to deliver patient results.
Building disease digital twins is an AI big data problem. And so natural language processing (NLP) is needed to process large amounts of unstructured text-based patient and disease information to create, analyze and improve the digital twin.
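As a rough sketch of what "processing unstructured text" means here, consider turning a free-text clinical note into structured fields a digital twin could consume. This is a toy illustration with hand-written patterns and a hypothetical note; real systems would rely on trained clinical NLP models.

```python
import re

# Hypothetical unstructured clinical note (illustrative only).
note = (
    "Patient reports fatigue and shortness of breath. "
    "Blood pressure 150/95. Currently taking metformin 500 mg daily."
)

# Toy vocabularies / patterns; a real system would learn these, not hard-code them.
SYMPTOMS = {"fatigue", "shortness of breath", "chest pain"}
MEDICATION_RE = re.compile(r"taking (\w+) (\d+) mg")
BP_RE = re.compile(r"[Bb]lood pressure (\d+)/(\d+)")

def structure_note(text):
    """Distill free text into fields a disease digital twin could consume."""
    lowered = text.lower()
    record = {
        "symptoms": sorted(s for s in SYMPTOMS if s in lowered),
        "medications": [
            {"name": name, "dose_mg": int(dose)}
            for name, dose in MEDICATION_RE.findall(lowered)
        ],
    }
    bp = BP_RE.search(text)
    if bp:
        record["blood_pressure"] = (int(bp.group(1)), int(bp.group(2)))
    return record

print(structure_note(note))
```

The resulting record (symptoms, medications, vitals) is the kind of structured input that can be combined with distilled disease data to build and refine the twin.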
Disease digital twins can transform patient care. But they could also substantially impact the overall evolution of digital twin technology, expanding digital twin adoption beyond a tool for physical asset modeling to “softer” problem sets involving biological systems and human behaviors.
Full analysis done by Anthony Bradley
Synthetic Advertising
Synthetic advertising (SA) is advertising developed and scripted by AI systems and algorithms, often targeted at a very narrowly defined group of consumers. Applying natural language generation (NLG) and machine learning (ML) to data collected and analyzed at massive scale (including geolocation intelligence) is the power behind SA.
SA delivers micro-segmentation at great scale and at real-time speed, which is highly disruptive. It raises personalization to an entirely new level: through automation, companies could assemble marketing programs and advertising tailored specifically to an individual customer while they are in the moment. Of course, there are technological, societal and governmental hurdles to mass adoption, such as privacy concerns, shopper acceptance, and the need for rapid model training and updating.
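A minimal sketch of the idea: given a micro-segment profile, select and fill an ad template in real time. The segment profiles and templates below are entirely hypothetical, and the slot-filling stands in for what a production system would do with trained language-generation models.

```python
# Hypothetical micro-segment profiles; real systems would derive these from
# geolocation intelligence and behavioral data at far larger scale.
segments = [
    {"name": "commuter", "city": "Chicago", "interest": "coffee", "moment": "morning"},
    {"name": "runner", "city": "Austin", "interest": "running shoes", "moment": "weekend"},
]

# Toy NLG: template selection by context, then slot filling.
TEMPLATES = {
    "morning": "Good morning, {city}! Grab your {interest} before the rush.",
    "weekend": "This weekend in {city}: new {interest} picked just for you.",
}

def generate_ad(segment):
    """Produce ad copy tailored to one micro-segment, in its moment."""
    template = TEMPLATES[segment["moment"]]
    return template.format(**segment)

for seg in segments:
    print(generate_ad(seg))
```

Even this toy version shows why the approach is disruptive: the copy is assembled per segment and per moment with no human in the loop.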
However, companies have recently escalated their focus on developing more targeted digital advertising messages (especially in the post-COVID-19 environment). Combining this focus with the trend toward operational efficiency and shorter marketing creation times is driving growing interest in this technology. There are some early programmatic advertising products in the market, such as TAPTAP, and recent jumps in patents around automatic/synthesized construction of digital advertising indicate this emerging technology is ramping up quickly. We expect to see significant movement in this space by 2023 or 2024.
Full analysis done by Annette Jump
NLP for Attack Prediction and Detection
The use of NLP is emerging to enhance information security attack detection and prediction. NLP-based preprocessing of security content is an improvement over traditional techniques, which largely rely on static signatures and policy-oriented evaluations to detect attacks or attacker discussions and interactions. One area of NLP emergence involves automatic tagging of scanned content for threatening language or for security categorization.
NLP is used to scan content on the web (including the dark web) to identify threatening language for detection, categorization and attribution to specific threat actors and threat actor groups. This represents a substantial leap in gathering actionable threat intelligence. NLP enriches detection and classification to more quickly refine predictive scoring and categorization of collected, stored or transmitted content. Uses of this NLP-based technology include:
threat actor forum monitoring,
threat intelligence refinement, and
threat actor correlation and insider threat monitoring.
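The tagging step described above can be sketched with a toy keyword-weighting scorer. The terms, weights and threshold below are illustrative assumptions, not any vendor's method; production systems would use trained classifiers rather than a hand-built vocabulary.

```python
# Toy tagger: score scanned text against a weighted threat vocabulary.
# Terms and weights are hypothetical, for illustration only.
THREAT_TERMS = {
    "exploit": 3, "ransomware": 4, "credential": 2,
    "dump": 2, "zero-day": 4, "breach": 3,
}
THRESHOLD = 5

def tag_post(text):
    """Return (score, matched_terms, is_threat) for one scanned post."""
    words = text.lower().split()
    matched = sorted(set(w.strip(".,!?'\"") for w in words) & THREAT_TERMS.keys())
    score = sum(THREAT_TERMS[t] for t in matched)
    return score, matched, score >= THRESHOLD

posts = [
    "Selling fresh credential dump from last week's breach!",
    "Anyone know a good conference on network monitoring?",
]
for post in posts:
    print(tag_post(post))
```

The first post trips the threshold on multiple matched terms while the benign post does not; in practice the matched terms would also feed categorization and threat actor attribution.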
Some providers of security solutions are beginning to include NLP for attack prediction and detection in their offerings. However, we don’t expect it to approach mainstream until 2024 or beyond.
Full analysis done by Lawrence Pingree
We are very excited to offer this new kind of emerging tech and trends content. Over the next 12 months, we plan to produce a dozen or more Emergence Cycles covering a variety of emerging technology domains.
I realize this blog post covers only a small portion of the research, but I hope it gives you a flavor of it. I’m very interested to know what you think of this line of research.
Be safe, be healthy.
p.s. If you are a technology product or service leader and a Gartner client, don’t miss out! Subscribe to Gartner’s Emerging Technologies and Trends Research.