2021 Wrap-up

Work on ElinarAI continued throughout 2021, and several major customers went into production with highly advanced deep Document AI solutions.

A significant portion of the development effort was spent on Deep Transfer Learning. Deep Transfer Learning is a practice where we pre-train models with vast amounts of data in an unsupervised manner. After pre-training, a customer-specific AI is generated from the pre-trained model using a short process called fine-tuning.
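As a loose illustration of the pre-train-then-fine-tune workflow, here is a toy one-parameter model (not ElinarAI's actual training code): a model "pre-trained" on a large generic dataset reaches a good result after only a few fine-tuning steps on a small task-specific dataset, while a model trained from scratch on the same small dataset does not.

```python
import random

random.seed(0)

W_TRUE = 2.0  # the "true" mapping that both datasets share

def make_data(n, noise=0.05):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [W_TRUE * x + random.gauss(0, noise) for x in xs]
    return xs, ys

def train(w, xs, ys, steps, lr=0.1):
    # Plain gradient descent on mean squared error.
    n = len(xs)
    for _ in range(steps):
        grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

def loss(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Phase 1: "pre-training" on a large generic dataset.
w_pre = train(0.0, *make_data(10_000), steps=200)

# Phase 2: "fine-tuning" on a small task-specific dataset for a few steps,
# compared with training the same few steps from scratch.
xs, ys = make_data(50)
w_finetuned = train(w_pre, xs, ys, steps=5)
w_scratch = train(0.0, xs, ys, steps=5)

print(loss(w_finetuned, xs, ys) < loss(w_scratch, xs, ys))  # fine-tuning wins
```

The same principle scales up: the expensive unsupervised phase is shared across customers, and only the short fine-tuning phase is customer-specific.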

We changed our pre-training procedure from autoregressive (GPT-2-like) to sequence-to-sequence (Seq2Seq), using a heavily modified T5 recipe. For the use cases our customers commonly face, this resulted in a significant (~2.5+%) increase in overall accuracy.

During 2021 Elinar enabled one of our key partners to augment IBM Watson Discovery™ / IBM Watson Explorer™ with ElinarAI. This integration brings the power of Deep Cognitive AI to the IBM Watson technology stack. ElinarAI can now efficiently augment IBM CloudPak for Automation™ and IBM CloudPak for Data™ to provide document-level understanding by creating process-level structure from unstructured data. This enables our customers and partners to automate processes that have been out of automation scope due to the need for Human Level Understanding. Robotic Process Automation (RPA) will also gain brains by using ElinarAI, making existing RPA investments much more profitable.

2021 also brought a slight strategy shift for ElinarAI: its go-to-market strategy is now focused on the partner channel. We enable our partners to provide Deep Cognitive Document AI embedded in their products and offerings, and we strongly encourage our partners to brand ElinarAI and call it their own. We are very pleased with market adoption in the Elinar Partner Channel. Telcos, integrators, and product companies have already started delivering ElinarAI as an integral part of their offerings.

What will 2022 bring?

Large Language Models (LLMs) will be one focus of development. As model size increases, accuracy improves, but there is a limit to how large a practical model can be. For example, when working with privacy data at scale, accuracy matters, yet a model that can only assess 1 GB of data per month is not practical when the target is 1 TB/month or 8 TB/month. Quite a bit of engineering effort will be spent on optimizing inference efficiency and model size to automatically find a balance between accuracy and speed.
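The size of that throughput gap is easy to quantify. A quick sketch using the hypothetical figures above (and 1 TB = 1,000 GB for simplicity):

```python
def required_speedup(current_gb_per_month, target_gb_per_month):
    """How many times faster inference must become to hit the target."""
    return target_gb_per_month / current_gb_per_month

# A model processing 1 GB/month versus the targets mentioned above:
print(required_speedup(1, 1_000))  # 1 TB/month target -> 1000x faster needed
print(required_speedup(1, 8_000))  # 8 TB/month target -> 8000x faster needed
```

A three-orders-of-magnitude gap cannot be closed by hardware alone, which is why model size and inference efficiency must be tuned together.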

In future we will offer our customers three starting points:

  • Fast (small document analyzed in < 20 ms)
  • Balanced (small document analyzed in < 80 ms)
  • Accurate (small document analyzed in ~ 200 ms)

Depending on the use case, customers may balance the need for accuracy and speed as they wish. Our pre-trained language models will include the most common language combinations. For example, the language model “base” will be quite different for a use case where medical records are in two Nordic languages than for Sales Order Process Automation with sales orders coming from the UK, China, Thailand, Germany, and so on. This will enable our customers to easily pick the most accurate and best-performing LM for their particular use case.
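The per-document latency targets above imply rough upper bounds on throughput. A small sketch of that back-of-the-envelope conversion (assuming, for illustration, one document at a time on a single worker; real deployments would batch and scale out):

```python
# Latency targets for a small document, per tier, in milliseconds.
TIERS_MS = {"Fast": 20, "Balanced": 80, "Accurate": 200}

def docs_per_second(latency_ms):
    """Upper-bound throughput for a single sequential worker."""
    return 1000 / latency_ms

for name, ms in TIERS_MS.items():
    print(f"{name}: up to {docs_per_second(ms):.1f} small docs/s per worker")
```

So a single Fast worker could handle roughly ten times as many documents per second as an Accurate one, which is the trade-off each tier makes explicit.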

Containers, Kubernetes & RedHat OpenShift

ElinarAI already runs natively in Docker and Podman environments. During 2022 we will be working on ElinarAI consumability: we will enable ElinarAI to run in a Kubernetes cluster and plan to publish it in the Red Hat Ecosystem Catalog. When working with massive data sets, having Kubernetes manage ElinarAI components, including ElinarNER, enables automatic scaling to meet even the largest data processing needs. The ElinarAI Red Hat operator will enable quick deployment into IBM CloudPak for Data and IBM CloudPak for Automation.

ElinarAI for IBM Power 10

ElinarAI runs natively on IBM Power. 2022 will bring Power 10 optimized inferencing, meaning ElinarAI will be able to utilize the new AI capabilities available in Power 10 processor cores. The expected performance gains and latency improvements are significant: the Power 10 processor can run AI inferencing workloads without the need for GPU acceleration, and with much lower latency.

Elinar will be working hard in 2022 to make partnering with us a simple process by enhancing our partner offerings and automating ElinarAI deployment on Kubernetes.