XFinLabs is on a mission to deliver RAG-based generative AI tools tailored specifically for the enterprise. No matter your business or focus, our team of AI experts can implement solutions that enhance efficiency and insight for every person, team, and department in your organization. Reach out for a demo of how we can help transform your business today.

Discover the XFinLabs Advantage

AI Solutions Trained On Your Data

The XFinLabs mi-AI platform creates private AI databases built on the data that lives within your organization. This means unparalleled access to the facts and information that drive your business, resulting in faster and more insightful analytics.

Powerful Semantic Search And Tailored Responses

Our built-in Prompt Engineering System ensures the most accurate and precise responses to questions that matter most in your organization. These prompts give your AI an identity and help train it to think, act and respond in a way that reflects the style and approach of your own team members.

No-Code Custom Generative AI Solutions

With mi-AI, anybody can build their own custom generative AI apps — no coding required. These apps can be trained on specific department or business knowledge and can output reports tied to specific events or inputs.

Built-in Market Intelligence

Sometimes it's important to understand what is happening outside your organization. mi-AI is able to work securely and privately with internal data as well as pull data from any external source to help provide industry-wide analysis.

Gen-AI Powered Reports

One of the most common and time-consuming tasks for any user is the generation of reports. mi-AI Report Builder gives your users the ability to point to data sources, identify key words and phrases, and generate reports in your organization's style and tone.

Frequently Asked Questions

What is retrieval-augmented generation (RAG)?

Retrieval-augmented generation (RAG) is a technique that optimizes how an AI system works with private data and networks. While companies can upload a file to a public AI system such as ChatGPT, there are limits on how much data, and how much context, you can provide. (This is especially true because the model has not been trained on an organization's private and/or proprietary data.) As a result, people run a higher risk of the public AI model hallucinating (making things up), which is a major barrier to enterprise adoption. RAG solves this problem by preparing the data before it is passed to the AI model for analysis. Data (files, databases, websites, etc.) is stored in a special database called a vector database, so named because the data is broken into "vectors" that capture the relationships within the data. When a user writes a query, the AI model is able to better understand the context without the need for further training or fine-tuning.
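The retrieval step described above can be sketched in a few lines of Python. This is a toy illustration, not the mi-AI implementation: it substitutes a simple bag-of-words count for a learned embedding model, and all names and data below are hypothetical.

```python
from collections import Counter
import math

# Toy "embedding": a bag-of-words vector. Real RAG systems use learned
# embedding models, but the retrieval logic is structurally the same.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "vector database": chunks of private data stored with their vectors.
documents = [
    "Q3 revenue grew 12 percent driven by enterprise subscriptions.",
    "The cafeteria menu changes every Monday.",
    "Enterprise churn fell to 2 percent after the support revamp.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=2):
    # Rank stored chunks by similarity to the query vector.
    qv = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(qv, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The retrieved chunks are prepended to the prompt before it reaches the
# LLM, so the model answers from the private data rather than guessing.
query = "How did enterprise revenue perform?"
context = retrieve(query)
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQuestion: " + query
```

Because the model only sees the retrieved context, it needs no retraining when the underlying data changes; you simply re-index.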

What is a Large Language Model (LLM)?

When discussing AI, everything starts and ends with the Large Language Model (LLM). In short, the LLM is the brains behind artificial intelligence. There are many LLMs, from OpenAI's GPT to Meta's Llama and Google's Gemini. Each operates in slightly different ways and is trained on different data sets. However, they are all designed to understand language and generate responses that mimic how people talk, write and, in some cases, create. (It is important to understand that the LLM is not a database. It is a model that generates natural-language responses; in a RAG system, it answers questions using the data retrieved for it.) At XFinLabs, our mi-AI platform works with many of the popular LLMs, allowing mi-AI to return the most relevant responses based on an organization's preferred LLM.
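Supporting multiple LLMs usually comes down to a thin abstraction layer: the platform codes against one interface and each vendor gets an adapter behind it. The sketch below is a hypothetical illustration of that pattern, not mi-AI's actual API; the stub classes stand in for real vendor SDK calls.

```python
from typing import Protocol

class LLM(Protocol):
    """Anything that can turn a prompt into a completion."""
    def complete(self, prompt: str) -> str: ...

# Stub providers standing in for real vendor adapters (OpenAI, Meta,
# Google). A real adapter would call the vendor's SDK here.
class GPTStub:
    def complete(self, prompt: str) -> str:
        return f"[gpt] {prompt}"

class GeminiStub:
    def complete(self, prompt: str) -> str:
        return f"[gemini] {prompt}"

def answer(llm: LLM, question: str) -> str:
    # Platform code never changes; only the injected model does.
    return llm.complete(question)
```

Swapping an organization's preferred LLM is then a one-line configuration change rather than a rewrite.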

What is prompt engineering?

In a RAG system, prompt engineering is how an enterprise tells an AI model to think and act the way it wants. Public models, like ChatGPT, are trained on vast amounts of data. (This is what makes their responses so intelligent: they really do know a lot!) In a RAG system, the available data is limited to whatever the enterprise is willing to share, which makes it more challenging to get the best responses. The way around this is prompt engineering. It gives the LLM a personality and tells it what type of data to expect and the tone and style in which to respond. For example, a user might specify in the prompt that answers should be very polite or very direct, or written in the style of a professor or a 5th grader. With the XFinLabs mi-AI platform, users have the power to engineer their own prompts to ensure that responses are tailored exactly to the data being reviewed. This yields far more relevant answers, as well as the ability to uncover specific pieces of information often missed by larger LLMs.
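In practice, this kind of prompt is often a template with slots for the persona, the expected data, and the response style. The template below is a hypothetical example of the idea, not a prompt shipped with mi-AI.

```python
# Hypothetical system-prompt template: gives the model an identity,
# tells it what data to expect, and fixes the tone of its answers.
SYSTEM_PROMPT = (
    "You are a senior analyst at {company}.\n"
    "You will be given excerpts from internal {department} reports.\n"
    "Answer in a {tone} tone, citing the excerpt you relied on."
)

def build_prompt(company: str, department: str, tone: str) -> str:
    return SYSTEM_PROMPT.format(company=company, department=department, tone=tone)

# Example: a direct-toned analyst persona for a sales team.
example = build_prompt("Acme Corp", "sales", "direct")
```

The same retrieval pipeline can serve very different teams simply by swapping the values filled into the template.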

Is RAG the same as ChatGPT?

No. However, they do work together. A RAG system needs an LLM, like ChatGPT, so that users can ask questions and get answers. In this scenario, RAG is responsible for categorizing, storing and retrieving data, while ChatGPT is responsible for generating a response from that data.

Should I use a RAG system or a public AI tool?

This is a good question, and the answer depends on what you want to do. If you want to use AI against your own internal data, or pull data in real time from external sources, then you'll want a RAG-based system. Why? Because the data you want to query has never been seen by the public LLM. Remember, LLMs are trained on freely available data; if your data has never left your building, a public model will not be able to adequately answer your question. This is where RAG-based systems offer the best solution for security and privacy. If you want to do market research, craft an email or write a book, then a public AI solution is perfect.

What does the "mi" in mi-AI stand for?

mi (pronounced 'my') stands for "multi-indexing". Recall that a RAG system stores the data to be queried in a special database called a vector database. This data is saved in a single file called an index, which makes retrieval fast and accurate because the LLM only looks at the index for its response. One challenge, though, arises when different teams, users or departments add their own, unrelated data to an existing index: when data mixes, the potential for hallucinations increases. To resolve this, XFinLabs uses multi-indexing. For every new AI created by mi-AI, a new vector database and index is created, making that AI entirely private to its user, team or department.
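The isolation idea behind multi-indexing can be sketched as one index object per team, created on demand. This is a toy model, assuming nothing about mi-AI's internals; the substring search stands in for real vector-similarity ranking, and all names are hypothetical.

```python
# Toy sketch of multi-indexing: each team gets its own isolated index,
# so one team's data can never bleed into another team's retrieval.
class VectorIndex:
    def __init__(self):
        self.chunks = []

    def add(self, text: str) -> None:
        self.chunks.append(text)

    def search(self, query: str) -> list:
        # A real index would rank chunks by vector similarity; a simple
        # substring match stands in for that here.
        return [c for c in self.chunks if query.lower() in c.lower()]

indexes = {}  # one private index per team or department

def get_index(team: str) -> VectorIndex:
    # Creating a new AI creates a new, private index.
    if team not in indexes:
        indexes[team] = VectorIndex()
    return indexes[team]

# Each team's data lands only in its own index.
get_index("finance").add("Budget forecast for 2025.")
get_index("hr").add("Updated leave policy.")
```

A finance query never touches HR's chunks, because the two indexes share no state.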

Are there any downsides to RAG?

Yes. RAG is an amazing AI innovation, especially when it comes to working with private data. However, there are some downsides to consider:

  • GIGO - garbage in, garbage out. Because there is not a lot of data to learn from, anyone uploading garbage will have an outsized impact on the quality of the results.
  • Performance - with large datasets, it may take longer to create the database that stores the data, and retrieval may not be as fast as in public systems.
  • Bias - there are plenty of examples of bias affecting the results of public AI systems. Bias can be amplified within a company, and care should be taken when creating prompts.

The bottom line... AI is a powerful tool that needs to be respected and treated accordingly. Organizations need to spend time and be careful about how they tune their AI models and the types of data and access they allow. When used to handle specific tasks, there is no better way to enhance business productivity and insight. If no specific task comes to mind, stick with ChatGPT or any other of the amazing publicly available AI systems.