Custom LLMs, RAG, & ChatGPT

There's a lot of excitement in the enterprise about connecting employees to LLMs. These models all accept “prompts” from users and produce responses, but they differ in HOW they work:

  • Custom LLMs: These are models fine-tuned on your data; they generate content based on what they learned during training.
  • RAG (Retrieval Augmented Generation): In these models, the LLM is provided additional context in the prompt.
  • Foundation Models: These are general, off-the-shelf models, like ChatGPT, Claude, or other families of LLMs.

How should I integrate Moveworks with my LLM?

Before you start!

Instead of integrating with your own LLM, first look into taking advantage of Moveworks' robust enterprise search capabilities. You can connect any of your files and knowledge bases through our plugin architecture. Connect with your CSM to learn more.

If you've evaluated our enterprise search capabilities, but they don't meet the needs of your use case, you can still connect these LLMs to Moveworks. The integration approaches are detailed below.

Integration

Gwen (11:43 AM)

Ask ChatGPT: Who was the 14th president?

Moveworks (11:43 AM)

One moment, fetching your results. This may take ~10 seconds

ChatGPT: Who was the 14th president?
Response: The 14th President of the United States was Franklin Pierce. He served as President from 1853 to 1857. Pierce, a Democrat, faced significant challenges during his presidency, including issues related to slavery and sectional tensions that would ultimately lead to the American Civil War.

This integration follows the Lookup Single Record by ID / Keyword design pattern. In this case:

  1. Your example utterances should follow a fixed “trigger word” approach:
    • Ask ChatGPT: {{ user prompt }}
    • Ask MyCustomLLM: {{ user prompt }}
  2. Your slot will be the entire user prompt.
  3. You'll have to train your users to use these “magic words” to trigger your LLM.
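The trigger-word approach above can be sketched as a simple prefix match. This is an illustrative sketch only (the trigger names and function are hypothetical, not a Moveworks API): the “magic word” selects the target LLM, and everything after the colon becomes the slot value.

```python
import re

# Hypothetical trigger pattern: a fixed "magic word" prefix selects the
# target LLM, and the remainder of the utterance is captured as the slot.
TRIGGER = re.compile(r"^Ask (ChatGPT|MyCustomLLM):\s*(?P<prompt>.+)$")

def extract_prompt(utterance: str):
    """Return (target_llm, user_prompt) if a trigger matches, else None."""
    match = TRIGGER.match(utterance.strip())
    if not match:
        return None
    return match.group(1), match.group("prompt")
```

For example, `extract_prompt("Ask ChatGPT: Who was the 14th president?")` yields `("ChatGPT", "Who was the 14th president?")`, while an utterance without the magic words returns `None`, so it would fall through to other plugins.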

Connecting to OpenAI endpoints?

For OpenAI's Chat Completions endpoint, instead of writing a text prompt, you'll want to use system, user, and assistant messages:

{
	...,
	"messages": [
		{
		  "role": "system",
		  "content": "You are an assistant. << Additional instructions >>. Please continue this conversation."
		},
		{
		  "role": "user",
		  "content": "initial prompt (turn #1)"
		},
		{
		  "role": "assistant",
		  "content": "generation #1"
		},
		{
		  "role": "user",
		  "content": "turn #2"
		}
	]
}
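Assembling that `messages` array from a running conversation is mechanical: a single system message first, then alternating user/assistant turns, ending with the newest user turn. Here's a minimal sketch; the function name and the shape of `history` are assumptions for illustration, not part of the Moveworks or OpenAI SDKs.

```python
def build_messages(system_instructions, history, latest_user_turn):
    """Assemble a Chat Completions `messages` array.

    `history` is a list of (user_text, assistant_text) pairs from
    earlier turns; `latest_user_turn` is the prompt awaiting a response.
    """
    messages = [{"role": "system", "content": system_instructions}]
    for user_text, assistant_text in history:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": latest_user_turn})
    return messages
```

The resulting list drops straight into the `"messages"` field of the request body shown above.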

Closing Remarks

If you run into trouble following this guide, please don't hesitate to reach out for help!

Remember, we highly recommend integrating through our Copilot architecture with new File Search & Knowledge plugins. Get in touch with your customer success team to get access to these capabilities.