Connect chatbot (technical preview)
The Connect chatbot uses an interactive approach to help you set up connections. You can prompt the chatbot assistant with questions on how to use the product and how to perform synchronizations.
Note:
- Preview features will be subject to notable limitations in functionality and may differ significantly from the finalized version in future releases.
- Preview features may be discontinued in future releases.
- Preview features may be disabled by default and require manual activation.
Prerequisites
The system requirements for the chatbot are:
- 16 GB of RAM for the PostgreSQL pgvector extension
- 16 GB of VRAM for on-premises deployments for the installation of Ollama
Set up the PostgreSQL extension
You install the pgvector extension for PostgreSQL to enhance the performance of vector search operations.
To install the extension:
- Install the pgvector extension for PostgreSQL. For details, see the pgvector documentation.
- Run the following SQL command in PostgreSQL to enable the vector extension:
  CREATE EXTENSION vector;
- Run the following query to test the extension:
  SELECT extname, extrelocatable, extversion FROM pg_extension WHERE extname = 'vector';
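If you script your database setup, the two SQL steps above can be kept in one file and run non-interactively with psql. This is a minimal sketch: the file name is illustrative, the IF NOT EXISTS variant makes the step safe to re-run, and the psql invocation assumes your connection settings are supplied through the usual PG* environment variables.

```shell
# Minimal sketch: keep the enable-and-verify SQL from the steps above in one
# file (the file name is illustrative).
cat > enable_pgvector.sql <<'EOF'
-- Enable the vector extension; IF NOT EXISTS makes the step safe to re-run.
CREATE EXTENSION IF NOT EXISTS vector;
-- Verify: returns one row with the extension name and version when installed.
SELECT extname, extrelocatable, extversion FROM pg_extension WHERE extname = 'vector';
EOF
# Then run it against your PostgreSQL instance, for example:
#   psql -f enable_pgvector.sql
```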
Enable the chatbot in Connect
This section describes how to manually enable the chatbot in your deployment.
To enable the chatbot in Connect:
- Run the following batch utility script to populate embeddings into the PostgreSQL database. The JSON file called by the script, connect_helps_docs_26_1.json (or current version number), is available on the DevOps Cloud Marketplace.
  - Windows:
    mfcPopulateHelpDocsEmbeddings.bat "<file path>/connect_helps_docs_26_1.json"
  - Linux:
    mfcPopulateHelpDocsEmbeddings.sh "<file path>/connect_helps_docs_26_1.json"
- Run the following utility to enable the chatbot in Connect:
  java -jar mfcFullRestClient.jar -h localhost:8081 -c <user-name>,<password> -setGlobalPropertyValue -propertyName llm.integration.chat.enabled -propertyValue true
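The two steps above can be wrapped in a small script for repeatable deployments. This is a sketch only: the script and JAR names, host, and the <user-name>,<password> and <file path> placeholders come from the steps above (Linux variant shown), and the hypothetical DRY_RUN switch, on by default here, prints each command instead of executing it.

```shell
# Sketch: wrap the embedding-population and enable steps above in one script.
# DRY_RUN=1 (the default here) prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
REST_CLIENT="java -jar mfcFullRestClient.jar -h localhost:8081 -c <user-name>,<password>"
HELP_DOCS_JSON="<file path>/connect_helps_docs_26_1.json"

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

# Step 1: populate the help-document embeddings (Linux script shown).
run ./mfcPopulateHelpDocsEmbeddings.sh "$HELP_DOCS_JSON"
# Step 2: turn the chatbot on via the global property.
run $REST_CLIENT -setGlobalPropertyValue -propertyName llm.integration.chat.enabled -propertyValue true
```

Set DRY_RUN=0 only after replacing every placeholder with your own values.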
Setup for on-premises deployments
To complete the setup for on-premises deployments, perform the following steps:
- Download and install Ollama, and run llama3.2 locally. For details, see the Ollama website.
- Set up the Ollama integration by configuring the llm.integration.chat.url property with an OpenAPI-compatible URL. Set the URL to http://<ollama-hostname>:11434/v1/chat/completions using the following command:
  java -jar mfcFullRestClient.jar -h localhost:8081 -c <user-name>,<password> -setGlobalPropertyValue -propertyName llm.integration.chat.url -propertyValue http://<ollama-hostname>:11434/v1/chat/completions
- Pull llama3.2. Run the following command:
  ollama run llama3.2
- Create a model. Perform the following steps:
  - Create a model file. Run:
    vi llama-8k
  - Add the following content to the file:
    FROM llama3.2:latest
    PARAMETER num_ctx 80000
  - Create a new model. Run:
    ollama create <model_name> -f llama-8k
- Run ollama list to verify that the new model is listed.
- Configure the new model. Run:
  java -jar mfcFullRestClient.jar -h localhost:8081 -c <user-name>,<password> -setGlobalPropertyValue -propertyName llm.integration.chat.modelname -propertyValue <model_name>
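The model-creation steps above can be scripted instead of performed interactively. A minimal sketch: it writes the model file non-interactively with a heredoc (the llama-8k file name is illustrative) and lists the ollama commands to run against your local Ollama install; <model_name> is your chosen name, as in the steps above.

```shell
# Sketch: write the Ollama model file from the steps above without an editor
# (the llama-8k file name is illustrative).
cat > llama-8k <<'EOF'
FROM llama3.2:latest
PARAMETER num_ctx 80000
EOF

# Then create and verify the model against your local Ollama install:
#   ollama create <model_name> -f llama-8k
#   ollama list
```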
Setup for Cloud deployments
To complete the setup for cloud deployments, perform the following steps:
- Set the OpenAPI-compatible URL. Run:
  java -jar mfcFullRestClient.jar -h localhost:8081 -c <user-name>,<password> -setGlobalPropertyValue -propertyName llm.integration.chat.url -propertyValue <open-api-compatible-url>
- Set the token. Run:
  java -jar mfcFullRestClient.jar -h localhost:8081 -c <user-name>,<password> -setGlobalPropertyValue -propertyName llm.integration.chat.token -propertyValue <open-api-token>
- Configure the new model. Run:
  java -jar mfcFullRestClient.jar -h localhost:8081 -c <user-name>,<password> -setGlobalPropertyValue -propertyName llm.integration.chat.modelname -propertyValue <model_name>
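The three calls above differ only in the property name and value, so they can be driven by one helper function. This is a sketch only: the hypothetical DRY_RUN switch, on by default here, prints each composed command instead of executing it, and every placeholder must be replaced with your own values before a real run.

```shell
# Sketch: drive the three property-setting calls above with one helper.
# DRY_RUN=1 (the default here) prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}

set_property() {
  # $1 = property name, $2 = property value.
  cmd="java -jar mfcFullRestClient.jar -h localhost:8081 -c <user-name>,<password> -setGlobalPropertyValue -propertyName $1 -propertyValue $2"
  if [ "$DRY_RUN" = "1" ]; then echo "$cmd"; else $cmd; fi
}

set_property llm.integration.chat.url       "<open-api-compatible-url>"
set_property llm.integration.chat.token     "<open-api-token>"
set_property llm.integration.chat.modelname "<model_name>"
```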
Use the chatbot
The chatbot utilizes the content in this help center to provide answers to your questions.
To use the chatbot in Connect:
- In the Connect user interface, click the chatbot button.
- Type a question. For example, you can ask the chatbot "How do I add a data source?"
Note: For Ollama implementations, the integration is designed to return accurate responses. For best results, prompt the chatbot with a clear, well-defined question. If the prompt is unclear or the topic is not covered in the Connect Help, you may receive results that are irrelevant to your query.