Connect chatbot (technical preview)

The Connect chatbot provides interactive assistance for setting up connections. You can ask the chatbot questions about how to use the product and how to perform synchronizations.

    Note:
  • Preview features will be subject to notable limitations in functionality and may differ significantly from the finalized version in future releases.

  • Preview features may be discontinued in future releases.

  • Preview features may be disabled by default and require manual activation.

Prerequisites

The system requirements for the chatbot are:

  • 16 GB of RAM for the PostgreSQL pgvector extension

  • 16 GB of VRAM for on-premises deployments that install Ollama

Set up the PostgreSQL extension

You install the pgvector extension for PostgreSQL to enhance the performance of vector search operations.

To install the extension:

  1. Install the pgvector extension for PostgreSQL. For details, see the pgvector documentation.

  2. Run the following SQL command in PostgreSQL to enable the vector extension:

    CREATE EXTENSION vector;

  3. Run the following query to verify that the extension is enabled:

    SELECT extname, extrelocatable, extversion FROM pg_extension WHERE extname='vector';
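For example, you can run both commands from the shell with psql. This is a minimal sketch; the database name connectdb is a placeholder for your Connect database:

    psql -d connectdb -c "CREATE EXTENSION IF NOT EXISTS vector;"
    psql -d connectdb -c "SELECT extname, extrelocatable, extversion FROM pg_extension WHERE extname='vector';"

If the extension is enabled, the query returns a single row with extname set to vector.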

Enable the chatbot in Connect

This section describes how to manually enable the chatbot in your deployment.

To enable the chatbot in Connect:

  1. Run the following batch utility script to populate embeddings into the PostgreSQL database (see the example after these steps). The JSON file called by the script, connect_helps_docs_26_1.json (or the current version number), is available on the DevOps Cloud Marketplace.

    • Windows: mfcPopulateHelpDocsEmbeddings.bat "<file path>/connect_helps_docs_26_1.json"

    • Linux: mfcPopulateHelpDocsEmbeddings.sh "<file path>/connect_helps_docs_26_1.json"

  2. Run the following utility to enable the chatbot in Connect:

    java -jar mfcFullRestClient.jar -h localhost:8081 -c <user-name>,<password> -setGlobalPropertyValue -propertyName llm.integration.chat.enabled -propertyValue true
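For example, the following sequence populates the embeddings and enables the chatbot on a Linux host. The file path /opt/connect and the credentials admin,changeit are placeholder values for your environment:

    ./mfcPopulateHelpDocsEmbeddings.sh "/opt/connect/connect_helps_docs_26_1.json"
    java -jar mfcFullRestClient.jar -h localhost:8081 -c admin,changeit -setGlobalPropertyValue -propertyName llm.integration.chat.enabled -propertyValue true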

Setup for on-premises deployments

To complete the setup for on-premises deployments, perform the following steps:

  1. Download and install Ollama, and run llama3.2 locally. For details, see the Ollama website.

  2. Set up the Ollama integration by configuring the llm.integration.chat.url property with an OpenAPI-compatible URL. Set the URL to http://<ollama-hostname>:11434/v1/chat/completions using the following command:

    java -jar mfcFullRestClient.jar -h localhost:8081 -c <user-name>,<password> -setGlobalPropertyValue -propertyName llm.integration.chat.url -propertyValue http://<ollama-hostname>:11434/v1/chat/completions

  3. Pull llama3.2. Run the following command: ollama run llama3.2

  4. Create a Modelfile, as shown in the worked example after this procedure. Perform the following steps:

    1. Create the file in a text editor, for example: vi llama-8k

    2. Add the following lines to the file:

      FROM llama3.2:latest
      PARAMETER num_ctx 80000

  5. Create the new model from the Modelfile. Run: ollama create <model_name> -f llama-8k

  6. Run ollama list to verify that the new model is listed.

  7. Configure the new model. Run:

    java -jar mfcFullRestClient.jar -h localhost:8081 -c <user-name>,<password> -setGlobalPropertyValue -propertyName llm.integration.chat.modelname -propertyValue <model_name>
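For example, the following worked sequence builds a custom model named llama3.2-8k from the Modelfile above and points Connect at it. The model name, the Modelfile name llama-8k, and the credentials admin,changeit are placeholder values:

    cat llama-8k
    FROM llama3.2:latest
    PARAMETER num_ctx 80000

    ollama create llama3.2-8k -f llama-8k
    ollama list
    java -jar mfcFullRestClient.jar -h localhost:8081 -c admin,changeit -setGlobalPropertyValue -propertyName llm.integration.chat.modelname -propertyValue llama3.2-8k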

Setup for Cloud deployments

To complete the setup for cloud deployments, perform the following steps:

  1. Set the OpenAPI-compatible URL. Run:

    java -jar mfcFullRestClient.jar -h localhost:8081 -c <user-name>,<password> -setGlobalPropertyValue -propertyName llm.integration.chat.url -propertyValue <open-api-compatible-url>

  2. Set the token. Run:

    java -jar mfcFullRestClient.jar -h localhost:8081 -c <user-name>,<password> -setGlobalPropertyValue -propertyName llm.integration.chat.token -propertyValue <open-api-token>

  3. Configure the new model. Run:

    java -jar mfcFullRestClient.jar -h localhost:8081 -c <user-name>,<password> -setGlobalPropertyValue -propertyName llm.integration.chat.modelname -propertyValue <model_name>
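For example, assuming you use the OpenAI Chat Completions service, the three properties could be set as follows. The token placeholder <your-api-token>, the model name gpt-4o-mini, and the credentials admin,changeit are illustrative values; substitute the settings for your own provider:

    java -jar mfcFullRestClient.jar -h localhost:8081 -c admin,changeit -setGlobalPropertyValue -propertyName llm.integration.chat.url -propertyValue https://api.openai.com/v1/chat/completions
    java -jar mfcFullRestClient.jar -h localhost:8081 -c admin,changeit -setGlobalPropertyValue -propertyName llm.integration.chat.token -propertyValue <your-api-token>
    java -jar mfcFullRestClient.jar -h localhost:8081 -c admin,changeit -setGlobalPropertyValue -propertyName llm.integration.chat.modelname -propertyValue gpt-4o-mini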

Use the chatbot

The chatbot uses the content in this help center to answer your questions.

To use the chatbot in Connect:

  1. In the Connect user interface, click the Chatbot button.

  2. Type a question. For example, ask the chatbot "How do I add a data source?"

Note: For Ollama implementations: The integration with Ollama is designed to produce accurate responses. For best results, prompt the chatbot with a clear, well-defined question. If the prompt is unclear or the topic is not covered in the Connect Help, you may receive results that are irrelevant to your query.
