LLM request

The llm_request function emulates an LLM (Large Language Model) request. (Available in version 25.3 and later.)

C Language

int llm_request(const char *RequestName, const char *URL=<myURL>, const char *AIModel=<my_ai_model>, LAST);

Java Language

int object.llm.request(String RequestName, String URL=<myURL>, String AIModel=<my_ai_model>);
Argument        Description
object          An expression evaluating to an object of type WebApi. Usually web for Java. See also Function and Constant Prefixes.
RequestName     The title of the request.
URL             The URL of the API, for example, https://api.openai.com/v1/chat/completions.
AIModel         The AI model, for example, openai.

Return Values

This function returns LR_PASS (0) on success, and LR_FAIL (1) on failure.
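
For example, you can test the return value and report an error on failure. This is a minimal sketch; the request attributes and the request_body parameter are illustrative and mirror the examples below.

int rc;

rc = llm_request("OpenAI_Chat_Completion",
    "URL=https://api.openai.com/v1/chat/completions",
    "AIModel=openai",
    lr_eval_string("{request_body}"),
    LAST);

if (rc != LR_PASS) {
    lr_error_message("llm_request failed");  // write a message to the output log
}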

Parameterization

The following argument(s) can be parameterized using standard parameterization.
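
For illustration, assuming the URL argument accepts a standard parameter, the endpoint can be supplied from a parameter (the parameter name LLM_Endpoint below is illustrative):

llm_request("OpenAI_Chat_Completion",
    "URL={LLM_Endpoint}",
    "AIModel=openai",
    lr_eval_string("{request_body}"),
    LAST);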

Examples

We recommend that you add the required headers before the LLM request. Use lr_eval_string to evaluate the request body parameter.

The following example illustrates a script with an LLM request for OpenAI.

web_add_header("Content-Type","application/json");
web_add_header("Authorization","<API_KEY>");
llm_request("OpenAI_Chat_Completion",
    "URL=https://api.openai.com/v1/chat/completions",
    "AIModel=openai",
    lr_eval_string("{request_body}"),
    LAST);
return 0;

The following example illustrates a script with an LLM request for Gemini.

lr_param_sprintf ("request_body", "Body={ \"contents\":[ { \"parts\":[{\"text\": \"hello?\"}]} ] }");
web_add_header("Content-Type","application/json");
llm_request("Gemini_Chat_Completion",
     "URL=https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=<API_KEY> ",
     "AIModel=gemini",
      lr_eval_string("{request_body}"),
      LAST);
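
The following sketch shows an equivalent call from a Java Vuser script. It is a minimal illustration only: the class skeleton and header calls follow the standard Java Vuser layout, and it assumes that the documented three-argument object.llm.request signature accepts the same URL= and AIModel= attribute values as the C examples above; <API_KEY> is a placeholder.

import lrapi.lr;
import lrapi.web;

public class Actions
{
    public int init() throws Throwable {
        return 0;
    }

    public int action() throws Throwable {
        // Headers are added before the LLM request, as in the C examples.
        web.add_header("Content-Type", "application/json");
        web.add_header("Authorization", "Bearer <API_KEY>");
        web.llm.request("OpenAI_Chat_Completion",
            "URL=https://api.openai.com/v1/chat/completions",
            "AIModel=openai");
        return 0;
    }

    public int end() throws Throwable {
        return 0;
    }
}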