SaaS Aviator Workflow examples
This section provides Aviator Workflow script examples to help you implement custom AI-driven workflows in your project. It is relevant for SaaS deployments only.
Generate test cases from a requirement
This example demonstrates how to use Aviator to analyze a requirement and automatically generate test cases. The workflow accesses the requirement name and description, sends them to the AI service with specific instructions, and displays the generated test cases for user selection and creation.
Prerequisites & dependencies:
- Utility and prompt helper functions: stripHtml, removeSpecialChars, getNewLinesReplacedContent, getPreferredLanguage, getReqDescription, parseAIResponse, prompt, safePost
- Test case creation functions
async function ActionCanExecute(actionName) {
if (actionName === "UserDefinedActions.GenerateTestCasesWithAviator") {
try {
const requirementId = Req_Fields("RQ_REQ_ID").Value;
const reqFactory = TDConnection.ReqFactory;
const reqObj = reqFactory.Item(requirementId);
const aviatorFactory = TDConnection.AviatorFactory();
const name = removeSpecialChars(getNewLinesReplacedContent(reqObj.Name));
const descriptionRaw = (getReqDescription() || "").trim();
let description = stripHtml(descriptionRaw);
description = removeSpecialChars(getNewLinesReplacedContent(description));
const promptContent = prompt(name, description);
ShowModalBox({ title: "We're generating your AI powered Test cases", isLoading: true });
const aiResponse = await aviatorFactory.sendPrompt(promptContent);
HideModalBox();
if (aiResponse?.statusCode === 408) {
MsgBox("Request timed out. Please try again.");
return;
}
let generatedTestCases = aiResponse.answer;
if (typeof generatedTestCases === "string" && !/```json/i.test(generatedTestCases)) {
MsgBox(aiResponse.answer);
return;
}
if (!generatedTestCases) return;
generatedTestCases = aiResponse.needToParseIt ? parseAIResponse(aiResponse) : aiResponse.answer;
console.log('Generated Test Cases:', generatedTestCases);
const selectedList = await ShowModalBox({
title: "Generate Tests with Aviator",
description: "Select suggestions that you want to add as manual tests",
suggestionsList: generatedTestCases,
isLoading: false,
});
if (!selectedList || selectedList.length === 0) {
MsgBox("Test case generation was cancelled.");
return false;
}
await CreateTestCasesFromAviator(selectedList);
} catch(e) {
HideModalBox();
MsgBox(e?.message || JSON.stringify(e));
}
}
}
Generate tests from media
This example shows how Aviator analyzes attached media files (images or videos) and generates test cases from the media content. The workflow retrieves attachments from the requirement, validates file size, and sends the media to the AI service for analysis. The same workflow can also be used for analyzing text files attached to an entity.
Notes:
- The requirement must have at least one attachment
- Maximum attachment size: 10MB
Prerequisites & dependencies:
- Utility and prompt helper functions: getPreferredLanguage, getMimeType, getDynamicMediaPrompt, parseAIResponse, mediaPrompt, safePost
- Test case creation functions
async function ActionCanExecute(actionName) {
if (actionName === "UserDefinedActions.GenerateTestsFromMedia") {
try {
const requirementId = Req_Fields("RQ_REQ_ID").Value;
const reqFactory = TDConnection.ReqFactory;
const reqObj = reqFactory.Item(requirementId);
const attachments = reqObj.Attachments;
const attachList = attachments.NewList("");
if (attachList.length === 0) {
MsgBox("This requirement doesn’t have any attachments. Please try this option with another requirement, or attach media to the current requirement and try again.");
return;
}
const attachObj = attachList?.[0];
const mediaPromptContent = mediaPrompt(attachObj);
const aviatorFactory = TDConnection.AviatorFactory();
ShowModalBox({ title: "We're generating your AI powered Test cases", isLoading: true });
const extension = attachObj?.Name?.split(".").pop().toLowerCase();
const aiUseCaseMetaData = [{"identifier": attachObj?.ID, "size": attachObj?.FileSize, extension}];
const aiResponse = await aviatorFactory.sendPrompt(mediaPromptContent, "MEDIA", aiUseCaseMetaData);
HideModalBox();
if (aiResponse?.statusCode === 408) {
MsgBox("Request timed out. Please try again.");
return;
}
if (aiResponse?.status === 413) {
MsgBox("Attachment size exceeds 10MB. Please use a smaller file to generate Test cases");
return;
}
let generatedTestCases = aiResponse.answer;
if (typeof generatedTestCases === "string" && !/```json/i.test(generatedTestCases)) {
MsgBox(aiResponse.answer);
return;
}
if (!generatedTestCases) return;
generatedTestCases = aiResponse.needToParseIt ? parseAIResponse(aiResponse) : aiResponse.answer;
const selectedList = await ShowModalBox({
title: "Generate Tests with Aviator",
description: "Select suggestions that you want to add as manual tests",
suggestionsList: generatedTestCases,
isLoading: false,
});
if (!selectedList || selectedList.length === 0) {
MsgBox("Test case generation was cancelled.");
return false;
}
await CreateTestCasesFromAviator(selectedList);
} catch(e) {
HideModalBox();
MsgBox(e?.message || JSON.stringify(e));
}
}
}
Generate sub-requirements from requirements
This example shows how Aviator breaks a high-level requirement into user stories in the format "As a [user type], I want [functionality] so that [benefit]". The workflow generates user stories with acceptance criteria and allows users to select which stories to create as sub-requirements.
Prerequisites & dependencies:
- Utility and helper functions: stripHtml, removeSpecialChars, getNewLinesReplacedContent, getPreferredLanguage, getReqDescription, parseAIResponse, generateUserStories, createUserStoriesPrompt, safePost
- User story creation function
async function ActionCanExecute(actionName) {
if (actionName === "UserDefinedActions.GenerateUserStories") {
try {
// Get current requirement details
const reqId = Req_Fields("RQ_REQ_ID").Value;
const reqName = removeSpecialChars(Req_Fields("RQ_REQ_NAME").Value || "");
const reqDescription = removeSpecialChars(getReqDescription() || "");
if (!reqName && !reqDescription) {
MsgBox("Error: Both requirement name and description are empty. Cannot generate user stories.");
return false;
}
// Generate user stories using AI
const userStories = await generateUserStories(reqName, reqDescription);
HideModalBox();
if (!userStories || !Array.isArray(userStories) || userStories.length === 0) {
MsgBox("Error: Failed to generate user stories from AI.");
return false;
}
console.log("Generated User Stories:", userStories);
const selectedList = await ShowModalBox({
title: "We're generating your AI powered User Stories",
description: "Select suggestions that you want to add as sub-requirements",
suggestionsList: userStories,
isLoading: false,
type: 2
});
if (!selectedList || selectedList.length === 0) {
MsgBox("User story generation was cancelled.");
return false;
}
// Create the new requirements
await createUserStoryRequirements(selectedList, reqId);
} catch (error) {
HideModalBox();
MsgBox("Error: " + (error.message || JSON.stringify(error)));
}
}
}
Summarize a requirement
This example demonstrates how to use Aviator to automatically generate a concise summary of a requirement. The workflow extracts the requirement name, description, and comments, sends them to the AI service for summarizing, and displays the result in a message box.
Prerequisites & dependencies:
- Utility functions: stripHtml, removeSpecialChars, getNewLinesReplacedContent, getPreferredLanguage, getReqDescription
- Prompt helper function: getSummarizePromptContent
async function ActionCanExecute(actionName) {
if (actionName === "UserDefinedActions.Summarize") {
try {
const reqId = Req_Fields("RQ_REQ_ID").Value;
const req = TDConnection.ReqFactory.Item(reqId);
let name = req.Name;
/* ---------- pull description & comments ----------- */
const commentsRaw = (req.Comment || "").trim();
const descriptionRaw = (getReqDescription() || "").trim();
let comments = stripHtml(commentsRaw);
let description = stripHtml(descriptionRaw);
if (!description && !comments) {
MsgBox("Both the Description and Comment fields are empty for this requirement.");
return false; // swallow the click, nothing else to do
}
ShowModalBox({ title: `The requirement is being summarized using AI`, isLoading: true, type: 0 });
const aviatorFactory = TDConnection.AviatorFactory();
name = removeSpecialChars(getNewLinesReplacedContent(name));
description = removeSpecialChars(getNewLinesReplacedContent(description));
comments = removeSpecialChars(getNewLinesReplacedContent(comments));
const promptContent = getSummarizePromptContent(name, description, comments);
const aiResponse = await aviatorFactory.sendPrompt(promptContent);
HideModalBox();
if (aiResponse?.statusCode === 408) {
MsgBox("Request timed out. Please try again.");
return;
}
const summary = aiResponse?.answer;
if (!summary) {
MsgBox("Failed to obtain a summary from the AI service.");
return false;
}
MsgBox(summary);
} catch (e) {
MsgBox(e?.message || JSON.stringify(e));
HideModalBox();
}
}
}
Generate reproduction steps from video
This example shows how to use Aviator to analyze a video recording of a defect and automatically generate step-by-step reproduction steps. The workflow retrieves the first attachment from a defect, sends it to the AI service for analysis, and populates the defect description field with the generated steps.
Notes:
- The defect must have at least one video attachment
- Attachment file size must not exceed 10MB
- User must have permissions to edit the defect description field
Prerequisites & dependencies:
- Utility and prompt helper functions: getPreferredLanguage, getMimeType, getReproductionStepsFromVideoPrompt
async function ActionCanExecute(actionName) {
if (actionName === "UserDefinedActions.ReproductionStepsFromVideo") {
try {
const bugId = Bug_Fields("BG_BUG_ID").Value || '';
const bugFactory = TDConnection.BugFactory;
const bugObj = bugFactory.Item(bugId);
const attachments = bugObj.Attachments;
const attachList = attachments.NewList("");
if (attachList.length === 0) {
MsgBox("This defect doesn’t have any attachments. Please try this option with another defect, or attach media to the current defect and try again.");
return;
}
const attachObj = attachList?.[0];
const mediaPromptContent = getReproductionStepsFromVideoPrompt(attachObj, bugObj);
const aviatorFactory = TDConnection.AviatorFactory();
ShowModalBox({ title: "We're generating Reproduction steps from Video", isLoading: true, type: 5 });
const extension = attachObj?.Name?.split(".").pop().toLowerCase();
const aiUseCaseMetaData = [{"identifier": attachObj?.ID, "size": attachObj?.FileSize, extension}];
const aiResponse = await aviatorFactory.sendPrompt(mediaPromptContent, "MEDIA", aiUseCaseMetaData);
HideModalBox();
if (aiResponse?.statusCode === 408) {
MsgBox("Request timed out. Please try again.");
return;
}
if (aiResponse?.status === 413) {
MsgBox("Attachment size exceeds 10MB. Please use a smaller file to generate reproduction steps.");
return;
}
if (aiResponse?.answer) {
const value = await ShowModalBox({
title: "Reproduction steps generated by AI",
description: "Edit the content if required",
response: aiResponse?.answer,
isLoading: false,
type: 5,
});
if (value) {
Bug_Fields("BG_DESCRIPTION").Value = value;
}
}
} catch(e) {
HideModalBox();
MsgBox(e?.message || JSON.stringify(e));
}
}
}
Additional functions
safePost(entity, entityName, fieldValuesMap)
On-premises deployments only — Safely posts an entity to the database while gracefully handling missing required fields by prompting the user to fill them in, with intelligent caching for bulk operations.
Without this helper, if an administrator has configured mandatory custom fields on the project, a workflow script that creates entities (tests, requirements, design steps, test folders) will throw a raw server error and abort as soon as a required field is missing. safePost intercepts that error gracefully.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| entity | Object | The ALM entity object about to be posted (created via AddItem) |
| entityName | String | Entity type identifier: 'test', 'requirement', 'test-folder', or 'design-step' |
| fieldValuesMap | Object | Map of field name → value pairs already set on the entity. Used to track field values in error scenarios |
Behavior:
| Situation | What Happens |
|---|---|
| Post succeeds first time | Proceeds silently; function returns true |
| Server error (other than required field) | Shows error message to user; operation stops; function returns false |
| Required field missing | A modal dialog appears listing the missing fields. User fills them in and submits |
| User submits missing field values | Values are applied to entity, Post is retried automatically |
| Retry fails | Error message shown; operation stops; function returns false |
| User dismisses/cancels the modal | Operation stops; the same dialog will not appear again for this entity type in this session |
Bulk Creation Behavior (Important):
When a workflow action creates multiple entities in a loop (for example, bulk-generating test cases from an AI response), safePost exhibits smart caching behavior:
- User is prompted for missing required fields once per entity type per session. For example, if generating 10 tests and the test entity requires a custom field, the user is only prompted once.
- The values entered by the user are cached automatically and applied to all subsequent entities of the same type.
- If the user cancels the dialog, all remaining entities of that type are silently skipped without showing further dialogs.
- This behavior exactly matches the ALM Web Client's bulk-edit behavior.
Usage Note:
Customers who have previously written custom workflow scripts based on the SaaS example scripts and are migrating to on-premises, or who are extending the on-premises example script with new entity creation logic, should replace any bare entity.Post() calls with await safePost(entity, entityName, fieldValuesMap), particularly in projects that have mandatory custom fields configured.
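The once-per-entity-type caching described above can be sketched outside ALM with stand-in objects. Everything in this snippet is illustrative: the entity, the dialog stub, and the field name TS_USER_01 are made up and are not the real ALM API; only the caching pattern mirrors safePost below.

```javascript
// Sketch of safePost's caching idea with stand-ins (not the real ALM objects).
const requiredFieldsCache = {};

// Stand-in dialog: pretends the user typed a value; counts how often it appears.
let dialogShown = 0;
function showRequiredFieldsDialog() {
  dialogShown += 1;
  return { TS_USER_01: "Regression" }; // hypothetical required custom field
}

// Stand-in entity whose Post() fails until the required field is set.
function makeEntity() {
  return {
    Field: {},
    Post() {
      if (!this.Field.TS_USER_01) throw new Error("The field 'TS_USER_01' is required");
    },
  };
}

function safePostSketch(entity, entityName) {
  // Apply any values cached from an earlier prompt for this entity type.
  Object.assign(entity.Field, requiredFieldsCache[entityName] || {});
  try {
    entity.Post();
    return true;
  } catch (e) {
    if (!/is required/i.test(e.message)) return false;
    const values = showRequiredFieldsDialog(); // prompt once...
    requiredFieldsCache[entityName] = values;  // ...then reuse for later entities
    Object.assign(entity.Field, values);
    entity.Post();
    return true;
  }
}

// Bulk-create ten tests: only the first one triggers the dialog.
let created = 0;
for (let i = 0; i < 10; i++) {
  if (safePostSketch(makeEntity(), "test")) created += 1;
}
console.log(created, dialogShown); // 10 1
```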
const requiredFieldsCache = {}
/**
* Safely post an entity with required field error handling
* @param {object} entity - The entity object to post
* @param {string} entityName - The entity type name (e.g., 'test', 'requirement', 'design-step')
* @param {object} fieldValuesMap - Object with physical field names as keys and their values
* @returns {Promise} - Resolves when post succeeds, rejects on non-recoverable error
*/
async function safePost(entity, entityName, fieldValuesMap = {}) {
if (requiredFieldsCache[entityName] && requiredFieldsCache[entityName].__cancelled__) {
return false
}
if (requiredFieldsCache[entityName]) {
for (const fieldName in requiredFieldsCache[entityName]) {
if (fieldName === '__cancelled__') continue
try {
entity.Field[fieldName] = requiredFieldsCache[entityName][fieldName]
fieldValuesMap[fieldName] = requiredFieldsCache[entityName][fieldName]
} catch (e) {
console.warn('Could not apply cached field', fieldName, ':', e)
}
}
}
try {
entity.Post()
return true
} catch (error) {
let originalErrorMessage = error?.message || JSON.stringify(error)
try {
const errorData = JSON.parse(error?.data)
if (errorData?.Title) {
originalErrorMessage = errorData.Title
}
} catch (e) {}
const requiredFieldPattern = /The field '.+?' is required/i
if (!requiredFieldPattern.test(originalErrorMessage)) {
MsgBox(originalErrorMessage)
return false
}
try {
const fieldValues = await ShowRequiredFieldsWF(entityName, fieldValuesMap)
if (fieldValues) {
if (!requiredFieldsCache[entityName]) {
requiredFieldsCache[entityName] = {}
}
for (const fieldName in fieldValues) {
requiredFieldsCache[entityName][fieldName] = fieldValues[fieldName]
}
delete requiredFieldsCache[entityName].__cancelled__
for (const fieldName in fieldValues) {
if (fieldValues.hasOwnProperty(fieldName)) {
entity.Field[fieldName] = fieldValues[fieldName]
fieldValuesMap[fieldName] = fieldValues[fieldName]
}
}
try {
entity.Post()
return true
} catch (retryError) {
let retryErrorMessage = retryError?.message || JSON.stringify(retryError)
try {
const retryErrorData = JSON.parse(retryError?.data)
if (retryErrorData?.Title) {
retryErrorMessage = retryErrorData.Title
}
} catch (e) {}
MsgBox(retryErrorMessage)
return false
}
} else {
if (!requiredFieldsCache[entityName]) {
requiredFieldsCache[entityName] = {}
}
requiredFieldsCache[entityName].__cancelled__ = true
MsgBox(originalErrorMessage)
return false
}
} catch (showError) {
MsgBox(showError?.message || JSON.stringify(showError))
return false
}
}
}
stripHtml(html)
Removes all HTML tags and normalizes whitespace from a string, converting HTML entities to their text equivalents.
function stripHtml(html) {
if (!html) return "";
if (typeof document !== "undefined" && document.createElement) {
const tmp = document.createElement("div");
tmp.innerHTML = html;
return (tmp.textContent || tmp.innerText || "")
.replace(/\s+/g, " ") // collapse whitespace
.trim();
}
const decoded = html
// Remove <script>/<style> blocks first (just in case)
.replace(/<script[\s\S]*?<\/script>/gi, " ")
.replace(/<style[\s\S]*?<\/style>/gi, " ")
// Replace line-break type tags with spaces
.replace(/<\/?(br|p|div|li|tr|td)[^>]*>/gi, " ")
// Strip any remaining tags
.replace(/<\/?[^>]+>/g, " ")
// Decode the most common entities
.replace(/&nbsp;/gi, " ")
.replace(/&amp;/gi, "&")
.replace(/&lt;/gi, "<")
.replace(/&gt;/gi, ">")
.replace(/&quot;/gi, '"')
.replace(/&#39;/gi, "'");
return decoded.replace(/\s+/g, " ").trim();
}
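A quick standalone check of the regex fallback branch (the path taken when no DOM is available), condensed to the tag-stripping and entity-decoding steps; the sample HTML is illustrative:

```javascript
// Condensed fallback path of stripHtml: strip tags, decode common entities,
// then collapse whitespace.
const html = "<p>Steps:&nbsp;log&nbsp;in &amp; verify <b>dashboard</b></p>";
const text = html
  .replace(/<\/?[^>]+>/g, " ") // strip all tags
  .replace(/&nbsp;/gi, " ")    // decode non-breaking spaces
  .replace(/&amp;/gi, "&")     // decode ampersands
  .replace(/\s+/g, " ")        // collapse whitespace
  .trim();
console.log(text); // "Steps: log in & verify dashboard"
```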
removeSpecialChars(str)
Strips all non-alphanumeric characters (except spaces) from a string, leaving only letters, numbers, and spaces.
function removeSpecialChars(str) {
return str?.replace(/[^a-zA-Z0-9 ]/g, '');
}
getNewLinesReplacedContent(str)
Replaces all newline and carriage return characters with spaces to normalize text formatting.
function getNewLinesReplacedContent(str) {
return str?.replace(/[\r\n]+/g, " ");
}
getPreferredLanguage()
Returns the user's preferred language setting from the ALM system for use in AI prompt formatting.
function getPreferredLanguage() {
return TDConnection.GetPreferredLanguage();
}
parseAIResponse(aiResponse)
Parses the AI response and returns structured JSON when the service wraps the result in markdown code fences or other formatting.
/**
* Parse AI response to extract JSON content
* Handles responses wrapped in markdown code blocks or other formatting
* @param {object} aiResponse - The AI response object
* @returns {any} - Parsed JSON content
*/
function parseAIResponse(aiResponse) {
try {
let content = aiResponse.answer;
if (!content) return null;
// Remove markdown code blocks if present
if (typeof content === 'string') {
content = content.replace(/```json|```/g, '').trim();
console.log('Content after removing markdown:', content);
// Try to parse as JSON
try {
return JSON.parse(content);
} catch (e) {
console.log(e)
}
}
// If content is already an object/array, return it
return content;
} catch (error) {
console.error('Error parsing AI response:', error);
return aiResponse.answer;
}
}
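The core of parseAIResponse can be exercised on its own: strip the markdown fence, then JSON.parse. The sample answer below is illustrative of the shape the workflows above expect from Aviator (the fence string is built with repeat to avoid literal triple backticks inside this snippet):

```javascript
// Condensed core of parseAIResponse: remove the markdown fence, then parse JSON.
const fence = "`".repeat(3); // "```", built indirectly for this snippet
const answer = fence + 'json\n[{"id":"TC-001","name":"User Login with Valid Credentials"}]\n' + fence;
const cleaned = answer.replace(new RegExp(fence + "json|" + fence, "g"), "").trim();
const testCases = JSON.parse(cleaned);
console.log(testCases[0].id); // "TC-001"
```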
getReqDescription()
Retrieves the description or comment field from the current requirement, checking preferred field names and falling back to fields labeled "description."
function getReqDescription() {
const preferredNames = ["RQ_REQ_COMMENT"];
for (const name of preferredNames) {
try {
const f = Req_Fields.Field(name);
if (f && f.Value) return f.Value;
} catch (_) { /* field not present – skip */ }
}
/* fall-back: scan every field until the label contains “description” */
for (let i = 1; i <= Req_Fields.Count; i++) {
const fld = Req_Fields.FieldById(i);
if (fld && /description/i.test(fld.FieldLabel) && fld.Value) {
return fld.Value;
}
}
return "";
}
getMimeType(fileName)
Returns the MIME type for a file name so attachments can be sent to Aviator with the correct media metadata.
function getMimeType(fileName) {
if (!fileName || typeof fileName !== "string") return null;
const extension = fileName.split(".").pop().toLowerCase();
const mimeTypes = {
// Images
png: "image/png",
jpeg: "image/jpeg",
jpg: "image/jpeg",
webp: "image/webp",
heic: "image/heic",
heif: "image/heif",
// Audio
wav: "audio/wav",
mp3: "audio/mp3",
aiff: "audio/aiff",
aac: "audio/aac",
ogg: "audio/ogg",
flac: "audio/flac",
// Video
mp4: "video/mp4",
mpeg: "video/mpeg",
mpg: "video/mpeg",
mov: "video/quicktime", // standardized
avi: "video/x-msvideo",
flv: "video/x-flv",
webm: "video/webm",
wmv: "video/x-ms-wmv",
"3gp": "video/3gpp",
"3gpp": "video/3gpp",
"3g2": "video/3gpp2",
mkv: "video/x-matroska",
// Documents
txt: "text/plain",
// Fallback
default: "application/octet-stream"
};
return mimeTypes[extension] || mimeTypes.default;
}
getDynamicMediaPrompt(mimeType)
Builds a short prompt instruction based on the attachment type so Aviator knows whether to view, watch, listen to, or read the file.
const getDynamicMediaPrompt = (mimeType) => {
let basePrompt = "Handle this file";
if (mimeType.startsWith("image/")) {
basePrompt = "View the image";
} else if (mimeType.startsWith("video/")) {
basePrompt = "Watch the video";
} else if (mimeType.startsWith("audio/")) {
basePrompt = "Listen to the audio";
} else if (mimeType.startsWith("text/")) {
basePrompt = "Read the text file";
} else if (mimeType === "application/octet-stream") {
basePrompt = "Unknown file type";
}
return basePrompt;
};
AI prompt content
This section contains the prompt-building helper functions used by the Aviator workflow examples.
function mediaPrompt(attachment) {
const language = getPreferredLanguage();
let originalPrompt = "[{\"type\":\"text\",\"content\":\"<OBJECTIVE_AND_PERSONA - Start> 1. You are a seasoned professional in software testing and quality assurance. 2. You specialize in analyzing requirements, identifying risks and edge cases, and designing test cases that cover critical functionality, including positive and negative scenarios, boundary conditions, potential error states and data-driven tests. 3. Your task is to design up to 7 test cases, covering the most critical aspects of the requirement mentioned in CONTEXT. <OBJECTIVE_AND_PERSONA - End> <INSTRUCTIONS - Start> - Output should be in the below OUTPUT_FORMAT or one of the following ['The provided requirement details are insufficient to generate comprehensive test cases.','I am sorry, but I cannot assist with that.'] - If CONTEXT includes Existing-Test details, exclude these tests from your results to avoid generating test cases with duplicate or similar titles. - If CONTEXT includes Sub-Requirement details, use them solely as additional context to clarify and enhance your understanding of the main requirement. Do not treat them as separate requirements. - If the requirement's name or description, or the name and description of any child requirement, lacks sufficient details to create meaningful test cases, respond with: 'The provided requirement details are insufficient to generate comprehensive test cases.' - If provided information is overly simple, incomplete, or completely blank, respond directly with: 'The provided requirement details are insufficient to generate comprehensive test cases.' 
- Rely exclusively on the information provided in CONTEXT without assumptions, external references, or external data. - Never request additional details or clarification from the user. - Each generated test case must adhere exactly to the following JSON structure and all test cases must be encapsulated within a JSON array. <INSTRUCTIONS - End> <OUTPUT_FORMAT - Start> \`\`\`json {{ 'id': 'TC-001', 'name': 'User Login with Valid Credentials', 'description': 'Verify that a user can successfully log in with valid credentials.', 'test_type': 'Functional', 'priority': 'High', 'steps': [ {{ 'stepNumber': 1, 'description': 'Navigate to the login page', 'expected_result': 'Login page loads successfully' }}, {{ 'stepNumber': 2, 'description': 'Enter a valid username and password', 'expected_result': 'Username and password fields accept input' }}, {{ 'stepNumber': 3, 'description': 'Click the Login button', 'expected_result': 'User is redirected to the dashboard' }} ] }} \`\`\` <OUTPUT_FORMAT - End> <CONTEXT - Start>MEDIA_BASED_PROMPT and generate test cases <CONTEXT - End>LANGUAGE_CONTEXT\"}, AVIATOR_MEDIA_DYNAMIC_PROMPT]";
const mimeType = getMimeType(attachment?.Name);
const dynamicContent = {"mime_type": `${mimeType}`, "url": `identifier$${attachment?.ID}`};
const dynamicMediaPrompt = getDynamicMediaPrompt(mimeType);
let updatedString = originalPrompt.replace(/AVIATOR_MEDIA_DYNAMIC_PROMPT/g, JSON.stringify(dynamicContent));
updatedString = updatedString.replace(/MEDIA_BASED_PROMPT/g, dynamicMediaPrompt);
updatedString = updatedString.replace(/LANGUAGE_CONTEXT/g, `your final answer should be in ${language}, if not able to response fallback to English`);
return updatedString;
}
function getSummarizePromptContent(name, description, comments) {
/* ---------- build a short summarisation prompt ----- */
const language = getPreferredLanguage();
return `Summarize the NAME, DESCRIPTION and COMMENT below in ≤100 words.\n### NAME: ${name}\nDESCRIPTION: ${description || "<empty>"}\nCOMMENT: ${comments || "<empty>"}\n### your final answer should be in ${language}, if not able to response fallback to English`;
}
function prompt(name, description){
const language = getPreferredLanguage();
return `<OBJECTIVE_AND_PERSONA - Start> 1. You are a seasoned professional in software testing and quality assurance. 2. You specialize in analyzing requirements, identifying risks and edge cases, and designing test cases that cover critical functionality, including positive and negative scenarios, boundary conditions, potential error states and data-driven tests. 3. Your task is to design up to 7 test cases, covering the most critical aspects of the requirement mentioned in CONTEXT. <OBJECTIVE_AND_PERSONA - End> <INSTRUCTIONS - Start> - Output should be in the below OUTPUT_FORMAT or one of the following ['The provided requirement details are insufficient to generate comprehensive test cases.','I am sorry, but I cannot assist with that.'] - If CONTEXT includes Existing-Test details, exclude these tests from your results to avoid generating test cases with duplicate or similar titles. - If CONTEXT includes Sub-Requirement details, use them solely as additional context to clarify and enhance your understanding of the main requirement. Do not treat them as separate requirements. - If the requirement's name or description, or the name and description of any child requirement, lacks sufficient details to create meaningful test cases, respond with: 'The provided requirement details are insufficient to generate comprehensive test cases.' - If provided information is overly simple, incomplete, or completely blank, respond directly with: 'The provided requirement details are insufficient to generate comprehensive test cases.' - Rely exclusively on the information provided in CONTEXT without assumptions, external references, or external data. 
- Never request additional details or clarification from the user. - Each generated test case must adhere exactly to the following JSON structure and all test cases must be encapsulated within a JSON array. <INSTRUCTIONS - End> <OUTPUT_FORMAT - Start> \`\`\`json {{ 'id': 'TC-001', 'name': 'User Login with Valid Credentials', 'description': 'Verify that a user can successfully log in with valid credentials.', 'test_type': 'Functional', 'priority': 'High', 'steps': [ {{ 'stepNumber': 1, 'description': 'Navigate to the login page', 'expected_result': 'Login page loads successfully' }}, {{ 'stepNumber': 2, 'description': 'Enter a valid username and password', 'expected_result': 'Username and password fields accept input' }}, {{ 'stepNumber': 3, 'description': 'Click the Login button', 'expected_result': 'User is redirected to the dashboard' }} ] }} \`\`\` <OUTPUT_FORMAT - End> <CONTEXT - Start>${name} ${description} <CONTEXT - End> your final answer should be in ${language}, if not able to response fallback to English`;
}
// Create the prompt for user stories generation
function createUserStoriesPrompt(reqName, reqDescription) {
const language = getPreferredLanguage();
const description = stripHtml(reqDescription);
return `<OBJECTIVE_AND_PERSONA - Start> 1. You are a seasoned Business Analyst with expertise in agile methodologies and user story creation. 2. You specialize in analyzing requirements and breaking them down into clear, actionable user stories that follow the standard format. 3. Your task is to generate exactly three user stories based on the requirement mentioned in CONTEXT. <OBJECTIVE_AND_PERSONA - End> <INSTRUCTIONS - Start> - Output should be in the below OUTPUT_FORMAT only. - Generate exactly three user stories, no more and no less. - Each user story description must follow the format: "As a [user type], I want [functionality] so that [benefit]". - Rely exclusively on the information provided in CONTEXT without assumptions, external references, or external data. - Never request additional details or clarification from the user. - Each generated user story must adhere exactly to the following JSON structure and all user stories must be encapsulated within a JSON array. - Do not wrap your answer in any code fences or extra text—just the raw JSON array. <INSTRUCTIONS - End> <OUTPUT_FORMAT - Start> \`\`\`json [{{ 'id': 'US-001', 'name': 'User Story 1', 'description': 'As a [user type], I want [functionality] so that [benefit]', 'acceptanceCriteria': 'Clear acceptance criteria describing when this story is considered complete' }}] \`\`\` <OUTPUT_FORMAT - End> <CONTEXT - Start>Requirement Name: ${reqName} Requirement Description: ${description} <CONTEXT - End> your final answer should be in ${language}, if not able to response fallback to English`;
}
// Reproduction Steps from Video
function getReproductionStepsFromVideoPrompt(attachment, bugObj) {
const language = getPreferredLanguage();
let originalPrompt = `[{\"type\":\"text\",\"content\":\" Your task is to watch the provided video, which describes a defect in our product, and generate clear reproduction steps to recreate the issue shown. Here is the data: === start of data === defect ID: ${bugObj.ID}, defect name: ${bugObj.Name}, Today date: ${new Date().toDateString()}UTC ***ISSUE SPECIFICATIONS*** - section description: This section details the issue encountered. It should clearly explain the unexpected behavior or problem observed. It might include steps to reproduce the defect. - data: Description: None ***ISSUE FEATURE*** - section description: The parent of a defect is feature which is a higher-level entity in our application lifecycle management system. This entity represents a broader goal or objective, under which multiple defects or user stories, including ours, are developed. ***ISSUE ATTACHMENTS*** - section description: The linked attachments videos. These attachments are video that may have the reproduction steps of the defect. Attachments linked to the issue are distinguished using YAML format. === end of data === === start of User request === Request: Generate reproduction steps for a defect. For generating the reproduction steps of the defect, analyze the video and provide the steps. === end of User request === LANGUAGE_CONTEXT\"}, AVIATOR_MEDIA_DYNAMIC_PROMPT]`;
const mimeType = getMimeType(attachment?.Name);
const dynamicContent = {"mime_type": `${mimeType}`, "url": `identifier$${attachment?.ID}`};
let updatedString = originalPrompt.replace(/AVIATOR_MEDIA_DYNAMIC_PROMPT/g, JSON.stringify(dynamicContent));
updatedString = updatedString.replace(/LANGUAGE_CONTEXT/g, `Your final answer should be in ${language}; if you cannot respond in that language, fall back to English`);
return updatedString;
}
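The getMimeType helper used above is not shown in this example. A minimal sketch, mapping the attachment's file extension to a MIME type; the extension list is an assumption, so extend the map to match the attachment types your project allows:

```javascript
// Hypothetical sketch of the getMimeType helper referenced above.
// The extension-to-MIME map is an assumption; extend it as needed.
function getMimeType(fileName) {
  const extensionToMime = {
    mp4: "video/mp4",
    webm: "video/webm",
    mov: "video/quicktime",
    avi: "video/x-msvideo"
  };
  // Take the text after the last dot, case-insensitively
  const ext = (fileName || "").split(".").pop().toLowerCase();
  return extensionToMime[ext] || "application/octet-stream";
}
```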
Test case creation from Aviator response
CreateTestCasesFromAviator(list)
Manages the creation of multiple test cases generated by Aviator, handling folder creation, field validation through safePost, and result notification.
async function CreateTestCasesFromAviator(list) {
if (list?.length > 0) {
ShowModalBox({ title: "Creating Your Selected Test Cases...", description: "We’re processing your selections. The system is adding details and designing test steps behind the scenes", isLoading: true, type: 1, isCreating: true });
const testFolder = await generateAviatorTestFolder();
if (!testFolder) {
return;
}
// Re-show modal in case it was hidden by required fields dialog
ShowModalBox({ title: "Creating Your Selected Test Cases...", description: "We're processing your selections. The system is adding details and designing test steps behind the scenes", isLoading: true, type: 1, isCreating: true });
const testsList = await generateTests(testFolder, list);
HideModalBox();
if (testsList && testsList.length > 0) {
const firstTest = testsList[0];
ShowModalBox({ title: `Action Success!`, isSuccess: true, result: firstTest });
}
}
}
getAviatorFolder()
Async function that finds or creates the base 'Tests from aviator' test folder, using safePost for error handling.
async function getAviatorFolder() {
const testFolderFactory = TDConnection.TestFolderFactory;
const testFolderFactoryFilter = testFolderFactory.Filter;
testFolderFactoryFilter.Filter["AL_DESCRIPTION"] = "'Tests from aviator'";
const testFolders = testFolderFactoryFilter.NewList();
const testFoldersRoot = testFolderFactory.Root;
// Return the existing test folder, if one is already under the root
if (testFolders.length !== 0) {
for (let i = 0; i < testFolders.length; i++) {
const testFolder = testFolders[i];
if (testFolder.Field["AL_FATHER_ID"] == testFoldersRoot.ID) {
return testFolder;
}
}
}
// Create new aviator folder
const newTestFolder = testFolderFactory.AddItem(testFoldersRoot);
const fieldValuesMap = {};
newTestFolder.Field["AL_DESCRIPTION"] = "Tests from aviator";
fieldValuesMap["AL_DESCRIPTION"] = "Tests from aviator";
const success = await safePost(newTestFolder, 'test-folder', fieldValuesMap);
return success ? newTestFolder : null;
}
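The safePost helper is listed in the prerequisites but not shown. A minimal sketch, assuming the entity exposes a synchronous Post() method; the real helper in your project may additionally surface a required-fields dialog and retry the post, which is why the callers re-show their modal afterwards:

```javascript
// Hypothetical minimal sketch of the safePost helper from the prerequisites.
// Assumption: entity.Post() throws on validation failure. The real helper
// may also prompt the user for missing required fields and retry.
async function safePost(entity, entityType, fieldValuesMap) {
  try {
    entity.Post();
    return true;
  } catch (error) {
    // Log the attempted field values to simplify troubleshooting
    console.error(`Failed to post ${entityType}:`, error?.message ?? error, fieldValuesMap);
    return false;
  }
}
```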
generateAviatorTestFolder()
Async function that creates a requirement-specific test folder using safePost for field validation.
async function generateAviatorTestFolder() {
const aviatorFolder = await getAviatorFolder();
const reqName = Req_Fields("RQ_REQ_NAME").Value;
const reqId = Req_Fields("RQ_REQ_ID").Value;
// Check if a folder already exists for this requirement
const existingFolder = getExistingTestFolder(aviatorFolder, reqId);
if (existingFolder) {
return existingFolder;
}
// Create new folder with requirement name
const folderName = `[Workflow] ${reqName}`;
const testFolderFactory = aviatorFolder.TestFolderFactory;
const newTestFolder = testFolderFactory.AddItem(aviatorFolder);
const fieldValuesMap = {};
newTestFolder.Field["AL_DESCRIPTION"] = folderName;
fieldValuesMap["AL_DESCRIPTION"] = folderName;
// Store requirement ID for tracking
newTestFolder.Field["AL_ABSOLUTE_PATH"] = `aviator_req_${reqId}`;
fieldValuesMap["AL_ABSOLUTE_PATH"] = `aviator_req_${reqId}`;
const success = await safePost(newTestFolder, 'test-folder', fieldValuesMap);
return success ? newTestFolder : null;
}
getExistingTestFolder(aviatorFolder, reqId)
Checks for an existing test folder matching the current requirement ID and returns it or null if not found.
/**
* Find existing test folder for this requirement
* @param {Object} aviatorFolder - The parent aviator folder
* @param {string} reqId - The requirement ID
* @returns {Object|null} The existing folder or null if not found
*/
function getExistingTestFolder(aviatorFolder, reqId) {
try {
const testFolderFactory = aviatorFolder.TestFolderFactory;
const allFolders = testFolderFactory.NewList();
// Search for folder with matching requirement ID
for (let i = 0; i < allFolders.length; i++) {
const folder = allFolders[i];
const folderPath = folder.Field["AL_ABSOLUTE_PATH"] || "";
// Check if this folder was created for the same requirement
if (folderPath === `aviator_req_${reqId}`) {
return folder;
}
}
return null;
} catch (error) {
console.error("Error checking existing test folders:", error);
return null;
}
}
generateTests(testFolder, aiResponse)
Async function that creates individual test entities from AI-generated test cases, using safePost to post each test with the proper field mapping.
async function generateTests(testFolder, aiResponse) {
if (!Array.isArray(aiResponse) || aiResponse.length === 0) return;
const testFactory = testFolder.TestFactory;
const newTests = [];
for (let i = 0; i < aiResponse.length; i++) {
const aiItem = aiResponse[i];
const newTest = testFactory.AddItem(null);
const fieldValuesMap = {};
newTest.Field["TS_DESCRIPTION"] = "<html>--- This test is initially created by Aviator in workflow --- <br>" + aiItem.description + "</html>";
fieldValuesMap["TS_DESCRIPTION"] = "<html>--- This test is initially created by Aviator in workflow --- <br>" + aiItem.description + "</html>";
newTest.Field["TS_NAME"] = aiItem.name;
fieldValuesMap["TS_NAME"] = aiItem.name;
newTest.Field["TS_ORIGIN"] = 'aviator';
fieldValuesMap["TS_ORIGIN"] = 'aviator';
newTest.Field["TS_RESPONSIBLE"] = User.UserName;
fieldValuesMap["TS_RESPONSIBLE"] = User.UserName;
newTest.Field["TS_IS_AVIATOR_GENERATED"] = "Yes";
fieldValuesMap["TS_IS_AVIATOR_GENERATED"] = "Yes";
const success = await safePost(newTest, 'test', fieldValuesMap);
if (!success) continue;
await generateDesignSteps(newTest, aiItem.steps);
generateCoverages([newTest.ID]);
newTests.push(newTest.ID);
}
return newTests;
}
generateDesignSteps(test, steps)
Async function that creates design steps for a test, with support for expected results field, using safePost for reliable entity creation.
async function generateDesignSteps(test, steps) {
if (!Array.isArray(steps) || steps.length === 0) return;
const designStepFactory = test.DesignStepFactory;
for (let i = 0; i < steps.length; i++) {
const step = steps[i];
const newStep = designStepFactory.AddItem(null);
const fieldValuesMap = {};
newStep.Field["DS_STEP_NAME"] = "Step " + (i + 1);
fieldValuesMap["DS_STEP_NAME"] = "Step " + (i + 1);
newStep.Field["DS_DESCRIPTION"] = "<html>" + step['description'] + "</html>";
fieldValuesMap["DS_DESCRIPTION"] = "<html>" + step['description'] + "</html>";
const expectedResult = step['expected_result'] || step['expected'] || '';
if (expectedResult) {
newStep.Field["DS_EXPECTED"] = "<html>" + expectedResult + "</html>";
fieldValuesMap["DS_EXPECTED"] = "<html>" + expectedResult + "</html>";
}
newStep.Field["DS_STEP_ORDER"] = i + 1;
fieldValuesMap["DS_STEP_ORDER"] = i + 1;
await safePost(newStep, 'design-step', fieldValuesMap);
}
}
generateCoverages(tests)
Links created tests to the requirement via the coverage factory, establishing the relationship between tests and requirements.
function generateCoverages(tests) {
if (!Array.isArray(tests) || tests.length === 0) return;
const reqFactory = TDConnection.ReqFactory;
const req = reqFactory.Item(Req_Fields('RQ_REQ_ID').Value);
for (let i = 0; i < tests.length; i++) {
req.AddCoverage(tests[i]);
}
}
User story creation from Aviator
generateUserStories(reqName, reqDescription)
Sends the requirement details to Aviator, parses the returned suggestions, and returns the generated user stories for later selection and creation.
async function generateUserStories(reqName, reqDescription) {
try {
const aviatorFactory = TDConnection.AviatorFactory();
const promptContent = createUserStoriesPrompt(getNewLinesReplacedContent(reqName), getNewLinesReplacedContent(reqDescription));
ShowModalBox({ title: "We're generating your AI powered User Stories", isLoading: true, type: 2 });
const aiResponse = await aviatorFactory.sendPrompt(promptContent);
HideModalBox();
if (aiResponse?.statusCode === 408) {
MsgBox("Request timed out. Please try again.");
return;
}
if (!aiResponse) {
throw new Error("Failed to get AI response for user stories generation");
}
let generatedUserStories = aiResponse.answer;
if (typeof generatedUserStories === "string" && !/```json/i.test(generatedUserStories)) {
MsgBox(aiResponse.answer);
return;
}
if (!generatedUserStories) return;
generatedUserStories = aiResponse.needToParseIt ? parseAIResponse(aiResponse) : aiResponse.answer;
return generatedUserStories;
} catch (error) {
HideModalBox();
console.error("Error generating user stories:", error);
throw error;
}
}
parseUserStoriesResponse(response)
Parses an AI response into a normalized list of user stories, with fallbacks for different response formats.
function parseUserStoriesResponse(response) {
try {
// Try to extract JSON from response
if (hasJsonBlock(response)) {
const jsonStr = extractJsonContent(response);
const parsed = JSON.parse(jsonStr);
return Array.isArray(parsed) ? parsed : [parsed];
}
// Try to extract array content
const arrayStr = extractContentWithValidBrackets(response);
if (arrayStr && arrayStr !== 'Brackets not paired') {
const formattedStr = arrayStr.indexOf('"') === -1 ?
arrayStr.replace(/'/g, '"') : arrayStr;
const parsed = JSON.parse(formattedStr);
return Array.isArray(parsed) ? parsed : [parsed];
}
// If JSON parsing fails, try to manually parse user stories
return parseUserStoriesManually(response);
} catch (error) {
console.error("Error parsing user stories response:", error);
// Fallback to manual parsing
return parseUserStoriesManually(response);
}
}
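The helpers hasJsonBlock, extractJsonContent, and extractContentWithValidBrackets are referenced above but not shown. Possible sketches, assuming the AI wraps its JSON in a json code fence or returns a bare array; the 'Brackets not paired' sentinel matches the check in parseUserStoriesResponse:

```javascript
// Hypothetical sketches of the parsing helpers referenced above.
// Your project's implementations may differ.
function hasJsonBlock(response) {
  // True when the response contains a fenced ```json ... ``` block
  return /```json[\s\S]*?```/i.test(response);
}

function extractJsonContent(response) {
  // Return the contents of the first ```json ... ``` block, trimmed
  const match = response.match(/```json([\s\S]*?)```/i);
  return match ? match[1].trim() : "";
}

function extractContentWithValidBrackets(response) {
  // Return the first top-level [...] span with balanced brackets,
  // or the 'Brackets not paired' sentinel when they don't match up.
  const start = response.indexOf("[");
  if (start === -1) return "Brackets not paired";
  let depth = 0;
  for (let i = start; i < response.length; i++) {
    if (response[i] === "[") depth++;
    else if (response[i] === "]") {
      depth--;
      if (depth === 0) return response.slice(start, i + 1);
    }
  }
  return "Brackets not paired";
}
```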
parseUserStoriesManually(response)
Provides a fallback parser that extracts user stories line by line when the AI response cannot be parsed as JSON.
// Manual parsing as fallback
function parseUserStoriesManually(response) {
const stories = [];
const lines = response.split('\n');
let currentStory = null;
lines.forEach(line => {
line = line.trim();
// Look for "As a" pattern (start of user story)
if (line.toLowerCase().includes('as a ') && line.toLowerCase().includes('i want')) {
if (currentStory) {
stories.push(currentStory);
}
currentStory = {
title: `User Story ${stories.length + 1}`,
description: line,
acceptanceCriteria: ""
};
} else if (currentStory && line.length > 0 && !line.startsWith('```')) {
// Add additional details to current story
if (!currentStory.acceptanceCriteria) {
currentStory.acceptanceCriteria = line;
}
}
});
if (currentStory) {
stories.push(currentStory);
}
// Ensure we have exactly 3 stories
while (stories.length < 3) {
stories.push({
title: `User Story ${stories.length + 1}`,
description: "As a user, I want to interact with the system so that I can accomplish my goals",
acceptanceCriteria: "Basic functionality should work as expected"
});
}
return stories.slice(0, 3); // Return only first 3
}
createUserStoryRequirements(userStories, parentReqId)
Creates child requirements from the selected user stories and saves them with the relevant metadata and description content.
// Create new requirements from user stories
async function createUserStoryRequirements(userStories, parentReqId) {
const reqFactory = TDConnection.ReqFactory;
const parentReq = reqFactory.Item(parentReqId);
const createdRequirements = [];
try {
ShowModalBox({ title: "Creating Your selected User Stories...", description: "We’re processing your selections. The system is adding details and sub-requirements behind the scenes", isLoading: true, type: 2, isCreating: true });
for (let i = 0; i < userStories.length; i++) {
const story = userStories[i];
// Create the new requirement under the original requirement
const newReq = reqFactory.AddItem(parentReq);
const fieldValuesMap = {};
newReq.ParentId = parentReqId;
// Set basic fields
newReq.Field["RQ_REQ_NAME"] = story.name;
fieldValuesMap["RQ_REQ_NAME"] = story.name;
// Attempt to set description
const fullDescription = `<html>
<h3>User Story:</h3>
<p>${story.description}</p>
<h3>Acceptance Criteria:</h3>
<p>${story.acceptanceCriteria || "To be defined"}</p>
<hr>
<small><i>Generated automatically from parent requirement</i></small>
</html>`;
const descriptionFields = ["RQ_REQ_COMMENT"];
for (const fieldName of descriptionFields) {
if (newReq.Field[fieldName] !== undefined) {
newReq.Field[fieldName] = fullDescription;
fieldValuesMap[fieldName] = fullDescription;
break;
}
}
// Try to set type as Functional
try {
newReq.Field["RQ_TYPE_ID"] = "3";
fieldValuesMap["RQ_TYPE_ID"] = "3";
} catch (e) {
console.log("Could not set type to Functional:", e.message);
}
// Optional additional fields
try {
newReq.Field["RQ_REQ_PRIORITY"] = "Medium";
fieldValuesMap["RQ_REQ_PRIORITY"] = "Medium";
} catch (_) {}
try {
newReq.Field["RQ_REQ_AUTHOR"] = User.UserName;
fieldValuesMap["RQ_REQ_AUTHOR"] = User.UserName;
} catch (_) {}
// Save the new requirement
const success = await safePost(newReq, 'requirement', fieldValuesMap);
if (!success) {
continue;
}
createdRequirements.push({
id: newReq.ID,
name: story.name
});
}
// Refresh parent so the UI updates (if needed)
try { parentReq.Refresh(); } catch (_) {}
HideModalBox();
if (createdRequirements?.length) {
ShowModalBox({ title: `Action Success!`, isSuccess: true, result: createdRequirements?.[0]?.id, type: 2 });
}
return createdRequirements;
} catch (error) {
console.error("Error creating user story requirements:", error);
throw new Error(`Failed to create requirements: ${error.message}`);
}
}
User defined field as context for requirement creation
This example shows how to include selected user-defined requirement fields as additional context during requirement creation.
async function Req_CanPost() {
try {
const isUserDefinedFieldsDefined = Req_UserDefinedFields?.some((item) => {
console.log("item", Req_Fields(item.FieldName));
return Req_Fields(item.FieldName)?.Value;
});
console.log('isUserDefinedFieldsDefined', isUserDefinedFieldsDefined);
if (isUserDefinedFieldsDefined) {
const selectedUDFs = await ShowModalBox({
title: "User Defined Fields(UDF) - List",
description: 'Please choose one or more fields from the list below.',
isLoading: false,
type: 4,
suggestionsList: Req_UserDefinedFields,
multiSelect: true
});
if (selectedUDFs?.length > 0) {
const name = Req_Fields("RQ_REQ_NAME").Value || "";
const rawDesc = getReqDescription();
const description = stripHtml(rawDesc);
let checksPerformed = 0;
// Check compliance for each selected UDF
for (let i = 0; i < selectedUDFs.length; i++) {
const udfFieldName = selectedUDFs[i];
const udfValue = Req_Fields(udfFieldName)?.Value || "";
console.log("Checking UDF:", udfFieldName, "Value:", udfValue);
if (udfValue) {
checksPerformed++;
const prompt = `You are a ${udfValue} compliance expert. Check the requirement for ${udfValue} alignment. Name: ${name} Description: ${description} Respond ONLY with JSON, e.g. {compliant: true} or {compliant: false, reason: explanation of why non-compliant}`;
ShowModalBox({
title: `Checking ${udfValue} Compliance (${checksPerformed} of ${selectedUDFs.length})`,
isLoading: true,
type: 4
});
const aviatorFactory = TDConnection.AviatorFactory();
const aiResponse = await aviatorFactory.sendPrompt(prompt);
HideModalBox();
if (aiResponse?.statusCode === 408) {
MsgBox("Request timed out. Please try again.");
return false;
}
let response = aiResponse.answer;
if (!response) return false;
console.log('response', response);
const cleanedStringJson = response.replace(/```json|```/g, '').trim();
const parsedResponse = parseStringJson(cleanedStringJson);
console.log('parsedResponse', parsedResponse);
if (!parsedResponse?.compliant) {
// If any field fails compliance, show reason and return false
MsgBox(`${udfValue} compliance failed: ${parsedResponse?.reason || 'Requirement is not compliant'}`);
return false;
}
// Field passed, continue to next
console.log(`${udfValue} compliance passed`);
}
}
// All selected fields passed compliance
if (checksPerformed > 0) {
MsgBox(`All ${checksPerformed} compliance check(s) passed successfully!`);
}
return true;
}
// User cancelled or didn't select anything
return true;
}
// No UDF fields with values, allow posting
return true;
} catch(error) {
console.error("Error in Req_CanPost:", error);
return false;
}
}
function parseStringJson(str) {
try {
return JSON.parse(str);
} catch (e) {
return null;
}
}
/* Generate User Stories from Aviator - End */
function stripHtml(html) {
if (!html) return "";
if (typeof document !== "undefined" && document.createElement) {
const tmp = document.createElement("div");
tmp.innerHTML = html;
return (tmp.textContent || tmp.innerText || "")
.replace(/\s+/g, " ") // collapse whitespace
.trim();
}
const decoded = html
// Remove <script>/<style> blocks first (just in case)
.replace(/<script[\s\S]*?<\/script>/gi, " ")
.replace(/<style[\s\S]*?<\/style>/gi, " ")
// Replace line-break type tags with spaces
.replace(/<\/?(br|p|div|li|tr|td)[^>]*>/gi, " ")
// Strip any remaining tags
.replace(/<\/?[^>]+>/g, " ")
// Decode the most common entities
.replace(/&nbsp;/gi, " ")
.replace(/&lt;/gi, "<")
.replace(/&gt;/gi, ">")
.replace(/&quot;/gi, '"')
.replace(/&#39;/gi, "'")
.replace(/&amp;/gi, "&");
return decoded.replace(/\s+/g, " ").trim();
}
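The getNewLinesReplacedContent and removeSpecialChars helpers from the prerequisites are not shown. Minimal sketches, assuming newlines should become spaces and that only letters, digits, whitespace, and basic punctuation are kept; the exact character whitelist is an assumption, so adjust it to your project's rules:

```javascript
// Hypothetical sketch: flatten Windows and Unix line breaks to spaces
// so the content fits on a single line of the prompt.
function getNewLinesReplacedContent(content) {
  return (content || "").replace(/\r?\n/g, " ");
}

// Hypothetical sketch: keep letters, digits, whitespace, and basic
// punctuation only. The whitelist is an assumption.
function removeSpecialChars(content) {
  return (content || "").replace(/[^\w\s.,:;!?()\-'"]/g, "");
}
```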
function getReqDescription() {
const preferredNames = ["RQ_REQ_COMMENT"];
for (const name of preferredNames) {
try {
const f = Req_Fields.Field(name);
if (f && f.Value) return f.Value;
} catch (_) { /* field not present – skip */ }
}
/* fall-back: scan every field until the label contains “description” */
for (let i = 1; i <= Req_Fields.Count; i++) {
const fld = Req_Fields.FieldById(i);
if (fld && /description/i.test(fld.FieldLabel) && fld.Value) {
return fld.Value;
}
}
return "";
}
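getPreferredLanguage is also referenced but not shown; the project helper presumably reads the user's language preference from settings. A sketch that takes the preference explicitly for testability and falls back to English; the supported-language list is an assumption:

```javascript
// Hypothetical sketch of getPreferredLanguage. The real helper likely reads
// the preference from user settings rather than taking it as a parameter.
function getPreferredLanguage(preference) {
  // Assumed set of languages the prompts are allowed to request
  const supported = ["English", "French", "German", "Spanish", "Japanese"];
  return supported.includes(preference) ? preference : "English";
}
```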
Customize Aviator workflow
This section describes how to add an Aviator Workflow script.
To add an Aviator workflow:
- Click the Settings button in the banner and select Workflow.
Note: To customize workflows, you must have the Set up Workflow permission.
- In the Workflow scripts tree, go to Common Scripts > ActionCanExecute.
Note: This step applies to scripts that use ActionCanExecute. For scripts that use other workflow events, select the relevant entity script and event handler in the Workflow scripts tree.
- In the Script Editor tab, add the JavaScript code for your new workflow script. Use the example scripts as a basis and customize them to suit your needs.
- Add a custom toolbar button for your workflow.
To add a custom toolbar button:
- In the Toolbar Button Editor tab, select the module to which to add the button and click Add. The button is given a default name.
- Click the edit button and edit the following:
  - Action Name: Must match the action name in the workflow script (for example, "GenerateTestCasesWithAviator").
  - Caption: The button name.
  - Icon: Select an icon for the button.
- Click Save. The button is added to the toolbar for that module.