On-premises Aviator Workflow examples

This section provides Aviator Workflow script examples to help you implement custom AI-driven workflows in your project. It applies to on-premises deployments only. For details on the Aviator Workflow API, see the Aviator API Reference.

Note: The examples in this section use the Aviator Workflow API's chainable builder pattern, which is designed specifically for on-premises deployments that connect directly to the AI back-end. This API is not compatible with SaaS deployments: calls made with the on-premises Aviator Workflow API on a SaaS tenant fail because the output does not conform to what the GenAI middleware expects.
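The chainable pattern is easy to see in miniature. The following generic sketch is plain JavaScript, not the real Aviator API; `PromptBuilder` and its methods are illustrative stand-ins. It shows why the calls chain: every setter records state and returns `this`, and a terminal call consumes the accumulated state.

```javascript
// Generic illustration of the chainable builder pattern (NOT the real
// Aviator API): each setter stores state and returns `this`; a terminal
// method consumes the accumulated state.
class PromptBuilder {
  constructor() {
    this.state = { instructions: [] }
  }
  persona(text) {
    this.state.persona = text
    return this // returning `this` is what makes the calls chainable
  }
  instructions(text) {
    this.state.instructions.push(text) // repeated calls accumulate
    return this
  }
  maxResults(n) {
    this.state.maxResults = n
    return this
  }
  build() {
    return this.state
  }
}

const prompt = new PromptBuilder()
  .persona('You are a QA expert.')
  .instructions('Design two test cases.')
  .maxResults(2)
  .build()
```

The real API follows the same shape, with `send()` as the terminal call that dispatches the accumulated prompt to the AI service.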

Generate test cases from a requirement

This example demonstrates how to use Aviator to analyze a requirement and automatically generate test cases. The workflow accesses the requirement name and description, sends them to the AI service with specific instructions, and displays the generated test cases for user selection and creation.

Prerequisites & dependencies:

  • Utility functions: stripHtml, removeSpecialChars, getNewLinesReplacedContent, getPreferredLanguage, getReqDescription, safePost

  • Test case creation functions

async function ActionCanExecute(actionName) {
// ACTION: Generate Test Cases with Aviator
 if (actionName === 'UserDefinedActions.GenerateTestCasesWithAviator') {
    if (!TDConnection.Aviator().isEnabled()) {
      MsgBox('Aviator is not enabled. Please contact your administrator.')
      return false
    }
    try {
      const requirementId = Req_Fields('RQ_REQ_ID').Value
      const reqFactory = TDConnection.ReqFactory
      const reqObj = reqFactory.Item(requirementId)

      const name = removeSpecialChars(getNewLinesReplacedContent(reqObj.Name))
      const descriptionRaw = (getReqDescription() || '').trim()
      let description = stripHtml(descriptionRaw)
      description = removeSpecialChars(getNewLinesReplacedContent(description))

      const response = await TDConnection.Aviator()
        .prompt()
        .persona(
          'You are a seasoned professional in software testing and quality assurance. You specialize in analyzing requirements, identifying risks and edge cases, and designing test cases.'
        )
        .instructions(
          'Design up to 2 test cases covering the most critical aspects of the requirement.'
        )
        .instructions(
          'Include positive and negative scenarios, boundary conditions, potential error states and data-driven tests.'
        )
        .instructions(
          'If the requirement lacks sufficient details, respond with: "The provided requirement details are insufficient to generate comprehensive test cases."'
        )
        .context({ name: name, description: description })
        .format(TDConnection.Formats.TestCases)
        .language(getPreferredLanguage())
        .maxResults(2)
        .send({ loadingTitle: "We're generating your AI powered Test cases" })

      if (response.isEmpty()) {
        return
      }

      if (typeof response.raw === 'string' && !response.isList()) {
        MsgBox(response.raw)
        return
      }

      // Handle text-only responses wrapped in array (AI returned plain text instead of structured data)
      if (response.isList()) {
        const items = response.asList()
        if (items.length > 0 && typeof items[0] === 'string') {
          MsgBox(items.join('\n'))
          return
        }
      }

      const selectedList = await response.showSuggestions({
        title: 'Generate Tests with Aviator',
        description: 'Select suggestions that you want to add as a manual test',
        multiSelect: true
      })

      if (!selectedList || selectedList.length === 0) {
        MsgBox('Test case generation was cancelled.')
        return false
      }

      await CreateTestCasesFromAviator(selectedList)
    } catch (e) {
      HideModalBox()
      MsgBox(e?.message || JSON.stringify(e))
    }
  }  
}  


Generate tests from media

This example shows how Aviator analyzes attached media files (images or videos) and generates test cases from the media content. The workflow retrieves attachments from the requirement, validates file size, and sends the media to the AI service for analysis. The same workflow can also be used for analyzing text files attached to an entity.

Note: Before implementing scripts for media attachments, verify that your LLM supports analysis of media files.

Notes:

  • The requirement must have at least one attachment

  • Maximum attachment size: 10MB

Prerequisites & dependencies:

  • Utility functions: getPreferredLanguage, safePost

  • Test creation functions

async function ActionCanExecute(actionName) {
// ACTION: Generate Tests from Media (Attachment)
 if (actionName === 'UserDefinedActions.GenerateTestsFromMedia') {
    if (!TDConnection.Aviator().isEnabled()) {
      MsgBox('Aviator is not enabled. Please contact your administrator.')
      return false
    }
    try {
      const requirementId = Req_Fields('RQ_REQ_ID').Value
      const reqFactory = TDConnection.ReqFactory
      const reqObj = reqFactory.Item(requirementId)
      const attachments = reqObj.Attachments
      const attachList = attachments.NewList('')

      if (attachList.length === 0) {
        MsgBox(
          "This requirement doesn't have any attachments. Please try this option with another requirement, or attach media to the current requirement and try again"
        )
        return
      }

      const attachObj = attachList?.[0]

      const response = await TDConnection.Aviator()
        .prompt()
        .persona(
          'You are a seasoned professional in software testing and quality assurance. You specialize in analyzing requirements, identifying risks and edge cases, and designing test cases.'
        )
        .instructions(
          'Analyze the attached media and design up to 7 test cases covering the most critical aspects.'
        )
        .instructions(
          'Include positive and negative scenarios, boundary conditions, potential error states and data-driven tests.'
        )
        .instructions(
          'If the media lacks sufficient details, respond with: "The provided requirement details are insufficient to generate comprehensive test cases."'
        )
        .attachment(attachObj)
        .format(TDConnection.Formats.TestCases)
        .language(getPreferredLanguage())
        .maxResults(7)
        .send({ loadingTitle: "We're generating your AI powered Test cases" })

      if (response.statusCode === 408) {
        MsgBox('Request timed out. Please try again.')
        return
      }
      if (response.statusCode === 413) {
        MsgBox('Attachment size exceeds 10MB. Please use a smaller file to generate Test cases')
        return
      }

      if (response.isEmpty()) {
        return
      }

      if (typeof response.raw === 'string' && !response.isList()) {
        MsgBox(response.raw)
        return
      }

      // Handle text-only responses wrapped in array (AI returned plain text instead of structured data)
      if (response.isList()) {
        const items = response.asList()
        if (items.length > 0 && typeof items[0] === 'string') {
          MsgBox(items.join('\n'))
          return
        }
      }

      const selectedList = await response.showSuggestions({
        title: 'Generate Tests with Aviator',
        description: 'Select suggestions that you want to add as a manual test',
        multiSelect: true
      })

      if (!selectedList || selectedList.length === 0) {
        MsgBox('Test case generation was cancelled.')
        return false
      }

      await CreateTestCasesFromAviator(selectedList)
    } catch (e) {
      HideModalBox()
      MsgBox(e?.message || JSON.stringify(e))
    }
  } 
 
}  


Generate sub-requirements from requirements

This example shows how Aviator breaks a high-level requirement into user stories in the format "As a [user type], I want [functionality] so that [benefit]". The workflow generates user stories with acceptance criteria and allows users to select which stories to create as sub-requirements.

Prerequisites & dependencies:

  • Utility functions: stripHtml, removeSpecialChars, getPreferredLanguage, getReqDescription, safePost

  • User story creation function

async function ActionCanExecute(actionName) {
// ACTION: Generate User Stories
 if (actionName === 'UserDefinedActions.GenerateUserStories') {
    if (!TDConnection.Aviator().isEnabled()) {
      MsgBox('Aviator is not enabled. Please contact your administrator.')
      return false
    }
    try {
      const reqId = Req_Fields('RQ_REQ_ID').Value
      const reqName = removeSpecialChars(Req_Fields('RQ_REQ_NAME').Value || '')
      const reqDescription = removeSpecialChars(getReqDescription() || '')

      if (!reqName && !reqDescription) {
        MsgBox(
          'Error: Both requirement name and description are empty. Cannot generate user stories.'
        )
        return false
      }

      const userStoriesFormat = TDConnection.Aviator()
        .buildFormat()
        .addField('title', 'string', 'User story title')
        .addField(
          'description',
          'string',
          'In the form: As a [user type], I want [functionality] so that [benefit]'
        )
        .addField('acceptanceCriteria', 'string', 'Acceptance criteria for the story')
        .asList()
        .build()

      const response = await TDConnection.Aviator()
        .prompt()
        .persona('You are a Business Analyst skilled at writing user stories.')
        .instructions('Generate exactly three user stories for this requirement.')
        .instructions(
          'Each story should follow: As a [user type], I want [functionality] so that [benefit]'
        )
        .context({
          requirementName: reqName,
          requirementDescription: stripHtml(reqDescription)
        })
        .format(userStoriesFormat)
        .language(getPreferredLanguage())
        .maxResults(3)
        .send({ loadingTitle: "We're generating your AI powered User Stories", loadingType: 2 })

      if (response.isEmpty() || !response.isList()) {
        MsgBox('Error: Failed to generate user stories from AI.')
        return false
      }

      const selectedList = await response
        .mapToSuggestions({ name: 'title', description: 'description' })
        .showSuggestions({
          title: "We're generating your AI powered User Stories",
          description: 'Select suggestions that you want to add as sub-requirements',
          multiSelect: true,
          type: 2
        })

      if (!selectedList || selectedList.length === 0) {
        MsgBox('User story generation was cancelled.')
        return false
      }

      const createdRequirements = await createUserStoryRequirements(selectedList, reqId)
      if (!createdRequirements || createdRequirements.length === 0) {
        return false
      }
    } catch (error) {
      HideModalBox()
      MsgBox('Error: ' + (error.message || JSON.stringify(error)))
    }
  } 
}  


Summarize a requirement

This example demonstrates how to use Aviator to automatically generate a concise summary of a requirement. The workflow extracts the requirement name, description, and comments, sends them to the AI service for summarizing, and displays the result in a message box.

Prerequisites & dependencies:

  • Utility functions: stripHtml, removeSpecialChars, getNewLinesReplacedContent, getPreferredLanguage, getReqDescription

async function ActionCanExecute(actionName) {
// ACTION: Summarize Requirement
if (actionName === 'UserDefinedActions.Summarize') {
    if (!TDConnection.Aviator().isEnabled()) {
      MsgBox('Aviator is not enabled. Please contact your administrator.')
      return false
    }
    try {
      const reqId = Req_Fields('RQ_REQ_ID').Value
      const req = TDConnection.ReqFactory.Item(reqId)
      let name = req.Name

      const commentsRaw = (req.Comment || '').trim()
      const descriptionRaw = (getReqDescription() || '').trim()

      let comments = stripHtml(commentsRaw)
      let description = stripHtml(descriptionRaw)

      if (!description && !comments) {
        MsgBox('Both the Description and Comment fields are empty for this requirement.')
        return false
      }

      name = removeSpecialChars(getNewLinesReplacedContent(name))
      description = removeSpecialChars(getNewLinesReplacedContent(description))
      comments = removeSpecialChars(getNewLinesReplacedContent(comments))

      const response = await TDConnection.Aviator()
        .prompt()
        .persona('You are a technical writer skilled at summarizing requirements.')
        .instructions('Summarize the NAME, DESCRIPTION and COMMENT below in ≤100 words.')
        .context({
          name: name,
          description: description || '<empty>',
          comment: comments || '<empty>'
        })
        .format(TDConnection.Formats.Text)
        .language(getPreferredLanguage())
        .send({ loadingTitle: 'The requirement is being summarized using AI', loadingType: 0 })

      if (response.isEmpty()) {
        MsgBox('Failed to obtain a summary from the AI service.')
        return false
      }

      MsgBox(JSON.stringify(response.data))
    } catch (e) {
      MsgBox(e?.message || JSON.stringify(e))
      HideModalBox()
    }
  } 
}  


Generate reproduction steps from video

This example shows how to use Aviator to analyze a video recording of a defect and automatically generate step-by-step reproduction steps. The workflow retrieves the first attachment from a defect, sends it to the AI service for analysis, and populates the defect description field with the generated steps.

Note: Before implementing scripts for media attachments, verify that your LLM supports analysis of media files.

Notes:

  • The defect must have at least one video attachment

  • Attachment file size must not exceed 10MB

  • User must have permissions to edit the defect description field

Prerequisites & dependencies:

  • Utility function: getPreferredLanguage

async function ActionCanExecute(actionName) {
// ACTION: Reproduction Steps from Video (Defect)
 if (actionName === 'UserDefinedActions.ReproductionStepsFromVideo') {
    if (!TDConnection.Aviator().isEnabled()) {
      MsgBox('Aviator is not enabled. Please contact your administrator.')
      return false
    }
    try {
      const bugId = Bug_Fields('BG_BUG_ID').Value || ''
      const bugFactory = TDConnection.BugFactory
      const bugObj = bugFactory.Item(bugId)
      const attachments = bugObj.Attachments
      const attachList = attachments.NewList('')

      if (attachList.length === 0) {
        MsgBox(
          "This Defect doesn't have any attachments. Please try this option with another requirement, or attach media to the current requirement and try again"
        )
        return
      }

      const attachObj = attachList?.[0]

      const response = await TDConnection.Aviator()
        .prompt()
        .persona(
          'You are a QA expert skilled at analyzing video recordings to identify defect reproduction steps.'
        )
        .instructions('Watch the provided video which describes a defect in our product.')
        .instructions('Generate clear reproduction steps to recreate the issue shown in the video.')
        .instructions(
          'Include specific actions, expected vs actual behavior, and any preconditions visible.'
        )
        .context({
          defectId: bugObj.ID,
          defectName: bugObj.Name,
          currentDate: new Date().toUTCString()
        })
        .attachment(attachObj)
        .format(TDConnection.Formats.Text)
        .language(getPreferredLanguage())
        .send({ loadingTitle: "We're generating Reproduction steps from Video", loadingType: 5 })

      if (response.statusCode === 408) {
        MsgBox('Request timed out. Please try again.')
        return
      }
      if (response.statusCode === 413) {
        MsgBox(
          'Attachment size exceeds 10MB. Please use a smaller file to generate reproduction steps'
        )
        return
      }

      if (!response.isEmpty()) {
        // response.data is the parsed object from the LLM: { type: "text", description: "..." }
        // Extract the plain text description so ModalContainer can render the editable field
        const rawData = response.data
        let reproText =
          (rawData && typeof rawData === 'object' && rawData.description) ||
          (typeof rawData === 'string' ? rawData : '') ||
          ''

        const value = await ShowModalBox({
          title: 'Reproduction steps generated by AI',
          description: 'Please make changes to the content (if required)',
          response: reproText,
          isLoading: false,
          type: 5
        })
        if (value) {
          Bug_Fields('BG_DESCRIPTION').Value = value
        }
      }
    } catch (e) {
      HideModalBox()
      MsgBox(e?.message || JSON.stringify(e))
    }
  } 

}  


Generate tests with context consideration

This example demonstrates how to generate test cases that are aware of the requirement hierarchy and existing tests. The workflow fetches sub-requirements and linked tests, provides them as context to the AI service, and uses context options to ensure comprehensive coverage while avoiding duplication of existing tests.

Notes:

  • The workflow considers all sub-requirements when generating tests.

  • Existing linked tests are retrieved to prevent duplication.

  • Uses useEntityChildren() and avoidDuplication() context options from the Aviator API.

Prerequisites & dependencies:

  • Utility functions: stripHtml, removeSpecialChars, getNewLinesReplacedContent, getPreferredLanguage, getReqDescription, getSubRequirements, getLinkedTests, safePost

  • Test creation functions

async function ActionCanExecute(actionName) {
// ACTION: Generate Tests with Context (Sub-Requirements + Avoid Duplicates)
 // Demonstrates useEntityChildren() and avoidDuplication() context options
  if (actionName === 'UserDefinedActions.GenerateTestsWithContext') {
    if (!TDConnection.Aviator().isEnabled()) {
      MsgBox('Aviator is not enabled. Please contact your administrator.')
      return false
    }
    try {
      const requirementId = Req_Fields('RQ_REQ_ID').Value
      const reqFactory = TDConnection.ReqFactory
      const reqObj = reqFactory.Item(requirementId)

      const name = removeSpecialChars(getNewLinesReplacedContent(reqObj.Name))
      const descriptionRaw = (getReqDescription() || '').trim()
      let description = stripHtml(descriptionRaw)
      description = removeSpecialChars(getNewLinesReplacedContent(description))

      // Fetch sub-requirements to provide as context
      const subRequirements = getSubRequirements(requirementId)

      // Fetch existing linked tests to avoid duplication
      const existingTests = getLinkedTests(requirementId)

      // Build context summary for user
      const contextSummary =
        'Found ' +
        subRequirements.length +
        ' sub-requirement(s) and ' +
        existingTests.length +
        ' existing test(s).'

      // Build prompt with context options
      const promptBuilder = TDConnection.Aviator()
        .prompt()
        .persona(
          'You are a seasoned professional in software testing and quality assurance. ' +
            'You specialize in analyzing requirements hierarchies and designing comprehensive test coverage.'
        )
        .instructions(
          'Design up to 10 test cases that provide comprehensive coverage for this requirement and all its sub-requirements.'
        )
        .instructions(
          'Ensure test cases cover the parent requirement AND each sub-requirement adequately.'
        )
        .instructions(
          'Include positive and negative scenarios, boundary conditions, and integration tests between related requirements.'
        )
        .context({
          requirement: {
            id: requirementId,
            name: name,
            description: description
          },
          subRequirements: subRequirements,
          existingTests: existingTests
        })
        .format(TDConnection.Formats.TestCases)
        .language(getPreferredLanguage())
        .maxResults(10)

      // Apply context options
      if (subRequirements.length > 0) {
        promptBuilder.useEntityChildren()
      }
      if (existingTests.length > 0) {
        promptBuilder.avoidDuplication()
      }

      const response = await promptBuilder.send({
        loadingTitle:
          'Generating AI Test Cases with Context... (' + contextSummary + ')'
      })

      if (response.isEmpty()) {
        MsgBox('No test cases could be generated. Please add more details to the requirement.')
        return
      }

      if (typeof response.raw === 'string' && !response.isList()) {
        MsgBox(response.raw)
        return
      }

      // Handle text-only responses wrapped in array (AI returned plain text instead of structured data)
      if (response.isList()) {
        const items = response.asList()
        if (items.length > 0 && typeof items[0] === 'string') {
          MsgBox(items.join('\n'))
          return
        }
      }

      const selectedList = await response.showSuggestions({
        title: 'Generate Tests with Context',
        description:
          'Generated ' +
          response.asList().length +
          ' tests considering ' +
          subRequirements.length +
          ' sub-requirement(s), avoiding ' +
          existingTests.length +
          ' existing test(s)',
        multiSelect: true
      })

      if (!selectedList || selectedList.length === 0) {
        MsgBox('Test case generation was cancelled.')
        return false
      }

      await CreateTestCasesFromAviator(selectedList)
    } catch (e) {
      HideModalBox()
      MsgBox(e?.message || JSON.stringify(e))
    }
  } 
}  


Utility functions

safePost(entity, entityName, fieldValuesMap)

Safely posts an entity to the database, gracefully handling missing required fields by prompting the user to fill them in, and caching the entered values for bulk operations. Without this helper, if an administrator has configured mandatory custom fields on the project, a workflow script that creates entities (tests, requirements, design steps, test folders) throws a raw server error and stops as soon as a required field is missing. safePost intercepts that error and handles it gracefully.

Parameters

Parameter | Type | Description
entity | Object | The ALM entity object about to be posted (created via AddItem)
entityName | String | Entity type identifier: 'test', 'requirement', 'test-folder', or 'design-step'
fieldValuesMap | Object | Map of field name → value pairs already set on the entity. Used to track field values in error scenarios

Behavior

Situation | What Happens
Post succeeds first time | Proceeds silently; function returns true
Server error (other than required field) | Shows error message to user; operation stops; function returns false
Required field missing | A modal dialog appears listing the missing fields. The user fills them in and submits
User submits missing field values | Values are applied to the entity, and Post is retried automatically
Retry fails | Error message shown; operation stops; function returns false
User dismisses/cancels the modal | Operation stops; the same dialog will not appear again for this entity type in this session

Bulk Creation Behavior

When a workflow action creates multiple entities in a loop (for example, bulk-generating test cases from an AI response), safePost exhibits smart caching behavior:

  • User is prompted for missing required fields once per entity type per session. For example, if generating 10 tests and the test entity requires a custom field, the user is only prompted once.

  • The values entered by the user are cached automatically and applied to all subsequent entities of the same type.

  • If the user cancels the dialog, all remaining entities of that type are silently skipped without showing further dialogs.

  • This behavior exactly matches the ALM Web Client's bulk-edit behavior.

Note:

Customers who have previously written custom workflow scripts, or who are extending a script with new entity-creation logic, should replace any bare entity.Post() calls with await safePost(entity, entityName, fieldValuesMap), particularly in projects that have mandatory custom fields configured.

const requiredFieldsCache = {}

async function safePost(entity, entityName, fieldValuesMap = {}) {
  if (requiredFieldsCache[entityName] && requiredFieldsCache[entityName].__cancelled__) {
    return false
  }
  
  if (requiredFieldsCache[entityName]) {
    for (const fieldName in requiredFieldsCache[entityName]) {
      if (fieldName === '__cancelled__') continue
      try {
        entity.Field[fieldName] = requiredFieldsCache[entityName][fieldName]
        fieldValuesMap[fieldName] = requiredFieldsCache[entityName][fieldName]
      } catch (e) {
        console.warn('Could not apply cached field', fieldName, ':', e)
      }
    }
  }
  try {
    entity.Post()
    return true
  } catch (error) {
    let originalErrorMessage = error?.message || JSON.stringify(error)
    try {
      const errorData = JSON.parse(error?.data)
      if (errorData?.Title) {
        originalErrorMessage = errorData.Title
      }
    } catch (e) {}
    
    const requiredFieldPattern = /The field '.+?' is required/i
    if (!requiredFieldPattern.test(originalErrorMessage)) {
      MsgBox(originalErrorMessage)
      return false
    }
    
    try {
      const fieldValues = await ShowRequiredFieldsWF(entityName, fieldValuesMap)
      
      if (fieldValues) {
        if (!requiredFieldsCache[entityName]) {
          requiredFieldsCache[entityName] = {}
        }
        for (const fieldName in fieldValues) {
          requiredFieldsCache[entityName][fieldName] = fieldValues[fieldName]
        }
        delete requiredFieldsCache[entityName].__cancelled__
        
        for (const fieldName in fieldValues) {
          if (fieldValues.hasOwnProperty(fieldName)) {
            entity.Field[fieldName] = fieldValues[fieldName]
            fieldValuesMap[fieldName] = fieldValues[fieldName]
          }
        }
        
        try {
          entity.Post()
          return true
        } catch (retryError) {
          let retryErrorMessage = retryError?.message || JSON.stringify(retryError)
          try {
            const retryErrorData = JSON.parse(retryError?.data)
            if (retryErrorData?.Title) {
              retryErrorMessage = retryErrorData.Title
            }
          } catch (e) {}
          MsgBox(retryErrorMessage)
          return false
        }
      } else {
        if (!requiredFieldsCache[entityName]) {
          requiredFieldsCache[entityName] = {}
        }
        requiredFieldsCache[entityName].__cancelled__ = true
        MsgBox(originalErrorMessage)
        return false
      }
    } catch (showError) {
      MsgBox(showError?.message || JSON.stringify(showError))
      return false
    }
  }
}
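The prompt-once-then-cache flow can be exercised in isolation. The following standalone sketch uses mock objects only: `MockEntity`, `promptUser`, and `safePostSketch` are hypothetical stand-ins for the ALM entity, the required-fields dialog, and safePost itself, not real API members.

```javascript
// Standalone sketch of the safePost caching flow. MockEntity and
// promptUser are stand-ins for the ALM entity and the required-fields
// dialog; they are not part of any real API.
const cache = {}
let promptCount = 0

function promptUser() {
  // Stand-in for ShowRequiredFieldsWF: pretend the user typed a value.
  promptCount++
  return { TS_USER_01: 'High' }
}

class MockEntity {
  constructor() {
    this.Field = {}
    this.posted = false
  }
  Post() {
    // Simulate the server rejecting a post with a missing required field.
    if (!this.Field.TS_USER_01) {
      throw new Error("The field 'TS_USER_01' is required")
    }
    this.posted = true
  }
}

function safePostSketch(entity, entityName) {
  // Apply any values cached from an earlier prompt for this entity type.
  Object.assign(entity.Field, cache[entityName] || {})
  try {
    entity.Post()
    return true
  } catch (e) {
    if (!/is required/i.test(e.message)) return false
    cache[entityName] = promptUser() // prompt once, cache the answers...
    Object.assign(entity.Field, cache[entityName])
    entity.Post() // ...and retry automatically
    return true
  }
}

const first = new MockEntity()
const second = new MockEntity()
safePostSketch(first, 'test')  // prompts, caches TS_USER_01
safePostSketch(second, 'test') // cache hit: no second prompt
```

In this sketch the first entity triggers exactly one prompt; the second entity of the same type is posted straight from the cache, mirroring the bulk-creation behavior described above.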

stripHtml(html)

Removes all HTML tags and normalizes whitespace from a string, converting HTML entities to their text equivalents.

function stripHtml(html) {
  if (TDConnection.Utils?.stripHtml) {
    return TDConnection.Utils.stripHtml(html)
  }
  if (!html) return ''
  if (typeof document !== 'undefined' && document.createElement) {
    const tmp = document.createElement('div')
    tmp.innerHTML = html
    return (tmp.textContent || tmp.innerText || '').replace(/\s+/g, ' ').trim()
  }
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, ' ')
    .replace(/<style[\s\S]*?<\/style>/gi, ' ')
    .replace(/<\/?[^>]+>/g, ' ')
    .replace(/&nbsp;/gi, ' ')
    .replace(/&amp;/gi, '&')
    .replace(/\s+/g, ' ')
    .trim()
}
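For reference, the regex fallback behaves like this when run on its own. This is a standalone copy of just the fallback branch, with the TDConnection and DOM checks removed:

```javascript
// The regex fallback of stripHtml in isolation (no TDConnection, no DOM):
// tags become spaces, common entities are decoded, whitespace collapses.
function stripHtmlFallback(html) {
  if (!html) return ''
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, ' ')
    .replace(/<style[\s\S]*?<\/style>/gi, ' ')
    .replace(/<\/?[^>]+>/g, ' ')
    .replace(/&nbsp;/gi, ' ')
    .replace(/&amp;/gi, '&')
    .replace(/\s+/g, ' ')
    .trim()
}

const plain = stripHtmlFallback('<p>Hello&nbsp;&amp; <b>world</b></p>')
// plain === 'Hello & world'
```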


removeSpecialChars(str)

Strips all non-alphanumeric characters (except spaces) from a string, leaving only letters, numbers, and spaces.

function removeSpecialChars(str) {
  if (TDConnection.Utils?.removeSpecialChars) {
    return TDConnection.Utils.removeSpecialChars(str)
  }
  return str?.replace(/[^a-zA-Z0-9 ]/g, '')
}


getNewLinesReplacedContent(str)

Replaces all newline and carriage return characters with spaces to normalize text formatting.

function getNewLinesReplacedContent(str) {
  if (TDConnection.Utils?.normalizeWhitespace) {
    return TDConnection.Utils.normalizeWhitespace(str)
  }
  return str?.replace(/[\r\n]+/g, ' ')
}


getPreferredLanguage()

Returns the user's preferred language setting from the ALM system for use in AI prompt formatting.

function getPreferredLanguage() {
  return TDConnection.GetPreferredLanguage()
}


getReqDescription()

Retrieves the description or comment field from the current requirement, checking preferred field names and falling back to fields labeled "description."

function getReqDescription() {
  const preferredNames = ['RQ_REQ_COMMENT']
  for (const name of preferredNames) {
    try {
      const f = Req_Fields.Field(name)
      if (f && f.Value) return f.Value
    } catch (_) {}
  }

  for (let i = 1; i <= Req_Fields.Count; i++) {
    const fld = Req_Fields.FieldById(i)
    if (fld && /description/i.test(fld.FieldLabel) && fld.Value) {
      return fld.Value
    }
  }
  return ''
}


getSubRequirements(parentReqId)

Retrieves all child requirements linked to a parent requirement and returns their IDs, names, and descriptions.

/**
 * Fetches all child/sub-requirements for a given parent requirement ID.
 * Uses the standard ReqFactory filter to query by parent-id.
 *
 * @param {number|string} parentReqId - The ID of the parent requirement
 * @returns {Array} Array of simplified sub-requirement objects with id, name, description
 */
function getSubRequirements(parentReqId) {
  const subReqs = []
  try {
    const reqFactory = TDConnection.ReqFactory
    const filter = reqFactory.Filter
    filter.Filter['RQ_FATHER_ID'] = parentReqId
    const children = filter.NewList()

    for (let i = 0; i < children.length; i++) {
      const child = children[i]
      subReqs.push({
        id: child.ID,
        name: child.Name || '',
        description: stripHtml(child.Field['RQ_REQ_COMMENT'] || '')
      })
    }
  } catch (error) {
    console.error('Error fetching sub-requirements:', error)
  }
  return subReqs
}

getLinkedTests(reqId)

Fetches all tests that provide coverage for a given requirement via the coverage factory, handling multiple field access patterns for cross-version compatibility.

/**
 * Fetches all tests linked to (providing coverage for) a given requirement ID.
 * Uses TDConnection.CoverageFactory to query coverage entities by requirement ID.
 *
 * @param {number|string} reqId - The ID of the requirement
 * @returns {Array} Array of simplified test objects with id, name, description
 */
function getLinkedTests(reqId) {
  const linkedTests = []
  try {
    // Use TDConnection.CoverageFactory with filter to find coverages for this requirement
    const coverageFactory = TDConnection.CoverageFactory
    const filter = coverageFactory.Filter

    // Filter by requirement ID - coverage entity stores requirement in RC_REQ_ID
    filter.Filter['RC_REQ_ID'] = reqId
    const coverageList = filter.NewList()

    // Process coverage list
    const count = coverageList.length || coverageList.Count || 0

    for (let i = 0; i < count; i++) {
      try {
        const coverage = coverageList[i]
        if (!coverage) continue

        // Get test ID from coverage entity
        // Try different field access patterns
        let testId = null

        if (coverage.Field) {
          testId = coverage.Field['RC_ENTITY_ID'] || coverage.Field['TC_TEST_ID']
        }

        if (!testId) {
          testId = coverage.RC_ENTITY_ID || coverage.TC_TEST_ID || coverage.TestId
        }

        if (!testId) {
          testId = coverage['RC_ENTITY_ID'] || coverage['TC_TEST_ID']
        }

        if (testId) {
          const testFactory = TDConnection.TestFactory
          const test = testFactory.Item(testId)
          if (test) {
            linkedTests.push({
              id: test.ID,
              name: test.Name || '',
              description: stripHtml(
                test.Field ? test.Field['TS_DESCRIPTION'] || '' : test.Description || ''
              )
            })
          }
        }
      } catch (itemError) {
        console.warn('Could not process coverage item ' + i + ':', itemError)
      }
    }
  } catch (error) {
    console.error('Error fetching linked tests:', error)
  }

  return linkedTests
}
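Both `getSubRequirements` and `getLinkedTests` return plain arrays of `{id, name, description}` objects, which makes them easy to fold into the prompt you send to the AI service. The helper below is a hypothetical sketch (not part of the workflow API) showing one way to merge their output into a single context block:

```javascript
// Hypothetical helper: merges the output of getSubRequirements and
// getLinkedTests into one text block suitable for an AI prompt.
function buildRequirementContext(subReqs, linkedTests) {
  const lines = []
  if (subReqs.length > 0) {
    lines.push('Sub-requirements:')
    subReqs.forEach(r => lines.push('- [' + r.id + '] ' + r.name + ': ' + r.description))
  }
  if (linkedTests.length > 0) {
    lines.push('Existing coverage (linked tests):')
    linkedTests.forEach(t => lines.push('- [' + t.id + '] ' + t.name))
  }
  return lines.join('\n')
}
```

Including existing coverage in the prompt lets the AI avoid proposing test cases that duplicate tests already linked to the requirement.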

Back to top

Test case creation functions

CreateTestCasesFromAviator(list)

Manages the creation of multiple test cases generated by Aviator, handling folder creation, field validation through safePost, and result notification.

Copy code
async function CreateTestCasesFromAviator(list) {
  if (list?.length > 0) {
    ShowModalBox({
      title: 'Creating Your Selected Test Cases...',
      description:
        "We're processing your selections. The system is adding details and designing test steps behind the scenes",
      isLoading: true,
      type: 1,
      isCreating: true
    })
    const testFolder = await generateAviatorTestFolder()
    if (!testFolder) {
      // Folder creation failed; hide the progress modal so it does not stay on screen
      HideModalBox()
      return
    }
    // Re-show modal in case it was hidden by required fields dialog
    ShowModalBox({
      title: 'Creating Your Selected Test Cases...',
      description:
        "We're processing your selections. The system is adding details and designing test steps behind the scenes",
      isLoading: true,
      type: 1,
      isCreating: true
    })
    const testsList = await generateTests(testFolder, list)
    HideModalBox()
    
    if (testsList && testsList.length > 0) {
      const firstTest = testsList[0]
      ShowModalBox({ title: 'Action Success!', isSuccess: true, result: firstTest })
    }
  }
}
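`safePost` is one of the prerequisite utilities listed at the top of this section; its full implementation is defined elsewhere. For orientation, the creation functions in this section only rely on behavior roughly like the sketch below: post the entity and report success as a boolean. The real utility additionally drives the required-fields dialog referenced in the comment above, which this sketch omits.

```javascript
// Illustrative sketch only -- the real safePost utility is defined elsewhere
// and also prompts the user for required fields when a post is rejected.
async function safePostSketch(entity, entityType, fieldValuesMap) {
  try {
    entity.Post() // OTA-style entities throw when server-side validation fails
    return true
  } catch (error) {
    console.error('Failed to post ' + entityType + ':', error, fieldValuesMap)
    return false
  }
}
```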

Back to top

getAviatorFolder()

Async function that finds or creates the base 'Tests from aviator' test folder, using safePost for error handling.

Copy code
async function getAviatorFolder() {
  const testFolderFactory = TDConnection.TestFolderFactory
  const testFolderFactoryFilter = testFolderFactory.Filter
  testFolderFactoryFilter.Filter['AL_DESCRIPTION'] = "'Tests from aviator'"
  const testFolders = testFolderFactoryFilter.NewList()
  const testFoldersRoot = testFolderFactory.Root

  if (testFolders.length !== 0) {
    for (let i = 0; i < testFolders.length; i++) {
      const testFolder = testFolders[i]
      if (testFolder.Field['AL_FATHER_ID'] == testFoldersRoot.ID) {
        return testFolder
      }
    }
  }

  const newTestFolder = testFolderFactory.AddItem(testFoldersRoot)
  const fieldValuesMap = {}
  newTestFolder.Field['AL_DESCRIPTION'] = 'Tests from aviator'
  fieldValuesMap['AL_DESCRIPTION'] = 'Tests from aviator'
  const success = await safePost(newTestFolder, 'test-folder', fieldValuesMap)
  return success ? newTestFolder : null
}

Back to top

generateAviatorTestFolder()

Async function that creates a requirement-specific test folder using safePost for field validation.

Copy code
async function generateAviatorTestFolder() {
  const aviatorFolder = await getAviatorFolder()
  const reqName = Req_Fields('RQ_REQ_NAME').Value
  const reqId = Req_Fields('RQ_REQ_ID').Value

  const existingFolder = getExistingTestFolder(aviatorFolder, reqId)
  if (existingFolder) {
    return existingFolder
  }

  const folderName = '[Workflow] ' + reqName
  const testFolderFactory = aviatorFolder.TestFolderFactory
  const newTestFolder = testFolderFactory.AddItem(aviatorFolder)
  const fieldValuesMap = {}
  newTestFolder.Field['AL_DESCRIPTION'] = folderName
  fieldValuesMap['AL_DESCRIPTION'] = folderName
  newTestFolder.Field['AL_ABSOLUTE_PATH'] = 'aviator_req_' + reqId
  fieldValuesMap['AL_ABSOLUTE_PATH'] = 'aviator_req_' + reqId
  const success = await safePost(newTestFolder, 'test-folder', fieldValuesMap)
  return success ? newTestFolder : null
}

Back to top

getExistingTestFolder(aviatorFolder, reqId)

Checks for an existing test folder matching the current requirement ID and returns it or null if not found.

Copy code
function getExistingTestFolder(aviatorFolder, reqId) {
  try {
    const testFolderFactory = aviatorFolder.TestFolderFactory
    const allFolders = testFolderFactory.NewList()

    for (let i = 0; i < allFolders.length; i++) {
      const folder = allFolders[i]
      const folderPath = folder.Field['AL_ABSOLUTE_PATH'] || ''
      if (folderPath === 'aviator_req_' + reqId) {
        return folder
      }
    }
    return null
  } catch (error) {
    console.error('Error checking existing test folders:', error)
    return null
  }
}
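`generateAviatorTestFolder` and `getExistingTestFolder` rely on the same `aviator_req_<ID>` key stored in `AL_ABSOLUTE_PATH`. If you customize that convention, it is worth centralizing it in a small helper (a hypothetical sketch, not part of the examples above) so that creation and lookup cannot drift apart:

```javascript
// Hypothetical helper: single source of truth for the folder key that
// generateAviatorTestFolder writes and getExistingTestFolder matches on.
function aviatorFolderKey(reqId) {
  return 'aviator_req_' + reqId
}
```

With this in place, both functions would compare against `aviatorFolderKey(reqId)` instead of repeating the string concatenation.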

Back to top

generateTests(testFolder, aiResponse)

Async function that creates individual test entities from AI-generated test cases, posting each one through safePost with the proper field mapping.

Copy code
async function generateTests(testFolder, aiResponse) {
  if (!Array.isArray(aiResponse) || aiResponse.length === 0) return []
  const testFactory = testFolder.TestFactory
  const newTests = []

  for (let i = 0; i < aiResponse.length; i++) {
    const aiItem = aiResponse[i]
    const newTest = testFactory.AddItem(null)
    const fieldValuesMap = {}
    newTest.Field['TS_DESCRIPTION'] =
      '<html>--- This test is initially created by Aviator in workflow --- <br>' +
      aiItem.description +
      '</html>'
    fieldValuesMap['TS_DESCRIPTION'] =
      '<html>--- This test is initially created by Aviator in workflow --- <br>' +
      aiItem.description +
      '</html>'
    newTest.Field['TS_NAME'] = aiItem.name
    fieldValuesMap['TS_NAME'] = aiItem.name
    newTest.Field['TS_ORIGIN'] = 'aviator'
    fieldValuesMap['TS_ORIGIN'] = 'aviator'
    newTest.Field['TS_RESPONSIBLE'] = User.UserName
    fieldValuesMap['TS_RESPONSIBLE'] = User.UserName
    newTest.Field['TS_IS_AVIATOR_GENERATED'] = 'Yes'
    fieldValuesMap['TS_IS_AVIATOR_GENERATED'] = 'Yes'
    const success = await safePost(newTest, 'test', fieldValuesMap)
    if (!success) continue
    await generateDesignSteps(newTest, aiItem.steps)
    generateCoverages([newTest.ID])
    newTests.push(newTest.ID)
  }

  return newTests
}
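`generateTests` assumes every AI-generated item carries `name`, `description`, and a `steps` array. Because AI output can be malformed, you may want to filter out unusable items before the creation loop. The guard below is a hypothetical sketch of such a check:

```javascript
// Hypothetical guard: keeps only AI response items that generateTests
// can safely consume (non-empty name, string description, steps array).
function filterUsableTestCases(aiResponse) {
  if (!Array.isArray(aiResponse)) return []
  return aiResponse.filter(
    item =>
      item &&
      typeof item.name === 'string' &&
      item.name.trim() !== '' &&
      typeof item.description === 'string' &&
      Array.isArray(item.steps)
  )
}
```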

Back to top

generateDesignSteps(test, steps)

Async function that creates design steps for a test, with support for expected results field, using safePost for reliable entity creation.

Copy code

async function generateDesignSteps(test, steps) {
  if (!Array.isArray(steps) || steps.length === 0) return
  const designStepFactory = test.DesignStepFactory

  for (let i = 0; i < steps.length; i++) {
    const step = steps[i]
    const newStep = designStepFactory.AddItem(null)
    const fieldValuesMap = {}
    newStep.Field['DS_STEP_NAME'] = 'Step ' + (i + 1)
    fieldValuesMap['DS_STEP_NAME'] = 'Step ' + (i + 1)
    newStep.Field['DS_DESCRIPTION'] = '<html>' + step['description'] + '</html>'
    fieldValuesMap['DS_DESCRIPTION'] = '<html>' + step['description'] + '</html>'
    const expectedResult = step['expected_result'] || step['expected'] || ''
    if (expectedResult) {
      newStep.Field['DS_EXPECTED'] = '<html>' + expectedResult + '</html>'
      fieldValuesMap['DS_EXPECTED'] = '<html>' + expectedResult + '</html>'
    }
    newStep.Field['DS_STEP_ORDER'] = i + 1
    fieldValuesMap['DS_STEP_ORDER'] = i + 1
    await safePost(newStep, 'design-step', fieldValuesMap)
  }
}
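Note that each step object may carry its expected result under either `expected_result` or `expected`, depending on how the AI formats its response. The fallback logic can be isolated in a small helper (a hypothetical sketch of the same expression `generateDesignSteps` applies inline):

```javascript
// Hypothetical sketch: same fallback generateDesignSteps applies inline
// when reading the expected result from an AI-generated step object.
function getStepExpectedResult(step) {
  return step['expected_result'] || step['expected'] || ''
}
```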

Back to top

generateCoverages(tests)

Links created tests to the requirement via the coverage factory, establishing the relationship between tests and requirements.

Copy code
function generateCoverages(tests) {
  if (!Array.isArray(tests) || tests.length === 0) return
  const reqFactory = TDConnection.ReqFactory
  const req = reqFactory.Item(Req_Fields('RQ_REQ_ID').Value)

  for (let i = 0; i < tests.length; i++) {
    req.AddCoverage(tests[i])
  }
}

Back to top

User story creation function

Creates selected AI-generated user stories as child requirements under a parent requirement, fills key fields (including description and metadata), safely saves each one, and returns the successfully created items.

Copy code
// =============================================================================
// USER STORY CREATION
// =============================================================================

async function createUserStoryRequirements(userStories, parentReqId) {
  const reqFactory = TDConnection.ReqFactory
  const parentReq = reqFactory.Item(parentReqId)
  const createdRequirements = []

  try {
    ShowModalBox({
      title: 'Creating Your Selected User Stories...',
      description:
        "We're processing your selections. The system is adding details and sub-requirements behind the scenes",
      isLoading: true,
      type: 2,
      isCreating: true
    })

    for (let i = 0; i < userStories.length; i++) {
      const story = userStories[i]
      const newReq = reqFactory.AddItem(parentReq)
      const fieldValuesMap = {}
      newReq.ParentId = parentReqId
      newReq.Field['RQ_REQ_NAME'] = story.title || story.name
      fieldValuesMap['RQ_REQ_NAME'] = story.title || story.name

      const fullDescription =
        '<html>' +
        '<h3>User Story:</h3>' +
        '<p>' +
        story.description +
        '</p>' +
        '<h3>Acceptance Criteria:</h3>' +
        '<p>' +
        (story.acceptanceCriteria || 'To be defined') +
        '</p>' +
        '<hr>' +
        '<small><i>Generated automatically from parent requirement</i></small>' +
        '</html>'

      const descriptionFields = ['RQ_REQ_COMMENT']
      for (const fieldName of descriptionFields) {
        if (newReq.Field[fieldName] !== undefined) {
          newReq.Field[fieldName] = fullDescription
          fieldValuesMap[fieldName] = fullDescription
          break
        }
      }

      try {
        newReq.Field['RQ_TYPE_ID'] = '3'
        fieldValuesMap['RQ_TYPE_ID'] = '3'
      } catch (e) {}
      try {
        newReq.Field['RQ_REQ_PRIORITY'] = 'Medium'
        fieldValuesMap['RQ_REQ_PRIORITY'] = 'Medium'
      } catch (_) {}
      try {
        newReq.Field['RQ_REQ_AUTHOR'] = User.UserName
        fieldValuesMap['RQ_REQ_AUTHOR'] = User.UserName
      } catch (_) {}

      const success = await safePost(newReq, 'requirement', fieldValuesMap)
      if (!success) {
        continue
      }

      createdRequirements.push({
        id: newReq.ID,
        name: story.title || story.name
      })
    }

    try {
      parentReq.Refresh()
    } catch (_) {}
    HideModalBox()

    if (createdRequirements?.length) {
      ShowModalBox({
        title: 'Action Success!',
        isSuccess: true,
        result: createdRequirements?.[0]?.id,
        type: 2
      })
    } else {
      MsgBox('Failed to create any user story requirements. Please check the required fields.')
    }
    return createdRequirements
  } catch (error) {
    console.error('Error creating user story requirements:', error)
    throw new Error('Failed to create requirements: ' + error.message)
  }
}
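The HTML body assembled inside the loop can be factored into a pure helper, which also makes the fallback for missing acceptance criteria easy to verify in isolation. This sketch reproduces the same concatenation the function performs inline:

```javascript
// Sketch: same HTML description body that createUserStoryRequirements
// builds inline for each user story, including the acceptance-criteria fallback.
function buildUserStoryDescription(story) {
  return (
    '<html>' +
    '<h3>User Story:</h3>' +
    '<p>' + story.description + '</p>' +
    '<h3>Acceptance Criteria:</h3>' +
    '<p>' + (story.acceptanceCriteria || 'To be defined') + '</p>' +
    '<hr>' +
    '<small><i>Generated automatically from parent requirement</i></small>' +
    '</html>'
  )
}
```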

Back to top

Customize Aviator workflow

This section describes how to add an Aviator Workflow script.

To add an Aviator workflow:

  1. Click the Settings button in the banner and select Workflow.

    Note: To customize workflows, you must have the Set up Workflow permission.

  2. In the Workflow scripts tree, go to Common Scripts > ActionCanExecute.

  3. In the Script Editor tab, add JavaScript code for your new workflow script.
    Use the example scripts as a basis for your script. Customize the script to suit your needs.

  4. Add a custom toolbar button for your workflow.

To add a custom toolbar button:

  1. In the Toolbar Button Editor tab, select the module to which to add the button and click Add. The button is given a default name.

  2. Click the edit button and edit the following:

    • Action Name: This must match the action name in the workflow script (for example, "GenerateTestCasesWithAviator")

    • Caption: Button name

    • Icon: Select an icon for the button

  3. Click Save. The button is added to the toolbar for that module.
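The action name you enter in the Toolbar Button Editor is what arrives in `ActionCanExecute`, prefixed with `UserDefinedActions.` as in the example at the top of this section. If you add several Aviator buttons, a dispatch table keeps the routing readable. This is a sketch only; the handler functions and the second action name are placeholders for your own workflow logic:

```javascript
// Sketch: dispatch-table routing for ActionCanExecute.
// Handler bodies and the second action name are placeholders.
const aviatorActionHandlers = {
  'UserDefinedActions.GenerateTestCasesWithAviator': async () => { /* your logic */ },
  'UserDefinedActions.GenerateUserStoriesWithAviator': async () => { /* your logic */ }
}

async function ActionCanExecuteSketch(actionName) {
  const handler = aviatorActionHandlers[actionName]
  if (handler) {
    await handler()
    return false // custom buttons have no default behavior to allow
  }
  return true // let unrelated actions proceed normally
}
```

Returning false for handled custom actions mirrors the pattern in the example at the top of this section, where the workflow script performs the work itself.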

Back to top

See also: