PPM Aviator

PPM Aviator provides AI‑driven assistance within Project and Portfolio Management Center (PPM), enabling users to interact with PPM through conversational AI, automate project tasks, and generate intelligent recommendations. It integrates PPM with leading Large Language Model (LLM) providers such as Google Vertex AI and OpenAI using the Model Context Protocol (MCP).

PPM Aviator is available in Beta status as of release 26.2.

Feature overview

PPM Aviator supports AI-powered features through a conversational chatbox interface:

  1. Conversational AI assistant.

    Interact with PPM using natural language prompts through the Aviator chatbox. Ask questions about PPM capabilities, request information, and perform tasks conversationally. The AI provides interactive previews and confirmation dialogs before executing actions.

  2. Project creation.

    Create projects by providing details such as project name, region, project type, and duration through conversational prompts. Aviator automatically requests any missing required information.

  3. Work plan task management.

    Add and manage tasks in work plans using AI assistance. Request AI-generated task recommendations based on project context and scope. Refine task lists through follow-up prompts to adjust phases, milestones, and task details.

  4. Staffing profile and position creation.

    Create staffing profiles and positions through guided AI interactions. Aviator can analyze project requirements and recommend appropriate positions with relevant roles aligned to project timelines.

  5. Resource recommendations.

    Request AI-generated recommendations for available resources that match staffing position requirements. Aviator evaluates resource availability and role suitability to suggest appropriate assignments.

  6. Risk prediction and creation.

    Ask AI to predict potential risks based on project characteristics, nature, and fixed timelines. Aviator generates risk assessments with appropriate severity levels and probabilities. Refine predicted risks through conversational prompts before adding them to the project.

Admin and business use cases

The Enable new features toggle at the top of the Aviator pane controls which capabilities are available. Only one capability set is active at a time.

Toggle OFF: Admin use cases

  • Summarize requests and build runtime charts using request details (limited to charts supported by the PPM charting library).

  • Create self-service portlets in portfolios.

  • Create request types and their layouts using image/text files.

Toggle ON: Business use cases

  • Project creation.

  • Work plan and task management.

  • Staffing profile and position creation.

  • Resource recommendations.

  • Risk prediction and risk creation.

Prerequisites

Before enabling PPM Aviator, ensure that:

  • Your environment can access a supported LLM provider.

  • Required authentication credentials (service account JSON or API key) are available.

  • For production environments, the PPMAviatorProduction hotfix is installed. Submit a Support request and mention the hotfix name.

  • For development environments, you may optionally use development‑only parameters.

Note: PPM is currently designed to integrate with an organization's own proprietary LLMs, giving enterprises full control over data handling, security, and compliance within their environments. As a result, aspects such as user interaction quality, LLM-generated responses, reasoning capability, and tokenization behavior are entirely dependent on the specific LLM selected and configured by the organization. These considerations fall outside the core PPM context and are governed by the capabilities of the integrated LLM.
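As a quick sanity check on the credential prerequisite, you can confirm that a service-account JSON file at least parses before wiring it into PPM. This is a sketch only, assuming `python3` is available on the server host; the file path and contents below are placeholders:

```shell
# Sanity-check a Google service-account JSON file before configuring PPM.
# CRED_FILE and its contents are stand-ins for illustration.
CRED_FILE="gcloud.json"
printf '{"type": "service_account", "project_id": "example"}' > "$CRED_FILE"

# python3 -m json.tool exits non-zero if the file is not valid JSON
if python3 -m json.tool "$CRED_FILE" > /dev/null 2>&1; then
  echo "credential file parses as JSON"
else
  echo "credential file is NOT valid JSON"
fi
```

A file that fails this check will also fail provider authentication, so it is worth running before restarting the PPM server.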

Set required environment variables (all environments)

Environment variables must be set before starting the PPM server so that PPM can authenticate with the selected LLM provider.

  • Google Vertex AI:

    export GOOGLE_APPLICATION_CREDENTIALS=<path/to/gcloud.json>

  • OpenAI:

    export OPENAI_API_KEY=<your OpenAI API key>
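The exports above can be placed in the shell environment (or startup script) used to launch the PPM server. A minimal sketch with placeholder values, including a fail-fast check so the server is not started with an empty variable:

```shell
# Placeholder values -- substitute your real credential path or API key.
export GOOGLE_APPLICATION_CREDENTIALS="/opt/ppm/secrets/gcloud.json"   # Google Vertex AI
export OPENAI_API_KEY="sk-placeholder"                                 # OpenAI

# Fail fast if the variable your chosen provider needs is unset or empty
: "${OPENAI_API_KEY:?must be set before starting the PPM server}"

echo "LLM credentials exported"
```

Only the variable for the provider you actually configure needs to be present; the path `/opt/ppm/secrets/gcloud.json` is illustrative, not a PPM convention.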

Turn on the PPM Aviator feature toggle (all environments)

After completing the prerequisites for your environment:

  1. Navigate to Administration > New Features > Feature Toggles.

  2. Locate the toggle PPM Aviator.

  3. Set it to ON.

Aviator is now enabled. An Aviator icon will appear in the PPM interface.

Configure AI provider models

Once the Aviator toggle is enabled, you must configure the LLM model before using Aviator:

  1. Navigate to Administration > Integrations > AI Model Configurations.

    Alternatively, access directly via: http://<BASE_URL>/itg/admin/ai.

  2. Choose your LLM provider (Google Vertex AI, OpenAI, etc.).

  3. Enter the required model parameters.

  4. If needed, configure proxy settings to allow outbound connectivity.
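Before saving the model configuration, it can help to verify that the PPM server host has outbound reach to the provider endpoint, directly or through the proxy. The sketch below uses curl; the endpoint URLs and proxy address are illustrative and should match the region and provider you configured:

```shell
# Any HTTP status in the response (even 401/403) proves outbound connectivity;
# "000" means the endpoint could not be reached at all.
check_endpoint() {
  code=$(curl -s -o /dev/null --max-time 10 -w "%{http_code}" "$1" || true)
  echo "$1 -> HTTP ${code:-000}"
}

check_endpoint "https://api.openai.com/v1/models"
check_endpoint "https://us-central1-aiplatform.googleapis.com"

# If outbound traffic must go through a proxy, test through it explicitly:
# curl -s -o /dev/null -x http://proxy.example.com:8080 -w "%{http_code}" https://api.openai.com/v1/models
```

If the direct check fails but the proxied one succeeds, the proxy settings from step 4 are required in the Aviator model configuration.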

Access PPM Aviator

Once enabled and configured, access PPM Aviator through:

  • Aviator icon - Appears in the PPM user interface after the feature toggle is enabled.

  • Aviator chatbox - Click the Aviator icon to open a conversational interface where you can interact using natural language prompts.

Example use cases

PPM Aviator can assist with tasks such as:

  • Querying PPM capabilities and version information.

  • Onboarding a new project: "Onboard a new project 'cloud migration digital channels' with a 10-month duration starting March 2026."

  • Generating detailed work plan tasks: "Provide a detailed work plan with phase-wise tasks and milestones aligned to the project timeline."

  • Refining generated tasks through follow-up prompts to adjust phases, timelines, or task details.

  • Requesting staffing requirements: "Provide the required staffing positions with relevant roles, aligned to the project timeline."

  • Getting resource recommendations: "Recommend available resources for the staffing positions of this project."

  • Predicting project risks: "Based on the nature of the project and fixed timelines, suggest risks associated with the project."

  • Refining risks: "Correct the risk 'performance bottleneck' with an updated level and probability."

Model selection tips

  • Choose a model that aligns with performance, cost, and complexity needs.

  • Higher‑capacity models provide richer outputs but may cost more.

  • Lighter models may be suitable for frequent or latency‑sensitive operations.

See also