Running the SV Server in Kubernetes

This topic describes how to run the Service Virtualization server in a Kubernetes-managed cloud.

Database deployment

The SV Server requires an existing Oracle or Postgres database deployment and service. To create one, copy the contents of the postgresdb.yaml example to a new file and deploy it using the kubectl apply -f [filename] command.
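If the postgresdb.yaml example is not at hand, a deployment of roughly this shape can serve as a starting point. This is a sketch, not the shipped example: the deployment name, label, and Postgres image version are assumptions, while the service name sv-postgres-db-service and the postgres/postgres credentials mirror the defaults mentioned in this topic.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sv-postgres-db            # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sv-postgres-db
  template:
    metadata:
      labels:
        app: sv-postgres-db
    spec:
      containers:
        - name: postgres
          image: postgres:12      # assumed version; use one your SV release supports
          env:
            - name: POSTGRES_PASSWORD
              value: postgres     # default credentials from this topic; change for production
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: sv-postgres-db-service    # matches the host used in the SV Server connection string
spec:
  selector:
    app: sv-postgres-db
  ports:
    - port: 5432
      targetPort: 5432
```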


SV Server deployment

To perform an SV Server deployment, copy the contents of the sv-server.yaml example to a new file and deploy it using the kubectl apply -f [filename] command. A pre-configured Postgres database connection points to Host=sv-postgres-db-service;Database=postgres with the credentials postgres/postgres. (For Oracle, the corresponding service endpoint is oracledb-service:1521/xe.)

Follow these guidelines when running the SV Docker image within a Kubernetes environment:

  1. When using persistent volumes, directories and files created by the deployment have root ownership because Docker runs with root privileges, while the SV Server image runs under the sv-server user. Modifications of the affected files are not persistent because of the inconsistent access rights: the deployment does not fail, but changes to the file system are not propagated and are lost when the pod restarts. To overcome this, the sv-server image must run under the root user instead of sv-server:

      runAsUser: 0
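In context, the setting belongs in the container's securityContext. A minimal sketch, in which the container name and image reference are placeholders:

```yaml
spec:
  containers:
    - name: sv-server
      image: <sv-server-image>    # substitute your SV Server image
      securityContext:
        runAsUser: 0              # run as root so files on the persistent volume stay writable
```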

  2. The Agents and SV Server configuration, and all other configuration data, must be persistent and stored in a persistent volume:

      - name: sv-storage-work
        mountPath: /opt/microfocus/sv-server/work
        readOnly: false

  3. To access log files from a previous pod (for example, after a crash), the log directory must also be mounted to the PV:

      - name: sv-storage-logs
        mountPath: /opt/microfocus/sv-server/logs
        readOnly: false
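The two mounts above reference named volumes, which must be defined in the pod spec and backed by persistent storage. A sketch using PersistentVolumeClaims; the claim names and requested size are illustrative:

```yaml
# in the pod spec, alongside the container definition:
volumes:
  - name: sv-storage-work
    persistentVolumeClaim:
      claimName: sv-work-pvc      # illustrative claim name
  - name: sv-storage-logs
    persistentVolumeClaim:
      claimName: sv-logs-pvc      # illustrative claim name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sv-work-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi                # example size; adjust to your data volume
```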

  4. The SV Server must be stopped by SIGTERM instead of SIGKILL, and its shutdown and startup must be synchronized. When stopping the pod, Kubernetes sends SIGTERM to the process within the Docker container and waits for the default timeout of 30 seconds; it then kills the process by sending SIGKILL. By default, the image is configured to run SV Server in a shell (sh), which does not propagate SIGTERM to its child processes. Within the SV Server deployment yaml file, the shell is therefore replaced by bash:

    command: ["/bin/bash","-c"]

    To synchronize the startup and shutdown, a lock file /opt/microfocus/sv-server/work/lock is used. When the server starts and finds a lock file, it waits until the file disappears, up to a maximum of 60 seconds (if something failed, the previous pod may not have deleted the file):

    command: ["/bin/bash","-c"]
    args: ["i=0
            && while [ \"$i\" -lt 60 -a -f /opt/microfocus/sv-server/work/lock ]; do i=$((i+1)); sleep 1; done
            && echo sv > /opt/microfocus/sv-server/work/lock
            && /opt/microfocus/sv-server/bin/start-server.sh && rm /opt/microfocus/sv-server/work/lock"]
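The wait-for-lock loop can be exercised in isolation as a plain shell script. This sketch uses a temporary file in place of /opt/microfocus/sv-server/work/lock, and a background subshell stands in for the previous pod releasing the lock:

```shell
# Wait up to 60 seconds for a stale lock file to disappear,
# mirroring the startup synchronization loop in the deployment args.
lock=$(mktemp)                  # create the "stale" lock file
( sleep 2; rm -f "$lock" ) &    # simulate the previous pod releasing it
i=0
while [ "$i" -lt 60 ] && [ -f "$lock" ]; do i=$((i+1)); sleep 1; done
wait                            # reap the background simulator
echo "lock released after ${i}s"
```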

    A preStop hook is implemented to send SIGTERM to the SV Server process directly and delete the lock file:

          - /bin/bash
          - -c
          - kill -15 1; while kill -0 1; do sleep 5; done && rm /opt/microfocus/sv-server/work/lock

    The following setting extends the termination grace period timeout to 120 seconds:

    terminationGracePeriodSeconds: 120
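Put together, the shutdown pieces sit on the pod and container spec as follows. This is a sketch of the surrounding structure; the container name is a placeholder:

```yaml
spec:
  terminationGracePeriodSeconds: 120   # allow up to 120s for a clean shutdown
  containers:
    - name: sv-server
      lifecycle:
        preStop:
          exec:
            command:
              - /bin/bash
              - -c
              - kill -15 1; while kill -0 1; do sleep 5; done && rm /opt/microfocus/sv-server/work/lock
```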

  5. To allow HTTPS agents, switch the Mono TLS provider to legacy by setting the MONO_TLS_PROVIDER environment variable:

      - name: MONO_TLS_PROVIDER
        value: "legacy"


See also: