Running the SV Server in Kubernetes

This topic describes how to run the Service Virtualization server in a Kubernetes-managed cloud.

Containers overview

A complete environment consists of the following containers:

  1. SV Server – A runtime where you can host virtual services.

  2. Database – The SV Server requires a supported database to run.

  3. SV Management – A web front-end for the SV Server.

  4. AutoPass License Server – A license server with a valid license is required to run virtual services in simulation mode.

The complete configuration with setup hints is available on GitHub: https://github.com/MicroFocus/sv-containerization/tree/main/kubernetes-examples.
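
To bring up the environment from those examples, a typical flow is to clone the repository and apply the manifests. The following is a minimal sketch; the namespace name sv is illustrative, and the exact manifest paths depend on the repository layout:

    # Clone the examples and apply them into a dedicated namespace (names are illustrative)
    git clone https://github.com/MicroFocus/sv-containerization.git
    kubectl create namespace sv
    kubectl apply -n sv -f sv-containerization/kubernetes-examples/
    # Verify that the pods come up
    kubectl get pods -n sv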


SV Server container

This section provides details and examples of how to configure the SV Server container in a Kubernetes environment.

  1. As a security best practice, you should run the SV Server container as a non-root user. This requires specific configuration of securityContext and securityContext-aware Persistent Volumes.

    • Set runAsUser to the user under which the SV Server container runs; this is the sv-server user (UID 1234).

    • Set fsGroup to the sv-server group (GID 1234), so that files created on Persistent Volumes are accessible to the SV Server process.

    securityContext should look like this:

    securityContext:
      runAsUser: 1234
      fsGroup: 1234
      fsGroupChangePolicy: "OnRootMismatch"
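
    In a Deployment, this block belongs at the pod level, under spec.template.spec, so that fsGroup can be applied to volume ownership. The following is a minimal sketch; the container name is illustrative:

    spec:
      template:
        spec:
          # Pod-level securityContext: fsGroup is only available here
          securityContext:
            runAsUser: 1234
            fsGroup: 1234
            fsGroupChangePolicy: "OnRootMismatch"
          containers:
            - name: sv-server   # illustrative container name
              ...
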
  2. The Agents and SV Server configuration, and all other configuration data, must be persistent and stored on a Persistent Volume. You must use securityContext-aware Persistent Volumes to avoid file permission issues. For example, use the local type and not the hostPath type.

    volumeMounts:
      - name: sv-storage-work
        mountPath: /opt/microfocus/sv-server/work
        readOnly: false

    For more information, see the Kubernetes Persistent Volumes help.
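
    The mount must be backed by a matching entry in the pod's volumes list, typically via a PersistentVolumeClaim defined as a separate manifest. A minimal sketch; the claim name and storage size are illustrative:

    # In the pod spec, back the mount with a claim
    volumes:
      - name: sv-storage-work
        persistentVolumeClaim:
          claimName: sv-work-pvc   # illustrative claim name
    ---
    # The claim itself
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: sv-work-pvc            # illustrative claim name
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi             # illustrative size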

  3. To access log files from a previous pod (for example, after a crash), the log directory must also be mounted on a Persistent Volume (the same securityContext requirements apply here):

    volumeMounts:
      ...
      - name: sv-storage-logs
        mountPath: /opt/microfocus/sv-server/logs
        readOnly: false
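
    Note that kubectl logs --previous only returns the previous container's stdout. The files persisted in this directory survive the crash and can be inspected from the replacement pod, for example (the pod name is illustrative):

    # List the persisted log files from inside the new pod
    kubectl exec sv-server-pod -- ls -l /opt/microfocus/sv-server/logs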

  4. The SV Server must be stopped by SIGTERM instead of SIGKILL, and its shutdown and startup must be synchronized. When stopping the pod, Kubernetes sends SIGTERM to the process within the Docker container and waits for the default timeout of 30 seconds, after which it kills the process by sending SIGKILL. By default, the image is configured to run the SV Server in a shell (sh), and the shell doesn't propagate SIGTERM to its child processes.

    Within the SV Server deployment yaml file, the shell is replaced by bash:

    command: ["/bin/bash","-c"]


    To synchronize the startup and shutdown, a lock file /opt/microfocus/sv-server/work/lock is used. When the server starts and finds a lock file, it waits until the file disappears, for a maximum of 60 seconds. If something failed, the previous pod might not have deleted the file.

    command: ["/bin/bash","-c"]
    ...
    args: ["i=0
            && while [ \"$i\" -lt 60 -a -f /opt/microfocus/sv-server/work/lock ]; do i=$((i+1)); sleep 1; done
            && echo sv > /opt/microfocus/sv-server/work/lock
            ...
            && /opt/microfocus/sv-server/bin/start-server.sh
            && rm /opt/microfocus/sv-server/work/lock"]


    A preStop hook is implemented to send SIGTERM to the SV Server process directly and delete the lock file:

    preStop:
      exec:
        command:
          - /bin/bash
          - -c
          - kill -15 1; while kill -0 1; do sleep 5; done && rm /opt/microfocus/sv-server/work/lock


    The following setting extends the termination grace period timeout to 120 seconds:

    terminationGracePeriodSeconds: 120
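
    Put together, the following condensed sketch shows where these settings live in the Deployment; the container name is illustrative, and the command bodies are the ones shown above:

    spec:
      template:
        spec:
          terminationGracePeriodSeconds: 120   # pod-level setting
          containers:
            - name: sv-server                  # illustrative container name
              command: ["/bin/bash","-c"]
              args: ["..."]                    # startup and lock handling shown above
              lifecycle:
                preStop:
                  exec:
                    command: ["/bin/bash","-c","..."]   # shutdown sequence shown above
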
  5. To allow HTTPS agents, switch the Mono TLS provider to legacy by setting the following environment variable:

    - name: MONO_TLS_PROVIDER
      value: "legacy"
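
    The variable belongs in the container's env list; a minimal sketch, with the container name and image as placeholders:

    containers:
      - name: sv-server                # illustrative container name
        image: <sv-server-image>       # placeholder
        env:
          - name: MONO_TLS_PROVIDER
            value: "legacy"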

