Running the SV Server in Kubernetes
This topic describes how to run the Service Virtualization server in a Kubernetes-managed cloud environment.
A complete environment consists of the following containers:
SV Server – A runtime where you can host virtual services.
Database – The SV Server requires a supported database to run.
SV Management – A web front-end for the SV Server.
AutoPass License Server – To run virtual services in simulation mode, a license server with a valid license is required.
The complete configuration with setup hints is available on GitHub: https://github.com/MicroFocus/sv-containerization/tree/main/kubernetes-examples.
SV Server container
This section provides details and examples of how to configure the SV Server container in a Kubernetes environment.
As a security best practice, run the SV Server container as a non-root user. This requires specific configuration of securityContext-aware persistent volumes.
Set runAsUser to the user under which the SV Server container runs; this is the sv-server user (UID 1234).
Set fsGroup to the sv-server group (GID 1234), so that files created on persistent volumes are accessible to the SV Server process.
The securityContext should look like this:
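A sketch of the corresponding pod-level securityContext, using the UID and GID given above:

```yaml
securityContext:
  # run the container as the sv-server user, not root
  runAsUser: 1234
  # files created on mounted persistent volumes belong to the sv-server group,
  # keeping them accessible to the SV Server process
  fsGroup: 1234
```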
The Agents and SV Server configuration, and all other configuration data, must be persistent and stored on a persistent volume. Use securityContext-aware persistent volumes to avoid file permission issues; for example, use the local type rather than the hostPath type.
- name: sv-storage-work
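The fragment above belongs to a volume definition. A minimal sketch of how the work directory might be wired up; the mount path matches the lock-file location used later in this topic, and the claim name is hypothetical:

```yaml
# in the container spec: mount the work directory from persistent storage
volumeMounts:
- name: sv-storage-work
  mountPath: /opt/microfocus/sv-server/work
# in the pod spec: back the volume with a securityContext-aware PVC
volumes:
- name: sv-storage-work
  persistentVolumeClaim:
    claimName: sv-work-pvc  # hypothetical claim name
```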
For more information, see the Kubernetes Persistent Volumes documentation.
To access log files from a previous pod (for example, after a crash), the log directory must also be mounted on a persistent volume (the same securityContext requirements apply here):
- name: sv-storage-logs
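A sketch of the corresponding log-directory mount; the mount path is an assumption based on the server's installation directory and may differ in your image:

```yaml
volumeMounts:
# keep logs on persistent storage so they survive pod crashes
- name: sv-storage-logs
  mountPath: /opt/microfocus/sv-server/logs
```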
The SV Server must be stopped by SIGTERM rather than SIGKILL, and its shutdown and startup must be synchronized. When stopping a pod, Kubernetes sends SIGTERM to the process inside the container and waits for the default grace period of 30 seconds; after that, it kills the process with SIGKILL. By default, the image runs the SV Server in a shell (sh), and the shell does not propagate SIGTERM to its child processes.
Within the SV Server deployment YAML file, the shell is therefore replaced by bash:
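A minimal sketch of such an entrypoint override; the path of the start script is an assumption and should be adjusted to your image layout:

```yaml
# override the image entrypoint so the server runs under bash instead of sh
command:
- /bin/bash
args:
- -c
# path to the server start script (assumed)
- /opt/microfocus/sv-server/work/bin/start-server.sh
```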
To synchronize startup and shutdown, a lock file /opt/microfocus/sv-server/work/lock is used. When the server starts and finds the lock file, it waits until the file disappears, up to a maximum of 60 seconds. If something fails, the file might not have been deleted by the previous pod.
- -c
- i=0
  && while [ "$i" -lt 60 -a -f /opt/microfocus/sv-server/work/lock ]; do i=$((i+1)); sleep 1; done
  && echo sv > /opt/microfocus/sv-server/work/lock
  && /opt/microfocus/sv-server/work/bin/start-server.sh
  && rm /opt/microfocus/sv-server/work/lock
A preStop hook is implemented to send SIGTERM directly to the SV Server process and delete the lock file:
- -c
- kill -15 1; while kill -0 1; do sleep 5; done && rm /opt/microfocus/sv-server/work/lock
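In the container spec, this hook sits under lifecycle.preStop; a sketch assuming the lock-file path given above:

```yaml
lifecycle:
  preStop:
    exec:
      command:
      - /bin/bash
      - -c
      # SIGTERM PID 1 (the server), wait for it to exit, then remove the lock file
      - kill -15 1; while kill -0 1; do sleep 5; done && rm /opt/microfocus/sv-server/work/lock
```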
The following setting extends the termination grace period to 120 seconds:
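This is a pod-spec field; the value comes from the text above:

```yaml
spec:
  # give the SV Server up to 120 seconds to shut down cleanly after SIGTERM
  terminationGracePeriodSeconds: 120
```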
To allow HTTPS agents, switch the Mono TLS provider to legacy:
- name: MONO_TLS_PROVIDER
  value: legacy