Running the SV Server in Kubernetes
This topic describes how to run the Service Virtualization (SV) server in a Kubernetes-managed cloud environment.
Containers overview
A complete environment consists of the following containers:
- SV Server – A runtime where you can host virtual services.
- Database – The SV Server requires a supported database to run.
- SV Management – A web front-end for the SV Server.
- AutoPass License Server – To run virtual services in simulation mode, a license server with a valid license is required.
The complete configuration with setup hints is available on GitHub: https://github.com/MicroFocus/sv-containerization/tree/main/kubernetes-examples.
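To illustrate how these containers map to Kubernetes objects, the following is a minimal, hypothetical sketch of a Deployment and Service for the SV Server (the image name, labels, and port are illustrative assumptions; the GitHub examples above are the authoritative reference):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sv-server
    spec:
      replicas: 1                 # the SV Server holds state; run a single replica
      selector:
        matchLabels:
          app: sv-server
      template:
        metadata:
          labels:
            app: sv-server
        spec:
          containers:
            - name: sv-server
              image: sv-server:latest     # illustrative image name
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sv-server
    spec:
      selector:
        app: sv-server
      ports:
        - port: 6080              # illustrative port for the server endpoint

The Database, SV Management, and AutoPass License Server containers follow the same pattern, each with its own Deployment (or StatefulSet, for the database) and Service.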
SV Server container
This section provides details and examples of how to configure the SV Server container in a Kubernetes environment.
- As a security best practice, you should run the SV Server container as a non-root user. This requires specific configuration of securityContext and securityContext-aware persistent volumes:
  - Set runAsUser to the user under which the SV Server container runs; this is the sv-server user (UID 1234).
  - Set fsGroup to the sv-server group (GID 1234), so that files created on persistent volumes are accessible to the SV Server process.

  The securityContext should look like this:

    securityContext:
      runAsUser: 1234
      fsGroup: 1234
      fsGroupChangePolicy: "OnRootMismatch"
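  For context, this is where the pod-level securityContext sits within a Deployment manifest (a minimal sketch; only the securityContext values come from this topic, the rest is illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sv-server
    spec:
      ...
      template:
        spec:
          # Pod-level securityContext: applies to the SV Server process and
          # controls ownership of files created on mounted volumes.
          securityContext:
            runAsUser: 1234
            fsGroup: 1234
            # Re-chown volume contents only when the volume root does not
            # already match the fsGroup, avoiding a costly recursive
            # ownership change on every start.
            fsGroupChangePolicy: "OnRootMismatch"
          containers:
            - name: sv-server
              ...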
- The Agents and SV Server configuration, and all other configuration data, must be persistent and stored in a persistent volume. You must use securityContext-aware persistent volumes to avoid file permission issues. For example, use the local type and not the hostPath type:

    volumeMounts:
    - name: sv-storage-work
      mountPath: /opt/microfocus/sv-server/work
      readOnly: false
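  A securityContext-aware backing volume of the local type might be defined as follows (a sketch only; the volume name, capacity, storage class, path, and node name are illustrative assumptions):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: sv-storage-work-pv
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        # Unlike hostPath, the local type participates in fsGroup-based
        # ownership management, so the sv-server user can write to it.
        path: /mnt/disks/sv-work
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - worker-node-1     # local volumes are pinned to one node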
- To access log files from a previous pod (for example, after a crash), the log directory must also be mounted to the persistent volume (the same securityContext requirements apply here):

    volumeMounts:
    ...
    - name: sv-storage-logs
      mountPath: /opt/microfocus/sv-server/logs
      readOnly: false
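  Each volumeMount name is backed by an entry in the pod's volumes section, typically bound through a PersistentVolumeClaim (the claim names below are illustrative assumptions):

    volumes:
      - name: sv-storage-work
        persistentVolumeClaim:
          claimName: sv-storage-work-pvc
      - name: sv-storage-logs
        persistentVolumeClaim:
          claimName: sv-storage-logs-pvc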
- The SV Server must be stopped by SIGTERM instead of SIGKILL, and its shutdown and startup must be synchronized. When stopping the pod, Kubernetes sends SIGTERM to the process within the Docker container and waits for the default timeout of 30 seconds; after that, it kills the process by sending SIGKILL. By default, the image is configured to run the SV Server in a shell (sh), which does not propagate SIGTERM to its child processes. Within the SV Server deployment YAML file, the shell is therefore replaced by bash:

    command: ["/bin/bash","-c"]
  To synchronize the startup and shutdown, a lock file, /opt/microfocus/sv-server/work/lock, is used. When the server starts and finds the lock file, it waits until the file disappears, for a maximum of 60 seconds. (If something failed, the previous pod might not have deleted the file.)

    command: ["/bin/bash","-c"]
    ...
    args: ["i=0
      && while [ \"$i\" -lt 60 -a -f /opt/microfocus/sv-server/work/lock ]; do i=$((i+1)); sleep 1; done
      && echo sv > /opt/microfocus/sv-server/work/lock
      ...
      && /opt/microfocus/sv-server/work/bin/start-server.sh && rm /opt/microfocus/sv-server/work/lock"]
  A preStop hook is implemented to send SIGTERM to the SV Server process directly and delete the lock file:

    preStop:
      exec:
        command:
          - /bin/bash
          - -c
          - kill -15 1; while kill -0 1; do sleep 5; done && rm /opt/microfocus/sv-server/work/lock
  The following setting extends the termination grace period timeout to 120 seconds:

    terminationGracePeriodSeconds: 120
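  Putting the pieces of this item together, the relevant parts of the SV Server pod spec look like this (a condensed sketch; the elided values are as in the full example on GitHub):

    spec:
      terminationGracePeriodSeconds: 120    # allow up to 120 s for a clean shutdown
      containers:
        - name: sv-server
          command: ["/bin/bash","-c"]       # bash propagates SIGTERM to the server
          args: ["..."]                     # lock-file wait/start/cleanup sequence shown above
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/bash
                  - -c
                  - kill -15 1; while kill -0 1; do sleep 5; done && rm /opt/microfocus/sv-server/work/lock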
- To allow HTTPS agents, switch the Mono TLS provider to legacy:

    - name: MONO_TLS_PROVIDER
      value: "legacy"
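  The variable is set in the env section of the SV Server container, for example:

    containers:
      - name: sv-server
        ...
        env:
          - name: MONO_TLS_PROVIDER
            value: "legacy"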