Wednesday, January 29, 2025

Simple container-based Minio deployment for Db2

MinIO console with Db2-generated files
You may have read that I am testing Db2 external tables and remote storage. External tables let Db2 access data files stored outside the database itself, either in the local file system or (typically) on S3-compatible object storage. To be able to test everything locally, even without Internet connectivity while traveling, I installed and configured MinIO. Here are the few steps to get it up and running as a Docker/Podman deployment.

MinIO with basic encryption

Based on my tests, Db2 requires the storage provider to have encryption support configured. Thus, I configured MinIO to use its Key Management Server (KMS) by providing a 32-byte key, encoded as base64. I generated the (random) key with the following command; you could use your own key and encode it similarly.

head -c 32 /dev/urandom | base64
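If you want to make sure a generated value is usable, a quick sanity check (a sketch, assuming GNU coreutils) is to decode it again and count the bytes:

```shell
# Generate a random 32-byte key and encode it as base64.
MY_KEY=$(head -c 32 /dev/urandom | base64)

# Sanity check: the decoded value must be exactly 32 bytes,
# otherwise MinIO rejects the KMS key at startup.
DECODED_LEN=$(printf '%s' "${MY_KEY}" | base64 -d | wc -c)
echo "decoded key length: ${DECODED_LEN}"
```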

MinIO container

Next, all that is left is to start the container with the "podman" or "docker" command and the required parameters. For that, I created a small script.

MY_DATA_DIR=/path/to/data/directory
MY_SECRET_KEY=my-minio-key:myActualBase64EncodedKey=

podman run --replace -p 9000:9000 -p 9001:9001 --name myminios3 -v ${MY_DATA_DIR}:/data \
    --privileged -e "MINIO_KMS_SECRET_KEY=${MY_SECRET_KEY}" -e "MINIO_KMS_AUTO_ENCRYPTION=on" \
    quay.io/minio/minio server /data --console-address ":9001"

The variable MY_DATA_DIR points to the data directory on my host system. That directory is mapped to "/data" within the MinIO environment. The other variable, MY_SECRET_KEY, holds a pair consisting of a key name and the base64-encoded key value generated earlier. For the MinIO environment, the command sets the variables MINIO_KMS_SECRET_KEY and MINIO_KMS_AUTO_ENCRYPTION.
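Once the container is up, the setup can be verified with the MinIO client "mc". A sketch, assuming mc is installed on the host; the alias name "local", the bucket name "db2data", and the default "minioadmin" credentials are just example values:

```shell
# Register the local server under the alias "local" (default credentials).
mc alias set local http://localhost:9000 minioadmin minioadmin

# Create a test bucket and list the buckets on the server.
mc mb local/db2data
mc ls local
```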

The above settings cause a warning during startup because I do not set up a root user and password. This, of course, is a security risk, which I accepted for my local tests where I use the default "minioadmin" values.
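To get rid of that warning, explicit credentials could be passed in, too. A sketch of that variant of the script, setting MINIO_ROOT_USER and MINIO_ROOT_PASSWORD with placeholder values:

```shell
MY_DATA_DIR=/path/to/data/directory
MY_SECRET_KEY=my-minio-key:myActualBase64EncodedKey=

# Same as before, plus explicit root credentials (placeholder values;
# MinIO requires the password to be at least 8 characters long).
podman run --replace -p 9000:9000 -p 9001:9001 --name myminios3 -v ${MY_DATA_DIR}:/data \
    --privileged -e "MINIO_KMS_SECRET_KEY=${MY_SECRET_KEY}" -e "MINIO_KMS_AUTO_ENCRYPTION=on" \
    -e "MINIO_ROOT_USER=myadminuser" -e "MINIO_ROOT_PASSWORD=myLongSecretPassword" \
    quay.io/minio/minio server /data --console-address ":9001"
```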

The container exposes the endpoints over http, without TLS encryption. I also looked into generating a public/private key pair, mounting it, and briefly testing with https. That requires mounting the volume where the key pair is placed. But I decided to just use http locally.
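For reference, a sketch of the https variant I tried: generate a self-signed pair with openssl and mount it into the container. MinIO expects the files to be named private.key and public.crt, and the certificate directory can be passed with --certs-dir. The paths are placeholders.

```shell
MY_CERT_DIR=/path/to/certs

# Self-signed key pair for local testing only.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -keyout ${MY_CERT_DIR}/private.key -out ${MY_CERT_DIR}/public.crt \
    -subj "/CN=localhost"

podman run --replace -p 9000:9000 -p 9001:9001 --name myminios3 \
    -v ${MY_DATA_DIR}:/data -v ${MY_CERT_DIR}:/certs \
    --privileged -e "MINIO_KMS_SECRET_KEY=${MY_SECRET_KEY}" \
    -e "MINIO_KMS_AUTO_ENCRYPTION=on" \
    quay.io/minio/minio server /data --certs-dir /certs --console-address ":9001"
```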

Conclusions

The above gives me a local S3-compatible storage server that is simple to use and to maintain. For working with Db2 and S3 remote storage, the configuration is a pragmatic minimum for local testing and not meant for production use at all.

If you have feedback, suggestions, or questions about this post, please reach out to me on Mastodon (@data_henrik@mastodon.social) or LinkedIn.