This is the RapidMiner Deployment documentation for version 9.7.
Docker-compose templates
The templates described here will help you to deploy RapidMiner AI Hub on a single host. For multi-host deployments, see the Kubernetes templates.
To deploy one of our docker-compose templates, click Download to fetch the template files, or select a link for additional details.
- [Download] for development and testing purposes, and for getting started quickly
- [Download] for generic production purposes
- [Download] for Deep Learning
- [Download] for production deployments that need easy Hadoop connectivity
- [Download] for production deployments that need low-latency scoring capability
Each template provides two files:
- The environment file `.env`
- The definition file `docker-compose.yml`
You can mix and match these templates to produce a custom deployment, e.g. one that has both Real-Time Scoring and Hadoop Connectivity capabilities.
See also: Services and locations provided by these templates.
Instructions for docker-compose deployments
To deploy one of these templates, take the following steps.
- If you have not yet done so, install Docker on Linux / on Windows.
- Choose one of the templates from the list above, and click Download to fetch the ZIP file. Unzip and examine the following two files:
  - `.env` (note that because of the preceding dot, this file is usually hidden)
  - `docker-compose.yml`
- As discussed in step (7), set the `PUBLIC_URL` and `SSO_PUBLIC_URL` variables in the `.env` file.
- Transfer these two files to a folder on the server host, the machine where you installed Docker.
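For orientation, a minimal `.env` fragment could look like the following sketch. The variable names come from the template; the address is a placeholder, chosen to satisfy the URL requirements discussed later in these steps:

```
# Sketch of a minimal .env fragment -- placeholder values, adjust before first startup
PUBLIC_URL=http://192.168.1.101
SSO_PUBLIC_URL=http://192.168.1.101
```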
- Connect to the server host (`ssh`), and change directory (`cd`) to the folder containing those two files.
- The template refers to an external Docker network (`jupyterhub-user-net-$JUPYTER_STACK_NAME`) that should be created before starting the deployment (if your planned deployment contains JupyterHub, which is true by default). The default stack name is `default`, so if it is not changed, the network should be created using the following command:

  ```
  docker network create jupyterhub-user-net-default
  ```
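The network name is derived from the stack name. A small shell sketch (the naming pattern comes from the template; the default-handling logic is an assumption) makes the pattern explicit:

```shell
# Derive the external network name from the stack name, following the
# jupyterhub-user-net-$JUPYTER_STACK_NAME pattern used by the template.
# Falls back to "default" when JUPYTER_STACK_NAME is unset or empty.
JUPYTER_STACK_NAME="${JUPYTER_STACK_NAME:-default}"
NETWORK_NAME="jupyterhub-user-net-${JUPYTER_STACK_NAME}"
# The command to run on the server host:
echo "docker network create ${NETWORK_NAME}"
```

With `JUPYTER_STACK_NAME` unset, this prints the same `docker network create jupyterhub-user-net-default` command as above.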
- The deployed stack needs a valid public URL setting, which is used by external clients (like RapidMiner Studio and a browser) and also for internal communication. Before the first startup, set this URL to a valid HTTP URL using the `PUBLIC_URL` and `SSO_PUBLIC_URL` environment variables in the `.env` file.
  - Using `http://localhost` or `http://127.0.0.1` is not supported, because this URL is also used for internal container-to-container communication between our services.
  - If deploying on a single host, we prefer to use at least the host's public IP address, like `http://192.168.1.101`, or a publicly resolvable hostname that resolves to this IP address, like `http://platform.rapidminer.com`.
  - If the deployment cannot listen on the default HTTP and HTTPS ports (80 and 443), then:
    - the port number should also be provided in the `PUBLIC_URL` and `SSO_PUBLIC_URL` variables, like `http://platform.rapidminer.com:8080`
    - the external port mapping should be set in the `docker-compose.yml` file at the `rm-proxy-svc` service definition, like `8080:80`
    - the external ports should be set using the `PROXY_HTTP_PORT` and `PROXY_HTTPS_PORT` variables for the `rm-server-svc` in the `docker-compose.yml` file, like `PROXY_HTTP_PORT=8080` and `PROXY_HTTPS_PORT=8443`. (If this is missed during the first startup, or needs to be changed later, configure it in the `/rapidminer-home/configuration/standalone.xml` file at the proxy settings and restart the service.)
  - Using HTTPS for the connection is highly preferred. In this case the `PUBLIC_URL` and `SSO_PUBLIC_URL` variables should be configured with the `https://` prefix, and the certificate chain and private key files should be provided in PEM format in the `ssl` sub-folder using the filenames `certificate.crt` and `private.key`. The default filenames can be changed using the environment variables in the Proxy section of the `.env` file.
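As an illustration of the non-default-port case, the relevant `docker-compose.yml` edits might look like the sketch below. Only the service names `rm-proxy-svc` and `rm-server-svc` and the variable names come from the template; the surrounding structure is an assumption about your local file:

```yaml
# Sketch only -- merge into the template's existing service definitions
services:
  rm-proxy-svc:
    ports:
      - "8080:80"     # external HTTP port 8080 mapped to the proxy's internal port 80
      - "8443:443"    # external HTTPS port 8443 mapped to the proxy's internal port 443
  rm-server-svc:
    environment:
      - PROXY_HTTP_PORT=8080
      - PROXY_HTTPS_PORT=8443
```

With this mapping, `PUBLIC_URL` and `SSO_PUBLIC_URL` would carry the matching `:8080` (or `:8443` for HTTPS) suffix.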
- Set additional frequently used configuration values:
  - The initial admin password can be set using the `KEYCLOAK_PASSWORD` variable.
  - The `AUTH_SECRET` value is used as the internal authentication encryption key. We recommend changing the default value to any base64-encoded string.
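One way to produce a random base64-encoded value for `AUTH_SECRET` (a sketch; any tool that emits base64 works equally well) is:

```shell
# Generate a random 32-byte, base64-encoded value suitable for AUTH_SECRET.
# tr strips the trailing newline so the value is a single-line .env entry.
AUTH_SECRET="$(head -c 32 /dev/urandom | base64 | tr -d '\n')"
echo "AUTH_SECRET=${AUTH_SECRET}"
```

The printed line can be pasted directly into the `.env` file.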
- If SSO configuration is not disabled (the default), the platform deployment needs to be initialized before the first startup. This may take up to 1-2 minutes. Run the command:

  ```
  docker-compose up -d rm-init-svc
  ```
- Finally, start the stack by running the command:

  ```
  docker-compose up -d
  ```
If the Docker images are not available on the host, they will be downloaded automatically from Docker Hub.
Good to know
- Additional `docker-compose` commands for common deployment administration tasks are described in the technology overview.
- To scale the number of RapidMiner Job Agents up or down, you can use the Docker Deployment Manager or the following `docker-compose` command:

  ```
  docker-compose up --scale rm-server-job-agent-svc=5 -d
  ```
Services and locations
Once the deployment is running, the configured reverse proxy listens on the standard HTTP port (80) by default and, if an HTTPS certificate is configured, on the HTTPS port (443) as well. The following locations are available on the deployment's public URL (depending on which services are deployed).
The initial login credentials are set in the `.env` file (`KEYCLOAK_USER` and `KEYCLOAK_PASSWORD` variables). By default, you can log in with the username "admin" and the password "changeit".
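In the `.env` file, the corresponding entries look like this (the default values stated above; change the password before exposing the deployment):

```
KEYCLOAK_USER=admin
KEYCLOAK_PASSWORD=changeit
```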
| URL | Description |
|---|---|
| `http://<deployment-url>` | Login screen for RapidMiner AI Hub |
| `http://<deployment-url>/platform-admin` | Platform Admin |
| `http://<deployment-url>/jupyter` | JupyterHub |
| `http://<deployment-url>/grafana` | Dashboards |
| `http://<deployment-url>/get-token` | Offline Token Generator |
| `http://<deployment-url>/auth/admin` | Identity and security configuration (Keycloak) |
To learn more about the used technologies and how to operate and administer your platform deployment, see our technology overview page.