Docker installation

  • Last updated on June 26, 2025 at 3:02 PM

Reading time: 2 minutes.

This section describes how to install CAM using the CAM Docker image.

CAM Docker configuration

CAM environment variables

CAM_PUBLIC_URL

CAM public URL
Default value: http://localhost:8180
(Used in cam-conf.json)

KC_AUTH_URL

Keycloak public URL
Default value: null (i.e. no authentication server mode)
(Used in cam-conf.json)

KC_CAM_INTERNAL_SECRET

Keycloak CAM client internal secret. Mandatory with Keycloak.
Default value: null
(Used in cam-conf.json)

AWS_EXECUTION_ENV

AWS execution environment. Used to determine the container's internal IP address in HA deployments.
Default value: null
Possible values: AWS_ECS_EC2, AWS_ECS_FARGATE

STANDALONE_CONF

Wildfly configuration. Mandatory.
Default value: null
Possible values:
  • standalone-dbh2.xml (Standalone H2)
  • standalone.xml (Standalone Postgres)
  • standalone-ha.xml (HA Postgres)
(Used in run-cam-docker.sh)

CAM_DB_HOSTNAME

CAM DB host (Postgres)
Default value: localhost
(Used in standalone*.xml)

CAM_DB_USER

CAM DB user (Postgres)
Default value: camuser
(Used in standalone*.xml)

CAM_DB_PASSWORD

CAM DB password (Postgres)
Default value: usrcam
(Used in standalone*.xml)

JVM_MIN_MEM_SIZE

Minimum JVM heap size (Xms)
Default value: null
If both JVM_MIN_MEM_SIZE and JVM_MAX_MEM_SIZE are null, Xms = Xmx = 4G
(Used in cam.conf)

JVM_MAX_MEM_SIZE

Maximum JVM heap size (Xmx)
Default value: null
If both JVM_MIN_MEM_SIZE and JVM_MAX_MEM_SIZE are null, Xms = Xmx = 4G
(Used in cam.conf)

CUSTOM_JVM_OPTIONS

Additional JVM parameters
Default value: null
(Used in cam.conf)

ITM_PUBLIC_URL

ITM public URL
Default value: http://localhost:8280
(Used in connectors/itm.json)

ITM_DB_HOSTNAME

ITM DB host (Postgres) - For DB synchronization only
Default value: localhost
(Used in connectors/itm.json)

ITM_DB_USER

ITM DB user (Postgres) - For DB synchronization only
Default value: itmuser
(Used in connectors/itm.json)

ITM_DB_PASSWORD

ITM DB password (Postgres) - For DB synchronization only
Default value: usritm
(Used in connectors/itm.json)

CAM_DB_SECRET_ARN

Amazon Resource Name (ARN) of the CAM DB secret

ITM_DB_SECRET_ARN

Amazon Resource Name (ARN) of the ITM DB secret

KC_CAM_INTERNAL_SECRET_ARN

Amazon Resource Name (ARN) of the Keycloak CAM internal secret

ANTHROPIC_API_KEY

Anthropic API key

GOOGLE_GEMINI_API_KEY

Google Gemini API key

OPEN_API_KEY

OpenAI API key
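
For reference, these variables are typically passed to the container through the environment block of docker-compose.yaml. The fragment below is an illustrative sketch, not the shipped compose file; the hostname, credentials, and URL are placeholders:

```yaml
# Illustrative environment block for the cam service in docker-compose.yaml.
# All values are examples; adapt them to your deployment.
services:
  cam:
    image: mondeca/cam:${CAM_VERSION}
    environment:
      CAM_PUBLIC_URL: "http://cam.example.com:8180"   # hypothetical public URL
      STANDALONE_CONF: "standalone.xml"               # Standalone Postgres
      CAM_DB_HOSTNAME: "postgres"
      CAM_DB_USER: "camuser"
      CAM_DB_PASSWORD: "usrcam"
      JVM_MIN_MEM_SIZE: "4G"
      JVM_MAX_MEM_SIZE: "4G"
```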

CAM mount points

log

/opt/jboss/wildfly/cam/log

content-augmentation

/opt/jboss/wildfly/cam/configuration/content-augmentation

py-envs 

[Deprecated 3.10]

/opt/jboss/wildfly/cam/bin/py-envs

work

/opt/jboss/wildfly/cam/work

Note: this folder is kept separate from content-augmentation because it simplifies configuration upgrades.

resources

[3.12]

/opt/jboss/wildfly/cam/resources

h2_db

/opt/jboss/wildfly/cam/db
Only for H2 database
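
The mount points above can be bound from the host either with --mount flags on docker create or with a compose volumes block. A sketch of the latter, with host paths as placeholders:

```yaml
# Illustrative volumes block for the cam service; host paths are placeholders.
services:
  cam:
    volumes:
      - ./content-augmentation:/opt/jboss/wildfly/cam/configuration/content-augmentation
      - ./log:/opt/jboss/wildfly/cam/log
      - ./work:/opt/jboss/wildfly/cam/work
      - ./resources:/opt/jboss/wildfly/cam/resources   # [3.12]
      - ./db:/opt/jboss/wildfly/cam/db                 # only for the H2 database
```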

Python ML libs

[3.10] Python ML libraries are installed inside the CAM Docker image, so there is no need to unzip an ML.zip containing these libraries. spaCy models are no longer bundled with the installed libraries; they are loaded automatically according to the CAM configuration.

For more information, see the CAM Python ML libraries documentation.

CAM Docker installation

Docker installation

Install Docker and Docker Compose according to the instructions for your OS: https://docs.docker.com/install/

Create CAM Docker image

  1. Unzip the CAM Docker package docker-CAM-${CAM_VERSION}-image.zip and change into its root directory.
  2. Run the command:
sudo docker image build -t mondeca/cam:${CAM_VERSION} .

Deploy CAM Docker image

  1. Install the required dependencies such as Keycloak, PostgreSQL, and ITM (you can skip this step if you run without an authentication server and use the internal H2 DB instead of an external database)
  2. Unzip the default CAM configuration into the content-augmentation folder
  3. Create the log folder
  4. Create the work folder
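The preparation steps above can be sketched as a short shell script (the configuration archive name is hypothetical and depends on your CAM release):

```shell
# Prepare the host folders that will be bind-mounted into the container
mkdir -p content-augmentation log work

# Unzip the default CAM configuration into content-augmentation, e.g.:
#   unzip docker-CAM-conf.zip -d content-augmentation   # archive name is a placeholder

# Verify the folders exist before creating the container
ls -d content-augmentation log work
```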

With Docker

1. Create container with command: 

sudo docker create --name cam -p 8180:8180 --mount type=bind,source="$(pwd)"/content-augmentation,target=/opt/jboss/wildfly/cam/configuration/content-augmentation --mount type=bind,source="$(pwd)"/log,target=/opt/jboss/wildfly/cam/log --mount type=bind,source="$(pwd)"/py-envs,target=/opt/jboss/wildfly/cam/bin/py-envs -it mondeca/cam:${CAM_VERSION}

2. Run container with command:

sudo docker start cam

3. Stop container with command:

sudo docker stop cam

With Docker Compose

1. Update docker-compose.yaml with appropriate values

2. Start CAM container with command:

sudo docker compose up -d

3. Stop container with command:

sudo docker compose down

Going inside the container

sudo docker exec -it docker-cam-1 bash

Removing the container

sudo docker rm cam

Running CAM in HA

A docker-compose-ha-local-test file is provided to test CAM HA with two nodes on localhost.

If you want to run several HA nodes with a CAM Docker image:

- update docker-compose.yaml with appropriate values

- set STANDALONE_CONF to standalone-ha.xml

- use a Postgres DB (this is mandatory)

- configure the Docker network so that CAM containers can communicate on a private network through the additional ports (7600 and 7650)
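
Under those constraints, a two-node HA service definition might look like the following sketch (service names, network, and port mappings are illustrative; this is not the shipped docker-compose-ha-local-test file):

```yaml
# Illustrative two-node HA setup; adapt image tag, ports, and DB settings.
services:
  cam1:
    image: mondeca/cam:${CAM_VERSION}
    environment:
      STANDALONE_CONF: "standalone-ha.xml"   # mandatory for HA
      CAM_DB_HOSTNAME: "postgres"            # shared Postgres DB is mandatory
    ports:
      - "8180:8180"
    networks: [cam-net]
  cam2:
    image: mondeca/cam:${CAM_VERSION}
    environment:
      STANDALONE_CONF: "standalone-ha.xml"
      CAM_DB_HOSTNAME: "postgres"
    ports:
      - "8181:8180"
    networks: [cam-net]
networks:
  cam-net: {}   # nodes reach each other on ports 7600 and 7650 over this network
```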

CAM with GPU

Please follow these installation steps on the host:

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt-get update

sudo apt-get install -y nvidia-container-toolkit

sudo nvidia-ctk runtime configure --runtime=docker

sudo service docker restart

To create the CAM GPU Docker image, build with the Dockerfile_gpu file.

To run with Docker: 

sudo docker run --name cam --gpus all --runtime=nvidia mondeca/cam:${CAM_VERSION}

To run with Docker Compose: uncomment the GPU part of docker-compose.yaml (environment variables, runtime).
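
The exact keys depend on the shipped docker-compose.yaml, but the uncommented GPU part would typically resemble the following sketch, which follows the standard NVIDIA runtime settings for Compose:

```yaml
# Illustrative GPU settings for the cam service (requires nvidia-container-toolkit)
services:
  cam:
    image: mondeca/cam:${CAM_VERSION}
    runtime: nvidia
    environment:
      NVIDIA_VISIBLE_DEVICES: "all"
      NVIDIA_DRIVER_CAPABILITIES: "compute,utility"
```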
