
Installation instructions

These pages are intended for customers who want to self-host the Timefold Platform. Please make sure you understand the impact of self-hosting the Timefold Platform and that you have the correct license file before you proceed. Contact us for details.

Kubernetes is the target environment for the platform. The platform takes advantage of the dynamic nature of the cluster, while its components are not specific to any particular Kubernetes distribution.

Timefold Platform has been certified on the following Kubernetes-based offerings:

  • Amazon Elastic Kubernetes Service (EKS)

  • Google Kubernetes Engine (GKE)

  • Azure Kubernetes Service (AKS)

  • Red Hat OpenShift

Timefold Platform is expected to work on any Kubernetes service, whether in a public cloud or on-premises.
Timefold Platform Deployment architecture

When using Timefold Platform with maps integration, the deployment architecture is as follows:

Timefold Platform Deployment architecture with Maps integration

Before you begin

You need to fulfill several prerequisites before proceeding with installation:

  1. Kubernetes cluster

  2. DNS name and certificates

  3. OpenID Connect configuration

  4. Remote data store

  5. Required tools installation

Kubernetes cluster

A Kubernetes cluster can either be hosted by one of the certified cloud providers listed above or be self-hosted with an on-premises installation.

Amazon EKS Kubernetes cluster installation

Follow these instructions to complete a basic setup for AWS EKS running a Kubernetes cluster in AWS.

https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html

Azure Kubernetes cluster installation

Follow these instructions to complete a basic setup for Azure AKS running a Kubernetes cluster in Azure.

https://learn.microsoft.com/en-gb/azure/aks/learn/quick-kubernetes-deploy-portal

Google Kubernetes cluster installation

Follow these instructions to complete a basic setup for GKE running a Kubernetes cluster in Google Cloud Platform.

https://cloud.google.com/kubernetes-engine/docs/quickstarts/create-cluster

Currently, the Autopilot mode of Google Kubernetes Engine cannot be used with cert-manager. When using Autopilot, you need to provide the TLS certificate manually.

Your actions

  1. Create a Kubernetes cluster

DNS name and certificates

The Timefold Platform is accessible through the web interface, which requires a dedicated DNS name and TLS certificates.

The DNS name should be dedicated to the Timefold Platform, e.g. timefold.organization.com.

A TLS certificate can be:

  1. Provided manually as a pair of:

    • A certificate

    • The private key corresponding to the certificate

  2. Managed automatically throughout its life cycle by using cert-manager and the Let’s Encrypt Certificate Authority.

Your actions

  1. Create a DNS name with your DNS provider.

  2. If you are not using cert-manager, create a certificate and a private key.
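For a trial or test installation without cert-manager, a self-signed certificate can be generated with openssl. This is only a sketch: the file names tls.key/tls.crt and the host name timefold.example.com are placeholders, and self-signed certificates trigger browser warnings, so they are not suitable for production.

```shell
# Generate a self-signed certificate and matching private key
# (replace timefold.example.com with your dedicated DNS name).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=timefold.example.com"

# Inspect the result: subject and expiration date.
openssl x509 -in tls.crt -noout -subject -enddate
```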

OpenID Connect configuration

Security of the Timefold Platform is based on OpenID Connect. This enables integration with any OpenID Connect provider such as Google, Azure, Okta, and others.

Installation requires the following information from an OpenID Connect provider configuration:

  • Client ID (aka application ID)

  • Client secret

In addition, the following information is required; it usually comes from the OpenID Connect configuration endpoint (.well-known/openid-configuration):

  • Issuer URL - represented as issuer in OpenID Connect configuration.

  • Certificates URL - represented as jwks_uri in OpenID Connect configuration.

OIDC configuration endpoints examples:

Azure: https://login.microsoftonline.com/{your_azure_tenant}/v2.0/.well-known/openid-configuration

Google: https://accounts.google.com/.well-known/openid-configuration
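To pull the two values out of a discovery document, something like the following can be used. This is illustrated on a trimmed-down sample file; in practice, fetch the real document with curl from your provider's .well-known/openid-configuration endpoint, and prefer jq for robust JSON parsing.

```shell
# Trimmed-down sample discovery document, for illustration only.
cat > oidc-config.json <<'EOF'
{"issuer": "https://accounts.google.com",
 "jwks_uri": "https://www.googleapis.com/oauth2/v3/certs"}
EOF

# Naive extraction of the two values needed during installation.
sed -n 's/.*"issuer": *"\([^"]*\)".*/Issuer URL:       \1/p' oidc-config.json
sed -n 's/.*"jwks_uri": *"\([^"]*\)".*/Certificates URL: \1/p' oidc-config.json
```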

Create an application in the selected OpenID Connect provider. For reference, look at the Google and Azure client application setup guides.

Your actions

  1. Create a client application in your OpenID Connect provider.

  2. Make a note of:

    • Client ID

    • Client secret

The platform initiates the connection to the OIDC provider at startup, so a valid OIDC setup is required; without it, the Timefold Platform won’t work. If you don’t have a proper OIDC configuration, disable OIDC via the values file during platform installation:

oauth:
  enabled: false

Remote data store

The Timefold Platform requires a remote data store to persist datasets during execution. The following data stores are supported:

  • Amazon S3

  • Google Cloud Storage

  • Azure BlobStore

Timefold Platform manages data stores by creating and removing both objects and their containers/buckets; access must therefore be granted for these operations.

Your actions

  1. Create a data store in one of the supported options.

  2. Obtain the access credentials for the created data store:

    • Service account key for Google Cloud Storage.

    • Connection string for Azure BlobStore.

    • Access key and access secret for Amazon S3.

Required tools installation

The following tools are used during installation:

  • kubectl, the Kubernetes CLI tool.

  • helm, the Kubernetes package manager.

Download them from the official kubectl and helm websites.

Your actions

  1. Install kubectl.

  2. Install helm.

Installation

Installation of the Timefold Platform consists of three parts:

  1. Infrastructure installation.

  2. Security configuration and installation.

  3. Timefold Platform deployment.

Infrastructure installation

When using Red Hat OpenShift, this step can be skipped.

The Kubernetes Gateway API is recommended over Ingress for accessing services inside the Kubernetes cluster. The Gateway API represents the next generation of Kubernetes Ingress, Load Balancing, and Service Mesh APIs.

Gateway API installation

Depending on your Kubernetes cluster provider, the Gateway API and a gateway class might already be installed. To verify this, run the following command:

kubectl api-resources --api-group=gateway.networking.k8s.io

If no resources in the gateway.networking.k8s.io API group are found, the Gateway API needs to be installed manually:

kubectl apply --server-side -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.1/standard-install.yaml
This installs version 1.4.1 of the Gateway API; verify that you use the latest version that is compatible with your Kubernetes cluster.

Create namespace

Create the namespace where the gateway instance is going to run with the following command:

kubectl create namespace timefold-gateway

TLS Configuration

Provide certificate and private key manually

If the certificate and private key are provided manually, they need to be set as base64-encoded content in the values file (details about the values file are described in the Timefold Platform deployment section).

In addition, cert-manager needs to be disabled. Set ingress.tls.certManager.enabled to false.

ingress:
  tls:
    certManager:
      enabled: false
    cert: |
        -----BEGIN CERTIFICATE-----
        MIIFAjCCA+qgAwIBAgISA47J5bfwCEBxdp/Npea0B/isMA0GCSqGSIb3DQEBCwUA
        ....
        -----END CERTIFICATE-----
    key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEowIBAAKCAQEAvMx3Yui4OovRQeMnqVHaxmaDSD+hFezqq/mfz2xI6L0dlLfO
        ....
        -----END RSA PRIVATE KEY-----
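Before pasting the pair into the values file, it is worth verifying that the certificate and private key actually belong together. A quick openssl check compares the public key embedded in each file; the demo key pair below is generated on the fly, so in practice point the last two commands at your real tls.crt and tls.key.

```shell
# Demo setup only: generate a throwaway certificate/key pair.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout tls.key -out tls.crt -subj "/CN=demo" 2>/dev/null

# The actual check: the two public-key digests must be identical,
# otherwise the TLS handshake will fail.
openssl x509 -in tls.crt -noout -pubkey | openssl sha256
openssl pkey -in tls.key -pubout | openssl sha256
```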
Cert Manager
If you provide a certificate and private key manually, this step can be omitted.

cert-manager is responsible for providing trusted certificates for the gateway and the HTTP routes provisioned by the application. It takes care of the complete life cycle management of the TLS certificate for the Timefold Platform. By default, when cert-manager is enabled, the Timefold Platform will use the Let’s Encrypt CA to automatically request and provision a TLS certificate for the DNS name.

Installation is based on the official documentation.

If the Kubernetes cluster already has cert-manager installed, this step can be skipped.

The easiest way to install cert-manager is with Helm. Issue the following commands to install it in a dedicated cert-manager namespace:

helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true \
  --set config.apiVersion="controller.config.cert-manager.io/v1alpha1" \
  --set config.kind="ControllerConfiguration" \
  --set config.enableGatewayAPI=true

Deprecated Ingress based configuration

Details

cert-manager is responsible for providing trusted certificates for the ingress. It takes care of the complete life cycle management of the TLS certificate for the Timefold Platform. By default, when cert-manager is enabled, the Timefold Platform will use the Let’s Encrypt CA to automatically request and provision a TLS certificate for the DNS name.

Installation is based on the official documentation.

If the Kubernetes cluster already has cert-manager installed, this step can be skipped.

The easiest way to install cert-manager is with Helm. Issue the following commands to install it in a dedicated cert-manager namespace:

helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true

At this point, cert-manager should be installed. When running the following command:

kubectl get service --namespace cert-manager cert-manager

the result of the command should be as follows:

NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cert-manager   ClusterIP   10.0.254.7   <none>        9402/TCP   105s

Once cert-manager is installed, a certificate issuer needs to be created in the same namespace as the gateway itself.

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
  namespace: timefold-gateway
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          gatewayHTTPRoute:
            parentRefs:
            - name: timefold-platform
              namespace: timefold-gateway
              sectionName: platform-http
            labels:
              gateway: http01-letsencrypt
EOF

If there is a need to use another issuer, follow the cert-manager guide on creating issuers.

Gateway implementation installation

If your Kubernetes cluster already has a gateway implementation installed, this step can be skipped. This can be verified with the kubectl get gatewayclass command. If it returns results, note the NAME value to be used. It can also return multiple values; consult your Kubernetes cluster provider’s documentation to select the most applicable one for your setup.

Timefold Platform has selected an NGINX-based gateway implementation as the one to be installed in Kubernetes clusters without a default gateway installation.

  1. NGINX Gateway Fabric

    The core infrastructure component is a gateway that is based on NGINX and acts as the entry point from outside the Kubernetes cluster. Installation is based on the official documentation.

    The easiest way to install NGINX Gateway Fabric is with Helm. Issue the following command to install the gateway in a dedicated nginx-gateway namespace:

    helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway
    The above command is the most basic configuration of the gateway; it is recommended to consult the official documentation when installing to a given Kubernetes cluster, especially when running on a cloud provider’s infrastructure.
  2. Provision Gateway resource

    The next step is to provision the Gateway resource, which is bound to the host (DNS name) and represents the target for all application-provisioned HTTP routes.

    kubectl apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      annotations:
        cert-manager.io/issuer: letsencrypt-prod
      labels:
        app.kubernetes.io/instance: timefold-platform
        app.kubernetes.io/name: kube-gatewayplatform
      name: timefold-platform
      namespace: timefold-gateway
    spec:
      gatewayClassName: nginx
      listeners:
      - allowedRoutes:
          namespaces:
            from: Selector
            selector:
              matchLabels:
                gateway-access: "true"
        hostname: YOUR_HOST_NAME
        name: platform-http
        port: 80
        protocol: HTTP
      - allowedRoutes:
          namespaces:
            from: Selector
            selector:
              matchLabels:
                gateway-access: "true"
        hostname: YOUR_HOST_NAME
        name: platform-https
        port: 443
        protocol: HTTPS
        tls:
          certificateRefs:
          - group: ""
            kind: Secret
            name: timefold-orbit-platform-tls
          mode: Terminate
    EOF
    Replace YOUR_HOST_NAME in this Gateway resource with the actual host name to be used for the Timefold Platform.
    This Gateway resource uses the cert-manager integration; if cert-manager is not available, remove the cert-manager.io/issuer: letsencrypt-prod annotation before applying it.

    Note that there are two listener definitions: HTTP (platform-http) and HTTPS (platform-https). The HTTP listener (platform-http) is mainly used by cert-manager to perform the HTTP-01 challenge when issuing certificates. When cert-manager is not used, that listener can be removed.

    This gateway will accept HTTP routes from any namespace that has the label gateway-access set to "true". Make sure that the namespace used by the Timefold Platform installation has this label, for example: kubectl label namespace NAMESPACE gateway-access=true.

  3. Configure Gateway resource

    It might be required to configure the provisioned gateway with additional policies. One of the most common is the maximum payload size allowed by the NGINX proxy.

    kubectl apply -f - <<EOF
    apiVersion: gateway.nginx.org/v1alpha1
    kind: ClientSettingsPolicy
    metadata:
      name: timefold-gateway-client-settings
      namespace: timefold-gateway
    spec:
      targetRef:
        group: gateway.networking.k8s.io
        kind: Gateway
        name: timefold-platform
      body:
        maxSize: "100m"
    EOF
    Other settings can be configured depending on your requirements; follow the official documentation of NGINX Gateway Fabric.
  4. Assign gateway external IP to the dedicated DNS name

    The external IP of the gateway needs to be assigned to the DNS name for the Timefold Platform.

    Issue the following command to obtain the external IP of the gateway:

    kubectl get service -n timefold-gateway

    The result of the command should be as follows:

    NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP         PORT(S)                      AGE
    timefold-platform-nginx    LoadBalancer   10.100.53.160   X.X.X.X             80:32503/TCP,443:32027/TCP   5d23h

    Make a note of the external IP of the timefold-platform-nginx. This IP needs to be added as a DNS record (A or CNAME) with your DNS provider.

Verify that the host name is properly linked to the gateway external IP and that the gateway is responding before proceeding with further installation steps.
At this point, accessing the external IP should return a 404.

Deprecated Ingress based configuration

Details
  1. Ingress

    The core infrastructure component is an ingress that is based on NGINX and acts as the entry point from outside the Kubernetes cluster. Installation is based on the official documentation.

    If the Kubernetes cluster already has Nginx Ingress, this step can be skipped.

    The easiest way to install the NGINX ingress is with Helm. Issue the following command to install the ingress in a dedicated ingress-nginx namespace:

    helm upgrade --install ingress-nginx ingress-nginx \
      --repo https://kubernetes.github.io/ingress-nginx \
      --namespace ingress-nginx --create-namespace \
      --set controller.allowSnippetAnnotations=true
    The above command is the most basic configuration of the ingress; it is recommended to consult the official documentation when installing to a given Kubernetes cluster, especially when running on a cloud provider’s infrastructure.
  2. Assign ingress external IP to the dedicated DNS name

    The external IP of the ingress needs to be assigned to the DNS name for the Timefold Platform.

    Issue the following command to obtain the external IP of the ingress:

    kubectl get service -n ingress-nginx

    The result of the command should be as follows:

    NAME                                 TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                      AGE
    ingress-nginx-controller             LoadBalancer   10.0.79.218   20.31.236.247   80:30723/TCP,443:32612/TCP   5d20h
    ingress-nginx-controller-admission   ClusterIP      10.0.214.62   <none>          443/TCP                      5d20h

    Make a note of the external IP of the ingress-nginx-controller. This IP needs to be added as a DNS record (A or CNAME) with your DNS provider.

Verify that the host name is properly linked to the ingress external IP and that the ingress is responding before proceeding with further installation steps.
At this point, accessing the external IP should return a 404.

Security configuration and installation

Configuring the security aspect of the platform covers the TLS certificate that will be used to secure traffic coming into the platform.

A TLS certificate can be provided manually or fully managed using cert-manager.

If a certificate is provided manually, the platform administrator is responsible for the certificate renewal process.

End-to-end encryption

Timefold Platform has a number of components that interact with each other over the network. By default, this traffic is not secured (HTTP), but end-to-end encryption (HTTPS) can be turned on.

When using cert-manager, the only required configuration is to enable it in the values file:

ingress:
  tls:
    e2e: true

All required certificates and configuration will be provisioned automatically, including certificate rotation on a regular basis. Certificates have a 90-day expiration.

If cert-manager is not used, the certificates must be provided manually via Kubernetes secrets with the following names, in the same namespace the Timefold Platform is deployed to:

  • timefold-platform-components-tls

  • timefold-platform-workers-tls

timefold-platform-components-tls must contain a wildcard certificate for .namespace.svc, and timefold-platform-workers-tls must contain a wildcard certificate for .namespace.pod, where namespace is the actual namespace where the Timefold Platform is deployed.

For example, when the Timefold Platform is deployed to the planning-platform namespace, the certificates should be for the DNS names .planning-platform.svc and .planning-platform.pod respectively.

Both of these secrets should have the following keys:

Data
====
tls.crt:
tls.key:
ca.crt:

After the certificates and secrets are created, end-to-end TLS can be enabled for the platform deployment.

Prepare Kubernetes cluster

A Kubernetes cluster runs workloads on worker nodes. These worker nodes are machines attached to the cluster with certain characteristics, such as CPU type and architecture, available memory, and so on. The Timefold Platform workload is compute-intensive and runs best on compute-optimized nodes.

Which node types are available depends heavily on where the Kubernetes cluster is running and what type of hardware is available.

By default, Timefold Platform runs on any nodes available in the cluster, so no additional configuration is required, but it is recommended to dedicate compute-optimized nodes in the cluster to running solver workloads.

List all available nodes in your cluster with the following command:

kubectl get nodes

Mark selected worker nodes as solver workers

Use the following commands to mark selected nodes as dedicated to running solver workloads:

kubectl taint nodes NAME_OF_THE_NODE ai.timefold/solver-worker=true:NoSchedule
kubectl label nodes NAME_OF_THE_NODE ai.timefold/solver-worker=true
The commands should be executed on every node that is dedicated to run solver workloads.

After marking nodes as solver workers, only solving pods will be able to run on these nodes.

Ensure that there are some nodes available that are not dedicated to solver workloads to execute any other type of workloads.

Dedicate selected worker nodes to given model

If Timefold Platform is configured with multiple models, there is an option to dedicate selected nodes to given models by model ID.

Use the following commands to mark selected nodes as dedicated to running the employee-scheduling model only:

kubectl taint nodes NAME_OF_THE_NODE ai.timefold/model=employee-scheduling:NoSchedule
kubectl label nodes NAME_OF_THE_NODE ai.timefold/model=employee-scheduling
The commands should be executed on every node that is dedicated to running solver workloads.

Dedicate selected worker nodes to given tenant

When Timefold Platform runs in a multi-tenant setup, there is also an option to dedicate selected nodes to a given tenant.

Use the following commands to mark selected nodes as dedicated to running the tenant 668798f6-2026-4cce-9098-e212730b060e (given as an ID) only:

kubectl taint nodes NAME_OF_THE_NODE ai.timefold/tenant=668798f6-2026-4cce-9098-e212730b060e:NoSchedule
kubectl label nodes NAME_OF_THE_NODE ai.timefold/tenant=668798f6-2026-4cce-9098-e212730b060e
The commands should be executed on every node that is dedicated to running solver workloads.

Dedicate selected worker nodes to given tenant group

When Timefold Platform runs in a multi-tenant setup, there is also an option to dedicate selected nodes to a given tenant group. This is a more flexible extension of dedicating nodes to a single tenant, as it supports reusing a node pool for many tenants, e.g. a single account.

Use the following commands to mark selected nodes as dedicated to running the my-company tenant group only:

kubectl taint nodes NAME_OF_THE_NODE ai.timefold/tenant-group=my-company:NoSchedule
kubectl label nodes NAME_OF_THE_NODE ai.timefold/tenant-group=my-company
The commands should be executed on every node that is dedicated to running solver workloads.

Next, assign selected tenants to the my-company group. This can be done with the admin API at tenant creation time or by updating an existing tenant.

Adding additional tolerations to solver worker pods

Timefold Platform dynamically creates pods to perform solving operations. In certain situations, Kubernetes nodes are configured with additional taints; in that case, the created pods require tolerations for those taints.

In such situations, you can set the tolerations during installation of the platform with the following configuration in the values file:

models:
  tolerations: kubernetes.io/arch:Equal:arm64:NoSchedule

Tolerations must be specified in the following format: key:operator:value:effect

For example: kubernetes.io/arch:Equal:arm64:NoSchedule.

Multiple tolerations can be specified by using | as separator. For example: kubernetes.io/arch:Equal:arm64:NoSchedule|kubernetes.io/test:Equal:test:NoSchedule.
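The key:operator:value:effect string maps one-to-one onto the fields of a Kubernetes toleration. As an illustration (plain shell, no cluster required), the example toleration from above decomposes as follows:

```shell
# Split a toleration string of the form key:operator:value:effect.
# The key may contain '/' but must not contain ':'.
tol="kubernetes.io/arch:Equal:arm64:NoSchedule"
IFS=':' read -r key operator value effect <<EOF
$tol
EOF
echo "key=$key operator=$operator value=$value effect=$effect"
# Equivalent pod-spec toleration:
#   tolerations:
#   - key: kubernetes.io/arch
#     operator: Equal
#     value: arm64
#     effect: NoSchedule
```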

Timefold Platform deployment

Timefold support provides an access token for the container registry. This access token is used to log in to the Helm chart repository, is registered in the Kubernetes cluster as an image pull secret to pull container images, and is used during installation and by the running components.

When the access token expires, you will receive a new token from Timefold support. When this happens, regenerate the token value and replace the environment variable using the following steps.
Generate token value to access container registry

Use the following command to create a base64-encoded string that will act as the container registry token used in the installation of the Timefold Platform:

kubectl create secret docker-registry timefold-ghcr --docker-server=ghcr.io --docker-username=sa --docker-password={YOUR_TIMEFOLD_TOKEN} --dry-run=client --output="jsonpath={.data.\.dockerconfigjson}"

Replace YOUR_TIMEFOLD_TOKEN with the token received from Timefold support.
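Since the generated value is the base64-encoded .dockerconfigjson content, a quick sanity check is to decode it back and confirm it is valid JSON. The value below is a minimal illustrative example (the base64 of an empty auths object), not a real token:

```shell
# Example value only: base64 of '{"auths":{}}'; a real value embeds the registry credentials.
TOKEN="eyJhdXRocyI6e319"
echo "$TOKEN" | base64 -d   # prints {"auths":{}}
```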

After the token is created, set it as the environment variable TOKEN.

export TOKEN=VALUE_FROM_PREVIOUS_STEP

When installation is completed, you will be able to access the platform on the dedicated DNS name.

The Timefold Platform deployment differs in terms of configuration options depending on the selected remote data store.

Amazon S3 based installation
  1. Save the following template as orbit-values.yaml:

#####
##### Installation values for Amazon Web Services (EKS) based environment
##### 

# number of active replicas of the main component of the platform
#replicaCount: 1

license: YOUR_ORBIT_LICENSE

ingress:
  host: YOUR_HOST_NAME
  # disables ingress api as entry point
  enabled: false
  # and enables gateway api instead
  gateway: true
  tls:
    # select one of the below options
    # 1) in case cert manager is not available or cannot be used, provide certificate and private key
    #cert:
    #key:
    # 2) if cert manager is installed and lets encrypt can be used
    certManager:
      enabled: true
      issuer: "letsencrypt-prod"
      acmeEmail: YOUR_ADMINISTRATOR_EMAIL_ADDRESS

oauth:
  certs: YOUR_OIDC_PROVIDER_CERTS_URL
  issuer: YOUR_OIDC_PROVIDER_ISSUER_URL
  emailDomain: YOUR_ORG_EMAIL_DOMAIN
  clientId: YOUR_CLIENT_ID
  # Mode in which auth operates - expected values: idToken or accessToken
  # mode: idToken
  # generate the cookie secret with the following command
  # dd if=/dev/urandom bs=32 count=1 2>/dev/null | base64 | tr -d -- '\n' | tr -- '+/' '-_'; echo
  cookieSecret: YOUR_COOKIE_SECRET

secrets:
  data:
    awsAccessKey: YOUR_AWS_ACCESS_KEY
    awsAccessSecret: YOUR_AWS_ACCESS_SECRET

# administrators of the platform      
admins: |
#  name@email.com


storage:
  remote:
    # specify custom name for configuration bucket, can be company name with timefold-configuration suffix or domain name
    name: YOUR_CUSTOM_NAME_FOR_CONFIG_BUCKET
    # specify retention of data in the platform expressed in days, data will be automatically removed
    #expiration: 7  
    type: s3
    #s3:
    #  region: us-east-1
    # Add a prefix to all created buckets (must be smaller than 28 characters)
    #bucketPrefix:

#models:
  # maximum amount of time solver will spend before termination
#  terminationSpentLimit: PT30M
  # maximum allowed limit for user-supplied termination spentLimit. Safety net to prevent long-running jobs
#  terminationMaximumSpentLimit: PT60M
  # if the score has not improved during this period, terminate the solver
#  terminationUnimprovedSpentLimit: PT5M
  # maximum allowed limit for user-supplied termination unimprovedSpentLimit. Safety net to prevent long-running jobs
#  terminationMaximumUnimprovedSpentLimit: PT5M
# maximum amount of steps before the solver will terminate.
#  terminationStepCountLimit: 1000
   # duration to keep the runtime after solving has finished
#  runtimeTimeToLive: PT1M
   # duration to keep the runtime without solving since last request
#  idleRuntimeTimeToLive: PT30M
   # IMPORTANT: when setting resources, all must be set
#  resources:
#    limits:
      # max CPU allowed for the platform to be used
#      cpu: 1
      # max memory allowed for the platform to be used
#      memory: 512Mi
#    requests:
      # guaranteed CPU for the platform to be used
#     cpu: 1
      # guaranteed memory for the platform to be used
#      memory: 512Mi

#maps:
#  enabled: true
  # number of active replicas of the maps service - when set to more than 1, maps.scalable property must be set to true
#  replicaCount: 1
#  cache:
#    TTL: P7D
#    cleaningInterval: PT1H
#    persistentVolume:
#      storageClassName: "gp2"
#      accessMode: "ReadWriteOnce"
#      size: "8Gi"
#  osrm:
#    options: "--max-table-size 10000"
#    maxDistanceFromRoad: 10000
#    locations:
#      - region: us-southern-california
#        maxLocationsInRequest: 10000
#        transportType: car
#      - region: us-georgia
#        maxLocationsInRequest: 10000
#    externalLocationsUrl:
#      customModel: "http://localhost:5000"
#    retry:
#      maxDurationMinutes: 60
#    autoscaling:
#      enabled: true
#      minReplicas: 1
#      maxReplicas: 2
#      cpuAverageUtilization: 110
#  externalProvider: "http://localhost:5000"


#insights:
  # number of active replicas of the insights service
#  replicaCount: 1

#solver:
#  maxThreadCount: 8 # Set the maximum thread count for each job in the platform

# Set notifications timeout
#notifications:
#  webhook:
#    readTimeout: PT5S # Default is 5 seconds
#    connectTimeout: PT5S # Default is 5 seconds

The above file serves as a template for the S3-based deployment of the Timefold Platform.

Replace all parameters that start with YOUR_ with the corresponding values:

  • YOUR_HOST_NAME: The DNS name dedicated to Timefold Platform.

  • YOUR_OIDC_PROVIDER_CERTS_URL: The certificates URL (jwks_uri) from the OIDC provider.

  • YOUR_OIDC_PROVIDER_ISSUER_URL: The issuer URL from the OIDC provider. It is also used as the prefix of the OIDC discovery endpoint.

  • YOUR_ORG_EMAIL_DOMAIN: The email domain that is allowed to access the platform. Remove this parameter or set it to * to allow any email domains.

  • YOUR_CLIENT_ID: The application client ID from the OIDC provider

  • YOUR_COOKIE_SECRET: The generated secret to encrypt cookies (see command next to the property in the template).

  • YOUR_CUSTOM_NAME_FOR_CONFIG_BUCKET: Specify a unique name for the bucket that configuration should be stored in. S3 bucket names must be globally unique so ideally use your company name or domain name as the bucket name.
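The cookie secret can be generated with the command already shown as a comment in the template; it produces 32 random bytes encoded as URL-safe base64 (44 characters):

```shell
# Generate a 32-byte random secret, URL-safe base64 encoded.
dd if=/dev/urandom bs=32 count=1 2>/dev/null | base64 | tr -d -- '\n' | tr -- '+/' '-_'; echo
```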

OIDC can be used in two modes: idToken or accessToken. By default, idToken is used; it is recommended when there is an authorization proxy system in front of the APIs, as it brings performance improvements because the ID token is always a JWT with all information included. If the Timefold Platform is used directly, it is recommended to use accessToken mode to let the API retrieve the user info behind the token via the OIDC UserInfo endpoint as part of the verification process.

When using cert manager:

  • YOUR_ADMINISTRATOR_EMAIL_ADDRESS: The email address or functional mailbox to be notified about certificate issues managed by cert manager.

When not using cert manager:

  • Set ingress.tls.certManager.enabled to false.

  • Remove the other cert manager settings (under ingress.tls.certManager).

  • Uncomment and set values for ingress.tls.cert and ingress.tls.key.

Optionally, platform administrators can be designated by listing their email addresses under the admins property.
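For example, with two administrators (hypothetical addresses):

```yaml
# administrators of the platform
admins: |
  alice@example.com
  bob@example.com
```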

Platform components can be deployed in high availability mode (multiple replicas of the service). The number of replicas is controlled by the following properties:

# number of active replicas of the main component of the platform
replicaCount: 1
# number of active replicas of the maps service - when set to more than 1, maps.scalable property must be set to true
maps:
  replicaCount: 1

This file can be safely stored as it does not contain any sensitive information. Such information is provided as part of the installation command from environment variables.

Log in to the Timefold Helm Chart repository to gain access to the Timefold Platform chart. Use user as the username and the token provided by Timefold support as the password.

helm registry login ghcr.io/timefoldai

The installation command requires the following environment variables to be set:

  • NAMESPACE: Kubernetes namespace where the platform should be installed.

  • OAUTH_CLIENT_SECRET: Client secret of the application used for this environment.

  • TOKEN: GitHub access token to the container registry provided by Timefold support.

  • LICENSE: Timefold Platform license string provided by Timefold support.

  • AWS_ACCESS_KEY: AWS access key.

  • AWS_ACCESS_SECRET: AWS access secret.

Once successfully logged in with environment variables set, issue the installation command via helm:

helm upgrade --install --atomic --namespace $NAMESPACE timefold-orbit oci://ghcr.io/timefoldai/timefold-orbit-platform  \
        -f orbit-values.yaml \
        --create-namespace \
        --set namespace=$NAMESPACE \
        --set oauth.clientSecret="$OAUTH_CLIENT_SECRET" \
        --set image.registry.auth="$TOKEN" \
        --set license="$LICENSE" \
        --set secrets.data.awsAccessKey="$AWS_ACCESS_KEY" \
        --set secrets.data.awsAccessSecret="$AWS_ACCESS_SECRET"

The Timefold Platform also supports authentication to AWS services (S3) via AWS IAM Roles for Service Accounts (IRSA). This means you can avoid using an AWS access key and secret.

Configuration should follow the official documentation on how to configure IRSA in EKS. The ARN of the role to use can be set in the values file:

storage:
  remote:
    name: my-company-timefold-configuration
    type: s3
    s3:
      region: eu-central-1
      role: "arn:aws:iam::ACCOUNTID:role/ROLE_NAME"

When using IRSA, the helm install command does not need the access key and access secret:

helm upgrade --install --atomic --namespace $NAMESPACE timefold-orbit oci://ghcr.io/timefoldai/timefold-orbit-platform  \
        -f orbit-values.yaml \
        --create-namespace \
        --set namespace=$NAMESPACE \
        --set oauth.clientSecret="$OAUTH_CLIENT_SECRET" \
        --set image.registry.auth="$TOKEN" \
        --set license="$LICENSE"
The above command will install (or upgrade) the latest released version of the Timefold Platform. To install a specific version, append --version A.B.C to the command where A.B.C is the version number to be used.

Once the installation is completed, you can access the platform on the dedicated DNS name. Information about how to access it is displayed at the end of the installation.

Google Cloud Storage based installation
  1. Save following template as orbit-values.yaml:

#####
##### Installation values for Google Cloud Platform (GKE) based environment
##### 

# number of active replicas of the main component of the platform
#replicaCount: 1

license: YOUR_ORBIT_LICENSE

ingress:
  host: YOUR_HOST_NAME
  # disables ingress api as entry point
  enabled: false
  # and enables gateway api instead
  gateway: true
  tls:
    # select one of the below options
    # 1) in case cert manager is not available or cannot be used, provide certificate and private key
    #cert:
    #key:
    # 2) if cert manager is installed and lets encrypt can be used
    certManager:
      enabled: true
      issuer: "letsencrypt-prod"
      acmeEmail: YOUR_ADMINISTRATOR_EMAIL_ADDRESS

oauth:
  certs: YOUR_OIDC_PROVIDER_CERTS_URL
  issuer: YOUR_OIDC_PROVIDER_ISSUER_URL
  emailDomain: YOUR_ORG_EMAIL_DOMAIN
  clientId: YOUR_CLIENT_ID
  # Mode in which auth operates - expected values: idToken or accessToken
  # mode: idToken
  # generate the cookie secret with the following command
  # dd if=/dev/urandom bs=32 count=1 2>/dev/null | base64 | tr -d -- '\n' | tr -- '+/' '-_'; echo
  cookieSecret: YOUR_COOKIE_SECRET

secrets:
  stringData:
    serviceAccountKey: YOUR_GOOGLE_STORAGE_SERVICE_ACCOUNT_KEY
    projectId: YOUR_GOOGLE_CLOUD_PROJECT_ID

# administrators of the platform
admins: |
#  name@email.com

storage:
  remote:
    # specify custom name for configuration bucket, can be google project id with timefold-configuration suffix or domain name
    name: YOUR_CUSTOM_NAME_FOR_CONFIG_BUCKET
    # specify retention of data in the platform expressed in days, data will be automatically removed
    #expiration: 7
    type: googlecloud
    # Add a prefix to all created buckets (must be smaller than 28 characters)
    #bucketPrefix:

#models:
  # maximum amount of time solver will spend before termination
#  terminationSpentLimit: PT30M
  # maximum allowed limit for user-supplied termination spentLimit. Safety net to prevent long-running jobs
#  terminationMaximumSpentLimit: PT60M
  # if the score has not improved during this period, terminate the solver
#  terminationUnimprovedSpentLimit: PT5M
  # maximum allowed limit for user-supplied termination unimprovedSpentLimit. Safety net to prevent long-running jobs
#  terminationMaximumUnimprovedSpentLimit: PT5M
  # maximum number of steps before the solver will terminate
#  terminationStepCountLimit: 1000
  # duration to keep the runtime after solving has finished
#  runtimeTimeToLive: PT1M
  # duration to keep the runtime without solving since last request
#  idleRuntimeTimeToLive: PT30M
  # IMPORTANT: when setting resources, all must be set
#  resources:
#    limits:
      # max CPU allowed for the platform to be used
#      cpu: 1
      # max memory allowed for the platform to be used
#      memory: 512Mi
#    requests:
      # guaranteed CPU for the platform to be used
#      cpu: 1
      # guaranteed memory for the platform to be used
#      memory: 512Mi

#maps:
#  enabled: true
  # number of active replicas of the maps service - when set to more than 1, maps.scalable property must be set to true
#  replicaCount: 1
#  cache:
#    TTL: P7D
#    cleaningInterval: PT1H
#    persistentVolume:
#      storageClassName: "standard"
#      accessMode: "ReadWriteOnce"
#      size: "8Gi"
#  osrm:
#    options: "--max-table-size 10000"
#    maxDistanceFromRoad: 10000
#    locations:
#      - region: us-southern-california
#        maxLocationsInRequest: 10000
#        transportType: car
#      - region: us-georgia
#        maxLocationsInRequest: 10000
#    externalLocationsUrl:
#      customModel: "http://localhost:5000"
#    retry:
#      maxDurationMinutes: 60
#    autoscaling:
#      enabled: true
#      minReplicas: 1
#      maxReplicas: 2
#      cpuAverageUtilization: 110
#  externalProvider: "http://localhost:5000"

#insights:
  # number of active replicas of the insights service
#  replicaCount: 1

#solver:
#  maxThreadCount: 8 # Set the maximum thread count for each job in the platform

# Set notifications timeout
#notifications:
#  webhook:
#    readTimeout: PT5S # Default is 5 seconds
#    connectTimeout: PT5S # Default is 5 seconds

The above file serves as a template for a Google Cloud Storage based deployment of the Timefold Platform.

Replace all parameters that start with YOUR_ with the corresponding values:

  • YOUR_HOST_NAME: The DNS name dedicated to the Timefold Platform.

  • YOUR_OIDC_PROVIDER_CERTS_URL: OIDC certificate’s URL from the OIDC provider.

  • YOUR_OIDC_PROVIDER_ISSUER_URL: OIDC issuer URL from the OIDC provider. It is also used as a prefix of the OIDC discovery endpoint.

  • YOUR_ORG_EMAIL_DOMAIN: The email domain that is allowed to access the platform. Remove this parameter or set it to * to allow any email domain.

  • YOUR_CLIENT_ID: The application client ID from the OIDC provider.

  • YOUR_COOKIE_SECRET: The generated secret to encrypt cookies (see command next to the property in the template).

  • YOUR_CUSTOM_NAME_FOR_CONFIG_BUCKET: Specify a unique name for the bucket that configuration should be stored in. Google Cloud Storage bucket names must be globally unique so ideally use your company name or domain name as the bucket name.

OIDC can be used in two modes: idToken or accessToken. By default, idToken is used; it is recommended when an authorization proxy system sits in front of the APIs, as the ID token is always a JWT with all information included, which improves performance. If the Timefold Platform is used directly, accessToken mode is recommended: the API then retrieves the user info behind the token via the OIDC UserInfo endpoint as part of the verification process.

When using cert manager:

  • YOUR_ADMINISTRATOR_EMAIL_ADDRESS: The email address or functional mailbox to be notified about certificate issues managed by cert manager.

When not using cert manager:

  • Set ingress.tls.certManager.enabled to false.

  • Remove the other cert manager settings (under ingress.tls.certManager).

  • Uncomment and set values for ingress.tls.cert and ingress.tls.key.

Optionally, platform administrators can be designated by listing their email addresses under the admins property.

Platform components can be deployed in high availability mode (multiple replicas of the service). The number of replicas is controlled by the following properties:

# number of active replicas of the main component of the platform
replicaCount: 1
# number of active replicas of the maps service - when set to more than 1, maps.scalable property must be set to true
maps:
  replicaCount: 1

This file can be safely stored as it does not contain any sensitive information. Such information is provided as part of the installation command from environment variables.

Log in to the Timefold Helm Chart repository to gain access to the Timefold Platform chart. Use user as the username and the token provided by Timefold support as the password.

helm registry login ghcr.io/timefoldai

The installation command requires the following environment variables to be set:

  • NAMESPACE: The Kubernetes namespace where the platform should be installed.

  • OAUTH_CLIENT_SECRET: The client secret of the application used for this environment.

  • TOKEN: The GitHub access token to the container registry provided by Timefold support.

  • LICENSE: The Timefold Platform license string provided by Timefold support.

  • GCS_SERVICE_ACCOUNT: The Service Account key for Google Cloud Storage.

  • GCP_PROJECT_ID: The Google Cloud Project ID where storage is created.

Once successfully logged in with environment variables set, issue the installation command via helm:

helm upgrade --install --atomic --namespace $NAMESPACE timefold-orbit oci://ghcr.io/timefoldai/timefold-orbit-platform  \
        -f orbit-values.yaml \
        --create-namespace \
        --set namespace=$NAMESPACE \
        --set oauth.clientSecret="$OAUTH_CLIENT_SECRET" \
        --set image.registry.auth="$TOKEN" \
        --set license="$LICENSE" \
        --set secrets.stringData.serviceAccountKey="$GCS_SERVICE_ACCOUNT" \
        --set secrets.stringData.projectId="$GCP_PROJECT_ID"

The Timefold Platform also supports authentication to GCP services (Google Cloud Storage) via Google Cloud Workload Identity Federation. This means you can avoid using long-lived service account keys.

Configuration should follow the official documentation on how to configure Workload Identity in GKE. The service account to use can be set in the values file:

storage:
  remote:
    name: my-company-timefold-configuration
    type: gcs
    gcs:
      region: europe-west1
      serviceAccount: [GSA_NAME]@[PROJECT_ID].iam.gserviceaccount.com

When using Workload Identity, the helm install command does not need the service account key:

helm upgrade --install --atomic --namespace $NAMESPACE timefold-orbit oci://ghcr.io/timefoldai/timefold-orbit-platform  \
        -f orbit-values.yaml \
        --create-namespace \
        --set namespace=$NAMESPACE \
        --set oauth.clientSecret="$OAUTH_CLIENT_SECRET" \
        --set image.registry.auth="$TOKEN" \
        --set license="$LICENSE" \
        --set secrets.stringData.projectId="$GCP_PROJECT_ID"
The above command will install (or upgrade) the latest released version of the Timefold Platform. To install a specific version, append --version A.B.C to the command where A.B.C is the version number to be used.

Once installation is completed, you can access the platform on the dedicated DNS name.

Azure BlobStore based installation
  1. Save the following template as orbit-values.yaml:

#####
##### Installation values for Azure (AKS) based environment
##### 

# number of active replicas of the main component of the platform
#replicaCount: 1

license: YOUR_ORBIT_LICENSE

ingress:
  host: YOUR_HOST_NAME
  # disables ingress api as entry point
  enabled: false
  # and enables gateway api instead
  gateway: true
  tls:
    # select one of the below options
    # 1) in case cert manager is not available or cannot be used, provide certificate and private key
    #cert:
    #key:
    # 2) if cert manager is installed and lets encrypt can be used
    certManager:
      enabled: true
      issuer: "letsencrypt-prod"
      acmeEmail: YOUR_ADMINISTRATOR_EMAIL_ADDRESS

oauth:
  certs: YOUR_OIDC_PROVIDER_CERTS_URL
  issuer: YOUR_OIDC_PROVIDER_ISSUER_URL
  emailDomain: YOUR_ORG_EMAIL_DOMAIN
  clientId: YOUR_CLIENT_ID
  # Mode in which auth operates - expected values: idToken or accessToken
  # mode: idToken
  # generate the cookie secret with the following command
  # dd if=/dev/urandom bs=32 count=1 2>/dev/null | base64 | tr -d -- '\n' | tr -- '+/' '-_'; echo
  cookieSecret: YOUR_COOKIE_SECRET

secrets:
  data:
    azureStoreConnectionString: YOUR_AZURE_BLOB_STORE_CONNECTION_STRING

# administrators of the platform      
admins: |
#  name@email.com


storage:
  remote:
    # specify custom name for configuration bucket, can be company name with timefold-configuration suffix or domain name
    name: YOUR_CUSTOM_NAME_FOR_CONFIG_BUCKET
    type: azure
    # Add a prefix to all created buckets (must be smaller than 28 characters)
    #bucketPrefix:

#models:
  # maximum amount of time solver will spend before termination
#  terminationSpentLimit: PT30M
  # maximum allowed limit for user-supplied termination spentLimit. Safety net to prevent long-running jobs
#  terminationMaximumSpentLimit: PT60M
  # if the score has not improved during this period, terminate the solver
#  terminationUnimprovedSpentLimit: PT5M
  # maximum allowed limit for user-supplied termination unimprovedSpentLimit. Safety net to prevent long-running jobs
#  terminationMaximumUnimprovedSpentLimit: PT5M
  # maximum number of steps before the solver will terminate
#  terminationStepCountLimit: 1000
  # duration to keep the runtime after solving has finished
#  runtimeTimeToLive: PT1M
  # duration to keep the runtime without solving since last request
#  idleRuntimeTimeToLive: PT30M
  # IMPORTANT: when setting resources, all must be set
#  resources:
#    limits:
      # max CPU allowed for the platform to be used
#      cpu: 1
      # max memory allowed for the platform to be used
#      memory: 512Mi
#    requests:
      # guaranteed CPU for the platform to be used
#      cpu: 1
      # guaranteed memory for the platform to be used
#      memory: 512Mi

#maps:
#  enabled: true
  # number of active replicas of the maps service - when set to more than 1, maps.scalable property must be set to true
#  replicaCount: 1
#  cache:
#    TTL: P7D
#    cleaningInterval: PT1H
#    persistentVolume:
#      storageClassName: "Standard_LRS"
#      accessMode: "ReadWriteOnce"
#      size: "8Gi"
#  osrm:
#    options: "--max-table-size 10000"
#    maxDistanceFromRoad: 10000
#    locations:
#      - region: us-southern-california
#        maxLocationsInRequest: 10000
#        transportType: car
#      - region: us-georgia
#        maxLocationsInRequest: 10000
#    externalLocationsUrl:
#      customModel: "http://localhost:5000"
#    retry:
#      maxDurationMinutes: 60
#    autoscaling:
#      enabled: true
#      minReplicas: 1
#      maxReplicas: 2
#      cpuAverageUtilization: 110
#  externalProvider: "http://localhost:5000"

#insights:
  # number of active replicas of the insights service
#  replicaCount: 1

#solver:
#  maxThreadCount: 8 # Set the maximum thread count for each job in the platform

# Set notifications timeout
#notifications:
#  webhook:
#    readTimeout: PT5S # Default is 5 seconds
#    connectTimeout: PT5S # Default is 5 seconds

The above file serves as a template for an Azure BlobStore based deployment of the Timefold Platform.

Replace all parameters that start with YOUR_ with the corresponding values:

  • YOUR_HOST_NAME: The DNS name dedicated to the Timefold Platform.

  • YOUR_OIDC_PROVIDER_CERTS_URL: OIDC certificate’s URL from the OIDC provider.

  • YOUR_OIDC_PROVIDER_ISSUER_URL: OIDC issuer URL from the OIDC provider. It is also used as a prefix of the OIDC discovery endpoint.

  • YOUR_ORG_EMAIL_DOMAIN: The email domain that is allowed to access the platform. Remove this parameter or set it to * to allow any email domain.

  • YOUR_CLIENT_ID: The application client ID from the OIDC provider.

  • YOUR_COOKIE_SECRET: The generated secret to encrypt cookies (see the command next to the property in the template).

  • YOUR_CUSTOM_NAME_FOR_CONFIG_BUCKET: Specify a unique name for the container within the storage account that the configuration should be stored in. Azure container names do not have to be globally unique, but it is recommended to use your company name or domain name as the container name.

OIDC can be used in two modes: idToken or accessToken. By default, idToken is used; it is recommended when an authorization proxy system sits in front of the APIs, as the ID token is always a JWT with all information included, which improves performance. If the Timefold Platform is used directly, accessToken mode is recommended: the API then retrieves the user info behind the token via the OIDC UserInfo endpoint as part of the verification process.

When using cert manager:

  • YOUR_ADMINISTRATOR_EMAIL_ADDRESS: The email address or functional mailbox to be notified about certificate issues managed by cert manager.

When not using cert manager:

  • Set ingress.tls.certManager.enabled to false.

  • Remove the other cert manager settings (under ingress.tls.certManager).

  • Uncomment and set values for ingress.tls.cert and ingress.tls.key.

Optionally, platform administrators can be designated by listing their email addresses under the admins property.

Platform components can be deployed in high availability mode (multiple replicas of the service). The number of replicas is controlled by the following properties:

# number of active replicas of the main component of the platform
replicaCount: 1
# number of active replicas of the maps service - when set to more than 1, maps.scalable property must be set to true
maps:
  replicaCount: 1

This file can be safely stored as it does not contain any sensitive information. Such information is provided as part of the installation command from environment variables.

Log in to the Timefold Helm Chart repository to gain access to the Timefold Platform chart. Use user as the username and the token provided by Timefold support as the password.

helm registry login ghcr.io/timefoldai

The installation command requires the following environment variables to be set:

  • NAMESPACE: The Kubernetes namespace where the platform should be installed.

  • OAUTH_CLIENT_SECRET: The client secret of the application used for this environment.

  • TOKEN: The GitHub access token to the container registry provided by Timefold support.

  • LICENSE: The Timefold Platform license string provided by Timefold support.

  • AZ_CONNECTION_STRING: The Azure BlobStore connection string.

Once successfully logged in with the environment variables set, issue the installation command via helm:

helm upgrade --install --atomic --namespace $NAMESPACE timefold-orbit oci://ghcr.io/timefoldai/timefold-orbit-platform  \
        -f orbit-values.yaml \
        --create-namespace \
        --set namespace=$NAMESPACE \
        --set oauth.clientSecret="$OAUTH_CLIENT_SECRET" \
        --set image.registry.auth="$TOKEN" \
        --set license="$LICENSE" \
        --set secrets.data.azureStoreConnectionString="$AZ_CONNECTION_STRING"

Timefold Platform also supports authentication to Azure services (Azure BlobStore) via Azure Workload Identity. This means you can avoid using long-lived connection strings.

Configuration should follow the official documentation on how to configure Workload Identity in AKS. The client ID to use can be set in the values file:

storage:
  remote:
    name: my-company-timefold-configuration
    type: azure
    azure:
      clientId: YOUR_WORKLOAD_IDENTITY_CLIENT_ID # client ID of the Azure Workload Identity

When using Workload Identity, the helm install command does not need the Azure connection string:

helm upgrade --install --atomic --namespace $NAMESPACE timefold-orbit oci://ghcr.io/timefoldai/timefold-orbit-platform  \
        -f orbit-values.yaml \
        --create-namespace \
        --set namespace=$NAMESPACE \
        --set oauth.clientSecret="$OAUTH_CLIENT_SECRET" \
        --set image.registry.auth="$TOKEN" \
        --set license="$LICENSE"
The above command will install (or upgrade) the latest released version of the Timefold Platform. To install a specific version, append --version A.B.C to the command, where A.B.C is the version number to be used.

Once installation is completed, you can access the platform on the dedicated DNS name.

Red Hat OpenShift based installation
First, follow the selected storage installation (Amazon S3, Google Cloud Storage, or Azure BlobStore).
  1. Edit orbit-values.yaml

1.1. Disable the ingress/gateway component, but configure the host according to your OpenShift networking setup. Ingress is disabled because OpenShift comes with its own networking component that acts as an ingress:

ingress:
  enabled: false
  gateway: false
  host: YOUR_OPENSHIFT_HOST

1.2. Optional: Simplify OAuth configuration when full OAuth cannot be configured:

oauth:
  enabled: false
  # certs:
  # issuer:

1.3. Configure the pod security context to specify the user and group IDs that the pods will run as:

podSecurityContext:
  runAsUser: 1007510000
  fsGroup: 1007510000

osrmPodSecurityContext:
  runAsUser: 1007510000
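The user and group IDs must fall within the UID range OpenShift assigned to the target namespace. Assuming the standard OpenShift namespace annotation, the assigned range can be inspected with:

```shell
# Prints the UID range allocated to the namespace, e.g. "1007510000/10000"
# (start UID / range size); pick runAsUser and fsGroup from this range.
kubectl get namespace "$NAMESPACE" \
  -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}'
```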

The completed values file will look like this:

#####
##### Installation values for Red Hat OpenShift based environment (Amazon S3 storage)
#####


ingress:
  enabled: false
  gateway: false
  host: YOUR_OPENSHIFT_HOST

oauth:
  enabled: false
  # certs:
  # issuer:

podSecurityContext:
  runAsUser: 1007510000
  fsGroup: 1007510000

osrmPodSecurityContext:
  runAsUser: 1007510000


secrets:
  data:
    awsAccessKey: YOUR_AWS_ACCESS_KEY
    awsAccessSecret: YOUR_AWS_ACCESS_SECRET

# administrators of the platform      
admins: |
#  name@email.com


storage:
  remote:
    # specify custom name for configuration bucket, can be company name with timefold-configuration suffix or domain name
    name: YOUR_CUSTOM_NAME_FOR_CONFIG_BUCKET
    # specify retention of data in the platform expressed in days, data will be automatically removed
    #expiration: 7  
    type: s3
    #s3:
    #  region: us-east-1
    # Add a prefix to all created buckets (must be smaller than 28 characters)
    #bucketPrefix:


#models:
  # maximum amount of time solver will spend before termination
#  terminationSpentLimit: PT30M
  # maximum allowed limit for user-supplied termination spentLimit. Safety net to prevent long-running jobs
#  terminationMaximumSpentLimit: PT60M
  # if the score has not improved during this period, terminate the solver
#  terminationUnimprovedSpentLimit: PT5M
  # maximum allowed limit for user-supplied termination unimprovedSpentLimit. Safety net to prevent long-running jobs
#  terminationMaximumUnimprovedSpentLimit: PT5M
  # maximum number of steps before the solver will terminate
#  terminationStepCountLimit: 1000
  # duration to keep the runtime after solving has finished
#  runtimeTimeToLive: PT1M
  # duration to keep the runtime without solving since last request
#  idleRuntimeTimeToLive: PT30M
  # IMPORTANT: when setting resources, all must be set
#  resources:
#    limits:
      # max CPU allowed for the platform to be used
#      cpu: 1
      # max memory allowed for the platform to be used
#      memory: 512Mi
#    requests:
      # guaranteed CPU for the platform to be used
#      cpu: 1
      # guaranteed memory for the platform to be used
#      memory: 512Mi

#maps:
#  enabled: true
  # number of active replicas of the maps service - when set to more than 1, maps.scalable property must be set to true
#  replicaCount: 1
#  cache:
#    TTL: P7D
#    cleaningInterval: PT1H
#    persistentVolume:
#      storageClassName: "gp3"
#      accessMode: "ReadWriteOnce"
#      size: "8Gi"
#  osrm:
#    options: "--max-table-size 10000"
#    maxDistanceFromRoad: 10000
#    locations:
#      - region: us-southern-california
#        maxLocationsInRequest: 10000
#      - region: us-georgia
#        maxLocationsInRequest: 10000
#        transportType: car
#    externalLocationsUrl:
#      customModel: "http://localhost:5000"
#    retry:
#      maxDurationMinutes: 60
#    autoscaling:
#      enabled: true
#      minReplicas: 1
#      maxReplicas: 2
#      cpuAverageUtilization: 110
#  externalProvider: "http://localhost:5000"

#insights:
  # number of active replicas of the insights service
#  replicaCount: 1

#solver:
#  maxThreadCount: 8 # Set the maximum thread count for each job in the platform

# Set notifications timeout
#notifications:
#  webhook:
#    readTimeout: PT5S # Default is 5 seconds
#    connectTimeout: PT5S # Default is 5 seconds

After the additional changes to orbit-values.yaml file have been completed, follow the installation steps in the selected storage section.

The OpenShift Route created automatically at installation might not be configured according to your defined policies. It is recommended to review the route and recreate it manually when needed.
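As a starting point for recreating the route manually, a minimal Route sketch could look like the following (the service name and target port are assumptions; verify them against the services the chart actually creates, and align the TLS settings with your policies):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: timefold-platform
  namespace: YOUR_NAMESPACE
spec:
  host: YOUR_OPENSHIFT_HOST
  to:
    kind: Service
    name: timefold-orbit-platform  # assumption: check the actual service name
  port:
    targetPort: http  # assumption: check the actual service port name
  tls:
    termination: edge
```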

Multiple installations of the Timefold Platform

It is recommended to have multiple installations of the Timefold Platform to allow smooth and controlled rollouts of new versions. That usually means there are at least two installations side by side:

  • A staging environment, used mainly for verification purposes

  • A production environment, used by actual users

There can be more environments depending on the needs of your organization.

The Timefold Platform is namespace scoped, meaning it can be installed multiple times in a single Kubernetes cluster, though it can also be distributed across multiple clusters. Each approach provides a slightly different isolation level and comes at a different cost. In most cases a single cluster is the preferred approach, so this guide mainly focuses on it.

  1. Configure infrastructure

    A new host name (DNS name) is required for each Timefold Platform installation. This then needs to be configured at the Gateway API level. The Gateway resource provisioned for the first environment needs to be extended with additional listeners:

    kubectl apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      annotations:
        cert-manager.io/issuer: letsencrypt-prod
      labels:
        app.kubernetes.io/instance: timefold-platform
        app.kubernetes.io/name: kube-gatewayplatform
      name: timefold-platform
      namespace: timefold-gateway
    spec:
      gatewayClassName: nginx
      listeners:
      - allowedRoutes:
          namespaces:
            from: Selector
            selector:
              matchLabels:
                gateway-access: "true"
        hostname: YOUR_HOST_NAME
        name: platform-http
        port: 80
        protocol: HTTP
      - allowedRoutes:
          namespaces:
            from: Selector
            selector:
              matchLabels:
                gateway-access: "true"
        hostname: YOUR_HOST_NAME
        name: platform-https
        port: 443
        protocol: HTTPS
        tls:
          certificateRefs:
          - group: ""
            kind: Secret
            name: timefold-orbit-platform-tls
          mode: Terminate
      - allowedRoutes:
          namespaces:
            from: Selector
            selector:
              matchLabels:
                gateway-access: "true"
        hostname: STAGING_YOUR_HOST_NAME
        name: staging-platform-http
        port: 80
        protocol: HTTP
      - allowedRoutes:
          namespaces:
            from: Selector
            selector:
              matchLabels:
                gateway-access: "true"
        hostname: STAGING_YOUR_HOST_NAME
        name: staging-platform-https
        port: 443
        protocol: HTTPS
        tls:
          certificateRefs:
          - group: ""
            kind: Secret
            name: timefold-orbit-platform-tls
          mode: Terminate
    EOF
    STAGING_YOUR_HOST_NAME and YOUR_HOST_NAME must be replaced with the respective DNS names to be used.

    When using cert manager and multiple environments on the same gateway, it is recommended to switch to a DNS based challenge instead of HTTP, as it is more extensible and does not require additional HTTP listeners. When cert manager is not used, certificates must be provisioned manually; a wildcard certificate is then recommended.
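    The DNS based challenge is configured on the cert manager issuer rather than in the Timefold values file. As an illustration, a ClusterIssuer sketch using a DNS-01 solver with AWS Route 53 (the provider choice is an assumption; use the solver matching your DNS provider, as described in the cert-manager documentation):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: YOUR_ADMINISTRATOR_EMAIL_ADDRESS
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - dns01:
          route53:
            region: us-east-1
```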

  2. Create namespace

    kubectl create namespace NAMESPACE_NAME-staging

    Add a label to the namespace so the HTTP routes of the staging platform can be accepted by the gateway:

    kubectl label namespace NAMESPACE_NAME-staging gateway-access=true
    NAMESPACE_NAME needs to be replaced with the actual name of the namespace used.
  3. Timefold Platform configuration

    The values.yaml file used for the first environment needs to be copied and edited to change the following properties:

    • ingress.host - specify the staging environment host

    • ingress.listener - specify the listener (https) that was added to the gateway resource (staging-platform-https)

    • storage.remote.name - specify a unique name for the configuration bucket

    • storage.remote.bucketPrefix - specify a unique bucket prefix (e.g. staging) that will be prepended to the tenant bucket that stores datasets

      It is also recommended to use a different OAuth2 configuration to avoid accidental reuse of tokens:

    • oauth.clientId - specify a different client/application ID for the staging environment

    • oauth.clientSecret - specify a different client/application secret for the staging environment
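    Put together, the staging copy of the values file differs from the first environment roughly as follows (a sketch with hypothetical values):

```yaml
ingress:
  host: STAGING_YOUR_HOST_NAME
  # the https listener added to the shared Gateway resource
  listener: staging-platform-https

oauth:
  clientId: YOUR_STAGING_CLIENT_ID
  # the client secret is passed via --set oauth.clientSecret at install time

storage:
  remote:
    name: my-company-timefold-configuration-staging
    bucketPrefix: staging
```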

  4. Install Timefold Platform

    Perform the same helm installation steps as for the first installation, but use the copied values.yaml file and the newly created namespace.

Deploy maps integration

The Timefold Platform integrates with maps for more accurate travel time and distance calculation. The maps component is deployed as part of the Timefold Platform, but it must be explicitly turned on and map regions must be specified.

Check the maps service documentation to understand the capabilities of the maps component.

Enable maps in your orbit-values.yaml file with the following sections:

maps:
  enabled: true
  scalable: true
  cache:
    TTL: P7D
    cleaningInterval: PT1H
    persistentVolume: # Optional
      accessMode: "ReadWriteOnce"
      size: "8Gi"
      #storageClassName: "standard" #Specific to each provider storage
  osrm:
    options: "--max-table-size 10000"
#    maxDistanceFromRoad: 10000 # Maximum distance from road in meters before OSRM returns no route. Defaults to 1000.
    locations:
      - region: osrm-britain-and-ireland
#        maxLocationsInRequest: 10000 # Maximum number of accepted locations for OSRM requests. Defaults to 10000.
#        transportType: car # Optional, defaults to car.
#        resources:
#          requests:
#            memory: "1000M" # Optional, defaults to the idle memory of the specific image.
#            cpu: "1000m" # Optional, defaults to 1000m.
#          limits:
#            memory: "8000M" # Optional, defaults to empty.
#            cpu: "5000m" # Optional, defaults to empty.
#    externalLocationsUrl:
#      customLocation1: "http://localhost:8080" # External OSRM instance for location; locations should not contain any spaces
#      customLocation2: "http://localhost:8081"
#    retry:
#      maxDurationMinutes: 60 # Set timeout of retry requests to OSRM in minutes. Defaults to 60.
#    autoscaling:
#      enabled: true # Enables OSRM instance autoscaling. This works by scaling the map for each location independently. Defaults to false.
#      minReplicas: 1 # Minimum number of OSRM instance replicas. Defaults to 1.
#      maxReplicas: 2 # Maximum number of OSRM instance replicas. Defaults to 2.
#      cpuAverageUtilization: 110 # Average CPU utilization percentage that triggers scaling of an OSRM instance. It can be fine-tuned based on expected usage of OSRM maps. Defaults to 110.

Next, configure the map to be used for the model with the following configuration:

models:
  mapService:
    fieldServiceRouting: osrm-britain-and-ireland

The above configuration provisions the complete set of components needed to use maps as the source of distance and travel time for solving. Below is a detailed description of some of the configuration options used by the maps component.

Persistent Volume

If persistentVolume is not set, the maps cache uses local storage on each node; in that case the cache is cleared when the pod restarts. It’s possible to set storageClassName to a provider-specific storage class (e.g. "standard" for GCP, "gp2" for AWS, and "Standard_LRS" for Azure).
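As a minimal sketch, the persistentVolume section from the maps block above pinned to a named storage class on AWS ("gp2" here is the example class mentioned in the text; swap in your provider's class and your own size):

```yaml
maps:
  cache:
    persistentVolume:
      accessMode: "ReadWriteOnce"
      size: "8Gi"
      storageClassName: "gp2" # AWS example; "standard" on GCP, "Standard_LRS" on Azure
```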

Autoscaling

If OSRM autoscaling is enabled, you also need to have the Metrics Server in your cluster. On some deployments it’s installed by default. If it’s not already installed, you can install it with the following command (more information in the helm chart):

helm repo add bitnami https://charts.bitnami.com/bitnami &&
helm repo update &&
helm upgrade --install --atomic --timeout 120s metrics-server bitnami/metrics-server --namespace "metrics-server" --create-namespace

OSRM autoscaling works by scaling the map for each location independently. It will aim for an average utilization of each instance of 70% of the memory.

External OSRM instances

If you want to use your own OSRM instance, you can configure the platform to access it for a specific location with the externalLocationsUrl key. Please note that the location must not contain spaces, nor start with a dash or an underscore.

Deploy OSRM maps to a separate node pool

When using the platform to deploy OSRM maps, it’s possible to configure them to be deployed to a specific node pool.

To do that, the configuration below can be set:

maps:
  isolated: true

The maps will be deployed in the nodes with the label ai.timefold/maps-dedicated: true.
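The label is applied to the nodes themselves, outside the Helm values; for example, with a hypothetical node name NODE_NAME:

```shell
# Label a node so that isolated OSRM map deployments are scheduled onto it.
kubectl label nodes NODE_NAME ai.timefold/maps-dedicated=true
```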

Maps officially supported by Timefold

Map image containers are built by Timefold for given regions and provided to users via the Timefold container registry (ghcr.io/timefoldai). Please contact Timefold support if a given region is not yet available.

The table below specifies which maps are currently supported and available for the Timefold Platform:

Table 1. Currently available maps
Name                     Based on map
uk                       http://download.geofabrik.de/europe/united-kingdom-latest.osm.pbf
greater-london           http://download.geofabrik.de/europe/united-kingdom/england/greater-london-latest.osm.pbf
britain-and-ireland      http://download.geofabrik.de/europe/britain-and-ireland-latest.osm.pbf
us-georgia               http://download.geofabrik.de/north-america/us/georgia-latest.osm.pbf
us-southern-california   http://download.geofabrik.de/north-america/us/california/socal-latest.osm.pbf
belgium                  http://download.geofabrik.de/europe/belgium-latest.osm.pbf
australia                http://download.geofabrik.de/australia-oceania/australia-latest.osm.pbf
dach                     http://download.geofabrik.de/europe/dach-latest.osm.pbf
india                    http://download.geofabrik.de/asia/india-latest.osm.pbf
gcc-states               http://download.geofabrik.de/asia/gcc-states-latest.osm.pbf
ontario                  http://download.geofabrik.de/north-america/canada/ontario-latest.osm.pbf
poland                   https://download.geofabrik.de/europe/poland-latest.osm.pbf
us-northeast             https://download.geofabrik.de/north-america/us-northeast-latest.osm.pbf
netherlands              https://download.geofabrik.de/europe/netherlands-latest.osm.pbf
sweden                   https://download.geofabrik.de/europe/sweden-latest.osm.pbf
us-south                 https://download.geofabrik.de/north-america/us-south-latest.osm.pbf
us-north-carolina        https://download.geofabrik.de/north-america/us/north-carolina-latest.osm.pbf
italy-nord-ovest         https://download.geofabrik.de/europe/italy/nord-ovest-latest.osm.pbf
us-midwest               https://download.geofabrik.de/north-america/us-midwest-latest.osm.pbf
us-west                  https://download.geofabrik.de/north-america/us-west-latest.osm.pbf
us-hawaii                https://download.geofabrik.de/north-america/us/hawaii-latest.osm.pbf
us-puerto-rico           https://download.geofabrik.de/north-america/us/puerto-rico-latest.osm.pbf

Map instances hardware requirements

Some maps have demanding hardware requirements. Because each map has a different size, it can be hard to define the resources required for a map instance.

Memory Requirements

The memory of a map instance can be calculated as the sum of the memory of the map when idle (the memory it needs to start without doing any calculations) and the memory it uses to answer requests at a given point in time:

MaxMemMap = MemMapIdle + max(MemUsedForRequests)

For the map memory when idle, the values are as follows:

Region                   Idle memory (MB)
uk                       3460
greater-london           190
britain-and-ireland      5500
us-georgia               1245
us-southern-california   1290
belgium                  500
australia                1512
dach                     6125
india                    11130
gcc-states               1455
ontario                  700
poland                   1875
us-northeast             4435
netherlands              800
sweden                   1090
us-south                 13631
us-north-carolina        1415
italy-nord-ovest         692
us-midwest               9300
us-west                  6752
us-hawaii                85
us-puerto-rico           152

The memory used for requests varies according to the number of locations requested and the number of simultaneous requests.

The following table expresses the approximate memory used per request, depending on the number of locations:

Number of locations   Memory used (MB)
0 to 100              0
100 to 200            10
200 to 600            40
600 to 1100           200
1100 to 2000          500
2000 to 4000          2000
4000 to 6000          4300
6000 to 8000          7600
8000 to 10000         12000
10000 to 12000        17000
12000 to 14000        23000
14000 to 16000        30000
16000 to 18000        38000
18000 to 20000        47000

Currently, our maps do not support requests with more than 20,000 locations. The maximum configured number of locations is set to 10,000 by default, but can be increased (up to 20,000) using the parameter maxLocationsInRequest when configuring each map.

If simultaneous requests are made to the map instances and there is not enough memory to calculate them, they are queued internally until memory becomes available or until they time out (the timeout can be set via the parameter maps.osrm.retry.maxDurationMinutes).
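As an illustration, the sizing formula and the two tables above can be combined into a small estimator. The function below is not part of the platform; the bucket boundaries are treated as belonging to the lower bucket, which is an assumption since the table does not say how boundary values are classified.

```python
import bisect

# Approximate memory used per request (MB) by location count, taken from the
# table above. Each tuple is (upper bound of the bucket, memory used in MB).
REQUEST_MEMORY_MB = [
    (100, 0), (200, 10), (600, 40), (1100, 200), (2000, 500),
    (4000, 2000), (6000, 4300), (8000, 7600), (10000, 12000),
    (12000, 17000), (14000, 23000), (16000, 30000),
    (18000, 38000), (20000, 47000),
]

def per_request_memory_mb(locations: int) -> int:
    """Approximate memory (MB) one request with this many locations uses."""
    if locations > 20000:
        raise ValueError("requests with more than 20,000 locations are not supported")
    bounds = [upper for upper, _ in REQUEST_MEMORY_MB]
    return REQUEST_MEMORY_MB[bisect.bisect_left(bounds, locations)][1]

def max_map_memory_mb(idle_mb: int, max_locations: int) -> int:
    """MaxMemMap = MemMapIdle + max(MemUsedForRequests)."""
    return idle_mb + per_request_memory_mb(max_locations)

# britain-and-ireland idles at 5500 MB; a request with up to 1500 locations
# adds roughly 500 MB, so the instance needs about 6000 MB.
print(max_map_memory_mb(5500, 1500))  # → 6000
```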

CPU Requirements

Each map instance uses one core per request. It’s possible to enable autoscaling of map instances based on the percentage of CPU used, via the properties defined in maps.osrm.autoscaling (for details, see the Deploy maps integration section).

Maximum thread count per job

It is recommended to configure the maximum thread count for each job in the platform. Base this value on the maximum number of cores of the Kubernetes nodes that run solver jobs. If unset, the default maximum thread count is 1. It can be set with the following configuration:

solver:
  maxThreadCount: 8

Each model also has its own maximum thread count, which defines up to what concurrency level the model still improves performance.

In addition, a self-hosted installation is a single tenant on the default plan, which does not set any limits; a single run can therefore take up all the memory of a Kubernetes node.

To limit the memory a single run can use, set the following configuration:

solver:
  # max amount of memory (in MB) assigned as limit on pod
  maxMemory: 16384

Maximum running time for job

This setting is a global value meant to prevent long-running jobs that exceed regular execution time. It mainly protects platform resources from misbehaving runs (infinite loops, never-ending solver phases, etc.).

The value is expected to be an ISO 8601 duration; by default it is set to 24 hours (PT24H).

It can be set with the following configuration:

models:
  maximumRunningDuration: PT24H

Configuring an external map provider

It’s possible to create an external map provider and configure the platform to use it. For more details about the implementation of an external map provider, see the documentation for the maps service.

To configure the maps service, set the external URL of the provider in the configuration as described below. Optionally, headers can also be given as a semicolon-separated list of key=value pairs.

maps:
  externalProvider:
    url: "http://localhost:5000"
    headers: "header1=value1;header2=value2"
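As a plain illustration of that header format (not platform code), a semicolon-separated key=value string parses like this:

```python
def parse_headers(spec: str) -> dict[str, str]:
    """Split a semicolon-separated list of key=value pairs into a header dict."""
    headers = {}
    for pair in spec.split(";"):
        if pair:
            key, _, value = pair.partition("=")
            headers[key] = value
    return headers

print(parse_headers("header1=value1;header2=value2"))
# → {'header1': 'value1', 'header2': 'value2'}
```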

To be used in a run, the map provider must be configured in the subscription configuration of the tenant, with the provider set to external-provider. The mapsConfiguration should be set as follows:

{
...
  "mapsConfiguration": {
    "provider": "external-provider",
    "location": "location"
  },
...
}

The location is arbitrary and is sent in the request to the external provider without additional processing.

Uninstall

Uninstall Timefold Platform with the following command:

helm uninstall timefold-orbit -n $NAMESPACE

Upgrading

See the Upgrade instructions.
