Deploying with Terraform

Terraform is an open-source tool for defining and provisioning infrastructure as code. Activiti Enterprise can be deployed using the Terraform scripts provided, with three deployment options. The options include examples of deploying to Amazon Web Services (AWS) and of utilizing Rancher as a tool to assist in managing Kubernetes resources.

There are three options for deploying using Terraform:

  • Deployment with Rancher on Amazon EKS
  • Deployment on Amazon EKS
  • Deployment into an existing Kubernetes cluster

Deployment with Rancher on Amazon EKS

The following steps describe the deployment of Activiti Enterprise on Amazon Elastic Kubernetes Service (Amazon EKS) with Terraform and utilizing Rancher for managing the Kubernetes resources. An Amazon EKS cluster is created as part of the build and the resources will be available to monitor in Rancher.

Prerequisites

The Rancher on Amazon EKS deployment assumes that you have the following:

  • Quay.io credentials to access Activiti Enterprise images.
  • A valid license for Activiti Enterprise.
  • Administrator access to an Amazon Web Services (AWS) account.
  • Terraform version 0.11.
  • Rancher version 2.0.

Deployment steps

  1. Clone this repository and make the rancher-eks folder your working directory.

  2. If you have not already done so, initialize Terraform and verify the required plugins using the following command:

    terraform init
    
  3. Create a copy of the file terraform_template.tfvars and name it terraform.tfvars. This retains the original template and allows you to work on your own copy of it.
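
    For example:

    cp terraform_template.tfvars terraform.tfvars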

  4. Use the following command to create an EKS cluster on Rancher:

    terraform apply -target=rancher2_cluster.aae-cluster
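
    Optionally, preview what the targeted run will create before applying it:

    terraform plan -target=rancher2_cluster.aae-cluster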
    
  5. Edit the terraform.tfvars file and replace the template variables with those relevant to your deployment (a filled-in sketch follows the table):

    | Variable | Description | Default | Required |
    | -------- | ----------- | ------- | -------- |
    | aae_license | The location of a license file for Activiti Enterprise | | Yes |
    | acs_enabled | Set whether Alfresco Content Services is installed as part of the infrastructure. This is a boolean value | true | No |
    | aws_region | The region of AWS to create the EKS cluster in | | Yes |
    | aws_access_key_id | The AWS access key ID for the AWS account used | | Yes |
    | aws_secret_access_key | The AWS secret key for the AWS account used | | Yes |
    | cluster_name | The name for the cluster. If left blank it will be a concatenation of project_name and project_environment | | No |
    | gateway_host | The gateway host name | | No |
    | identity_host | The name of the identity server host | | No |
    | kubernetes_api_server | The API server URL for Kubernetes. This value is set in Step 7 | https://kubernetes | No |
    | kubernetes_token | The token for the kubernetes_api_server. This value is set in Step 7 | | No |
    | my_ip_address | The CIDR blocks that will have SSH access to the nodes in the cluster | 0.0.0.0/0 | No |
    | project_environment | The environment type of the deployment, for example test or production | | Yes |
    | project_name | The name of the project to appear in the cluster | | Yes |
    | quay_user | The username of the Quay account to pull images with | | Yes |
    | quay_password | The password for the quay_user | | Yes |
    | rancher2_url | The URL of the Rancher server | | Yes |
    | rancher2_access_key | The access key for the Rancher instance used. This can be obtained in the API & Keys menu within the Rancher instance or using the /apikeys API | | Yes |
    | rancher2_secret_key | The secret key for the Rancher instance used. This can be obtained in the API & Keys menu within the Rancher instance or using the /apikeys API | | Yes |
    | registry_host | The Docker registry host name to use | | Yes |
    | registry_user | The username to use for the Docker registry | registry | No |
    | registry_password | The password for registry_user | password | No |
    | ssh_username | The username for SSH access to the nodes | aae | No |
    | ssh_public_key | The public key for SSH authentication on the nodes | | No |
    | zone_domain | The zone domain name | | Yes |
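
    As an illustration only, an edited terraform.tfvars might look like the following sketch. Every value shown is a placeholder, not a real credential:

    # Placeholder values -- replace with your own
    aae_license           = "./license.lic"
    aws_region            = "us-east-1"
    aws_access_key_id     = "AKIA..."
    aws_secret_access_key = "..."
    project_environment   = "test"
    project_name          = "myproject"
    quay_user             = "myuser"
    quay_password         = "..."
    rancher2_url          = "https://rancher.example.com"
    rancher2_access_key   = "token-abc12"
    rancher2_secret_key   = "..."
    registry_host         = "registry.example.com"
    zone_domain           = "example.com"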
  6. Use the following commands to create a kubeconfig file for the new EKS cluster:

    export KUBECONFIG=$PWD/.terraform/kubeconfig
    echo "$(terraform output kube_config)" > $KUBECONFIG
    kubectl cluster-info
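
    You can also confirm that the worker nodes have registered before continuing:

    kubectl get nodes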
    
  7. Use the following commands to populate the variables kubernetes_api_server and kubernetes_token in the terraform.tfvars file:

    echo "kubernetes_api_server = "$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')"" >> terraform.tfvars
    NAMESPACE=kube-system
    SERVICEACCOUNT=default
    kubectl create serviceaccount -n kube-system ${SERVICEACCOUNT}
    kubectl create clusterrolebinding ${SERVICEACCOUNT}-admin-binding --clusterrole cluster-admin --serviceaccount=${NAMESPACE}:${SERVICEACCOUNT}
    echo "kubernetes_token = "$(kubectl -n ${NAMESPACE} get secret $(kubectl -n ${NAMESPACE} get serviceaccount ${SERVICEACCOUNT} -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode)"" >> terraform.tfvars
    
  8. Use the following command to populate the my_ip_address variable in the terraform.tfvars file and restrict SSH access to the Kubernetes nodes:

    echo "my_ip_address = "$(curl https://ipecho.net/plain)/32"" >> terraform.tfvars
    
  9. Use the following command to complete the deployment:

    terraform apply
    

    Note: For reference, Terraform stores the state of a cluster deployment in the file terraform.tfstate. It is possible to store this in a remote location, such as an S3 bucket.
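
    As a sketch, remote state could be declared in a .tf file with a backend block like the following, where the bucket, key, and region are placeholders; re-run terraform init afterwards to migrate the state:

    # Sketch of remote state storage; bucket, key, and region are placeholders
    terraform {
      backend "s3" {
        bucket = "my-terraform-state"
        key    = "activiti-enterprise/terraform.tfstate"
        region = "us-east-1"
      }
    }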

Deployment on Amazon EKS

The following steps describe an example of deploying Activiti Enterprise on Amazon Elastic Kubernetes Service (Amazon EKS) with Terraform. An Amazon EKS cluster is created as part of the build.

Prerequisites

The Amazon EKS deployment assumes that you have the following:

  • Quay.io credentials to access Activiti Enterprise images.
  • A valid license for Activiti Enterprise.
  • Administrator access to an Amazon Web Services (AWS) account.
  • Terraform version 0.11.
  • eksctl installed.

Deployment steps

  1. Clone this repository and make the eks folder your working directory.

  2. If you have not already done so, initialize Terraform and verify the required plugins using the following command:

    terraform init
    
  3. Create a copy of the file terraform_template.tfvars and name it terraform.tfvars. This retains the original template and allows you to work on your own copy of it.

  4. Edit the terraform.tfvars file and replace the template variables with those relevant to your deployment:

    | Variable | Description | Default | Required |
    | -------- | ----------- | ------- | -------- |
    | aae_license | The location of a license file for Activiti Enterprise | | Yes |
    | acs_enabled | Set whether Alfresco Content Services is installed as part of the infrastructure. This is a boolean value | true | No |
    | aws_region | The region of AWS to create the EKS cluster in | | Yes |
    | aws_access_key_id | The AWS access key ID for the AWS account used | | Yes |
    | aws_secret_access_key | The AWS secret key for the AWS account used | | Yes |
    | cluster_name | The name for the cluster. If left blank it will be a concatenation of project_name and project_environment | | No |
    | gateway_host | The gateway host name | | No |
    | identity_host | The name of the identity server host | | No |
    | kubernetes_api_server | The API server URL for Kubernetes. This value is set in Step 7 | https://kubernetes | No |
    | kubernetes_token | The token for the kubernetes_api_server. This value is set in Step 7 | | No |
    | my_ip_address | The CIDR blocks that will have SSH access to the nodes in the cluster | 0.0.0.0/0 | No |
    | node_groupname | The name to assign the group of worker nodes | ng-1 | No |
    | project_environment | The environment type of the deployment, for example test or production | | Yes |
    | project_name | The name of the project to appear in the cluster | | Yes |
    | quay_user | The username of the Quay account to pull images with | | Yes |
    | quay_password | The password for the quay_user | | Yes |
    | registry_host | The Docker registry host name to use | | Yes |
    | registry_user | The username to use for the Docker registry | registry | No |
    | registry_password | The password for registry_user | password | No |
    | zone_domain | The zone domain name | | Yes |
  5. Edit the cluster.yaml file and ensure that, at a minimum, the following variables match those set in terraform.tfvars (a minimal sketch of the file follows the table):

    | cluster.yaml variable | terraform.tfvars variable |
    | --------------------- | ------------------------- |
    | name | A concatenation of project_name + project_environment |
    | region | aws_region |
    | nodeGroups.name | node_groupname |
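
    As an illustration, a minimal cluster.yaml might look like the following sketch; the name, region, and node group values are placeholders that must line up with your terraform.tfvars, and the instance type and capacity are assumptions to adjust for your workload:

    # Sketch of a minimal eksctl ClusterConfig; all values are placeholders
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: myproject-test   # project_name + project_environment
      region: us-east-1      # aws_region
    nodeGroups:
      - name: ng-1           # node_groupname
        instanceType: m5.large
        desiredCapacity: 3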
  6. Create the EKS cluster using the following command:

    eksctl create cluster -f cluster.yaml
    
  7. Use the following commands to populate the variables kubernetes_api_server and kubernetes_token in the terraform.tfvars file:

    echo "kubernetes_api_server = "$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')"" >> terraform.tfvars
    NAMESPACE=kube-system
    SERVICEACCOUNT=default
    kubectl create serviceaccount -n kube-system ${SERVICEACCOUNT}
    kubectl create clusterrolebinding ${SERVICEACCOUNT}-admin-binding --clusterrole cluster-admin --serviceaccount=${NAMESPACE}:${SERVICEACCOUNT}
    echo "kubernetes_token = "$(kubectl -n ${NAMESPACE} get secret $(kubectl -n ${NAMESPACE} get serviceaccount ${SERVICEACCOUNT} -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode)"" >> terraform.tfvars
    
  8. Use the following command to populate the my_ip_address variable in the terraform.tfvars file and restrict SSH access to the Kubernetes nodes:

    echo "my_ip_address = "$(curl https://ipecho.net/plain)/32"" >> terraform.tfvars
    
  9. Use the following commands to create a kubeconfig file for the new EKS cluster:

    eksctl utils write-kubeconfig --name=<cluster_name> --kubeconfig=$PWD/kubeconfig --set-kubeconfig-context=true
    export KUBECONFIG=$PWD/kubeconfig
    kubectl cluster-info
    
  10. Use the following command to complete the deployment:

    terraform apply
    

Note: For reference, Terraform stores the state of a cluster deployment in the file terraform.tfstate. It is possible to store this in a remote location, such as an S3 bucket.

Deployment into an existing Kubernetes cluster

The following steps describe an example of deploying Activiti Enterprise into a generic Kubernetes cluster. A cluster must already exist to deploy into before following these steps.

Prerequisites

The generic Kubernetes cluster deployment assumes that you have the following:

  • Quay.io credentials to access Activiti Enterprise images.
  • A valid license for Activiti Enterprise.
  • A load balancer with DNS entries and SSL certificates configured.
  • A Docker registry.
  • A Kubernetes cluster running version 1.12.
  • Terraform version 0.11.

Deployment steps

The following steps describe a deployment into an existing, generic Kubernetes cluster.

  1. Clone this repository and make it your working directory.

  2. If you have not already done so, initialize Terraform and verify the required plugins using the following command:

    terraform init
    
  3. Create a copy of the file terraform_template.tfvars and name it terraform.tfvars. This retains the original template and allows you to work on your own copy of it.

  4. Edit the terraform.tfvars file and replace the template variables with those relevant to your deployment:

    | Variable | Description | Default | Required |
    | -------- | ----------- | ------- | -------- |
    | aae_license | The location of a license file for Activiti Enterprise | | Yes |
    | acs_enabled | Set whether Alfresco Content Services is installed as part of the infrastructure. This is a boolean value | true | No |
    | aws_efs_dns_name | The EFS DNS name used for Alfresco Content Services storage. Applicable to AWS deployments only. Requires acs_enabled to be set to true | | No |
    | cluster_name | The name for the cluster. If left blank it will be a concatenation of project_name and project_environment | | No |
    | gateway_host | The gateway host name | | Yes |
    | helm_service_account | The service account to be used by Helm | tiller | No |
    | http | Set whether to use http. This is a boolean value | false | No |
    | kubernetes_api_server | The API server URL for Kubernetes. This value is set in Step 5 | https://kubernetes | No |
    | kubernetes_token | The token for the kubernetes_api_server. This value is set in Step 5 | | No |
    | project_environment | The environment type of the deployment, for example test or production | | Yes |
    | project_name | The name of the project to appear in the cluster | | Yes |
    | quay_user | The username of the Quay account to pull images with | | Yes |
    | quay_password | The password for the quay_user | | Yes |
    | registry_host | The Docker registry host name to use | | Yes |
    | registry_user | The username to use for the Docker registry | registry | No |
    | registry_password | The password for registry_user | password | No |
    | zone_domain | The zone domain name | | Yes |
  5. Use the following commands to populate the variables kubernetes_api_server and kubernetes_token in the terraform.tfvars file:

    echo "kubernetes_api_server = \"$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')\"" >> terraform.tfvars
    NAMESPACE=kube-system
    SERVICEACCOUNT=alfresco-deployment-service
    kubectl create serviceaccount -n ${NAMESPACE} ${SERVICEACCOUNT}
    kubectl create clusterrolebinding ${SERVICEACCOUNT}-admin-binding --clusterrole cluster-admin --serviceaccount=${NAMESPACE}:${SERVICEACCOUNT}
    echo "kubernetes_token = \"$(kubectl -n ${NAMESPACE} get secret $(kubectl -n ${NAMESPACE} get serviceaccount ${SERVICEACCOUNT} -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode)\"" >> terraform.tfvars
    
  6. Use the following command to complete the deployment:

    terraform apply
    

Note: For reference, Terraform stores the state of a cluster deployment in the file terraform.tfstate. It is possible to store this in a remote location, such as an S3 bucket.

© 2023 Alfresco Software, Inc. All Rights Reserved.