Deploy WSO2 API Manager in an AWS ECS Fargate Cluster
AWS has two major offerings for deploying containers: EKS and ECS. EKS is the Kubernetes-based offering, while ECS has two launch options, EC2 based or Fargate. Due to its advantages and features, some people prefer ECS Fargate.
WSO2 provides official Kubernetes artefacts which you can use in an EKS setup. For ECS there are no official artefacts from WSO2.
In this article I'm going to explain how we can deploy a WSO2 API Manager 4.0.0 all-in-one HA (active-active) cluster in ECS Fargate.
For more information on the deployment, please refer to the following WSO2 official documentation.
Following are the high-level steps:
- Create ECS Fargate Cluster
- Create ECR (Container Registry) and Push Image
- Create EFS Volumes
- Create RDS Cluster
- Create Load Balancer
- Create Task Definition
- Create Service
01. Create ECS Fargate Cluster
1.1 Go to ECS, then click on Create Cluster
1.2 Select the cluster template that supports Fargate
1.3 Create the cluster along with a VPC and subnets, or choose an existing VPC and subnets.
1.4 At the end of the cluster creation you will get a summary of the cluster resources. Please save it somewhere, as you will need it later.
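If you prefer scripting this step, the cluster can also be created with the AWS CLI. Below is a minimal sketch, assuming an example cluster name and region (adjust to your own):
# create an ECS cluster with the Fargate capacity providers (name/region are examples)
aws ecs create-cluster \
  --cluster-name maheshc-test-cluster \
  --capacity-providers FARGATE FARGATE_SPOT \
  --region us-east-1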
02. Create ECR Repository and Push Image
If you are planning to pull the image into the task from an external Docker registry, then this step can be omitted. When you add the container while creating the Task Definition, you can paste your image's URI from the external repo. If authentication is required to pull the image, the AWS Task Definition has that capability too. Please see the snapshot of container creation below.
2.1 Go to ECR → Repositories → Create Repository, then give your repository a name
2.2 Find your push commands and push the Docker image to the repository
If you have a WSO2 subscription, you can pull images from the WSO2 docker registry (https://docker.wso2.com/tags.php?repo=wso2am). If you don't, you can pull images from the public Docker registry (https://hub.docker.com/r/wso2/wso2am).
You can also build your own images, push them to AWS ECR or any other Docker registry, and refer to that in the Task Definition.
# login to wso2 docker registry
docker login docker.wso2.com
# pull wum updated wso2am image (X=wum update revision)
docker pull docker.wso2.com/wso2am:4.0.0.X
# tag docker image with ECR hostname
docker tag docker.wso2.com/wso2am:4.0.0.X xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/maheshc-test-wso2am:4.0.0.X
# login to aws ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin xxxxxx.dkr.ecr.us-east-1.amazonaws.com
# push image to ECR
docker push xxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/maheshc-test-wso2am:4.0.0.X
Once the image is successfully pushed, you will see the image in ECR Repository.
03. Create EFS Volumes
From API Manager 4.0.0 onwards we don't need to share the synapse-configs directory between the containers.
3.1 EFS volume to inject configurations to the APIM node
Go to EFS → Create File System
If you look at docker-entrypoint.sh, you will notice that it copies everything inside this volume to the respective locations under the APIM home before starting the server.
This will be mounted to /home/wso2carbon/wso2-config-volume in the container. Content looks like below.
This EFS volume can be used to add configuration files, JKS files and JAR files (custom extensions, JDBC drivers, etc.) to the container.
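Since the entrypoint copies this volume on top of the APIM home, the layout inside the volume should mirror the product's directory structure. The following is only an illustrative sketch, assuming you override just the deployment.toml, the keystores and a JDBC driver; your actual content depends on what you need to change:
wso2-config-volume
└── repository
    ├── conf
    │   └── deployment.toml
    ├── resources
    │   └── security
    │       ├── wso2carbon.jks
    │       └── client-truststore.jks
    └── components
        └── lib
            └── mysql-connector-java-x.x.x.jar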
You need to add files to this mount only if your files differ from what is originally in the image. For example, if you are fine with the bin/api-manager.sh available in the image, then you don't need to add that file to this EFS. The same applies to other files like the log4j2.properties file and the default .jks files.
You can add more EFS volumes depending on your requirements. For example, if you have a use case to deploy a secondary user store, then you need to share the repository/deployment/userstores directory as well.
To add files to the EFS, you can temporarily mount it to an EC2 instance, or use any other preferred method. In any case, make sure the directories and files are set with proper permissions so they can be read by the container.
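For example, from an Amazon Linux EC2 instance in the same VPC, the EFS can be mounted and populated roughly as follows. This is a sketch: the file system ID and paths are placeholders, and the 802:802 ownership assumes the wso2carbon user ID used in official WSO2 images (verify against your own image):
# install the EFS mount helper and mount the file system
sudo yum install -y amazon-efs-utils
sudo mkdir -p /mnt/efs-config
sudo mount -t efs fs-0123456789abcdef0:/ /mnt/efs-config
# copy your files in and make sure the container user can read them
sudo cp -r ./my-config/* /mnt/efs-config/
sudo chown -R 802:802 /mnt/efs-config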
04. Create RDS Cluster
You can choose any database type as per your preference. Here I'm going to use an Amazon Aurora MySQL based database cluster.
You will have to create mainly two databases: apim_db and shared_db. The DB scripts can be found in the APIM_HOME/dbscripts folder. Run those scripts and create the tables. This is simple and straightforward, hence I'm not going to add screenshots for the DB creation.
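For example, with the MySQL client the databases and tables can be created roughly like this. This is a sketch: the endpoint and credentials are placeholders, the script paths should be verified against your APIM 4.0.0 distribution, and the recommended character set is covered in the WSO2 database setup documentation:
# create the two databases (names are examples matching the deployment.toml below)
mysql -h <rds-endpoint> -u admin -p -e "CREATE DATABASE WSO2SHARED_DB; CREATE DATABASE WSO2AM_DB;"
# run the shared_db and apim_db scripts shipped with the product
mysql -h <rds-endpoint> -u admin -p WSO2SHARED_DB < wso2am-4.0.0/dbscripts/mysql.sql
mysql -h <rds-endpoint> -u admin -p WSO2AM_DB < wso2am-4.0.0/dbscripts/apimgt/mysql.sql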
Create the DBs and add the relevant configurations into the deployment.toml file inside the EFS volume you created above.
[database.apim_db]
type = "mysql"
url = "jdbc:mysql://mysql.wso2.com:3306/WSO2AM_DB?useSSL=false"
username = "wso2carbon"
password = "wso2carbon"
driver = "com.mysql.cj.jdbc.Driver"

[database.shared_db]
type = "mysql"
url = "jdbc:mysql://mysql.wso2.com:3306/WSO2SHARED_DB?useSSL=false"
username = "wso2carbon"
password = "wso2carbon"
driver = "com.mysql.cj.jdbc.Driver"
For more information, please read the WSO2 official documentation below on Working with Databases.
05. Create Load Balancer
5.1 Create Application Load Balancer
Go to EC2 → Load Balancing → Load Balancers → Create New Load Balancer → Application Load Balancer
✓ Give it a Name
✓ Add two listeners 9443 and 8243.
✓ Select VPC and Availability Zones
✓ Attach a certificate to LB
✓ Select Security Policy
✓ Add Security Group
5.2 Create two target groups and attach them to the LB listeners.
Defining proper health checks is important. Otherwise containers will be killed repeatedly on the assumption that they are unhealthy.
If you create a target group along with the LB creation, you can attach only one target group at that point. Just create the LB with one target group, and later attach the other target group to the relevant listener.
If you create the target groups separately:
Define the health checks. You can use the /services/Version API exposed via 9443 as the health check. Please read the official documentation here for more info.
Add a listener and forward it to the relevant target group.
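If you want to script this part, a target group with the health check can be created roughly as follows. This is a sketch; the name and VPC ID are placeholders, and the target type must be ip for Fargate tasks using awsvpc networking:
aws elbv2 create-target-group \
  --name wso2am-9443-tg \
  --protocol HTTPS \
  --port 9443 \
  --vpc-id <vpc-id> \
  --target-type ip \
  --health-check-protocol HTTPS \
  --health-check-path /services/Version \
  --health-check-port 9443 \
  --matcher HttpCode=200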
Finally, my load balancer listener configuration looks like below.
Please note that you will have to enable sticky sessions for the 9443 listener; otherwise you won't be able to log in to any of the consoles (Carbon management console, Developer Portal, Publisher, Admin, etc.).
Enable sticky sessions for the 9443 listener/target group.
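If you prefer the CLI, stickiness can be enabled on the 9443 target group roughly as follows (a sketch; the target group ARN is a placeholder):
aws elbv2 modify-target-group-attributes \
  --target-group-arn <target-group-arn-for-9443> \
  --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=lb_cookie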
06. Create Task Definition
6.1 Go to ECS → Task Definitions → Create New Task Definition
6.2 Select FARGATE option
6.3 Fill Task Definition Details
✓ Task Definition Name
✓ Task Role
✓ Task Execution Role
✓ Task Memory
✓ Task vCPU etc
6.4 Add Volumes
Here we will be adding the EFS volumes we created in Step 03 to the Task Definition.
Add EFS volumes to the Task Definition
Once added successfully, it will look like below
6.5 Add Container
Copy Image URI from the ECR. Add Port mappings as necessary.
6.6 Mount EFS Volumes to Container
Under Storage and Logging, please mount the volumes
efs-config-volume : /home/wso2carbon/wso2-config-volume/
efs-synapse-artifact : /home/wso2carbon/wso2am-4.0.0/repository/deployment/server/synapse-configs/
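In the registered task definition JSON, the volume and mount point configuration ends up looking roughly like the excerpt below. This is illustrative only; the file system IDs are placeholders and the mountPoints section sits inside your container definition:
"volumes": [
  {
    "name": "efs-config-volume",
    "efsVolumeConfiguration": { "fileSystemId": "<efs-id-1>", "transitEncryption": "ENABLED" }
  },
  {
    "name": "efs-synapse-artifact",
    "efsVolumeConfiguration": { "fileSystemId": "<efs-id-2>", "transitEncryption": "ENABLED" }
  }
],
"containerDefinitions": [
  {
    ...
    "mountPoints": [
      { "sourceVolume": "efs-config-volume", "containerPath": "/home/wso2carbon/wso2-config-volume" },
      { "sourceVolume": "efs-synapse-artifact", "containerPath": "/home/wso2carbon/wso2am-4.0.0/repository/deployment/server/synapse-configs" }
    ]
  }
]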
07. Create Service
You could create the service from the AWS Management Console UI itself, but at the time of writing this article, the console doesn't provide the capability to add two load balancing listeners (two ports). Therefore you will have to use the AWS CLI to create the service.
You may use the below skeleton to create the service. Please fill in the values with your resources' information.
You can generate the skeleton by using the below command:
aws ecs create-service --generate-cli-skeleton > service-name.json
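The part of the filled-in JSON that the console can't express is the two load balancer mappings. The relevant fields look roughly like this sketch (ARNs, subnets and security groups are placeholders, and the containerName must match the container in your Task Definition):
"loadBalancers": [
  { "targetGroupArn": "<target-group-arn-for-9443>", "containerName": "wso2am", "containerPort": 9443 },
  { "targetGroupArn": "<target-group-arn-for-8243>", "containerName": "wso2am", "containerPort": 8243 }
],
"launchType": "FARGATE",
"networkConfiguration": {
  "awsvpcConfiguration": {
    "subnets": ["<subnet-id-1>", "<subnet-id-2>"],
    "securityGroups": ["<security-group-id>"],
    "assignPublicIp": "DISABLED"
  }
}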
Use the below CLI command to create the service:
aws ecs create-service \
--cluster maheshc-test-cluster \
--service-name maheshc-test-service \
--cli-input-json file://maheshc-test-service.json
For more information on creating services via the AWS CLI, please go through the documentation below.
Go to your ECS cluster and look at the tasks; wait for them to reach the RUNNING state.
Use your Load Balancer DNS (or the hostname, if you have set one) to access the portals :)
Troubleshooting
If your tasks aren't running, go and check the logs of the stopped task/container. It will give you a clue; the cause is often file permissions on the EFS volumes.
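The stop reason can also be fetched with the CLI, for example (a sketch; the cluster name and task ID are placeholders):
# list recently stopped tasks and inspect why one of them stopped
aws ecs list-tasks --cluster maheshc-test-cluster --desired-status STOPPED
aws ecs describe-tasks --cluster maheshc-test-cluster --tasks <task-id> \
  --query 'tasks[0].{stoppedReason: stoppedReason, containerReasons: containers[].reason}'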
If you can't access the portals, it could be because the ports are blocked by the LB/security groups. Please check whether the ports are open and whether you can access the LB over the internet. Also check whether you have changed the hostnames in deployment.toml accordingly.
Cheers!!!