Automate WSO2 deployment in an OpenShift cluster using Azure DevOps CI/CD Pipelines

How we automated a WSO2 deployment with Azure DevOps CI/CD — part 03


This is the third post in a series about automating a WSO2 deployment using Azure DevOps pipelines. Links to all the posts can be found at the bottom of this article.

In this post I'm going to explain how we automated a WSO2 deployment in an OpenShift cluster. The same pipeline can be used for a WSO2 deployment in a Kubernetes cluster by only changing the ‘oc’ commands to ‘kubectl’ commands.
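
To illustrate that swap (the manifest path, deployment name and namespace below are hypothetical, not from our actual pipeline):

```shell
# Apply a WSO2 API Manager manifest on OpenShift
oc apply -f wso2am-deployment.yaml -n wso2

# The equivalent step on a plain Kubernetes cluster
kubectl apply -f wso2am-deployment.yaml -n wso2

# Likewise for checking the rollout
oc rollout status deployment/wso2am -n wso2
kubectl rollout status deployment/wso2am -n wso2
```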

We used the official WSO2 Kubernetes artefacts and parameterised them according to our needs.

API Manager — https://github.com/wso2/kubernetes-apim/tree/2.6.x/advanced/pattern-1

Enterprise Integrator — https://github.com/wso2/kubernetes-ei/tree/6.4.x/

Below is the directory/file structure we had in our openshift-wso2am repository.

Everything that differs from environment to environment was added as a variable (i.e., the configs were parameterised); the ReplaceTokens task in the pipeline then assigns values to those parameters.

Below are some examples of how we parameterised the configs.

1. A section of the master-datasources.xml file looked like the one below. You may notice strings starting with ‘#{’ and ending with ‘}#’. These are the placeholders for the parameters, i.e. the places we parameterised, e.g. #{SQL_SERVER_HOST_PORT}#, #{SQL_USER}# etc.
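
A sketch of what such a parameterised datasource section might look like (the #{AM_DB_NAME}# and #{SQL_PASSWORD}# token names are illustrative additions to the ones mentioned above):

```xml
<datasource>
    <name>WSO2AM_DB</name>
    <definition type="RDBMS">
        <configuration>
            <!-- Environment-specific values are injected by ReplaceTokens -->
            <url>jdbc:sqlserver://#{SQL_SERVER_HOST_PORT}#;databaseName=#{AM_DB_NAME}#</url>
            <username>#{SQL_USER}#</username>
            <password>#{SQL_PASSWORD}#</password>
            <driverClassName>com.microsoft.sqlserver.jdbc.SQLServerDriver</driverClassName>
        </configuration>
    </definition>
</datasource>
```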

2. Setting Hostname in Carbon.xml
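
A minimal sketch of the parameterised hostname elements in carbon.xml (the #{WSO2_HOSTNAME}# token name is an assumption, not necessarily the one we used):

```xml
<!-- carbon.xml: hostname injected per environment by ReplaceTokens -->
<HostName>#{WSO2_HOSTNAME}#</HostName>
<MgtHostName>#{WSO2_HOSTNAME}#</MgtHostName>
```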

3. A part of deployment.yaml
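
As a sketch of what such a tokenised deployment.yaml fragment could look like (all token, image and resource names here are illustrative):

```yaml
# Part of deployment.yaml with environment-specific values tokenised
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wso2am
  namespace: "#{NAMESPACE}#"
spec:
  replicas: #{REPLICA_COUNT}#
  template:
    spec:
      containers:
        - name: wso2am
          image: "#{DOCKER_REGISTRY}#/wso2am:#{IMAGE_TAG}#"
```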

ReplaceTokens Plugin

More details on the plugin can be found here. It is quite a useful plugin for maintaining parameters across different environments.
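
Conceptually, what the task does is a plain textual substitution; here is a minimal Python sketch of the idea (not the plugin's actual implementation):

```python
import re

def replace_tokens(text, variables, prefix="#{", suffix="}#"):
    """Replace every #{NAME}# occurrence in text with variables[NAME]."""
    pattern = re.escape(prefix) + r"(\w+)" + re.escape(suffix)
    return re.sub(pattern, lambda m: variables[m.group(1)], text)

config = "<username>#{SQL_USER}#</username>"
print(replace_tokens(config, {"SQL_USER": "wso2admin"}))
# → <username>wso2admin</username>
```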

Azure pipeline syntax looked like below
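
A hedged sketch of that step in YAML (the root directory, file globs and `actionOnMissing` value are assumptions; the task shown is the Replace Tokens marketplace task, whose major version may differ from the one we used):

```yaml
steps:
  - task: replacetokens@3
    displayName: 'Replace tokens in WSO2 configs'
    inputs:
      rootDirectory: '$(Build.SourcesDirectory)/conf'   # illustrative path
      targetFiles: '**/*.xml;**/*.yaml'
      tokenPrefix: '#{'
      tokenSuffix: '}#'
      actionOnMissing: 'fail'   # fail the build if a token has no variable
```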

Classic UI configuration looked like below

I'm going to add here the whole azure-pipeline.yml file we used for this.

1. You may notice that it has 3 stages, each with the same set of tasks, and that each stage has a condition checking the branch.
2. Again, we used an on-premises agent to run this pipeline, as the OpenShift environment was on premises. We had an agent pool with a specific agent called MYAGENT01, which had the OpenShift client installed and had access to the on-premises OpenShift clusters.
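
The overall shape of such a pipeline can be sketched like this (the stage names, branch names, pool name and manifest path are assumptions; only the agent name MYAGENT01 and the three-stage/branch-condition structure come from our setup):

```yaml
trigger:
  branches:
    include: [dev, qa, master]

stages:
  - stage: Dev
    condition: eq(variables['Build.SourceBranch'], 'refs/heads/dev')
    jobs:
      - job: Deploy
        pool:
          name: OnPremPool              # self-hosted agent pool (name illustrative)
          demands: Agent.Name -equals MYAGENT01   # agent with the oc client installed
        steps:
          - task: replacetokens@3       # inject environment-specific values
            inputs:
              targetFiles: '**/*.xml;**/*.yaml'
              tokenPrefix: '#{'
              tokenSuffix: '}#'
          - script: oc apply -f deployment.yaml
            displayName: 'Deploy to OpenShift'

  # The QA and Prod stages repeat the same tasks,
  # each guarded by its own branch condition.
```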

That's it folks, I hope this was helpful :)

Links to other posts in this series…

Part 01

Part 02

Part 03

Part 04