Hands-On End-to-End DevSecOps Project - Automating Provisioning, Deployment and Monitoring of an 11-Microservice e-Commerce App on Kubernetes (EKS)
Introduction¶
This project demonstrates a complete, production-grade DevSecOps pipeline for deploying a cloud-native e-commerce application built from 11 microservices written in multiple programming languages that communicate with each other over gRPC. The application was originally designed by Google developers for GKE, but I have adapted it to be deployed on Amazon EKS or on any Kubernetes cluster.
Watch the Video Walkthrough here
Video Walkthrough¶
DevSecOps Project - End-to-end Deployment and Monitoring of 11-Microservice e-Commerce App to AWS EKS with Jenkins, ArgoCD, Terraform, Grafana & Prometheus
The goal of this project is to design and implement an end-to-end DevOps workflow that automates:
- Infrastructure provisioning with Terraform
- Continuous Integration (CI) using Jenkins for building, testing, scanning, and pushing container images
- Continuous Delivery (CD) using ArgoCD (GitOps) for seamless deployment to Amazon EKS
- Security and code quality checks with SonarQube, Gitleaks, and Trivy
- Monitoring and observability with Prometheus and Grafana
All components were carefully integrated to simulate a real-world DevOps environment, covering every stage from source code to production deployment.
The project highlights key modern DevOps practices, including:
- Infrastructure as Code (IaC): Automating cloud resource provisioning with Terraform.
- GitOps: Managing Kubernetes deployments declaratively with ArgoCD.
- CI/CD Automation: Orchestrating multi-stage pipelines with Jenkins.
- Cloud-Native Security: Ensuring code quality, vulnerability management, and secrets detection.
- Observability: Collecting and visualizing system and application metrics with Prometheus and Grafana.
By the end of this project, you’ll gain a detailed understanding of how each tool was implemented and how the entire pipeline works together to deliver a scalable, secure, and automated deployment workflow on AWS.
GitHub Repos used for this project
| GitHub Repo Link | Description |
|---|---|
| 11 Microservices k8s App Source Code | Contains the application source code |
| 11 Microservices k8s App ArgoCD Manifest | Contains the ArgoCD GitOps manifest |
| Deploy a Jenkins Server on AWS using Terraform | Contains Terraform Script to deploy the Jenkins server |
Architectural Overview¶
Project Workflow¶
- Infrastructure Setup
- Setup Jenkins Using Terraform
- Create Kubernetes Cluster on EKS
- Configure Jenkins
    - Install plugins: Go to `Dashboard > Manage Jenkins > Manage Plugins` and install the following plugins:
        - SonarQube Scanner
        - Docker
        - Docker Pipeline
        - Docker Build Step
        - CloudBees Docker Build and Publish
        - Kubernetes
        - Kubernetes CLI
        - Email Extension Template
        - Prometheus Metrics
        - OWASP Dependency-Check
    - Configure Jenkins Plugins
    - Configure SonarQube Server Token
    - Setup Jenkins CI/CD Pipelines
- Setup Kubernetes Cluster (Amazon EKS)
- Install ArgoCD for GitOps
- Deploy Application to EKS Using GitOps
- Install and Setup Grafana and Prometheus for Monitoring
- CleanUp Resources
Infrastructure Setup¶
Jenkins Server Setup¶
For the purpose of this project, we will create our Jenkins server on an EC2 instance using Terraform as our IaC tool. The Jenkins server will also serve as our base server, from which we will manage other infrastructure such as the EKS cluster.
I have included the link to my Github repo containing the Jenkins server Terraform script below.
Deploy a Jenkins Server on AWS using Terraform
Pre-requisites for the terraform script
You will need the following pre-requisites to run the terraform script on your local machine:
- An AWS account (Get one here )
- Terraform CLI installed on your local machine (How to Install Terraform )
- Your AWS access key ID and secret access key (learn how to get your AWS access keys here )
- AWS CLI installed and configured with your AWS access key ID and Secret access keys (learn more about AWS CLI here )
What does this terraform script do?
The Terraform script will do the following:

- Provision an EC2 instance of type `t2.large` (you can easily set a different instance type in the `terraform.tfvars` file)
- Provision the EC2 instance in the default VPC
- Configure the security group to expose all the required ports for this project: `22, 25, 80, 443, 465, 8080, 9000 and 9100` (the ports and their descriptions are listed in the `terraform.tfvars` file)
- Create an AWS key pair and download the key file to your Terraform working directory on your local machine (the folder from which you ran the `terraform apply` command)
- Using the included Bash script (in the `user_data` field), bootstrap the instance and install the following:
    - Ubuntu 24.04 (the latest version)
    - Jenkins
    - Docker
    - SonarQube (as a Docker container)
    - eksctl
    - AWS CLI
    - kubectl
    - node_exporter
    - Trivy scanner
    - Gitleaks
- Output the `Public IP address` and the `SSH connection string` for the newly provisioned Jenkins server
- The Terraform script will also be used to `destroy` the server and its resources during the clean-up stage of this project.
Important Security Note
Since this is just a demo project, the ports are deliberately exposed and may be accessible over the internet for the duration of the project demonstration. This is not good security practice in production environments and should be avoided.
Clone the Repo on your local machine and apply the terraform config:
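A minimal sketch of the clone-and-apply steps, assuming the placeholder repository name below is replaced with the actual Terraform repo linked above:

```bash
# Clone the Jenkins-server Terraform repo (replace the placeholder with the actual repo name)
git clone https://github.com/opeyemitechpro/<jenkins-terraform-repo>.git
cd <jenkins-terraform-repo>

# Initialize, review and apply the Terraform configuration
terraform init
terraform plan
terraform apply
```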
After the terraform script executes, it displays the Public IP Address and the SSH connection string of the Jenkins server in the format below:
ssh -i <Key-pair_filename> ubuntu@<Jenkins_Master_public_ip>
Use the Public IP address to access the Jenkins server initial setup UI from your browser on port 8080.
<server_public_ip>:8080
Also, open the Terraform working folder in a terminal and use the SSH connection string to access the Jenkins server.
Tip
Replace the key_pair_filename and the server_public_ip as appropriate
Login to Jenkins Server¶
Go back to the Jenkins server terminal to copy the initial admin password.
From the Jenkins server terminal, run the command below and copy the password it displays:
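On a standard Ubuntu/Debian Jenkins installation, the initial admin password lives at the path below:

```bash
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```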
From the Jenkins server initial setup UI page in your browser, enter the Jenkins initial admin password you copied to proceed with the Jenkins server setup.
Install the suggested plugins and log in to the Jenkins server as admin.
Optionally, set up a new admin password under Jenkins > Admin > Security.
Install Additional Plugins¶
Go to Manage Jenkins > Plugins > Available Plugins, search for and install the following Jenkins plugins, then restart Jenkins:
- SonarQube Scanner (Scans your code and communicates with the SonarQube server)
- Docker (Enables Jenkins to use docker commands)
- Docker pipeline (Enables Jenkins to use docker commands in pipeline jobs)
- Docker build step
- Cloudbees docker build and publish
- Email Extension Template (Used to send job email notifications)
- Prometheus Metrics (Exposes Jenkins internal metrics so we can scrape and display them through Grafana dashboards)
Add Credentials¶
Go to Manage Jenkins > Credentials > (Global) > Add Credentials and add the following credentials:
- Add SonarQube Credentials:
    - Choose Secret Text as the kind
    - Set the ID and description as `sonar-token`
    - Copy and paste the token you copied from the SonarQube server (refer to the SonarQube server configuration section)
    - Click Add
- Add Docker Hub Credentials:
    - Choose Username with password as the kind
    - Set the ID and description as `my-docker-cred`
    - Enter your Docker Hub username and password
    - Click Add
- Add GitHub Credentials:
    - Choose Username with password as the kind
    - Set the username as your GitHub username (not your GitHub login email)
    - Generate a personal access token from your GitHub account:
        - Go to your GitHub account Profile > Settings > Developer settings > Personal access tokens
        - Set a note and select `repo`, `admin:repo_hook`, `notifications`
        - Click Generate token
        - Copy the generated token
    - Back in your Jenkins UI, paste the GitHub token as the password for the GitHub credentials (do not use your real GitHub password here!)
    - Set the ID and description as `github-cred`
    - Click OK
- Add e-mail Credentials:
    - Choose Username with password as the kind
    - Set the ID and description as `email-ID`
    - Enter your email username
    - Enter the App password generated from Gmail as the password (do not use your real Gmail password here!)
    - Click Add
Configure Plugins¶
Configure SonarQube Server¶
Next, let us set up our SonarQube server. For this project, SonarQube runs as a Docker container on the same server as Jenkins.
From your browser, login to your SonarQube server using the server ip and port 9000
Server URL: <sonar_server_ip>:9000
Tip
Since our SonarQube server is running as a docker container on port 9000 on the same machine as the Jenkins server, use <jenkins_server_ip>:9000 as the SonarQube Server URL.
Create a User token by going to Administration > Security > Users and save it somewhere for later
This token will be used to authenticate Jenkins to access the SonarQube server.
Then, on your Jenkins server, go to Manage Jenkins > Tools and configure each of the plugin tools as explained below:
SonarQube Scanner Installations
Go to Manage Jenkins > Tools > SonarQube Scanner installations and add a new SonarQube Scanner installation as below:

- SonarQube Scanner name: `sonar-scanner` (or use a suitable name)

Tip

This name will be used later in our CI pipeline to reference the SonarQube Scanner installation, so ensure you choose a unique name.

- Leave the default SonarQube version as it is

Set the SonarQube server URL under Manage Jenkins > System > SonarQube Installations:

- Server Name: `sonar` (this name will be used later in the job pipeline)
- Server URL: `http://<sonar_server_ip>:9000` (URL of the SonarQube server on port 9000)
- Server authentication token: select the `sonar-token` ID saved earlier in the credentials tab
- Apply and Save
Docker
On your Jenkins server, go to Manage Jenkins > Tools > Docker installations and add a new docker installation
Docker Name: docker
Set to Install Automatically from docker.com
Click Apply and Save
Go to Manage Jenkins > System and configure the following settings:
System Admin e-mail address: Jenkins Admin <your-email@email.com>
(Enter the Jenkins Admin email here, this will appear in the email sender field of your inbox)
SonarQube servers
Name: sonar (This name will be used later in the pipeline)
Server URL: <sonar_server_ip>:9000 (this should be the IP address of the SonarQube server on port 9000, which in our case is the same as our Jenkins server IP address)
Server Authentication token: sonar-token
Declarative Pipeline (Docker)
Docker Label: docker
Registry Credentials: my-docker-cred
Prometheus
No further configuration is needed.
By default, Jenkins metrics will be scraped from http://<jenkins_server_ip>:8080/prometheus
Jenkins Email Notifications
Go to Dashboard > Manage Jenkins > System and configure both the "Extended E-mail Notification" and the "E-mail Notification" sections as below:

- SMTP Server Name: `smtp.gmail.com`
- SMTP Port: `465`
- Username: `user_email_id@gmail.com`
- Password: `app_password`
- Use SSL: checked
- System Admin e-mail address: `<Admin_Name> <user_email_id@gmail.com>`
- Default Content Type: `HTML`
- Test email delivery
Tip
- The settings above apply to a Gmail address configuration. Confirm SMTP settings with your email service provider if you are not using Gmail.
- Copy the App password from your Gmail account security settings and use that as the password in the above configuration.
Images: Jenkins Email Notifications Settings
Default Content Type
Set to HTML (text/html)
Click Save to close the configurations page
Setting Up the Jenkins CI/CD Pipelines¶
For this project we will set up 2 separate pipelines.
- Continuous Integration (CI) Pipeline - This pipeline will be responsible for building, testing, scanning and pushing the docker images to Docker Hub
- Continuous Delivery (CD) Pipeline - This pipeline will be responsible for updating the k8s manifest file in the GitHub repo with the new docker image tags pushed by the CI pipeline
Continuous Integration (CI) Pipeline¶
- Go to `Jenkins > Create a Job` and give the new job item a name
- Select `Pipeline` and click `OK`

Go to the_job_name > Configuration > Pipeline and select Pipeline script
Copy and paste the CI pipeline script in the annotation box below into the Jenkins pipeline script template box.
Click Save
Below is the Jenkins pipeline script for the Continuous Integration (CI).
I have included details on how this pipeline script works in the annotation box below.
Jenkins Continuous Integration (CI) Pipeline script for the Jenkins CI job
The Jenkins CI pipeline is below:
Jenkins CI Pipeline Script (246 lines)
- Lines `9-24` contain environment variables. Replace the values according to your Jenkins server configuration.
How This Jenkins Pipeline Script Works
Below is a brief description of how this Jenkins CI pipeline script works.
This Jenkins pipeline automates the entire CI workflow for the 11-microservices Kubernetes application. It takes care of pulling the source code, scanning for security issues, analyzing code quality, building Docker images for each microservice, pushing them to Docker Hub, and finally updating the Kubernetes manifests with the new image tags. Here’s how it works step by step:
-
Environment Setup - The pipeline defines environment variables for Git, Docker, SonarQube, Trivy, Gitleaks, and email notifications. These variables let Jenkins know where to pull code from, where to push images, and how to connect with external tools like SonarQube or Docker Hub. The Docker image tag is dynamically generated from the Jenkins build number (e.g. ver-23).
-
Workspace Preparation - The pipeline starts with Clean Workspace, which clears out any old files or artifacts from previous builds. This ensures that every run starts fresh and avoids conflicts.
-
Source Code Checkout - Jenkins pulls the application code from the configured GitHub branch (OpeyemiTechPro-v1) to the workspace, making it ready for scanning and builds.
-
Security & Quality Scans
-
Gitleaks Scan: Detects any hardcoded secrets (API keys, passwords, tokens) in the repository.
-
SonarQube Analysis: Runs static code analysis for code quality, bugs, and maintainability issues.
-
Trivy FS Scan: Scans the project’s filesystem for known vulnerabilities before building the Docker images.
-
-
Docker Image Build & Push - Each microservice (adservice, cartservice, checkoutservice, etc.) has its own build stage. Jenkins switches into each microservice directory, builds a Docker image tagged with the current build version, and pushes the image to Docker Hub using stored credentials. This process is repeated for all 11 microservices, ensuring they are all containerized and versioned consistently.
-
Docker Image Cleanup - Once the images are pushed to Docker Hub, Jenkins removes the local images so they do not take up unnecessary space on the build server.
-
Update Kubernetes Manifest - Instead of deploying directly, the pipeline triggers a separate Jenkins job called `Update-Manifest`. The `Update-Manifest` job updates the Kubernetes deployment manifests with the newly built Docker tags, ensuring that the cluster always runs the latest version of the services.
-
Post-Build Notifications - Regardless of success or failure, Jenkins sends an email notification with build details, logs, and scan reports (Trivy, Gitleaks, dependency-check). This gives visibility into what happened during the pipeline run.
✅ In summary: This pipeline performs code checkout → security scans → code quality analysis → Docker builds → image push → Kubernetes manifest update → email notifications.
It enforces DevSecOps best practices while automating the entire CI/CD workflow for the microservices app on Kubernetes.
Configure GitHub Webhook¶
To enable Github to automatically trigger the Jenkins CI pipeline anytime a change is pushed to this Application source code GitHub repo, we need to configure a webhook in GitHub. The webhook sends a signal to this Jenkins job whenever the repo is updated. This causes the Jenkins CI pipeline to run without human intervention.
On the Github repo for the application source code, go to Settings > Webhooks > Add webhook
- Payload URL: http://<jenkins_server_ip>:8080/github-webhook/ (Replace <jenkins_server_ip> with the actual IP address of your Jenkins server)
- Content type: application/json
- Secret: Leave blank
- Which events would you like to trigger this webhook? Just the push event
- Active: Checked
- Click Add webhook
Now go back to the Jenkins CI Pipeline job configuration
- Activate the GitHub hook trigger for GITScm polling
- Click Save
Continuous Delivery (CD) Pipeline¶
- Go to `Jenkins > Create a Job` and create a second job item
- Name the job `Update-Manifest`

Note

It is important that the pipeline is named Update-Manifest because it is referenced by the CI pipeline script we created earlier. If you choose a different name, ensure you modify your CI pipeline script to reflect that.

- Select `Pipeline` and click `OK`
Go to Update-Manifest > Configuration > Pipeline and select Pipeline script
Copy and paste the CD pipeline script below into the script template box.
Click Save
Below is the Jenkins pipeline script for the Continuous Delivery (CD).
I have also included details on how this pipeline script works in the annotation box below.
Jenkins CD Pipeline script for the Update-Manifest Jenkins job
The Jenkins CD pipeline is below:
Jenkins CD Pipeline Script
- Lines `7-19` contain environment variables. Replace the values according to your Jenkins server configuration.
How This Jenkins CD Pipeline Script Works
This Jenkins pipeline is responsible for automatically updating the Kubernetes ArgoCD manifest whenever new Docker images are built and pushed by the main CI pipeline. Instead of manually editing YAML files to change image tags, this job updates the manifest with the latest Docker tag and pushes the change back to GitHub so ArgoCD can sync and deploy it.
Here’s how it works:
-
Pipeline Setup - The pipeline defines key environment variables like GitHub credentials, Docker Hub username, commit author details, and email addresses for notifications. It also accepts a parameter (DOCKER_TAG) from the upstream build job (the CI pipeline) so that the manifest is updated with the exact version of the new Docker images.
-
Git Checkout - Jenkins checks out the ArgoCD manifest repository (11-Microservices-k8s-App-ArgoCD) from GitHub on the specified branch (main). This repo contains the Kubernetes deployment YAML that ArgoCD watches.
-
Update Manifest with New Docker Tag - Before making changes, the script prints out the current image: lines from the manifest file for visibility. It then uses a `sed` command with regex to find all Docker image references (e.g., opeyemitechpro/service:oldtag) and replace the old tag with the new tag (DOCKER_TAG). After replacement, it prints the updated image: lines so you can verify the changes directly in the Jenkins logs (see the sketch after this list).
-
Commit & Push Changes - The pipeline configures Git with the commit author and email set in the environment variables. It stages and commits the updated manifest file with a message showing which build triggered the update. Using the stored GitHub credentials, it pushes the updated manifest back to the main branch of the repository.
-
If no changes were required (for example, if the manifest already had the latest tag), the commit step is skipped gracefully.
-
Post-Build Notification - Once finished, Jenkins sends an email notification with details of the build (status, job URL, build number, and the new Docker tag). This ensures visibility of every manifest update.
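For illustration only, a hypothetical sketch of the kind of `sed` replacement described in the "Update Manifest" step above; the manifest filename, image prefix handling, and exact regex in the actual pipeline script may differ:

```bash
# Assumed manifest filename for illustration only
MANIFEST=deployment-service.yaml

grep "image:" "$MANIFEST"    # print the current image tags for visibility

# Replace the tag on every opeyemitechpro/<service>:<tag> reference with the new DOCKER_TAG
sed -i "s#\(opeyemitechpro/[a-z-]*\):[^[:space:]\"]*#\1:${DOCKER_TAG}#g" "$MANIFEST"

grep "image:" "$MANIFEST"    # verify the updated image tags
```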
✅ In summary: This pipeline takes the Docker tag produced by the CI job, updates all microservice image tags in the ArgoCD manifest, commits the changes to GitHub, and lets ArgoCD handle the deployment. It removes the need for manual edits, keeps deployments consistent, and fully automates the CD process for Kubernetes microservices.
Running the CI-CD Pipelines¶
Once we have configured our CI/CD pipelines on Jenkins, the entire workflow can be triggered automatically.
Whenever changes are pushed to our source code repository, the GitHub webhook triggers the CI pipeline. The CI pipeline runs security and code quality checks through Gitleaks, SonarQube and Trivy, then builds new Docker images for each microservice and pushes them to Docker Hub. It then triggers the CD pipeline, which updates our manifest repo with the new Docker tag.
Each time either pipeline runs, it sends an email notification to the configured email address indicating the "SUCCESS", "FAILURE" or other status of the pipeline job.
Images: CI-CD Pipelines View and Email Notifications
CI Pipeline Job
CD Pipeline Job
Email Notification
Email Notification
SonarQube Analysis
SonarQube Analysis
Kubernetes Cluster Setup¶
For this project, we will use a basic Kubernetes cluster hosted on Amazon EKS. Our application will be deployed to the EKS cluster using ArgoCD. Additionally, we will deploy Prometheus and Grafana on the same cluster to monitor both our application and underlying infrastructure. We will use Helm to install ArgoCD and the Prometheus stack on the cluster.
For simplicity, we will use our Jenkins server as the base server to manage the EKS cluster.
To enable this, we will create IAM policies and attach them to the Jenkins instance, granting it the necessary permissions to manage our Amazon EKS cluster.
See (AWS Documentation - How to create IAM Policies)
AWS IAM Policies required for EKS Cluster Creation¶
The following AWS IAM policies are required to create and manage an EKS cluster. Ensure that the IAM role or user associated with your Jenkins server has these policies attached:
- AmazonEC2FullAccess
- AmazonEKS_CNI_Policy
- AmazonEKSClusterPolicy
- AmazonEKSWorkerNodePolicy
- AWSCloudFormationFullAccess
- IAMFullAccess
- Custom-EKS_Full_Access (create an additional custom policy as shown below)
See (AWS Documentation - Minimum IAM Policies for EKS)
Create the IAM user and attach the required policies to the user. Under the user's Security Credentials tab, create an access key for the user and enable the CLI use case. Copy both the Access Key and Secret Access Key and store them somewhere for the next step.
Attach IAM Policy to the Jenkins ec2 Machine
From your Jenkins server terminal, configure your AWS credentials using the command below. Enter the Access Key and Secret Access Key you copied earlier and set the default region (in my case it's us-east-2):
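A minimal sketch using the AWS CLI's interactive configuration:

```bash
aws configure
# AWS Access Key ID [None]:     <paste the Access Key ID>
# AWS Secret Access Key [None]: <paste the Secret Access Key>
# Default region name [None]:   us-east-2
# Default output format [None]: json
```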
Now, our Jenkins server has the necessary permissions to create and manage our EKS cluster.
Create EKS Cluster¶
Create your EKS Cluster
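A minimal sketch of the eksctl command, using the cluster name, region and node type mentioned in the tip below; the node count and other flags are assumptions you can adjust to suit your setup:

```bash
eksctl create cluster \
  --name opeyemi-k8s-cluster \
  --region us-east-2 \
  --node-type t3.medium \
  --nodes 2
```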
Tip
- Replace the cluster name and region in the command with your desired values
- This command will create an EKS cluster named `opeyemi-k8s-cluster` in the `us-east-2` region. You can change these values as needed.
- The command will also create a default node group with t3.medium instances. You can customize the node group settings by adding additional flags to the command. See (eksctl documentation - create cluster)
Install Helm¶
Check if Helm is installed on your base server
If not, install Helm with this command
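A quick check and, if needed, an install using Helm's official installer script (assuming the base server is the Ubuntu Jenkins instance):

```bash
# Check whether Helm is already installed
helm version

# If not, install Helm 3 with the official installer script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```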
ArgoCD Installation¶
Add ArgoCD Helm repo
Install ArgoCD Helm Chart
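A sketch of the repo-add and install commands; the release name argo-cd and the argocd namespace match what is described in the tip below:

```bash
# Add the official Argo Helm chart repository
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

# Install ArgoCD as release "argo-cd" into the "argocd" namespace
helm install argo-cd argo/argo-cd --namespace argocd --create-namespace
```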
Tip
This names the Helm release argo-cd, creates the argocd namespace if it doesn't already exist, and installs ArgoCD into that namespace.
Optional Steps to confirm the argocd installation
View helm releases in all namespaces (including the argocd namespace)
Check running status of pods in the argocd namespace to verify deployment
Get Helm release notes for the argocd installation
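A sketch of the optional verification commands, assuming the release name argo-cd used above:

```bash
# List Helm releases in all namespaces
helm list --all-namespaces

# Check that the ArgoCD pods are running
kubectl get pods -n argocd

# Show the Helm release notes for the ArgoCD installation
helm get notes argo-cd -n argocd
```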
Expose the argocd-server service as a LoadBalancer
By default, all the services in the argocd namespace are of ClusterIP type. We need to expose the argocd-server service as a LoadBalancer service type to make it accessible from outside the cluster.
First, display list of services running in the argocd namespace
Next, expose the argo-cd-argocd-server service as a LoadBalancer type
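A sketch of the commands; the service name argo-cd-argocd-server assumes the release name argo-cd used earlier:

```bash
# List the services in the argocd namespace
kubectl get svc -n argocd

# Change the argocd-server service type from ClusterIP to LoadBalancer
kubectl patch svc argo-cd-argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
```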
Tip
You will need to wait a short while for LoadBalancer URL to become ready before you can access it in the browser.
You can retrieve the LoadBalancer URL with:
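For example (the hostname field applies to AWS ELB-backed LoadBalancers; the service name is assumed as above):

```bash
kubectl get svc argo-cd-argocd-server -n argocd \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```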
Use this LoadBalancer URL to access ArgoCD UI from your browser.
Accessing initial ArgoCD Admin password
The initial password for the admin account is auto-generated and stored in the password field in a secret named argocd-initial-admin-secret in your Argo CD installation namespace. You can retrieve this password using kubectl:
You can access the ArgoCD initial Admin password by first displaying the contents of the argocd-initial-admin-secret then base64 decode the password field as shown below:
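A sketch of the two-step approach (dump the secret, then decode the password field):

```bash
# Display the secret, including the base64-encoded password field
kubectl get secret argocd-initial-admin-secret -n argocd -o json

# Decode the password string copied from the JSON output
echo <initial-password-string> | base64 --decode
```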
- Where `initial-password-string` is the string in the password field of the JSON output
Accessing initial ArgoCD Admin password
Alternatively, you can retrieve and base64-decode the password field of the argocd-initial-admin-secret in a single command, as shown below:
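A one-line sketch that extracts and decodes the password field directly:

```bash
kubectl get secret argocd-initial-admin-secret -n argocd \
  -o jsonpath="{.data.password}" | base64 --decode
```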
Warning
In production, you should delete the argocd-initial-admin-secret from the Argo CD namespace once you change the password. The secret serves no purpose other than storing the initially generated password in clear text and can safely be deleted at any time. It will be re-created on demand by Argo CD if a new admin password must be generated.
Further reading
- How to install ArgoCD using Helm Charts
- You can also follow the ArgoCD installation guide on the ArgoCD Documentation Website
- You can also display the Helm release notes for the ArgoCD release at any time by re-running the `helm get notes` command shown earlier
Images - ArgoCD Password UI
Install ArgoCD using Helm
Confirm ArgoCD Installation and running pods in argocd namespace
Extract ArgoCD Password
ArgoCD LoadBalancer URL
ArgoCD UI
ArgoCD App Synchronizing
ArgoCD App Synchronizing
Deploying the Application to Kubernetes using ArgoCD¶
- Open your browser and navigate to the ArgoCD LoadBalancer URL you exposed earlier
- Login with username `admin` and the initial password you retrieved earlier
- Once logged in, click on `New App` to create a new application deployment
- Fill in the application details as follows:
    - Application Name: `11-microservices-app`
    - Project: `default`
    - Sync Policy: `Automatic`
    - Enable Sync Policy: `enabled`
    - Repository URL: `https://github.com/opeyemitechpro/11-Microservices-k8s-App-ArgoCD`
    - Revision: `HEAD` (or specify a branch/tag/commit)
    - Source Path: `./`
    - Cluster URL: `https://kubernetes.default.svc`
    - Namespace: `opeyemi-app` (the namespace needs to be created beforehand; see the command below)
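The target namespace can be created beforehand with a single kubectl command:

```bash
kubectl create namespace opeyemi-app
```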
Images - Application Deployed to Kubernetes Cluster Using ArgoCD
App Deployment Details
Confirm ArgoCD Installation and running pods in argocd namespace
Set Up Monitoring Using Prometheus and Grafana¶
Prometheus Stack Installation and Setup on EKS using Helm¶
Add the kube-prometheus-stack Helm repo
Install Prometheus Stack into monitoring namespace
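A sketch of the commands, assuming the release is named prometheus (which yields the prometheus-grafana secret and service names used later in this section):

```bash
# Add the prometheus-community chart repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install the kube-prometheus-stack as release "prometheus" into the "monitoring" namespace
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```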
Tip
This will create the monitoring namespace if it doesn't exist already and deploy the full Prometheus monitoring stack (Prometheus + Grafana + Alertmanager + exporters) into the monitoring namespace
Check running status of pods in the monitoring namespace to verify deployment
OR
Optionally you can display helm release notes for the Prometheus installation
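A sketch of the verification commands, assuming the release name prometheus:

```bash
# Check that the monitoring stack pods are running
kubectl get pods -n monitoring
# OR list everything in the namespace
kubectl get all -n monitoring

# Optionally display the Helm release notes
helm get notes prometheus -n monitoring
```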
Expose Prometheus and Grafana as LoadBalancer services for external access
By default, all the services in the monitoring namespace are of ClusterIP type. We need to expose the Grafana and Prometheus services as LoadBalancer service types to make them accessible from outside the cluster.
First list all services in the monitoring namespace
Expose Grafana as a LoadBalancer for external access
Expose Prometheus as a LoadBalancer for external access
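A sketch of the commands; the service names below are the defaults created by a kube-prometheus-stack release named prometheus and may differ in your installation:

```bash
# List all services in the monitoring namespace
kubectl get svc -n monitoring

# Expose Grafana as a LoadBalancer
kubectl patch svc prometheus-grafana -n monitoring -p '{"spec": {"type": "LoadBalancer"}}'

# Expose Prometheus as a LoadBalancer
kubectl patch svc prometheus-kube-prometheus-prometheus -n monitoring -p '{"spec": {"type": "LoadBalancer"}}'
```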
Display the list of all the services in the monitoring namespace again. AWS will create a LoadBalancer URL for each of Grafana and Prometheus.
Optionally - Display only Grafana and Prometheus URLs
Display Grafana URL (optional)
Display Prometheus URL (optional)
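For example, using jsonpath to print just the LoadBalancer hostnames (same assumed service names as above):

```bash
# Grafana URL
kubectl get svc prometheus-grafana -n monitoring \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# Prometheus URL
kubectl get svc prometheus-kube-prometheus-prometheus -n monitoring \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```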
Tip
You may need to wait a while for the EXTERNAL-IP field to be populated, then open each URL for both Grafana and Prometheus in your browser (Grafana on port 80, Prometheus on port 9090)
To get the Grafana password, enter the command below. This displays the contents of the prometheus-grafana secret in JSON format. Copy the admin-password value from the JSON output and base64-decode it.
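A sketch of the two-step approach:

```bash
# Dump the prometheus-grafana secret, including the base64-encoded admin-password
kubectl get secret prometheus-grafana -n monitoring -o json

# Decode the admin-password value copied from the JSON output
echo <admin-password> | base64 --decode
```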
Tip
Replace the <admin-password> with the password string you copied from the json output
OR use this command
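Or, as a single command that extracts and decodes the admin-password field directly:

```bash
kubectl get secret prometheus-grafana -n monitoring \
  -o jsonpath="{.data.admin-password}" | base64 --decode
```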
Tip
- The default Grafana username is `admin`
Images - Prometheus and Grafana Images
Extract Grafana Password
Kube-Prometheus Helm Notes
Installing Kube-Prometheus using Helm Charts
Prometheus Dashboard
Grafana Dashboard Views
Grafana Dashboard Views
Grafana Dashboard Views
Scraping Additional Metrics¶
In order to monitor our Jenkins server on the Grafana dashboard, we need to expose its metrics to Prometheus using node_exporter and the Jenkins Prometheus plugin. Earlier, we installed node_exporter on our Jenkins server using the bootstrap Bash script, and we also installed the Prometheus plugin on Jenkins. node_exporter exposes the server (host) metrics, while the Prometheus plugin exposes metrics from Jenkins itself.
✅ Prerequisites:
- Prometheus is installed on the Kubernetes cluster via the Helm chart.
- node_exporter is running and accessible on our Jenkins server (default port: `9100`).
- The Jenkins server's IP address is publicly accessible or reachable from within the EKS cluster (e.g., via VPC peering, VPN, or internal networking).
- Security groups and firewall rules allow traffic from EKS nodes to port `9100` on the Jenkins server.
🚀 Steps to Add Jenkins Server to Prometheus Scrape Targets¶
1. Create Additional Scrape Config via Secret¶
Create a file named additional-scrape-configs.yaml with the following content:
additional-scrape-configs.yaml
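A sketch of the file contents (written via a heredoc), assuming two scrape jobs: the node-exporter-standalone job referenced in the verification step below, plus an optional jenkins job that scrapes the Jenkins Prometheus plugin endpoint on port 8080:

```bash
cat <<'EOF' > additional-scrape-configs.yaml
- job_name: node-exporter-standalone
  static_configs:
    - targets:
        - "<server-ip>:9100"        # Jenkins server node_exporter
- job_name: jenkins
  metrics_path: /prometheus         # endpoint exposed by the Jenkins Prometheus plugin
  static_configs:
    - targets:
        - "<server-ip>:8080"        # Jenkins server itself
EOF
```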
Tip
Edit the highlighted lines in the code above and replace <server-ip> with the IP address of the Jenkins server.
2. Now create a Kubernetes secret¶
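A sketch, assuming the secret is named additional-scrape-configs and lives in the monitoring namespace:

```bash
kubectl create secret generic additional-scrape-configs \
  --from-file=additional-scrape-configs.yaml \
  -n monitoring
```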
3. Edit Prometheus Custom Resource¶
First, get the Prometheus custom resource name, then edit the resource and add an `additionalScrapeConfigs` entry under `spec` that references the secret created in step 2, as sketched below:
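A sketch of the commands and the spec addition; the Prometheus resource name below assumes a release called prometheus:

```bash
# Get the Prometheus custom resource name
kubectl get prometheus -n monitoring

# Edit the Prometheus custom resource (name assumed from a release called "prometheus")
kubectl edit prometheus prometheus-kube-prometheus-prometheus -n monitoring

# Under .spec, add a reference to the secret created in step 2, so the spec contains:
#
#   additionalScrapeConfigs:
#     name: additional-scrape-configs
#     key: additional-scrape-configs.yaml
```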
4. Apply and Verify¶
Prometheus will reload its config automatically by default. Wait a minute, then:
- Go to the Prometheus UI (`/targets` page).
- Look for the job `node-exporter-standalone`.
- Ensure it's marked as UP.
Clean-Up¶
To avoid incurring unnecessary costs, it is advisable to clean up (destroy) all the infrastructure resources created during this project.
First, from the Jenkins server terminal, let's uninstall the Helm releases:
Uninstall ArgoCD and the kube-Prometheus stack
Delete the EKS Cluster along with all other cluster resources
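A sketch of the clean-up commands, using the release names and cluster name assumed earlier in this walkthrough:

```bash
# Uninstall the Helm releases
helm uninstall argo-cd -n argocd
helm uninstall prometheus -n monitoring

# Delete the EKS cluster and its node groups
eksctl delete cluster --name opeyemi-k8s-cluster --region us-east-2
```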
Then destroy the Jenkins server, which we created earlier with Terraform, from your local machine.
From your local machine, navigate to your Terraform working folder and run this command:
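From the Terraform working folder:

```bash
terraform destroy
```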
Important Note
- Wait until each of the commands completes
- Check your AWS Console to confirm that all resources have been successfully terminated
Conclusion¶
This hands-on DevSecOps project demonstrates the complete lifecycle of a modern cloud-native application — from automated provisioning to deployment, security, and monitoring — all powered by open-source tooling and AWS infrastructure. By integrating Terraform, Jenkins, SonarQube, Trivy, Gitleaks, ArgoCD, Prometheus, and Grafana, the project showcases how DevOps and security principles can be unified to deliver a production-ready, observable, and continuously improving system.
The end result is not just an 11-microservice e-commerce application running on EKS, but a replicable blueprint for building secure, automated, and scalable DevSecOps pipelines in any enterprise environment. This project underscores one key truth: automation, visibility, and security are not optional in modern software delivery — they are the foundation of resilience and innovation.






