Steps to set up centralized logging with Grafana Loki and Grafana Agent for Kubernetes, VM Applications, and Azure services - Part 3
Go to the Azure AKS resource, select "Diagnostic settings" in the side blade, and choose "Add diagnostic setting".
Then, on the new page, select which logs need to be sent to the Event Hub and choose "Stream to an event hub". Here, provide the newly created Event Hub namespace and Event Hub.
Step 10: Configure Grafana Agent to scrape the messages from Azure Event Hub
Next, we need to pull the data from Azure Event Hub and push it to Grafana Loki.
In our existing grafana-agent-values.yaml, add the lines below to pull the messages from Azure Event Hub, then redeploy the Grafana Agent in AKS.
Here is the reference GitHub URL, and the configuration follows below.
https://github.com/DevOpsArts/grafana_loki_agent/blob/main/grafana-agent-values-azure-aks.yaml
loki.source.azure_event_hubs "azure_aks" {
  fully_qualified_namespace = "<Event Hub namespace hostname>:9093"
  event_hubs                = ["aks"]
  forward_to                = [loki.write.local.receiver]
  labels = {
    "job" = "azure_aks",
  }
  authentication {
    mechanism         = "connection_string"
    connection_string = "<Event Hub connection string>"
  }
}
Replace the placeholder values above with your Event Hub namespace hostname and connection string. We can add multiple Event Hubs to the Grafana Agent by providing a different job name for each Azure PaaS service, as in the sketch below.
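For example, a second source block for another Azure PaaS service could be added alongside the first one; the Event Hub name "sqlserver" and the job label "azure_sql" below are purely illustrative:
loki.source.azure_event_hubs "azure_sql" {
  fully_qualified_namespace = "<Event Hub namespace hostname>:9093"
  event_hubs                = ["sqlserver"]
  forward_to                = [loki.write.local.receiver]
  labels = {
    "job" = "azure_sql",
  }
  authentication {
    mechanism         = "connection_string"
    connection_string = "<Event Hub connection string>"
  }
}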
Note: Make sure network communication is allowed between Azure AKS and Azure Event Hub on port 9093 so the messages can be consumed.
Redeploy the Grafana Agent in AKS using the command below (helm upgrade --install updates the release if it already exists, or installs it otherwise):
helm upgrade --install grafana-agent grafana/grafana-agent --values grafana-agent-values-azure-aks.yaml -n observability
Check that all the Grafana Agent pods are up and running using the command below:
kubectl get all -n observability
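To dig deeper, we can also tail the agent logs. The label selector below assumes the default labels applied by the grafana/grafana-agent Helm chart; adjust it if your release uses different labels:
kubectl logs -n observability -l app.kubernetes.io/name=grafana-agent --tail=50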
Now, the Grafana Agent will pull the messages from Azure Event Hub and push them to Grafana Loki for the Azure AKS cluster that was configured to send its logs via Diagnostic Settings.
We can verify message processing on the Azure Event Hub side by checking the incoming and outgoing message metrics.
Step 11: Access Azure AKS logs in the Grafana dashboard
Go to the Grafana dashboard: Home > Explore > select the Loki data source.
In the filter section, select the "job" label and set its value to the job name given in grafana-agent-values-azure-aks.yaml. In our case the job name is "azure_aks".
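Equivalently, a minimal LogQL query can be typed directly in Explore, optionally with a text filter:
{job="azure_aks"}
{job="azure_aks"} |= "error"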
That's all. We have successfully deployed centralized logging with Grafana Loki and Grafana Agent for Kubernetes, VM applications, and Azure PaaS services.
In Part 1, we covered how to set up Grafana Loki and Grafana Agent to view Kubernetes pod logs.
Requirement:
Next, double-click the downloaded exe and install it. By default on Windows, the install path is:
C:\Program Files\Grafana Agent
Once the installation is completed, we need to update the configuration based on our needs, such as which application logs we want to send to Grafana Loki.
In our case, we installed the Grafana dashboard on the Windows VM and configured the Grafana dashboard logs in the Grafana Agent.
Similarly, we can add multiple applications with different job names.
Copy the Grafana Agent config file from the reference repo and update it according to your needs; a rough sketch is shown below.
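As a rough sketch (not the repo's exact contents), an agent-config.yaml for shipping the Grafana dashboard logs might look like the following; the Loki push URL, positions file, and log path are illustrative assumptions to replace with your own values:
server:
  log_level: info
logs:
  configs:
    - name: default
      clients:
        # Assumed Loki distributed gateway endpoint; replace with your own
        - url: http://<loki-gateway-host>/loki/api/v1/push
      positions:
        filename: C:\ProgramData\grafana-agent\positions.yaml
      scrape_configs:
        - job_name: grafana-dashboard
          static_configs:
            - targets: [localhost]
              labels:
                job: devopsart-vm
                # Assumed Grafana log location on Windows; adjust to your install
                __path__: C:\Program Files\GrafanaLabs\grafana\data\log\*.log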
We can also start the agent manually from the command prompt.
In Command Prompt, go to C:\Program Files\Grafana Agent.
Execute the command below:
grafana-agent-windows-amd64.exe --config.file=agent-config.yaml
This will help to find any issues with the configuration.
Note: The Grafana Loki distributed service endpoint (which is configured in agent-config.yaml) should be accessible from the Windows VM.
Step 7: Access VM application logs in Grafana Loki
Go to the Grafana dashboard: Home > Explore > select the Loki data source.
In the filter section, select the "job" label and set its value to the job name given in agent-config.yaml. In our case the job name is "devopsart-vm".
Now we are able to view the Grafana dashboard logs in Grafana Loki. You can create dashboards from here based on your preference.
In Part 2, we covered how to export Windows VM application logs to Grafana Loki and how to view them from the Grafana dashboard.
In Part 3, we will cover how to export Azure PaaS service logs to Grafana Loki.
Dealing with multiple tools for capturing application logs from different sources can be a hassle for anyone. In this blog post, we'll dive into the steps required to establish centralized logging with Grafana Loki and Grafana Agent. This solution will allow us to unify the collection of logs from Kubernetes pods, VM services, and Azure PaaS services.
Grafana Loki: a highly scalable log aggregation system designed for cloud-native environments.
Grafana Agent: an observability agent that collects metrics and logs from various applications for visualization and analysis in Grafana.
Requirement:
schemaConfig:
  configs:
    - from: "2020-09-07"
      store: boltdb-shipper
      object_store: azure
      schema: v11
      index:
        prefix: index_
        period: 24h
storageConfig:
  boltdb_shipper:
    shared_store: azure
    active_index_directory: /var/loki/index
    cache_location: /var/loki/cache
    cache_ttl: 1h
  filesystem:
    directory: /var/loki/chunks
  azure:
    account_name: <Azure Storage account name>
    account_key: <Azure Storage access key>
    container_name: <container name>
    request_timeout: 0
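Assuming these values are saved in a file such as loki-values.yaml (the file name and namespace here are illustrative), they would typically be applied when installing the Loki distributed Helm chart, for example:
helm install loki grafana/loki-distributed --values loki-values.yaml -n observability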
In this blog, we will explore a new tool called 'Rover,' which helps to visualize the Terraform plan.
Rover: This open-source tool is designed to visualize Terraform plan output, offering insights into infrastructure and its dependencies.
We will use the "Rover" Docker image to do our setup and visualize the infrastructure.
Requirements:
1. Linux/Windows VM
2. Docker
Step 1: Generate Terraform plan output
I have a sample Azure Terraform block in the devopsart folder; we will generate the Terraform plan output from there and store it locally.
cd devopsart
terraform plan -out tfplan.out
terraform show -json tfplan.out > tfplan.json
Now both the files are generated.
Step 2: Run the Rover tool locally
Execute the Docker command below to run Rover from the same path as Step 1:
docker run --rm -it -p 9000:9000 -v $(pwd)/tfplan.json:/src/tfplan.json im2nguyen/rover:latest -planJSONPath=tfplan.json
It runs the web UI on port 9000.
Step 3: Access the Rover web UI
Let's access the web UI and check it.
Go to the browser and enter http://localhost:9000
In the UI, the color codes on the left side help in understanding the actions that will take place for the resources when running terraform apply.
When a specific resource is selected in the graph, it shows the resource name and parameter information.
Additionally, the image can be saved locally by clicking the 'Save' option.
I hope this is helpful for someone who is genuinely confused by the Terraform plan output, especially when dealing with a large infrastructure.
Thanks for reading! We have tried the Rover tool and experimented with examples.
Reference:
https://github.com/im2nguyen/rover
Infracost: It provides cloud cost projections from Terraform. It enables engineers to view a detailed cost breakdown and understand expenses before implementation.
Requirement:
1. One Windows/Linux VM
2. Terraform
3. Terraform examples
Step 1: Infracost installation
For Mac, use the brew command below to do the installation:
# brew install infracost
For other operating systems, follow the link below:
https://www.infracost.io/docs/#quick-start
Step 2: Infracost configuration
We need to set up the Infracost API key by signing up here:
https://dashboard.infracost.io
Once logged in, visit the following URL to obtain the API key,
https://dashboard.infracost.io/org/praboosingh/settings/general
Next, open the terminal and set the key as an environment variable using the following command,
# export INFRACOST_API_KEY=XXXXXXXXXXXXX
Alternatively, you can log in to the Infracost UI and grant terminal access by using the following command:
# infracost auth login
Note: Infracost will not send any cloud information to its servers.
Step 3: Infracost validation
Next, we will do the validation. For validation purposes, I have cloned the GitHub repo below, which contains Terraform examples.
# git clone https://github.com/alfonsof/terraform-azure-examples.git
# cd terraform-azure-examples/code/01-hello-world
Try Infracost by using the command below to get the estimated cost for a month:
# infracost breakdown --path .
To save the report in JSON format and upload it to the Infracost server, use the commands below:
# infracost breakdown --path . --format json --out-file infracost-demo.json
# infracost upload --path infracost-demo.json
In case we plan to upgrade the infrastructure and need to understand the new cost, execute the following command to compare it with the previously saved output from the Terraform code path.
# infracost diff --path . --compare-to infracost-demo.json
Thanks for reading! We have installed Infracost and experimented with examples.
References:
https://github.com/infracost/infracost
https://www.infracost.io/docs/#quick-start
In this blog, we will install and examine a new tool called Trivy, which helps identify vulnerabilities, misconfigurations, licenses, secrets, and software dependencies in the following:
1. Container image
2. Kubernetes cluster
3. Virtual machine image
4. Filesystem
5. Git repo
6. AWS
Requirements:
1. One virtual machine
2. Any one of the above-mentioned targets
Step 1: Install Trivy
Execute the command below based on your OS.
For Mac:
brew install trivy
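Once installed, a quick way to verify the setup is to scan a public container image or a local directory, for example:
trivy image nginx:latest
trivy fs .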
In this blog post, we will explore a new tool called "KOR" (Kubernetes Orphaned Resources), which assists in identifying unused resources within a Kubernetes (K8s) cluster. This tool will be beneficial for those who are managing Kubernetes clusters.
Requirements:
1. One machine (Linux/Windows/Mac)
2. K8s cluster
Step 1: Install kor on the machine
I am using a Linux VM for this experiment; for other platforms, download the binaries from the link below:
https://github.com/yonahd/kor/releases
Download the Linux binary for the Linux VM:
wget https://github.com/yonahd/kor/releases/download/v0.1.8/kor_Linux_x86_64.tar.gz
tar -xvzf kor_Linux_x86_64.tar.gz
chmod +x kor
cp kor /usr/bin/
kor --help
Step 2: Nginx web server deployment in K8s
I have a K8s cluster; we will deploy the Nginx web server in it and try out the "kor" tool.
Create a namespace called "nginxweb":
kubectl create namespace nginxweb
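If the Bitnami chart repository is not already added locally, add and update it first (standard Bitnami repo URL assumed):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update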
Using Helm, we will deploy the Nginx web server with the command below:
helm install nginx bitnami/nginx --namespace nginxweb
kubectl get all -n nginxweb
Step 3: Validate with the kor tool
Let's check the unused resources with the kor tool in the nginx namespace.
The command below will list all the unused resources in the given namespace.
Syntax: kor all -n <namespace>
kor all -n nginxweb
Let's delete the Nginx deployment from the nginxweb namespace and try it again.
kubectl delete deployments nginx -n nginxweb
Now check which resources are available in the namespace:
kubectl get all -n nginxweb
The result shows that one K8s service is still available under the nginxweb namespace.
Now try the kor tool again using the command below:
kor all -n nginxweb
This time it reports that the nginx service is not used anywhere in the namespace.
We can also check only a specific resource type (configmap/secret/services/serviceaccount/deployments/statefulsets/role/hpa), for example:
kor services -n nginxweb
kor serviceaccount -n nginxweb
kor secret -n nginxweb
That's all. We have installed the KOR tool and validated it by deleting one of the components of the Nginx web server deployment.
References:
https://github.com/yonahd/kor
In this blog, we will see an interesting tool that helps DevOps/SRE professionals working in the Azure Cloud.
Are you worried that your Infrastructure as Code (IaC) is not in a good state, and there have been lots of manual changes? Here is a solution provided by Azure: a tool named "Azure Export for Terraform (aztfexport)".
This tool assists in exporting the current Azure resources into Terraform code. Below, we will see the installation of this tool and how to use it.
Requirements:
1. A Linux/Windows machine
2. Terraform (>= v0.12)
3. az-cli
4. An Azure subscription
Step 1: aztfexport installation
This tool can be installed on all operating systems. Refer to the link below for installation instructions for other operating systems:
https://github.com/Azure/aztfexport
If you are installing it on macOS, open the terminal and execute the following command:
brew install aztfexport
Step 2: Configure the Azure subscription
Execute the commands below to configure the Azure subscription in the terminal:
az login
or
az login --use-device-code
Next, set the subscription ID:
az account set --subscription "<subscription id>"
Now that the Azure subscription is configured, let's proceed with trying out the tool.
In this subscription, I have a resource group named "devopsart-dev-rg" which contains a virtual machine (VM). We will generate the Terraform code for this VM.
Step 3: Experiment with the "aztfexport" tool
Execute the commands below to generate the Terraform code.
Create a new directory with any name:
mkdir aztfexport && cd aztfexport
The command below will help to check the available options for this tool:
aztfexport --help
Execute the command below to generate the Terraform code from the "devopsart-dev-rg" resource group.
Syntax: aztfexport resource-group <resource-group-name>
aztfexport resource-group devopsart-dev-rg
It will take a few seconds to list the available resources in the given resource group (RG), and it will list all the resources under the RG.
Next, enter "w" to import the resources; it will take some more time to generate the code.
Once it is completed, we can validate the generated tf files.
Step 4: Validate the tf files
We will validate the generated files; the following files are present in the directory:
main.tf
provider.tf
terraform.tf
aztfexportResourceMapping.json
terraform.tfstate (we can store this state file remotely by using the parameters below)
aztfexport [subcommand] --backend-type=azurerm \
  --backend-config=resource_group_name=<resource group name> \
  --backend-config=storage_account_name=<account name> \
  --backend-config=container_name=<container name> \
  --backend-config=key=terraform.tfstate
Run terraform plan.
Nice! It says no changes are required in the Azure cloud infra.
Step 5: Delete the Azure resources and recreate them with the generated tf files
The resources are deleted from the Azure Portal under the dev RG.
Now run the Terraform commands to recreate the resources:
cd aztfexport
terraform plan
Next, execute:
terraform apply
Now all the resources are recreated with the generated tf files.
That's all. We have installed the aztfexport tool, generated tf files, destroyed the Azure resources, and recreated them with the generated files.
Check the link below for the current limitations:
https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-terraform-concepts#limitations
References:
https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-terraform-overview
https://github.com/Azure/aztfexport
https://www.youtube.com/watch?v=LWk9SU7AmDA
https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-advanced-scenarios