Today, let's make it a little more complex but interesting, and add one more project to your resume.
Install Docker and start the Docker service on a Linux EC2 instance
Use the following commands to install Docker.
sudo apt-get update
sudo apt install docker.io
sudo usermod -aG docker $USER
sudo reboot
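After the instance comes back up, confirm that the Docker service is running. A minimal check, assuming a systemd-based Ubuntu instance:
sudo systemctl enable --now docker   # start Docker now and enable it on every boot (harmless if already running)
sudo systemctl status docker         # should report "active (running)"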
Create 2 Docker containers and run any basic application on those containers
Create 2 containers
docker run -d -p 8001:8001 trainwithshubham/my-note-app:latest
docker run -d nginx
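To verify that both containers came up, list them:
docker ps   # both containers should show a status of "Up ..."
If port 8001 is open in the instance's security group, the note app should also respond at http://<EC2-public-IP>:8001.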
Configure data source in Grafana
Add Loki as a data source, as we did in the previous article.
Now click on Explore to build the query.
We will create a query that shows only the log lines containing "docker".
Name -> System Generated Logs
Label Filters -> job = varlogs
Line Contains -> docker
Click on Run Query
Once you get the output, we can add the result to a new dashboard. Before that, name the panel Docker Logs.
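For reference, the Explore builder above translates into a LogQL query. Assuming the Promtail setup from the previous article attaches the label job=varlogs to the system logs, the equivalent query is roughly:
{job="varlogs"} |= "docker"
The selector in braces picks the log stream by label, and the |= filter keeps only the lines that contain "docker".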
Configuring Telegraf
Telegraf is a powerful data collection tool that plays a crucial role in collecting and aggregating metrics and logs for monitoring, observability, and performance analysis. It provides the foundation for building scalable and efficient monitoring solutions in various environments.
1. Install Telegraf on the EC2 instance through the apt package manager.
sudo apt-get install telegraf
2. Check if the telegraf service is running.
sudo systemctl status telegraf
3. Configure Telegraf to collect Docker metrics by enabling the Docker input plugin in the Telegraf configuration file, as sketched below.
sudo vi /etc/telegraf/telegraf.conf
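Inside telegraf.conf, the change to make is enabling the Docker input plugin. A minimal sketch of the section to uncomment (the endpoint below is the default local Docker socket):
[[inputs.docker]]
  # Read container and engine metrics from the local Docker daemon
  endpoint = "unix:///var/run/docker.sock"
  # Give up on slow Docker API calls after 5 seconds
  timeout = "5s"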
4. Restart the Telegraf service and check the status of the service.
sudo service telegraf restart
sudo service telegraf status
5. Give Telegraf permission to access the Docker socket.
sudo chmod 666 /var/run/docker.sock
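chmod 666 is fine for a demo, but it makes the Docker socket readable and writable by everyone. A slightly safer alternative, assuming the package created the usual telegraf service user, is to add that user to the docker group instead:
sudo usermod -aG docker telegraf
sudo service telegraf restart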
Configuring InfluxDB
By utilizing InfluxDB as the backend storage for Telegraf, we can effectively store and analyze time series data, enabling monitoring, performance analysis, and observability of our systems and applications.
1. Install InfluxDB on the EC2 instance through the Ubuntu apt package manager.
sudo apt install influxdb
2. Install the InfluxDB command-line client.
sudo apt install influxdb-client
3. Open the InfluxDB shell by running the following command in your terminal:
influx
4. Once you are in the InfluxDB shell, execute the following command to create the "telegraf" database:
CREATE DATABASE telegraf
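Before leaving the shell, you can confirm the database exists; telegraf should appear in the list:
SHOW DATABASES
exit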
5. Open the Telegraf config file and enable the InfluxDB output section, as sketched below, to connect Telegraf to InfluxDB.
sudo vim /etc/telegraf/telegraf.conf
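The output section to enable looks roughly like the following. This is a sketch that assumes InfluxDB is listening on its default local port 8086 and that you created the telegraf database above:
[[outputs.influxdb]]
  # Write all collected metrics to the local InfluxDB instance
  urls = ["http://127.0.0.1:8086"]
  # Database created in the InfluxDB shell
  database = "telegraf"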
6. Restart the telegraf service to reflect the changes.
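sudo service telegraf restart
Once Telegraf has been running for a minute, you can check that metrics are actually landing in InfluxDB. The measurement names come from the Docker input plugin, so treat them as something to verify on your setup:
influx -database 'telegraf' -execute 'SHOW MEASUREMENTS'
You should see docker and several docker_container_* measurements in the output.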
Creating Dashboard
We will create a dashboard showing the following data:
Total Containers.
Running Containers.
Stopped Containers.
Images.
Containers memory.
Containers uptime.
We will build these panels one by one.
Total Containers
As the below screenshot shows, make the appropriate settings to reflect the total containers in a stat form.
Choose the data source as influxdb. Grafana will collect the data from influxdb.
In the FROM section select docker. This points the panel at the docker measurement collected by Telegraf.
In the SELECT section choose n_containers. This will show the total number of containers on the server.
We can choose the colour of the graph.
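For reference, the settings above correspond roughly to this InfluxQL query; $timeFilter and $__interval are macros that Grafana fills in, and the measurement and field names are those reported by Telegraf's Docker input:
SELECT last("n_containers") FROM "docker" WHERE $timeFilter GROUP BY time($__interval) fill(null)
The Running Containers, Stopped Containers, and Images panels below use the same shape of query with n_containers_running, n_containers_stopped, and n_images as the field.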
Running Containers
As the below screenshot shows, make the appropriate settings to reflect the total containers running in a stat form.
Choose the data source as influxdb. Grafana will collect the data from influxdb.
In the FROM section select docker. This points the panel at the docker measurement collected by Telegraf.
In the SELECT section choose n_containers_running. This will show the total number of containers running on the server.
Stopped Containers
As the below screenshot shows, make the appropriate settings to reflect the stopped containers in a stat form.
Choose the data source as influxdb. Grafana will collect the data from influxdb.
In the FROM section select docker. This points the panel at the docker measurement collected by Telegraf.
In the SELECT section choose n_containers_stopped. This will show the total number of stopped containers on the server.
Images
As the below screenshot shows, make the appropriate settings to reflect the total images on the server in a stat form.
Choose the data source as influxdb. Grafana will collect the data from influxdb.
In the FROM section select docker. This points the panel at the docker measurement collected by Telegraf.
In the SELECT section choose n_images. This will show the total number of images on the server.
Containers memory
As the below screenshot shows, make the appropriate settings to reflect each container's memory usage on the server in a graph form.
Choose the data source as influxdb. Grafana will collect the data from influxdb.
In the FROM section select docker. This points the panel at the docker measurement collected by Telegraf.
In the SELECT section choose field(usage_percent) and last(). This will show the latest memory usage percentage of each container within the selected time range.
In the GROUP BY section choose time($__interval), tag(container_name::tag), and fill(null). This will group the series by container and show the corresponding values.
In the FORMAT AS section choose Time series, and set ALIAS to $tag_container_name. This will show the container name as the label below the graph, indicating which line belongs to which container.
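Put together, the panel query is roughly the following. The measurement and field names follow the screenshot; on some Telegraf versions the per-container memory fields are reported under docker_container_mem rather than docker, so adjust the FROM clause if the field dropdown comes up empty:
SELECT last("usage_percent") FROM "docker" WHERE $timeFilter GROUP BY time($__interval), "container_name" fill(null)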
Containers uptime
As the below screenshot shows, make the appropriate settings to reflect each container's uptime on the server in a table form.
Choose the data source as influxdb. Grafana will collect the data from influxdb.
In the FROM section select docker. This points the panel at the docker measurement collected by Telegraf.
In the SELECT section choose field(uptime_ns), last(), and alias(uptime_ns). This will display the uptime of each container currently running on the server.
In the GROUP BY section choose tag(container_name::tag). This will group the results by container to show each container's uptime.
In the FORMAT AS section choose Table to view the details in a tabular form.
In the panel's Overrides settings, create an override, choose Fields with name, and select uptime_ns. Then set Standard options > Unit to nanoseconds (ns). This will show the uptime in nanoseconds.
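The table query behind this panel is roughly the following; as with the memory panel, some Telegraf versions report uptime_ns under docker_container_status rather than docker, so adjust the FROM clause if needed:
SELECT last("uptime_ns") AS "uptime_ns" FROM "docker" WHERE $timeFilter GROUP BY "container_name"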
Final Dashboard
Finally, add all the individual panels (stats, graph, and table) to the dashboard.
Our Dashboard is ready now!!!!!!!!!!!!
Thank you for enjoying my DevOps blog! Your positive response fuels my passion to dive deeper into technology and innovation.
Stay tuned for more captivating DevOps articles, where we'll explore this dynamic field together. Follow me on Hashnode and connect on LinkedIn (https://www.linkedin.com/in/som-shanker-pandey/) for the latest updates and discussions.