
  • Learn Grafana 7.0
  • Eric Salituro

Installing the Prometheus server

Our first task is to get the Prometheus server up and running so that we can start serving real data. Prometheus is a powerful open source time-series database and monitoring system originally developed at SoundCloud. After Kubernetes, it became the second project to graduate from the Cloud Native Computing Foundation's incubation process. Grafana, having partnered with the Prometheus maintainers, includes the Prometheus data source as a first-class data source plugin.

Tutorial code, dashboards, and other helpful files for this chapter can be found in this book's GitHub repository at https://github.com/PacktPublishing/Learn-Grafana-7.0/tree/master/Chapter04.

Installing Prometheus from Docker

We're going to start up Prometheus from Docker Compose and point it to a local configuration file. First, let's create the following configuration file and save it to our local ch4/prometheus directory as prometheus.yml:

global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:9090']
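To make the override behavior concrete, here is a small Python sketch (not Prometheus code; the function names are illustrative) of how the effective scrape interval is resolved: a job-level scrape_interval wins over the global default:

```python
# Illustrative sketch: resolve the effective scrape interval for a job,
# mirroring how a job-level setting overrides the global default.

def parse_duration(value: str) -> int:
    """Convert a simple Prometheus duration string like '15s' or '1m' to seconds."""
    units = {"s": 1, "m": 60, "h": 3600}
    return int(value[:-1]) * units[value[-1]]

def effective_interval(global_config: dict, job_config: dict) -> int:
    """Job-level scrape_interval takes precedence over the global one."""
    raw = job_config.get("scrape_interval", global_config["scrape_interval"])
    return parse_duration(raw)

global_config = {"scrape_interval": "15s"}
prometheus_job = {"job_name": "prometheus", "scrape_interval": "5s"}
other_job = {"job_name": "node"}  # no override: falls back to the global 15s

print(effective_interval(global_config, prometheus_job))  # 5
print(effective_interval(global_config, other_job))       # 15
```

In our configuration file, the prometheus job sets its own 5-second interval, so the global 15-second default only applies to jobs that don't override it.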

It is beyond the scope of this book to give fully detailed information on the Prometheus configuration file format. You can go to https://prometheus.io/docs/prometheus/latest/configuration/configuration to find out more. This relatively simple configuration file is designed to do a couple of things:

  1. Establish a default scrape interval. This determines how often Prometheus will scrape, or pull, data from the metrics endpoint—in this case, every 15 seconds.
  2. Set up the configuration for a job called prometheus that will scrape itself every 5 seconds. The target server is located at localhost:9090.
Next, create a docker-compose.yml file (this file can also be downloaded from this book's GitHub repository):
version: '3'
services:
  grafana:
    image: "grafana/grafana:${GRAF_TAG-latest}"
    ports:
      - "3000:3000"
    volumes:
      - "${PWD-.}/grafana:/var/lib/grafana"
       
  prometheus:
    image: "prom/prometheus:${PROM_TAG-latest}"
    ports:
      - "9090:9090"
    volumes:
      - "${PWD-.}/prometheus:/etc/prometheus"

The preceding Docker Compose file does the following:

  • Starts up a Grafana container and exposes its default port at 3000.
  • Maps the $PWD/grafana local directory to /var/lib/grafana in the grafana container so that Grafana's dashboards and settings persist across container restarts.
  • Starts up a Prometheus container and exposes its default port at 9090.
  • Maps the $PWD/prometheus local directory to /etc/prometheus in the prometheus container. This is so that we can manage the Prometheus configuration file from outside the container. $PWD is a shell variable describing the working directory.

Start up both containers with the following command:

docker-compose up -d

The docker-compose command will start up both containers in their own network so that both Grafana and Prometheus containers can contact each other. If you are successful, you should see something similar to the following output lines:

Starting ch4_prometheus_1 ... done
Starting ch4_grafana_1 ... done

To confirm Prometheus is running correctly, open a web browser page and enter http://localhost:9090/targets. You will see a screen as in the following screenshot:
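The /targets page is backed by Prometheus's JSON API at /api/v1/targets, which is handy for scripted health checks. The sketch below parses a trimmed-down sample of that response (the payload here is illustrative, not captured from a live server) and reports each target's health:

```python
import json

# Trimmed-down sample of a /api/v1/targets response (illustrative payload).
sample_response = """
{
  "status": "success",
  "data": {
    "activeTargets": [
      {
        "labels": {"instance": "localhost:9090", "job": "prometheus"},
        "scrapeUrl": "http://localhost:9090/metrics",
        "health": "up"
      }
    ]
  }
}
"""

def target_health(raw: str) -> dict:
    """Map each instance label to its reported health ('up' or 'down')."""
    body = json.loads(raw)
    return {
        t["labels"]["instance"]: t["health"]
        for t in body["data"]["activeTargets"]
    }

print(target_health(sample_response))  # {'localhost:9090': 'up'}
```

Against the running container, fetching http://localhost:9090/api/v1/targets would return a response of this shape, showing the prometheus job scraping itself.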

Now that we have the Grafana and Prometheus servers running, let's move on to creating a Prometheus data source.

Configuring the Prometheus data source

From our docker-compose.yml file, we know that the Prometheus server host will be localhost, the port is 9090, and our scrape interval is 5 seconds. So, let's configure a new Prometheus data source:

  1. From the left sidebar, go to Configuration | Data Sources.
  2. Add a new Prometheus data source and fill in the following information:
  • Name: Prometheus
  • URL: http://localhost:9090
  • Access: Browser
  3. Click on Save & Test.
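As an alternative to clicking through the UI, Grafana can also provision the same data source from a YAML file placed under /etc/grafana/provisioning/datasources inside the container. A sketch of such a file (here, access: direct is the provisioning-file equivalent of the Browser access mode selected in the UI):

```yaml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    url: http://localhost:9090
    access: direct   # 'direct' corresponds to 'Browser' access in the UI
```

Provisioning is useful when you want the data source to be created automatically every time the Grafana container starts.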

If everything worked correctly, you should now have a new data source, as in the following screenshot:

Now that we have a working data source, let's take a look at the data we're capturing in Prometheus.
