How to build an InfluxDB v2 template

InfluxDB Jun 19, 2020

For the past month, I've been actively contributing templates for InfluxDB v2. In this article, we're going to build, together, a template to monitor PostgreSQL.

You can use this tutorial as a guide to create other templates and start contributing to the project.

influxdata/community-templates
A collection of InfluxDB Templates provided by the Influx community - influxdata/community-templates

Tip: When I mention templates, I don't mean only the dashboard where you visualize data; a template also includes other elements such as labels, buckets (where you store your data), and the Telegraf configuration.

Let's get started

When I want to start a new template, one of the first things I do is look at Telegraf's input plugins and take ideas from there. On this occasion, I chose to build a PostgreSQL template.

As I mentioned, we're going to build a PostgreSQL template using the Telegraf input plugin for that database; you can find the plugin in the Telegraf GitHub repository.

influxdata/telegraf
The plugin-driven server agent for collecting & reporting metrics. - influxdata/telegraf

Of course, another requirement for building templates is a running InfluxDB v2 instance. For example, I keep a dedicated Docker container just for experimenting and building new templates.

You can read about how to start a Docker instance of InfluxDB in this article (in Spanish):

InfluxDBWeek: Cómo Monitorear Linux con InfluxDB 2.0 Beta.
In this article I show you how to deploy InfluxDB 2.0 Beta in Docker; we also look at how to monitor a Linux system.

Learning how the metrics are exposed, and setting up InfluxDB v2 and Telegraf

One of the main tasks when creating a new template is understanding how the software exposes its metrics. In this case, I dug into the PostgreSQL documentation, which makes it simple to understand and start monitoring the database. The Telegraf input plugin for this scenario is also easy to follow.

The next thing I do is create a "bucket" in InfluxDB. I head over to my instance, navigate to "Load Data / Buckets", and create a bucket called 'postgres'. I also attach a label with the same name (postgres) to that resource.

Next on the list is building the Telegraf configuration. You can take the output section from the InfluxDB GUI. For me, the configuration with the PostgreSQL plugin looks like this:

 [[outputs.influxdb_v2]]
  ## The URLs of the InfluxDB cluster nodes.
  ##
  ## Multiple URLs can be specified for a single cluster, only ONE of the
  ## urls will be written to each interval.
  ## urls exp: http://127.0.0.1:9999
  urls = ["http://localhost:9999"]

  ## Token for authentication.
  token = "$INFLUX_TOKEN"

  ## Organization is the name of the organization you wish to write to; must exist.
  organization = "$INFLUX_ORG"

  ## Destination bucket to write into.
  bucket = "postgres"

  [agent]
  interval = "1m"

 [[inputs.postgresql]]
  address = "postgres://postgres:mysecretpassword@localhost:5432"
  ignored_databases = ["template0", "template1"]

Before running Telegraf with this config file, we need to pass in some data as environment variables, in this case the token and the organization. You can do it like this:

export INFLUX_TOKEN='your-token'
export INFLUX_ORG='your-organization'
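Telegraf resolves the `$INFLUX_TOKEN` and `$INFLUX_ORG` placeholders in the config from these environment variables at startup. As a rough illustration of the mechanism (Telegraf does this internally; this is just a sketch), Python's `os.path.expandvars` performs the same kind of substitution:

```python
import os

# Simulate the exported environment variables.
os.environ["INFLUX_TOKEN"] = "your-token"
os.environ["INFLUX_ORG"] = "your-organization"

# Telegraf replaces $VAR placeholders in its config with values from the
# environment; os.path.expandvars does the equivalent for this sketch.
config_line = 'token = "$INFLUX_TOKEN"'
print(os.path.expandvars(config_line))  # token = "your-token"
```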

Once this is done, we're ready to run Telegraf.

Start monitoring our PostgreSQL

Everything is set, so let's start monitoring our database by running Telegraf with debug mode on, to see the details of what's going on.

telegraf --config psql.conf --debug

The result is something like this:

2020-06-12T14:38:35Z I! Starting Telegraf 1.14.3
2020-06-12T14:38:35Z I! Loaded inputs: postgresql
2020-06-12T14:38:35Z I! Loaded aggregators: 
2020-06-12T14:38:35Z I! Loaded processors: 
2020-06-12T14:38:35Z I! Loaded outputs: influxdb_v2
2020-06-12T14:38:35Z I! Tags enabled: host=thelab
2020-06-12T14:38:35Z I! [agent] Config: Interval:1m0s, Quiet:false, Hostname:"thelab", Flush Interval:10s
2020-06-12T14:38:35Z D! [agent] Initializing plugins
2020-06-12T14:38:35Z D! [agent] Connecting outputs
2020-06-12T14:38:35Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2020-06-12T14:38:35Z D! [agent] Successfully connected to outputs.influxdb_v2
2020-06-12T14:38:35Z D! [agent] Starting service inputs
2020-06-12T14:38:50Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2020-06-12T14:39:00Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2020-06-12T14:39:10Z D! [outputs.influxdb_v2] Wrote batch of 2 metrics in 145.099172ms

As you can see, Telegraf is running and has started to send metrics to InfluxDB. After a few minutes, you can go to the InfluxDB GUI, specifically the "Data Explorer", pick the "postgres" bucket, and check whether the measurements and fields are there. In my case, everything went smoothly and the data showed up.

I want to visualize that data baby!

We're ready to create our dashboard and consume the data. The first thing I do is go to "Dashboards" and click "Create Dashboard". After that, I name it and assign a label, the same one we created before, remember? The 'postgres' label.

The next thing I do is go back to the documentation and start playing with the data I already have in InfluxDB. I also take a look at dashboards built for other tools, for example Grafana, and take ideas from there. After a while of playing and testing, the empty dashboard turned into this:

This step is very important, because we need to create a dashboard with data that is helpful to the community. If you just graph "random" data without any order or sense, the dashboard isn't going to be used.

Now, on to another fun part: let's pack this template for shipping.

Packing the template

When I talk about packing or exporting the template, I don't mean only the dashboard; it also includes the bucket, the Telegraf configuration, and the labels.

At this point, we have our dashboard, label, and bucket in InfluxDB, but the Telegraf configuration is in a separate file, and we need to merge everything into one awesome "yml" file.

I run InfluxDB in a Docker container, so I need to run the export command like this. Remember the label we assigned to the resources? This is where it pays off, because it lets us filter the export by label.

docker exec -it 4d410b0f82ba influx pkg export all --filter labelName=postgres -f postgres.yml -o $INFLUX_ORG -t $INFLUX_TOKEN

After that, I pull the file out of the container onto my machine, to start editing it and merge in the Telegraf configuration:

docker cp 4d410b0f82ba:/postgres.yml .

The file I just exported looks like this (this is an extract):

apiVersion: influxdata.com/v2alpha1
kind: Label
metadata:
    name: flamboyant-dubinsky-332001
spec:
    color: '#F95F53'
    name: postgres
---
apiVersion: influxdata.com/v2alpha1
kind: Bucket
metadata:
    name: vivid-heisenberg-732003
spec:
    associations:
      - kind: Label
        name: flamboyant-dubinsky-332001
    name: postgres
---
apiVersion: influxdata.com/v2alpha1
kind: Dashboard
metadata:
    name: jovial-montalcini-f32001
spec:
    associations:
      - kind: Label
        name: flamboyant-dubinsky-332001
    charts:
      - colors:
          - hex: '#00C9FF'
            name: laser
            type: min
          - hex: '#9394FF'
            name: comet
            type: max
            value: 100
        decimalPlaces: 2
        height: 3
        kind: Gauge
        name: Current IOwait
        queries:
          - query: |-
                from(bucket: "postgres")
                  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
                  |> filter(fn: (r) => r["_measurement"] == "cpu")
                  |> filter(fn: (r) => r["_field"] == "usage_iowait")
...

At the end of the file, we're going to add the Telegraf configuration with some tweaks. Remember, this is a YAML file, so indentation is very important.
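Since every line of the Telegraf config has to sit at the right depth under `config: |`, one way to avoid indentation mistakes is to indent the whole config programmatically before pasting it in. A minimal sketch (the config fragment here is just illustrative):

```python
import textwrap

# A fragment of the Telegraf config we want to embed (illustrative).
telegraf_conf = """\
[[inputs.postgresql]]
  address = "$PSQL_STRING_CONNECTION"
  ignored_databases = ["template0", "template1"]
"""

# Wrap it in a Telegraf resource document: the config body must be
# indented 8 spaces to sit under "config: |" in the exported YAML.
resource = (
    "---\n"
    "apiVersion: influxdata.com/v2alpha1\n"
    "kind: Telegraf\n"
    "metadata:\n"
    "    name: postgres-config\n"
    "spec:\n"
    "    config: |\n"
    + textwrap.indent(telegraf_conf, " " * 8)
)
print(resource)
```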

The Telegraf config section looks like this:

...
---

apiVersion: influxdata.com/v2alpha1
kind: Telegraf
metadata:
    name: postgres-config
spec:
    config: |

        [[outputs.influxdb_v2]]
          ## The URLs of the InfluxDB cluster nodes.
          ##
          ## Multiple URLs can be specified for a single cluster, only ONE of the
          ## urls will be written to each interval.
          ## urls exp: http://127.0.0.1:9999
          urls = ["$INFLUX_HOST"]
        
          ## Token for authentication.
          token = "$INFLUX_TOKEN"
        
          ## Organization is the name of the organization you wish to write to; must exist.
          organization = "$INFLUX_ORG"
        
          ## Destination bucket to write into.
          bucket = "postgres"
              
          [agent]
          interval = "1m"
          
        [[inputs.postgresql]]
          # address = "postgres://postgres:mysecretpassword@localhost:5432"
          address = "$PSQL_STRING_CONNECTION"
          ignored_databases = ["template0", "template1"]
        
        [[inputs.cpu]]
          ## Whether to report per-cpu stats or not
          percpu = true
          ## Whether to report total system cpu stats or not
          totalcpu = true
          ## If true, collect raw CPU time metrics.
          collect_cpu_time = false
          ## If true, compute and report the sum of all non-idle CPU states.
          report_active = false
        
        [[inputs.disk]]
          ## By default stats will be gathered for all mount points.
          ## Set mount_points will restrict the stats to only the specified mount points.
          # mount_points = ["/"]
        
          ## Ignore mount points by filesystem type.
          ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
        
        # Read metrics about disk IO by device
        [[inputs.diskio]]
          ## By default, telegraf will gather stats for all devices including
          ## disk partitions.
          ## Setting devices will restrict the stats to the specified devices.
          # devices = ["sda", "sdb"]
          ## Uncomment the following line if you need disk serial numbers.
          # skip_serial_number = false
          #
          ## On systems which support it, device metadata can be added in the form of
          ## tags.
          ## Currently only Linux is supported via udev properties. You can view
          ## available properties for a device by running:
          ## 'udevadm info -q property -n /dev/sda'
          ## Note: Most, but not all, udev properties can be accessed this way. Properties
          ## that are currently inaccessible include DEVTYPE, DEVNAME, and DEVPATH.
          # device_tags = ["ID_FS_TYPE", "ID_FS_USAGE"]
          #
          ## Using the same metadata source as device_tags, you can also customize the
          ## name of the device via templates.
          ## The 'name_templates' parameter is a list of templates to try and apply to
          ## the device. The template may contain variables in the form of '$PROPERTY' or
          ## '${PROPERTY}'. The first template which does not contain any variables not
          ## present for the device is used as the device name tag.
          ## The typical use case is for LVM volumes, to get the VG/LV name instead of
          ## the near-meaningless DM-0 name.
          # name_templates = ["$ID_FS_LABEL","$DM_VG_NAME/$DM_LV_NAME"]
        
        # Read metrics about memory usage
        [[inputs.mem]]
          # no configuration

Note that I converted the PostgreSQL connection string into a variable, so the credentials aren't hardcoded and can be supplied through an environment variable:

$ export PSQL_STRING_CONNECTION=postgres://postgres:mysecretpassword@localhost:5432
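Just to show why hardcoding this string would be a bad idea: the DSN embeds the username and password directly, which Python's standard `urlsplit` can pick apart (a sketch using the example credentials from above):

```python
import os
from urllib.parse import urlsplit

# The same DSN we export before running Telegraf (example credentials).
os.environ["PSQL_STRING_CONNECTION"] = "postgres://postgres:mysecretpassword@localhost:5432"

parts = urlsplit(os.environ["PSQL_STRING_CONNECTION"])
# The credentials live right inside the URL, which is why the template
# should carry only the $PSQL_STRING_CONNECTION placeholder, never the value.
print(parts.username, parts.password, parts.hostname, parts.port)
# postgres mysecretpassword localhost 5432
```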

I saved the file, and we're ready to test whether the import goes well and see if our template is ready to ship. First, I copy the new "yml" file into the container:

docker cp postgres.yml 4d410b0f82ba:/

And then I run the import process:

docker exec -it 4d410b0f82ba influx pkg -f postgres.yml -o $INFLUX_ORG -t $INFLUX_TOKEN

If everything is set, the output in our terminal should look like this:

LABELS    +add | -remove | unchanged
+-----+----------------------------+------------------+---------------+---------+-------------+
| +/- |        PACKAGE NAME        |        ID        | RESOURCE NAME |  COLOR  | DESCRIPTION |
+-----+----------------------------+------------------+---------------+---------+-------------+
|     | flamboyant-dubinsky-332001 | 05d639e9629a9000 | postgres      | #F95F53 |             |
+-----+----------------------------+------------------+---------------+---------+-------------+
|                                                                        TOTAL  |      1      |
+-----+----------------------------+------------------+---------------+---------+-------------+

BUCKETS    +add | -remove | unchanged
+-----+-------------------------+------------------+---------------+------------------+-------------+
| +/- |      PACKAGE NAME       |        ID        | RESOURCE NAME | RETENTION PERIOD | DESCRIPTION |
+-----+-------------------------+------------------+---------------+------------------+-------------+
|     | vivid-heisenberg-732003 | 05d639b5fdf32000 | postgres      | 0s               |             |
+-----+-------------------------+------------------+---------------+------------------+-------------+
|                                                                         TOTAL       |      1      |
+-----+-------------------------+------------------+---------------+------------------+-------------+

DASHBOARDS    +add | -remove | unchanged
+-----+--------------------------+----+---------------+-------------+------------+
| +/- |       PACKAGE NAME       | ID | RESOURCE NAME | DESCRIPTION | NUM CHARTS |
+-----+--------------------------+----+---------------+-------------+------------+
| +   | jovial-montalcini-f32001 |    | Postgres      |             | 17         |
+-----+--------------------------+----+---------------+-------------+------------+
|                                                          TOTAL    |     1      |
+-----+--------------------------+----+---------------+-------------+------------+

TELEGRAF CONFIGURATIONS    +add | -remove | unchanged
+-----+-----------------+----+-----------------+-------------+
| +/- |  PACKAGE NAME   | ID |  RESOURCE NAME  | DESCRIPTION |
+-----+-----------------+----+-----------------+-------------+
| +   | postgres-config |    | postgres-config |             |
+-----+-----------------+----+-----------------+-------------+
|                                   TOTAL      |      1      |
+-----+-----------------+----+-----------------+-------------+

LABEL ASSOCIATIONS    +add | -remove | unchanged
+-----+---------------+--------------------------+---------------+------------------+----------------------------+------------+------------------+
| +/- | RESOURCE TYPE |  RESOURCE PACKAGE NAME   | RESOURCE NAME |   RESOURCE ID    |     LABEL PACKAGE NAME     | LABEL NAME |     LABEL ID     |
+-----+---------------+--------------------------+---------------+------------------+----------------------------+------------+------------------+
|     | buckets       | vivid-heisenberg-732003  | postgres      | 05d639b5fdf32000 | flamboyant-dubinsky-332001 | postgres   | 05d639e9629a9000 |
+-----+---------------+--------------------------+---------------+------------------+----------------------------+------------+------------------+
+-----+---------------+--------------------------+---------------+------------------+----------------------------+------------+------------------+
| +   | dashboards    | jovial-montalcini-f32001 | Postgres      |                  | flamboyant-dubinsky-332001 | postgres   | 05d639e9629a9000 |
+-----+---------------+--------------------------+---------------+------------------+----------------------------+------------+------------------+
|                                                                                                                    TOTAL    |        2         |
+-----+---------------+--------------------------+---------------+------------------+----------------------------+------------+------------------+

As you can see, the import process runs some checks and recognizes our template. We have our PostgreSQL template ready :D

Conclusion

I find building templates fun and interesting, and I'm happy to share my process for making one. I hope this article was helpful, and I can't wait to see what the community builds.

If you want to contribute to Community Templates, take a look at the GitHub repo and its contribution guidelines. If I can help you with anything, you can find me on social media and in the InfluxDB Slack.

influxdata/community-templates
A collection of InfluxDB Templates provided by the Influx community - influxdata/community-templates

Ignacio Van Droogenbroeck

Hi. I'm always learning new things and sharing about them on this blog. This is my first experience writing in English; if you find an error, leave me a comment and help me improve my English.
