I wanted to know how COVID-19 is developing in part of South America, and I was also playing with InfluxDB. The result: a dashboard of cases and deaths built with InfluxDB.
I usually use InfluxDB, Chronograf, Grafana, Zabbix, and similar tools to monitor services and systems, but not to process and visualize other kinds of information.
For this case, I built a dashboard of COVID cases and deaths covering Argentina, Bolivia, Brazil, Chile, Paraguay, and Uruguay. The metrics include the 24-hour change in new cases. For reference only, I also added a panel with worldwide cases and deaths.
This is the dashboard.
How did I build this?
The time series data is stored, and the dashboard created, in InfluxDB, but the data itself is fetched from this API.
Basically, what I did was build .sh files that run a curl command against the API URL for a specific location and pretty-print the JSON output. For example, for Uruguay, this is the full "command":
curl -s https://coronavirus-tracker-api.herokuapp.com/v2/locations/224 | json_pp
As I said before, this command is saved in an executable file, and then I used Telegraf's exec input to bring the result of that command into InfluxDB.
```toml
[[inputs.exec]]
  ## Commands array
  commands = [
    "sh /Users/nacho/docker/influxdb2.0-covid/uruguay.sh"
  ]

  ## Timeout for each command to complete.
  timeout = "30s"

  ## measurement name suffix (for separating different commands)
  name_suffix = "_uruguay"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "json"
```
Tip: it is important to specify name_suffix because, by default, Telegraf will save the data with the suffix "_exeCollector", and when you need to process data from several "sources" it is important to keep them identifiable.
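For example, with one [[inputs.exec]] block per country, each country lands in its own measurement. This is an illustrative fragment (the argentina.sh path is assumed by analogy with the Uruguay script), but the resulting measurement names match the ones used in the queries below:

```toml
# One exec input per country, distinguished only by name_suffix.
[[inputs.exec]]
  commands = ["sh /Users/nacho/docker/influxdb2.0-covid/uruguay.sh"]
  timeout = "30s"
  name_suffix = "_uruguay"    # data lands in measurement "exec_uruguay"
  data_format = "json"

[[inputs.exec]]
  commands = ["sh /Users/nacho/docker/influxdb2.0-covid/argentina.sh"]
  timeout = "30s"
  name_suffix = "_argentina"  # data lands in measurement "exec_argentina"
  data_format = "json"
```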
Once I had configured an "inputs.exec" block for each country, I ran Telegraf and waited until the data started to appear in InfluxDB.
In InfluxDB I used the "Data Explorer" feature a lot: for exploring data, of course, but also to try out queries. Keeping Uruguay as the example, here is the query to get the latest number of confirmed cases:
from(bucket: "covid") |> range(start: v.timeRangeStart, stop: v.timeRangeStop) |> filter(fn: (r) => r["_measurement"] == "exec_uruguay") |> filter(fn: (r) => r["_field"] == "location_latest_confirmed")
I chose the "Single Stat" visualization and, boom, the total number of confirmed cases appeared on the screen.
I also wanted to know how many cases were reported in the last 24 hours, so I built the following query:
from(bucket: "covid") |> range(start: v.timeRangeStart, stop: v.timeRangeStop) |> filter(fn: (r) => r["_measurement"] == "exec_uruguay") |> filter(fn: (r) => r["_field"] == "location_latest_confirmed") |> increase() |> yield(name: "increase")
As before, I used the "Single Stat" visualization.
But I also wanted to know the sum of the cases (and deaths) across the six countries. For that, I built this query: it takes the mean of each country's series over the last minute, merges the six resulting tables with group(), and adds them up with sum():
from(bucket: "covid") |> range(start: -1m) |> filter(fn: (r) => r["_measurement"] == "exec_argentina" or r["_measurement"] == "exec_uruguay" or r["_measurement"] == "exec_bolivia" or r["_measurement"] == "exec_paraguay" or r["_measurement"] == "exec_chile" or r["_measurement"] == "exec_brasil") |> filter(fn: (r) => r["_field"] == "location_latest_confirmed") |> mean() |> group() |> sum()
As you can see, it is not complicated to exploit data other than IT monitoring metrics. It takes a little time to understand how Flux works, but the result is worth it.
Here you can download this dashboard, along with the Telegraf configuration and the executable files, to build this same dashboard or customize it with your own country's data.
If you want to know how to monitor Linux with InfluxDB 2.0 Beta (deployment included), take a look at this article (in Spanish):
If you're not ready to try InfluxDB v2, you can use InfluxDB v1.8, Chronograf, and Kapacitor (the TICK Suite, for friends).
Let me know on social media if you're using this dashboard and how, or if you run into any issues. I'm happy to help.