Note
This plugin is part of the community.grafana collection (version 1.2.3). You might already have this collection installed if you are using the ansible package. It is not included in ansible-core. To check whether it is installed, run ansible-galaxy collection list. To install it, use: ansible-galaxy collection install community.grafana. To use it in a playbook, specify: community.grafana.grafana_datasource.
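As a minimal sketch (the host names, datasource name, and credentials below are placeholders, not values from this page), a playbook that calls the module by its fully qualified collection name might look like:

```yaml
# A minimal sketch, assuming a reachable Grafana instance and admin credentials;
# every URL, name, and secret here is a placeholder.
- hosts: localhost
  connection: local
  tasks:
    - name: Ensure a Prometheus datasource exists
      community.grafana.grafana_datasource:
        name: "my-prometheus"
        grafana_url: "https://grafana.example.com"
        grafana_user: "admin"
        grafana_password: "{{ grafana_admin_password }}"
        ds_type: "prometheus"
        ds_url: "https://prometheus.example.com:9090"
        access: "proxy"
```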
Parameters

Parameter | Choices/Defaults | Comments |
---|---|---|
access string | | The access mode for this datasource. |
additional_json_data dictionary | Default: {} | Defined data is used for datasource jsonData. Data may be overridden by specifically defined parameters (like zabbix_user). |
additional_secure_json_data dictionary | Default: {} | Defined data is used for datasource secureJsonData. Data may be overridden by specifically defined parameters (like tls_client_cert). Stored as secure data, see enforce_secure_data and notes! |
aws_access_key string | Default: "" | AWS access key for CloudWatch datasource type when aws_auth_type is keys. |
aws_assume_role_arn string | Default: "" | AWS IAM role ARN to assume for CloudWatch datasource type when aws_auth_type is arn. |
aws_auth_type string | | Type for AWS authentication for CloudWatch datasource type (authType of the Grafana API). |
aws_credentials_profile string | Default: "" | Profile for AWS credentials for CloudWatch datasource type when aws_auth_type is credentials. |
aws_custom_metrics_namespaces string | Default: "" | Namespaces of Custom Metrics for CloudWatch datasource type. |
aws_default_region string | | AWS default region for CloudWatch datasource type. |
aws_secret_key string | Default: "" | AWS secret key for CloudWatch datasource type when aws_auth_type is keys. |
basic_auth_password string | | The datasource basic auth password, when basic auth is yes. |
basic_auth_user string | | The datasource basic auth user. Setting this option with basic_auth_password will enable basic auth. |
client_cert path | | PEM formatted certificate chain file to be used for SSL client authentication. This file can also include the key, and if the key is included, client_key is not required. |
client_key path | | PEM formatted file that contains your private key to be used for SSL client authentication. If client_cert contains both the certificate and key, this option is not required. |
database string | | Name of the database for the datasource. This option is required when ds_type is influxdb, elasticsearch (index name), mysql or postgres. |
ds_type string / required | | The type of the datasource. |
ds_url string / required | | The URL of the datasource. |
enforce_secure_data boolean | | Secure data is not updated by default (see notes!). To update secure data you have to enable this option. When enabled, the task will always report changed=True. |
es_version integer | Default: 5 | Elasticsearch version (for ds_type = elasticsearch only). Version 56 is for Elasticsearch 5.6+, where you can specify the max_concurrent_shard_requests option. |
grafana_api_key string | | The Grafana API key. If set, url_username and url_password will be ignored. |
interval string | | For elasticsearch ds_type, this is the index pattern used. |
is_default boolean | | Make this datasource the default one. |
max_concurrent_shard_requests integer | Default: 256 | Starting with Elasticsearch 5.6, you can specify the maximum number of concurrent shard requests. |
name string / required | | The name of the datasource. |
org_id integer | Default: 1 | Grafana organisation ID in which the datasource should be created. Not used when grafana_api_key is set, because the grafana_api_key only belongs to one organisation. |
password string | | The datasource password. For an encrypted password use additional_secure_json_data.password. |
sslmode string | | SSL mode for postgres datasource type. |
state string | | Status of the datasource. |
time_field string | Default: "@timestamp" | Name of the time field in the elasticsearch datasource. For example @timestamp. |
time_interval string | | Minimum group by interval for influxdb or elasticsearch datasources. For example >10s. |
tls_ca_cert string | | The TLS CA certificate for self-signed certificates. Only used when tls_client_cert and tls_client_key are set. Stored as secure data, see enforce_secure_data and notes! |
tls_client_cert string | | The client TLS certificate. If tls_client_cert and tls_client_key are set, this will enable TLS authentication. Starts with ----- BEGIN CERTIFICATE -----. Stored as secure data, see enforce_secure_data and notes! |
tls_client_key string | | The client TLS private key. Starts with ----- BEGIN RSA PRIVATE KEY -----. Stored as secure data, see enforce_secure_data and notes! |
tls_skip_verify boolean | | Skip the TLS datasource certificate verification. |
trends boolean | | Use trends or not for zabbix datasource type. |
tsdb_resolution string | | The opentsdb time resolution. |
tsdb_version integer | Default: 1 | The opentsdb version. Use 1 for <=2.1, 2 for ==2.2, 3 for ==2.3. |
url string / required | | The Grafana URL. aliases: grafana_url |
url_password string | Default: "admin" | The Grafana password for API authentication. aliases: grafana_password |
url_username string | Default: "admin" | The Grafana user for API authentication. aliases: grafana_user |
use_proxy boolean | | If no, it will not use a proxy, even if one is defined in an environment variable on the target hosts. |
user string | | The datasource login user for influxdb datasources. |
validate_certs boolean | | If no, SSL certificates will not be validated. This should only be set to no on personally controlled sites using self-signed certificates. |
with_credentials boolean | | Whether credentials such as cookies or auth headers should be sent with cross-site requests. |
zabbix_password string | | Password for the Zabbix API. |
zabbix_user string | | User for the Zabbix API. |
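The tls_* options interact: setting tls_client_cert and tls_client_key enables TLS authentication, and all three values are stored as secure data. Since their descriptions say the values start with the PEM header (they are strings, not paths), a hedged sketch might read the PEM contents with Ansible's file lookup; hosts, paths, and the graphite type choice below are placeholders, not values from this page:

```yaml
# A sketch only; every host and path below is a placeholder.
- name: Create graphite datasource with TLS client authentication
  community.grafana.grafana_datasource:
    name: "datasource-graphite"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    ds_type: "graphite"
    ds_url: "https://graphite.company.com:8080"
    # the tls_* options take the certificate/key text itself,
    # so read the PEM files with the file lookup
    tls_client_cert: "{{ lookup('file', '/etc/ssl/grafana/client.crt') }}"
    tls_client_key: "{{ lookup('file', '/etc/ssl/grafana/client.key') }}"
    tls_ca_cert: "{{ lookup('file', '/etc/ssl/grafana/ca.pem') }}"
```

Because these values are secure data, subsequent runs will not update them unless enforce_secure_data is enabled (see the note below).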
Note
With enforce_secure_data always reporting changed=True, you might just do one task updating the datasource without any secure data and make a separate playbook/task also changing the secure data. This way it will not break any workflow.

Examples

```yaml
---
- name: Create elasticsearch datasource
  community.grafana.grafana_datasource:
    name: "datasource-elastic"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    org_id: "1"
    ds_type: "elasticsearch"
    ds_url: "https://elastic.company.com:9200"
    database: "[logstash_]YYYY.MM.DD"
    basic_auth_user: "grafana"
    basic_auth_password: "******"
    time_field: "@timestamp"
    time_interval: "1m"
    interval: "Daily"
    es_version: 56
    max_concurrent_shard_requests: 42
    tls_ca_cert: "/etc/ssl/certs/ca.pem"

- name: Create influxdb datasource
  community.grafana.grafana_datasource:
    name: "datasource-influxdb"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    org_id: "1"
    ds_type: "influxdb"
    ds_url: "https://influx.company.com:8086"
    database: "telegraf"
    time_interval: ">10s"
    tls_ca_cert: "/etc/ssl/certs/ca.pem"

- name: Create postgres datasource
  community.grafana.grafana_datasource:
    name: "datasource-postgres"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    org_id: "1"
    ds_type: "postgres"
    ds_url: "postgres.company.com:5432"
    database: "db"
    user: "postgres"
    sslmode: "verify-full"
    additional_json_data:
      postgresVersion: 12
      timescaledb: false
    additional_secure_json_data:
      password: "iampgroot"

- name: Create cloudwatch datasource
  community.grafana.grafana_datasource:
    name: "datasource-cloudwatch"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    org_id: "1"
    ds_type: "cloudwatch"
    ds_url: "http://monitoring.us-west-1.amazonaws.com"
    aws_auth_type: "keys"
    aws_default_region: "us-west-1"
    aws_access_key: "speakFriendAndEnter"
    aws_secret_key: "mel10n"
    aws_custom_metrics_namespaces: "n1,n2"

- name: grafana - add thruk datasource
  community.grafana.grafana_datasource:
    name: "datasource-thruk"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    org_id: "1"
    ds_type: "sni-thruk-datasource"
    ds_url: "https://thruk.company.com/sitename/thruk"
    basic_auth_user: "thruk-user"
    basic_auth_password: "******"

# handle secure data - workflow example
# this will create/update the datasource but won't update the secure data on updates,
# so you can assert that all tasks are changed=False
- name: create prometheus datasource
  community.grafana.grafana_datasource:
    name: openshift_prometheus
    ds_type: prometheus
    ds_url: https://openshift-monitoring.company.com
    access: proxy
    tls_skip_verify: true
    additional_json_data:
      httpHeaderName1: "Authorization"
    additional_secure_json_data:
      httpHeaderValue1: "Bearer ihavenogroot"

# in a separate task or even play you can then force the update
# and assert that each datasource reports changed=True
- name: update prometheus datasource
  community.grafana.grafana_datasource:
    name: openshift_prometheus
    ds_type: prometheus
    ds_url: https://openshift-monitoring.company.com
    access: proxy
    tls_skip_verify: true
    additional_json_data:
      httpHeaderName1: "Authorization"
    additional_secure_json_data:
      httpHeaderValue1: "Bearer ihavenogroot"
    enforce_secure_data: true
```
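The examples above do not exercise every parameter in the table. As a hedged sketch of some remaining options (all hosts, tokens, and credentials are placeholders; the Zabbix plugin type name and the present/absent convention for state are assumptions based on common Grafana/Ansible usage, not confirmed by this page):

```yaml
# A sketch only; every URL, name, and secret below is a placeholder.
- name: Create zabbix datasource, authenticating with a Grafana API key
  community.grafana.grafana_datasource:
    name: "datasource-zabbix"
    grafana_url: "https://grafana.company.com"
    grafana_api_key: "{{ my_grafana_api_key }}"  # url_username/url_password are ignored when set
    ds_type: "alexanderzobnin-zabbix-datasource"  # assumed plugin id of the Zabbix datasource
    ds_url: "https://zabbix.company.com/api_jsonrpc.php"
    zabbix_user: "grafana"
    zabbix_password: "******"
    trends: true

- name: Create opentsdb datasource
  community.grafana.grafana_datasource:
    name: "datasource-opentsdb"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    ds_type: "opentsdb"
    ds_url: "https://opentsdb.company.com:4242"
    tsdb_version: 3  # for OpenTSDB ==2.3
    tsdb_resolution: "second"

- name: Remove a datasource
  community.grafana.grafana_datasource:
    name: "datasource-opentsdb"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    ds_type: "opentsdb"
    ds_url: "https://opentsdb.company.com:4242"
    state: absent  # assumes the usual present/absent convention
```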
Return Values

Common return values are documented here; the following are the fields unique to this module:

Key | Returned | Description |
---|---|---|
datasource dictionary | changed | Datasource created/updated by the module. Sample: {'access': 'proxy', 'basicAuth': False, 'database': 'test_*', 'id': 1035, 'isDefault': False, 'jsonData': {'esVersion': 5, 'timeField': '@timestamp', 'timeInterval': '10s'}, 'name': 'grafana_datasource_test', 'orgId': 1, 'password': '', 'secureJsonFields': {'JustASecureTest': True}, 'type': 'elasticsearch', 'url': 'http://elastic.company.com:9200', 'user': '', 'withCredentials': False} |
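Since the datasource dictionary is only returned when the task reports a change, a hedged sketch of capturing and inspecting it (task names and the result variable are illustrative, not prescribed by the module):

```yaml
- name: Create elasticsearch datasource and capture the result
  community.grafana.grafana_datasource:
    name: "datasource-elastic"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    ds_type: "elasticsearch"
    ds_url: "https://elastic.company.com:9200"
    database: "[logstash_]YYYY.MM.DD"
  register: result

- name: Show the id Grafana assigned to the datasource
  ansible.builtin.debug:
    msg: "Datasource id is {{ result.datasource.id }}"
  when: result is changed  # the datasource key is returned on change only
```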