Fluent-logging
===============

The fluent-logging chart in openstack-helm-infra provides the base for a
centralized logging platform for OpenStack-Helm. The chart combines two
services, Fluentbit and Fluentd, to gather logs generated by the services,
filter on or add metadata to logged events, then forward them to Elasticsearch
for indexing.

Fluentbit
---------

Fluentbit runs as a log-collecting component on each host in the cluster, and
can be configured to target specific log locations on the host. The Fluentbit_
configuration schema can be found on the official Fluentbit website.

.. _Fluentbit: http://fluentbit.io/documentation/0.12/configuration/schema.html

Fluentbit provides a set of plug-ins for ingesting and filtering various log
types. These plug-ins include:

- Tail: Tails a defined file for logged events
- Kube: Adds Kubernetes metadata to a logged event
- Systemd: Provides the ability to collect logs from the journald daemon
- Syslog: Provides the ability to collect logs from a Unix socket (TCP or UDP)

The complete list of plugins can be found in the configuration_ section of the
Fluentbit documentation.

.. _configuration: http://fluentbit.io/documentation/current/configuration/

Fluentbit uses parsers to turn unstructured log entries into structured entries
to make processing and filtering events easier. The two formats supported are
JSON maps and regular expressions. More information about Fluentbit's parsing
abilities can be found in the parsers_ section of Fluentbit's documentation.

.. _parsers: http://fluentbit.io/documentation/current/parser/

Fluentbit's service and parser configurations are defined via the values.yaml
file, which allows for custom definitions of inputs, filters and outputs for
your logging needs. Fluentbit's configuration can be found under the following
key:

::

    conf:
      fluentbit:
        - service:
            header: service
            Flush: 1
            Daemon: Off
            Log_Level: info
            Parsers_File: parsers.conf
        - containers_tail:
            header: input
            Name: tail
            Tag: kube.*
            Path: /var/log/containers/*.log
            Parser: docker
            DB: /var/log/flb_kube.db
            Mem_Buf_Limit: 5MB
        - kube_filter:
            header: filter
            Name: kubernetes
            Match: kube.*
            Merge_JSON_Log: On
        - fluentd_output:
            header: output
            Name: forward
            Match: "*"
            Host: ${FLUENTD_HOST}
            Port: ${FLUENTD_PORT}

Fluentbit is configured by default to capture logs at the info log level. To
change this, override the Log_Level key with the appropriate level; the
available levels are documented in Fluentbit's configuration_.
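
For example, the service section shown above could be overridden as follows to
run Fluentbit at the debug level (a sketch; only the Log_Level value differs
from the defaults shown above):

::

    conf:
      fluentbit:
        - service:
            header: service
            Flush: 1
            Daemon: Off
            Log_Level: debug
            Parsers_File: parsers.conf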

Fluentbit's parser configuration can be found under the following key:

::

    conf:
      parsers:
        - docker:
            header: parser
            Name: docker
            Format: json
            Time_Key: time
            Time_Format: "%Y-%m-%dT%H:%M:%S.%L"
            Time_Keep: On

The values for the fluentbit and parsers keys are consumed by a fluent-logging
helper template that produces the appropriate configurations for the relevant
sections. Each list item (keys prefixed with a '-') represents a section in the
configuration files, and the arbitrary name of the list item should represent a
logical description of the section defined. The header key represents the type
of definition (filter, input, output, service or parser), and the remaining
entries will be rendered as space delimited configuration keys and values. For
example, the definitions above would result in the following:

::

    [SERVICE]
        Daemon false
        Flush 1
        Log_Level info
        Parsers_File parsers.conf
    [INPUT]
        DB /var/log/flb_kube.db
        Mem_Buf_Limit 5MB
        Name tail
        Parser docker
        Path /var/log/containers/*.log
        Tag kube.*
    [FILTER]
        Match kube.*
        Merge_JSON_Log true
        Name kubernetes
    [OUTPUT]
        Host ${FLUENTD_HOST}
        Match *
        Name forward
        Port ${FLUENTD_PORT}
    [PARSER]
        Format json
        Name docker
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep true
        Time_Key time
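
Additional sections can be defined in the same way. As a hypothetical sketch
(the section names, file path, tag, and regular expression below are
illustrative only and are not part of the chart's defaults), a new tail input
paired with a regex parser could be defined alongside the existing entries:

::

    conf:
      fluentbit:
        - myapp_tail:
            header: input
            Name: tail
            Tag: myapp.*
            Path: /var/log/myapp/*.log
            Parser: myapp_line
      parsers:
        - myapp_line:
            header: parser
            Name: myapp_line
            Format: regex
            Regex: '^(?<time>[^ ]+) (?<severity>[^ ]+) (?<message>.*)$'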

Fluentd
-------

Fluentd runs as a forwarding service that receives event entries from Fluentbit
and routes them to the appropriate destination. By default, Fluentd will route
all entries received from Fluentbit to Elasticsearch for indexing. The
Fluentd_ configuration schema can be found at the official Fluentd website.

.. _Fluentd: https://docs.fluentd.org/v0.12/articles/config-file

Fluentd's configuration is handled in the values.yaml file in fluent-logging.
Similar to Fluentbit, configuration overrides provide flexibility in defining
custom routes for tagged log events. The configuration can be found under the
following key:

::

    conf:
      fluentd:
        - fluentbit_forward:
            header: source
            type: forward
            port: "#{ENV['FLUENTD_PORT']}"
            bind: 0.0.0.0
        - elasticsearch:
            header: match
            type: elasticsearch
            expression: "**"
            include_tag_key: true
            host: "#{ENV['ELASTICSEARCH_HOST']}"
            port: "#{ENV['ELASTICSEARCH_PORT']}"
            logstash_format: true
            buffer_chunk_limit: 10M
            buffer_queue_limit: 32
            flush_interval: "20"
            max_retry_wait: 300
            disable_retry_limit: ""

The values for the fluentd keys are consumed by a fluent-logging helper template
that produces the appropriate configuration for each directive desired. The list
items (keys prefixed with a '-') represent sections in the configuration file,
and the name of each list item should represent a logical description of the
section defined. The header key represents the type of directive defined (for
example: source, match or filter), the type key names the fluentd plug-in used,
and the expression key is used when the directive requires a pattern to match
against (for example: the tag pattern a match directive applies to). The
remaining entries will be rendered as space delimited configuration keys and
values. For example, the definition above would result in the following:

::

    <source>
      bind 0.0.0.0
      port "#{ENV['FLUENTD_PORT']}"
      @type forward
    </source>
    <match **>
      buffer_chunk_limit 10M
      buffer_queue_limit 32
      disable_retry_limit
      flush_interval 20s
      host "#{ENV['ELASTICSEARCH_HOST']}"
      include_tag_key true
      logstash_format true
      max_retry_wait 300
      port "#{ENV['ELASTICSEARCH_PORT']}"
      @type elasticsearch
    </match>
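
Further routes follow the same convention. As a hypothetical sketch (the
section name and tag pattern below are illustrative only), an additional match
directive could be defined to echo a given tag to fluentd's standard output
plug-in, which can be useful when debugging routing:

::

    conf:
      fluentd:
        - debug_stdout:
            header: match
            type: stdout
            expression: "myapp.debug.**"

Note that fluentd evaluates match directives in order, so a more specific match
such as this one must appear before the catch-all elasticsearch match shown
above in order to receive any events.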

Some fluentd plug-ins require nested definitions. The fluentd helper template
can handle these definitions with the following structure:

::

    conf:
      fluentd:
        - fluentbit_forward:
            header: source
            type: forward
            port: "#{ENV['FLUENTD_PORT']}"
            bind: 0.0.0.0
        - log_transformer:
            header: filter
            type: record_transformer
            expression: "foo.bar"
            inner_def:
              - record_transformer:
                  header: record
                  hostname: my_host
                  tag: my_tag

In this example, the inner_def list will generate a nested configuration entry
in the log_transformer section. The nested definitions are handled by supplying
a list as the value for an arbitrary key, and the list value will indicate the
entry should be handled as a nested definition. The helper template will render
the above example key/value pairs as the following:

::

    <source>
      bind 0.0.0.0
      port "#{ENV['FLUENTD_PORT']}"
      @type forward
    </source>
    <filter foo.bar>
      <record>
        hostname my_host
        tag my_tag
      </record>
      @type record_transformer
    </filter>

Fluentd Exporter
----------------

The fluent-logging chart contains templates for an exporter to provide metrics
for Fluentd. These metrics provide insight into Fluentd's performance. Please
note that monitoring for Fluentd is disabled by default, and must be enabled
with the following override:

::

    monitoring:
      prometheus:
        enabled: true

The Fluentd exporter uses the same service annotations as the other exporters,
and no additional configuration is required for Prometheus to target the
Fluentd exporter for scraping. The Fluentd exporter is configured with command
line flags, and the flags' default values can be found under the following key
in the values.yaml file:

::

    conf:
      fluentd_exporter:
        log:
          format: "logger:stdout?json=true"
          level: "info"

The configuration keys configure the following behaviors:

- log.format: Defines the logger used and the format of its output
- log.level: Defines the log level for the exporter to use
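
For example, the following override (a sketch; it assumes dropping the
json=true parameter yields plain-text output) would switch the exporter to
plain-text logging at the debug level:

::

    conf:
      fluentd_exporter:
        log:
          format: "logger:stdout"
          level: "debug"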

More information about the Fluentd exporter can be found on the exporter's
GitHub_ page.

.. _GitHub: https://github.com/V3ckt0r/fluentd_exporter