
Kibana(1/2)


The previous posts already covered:
FluentD(1/2)
FluentD(2/2)
Elasticsearch
The last mile is visualization, and that is where Kibana comes in.

Dockerfile



FROM kibana:7.2.0
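
If you prefer to bake the config into the image rather than bind-mount it at run time (as the Docker run script below does), a minimal sketch, assuming your kibana.yml sits next to the Dockerfile:

FROM kibana:7.2.0
# Copy a custom kibana.yml into the image's default config location
COPY kibana.yml /usr/share/kibana/config/kibana.yml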


Build Image Shell Script


docker build -t mykibana .


kibana.yml Sample


# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://${ES_HOST}:${PORT}"]

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.ssl.verification_mode in Elasticsearch is set to either certificate or full.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en (default), Chinese - zh-CN.
#i18n.locale: "en"

Docker run Shell Script


# Important! First copy your own kibana.yml to /opt/kibana/kibana.yml
docker run -it -d --name mykibana -p 5601:5601 -v /opt/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml mykibana
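
Once the container is up, Kibana's status API is a quick health check (assuming you run this on the Docker host):

# Should return JSON with an overall state of "green" once Kibana is ready
curl http://localhost:5601/api/status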


Notes

Same problem as with ES: if you do not want to pay for the commercial security features, you have to harden access yourself.
On a cloud-service VM instance you can restrict access by IP through the network/firewall settings (a gcloud sketch follows below), or put Nginx in front and use Basic Authentication.
Reference
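
For the firewall route, a minimal sketch using gcloud, assuming a GCP VM; the rule name, source range, and target tag are placeholders:

# Allow only a trusted CIDR to reach port 5601 on instances tagged "kibana"
gcloud compute firewall-rules create allow-kibana-trusted \
    --allow=tcp:5601 \
    --source-ranges=203.0.113.0/24 \
    --target-tags=kibana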



# First create the account/password file with htpasswd
$> htpasswd -c /password/store/path/.htpasswd ${username}

# If the Apache tools are not installed, an online tool works too
Htpasswd Generator
Save the generated entry into .htpasswd
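
Note that -c creates (and truncates) the file, so use it only for the first account; add further accounts without it:

# Append another account to the existing file (omit -c, or it would overwrite)
$> htpasswd /password/store/path/.htpasswd ${another_username}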



server {
    
     ... omitted

    location / {
        auth_basic "Administrator’s Area";
        auth_basic_user_file /password/store/path/.htpasswd;
        
         ... omitted

    }

}
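
For context, a fuller sketch of the proxy block, assuming Kibana listens locally on 5601; the server_name and proxy headers are illustrative additions, not from the original post:

server {
    listen 80;
    server_name kibana.example.com;   # placeholder domain

    location / {
        auth_basic "Administrator's Area";
        auth_basic_user_file /password/store/path/.htpasswd;

        # Forward authenticated traffic to the local Kibana instance
        proxy_pass http://127.0.0.1:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}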
