
Kibana(1/2)


The earlier posts covered
FluentD (1/2)
FluentD (2/2)
Elasticsearch
The last mile is visualization, and that's where Kibana comes in.

Dockerfile



FROM kibana:7.2.0
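
Since this Dockerfile only re-tags the official image, you could instead bake the configuration in at build time rather than bind-mounting it later (a sketch, assuming your kibana.yml sits next to the Dockerfile; the target path matches the mount point used in the run script further down):

FROM kibana:7.2.0
# Copy the configuration into the image instead of mounting it at run time
COPY kibana.yml /usr/share/kibana/config/kibana.yml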


Build Image Shell Script


docker build -t mykibana .


kibana.yml Sample


# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://${ES_HOST}:${PORT}"]

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.ssl.verification_mode in Elasticsearch is set to either certificate or full.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en (default), Chinese - zh-CN.
#i18n.locale: "en"
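
For reference, once the placeholders are filled in, the only non-default lines in this sample reduce to the following (192.168.1.10:9200 is a made-up address; substitute your own Elasticsearch endpoint):

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.1.10:9200"]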

Docker run Shell Script


# Important! First copy your own kibana.yml to /opt/kibana/kibana.yml
docker run -it -d --name mykibana -p 5601:5601 -v /opt/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml mykibana
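
Once the container is up, a quick sanity check is Kibana's status API (assuming the 5601 port mapping above); it returns a JSON document whose overall state should be green when Elasticsearch is reachable:

# Verify that Kibana is serving and can talk to Elasticsearch
curl -s http://localhost:5601/api/status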


Notes

Same problem as with ES: if you don't want to pay, you have to find your own way to harden security.
A cloud-service VM instance can restrict access by IP through its network settings (a proxy-level sketch follows below), or you can put Nginx in front and use Basic Authentication.
Reference
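
For the IP-restriction option, the post suggests doing it at the cloud network layer; if you want to enforce it at the proxy as well, nginx's allow/deny directives work (a sketch; 203.0.113.0/24 stands in for your trusted network, and Kibana is assumed to listen on 127.0.0.1:5601):

location / {
    allow 203.0.113.0/24;   # trusted network (example value)
    deny  all;              # reject everyone else
    proxy_pass http://127.0.0.1:5601;
}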



# First, create the account/password file with htpasswd
$> htpasswd -c /password/store/path/.htpasswd ${username}

# If the Apache utilities are not installed, an online tool works too
Htpasswd Generator
Save the generated entry into .htpasswd
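
If you need to create the file non-interactively (e.g. in a provisioning script), htpasswd's -b flag takes the password on the command line, though it then lands in your shell history. admin/s3cret below are throwaway example credentials:

# -b reads the password from the arguments, -c creates the file
$> htpasswd -bc /password/store/path/.htpasswd admin s3cret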



server {
    
     # ... omitted

    location / {
        auth_basic "Administrator’s Area";
        auth_basic_user_file /password/store/path/.htpasswd;
        
         # ... omitted

    }

}
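
The omitted parts above are the usual reverse-proxy plumbing. A fuller minimal sketch, assuming Kibana listens on 127.0.0.1:5601 and kibana.example.com is a hypothetical hostname:

server {
    listen 80;
    server_name kibana.example.com;   # hypothetical

    location / {
        auth_basic "Administrator's Area";
        auth_basic_user_file /password/store/path/.htpasswd;

        # Forward authenticated requests to the Kibana container
        proxy_pass http://127.0.0.1:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}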
