docker pull prom/prometheus
```yaml
# my global config
global:
  scrape_interval: 10s     # Scrape targets every 10 seconds.
  evaluation_interval: 10s # Evaluate rules every 10 seconds.
  # scrape_timeout is set to the global default (10s).

# Load and evaluate rules from these files every 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries
  # scraped from this config.
  - job_name: 'prometheus'
    # Override the global defaults for this job.
    scrape_interval: 10s
    scrape_timeout: 10s
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    # List all etcd nodes to be monitored here.
    # (static_configs replaces the deprecated target_groups key in Prometheus 1.0+.)
    static_configs:
      - targets: ['10.2.122.70:5001','10.2.122.71:5001','10.2.122.72:5001']
```
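Before starting the container, the config can be validated with `promtool`, which ships with Prometheus releases (older versions use the `promtool check-config` spelling instead):

```shell
# Validate the scrape configuration before mounting it into the container.
promtool check config /home/codecraft/prometheus/prometheus.yml
```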
Note that Prometheus serves its web UI and HTTP API on port 9090.
docker run -p 5002:9090 -v /home/codecraft/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
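Once the container is up, you can confirm that Prometheus sees the etcd targets through its HTTP API (this assumes the container is running on the local host with the port mapping above):

```shell
# List the scrape targets Prometheus has discovered and their health.
curl http://localhost:5002/api/v1/targets

# Run an instant query: `up` is 1 for each target that was scraped successfully.
curl 'http://localhost:5002/api/v1/query?query=up'
```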
Prometheus then automatically scrapes every configured etcd node and aggregates their request metrics in one place.
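For example, a PromQL query can sum the per-node request rates into a cluster-wide total. The metric name `http_requests_total` below is illustrative; substitute whichever request counter your etcd version actually exposes on `/metrics`:

```promql
# Cluster-wide request rate: per-second rate over 5m, summed across all etcd instances.
sum(rate(http_requests_total{job="prometheus"}[5m]))
```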
- Advantages and Limitations of Prometheus as a Monitoring System