2 - Environment Configuration Guide
How to modify the configuration file to control Pigsty's behavior
Configuration Guide [DRAFT]
Pigsty is configured via 200+ parameters, which define the infrastructure and all database clusters.
TL;DR
- Follow the Ansible YAML inventory format: hosts, groups, and variables.
- Everything lives in one config file, with one configuration file per environment (dev, pre, prod, etc.).
- Database clusters are defined as top-level groups: `all.children.<cluster_name>`, one entry per cluster.
- Variable precedence: cli > host > group > global > default (see the sketch after this list).
- Global variables `all.vars` define the unified configuration for the entire environment.
- Group variables `all.children.<cluster>.vars` define database-cluster-wide configuration.
- Database instances are defined as group members: `all.children.<cluster>.hosts`, one entry per host. Host variables can be defined to override group, global, and default values.
- The group variable `pg_cluster` and the host variables `pg_role` and `pg_seq` are required for each cluster.
- Each cluster must have one and only one instance with `pg_role=primary` (even if it is a standby cluster leader).
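For instance, here is a minimal sketch of how that precedence plays out; the concrete values are illustrative only:
all:
  vars:
    pg_version: 12                  # global default for every cluster
  children:
    pg-test:
      vars:
        pg_cluster: pg-test
        pg_version: 13              # group var: overrides the global value for this cluster
      hosts:
        10.10.10.11:
          pg_role: primary          # host vars: override group, global and default values
          pg_seq: 1
# an extra var passed on the command line (e.g. ansible-playbook -e pg_version=14) beats all of the above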
Minimal Example
Here is a minimal configuration example that defines a single-node environment and one database cluster, pg-meta:
---
######################################################################
# Minimal Environment Inventory #
######################################################################
all: # top-level namespace, match all hosts
#==================================================================#
# Clusters #
#==================================================================#
children: # top-level groups, one group per database cluster (and special group 'meta')
#-----------------------------
# meta controller
#-----------------------------
meta: # special group 'meta' defines the main controller machine
vars:
meta_node: true # mark node as meta controller
ansible_group_priority: 99 # meta group is top priority
# nodes in meta group (1-3)
hosts:
10.10.10.10: # meta node IP ADDRESS
ansible_host: meta # comment this if not access via ssh alias
#-----------------------------
# cluster: pg-meta
#-----------------------------
pg-meta:
# - cluster configs - #
vars:
pg_cluster: pg-meta # define actual cluster name
pg_version: 12 # define installed pgsql version
pg_default_username: meta # default business username
pg_default_password: meta # default business password
pg_default_database: meta # default database name
vip_enabled: true # enable/disable vip (require members in same LAN)
vip_address: 10.10.10.2 # virtual ip address
vip_cidrmask: 8 # cidr network mask length
vip_interface: eth1 # interface to add virtual ip
#==================================================================#
# Globals #
#==================================================================#
vars:
proxy_env: # global proxy env when downloading packages
no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com"
...
Cluster Inventory
The cluster inventory defines the clusters and instances to be managed. The minimal required information includes:
- IP address (or other connection parameters, e.g. ssh name/alias/user/password)
- Cluster name: `pg_cluster`, following the DNS naming standard (`[a-z][a-z0-9-]*`)
- Instance index: `pg_seq`, an integer that is unique within the cluster
- Instance role: `pg_role`, which can be `primary` or `replica`
Here is an example of an Ansible cluster inventory definition in ini format (which is more compact but not recommended):
[pg-test]
10.10.10.11 pg_role=primary pg_seq=1
10.10.10.12 pg_role=replica pg_seq=2
10.10.10.13 pg_role=replica pg_seq=3
[pg-test:vars]
pg_cluster = pg-test
pg_version = 12
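For comparison, here is a sketch of the same pg-test cluster expressed in YAML inventory form (placed under all.children in a full inventory, values mirroring the ini example above):
pg-test:
  vars:
    pg_cluster: pg-test
    pg_version: 12
  hosts:
    10.10.10.11: { pg_role: primary, pg_seq: 1 }
    10.10.10.12: { pg_role: replica, pg_seq: 2 }
    10.10.10.13: { pg_role: replica, pg_seq: 3 }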
You can override cluster variables in `all.children.<cluster>.vars` and instance variables in `all.children.<cluster>.hosts.<host>`. Here are some variables that can be set at the cluster or instance level. (Note that all variables are merged into the host level before execution.)
#------------------------------------------------------------------------------
# POSTGRES INSTALLATION
#------------------------------------------------------------------------------
# - dbsu - #
pg_dbsu: postgres # os user for database, postgres by default (change it is not recommended!)
pg_dbsu_uid: 26 # os dbsu uid and gid, 26 for default postgres users and groups
pg_dbsu_sudo: limit # none|limit|all|nopass (Privilege for dbsu, limit is recommended)
pg_dbsu_home: /var/lib/pgsql # postgresql dbsu home directory
pg_dbsu_ssh_exchange: false # exchange ssh key among same cluster
# - postgres packages - #
pg_version: 12 # default postgresql version
pgdg_repo: false # use official pgdg yum repo (disable if you have local mirror)
pg_add_repo: false # add postgres related repo before install (useful if you want a simple install)
pg_bin_dir: /usr/pgsql/bin # postgres binary dir
pg_packages: [] # packages to be installed
pg_extensions: [] # extensions to be installed
#------------------------------------------------------------------------------
# POSTGRES CLUSTER PROVISION
#------------------------------------------------------------------------------
# - identity - #
pg_cluster: # [REQUIRED] cluster name (validated during pg_preflight)
pg_seq: 0 # [REQUIRED] instance seq (validated during pg_preflight)
pg_role: replica # [REQUIRED] service role (validated during pg_preflight)
pg_hostname: false # overwrite node hostname with pg instance name
pg_nodename: true # overwrite consul nodename with pg instance name
# - retention - #
# pg_exists_action, available options: abort|clean|skip
# - abort: abort entire play's execution (default)
# - clean: remove existing cluster (dangerous)
# - skip: end current play for this host
# pg_exists: false # auxiliary flag variable (DO NOT SET THIS)
pg_exists_action: clean
# - storage - #
pg_data: /pg/data # postgres data directory
pg_fs_main: /export # data disk mount point /pg -> {{ pg_fs_main }}/postgres/{{ pg_instance }}
pg_fs_bkup: /var/backups # backup disk mount point /pg/* -> {{ pg_fs_bkup }}/postgres/{{ pg_instance }}/*
# - connection - #
pg_listen: '0.0.0.0' # postgres listen address, '0.0.0.0' by default (all ipv4 addr)
pg_port: 5432 # postgres port (5432 by default)
# - patroni - #
# patroni_mode, available options: default|pause|remove
# - default: default ha mode
# - pause: into maintenance mode
# - remove: remove patroni after bootstrap
patroni_mode: default # pause|default|remove
pg_namespace: /pg # top level key namespace in dcs
patroni_port: 8008 # default patroni port
patroni_watchdog_mode: automatic # watchdog mode: off|automatic|required
# - template - #
pg_conf: tiny.yml # user provided patroni config template path
pg_init: initdb.sh # user provided post-init script path, default: initdb.sh
# - authentication - #
pg_hba_common: [] # hba entries for all instances
pg_hba_primary: [] # hba entries for primary instance
pg_hba_replica: [] # hba entries for replicas instances
pg_hba_pgbouncer: [] # hba entries for pgbouncer
# - credential - #
pg_dbsu_password: '' # dbsu password (leaving blank will disable sa password login)
pg_replication_username: replicator # replication user
pg_replication_password: replicator # replication password
pg_monitor_username: dbuser_monitor # monitor user
pg_monitor_password: dbuser_monitor # monitor password
# - default - #
pg_default_username: postgres # non 'postgres' will create a default admin user (not superuser)
pg_default_password: postgres # dbsu password, omit for 'postgres'
pg_default_database: postgres # non 'postgres' will create a default database
pg_default_schema: public # default schema will be create under default database and used as first element of search_path
pg_default_extensions: "tablefunc,postgres_fdw,file_fdw,btree_gist,btree_gin,pg_trgm"
# - pgbouncer - #
pgbouncer_port: 6432 # default pgbouncer port
pgbouncer_poolmode: transaction # default pooling mode: transaction pooling
pgbouncer_max_db_conn: 100 # important! do not set this larger than postgres max conn or conn limit
#------------------------------------------------------------------------------
# MONITOR PROVISION
#------------------------------------------------------------------------------
# - monitor options -
node_exporter_port: 9100 # default port for node exporter
pg_exporter_port: 9630 # default port for pg exporter
pgbouncer_exporter_port: 9631 # default port for pgbouncer exporter
exporter_metrics_path: /metrics # default metric path for pg related exporter
#------------------------------------------------------------------------------
# PROXY PROVISION
#------------------------------------------------------------------------------
# - vip - #
vip_enabled: true # level2 vip requires primary/standby under same switch
vip_address: 127.0.0.1 # virtual ip address ip/cidr
vip_cidrmask: 32 # virtual ip address cidr mask
vip_interface: eth0 # virtual ip network interface
# - haproxy - #
haproxy_enabled: true # enable haproxy among every cluster members
haproxy_policy: leastconn # roundrobin, leastconn
haproxy_admin_username: admin # default haproxy admin username
haproxy_admin_password: admin # default haproxy admin password
haproxy_client_timeout: 3h # client side connection timeout
haproxy_server_timeout: 3h # server side connection timeout
haproxy_exporter_port: 9101 # default admin/exporter port
haproxy_check_port: 8008 # default health check port (patroni 8008 by default)
haproxy_primary_port: 5433 # default primary port 5433
haproxy_replica_port: 5434 # default replica port 5434
haproxy_backend_port: 6432 # default target port: pgbouncer:6432 postgres:5432
Global variables
Global variables are defined in `all.vars` (or in any other way that follows the Ansible standard).
Global variables aim at unifying the environment: they define the infrastructure (e.g. DCS, DNS, NTP addresses, packages to be installed, the unified admin user, etc.) for each environment.
Global variables are merged into host variables before execution, following Ansible variable precedence.
There are lots of variables that can be defined; refer to the role documentation for more detail.
Variables are divided into nine sections:
- Connection Information
- Repo Provision
- Node Provision
- Meta Provision
- DCS Provision
- Postgres Installation
- Postgres Cluster Initialization
- Monitoring
- Load Balancer
Standard Example
Here is an example for the demo environment:
---
######################################################################
# File : dev.yml
# Path : inventory/dev.yml
# Desc : Configuration file for development (demo) environment
# Note : follow ansible inventory file format
# Ctime : 2020-09-22
# Mtime : 2020-09-22
# Copyright (C) 2019-2020 Ruohang Feng
######################################################################
######################################################################
# Development Environment Inventory #
######################################################################
all: # top-level namespace, match all hosts
#==================================================================#
# Clusters #
#==================================================================#
children: # top-level groups, one group per database cluster (and special group 'meta')
#-----------------------------
# meta controller
#-----------------------------
meta: # special group 'meta' defines the main controller machine
vars:
meta_node: true # mark node as meta controller
ansible_group_priority: 99 # meta group is top priority
# nodes in meta group (1-3)
hosts:
10.10.10.10: # meta node IP ADDRESS
ansible_host: meta # comment this if not access via ssh alias
#-----------------------------
# cluster: pg-meta
#-----------------------------
pg-meta:
# - cluster configs - #
vars:
# basic settings
pg_cluster: pg-meta # define actual cluster name
pg_version: 13 # define installed pgsql version
node_tune: oltp # tune node into oltp|olap|crit|tiny mode
pg_conf: oltp.yml # tune pgsql into oltp/olap/crit/tiny mode
# misc
patroni_mode: pause # enter maintenance mode, {default|pause|remove}
patroni_watchdog_mode: off # disable watchdog (require|automatic|off)
pg_hostname: false # overwrite node hostname with pg instance name
pg_nodename: true # overwrite consul nodename with pg instance name
# bootstrap template
pg_init: initdb.sh # bootstrap postgres cluster with initdb.sh
pg_default_username: meta # default business username
pg_default_password: meta # default business password
pg_default_database: meta # default database name
# vip settings
vip_enabled: true # enable/disable vip (require members in same LAN)
vip_address: 10.10.10.2 # virtual ip address
vip_cidrmask: 8 # cidr network mask length
vip_interface: eth1 # interface to add virtual ip
# - cluster members - #
hosts:
10.10.10.10:
ansible_host: meta # comment this if not access via ssh alias
pg_role: primary # initial role: primary & replica
pg_seq: 1 # instance sequence among cluster
#-----------------------------
# cluster: pg-test
#-----------------------------
pg-test: # define cluster named 'pg-test'
# - cluster configs - #
vars:
# basic settings
pg_cluster: pg-test # define actual cluster name
pg_version: 13 # define installed pgsql version
node_tune: tiny # tune node into oltp|olap|crit|tiny mode
pg_conf: tiny.yml # tune pgsql into oltp/olap/crit/tiny mode
# bootstrap template
pg_init: initdb.sh # bootstrap postgres cluster with initdb.sh
pg_default_username: test # default business username
pg_default_password: test # default business password
pg_default_database: test # default database name
# vip settings
vip_enabled: true # enable/disable vip (require members in same LAN)
vip_address: 10.10.10.3 # virtual ip address
vip_cidrmask: 8 # cidr network mask length
vip_interface: eth1 # interface to add virtual ip
# - cluster members - #
hosts:
10.10.10.11:
ansible_host: node-1 # comment this if not access via ssh alias
pg_role: primary # initial role: primary & replica
pg_seq: 1 # instance sequence among cluster
10.10.10.12:
ansible_host: node-2 # comment this if not access via ssh alias
pg_role: replica # initial role: primary & replica
pg_seq: 2 # instance sequence among cluster
10.10.10.13:
ansible_host: node-3 # comment this if not access via ssh alias
pg_role: replica # initial role: primary & replica
pg_seq: 3 # instance sequence among cluster
#==================================================================#
# Globals #
#==================================================================#
vars:
#------------------------------------------------------------------------------
# CONNECTION PARAMETERS
#------------------------------------------------------------------------------
# this section defines connection parameters
# ansible_user: vagrant # admin user with ssh access and sudo privilege
proxy_env: # global proxy env when downloading packages
no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com"
#------------------------------------------------------------------------------
# REPO PROVISION
#------------------------------------------------------------------------------
# this section defines how to build a local repo
repo_enabled: true # build local yum repo on meta nodes?
repo_name: pigsty # local repo name
repo_address: yum.pigsty # repo external address (ip:port or url)
repo_port: 80 # listen address, must same as repo_address
repo_home: /www # default repo dir location
repo_rebuild: false # force re-download packages
repo_remove: true # remove existing repos
# - where to download - #
repo_upstreams:
- name: base
description: CentOS-$releasever - Base - Aliyun Mirror
baseurl:
- http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
- http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
- http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/
gpgcheck: no
failovermethod: priority
- name: updates
description: CentOS-$releasever - Updates - Aliyun Mirror
baseurl:
- http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
- http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
- http://mirrors.cloud.aliyuncs.com/centos/$releasever/updates/$basearch/
gpgcheck: no
failovermethod: priority
- name: extras
description: CentOS-$releasever - Extras - Aliyun Mirror
baseurl:
- http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
- http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
- http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
gpgcheck: no
failovermethod: priority
- name: epel
description: CentOS $releasever - EPEL - Aliyun Mirror
baseurl: http://mirrors.aliyun.com/epel/$releasever/$basearch
gpgcheck: no
failovermethod: priority
- name: grafana
description: Grafana - TsingHua Mirror
gpgcheck: no
baseurl: https://mirrors.tuna.tsinghua.edu.cn/grafana/yum/rpm
- name: prometheus
description: Prometheus and exporters
gpgcheck: no
baseurl: https://packagecloud.io/prometheus-rpm/release/el/$releasever/$basearch
- name: pgdg-common
description: PostgreSQL common RPMs for RHEL/CentOS $releasever - $basearch
gpgcheck: no
baseurl: https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-$releasever-$basearch
- name: pgdg13
description: PostgreSQL 13 for RHEL/CentOS $releasever - $basearch - Updates testing
gpgcheck: no
baseurl: https://download.postgresql.org/pub/repos/yum/13/redhat/rhel-$releasever-$basearch
- name: centos-sclo
description: CentOS-$releasever - SCLo
gpgcheck: no
mirrorlist: http://mirrorlist.centos.org?arch=$basearch&release=7&repo=sclo-sclo
- name: centos-sclo-rh
description: CentOS-$releasever - SCLo rh
gpgcheck: no
mirrorlist: http://mirrorlist.centos.org?arch=$basearch&release=7&repo=sclo-rh
- name: nginx
description: Nginx Official Yum Repo
skip_if_unavailable: true
gpgcheck: no
baseurl: http://nginx.org/packages/centos/$releasever/$basearch/
- name: haproxy
description: Copr repo for haproxy
skip_if_unavailable: true
gpgcheck: no
baseurl: https://download.copr.fedorainfracloud.org/results/roidelapluie/haproxy/epel-$releasever-$basearch/
# - what to download - #
repo_packages:
# repo bootstrap packages
- epel-release nginx wget yum-utils yum createrepo # bootstrap packages
# node basic packages
- ntp chrony uuid lz4 nc pv jq vim-enhanced make patch bash lsof wget unzip git tuned # basic system util
- readline zlib openssl libyaml libxml2 libxslt perl-ExtUtils-Embed ca-certificates # basic pg dependency
- numactl grubby sysstat dstat iotop bind-utils net-tools tcpdump socat ipvsadm telnet # system utils
# dcs & monitor packages
- grafana prometheus2 pushgateway alertmanager # monitor and ui
- node_exporter postgres_exporter nginx_exporter blackbox_exporter # exporter
- consul consul_exporter consul-template etcd # dcs
# python3 dependencies
- ansible python python-pip python-psycopg2 # ansible & python
- python3 python3-psycopg2 python36-requests python3-etcd python3-consul # python3
- python36-urllib3 python36-idna python36-pyOpenSSL python36-cryptography # python3 patroni extra deps
# proxy and load balancer
- haproxy keepalived dnsmasq # proxy and dns
# postgres common Packages
- patroni patroni-consul patroni-etcd pgbouncer pg_cli pgbadger pg_activity # major components
- pgcenter boxinfo check_postgres emaj pgbconsole pg_bloat_check pgquarrel # other common utils
- barman barman-cli pgloader pgFormatter pitrery pspg pgxnclient PyGreSQL pgadmin4
# postgres 13 packages
- postgresql13* postgis31* # postgres 13 and postgis 31
- pg_qualstats13 pg_stat_kcache13 system_stats_13 bgw_replstatus13 # stats extensions
- plr13 plsh13 plpgsql_check_13 pldebugger13 # pl extensions
- hdfs_fdw_13 mongo_fdw13 mysql_fdw_13 ogr_fdw13 redis_fdw_13 # FDW extensions
- wal2json13 count_distinct13 ddlx_13 geoip13 orafce13 # other extensions
- hypopg_13 ip4r13 jsquery_13 logerrors_13 periods_13 pg_auto_failover_13 pg_catcheck13
- pg_fkpart13 pg_jobmon13 pg_partman13 pg_prioritize_13 pg_track_settings13 pgaudit15_13
- pgcryptokey13 pgexportdoc13 pgimportdoc13 pgmemcache-13 pgmp13 pgq-13 # pgrouting_13
- pguint13 pguri13 prefix13 safeupdate_13 semver13 table_version13 tdigest13
# Postgres 12 Packages
# - postgresql12* postgis30_12* timescaledb_12 citus_12 pglogical_12 # postgres 12 basic
# - pg_qualstats12 pg_cron_12 pg_repack12 pg_squeeze12 pg_stat_kcache12 wal2json12 pgpool-II-12 pgpool-II-12-extensions python3-psycopg2 python2-psycopg2
# - ddlx_12 bgw_replstatus12 count_distinct12 extra_window_functions_12 geoip12 hll_12 hypopg_12 ip4r12 jsquery_12 multicorn12 osm_fdw12 mysql_fdw_12 ogr_fdw12 mongo_fdw12 hdfs_fdw_12 cstore_fdw_12 wal2mongo12 orafce12 pagila12 pam-pgsql12 passwordcheck_cracklib12 periods_12 pg_auto_failover_12 pg_bulkload12 pg_catcheck12 pg_comparator12 pg_filedump12 pg_fkpart12 pg_jobmon12 pg_partman12 pg_pathman12 pg_track_settings12 pg_wait_sampling_12 pgagent_12 pgaudit14_12 pgauditlogtofile-12 pgbconsole12 pgcryptokey12 pgexportdoc12 pgfincore12 pgimportdoc12 pgmemcache-12 pgmp12 pgq-12 pgrouting_12 pgtap12 plpgsql_check_12 plr12 plsh12 postgresql_anonymizer12 postgresql-unit12 powa_12 prefix12 repmgr12 safeupdate_12 semver12 slony1-12 sqlite_fdw12 sslutils_12 system_stats_12 table_version12 topn_12
repo_url_packages:
- https://github.com/Vonng/pg_exporter/releases/download/v0.2.0/pg_exporter-0.2.0-1.el7.x86_64.rpm
- https://github.com/cybertec-postgresql/vip-manager/releases/download/v0.6/vip-manager_0.6-1_amd64.rpm
- http://guichaz.free.fr/polysh/files/polysh-0.4-1.noarch.rpm
#------------------------------------------------------------------------------
# NODE PROVISION
#------------------------------------------------------------------------------
# this section defines how to provision nodes
# - node dns - #
node_dns_hosts: # static dns records in /etc/hosts
- 10.10.10.10 yum.pigsty
node_dns_server: add # add (default) | none (skip) | overwrite (remove old settings)
node_dns_servers: # dynamic nameserver in /etc/resolv.conf
- 10.10.10.10
node_dns_options: # dns resolv options
- options single-request-reopen timeout:1 rotate
- domain service.consul
# - node repo - #
node_repo_method: local # none|local|public (use local repo for production env)
node_repo_remove: true # whether remove existing repo
# local repo url (if method=local, make sure firewall is configured or disabled)
node_local_repo_url:
- http://yum.pigsty/pigsty.repo
# - node packages - #
node_packages: # common packages for all nodes
- wget,yum-utils,ntp,chrony,tuned,uuid,lz4,vim-minimal,make,patch,bash,lsof,wget,unzip,git,readline,zlib,openssl
- numactl,grubby,sysstat,dstat,iotop,bind-utils,net-tools,tcpdump,socat,ipvsadm,telnet,tuned,pv,jq
- python3,python3-psycopg2,python36-requests,python3-etcd,python3-consul
- python36-urllib3,python36-idna,python36-pyOpenSSL,python36-cryptography
- node_exporter,consul,consul-template,etcd,haproxy,keepalived,vip-manager
node_extra_packages: # extra packages for all nodes
- patroni,patroni-consul,patroni-etcd,pgbouncer,pgbadger,pg_activity
node_meta_packages: # packages for meta nodes only
- grafana,prometheus2,alertmanager,nginx_exporter,blackbox_exporter,pushgateway
- dnsmasq,nginx,ansible,pgbadger,polysh
# - node features - #
node_disable_numa: false # disable numa, important for production database, reboot required
node_disable_swap: false # disable swap, important for production database
node_disable_firewall: true # disable firewall (required if using kubernetes)
node_disable_selinux: true # disable selinux (required if using kubernetes)
node_static_network: true # keep dns resolver settings after reboot
node_disk_prefetch: false # setup disk prefetch on HDD to increase performance
# - node kernel modules - #
node_kernel_modules:
- softdog
- br_netfilter
- ip_vs
- ip_vs_rr
- ip_vs_rr
- ip_vs_wrr
- ip_vs_sh
- nf_conntrack_ipv4
# - node tuned - #
node_tune: tiny # install and activate tuned profile: none|oltp|olap|crit|tiny
node_sysctl_params: # set additional sysctl parameters, k:v format
net.bridge.bridge-nf-call-iptables: 1 # for kubernetes
# - node user - #
node_admin_setup: true # setup an default admin user ?
node_admin_uid: 88 # uid and gid for admin user
node_admin_username: admin # default admin user
node_admin_ssh_exchange: true # exchange ssh key among cluster ?
node_admin_pks: # public key list that will be installed
- 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQC7IMAMNavYtWwzAJajKqwdn3ar5BhvcwCnBTxxEkXhGlCO2vfgosSAQMEflfgvkiI5nM1HIFQ8KINlx1XLO7SdL5KdInG5LIJjAFh0pujS4kNCT9a5IGvSq1BrzGqhbEcwWYdju1ZPYBcJm/MG+JD0dYCh8vfrYB/cYMD0SOmNkQ== vagrant@pigsty.com'
# - node ntp - #
node_ntp_service: ntp # ntp or chrony
node_ntp_config: true # overwrite existing ntp config?
node_timezone: Asia/Shanghai # default node timezone
node_ntp_servers: # default NTP servers
- pool cn.pool.ntp.org iburst
- pool pool.ntp.org iburst
- pool time.pool.aliyun.com iburst
- server 10.10.10.10 iburst
#------------------------------------------------------------------------------
# META PROVISION
#------------------------------------------------------------------------------
# - ca - #
ca_method: create # create|copy|recreate
ca_subject: "/CN=root-ca" # self-signed CA subject
ca_homedir: /ca # ca cert directory
ca_cert: ca.crt # ca public key/cert
ca_key: ca.key # ca private key
# - nginx - #
nginx_upstream:
- { name: consul, host: c.pigsty, url: "127.0.0.1:8500" }
- { name: grafana, host: g.pigsty, url: "127.0.0.1:3000" }
- { name: prometheus, host: p.pigsty, url: "127.0.0.1:9090" }
- { name: alertmanager, host: a.pigsty, url: "127.0.0.1:9093" }
# - nameserver - #
dns_records: # dynamic dns record resolved by dnsmasq
- 10.10.10.2 pg-meta # sandbox vip for pg-meta
- 10.10.10.3 pg-test # sandbox vip for pg-test
- 10.10.10.10 meta-1 # sandbox node meta-1 (node-0)
- 10.10.10.11 node-1 # sandbox node node-1
- 10.10.10.12 node-2 # sandbox node node-2
- 10.10.10.13 node-3 # sandbox node node-3
- 10.10.10.10 pigsty
- 10.10.10.10 y.pigsty yum.pigsty
- 10.10.10.10 c.pigsty consul.pigsty
- 10.10.10.10 g.pigsty grafana.pigsty
- 10.10.10.10 p.pigsty prometheus.pigsty
- 10.10.10.10 a.pigsty alertmanager.pigsty
- 10.10.10.10 n.pigsty ntp.pigsty
# - prometheus - #
prometheus_scrape_interval: 2s # global scrape & evaluation interval (2s for dev, 15s for prod)
prometheus_scrape_timeout: 1s # global scrape timeout (1s for dev, 8s for prod)
prometheus_metrics_path: /metrics # default metrics path (only affect job 'pg')
prometheus_data_dir: /export/prometheus/data # prometheus data dir
prometheus_retention: 30d # how long to keep
# - grafana - #
grafana_url: http://10.10.10.10:3000 # grafana url
grafana_admin_password: admin # default grafana admin user password
grafana_plugin: install # none|install|reinstall
grafana_cache: /www/pigsty/grafana/plugins.tar.gz # path to grafana plugins tarball
grafana_provision_mode: db # none|db|api
grafana_plugins: # default grafana plugins list
- redis-datasource
- simpod-json-datasource
- fifemon-graphql-datasource
- sbueringer-consul-datasource
- camptocamp-prometheus-alertmanager-datasource
- ryantxu-ajax-panel
- marcusolsson-hourly-heatmap-panel
- michaeldmoore-multistat-panel
- marcusolsson-treemap-panel
- pr0ps-trackmap-panel
- dalvany-image-panel
- magnesium-wordcloud-panel
- cloudspout-button-panel
- speakyourcode-button-panel
- jdbranham-diagram-panel
- grafana-piechart-panel
- snuids-radar-panel
- digrich-bubblechart-panel
grafana_git_plugins:
- https://github.com/Vonng/grafana-echarts
# grafana_dashboards: [] # default dashboards (use role default)
#------------------------------------------------------------------------------
# DCS PROVISION
#------------------------------------------------------------------------------
dcs_type: consul # consul | etcd | both
dcs_name: pigsty # consul dc name | etcd initial cluster token
# dcs server dict in name:ip format
dcs_servers:
meta-1: 10.10.10.10 # you could use existing dcs cluster
# meta-2: 10.10.10.11 # host which have their IP listed here will be init as server
# meta-3: 10.10.10.12 # 3 or 5 dcs nodes are recommend for production environment
dcs_exists_action: skip # abort|skip|clean if dcs server already exists
consul_data_dir: /var/lib/consul # consul data dir (/var/lib/consul by default)
etcd_data_dir: /var/lib/etcd # etcd data dir (/var/lib/etcd by default)
#------------------------------------------------------------------------------
# POSTGRES INSTALLATION
#------------------------------------------------------------------------------
# - dbsu - #
pg_dbsu: postgres # os user for database, postgres by default (change it is not recommended!)
pg_dbsu_uid: 26 # os dbsu uid and gid, 26 for default postgres users and groups
pg_dbsu_sudo: limit # none|limit|all|nopass (Privilege for dbsu, limit is recommended)
pg_dbsu_home: /var/lib/pgsql # postgresql dbsu home directory
pg_dbsu_ssh_exchange: false # exchange ssh key among same cluster
# - postgres packages - #
pg_version: 12 # default postgresql version
pgdg_repo: false # use official pgdg yum repo (disable if you have local mirror)
pg_add_repo: false # add postgres related repo before install (useful if you want a simple install)
pg_bin_dir: /usr/pgsql/bin # postgres binary dir
pg_packages:
- postgresql${pg_version}*
- postgis31_${pg_version}*
- pgbouncer patroni pg_exporter pgbadger
- patroni patroni-consul patroni-etcd pgbouncer pgbadger pg_activity
- python3 python3-psycopg2 python36-requests python3-etcd python3-consul
- python36-urllib3 python36-idna python36-pyOpenSSL python36-cryptography
pg_extensions:
- pg_qualstats${pg_version} pg_stat_kcache${pg_version} wal2json${pg_version}
# - ogr_fdw${pg_version} mysql_fdw_${pg_version} redis_fdw_${pg_version} mongo_fdw${pg_version} hdfs_fdw_${pg_version}
# - count_distinct${version} ddlx_${version} geoip${version} orafce${version} # popular features
# - hypopg_${version} ip4r${version} jsquery_${version} logerrors_${version} periods_${version} pg_auto_failover_${version} pg_catcheck${version}
# - pg_fkpart${version} pg_jobmon${version} pg_partman${version} pg_prioritize_${version} pg_track_settings${version} pgaudit15_${version}
# - pgcryptokey${version} pgexportdoc${version} pgimportdoc${version} pgmemcache-${version} pgmp${version} pgq-${version} pgquarrel pgrouting_${version}
# - pguint${version} pguri${version} prefix${version} safeupdate_${version} semver${version} table_version${version} tdigest${version}
#------------------------------------------------------------------------------
# POSTGRES CLUSTER PROVISION
#------------------------------------------------------------------------------
# - identity - #
# pg_cluster: # [REQUIRED] cluster name (validated during pg_preflight)
# pg_seq: 0 # [REQUIRED] instance seq (validated during pg_preflight)
# pg_role: replica # [REQUIRED] service role (validated during pg_preflight)
pg_hostname: false # overwrite node hostname with pg instance name
pg_nodename: true # overwrite consul nodename with pg instance name
# - retention - #
# pg_exists_action, available options: abort|clean|skip
# - abort: abort entire play's execution (default)
# - clean: remove existing cluster (dangerous)
# - skip: end current play for this host
# pg_exists: false # auxiliary flag variable (DO NOT SET THIS)
pg_exists_action: clean
# - storage - #
pg_data: /pg/data # postgres data directory
pg_fs_main: /export # data disk mount point /pg -> {{ pg_fs_main }}/postgres/{{ pg_instance }}
pg_fs_bkup: /var/backups # backup disk mount point /pg/* -> {{ pg_fs_bkup }}/postgres/{{ pg_instance }}/*
# - connection - #
pg_listen: '0.0.0.0' # postgres listen address, '0.0.0.0' by default (all ipv4 addr)
pg_port: 5432 # postgres port (5432 by default)
# - patroni - #
# patroni_mode, available options: default|pause|remove
# - default: default ha mode
# - pause: into maintenance mode
# - remove: remove patroni after bootstrap
patroni_mode: default # pause|default|remove
pg_namespace: /pg # top level key namespace in dcs
patroni_port: 8008 # default patroni port
patroni_watchdog_mode: automatic # watchdog mode: off|automatic|required
# - template - #
pg_conf: tiny.yml # user provided patroni config template path
pg_init: initdb.sh # user provided post-init script path, default: initdb.sh
# - authentication - #
pg_hba_common:
- '"# allow: meta node access with password"'
- host all all 10.10.10.10/32 md5
- '"# allow: intranet admin role with password"'
- host all +dbrole_admin 10.0.0.0/8 md5
- host all +dbrole_admin 172.16.0.0/12 md5
- host all +dbrole_admin 192.168.0.0/16 md5
- '"# allow local (pgbouncer) read-write user (production user) password access"'
- local all +dbrole_readwrite md5
- host all +dbrole_readwrite 127.0.0.1/32 md5
- '"# intranet common user password access"'
- host all all 10.0.0.0/8 md5
- host all all 172.16.0.0/12 md5
- host all all 192.168.0.0/16 md5
pg_hba_primary: [ ]
pg_hba_replica:
- '"# allow remote readonly user (stats, personal user) password access (directly)"'
- local all +dbrole_readonly md5
- host all +dbrole_readonly 127.0.0.1/32 md5
# this hba is added directly to /etc/pgbouncer/pgb_hba.conf instead of patroni conf
pg_hba_pgbouncer:
- '# biz_user intranet password access'
- local all all md5
- host all all 127.0.0.1/32 md5
- host all all 10.0.0.0/8 md5
- host all all 172.16.0.0/12 md5
- host all all 192.168.0.0/16 md5
# - credential - #
pg_dbsu_password: '' # dbsu password (leaving blank will disable sa password login)
pg_replication_username: replicator # replication user
pg_replication_password: replicator # replication password
pg_monitor_username: dbuser_monitor # monitor user
pg_monitor_password: dbuser_monitor # monitor password
# - default - #
# pg_default_username: postgres # non 'postgres' will create a default admin user (not superuser)
# pg_default_password: postgres # dbsu password, omit for 'postgres'
# pg_default_database: postgres # non 'postgres' will create a default database
pg_default_schema: public # default schema will be create under default database and used as first element of search_path
pg_default_extensions: "tablefunc,postgres_fdw,file_fdw,btree_gist,btree_gin,pg_trgm"
# - pgbouncer - #
pgbouncer_port: 6432 # default pgbouncer port
pgbouncer_poolmode: transaction # default pooling mode: transaction pooling
pgbouncer_max_db_conn: 100 # important! do not set this larger than postgres max conn or conn limit
#------------------------------------------------------------------------------
# MONITOR PROVISION
#------------------------------------------------------------------------------
# - monitor options -
node_exporter_port: 9100 # default port for node exporter
pg_exporter_port: 9630 # default port for pg exporter
pgbouncer_exporter_port: 9631 # default port for pgbouncer exporter
exporter_metrics_path: /metrics # default metric path for pg related exporter
#------------------------------------------------------------------------------
# PROXY PROVISION
#------------------------------------------------------------------------------
# - vip - #
vip_enabled: true # level2 vip requires primary/standby under same switch
# vip_address: 127.0.0.1 # virtual ip address ip/cidr
# vip_cidrmask: 32 # virtual ip address cidr mask
# vip_interface: eth0 # virtual ip network interface
# - haproxy - #
haproxy_enabled: true # enable haproxy among every cluster members
haproxy_policy: leastconn # roundrobin, leastconn
haproxy_admin_username: admin # default haproxy admin username
haproxy_admin_password: admin # default haproxy admin password
haproxy_client_timeout: 3h # client side connection timeout
haproxy_server_timeout: 3h # server side connection timeout
haproxy_exporter_port: 9101 # default admin/exporter port
haproxy_check_port: 8008 # default health check port (patroni 8008 by default)
haproxy_primary_port: 5433 # default primary port 5433
haproxy_replica_port: 5434 # default replica port 5434
haproxy_backend_port: 6432 # default target port: pgbouncer:6432 postgres:5432
...
Customize
There are two ways to customize Pigsty besides variables: the Patroni template and the initdb template.
Patroni Template
For the sake of unification, Pigsty uses Patroni for cluster bootstrap even if you choose not to enable it at all, so you can customize your database cluster via the Patroni configuration.
Pigsty ships with four pre-defined Patroni templates:
- `oltp.yml`: common OLTP database cluster, the default configuration
- `olap.yml`: OLAP database cluster, favoring throughput and long-running queries
- `crit.yml`: critical database cluster that values security and integrity more than availability
- `tiny.yml`: tiny database cluster that runs on small or virtual machines; this is the default for the demo
You can customize these templates or write your own, and specify the template path with the variable `pg_conf`, as sketched below.
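For example, a sketch of selecting a template per cluster via group vars (values taken from the demo inventory above):
pg-meta:
  vars:
    pg_cluster: pg-meta
    pg_conf: oltp.yml              # use the oltp patroni template for this cluster
    node_tune: oltp                # matching tuned profile on the node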
Initdb Template
When a database cluster is initialized, the user has a chance to intercede, e.g. to create default roles and users, schemas, privileges, and so forth.
Pigsty uses `../roles/postgres/templates/initdb.sh` as the default initdb script. It is a shell script, run as the dbsu, that can do anything to a newly bootstrapped database.
The default initdb script customizes the database according to the following variables:
pg_default_username: postgres # non 'postgres' will create a default admin user (not superuser)
pg_default_password: postgres # dbsu password, omit for 'postgres'
pg_default_database: postgres # non 'postgres' will create a default database
pg_default_schema: public # default schema will be create under default database and used as first element of search_path
pg_default_extensions: "tablefunc,postgres_fdw,file_fdw,btree_gist,btree_gin,pg_trgm"
Of course, you can customize the initdb template or write your own, and specify the script path with the variable `pg_init`, as sketched below.
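A sketch of pointing a cluster at a custom script; the path files/my-initdb.sh is hypothetical, while the default variables it consumes are taken from the demo inventory:
pg-test:
  vars:
    pg_cluster: pg-test
    pg_init: files/my-initdb.sh    # hypothetical custom post-init script path
    pg_default_username: test      # business user created by the initdb script
    pg_default_password: test
    pg_default_database: test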
Pigsty Configuration Guide
Pigsty's configuration defines a database infrastructure and multiple database clusters through 200+ parameters; it is the soul of the project.
TL;DR
- The configuration file uses a YAML-format Ansible inventory; by default all machines and configuration parameters are defined in the same configuration file.
- The configuration file consists of two major parts: global variable definitions and database cluster definitions.
- The global variable definition `all.vars` contains the configuration shared by the entire environment; different environments (production, development, etc.) usually have their own set of configuration.
- The database cluster definition `all.children` uses Ansible group syntax: each database cluster is defined as a separate group, and machines under the special group `meta` are marked as controller (meta) nodes.
- Each database cluster/group can carry its own variables; group variables override global variables, e.g. use group variables to customize the default database name and username.
- Each database cluster contains at least one host, and each host can belong to only one database cluster, but machines under the controller group may also belong to an ordinary database cluster group at the same time.
- Each database cluster must contain one and only one primary (host variable `pg_role=primary`).
- Each database instance must carry three variables: cluster name `pg_cluster`, instance role `pg_role`, and instance sequence number `pg_seq` (sketched after this list).
- Variable precedence: command-line variables > host variables > group variables > global variables > default variables.
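A minimal sketch of those identity variables (values taken from the sandbox example below):
pg-test:
  vars:
    pg_cluster: pg-test                              # cluster name (group variable)
  hosts:
    10.10.10.11: { pg_role: primary, pg_seq: 1 }     # instance role & sequence (host variables)
    10.10.10.12: { pg_role: replica, pg_seq: 2 }
    10.10.10.13: { pg_role: replica, pg_seq: 3 }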
Cluster Inventory
The cluster inventory defines the database instances that the system manages. The minimum information required for a database cluster includes:
- An external IP address (or other connection information)
- Cluster name `pg_cluster`, following the DNS naming standard: only lowercase letters, digits, and `-`
- Instance sequence number `pg_seq`, a non-negative integer that must be unique within the cluster; it is usually recommended to assign them sequentially starting from 0.
- Instance role `pg_role`, which must be `primary` or `replica`; a database cluster has one and only one primary.
- Other variables, which can be configured at the host or group level as needed and override the global and default configuration.
The cluster inventory can also use the Ansible-standard `ini` format (not recommended), as shown below; the equivalent YAML form is sketched right after it.
[pg-test]
10.10.10.11 pg_role=primary pg_seq=1
10.10.10.12 pg_role=replica pg_seq=2
10.10.10.13 pg_role=replica pg_seq=3
[pg-test:vars]
pg_cluster = pg-test
pg_version = 12
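The equivalent YAML inventory form (a sketch placed under all.children):
pg-test:
  vars:
    pg_cluster: pg-test
    pg_version: 12
  hosts:
    10.10.10.11: { pg_role: primary, pg_seq: 1 }
    10.10.10.12: { pg_role: replica, pg_seq: 2 }
    10.10.10.13: { pg_role: replica, pg_seq: 3 }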
Global Variable Definition
Global variables are defined in `all.vars` by default, and can also be defined in other ways following the Ansible standard.
Global variables are intended to configure unified default options for an environment. Different global variables can be used for different environments (development, testing, production).
Global variables take effect on all machines. When users want a unified configuration, e.g. configuring the same DNS and NTP servers on all machines, installing the same packages, or using a unified admin (su) password, they can modify the global variables.
Global variable definitions are divided into nine sections; refer to the documentation for the specific configuration items:
- Connection information
- Local repo provisioning
- Node provisioning
- Meta (controller) node provisioning
- DCS (metadata store) provisioning
- Postgres installation
- Postgres cluster initialization
- Monitoring provisioning
- Load balancer / proxy provisioning
Minimal Single-Node Configuration Example
The following example defines an environment containing only one node.
---
######################################################################
# File : min.yml
# Path : inventory/min.yml
# Desc : Configuration file for (min)imal environment
# Note : follow ansible inventory file format
# Ctime : 2020-09-22
# Mtime : 2020-09-22
# Copyright (C) 2019-2020 Ruohang Feng
######################################################################
######################################################################
# Minimal Environment Inventory #
######################################################################
all: # top-level namespace, match all hosts
#==================================================================#
# Clusters #
#==================================================================#
children: # top-level groups, one group per database cluster (and special group 'meta')
#-----------------------------
# meta controller
#-----------------------------
meta: # special group 'meta' defines the main controller machine
vars:
meta_node: true # mark node as meta controller
ansible_group_priority: 99 # meta group is top priority
# nodes in meta group (1-3)
hosts:
10.10.10.10: # meta node IP ADDRESS
ansible_host: meta # comment this if not access via ssh alias
#-----------------------------
# cluster: pg-meta
#-----------------------------
pg-meta:
# - cluster configs - #
vars:
pg_cluster: pg-meta # define actual cluster name
pg_version: 12 # define installed pgsql version
pg_default_username: meta # default business username
pg_default_password: meta # default business password
pg_default_database: meta # default database name
vip_enabled: true # enable/disable vip (require members in same LAN)
vip_address: 10.10.10.2 # virtual ip address
vip_cidrmask: 8 # cidr network mask length
vip_interface: eth1 # interface to add virtual ip
#==================================================================#
# Globals #
#==================================================================#
vars:
proxy_env: # global proxy env when downloading packages
no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com"
...
Sandbox Environment Configuration File (vagrant)
---
######################################################################
# File : dev.yml
# Path : inventory/dev.yml
# Desc : Configuration file for development (demo) environment
# Note : follow ansible inventory file format
# Ctime : 2020-09-22
# Mtime : 2020-09-22
# Copyright (C) 2019-2020 Ruohang Feng
######################################################################
######################################################################
# Development Environment Inventory #
######################################################################
all: # top-level namespace, match all hosts
#==================================================================#
# Clusters #
#==================================================================#
children: # top-level groups, one group per database cluster (and special group 'meta')
#-----------------------------
# meta controller
#-----------------------------
meta: # special group 'meta' defines the main controller machine
vars:
meta_node: true # mark node as meta controller
ansible_group_priority: 99 # meta group is top priority
# nodes in meta group (1-3)
hosts:
10.10.10.10: # meta node IP ADDRESS
ansible_host: meta # comment this if not access via ssh alias
#-----------------------------
# cluster: pg-meta
#-----------------------------
pg-meta:
# - cluster configs - #
vars:
# basic settings
pg_cluster: pg-meta # define actual cluster name
pg_version: 13 # define installed pgsql version
node_tune: oltp # tune node into oltp|olap|crit|tiny mode
pg_conf: oltp.yml # tune pgsql into oltp/olap/crit/tiny mode
# misc
patroni_mode: pause # enter maintenance mode, {default|pause|remove}
patroni_watchdog_mode: off # disable watchdog (require|automatic|off)
pg_hostname: false # overwrite node hostname with pg instance name
pg_nodename: true # overwrite consul nodename with pg instance name
# bootstrap template
pg_init: initdb.sh # bootstrap postgres cluster with initdb.sh
pg_default_username: meta # default business username
pg_default_password: meta # default business password
pg_default_database: meta # default database name
# vip settings
vip_enabled: true # enable/disable vip (require members in same LAN)
vip_address: 10.10.10.2 # virtual ip address
vip_cidrmask: 8 # cidr network mask length
vip_interface: eth1 # interface to add virtual ip
# - cluster members - #
hosts:
10.10.10.10:
ansible_host: meta # comment this if not access via ssh alias
pg_role: primary # initial role: primary & replica
pg_seq: 1 # instance sequence among cluster
#-----------------------------
# cluster: pg-test
#-----------------------------
pg-test: # define cluster named 'pg-test'
# - cluster configs - #
vars:
# basic settings
pg_cluster: pg-test # define actual cluster name
pg_version: 13 # define installed pgsql version
node_tune: tiny # tune node into oltp|olap|crit|tiny mode
pg_conf: tiny.yml # tune pgsql into oltp/olap/crit/tiny mode
# bootstrap template
pg_init: initdb.sh # bootstrap postgres cluster with initdb.sh
pg_default_username: test # default business username
pg_default_password: test # default business password
pg_default_database: test # default database name
# vip settings
vip_enabled: true # enable/disable vip (require members in same LAN)
vip_address: 10.10.10.3 # virtual ip address
vip_cidrmask: 8 # cidr network mask length
vip_interface: eth1 # interface to add virtual ip
# - cluster members - #
hosts:
10.10.10.11:
ansible_host: node-1 # comment this if not access via ssh alias
pg_role: primary # initial role: primary & replica
pg_seq: 1 # instance sequence among cluster
10.10.10.12:
ansible_host: node-2 # comment this if not access via ssh alias
pg_role: replica # initial role: primary & replica
pg_seq: 2 # instance sequence among cluster
10.10.10.13:
ansible_host: node-3 # comment this if not access via ssh alias
pg_role: replica # initial role: primary & replica
pg_seq: 3 # instance sequence among cluster
#==================================================================#
# Globals #
#==================================================================#
vars:
#------------------------------------------------------------------------------
# CONNECTION PARAMETERS
#------------------------------------------------------------------------------
# this section defines connection parameters
# ansible_user: vagrant # admin user with ssh access and sudo privilege
proxy_env: # global proxy env when downloading packages
no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com"
#------------------------------------------------------------------------------
# REPO PROVISION
#------------------------------------------------------------------------------
# this section defines how to build a local repo
repo_enabled: true # build local yum repo on meta nodes?
repo_name: pigsty # local repo name
repo_address: yum.pigsty # repo external address (ip:port or url)
repo_port: 80 # listen address, must same as repo_address
repo_home: /www # default repo dir location
repo_rebuild: false # force re-download packages
repo_remove: true # remove existing repos
# - where to download - #
repo_upstreams:
- name: base
description: CentOS-$releasever - Base - Aliyun Mirror
baseurl:
- http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
- http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
- http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/
gpgcheck: no
failovermethod: priority
- name: updates
description: CentOS-$releasever - Updates - Aliyun Mirror
baseurl:
- http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
- http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
- http://mirrors.cloud.aliyuncs.com/centos/$releasever/updates/$basearch/
gpgcheck: no
failovermethod: priority
- name: extras
description: CentOS-$releasever - Extras - Aliyun Mirror
baseurl:
- http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
- http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
- http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
gpgcheck: no
failovermethod: priority
- name: epel
description: CentOS $releasever - EPEL - Aliyun Mirror
baseurl: http://mirrors.aliyun.com/epel/$releasever/$basearch
gpgcheck: no
failovermethod: priority
- name: grafana
description: Grafana - TsingHua Mirror
gpgcheck: no
baseurl: https://mirrors.tuna.tsinghua.edu.cn/grafana/yum/rpm
- name: prometheus
description: Prometheus and exporters
gpgcheck: no
baseurl: https://packagecloud.io/prometheus-rpm/release/el/$releasever/$basearch
- name: pgdg-common
description: PostgreSQL common RPMs for RHEL/CentOS $releasever - $basearch
gpgcheck: no
baseurl: https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-$releasever-$basearch
- name: pgdg13
description: PostgreSQL 13 for RHEL/CentOS $releasever - $basearch - Updates testing
gpgcheck: no
baseurl: https://download.postgresql.org/pub/repos/yum/13/redhat/rhel-$releasever-$basearch
- name: centos-sclo
description: CentOS-$releasever - SCLo
gpgcheck: no
mirrorlist: http://mirrorlist.centos.org?arch=$basearch&release=7&repo=sclo-sclo
- name: centos-sclo-rh
description: CentOS-$releasever - SCLo rh
gpgcheck: no
mirrorlist: http://mirrorlist.centos.org?arch=$basearch&release=7&repo=sclo-rh
- name: nginx
description: Nginx Official Yum Repo
skip_if_unavailable: true
gpgcheck: no
baseurl: http://nginx.org/packages/centos/$releasever/$basearch/
- name: haproxy
description: Copr repo for haproxy
skip_if_unavailable: true
gpgcheck: no
baseurl: https://download.copr.fedorainfracloud.org/results/roidelapluie/haproxy/epel-$releasever-$basearch/
# - what to download - #
repo_packages:
# repo bootstrap packages
- epel-release nginx wget yum-utils yum createrepo # bootstrap packages
# node basic packages
- ntp chrony uuid lz4 nc pv jq vim-enhanced make patch bash lsof wget unzip git tuned # basic system util
- readline zlib openssl libyaml libxml2 libxslt perl-ExtUtils-Embed ca-certificates # basic pg dependency
- numactl grubby sysstat dstat iotop bind-utils net-tools tcpdump socat ipvsadm telnet # system utils
# dcs & monitor packages
- grafana prometheus2 pushgateway alertmanager # monitor and ui
- node_exporter postgres_exporter nginx_exporter blackbox_exporter # exporter
- consul consul_exporter consul-template etcd # dcs
# python3 dependencies
- ansible python python-pip python-psycopg2 # ansible & python
- python3 python3-psycopg2 python36-requests python3-etcd python3-consul # python3
- python36-urllib3 python36-idna python36-pyOpenSSL python36-cryptography # python3 patroni extra deps
# proxy and load balancer
- haproxy keepalived dnsmasq # proxy and dns
# postgres common Packages
- patroni patroni-consul patroni-etcd pgbouncer pg_cli pgbadger pg_activity # major components
- pgcenter boxinfo check_postgres emaj pgbconsole pg_bloat_check pgquarrel # other common utils
- barman barman-cli pgloader pgFormatter pitrery pspg pgxnclient PyGreSQL pgadmin4
# postgres 13 packages
- postgresql13* postgis31* # postgres 13 and postgis 31
- pg_qualstats13 pg_stat_kcache13 system_stats_13 bgw_replstatus13 # stats extensions
- plr13 plsh13 plpgsql_check_13 pldebugger13 # pl extensions
- hdfs_fdw_13 mongo_fdw13 mysql_fdw_13 ogr_fdw13 redis_fdw_13 # FDW extensions
- wal2json13 count_distinct13 ddlx_13 geoip13 orafce13 # other extensions
- hypopg_13 ip4r13 jsquery_13 logerrors_13 periods_13 pg_auto_failover_13 pg_catcheck13
- pg_fkpart13 pg_jobmon13 pg_partman13 pg_prioritize_13 pg_track_settings13 pgaudit15_13
- pgcryptokey13 pgexportdoc13 pgimportdoc13 pgmemcache-13 pgmp13 pgq-13 # pgrouting_13
- pguint13 pguri13 prefix13 safeupdate_13 semver13 table_version13 tdigest13
# Postgres 12 Packages
# - postgresql12* postgis30_12* timescaledb_12 citus_12 pglogical_12 # postgres 12 basic
# - pg_qualstats12 pg_cron_12 pg_repack12 pg_squeeze12 pg_stat_kcache12 wal2json12 pgpool-II-12 pgpool-II-12-extensions python3-psycopg2 python2-psycopg2
# - ddlx_12 bgw_replstatus12 count_distinct12 extra_window_functions_12 geoip12 hll_12 hypopg_12 ip4r12 jsquery_12 multicorn12 osm_fdw12 mysql_fdw_12 ogr_fdw12 mongo_fdw12 hdfs_fdw_12 cstore_fdw_12 wal2mongo12 orafce12 pagila12 pam-pgsql12 passwordcheck_cracklib12 periods_12 pg_auto_failover_12 pg_bulkload12 pg_catcheck12 pg_comparator12 pg_filedump12 pg_fkpart12 pg_jobmon12 pg_partman12 pg_pathman12 pg_track_settings12 pg_wait_sampling_12 pgagent_12 pgaudit14_12 pgauditlogtofile-12 pgbconsole12 pgcryptokey12 pgexportdoc12 pgfincore12 pgimportdoc12 pgmemcache-12 pgmp12 pgq-12 pgrouting_12 pgtap12 plpgsql_check_12 plr12 plsh12 postgresql_anonymizer12 postgresql-unit12 powa_12 prefix12 repmgr12 safeupdate_12 semver12 slony1-12 sqlite_fdw12 sslutils_12 system_stats_12 table_version12 topn_12
repo_url_packages:
- https://github.com/Vonng/pg_exporter/releases/download/v0.2.0/pg_exporter-0.2.0-1.el7.x86_64.rpm
- https://github.com/cybertec-postgresql/vip-manager/releases/download/v0.6/vip-manager_0.6-1_amd64.rpm
- http://guichaz.free.fr/polysh/files/polysh-0.4-1.noarch.rpm
#------------------------------------------------------------------------------
# NODE PROVISION
#------------------------------------------------------------------------------
# this section defines how to provision nodes
# - node dns - #
node_dns_hosts: # static dns records in /etc/hosts
- 10.10.10.10 yum.pigsty
node_dns_server: add # add (default) | none (skip) | overwrite (remove old settings)
node_dns_servers: # dynamic nameserver in /etc/resolv.conf
- 10.10.10.10
node_dns_options: # dns resolv options
- options single-request-reopen timeout:1 rotate
- domain service.consul
# - node repo - #
node_repo_method: local # none|local|public (use local repo for production env)
node_repo_remove: true # whether remove existing repo
# local repo url (if method=local, make sure firewall is configured or disabled)
node_local_repo_url:
- http://yum.pigsty/pigsty.repo
# - node packages - #
node_packages: # common packages for all nodes
- wget,yum-utils,ntp,chrony,tuned,uuid,lz4,vim-minimal,make,patch,bash,lsof,wget,unzip,git,readline,zlib,openssl
- numactl,grubby,sysstat,dstat,iotop,bind-utils,net-tools,tcpdump,socat,ipvsadm,telnet,tuned,pv,jq
- python3,python3-psycopg2,python36-requests,python3-etcd,python3-consul
- python36-urllib3,python36-idna,python36-pyOpenSSL,python36-cryptography
- node_exporter,consul,consul-template,etcd,haproxy,keepalived,vip-manager
node_extra_packages: # extra packages for all nodes
- patroni,patroni-consul,patroni-etcd,pgbouncer,pgbadger,pg_activity
node_meta_packages: # packages for meta nodes only
- grafana,prometheus2,alertmanager,nginx_exporter,blackbox_exporter,pushgateway
- dnsmasq,nginx,ansible,pgbadger,polysh
# - node features - #
node_disable_numa: false # disable numa, important for production database, reboot required
node_disable_swap: false # disable swap, important for production database
node_disable_firewall: true # disable firewall (required if using kubernetes)
node_disable_selinux: true # disable selinux (required if using kubernetes)
node_static_network: true # keep dns resolver settings after reboot
node_disk_prefetch: false # setup disk prefetch on HDD to increase performance
# - node kernel modules - #
node_kernel_modules:
- softdog
- br_netfilter
- ip_vs
- ip_vs_rr
- ip_vs_rr
- ip_vs_wrr
- ip_vs_sh
- nf_conntrack_ipv4
# - node tuned - #
node_tune: tiny # install and activate tuned profile: none|oltp|olap|crit|tiny
node_sysctl_params: # set additional sysctl parameters, k:v format
net.bridge.bridge-nf-call-iptables: 1 # for kubernetes
# - node user - #
node_admin_setup: true # setup an default admin user ?
node_admin_uid: 88 # uid and gid for admin user
node_admin_username: admin # default admin user
node_admin_ssh_exchange: true # exchange ssh key among cluster ?
node_admin_pks: # public key list that will be installed
- 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQC7IMAMNavYtWwzAJajKqwdn3ar5BhvcwCnBTxxEkXhGlCO2vfgosSAQMEflfgvkiI5nM1HIFQ8KINlx1XLO7SdL5KdInG5LIJjAFh0pujS4kNCT9a5IGvSq1BrzGqhbEcwWYdju1ZPYBcJm/MG+JD0dYCh8vfrYB/cYMD0SOmNkQ== vagrant@pigsty.com'
# - node ntp - #
node_ntp_service: ntp # ntp or chrony
node_ntp_config: true # overwrite existing ntp config?
node_timezone: Asia/Shanghai # default node timezone
node_ntp_servers: # default NTP servers
- pool cn.pool.ntp.org iburst
- pool pool.ntp.org iburst
- pool time.pool.aliyun.com iburst
- server 10.10.10.10 iburst
#------------------------------------------------------------------------------
# META PROVISION
#------------------------------------------------------------------------------
# - ca - #
ca_method: create # create|copy|recreate
ca_subject: "/CN=root-ca" # self-signed CA subject
ca_homedir: /ca # ca cert directory
ca_cert: ca.crt # ca public key/cert
ca_key: ca.key # ca private key
# - nginx - #
nginx_upstream:
- { name: consul, host: c.pigsty, url: "127.0.0.1:8500" }
- { name: grafana, host: g.pigsty, url: "127.0.0.1:3000" }
- { name: prometheus, host: p.pigsty, url: "127.0.0.1:9090" }
- { name: alertmanager, host: a.pigsty, url: "127.0.0.1:9093" }
# - nameserver - #
dns_records: # dynamic dns record resolved by dnsmasq
- 10.10.10.2 pg-meta # sandbox vip for pg-meta
- 10.10.10.3 pg-test # sandbox vip for pg-test
- 10.10.10.10 meta-1 # sandbox node meta-1 (node-0)
- 10.10.10.11 node-1 # sandbox node node-1
- 10.10.10.12 node-2 # sandbox node node-2
- 10.10.10.13 node-3 # sandbox node node-3
- 10.10.10.10 pigsty
- 10.10.10.10 y.pigsty yum.pigsty
- 10.10.10.10 c.pigsty consul.pigsty
- 10.10.10.10 g.pigsty grafana.pigsty
- 10.10.10.10 p.pigsty prometheus.pigsty
- 10.10.10.10 a.pigsty alertmanager.pigsty
- 10.10.10.10 n.pigsty ntp.pigsty
# - prometheus - #
prometheus_scrape_interval: 2s # global scrape & evaluation interval (2s for dev, 15s for prod)
prometheus_scrape_timeout: 1s # global scrape timeout (1s for dev, 8s for prod)
prometheus_metrics_path: /metrics # default metrics path (only affect job 'pg')
prometheus_data_dir: /export/prometheus/data # prometheus data dir
prometheus_retention: 30d # how long to keep
# - grafana - #
grafana_url: http://10.10.10.10:3000 # grafana url
grafana_admin_password: admin # default grafana admin user password
grafana_plugin: install # none|install|reinstall
grafana_cache: /www/pigsty/grafana/plugins.tar.gz # path to grafana plugins tarball
grafana_provision_mode: db # none|db|api
grafana_plugins: # default grafana plugins list
- redis-datasource
- simpod-json-datasource
- fifemon-graphql-datasource
- sbueringer-consul-datasource
- camptocamp-prometheus-alertmanager-datasource
- ryantxu-ajax-panel
- marcusolsson-hourly-heatmap-panel
- michaeldmoore-multistat-panel
- marcusolsson-treemap-panel
- pr0ps-trackmap-panel
- dalvany-image-panel
- magnesium-wordcloud-panel
- cloudspout-button-panel
- speakyourcode-button-panel
- jdbranham-diagram-panel
- grafana-piechart-panel
- snuids-radar-panel
- digrich-bubblechart-panel
grafana_git_plugins:
- https://github.com/Vonng/grafana-echarts
# grafana_dashboards: [] # default dashboards (use role default)
#------------------------------------------------------------------------------
# DCS PROVISION
#------------------------------------------------------------------------------
dcs_type: consul # consul | etcd | both
dcs_name: pigsty # consul dc name | etcd initial cluster token
# dcs server dict in name:ip format
dcs_servers:
meta-1: 10.10.10.10 # you could use existing dcs cluster
# meta-2: 10.10.10.11 # host which have their IP listed here will be init as server
# meta-3: 10.10.10.12 # 3 or 5 dcs nodes are recommend for production environment
dcs_exists_action: skip # abort|skip|clean if dcs server already exists
consul_data_dir: /var/lib/consul # consul data dir (/var/lib/consul by default)
etcd_data_dir: /var/lib/etcd # etcd data dir (/var/lib/etcd by default)
#------------------------------------------------------------------------------
# POSTGRES INSTALLATION
#------------------------------------------------------------------------------
# - dbsu - #
pg_dbsu: postgres # os user for database, postgres by default (change it is not recommended!)
pg_dbsu_uid: 26 # os dbsu uid and gid, 26 for default postgres users and groups
pg_dbsu_sudo: limit # none|limit|all|nopass (Privilege for dbsu, limit is recommended)
pg_dbsu_home: /var/lib/pgsql # home directory of os dbsu
pg_dbsu_ssh_exchange: false # exchange ssh key among same cluster
# - postgres packages - #
pg_version: 12 # default postgresql version
pgdg_repo: false # use official pgdg yum repo (disable if you have local mirror)
pg_add_repo: false # add postgres related repo before install (useful if you want a simple install)
pg_bin_dir: /usr/pgsql/bin # postgres binary dir
pg_packages:
- postgresql${pg_version}*
- postgis31_${pg_version}*
- pgbouncer patroni pg_exporter pgbadger
- patroni patroni-consul patroni-etcd pgbouncer pgbadger pg_activity
- python3 python3-psycopg2 python36-requests python3-etcd python3-consul
- python36-urllib3 python36-idna python36-pyOpenSSL python36-cryptography
pg_extensions:
- pg_qualstats${pg_version} pg_stat_kcache${pg_version} wal2json${pg_version}
# - ogr_fdw${pg_version} mysql_fdw_${pg_version} redis_fdw_${pg_version} mongo_fdw${pg_version} hdfs_fdw_${pg_version}
# - count_distinct${version} ddlx_${version} geoip${version} orafce${version} # popular features
# - hypopg_${version} ip4r${version} jsquery_${version} logerrors_${version} periods_${version} pg_auto_failover_${version} pg_catcheck${version}
# - pg_fkpart${version} pg_jobmon${version} pg_partman${version} pg_prioritize_${version} pg_track_settings${version} pgaudit15_${version}
# - pgcryptokey${version} pgexportdoc${version} pgimportdoc${version} pgmemcache-${version} pgmp${version} pgq-${version} pgquarrel pgrouting_${version}
# - pguint${version} pguri${version} prefix${version} safeupdate_${version} semver${version} table_version${version} tdigest${version}
#------------------------------------------------------------------------------
# POSTGRES CLUSTER PROVISION
#------------------------------------------------------------------------------
# - identity - #
# pg_cluster: # [REQUIRED] cluster name (validated during pg_preflight)
# pg_seq: 0 # [REQUIRED] instance seq (validated during pg_preflight)
# pg_role: replica # [REQUIRED] service role (validated during pg_preflight)
pg_hostname: false # overwrite node hostname with pg instance name
pg_nodename: true # overwrite consul nodename with pg instance name
# - retention - #
# pg_exists_action, available options: abort|clean|skip
# - abort: abort entire play's execution (default)
# - clean: remove existing cluster (dangerous)
# - skip: end current play for this host
# pg_exists: false # auxiliary flag variable (DO NOT SET THIS)
pg_exists_action: clean
# - storage - #
pg_data: /pg/data # postgres data directory
pg_fs_main: /export # data disk mount point /pg -> {{ pg_fs_main }}/postgres/{{ pg_instance }}
pg_fs_bkup: /var/backups # backup disk mount point /pg/* -> {{ pg_fs_bkup }}/postgres/{{ pg_instance }}/*
# - connection - #
pg_listen: '0.0.0.0' # postgres listen address, '0.0.0.0' by default (all ipv4 addr)
pg_port: 5432 # postgres port (5432 by default)
# - patroni - #
# patroni_mode, available options: default|pause|remove
# - default: default ha mode
# - pause: into maintenance mode
# - remove: remove patroni after bootstrap
patroni_mode: default # pause|default|remove
pg_namespace: /pg # top level key namespace in dcs
patroni_port: 8008 # default patroni port
patroni_watchdog_mode: automatic # watchdog mode: off|automatic|required
# - template - #
pg_conf: tiny.yml # user provided patroni config template path
pg_init: initdb.sh # user provided post-init script path, default: initdb.sh
# - authentication - #
pg_hba_common:
- '"# allow: meta node access with password"'
- host all all 10.10.10.10/32 md5
- '"# allow: intranet admin role with password"'
- host all +dbrole_admin 10.0.0.0/8 md5
- host all +dbrole_admin 172.16.0.0/12 md5
- host all +dbrole_admin 192.168.0.0/16 md5
- '"# allow local (pgbouncer) read-write user (production user) password access"'
- local all +dbrole_readwrite md5
- host all +dbrole_readwrite 127.0.0.1/32 md5
- '"# intranet common user password access"'
- host all all 10.0.0.0/8 md5
- host all all 172.16.0.0/12 md5
- host all all 192.168.0.0/16 md5
pg_hba_primary: [ ]
pg_hba_replica:
- '"# allow remote readonly user (stats, personal user) password access (directly)"'
- local all +dbrole_readonly md5
- host all +dbrole_readonly 127.0.0.1/32 md5
# this hba is added directly to /etc/pgbouncer/pgb_hba.conf instead of patroni conf
pg_hba_pgbouncer:
- '# biz_user intranet password access'
- local all all md5
- host all all 127.0.0.1/32 md5
- host all all 10.0.0.0/8 md5
- host all all 172.16.0.0/12 md5
- host all all 192.168.0.0/16 md5
# - credential - #
pg_dbsu_password: '' # dbsu password (leaving blank will disable sa password login)
pg_replication_username: replicator # replication user
pg_replication_password: replicator # replication password
pg_monitor_username: dbuser_monitor # monitor user
pg_monitor_password: dbuser_monitor # monitor password
# - default - #
# pg_default_username: postgres # non 'postgres' will create a default admin user (not superuser)
# pg_default_password: postgres # dbsu password, omit for 'postgres'
# pg_default_database: postgres # non 'postgres' will create a default database
pg_default_schema: public # default schema will be create under default database and used as first element of search_path
pg_default_extensions: "tablefunc,postgres_fdw,file_fdw,btree_gist,btree_gin,pg_trgm"
# - pgbouncer - #
pgbouncer_port: 6432 # default pgbouncer port
pgbouncer_poolmode: transaction # default pooling mode: transaction pooling
pgbouncer_max_db_conn: 100 # important! do not set this larger than postgres max conn or conn limit
#------------------------------------------------------------------------------
# MONITOR PROVISION
#------------------------------------------------------------------------------
# - monitor options -
node_exporter_port: 9100 # default port for node exporter
pg_exporter_port: 9630 # default port for pg exporter
pgbouncer_exporter_port: 9631 # default port for pgbouncer exporter
exporter_metrics_path: /metrics # default metric path for pg related exporter
#------------------------------------------------------------------------------
# PROXY PROVISION
#------------------------------------------------------------------------------
# - vip - #
vip_enabled: true # level2 vip requires primary/standby under same switch
# vip_address: 127.0.0.1 # virtual ip address ip/cidr
# vip_cidrmask: 32 # virtual ip address cidr mask
# vip_interface: eth0 # virtual ip network interface
# - haproxy - #
haproxy_enabled: true # enable haproxy among every cluster members
haproxy_policy: leastconn # roundrobin, leastconn
haproxy_admin_username: admin # default haproxy admin username
haproxy_admin_password: admin # default haproxy admin password
haproxy_client_timeout: 3h # client side connection timeout
haproxy_server_timeout: 3h # server side connection timeout
haproxy_exporter_port: 9101 # default admin/exporter port
haproxy_check_port: 8008 # default health check port (patroni 8008 by default)
haproxy_primary_port: 5433 # default primary port 5433
haproxy_replica_port: 5434 # default replica port 5434
haproxy_backend_port: 6432 # default target port: pgbouncer:6432 postgres:5432
...
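After editing the inventory, it helps to verify that it parses and that variable precedence resolves the way you expect. The following is a minimal sketch; the inventory file name pigsty.yml is an assumption, adjust it to your own environment.
# dump the resolved inventory: groups, hosts and merged variables
ansible-inventory -i pigsty.yml --list
# show the variables that finally apply to a single host
ansible-inventory -i pigsty.yml --host 10.10.10.10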
Customizing Initialization Templates
Besides the parameters above, Pigsty provides two additional ways of customization.
Database initialization template
The initialization template is the definition file used to bootstrap a database cluster. It is located at roles/postgres/templates/patroni.yml by default and follows the patroni.yml configuration file format.
Four pre-defined initialization templates are shipped in the templates/ directory:
oltp.yml
Regular OLTP template, the default configuration.
olap.yml
OLAP template, with higher parallelism, optimized for throughput and long-running queries.
crit.yml
Critical-business template, based on the OLTP template and tuned for security and data integrity: it uses synchronous replication and enables data checksums.
tiny.yml
Tiny database template, optimized for low-resource scenarios such as demo clusters running in virtual machines.
You can customize or modify these templates and select the desired one via the pg_conf parameter.
Database initialization script
After a database cluster is initialized, users usually want to run their own customization scripts, e.g. creating unified default roles and users, creating default schemas, and configuring default privileges.
Pigsty ships a default initialization script at roles/postgres/templates/initdb.sh, which creates the default database and users based on the following variables:
pg_default_username: postgres # non 'postgres' will create a default admin user (not superuser)
pg_default_password: postgres # dbsu password, omit for 'postgres'
pg_default_database: postgres # non 'postgres' will create a default database
pg_default_schema: public # default schema will be create under default database and used as first element of search_path
pg_default_extensions: "tablefunc,postgres_fdw,file_fdw,btree_gist,btree_gin,pg_trgm"
You can customize this script and select your own via the pg_init parameter.
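Both parameters can also be overridden at runtime, since command-line extra vars take the highest precedence. A minimal sketch, assuming the provisioning playbook sits in the current directory and a cluster group named pg-test exists in your inventory:
# override the patroni template and the init script for one cluster
ansible-playbook ./initdb.yml -l pg-test -e pg_conf=olap.yml -e pg_init=initdb.sh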
3 - Infrastructure Provisioning
How to define infrastructure with parameters, and how to provision it
Infrastructure Provision [DRAFT]
TL;DR
- Configure infrastructure parameters
- Run infra provision playbook
Parameters
#------------------------------------------------------------------------------
# CONNECTION PARAMETERS
#------------------------------------------------------------------------------
# this section defines connection parameters
# ansible_user: vagrant # admin user with ssh access and sudo privilege
proxy_env: # global proxy env when downloading packages
# http_proxy: 'http://xxxxxx'
# https_proxy: 'http://xxxxxx'
# all_proxy: 'http://xxxxxx'
no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com"
#------------------------------------------------------------------------------
# REPO PROVISION
#------------------------------------------------------------------------------
# this section defines how to build a local repo
repo_enabled: true # build local yum repo on meta nodes?
repo_name: pigsty # local repo name
repo_address: yum.pigsty # repo external address (ip:port or url)
repo_port: 80 # listen address, must same as repo_address
repo_home: /www # default repo dir location
repo_rebuild: false # force re-download packages
repo_remove: true # remove existing repos
# - where to download - #
repo_upstreams:
- name: base
description: CentOS-$releasever - Base - Aliyun Mirror
baseurl:
- http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
- http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
- http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/
gpgcheck: no
failovermethod: priority
- name: updates
description: CentOS-$releasever - Updates - Aliyun Mirror
baseurl:
- http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
- http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
- http://mirrors.cloud.aliyuncs.com/centos/$releasever/updates/$basearch/
gpgcheck: no
failovermethod: priority
- name: extras
description: CentOS-$releasever - Extras - Aliyun Mirror
baseurl:
- http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
- http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
- http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
gpgcheck: no
failovermethod: priority
- name: epel
description: CentOS $releasever - EPEL - Aliyun Mirror
baseurl: http://mirrors.aliyun.com/epel/$releasever/$basearch
gpgcheck: no
failovermethod: priority
- name: grafana
description: Grafana - TsingHua Mirror
gpgcheck: no
baseurl: https://mirrors.tuna.tsinghua.edu.cn/grafana/yum/rpm
- name: prometheus
description: Prometheus and exporters
gpgcheck: no
baseurl: https://packagecloud.io/prometheus-rpm/release/el/$releasever/$basearch
- name: pgdg-common
description: PostgreSQL common RPMs for RHEL/CentOS $releasever - $basearch
gpgcheck: no
baseurl: https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-$releasever-$basearch
- name: pgdg13
description: PostgreSQL 13 for RHEL/CentOS $releasever - $basearch - Updates testing
gpgcheck: no
baseurl: https://download.postgresql.org/pub/repos/yum/13/redhat/rhel-$releasever-$basearch
- name: centos-sclo
description: CentOS-$releasever - SCLo
gpgcheck: no
mirrorlist: http://mirrorlist.centos.org?arch=$basearch&release=7&repo=sclo-sclo
- name: centos-sclo-rh
description: CentOS-$releasever - SCLo rh
gpgcheck: no
mirrorlist: http://mirrorlist.centos.org?arch=$basearch&release=7&repo=sclo-rh
- name: nginx
description: Nginx Official Yum Repo
skip_if_unavailable: true
gpgcheck: no
baseurl: http://nginx.org/packages/centos/$releasever/$basearch/
- name: haproxy
description: Copr repo for haproxy
skip_if_unavailable: true
gpgcheck: no
baseurl: https://download.copr.fedorainfracloud.org/results/roidelapluie/haproxy/epel-$releasever-$basearch/
# for latest consul & kubernetes
- name: harbottle
description: Copr repo for main owned by harbottle
skip_if_unavailable: true
gpgcheck: no
baseurl: https://download.copr.fedorainfracloud.org/results/harbottle/main/epel-$releasever-$basearch/
# - what to download - #
repo_packages:
# repo bootstrap packages
- epel-release nginx wget yum-utils yum createrepo # bootstrap packages
# node basic packages
- ntp chrony uuid lz4 nc pv jq vim-enhanced make patch bash lsof wget unzip git tuned # basic system util
- readline zlib openssl libyaml libxml2 libxslt perl-ExtUtils-Embed ca-certificates # basic pg dependency
- numactl grubby sysstat dstat iotop bind-utils net-tools tcpdump socat ipvsadm telnet # system utils
# dcs & monitor packages
- grafana prometheus2 pushgateway alertmanager # monitor and ui
- node_exporter postgres_exporter nginx_exporter blackbox_exporter # exporter
- consul consul_exporter consul-template etcd # dcs
# python3 dependencies
- ansible python python-pip python-psycopg2 # ansible & python
- python3 python3-psycopg2 python36-requests python3-etcd python3-consul # python3
- python36-urllib3 python36-idna python36-pyOpenSSL python36-cryptography # python3 patroni extra deps
# proxy and load balancer
- haproxy keepalived dnsmasq # proxy and dns
# postgres common Packages
- patroni patroni-consul patroni-etcd pgbouncer pg_cli pgbadger pg_activity # major components
- pgcenter boxinfo check_postgres emaj pgbconsole pg_bloat_check pgquarrel # other common utils
- barman barman-cli pgloader pgFormatter pitrery pspg pgxnclient PyGreSQL pgadmin4 tail_n_mail
# postgres 13 packages
- postgresql13* postgis31* citus_13 pgrouting_13 # postgres 13 and postgis 31
- pg_repack13 pg_squeeze13 # maintenance extensions
- pg_qualstats13 pg_stat_kcache13 system_stats_13 bgw_replstatus13 # stats extensions
- plr13 plsh13 plpgsql_check_13 plproxy13 pldebugger13 # pl extensions
- hdfs_fdw_13 mongo_fdw13 mysql_fdw_13 ogr_fdw13 redis_fdw_13 pgbouncer_fdw13 # FDW extensions
- wal2json13 count_distinct13 ddlx_13 geoip13 orafce13 # MISC extensions
- rum_13 hypopg_13 ip4r13 jsquery_13 logerrors_13 periods_13 pg_auto_failover_13 pg_catcheck13
- pg_fkpart13 pg_jobmon13 pg_partman13 pg_prioritize_13 pg_track_settings13 pgaudit15_13
- pgcryptokey13 pgexportdoc13 pgimportdoc13 pgmemcache-13 pgmp13 pgq-13
- pguint13 pguri13 prefix13 safeupdate_13 semver13 table_version13 tdigest13
repo_url_packages:
- https://github.com/Vonng/pg_exporter/releases/download/v0.3.1/pg_exporter-0.3.1-1.el7.x86_64.rpm
- https://github.com/cybertec-postgresql/vip-manager/releases/download/v0.6/vip-manager_0.6-1_amd64.rpm
- http://guichaz.free.fr/polysh/files/polysh-0.4-1.noarch.rpm
#------------------------------------------------------------------------------
# NODE PROVISION
#------------------------------------------------------------------------------
# this section defines how to provision nodes
# - node dns - #
node_dns_hosts: # static dns records in /etc/hosts
- 10.10.10.10 yum.pigsty
node_dns_server: add # add (default) | none (skip) | overwrite (remove old settings)
node_dns_servers: # dynamic nameserver in /etc/resolv.conf
- 10.10.10.10
node_dns_options: # dns resolv options
- options single-request-reopen timeout:1 rotate
- domain service.consul
# - node repo - #
node_repo_method: local # none|local|public (use local repo for production env)
node_repo_remove: true # whether remove existing repo
# local repo url (if method=local, make sure firewall is configured or disabled)
node_local_repo_url:
- http://yum.pigsty/pigsty.repo
# - node packages - #
node_packages: # common packages for all nodes
- wget,yum-utils,ntp,chrony,tuned,uuid,lz4,vim-minimal,make,patch,bash,lsof,wget,unzip,git,readline,zlib,openssl
- numactl,grubby,sysstat,dstat,iotop,bind-utils,net-tools,tcpdump,socat,ipvsadm,telnet,tuned,pv,jq
- python3,python3-psycopg2,python36-requests,python3-etcd,python3-consul
- python36-urllib3,python36-idna,python36-pyOpenSSL,python36-cryptography
- node_exporter,consul,consul-template,etcd,haproxy,keepalived,vip-manager
node_extra_packages: # extra packages for all nodes
- patroni,patroni-consul,patroni-etcd,pgbouncer,pgbadger,pg_activity
node_meta_packages: # packages for meta nodes only
- grafana,prometheus2,alertmanager,nginx_exporter,blackbox_exporter,pushgateway
- dnsmasq,nginx,ansible,pgbadger,polysh
# - node features - #
node_disable_numa: false # disable numa, important for production database, reboot required
node_disable_swap: false # disable swap, important for production database
node_disable_firewall: true # disable firewall (required if using kubernetes)
node_disable_selinux: true # disable selinux (required if using kubernetes)
node_static_network: true # keep dns resolver settings after reboot
node_disk_prefetch: false # setup disk prefetch on HDD to increase performance
# - node kernel modules - #
node_kernel_modules:
- softdog
- br_netfilter
- ip_vs
- ip_vs_rr
- ip_vs_wrr
- ip_vs_sh
- nf_conntrack_ipv4
# - node tuned - #
node_tune: tiny # install and activate tuned profile: none|oltp|olap|crit|tiny
node_sysctl_params: # set additional sysctl parameters, k:v format
net.bridge.bridge-nf-call-iptables: 1 # for kubernetes
# - node user - #
node_admin_setup: true # setup an default admin user ?
node_admin_uid: 88 # uid and gid for admin user
node_admin_username: admin # default admin user
node_admin_ssh_exchange: true # exchange ssh key among cluster ?
node_admin_pks: # public key list that will be installed
- 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQC7IMAMNavYtWwzAJajKqwdn3ar5BhvcwCnBTxxEkXhGlCO2vfgosSAQMEflfgvkiI5nM1HIFQ8KINlx1XLO7SdL5KdInG5LIJjAFh0pujS4kNCT9a5IGvSq1BrzGqhbEcwWYdju1ZPYBcJm/MG+JD0dYCh8vfrYB/cYMD0SOmNkQ== vagrant@pigsty.com'
# - node ntp - #
node_ntp_service: ntp # ntp or chrony
node_ntp_config: true # overwrite existing ntp config?
node_timezone: Asia/Shanghai # default node timezone
node_ntp_servers: # default NTP servers
- pool cn.pool.ntp.org iburst
- pool pool.ntp.org iburst
- pool time.pool.aliyun.com iburst
- server 10.10.10.10 iburst
#------------------------------------------------------------------------------
# META PROVISION
#------------------------------------------------------------------------------
# - ca - #
ca_method: create # create|copy|recreate
ca_subject: "/CN=root-ca" # self-signed CA subject
ca_homedir: /ca # ca cert directory
ca_cert: ca.crt # ca public key/cert
ca_key: ca.key # ca private key
# - nginx - #
nginx_upstream:
- {name: home, host: pigsty, url: "127.0.0.1:3000"}
- { name: consul, host: c.pigsty, url: "127.0.0.1:8500" }
- { name: grafana, host: g.pigsty, url: "127.0.0.1:3000" }
- { name: prometheus, host: p.pigsty, url: "127.0.0.1:9090" }
- { name: alertmanager, host: a.pigsty, url: "127.0.0.1:9093" }
# - nameserver - #
dns_records: # dynamic dns record resolved by dnsmasq
- 10.10.10.2 pg-meta # sandbox vip for pg-meta
- 10.10.10.3 pg-test # sandbox vip for pg-test
- 10.10.10.10 meta-1 # sandbox node meta-1 (node-0)
- 10.10.10.11 node-1 # sandbox node node-1
- 10.10.10.12 node-2 # sandbox node node-2
- 10.10.10.13 node-3 # sandbox node node-3
- 10.10.10.10 pigsty
- 10.10.10.10 y.pigsty yum.pigsty
- 10.10.10.10 c.pigsty consul.pigsty
- 10.10.10.10 g.pigsty grafana.pigsty
- 10.10.10.10 p.pigsty prometheus.pigsty
- 10.10.10.10 a.pigsty alertmanager.pigsty
- 10.10.10.10 n.pigsty ntp.pigsty
# - prometheus - #
prometheus_scrape_interval: 2s # global scrape & evaluation interval (2s for dev, 15s for prod)
prometheus_scrape_timeout: 1s # global scrape timeout (1s for dev, 8s for prod)
prometheus_metrics_path: /metrics # default metrics path (only affect job 'pg')
prometheus_data_dir: /export/prometheus/data # prometheus data dir
prometheus_retention: 30d # how long to keep
# - grafana - #
grafana_url: http://admin:admin@10.10.10.10:3000 # grafana url
grafana_admin_password: admin # default grafana admin user password
grafana_plugin: install # none|install|reinstall
grafana_cache: /www/pigsty/grafana/plugins.tar.gz # path to grafana plugins tarball
grafana_customize: true # customize grafana resources
grafana_plugins: # default grafana plugins list
- redis-datasource
- simpod-json-datasource
- fifemon-graphql-datasource
- sbueringer-consul-datasource
- camptocamp-prometheus-alertmanager-datasource
- ryantxu-ajax-panel
- marcusolsson-hourly-heatmap-panel
- michaeldmoore-multistat-panel
- marcusolsson-treemap-panel
- pr0ps-trackmap-panel
- dalvany-image-panel
- magnesium-wordcloud-panel
- cloudspout-button-panel
- speakyourcode-button-panel
- jdbranham-diagram-panel
- grafana-piechart-panel
- snuids-radar-panel
- digrich-bubblechart-panel
grafana_git_plugins:
- https://github.com/Vonng/grafana-echarts
#------------------------------------------------------------------------------
# DCS PROVISION
#------------------------------------------------------------------------------
dcs_type: consul # consul | etcd | both
dcs_name: pigsty # consul dc name | etcd initial cluster token
# dcs server dict in name:ip format
dcs_servers:
meta-1: 10.10.10.10 # you could use existing dcs cluster
# meta-2: 10.10.10.11 # host which have their IP listed here will be init as server
# meta-3: 10.10.10.12 # 3 or 5 dcs nodes are recommend for production environment
dcs_exists_action: skip # abort|skip|clean if dcs server already exists
consul_data_dir: /var/lib/consul # consul data dir (/var/lib/consul by default)
etcd_data_dir: /var/lib/etcd # etcd data dir (/var/lib/etcd by default)
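Once the repo and meta provisioning steps have run, the meta node can be sanity-checked with plain HTTP requests. A sketch assuming the sandbox addresses and domain names used in this listing (yum.pigsty and g.pigsty must resolve, e.g. via the dnsmasq records above):
# check the local yum repo definition served from the meta node
curl -s http://yum.pigsty/pigsty.repo
# check a web UI proxied by nginx according to nginx_upstream, e.g. grafana
curl -sI http://g.pigsty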
Playbook
infra.yml bootstraps the entire infrastructure on the given inventory (see the example run after the task list below):
play #1 (meta): Init local repo TAGS: [repo]
tasks:
Create local repo directory TAGS: [repo, repo_dir]
Backup & remove existing repos TAGS: [repo, repo_upstream]
Add required upstream repos TAGS: [repo, repo_upstream]
Check repo pkgs cache exists TAGS: [repo, repo_prepare]
Set fact whether repo_exists TAGS: [repo, repo_prepare]
Move upstream repo to backup TAGS: [repo, repo_prepare]
Add local file system repos TAGS: [repo, repo_prepare]
repo : Remake yum cache if not exists TAGS: [repo, repo_prepare]
Install repo bootstrap packages TAGS: [repo, repo_boot]
Render repo nginx server files TAGS: [repo, repo_nginx]
Disable selinux for repo server TAGS: [repo, repo_nginx]
Launch repo nginx server TAGS: [repo, repo_nginx]
Waits repo server online TAGS: [repo, repo_nginx]
repo : Download web url packages TAGS: [repo, repo_download]
Download repo packages TAGS: [repo, repo_download]
Download repo pkg deps TAGS: [repo, repo_download]
Create local repo index TAGS: [repo, repo_download]
repo : Copy bootstrap scripts TAGS: [repo, repo_download, repo_script]
Mark repo cache as valid TAGS: [repo, repo_download]
play #2 (all): Provision Node TAGS: [node]
tasks:
Update node hostname TAGS: [node, node_name]
node : Add new hostname to /etc/hosts TAGS: [node, node_name]
node : Write static dns records TAGS: [node, node_dns]
node : Get old nameservers TAGS: [node, node_resolv]
node : Truncate resolv file TAGS: [node, node_resolv]
node : Write resolv options TAGS: [node, node_resolv]
node : Add new nameservers TAGS: [node, node_resolv]
node : Append old nameservers TAGS: [node, node_resolv]
node : Node configure disable firewall TAGS: [node, node_firewall]
node : Node disable selinux by default TAGS: [node, node_firewall]
node : Backup existing repos TAGS: [node, node_repo]
node : Install upstream repo TAGS: [node, node_repo]
node : Install local repo TAGS: [node, node_repo]
Install node basic packages TAGS: [node, node_pkgs]
Install node extra packages TAGS: [node, node_pkgs]
node : Install meta specific packages TAGS: [node, node_pkgs]
node : Node configure disable numa TAGS: [node, node_feature]
node : Node configure disable swap TAGS: [node, node_feature]
node : Node configure unmount swap TAGS: [node, node_feature]
node : Node setup static network TAGS: [node, node_feature]
node : Node configure disable firewall TAGS: [node, node_feature]
node : Node configure disk prefetch TAGS: [node, node_feature]
node : Enable linux kernel modules TAGS: [node, node_kernel]
node : Enable kernel module on reboot TAGS: [node, node_kernel]
node : Get config parameter page count TAGS: [node, node_tuned]
node : Get config parameter page size TAGS: [node, node_tuned]
node : Tune shmmax and shmall via mem TAGS: [node, node_tuned]
node : Create tuned profiles TAGS: [node, node_tuned]
node : Render tuned profiles TAGS: [node, node_tuned]
node : Active tuned profile TAGS: [node, node_tuned]
node : Change additional sysctl params TAGS: [node, node_tuned]
node : Copy default user bash profile TAGS: [node, node_profile]
Setup node default pam ulimits TAGS: [node, node_ulimit]
node : Create os user group admin TAGS: [node, node_admin]
node : Create os user admin TAGS: [node, node_admin]
node : Grant admin group nopass sudo TAGS: [node, node_admin]
node : Add no host checking to ssh config TAGS: [node, node_admin]
node : Add admin ssh no host checking TAGS: [node, node_admin]
node : Fetch all admin public keys TAGS: [node, node_admin]
node : Exchange all admin ssh keys TAGS: [node, node_admin]
node : Install public keys TAGS: [node, node_admin]
node : Install ntp package TAGS: [node, ntp_install]
node : Install chrony package TAGS: [node, ntp_install]
Setup default node timezone TAGS: [node, ntp_config]
node : Copy the ntp.conf file TAGS: [node, ntp_config]
node : Copy the chrony.conf template TAGS: [node, ntp_config]
node : Launch ntpd service TAGS: [node, ntp_launch]
node : Launch chronyd service TAGS: [node, ntp_launch]
play #3 (meta): Init meta service TAGS: [meta]
tasks:
Create local ca directory TAGS: [ca, ca_dir, meta]
Copy ca cert from local files TAGS: [ca, ca_copy, meta]
Check ca key cert exists TAGS: [ca, ca_create, meta]
ca : Create self-signed CA key-cert TAGS: [ca, ca_create, meta]
Make sure nginx package installed TAGS: [meta, nginx]
Copy nginx default config TAGS: [meta, nginx]
Copy nginx upstream conf TAGS: [meta, nginx]
nginx : Create local html directory TAGS: [meta, nginx]
Update default nginx index page TAGS: [meta, nginx]
Restart meta nginx service TAGS: [meta, nginx]
Wait for nginx service online TAGS: [meta, nginx]
Make sure nginx exporter installed TAGS: [meta, nginx, nginx_exporter]
Config nginx_exporter options TAGS: [meta, nginx, nginx_exporter]
Restart nginx_exporter service TAGS: [meta, nginx, nginx_exporter]
Wait for nginx exporter online TAGS: [meta, nginx, nginx_exporter]
Install prometheus and alertmanager TAGS: [meta, prometheus, prometheus_install]
Wipe out prometheus config dir TAGS: [meta, prometheus, prometheus_clean]
Wipe out existing prometheus data TAGS: [meta, prometheus, prometheus_clean]
Recreate prometheus data dir TAGS: [meta, prometheus, prometheus_config]
Copy /etc/prometheus configs TAGS: [meta, prometheus, prometheus_config]
Copy /etc/prometheus opts TAGS: [meta, prometheus, prometheus_config]
Overwrite prometheus scrape_interval TAGS: [meta, prometheus, prometheus_config]
Overwrite prometheus evaluation_interval TAGS: [meta, prometheus, prometheus_config]
Overwrite prometheus scrape_timeout TAGS: [meta, prometheus, prometheus_config]
Overwrite prometheus pg metrics path TAGS: [meta, prometheus, prometheus_config]
Launch prometheus service TAGS: [meta, prometheus, prometheus_launch]
prometheus : Launch alertmanager service TAGS: [meta, prometheus, prometheus_launch]
Wait for prometheus online TAGS: [meta, prometheus, prometheus_launch]
prometheus : Wait for alertmanager online TAGS: [meta, prometheus, prometheus_launch]
Make sure grafana is installed TAGS: [grafana, grafana_install, meta]
Check grafana plugin cache exists TAGS: [grafana, grafana_plugin, meta]
Provision grafana plugins via cache TAGS: [grafana, grafana_plugin, meta]
Download grafana plugins from web TAGS: [grafana, grafana_plugin, meta]
Download grafana plugins from web TAGS: [grafana, grafana_plugin, meta]
Create grafana plugins cache TAGS: [grafana, grafana_plugin, meta]
Copy /etc/grafana/grafana.ini TAGS: [grafana, grafana_config, meta]
Remove grafana provision dir TAGS: [grafana, grafana_config, meta]
grafana : Copy provisioning content TAGS: [grafana, grafana_config, meta]
grafana : Copy pigsty dashboards TAGS: [grafana, grafana_config, meta]
grafana : Copy pigsty icon image TAGS: [grafana, grafana_config, meta]
Replace grafana icon with pigsty TAGS: [grafana, grafana_config, grafana_customize, meta]
Launch grafana service TAGS: [grafana, grafana_launch, meta]
Wait for grafana online TAGS: [grafana, grafana_launch, meta]
Update grafana default preferences TAGS: [grafana, grafana_provision, meta]
Register consul grafana service TAGS: [grafana, grafana_register, meta]
grafana : Reload consul TAGS: [grafana, grafana_register, meta]
play #4 (all): Init dcs TAGS: []
tasks:
Check for existing consul TAGS: [consul_check, dcs]
consul : Consul exists flag fact set TAGS: [consul_check, dcs]
Abort due to consul exists TAGS: [consul_check, dcs]
Clean existing consul instance TAGS: [consul_check, dcs]
Stop any running consul instance TAGS: [consul_check, dcs]
Remove existing consul dir TAGS: [consul_check, dcs]
Recreate consul dir TAGS: [consul_check, dcs]
Make sure consul is installed TAGS: [consul_install, dcs]
Make sure consul dir exists TAGS: [consul_config, dcs]
consul : Get dcs server node names TAGS: [consul_config, dcs]
consul : Get dcs node name from var TAGS: [consul_config, dcs]
consul : Get dcs node name from var TAGS: [consul_config, dcs]
consul : Fetch hostname as dcs node name TAGS: [consul_config, dcs]
consul : Get dcs name from hostname TAGS: [consul_config, dcs]
Copy /etc/consul.d/consul.json TAGS: [consul_config, dcs]
Copy consul agent service TAGS: [consul_config, dcs]
consul : Get dcs bootstrap expect quroum TAGS: [consul_server, dcs]
Copy consul server service unit TAGS: [consul_server, dcs]
Launch consul server service TAGS: [consul_server, dcs]
Wait for consul server online TAGS: [consul_server, dcs]
Launch consul agent service TAGS: [consul_agent, dcs]
Wait for consul agent online TAGS: [consul_agent, dcs]
play #5 (meta): Copy ansible scripts TAGS: [ansible]
tasks:
Create ansible tarball TAGS: [ansible]
Create ansible directory TAGS: [ansible]
Copy ansible tarball TAGS: [ansible]
Extract tarball TAGS: [ansible]
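A typical invocation looks like the sketch below; it assumes the playbook is run from the directory that contains infra.yml and a properly configured inventory:
# provision everything: local repo, nodes, meta services and DCS
ansible-playbook ./infra.yml
# or re-run a subset using the tags listed above, e.g. only rebuild the
# local repo, or re-provision the nodes of one group (pg-test is an example)
ansible-playbook ./infra.yml -t repo
ansible-playbook ./infra.yml -t node -l pg-test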
4 - Postgres Provisioning
How to define and launch PostgreSQL database clusters
Postgres Provision
TL;DR
- Configure postgres parameters in config file
- Run postgres provision playbook
Parameters
---
#------------------------------------------------------------------------------
# POSTGRES INSTALLATION
#------------------------------------------------------------------------------
# - dbsu - #
pg_dbsu: postgres # os user for database, postgres by default (change it is not recommended!)
pg_dbsu_uid: 26 # os dbsu uid and gid, 26 for default postgres users and groups
pg_dbsu_sudo: limit # none|limit|all|nopass (Privilege for dbsu, limit is recommended)
pg_dbsu_home: /var/lib/pgsql # home directory of os dbsu
pg_dbsu_ssh_exchange: false # exchange ssh key among same cluster
# - packages - #
pg_version: 12 # default postgresql version
pgdg_repo: false # use official pgdg yum repo (disable if you have local mirror)
pg_add_repo: false # add postgres related repo before install (useful if you want a simple install)
pg_bin_dir: /usr/pgsql/bin # postgres binary dir
pg_packages: # packages to be installed (Postgres 13)
- postgresql${pg_version}*
- postgis31_${pg_version}*
- pgbouncer patroni pg_exporter pgbadger
- patroni patroni-consul patroni-etcd pgbouncer pgbadger pg_activity
- python3 python3-psycopg2 python36-requests python3-etcd python3-consul
- python36-urllib3 python36-idna python36-pyOpenSSL python36-cryptography
pg_extensions:
- pg_repack${pg_version} pg_qualstats${pg_version} pg_stat_kcache${pg_version} wal2json${pg_version}
# - ogr_fdw${pg_version} mysql_fdw_${pg_version} redis_fdw_${pg_version} mongo_fdw${pg_version} hdfs_fdw_${pg_version}
# - count_distinct${version} ddlx_${version} geoip${version} orafce${version}
# - hypopg_${version} ip4r${version} jsquery_${version} logerrors_${version} periods_${version} pg_auto_failover_${version} pg_catcheck${version}
# - pg_fkpart${version} pg_jobmon${version} pg_partman${version} pg_prioritize_${version} pg_track_settings${version} pgaudit15_${version}
# - pgcryptokey${version} pgexportdoc${version} pgimportdoc${version} pgmemcache-${version} pgmp${version} pgq-${version} pgquarrel pgrouting_${version}
# - pguint${version} pguri${version} prefix${version} safeupdate_${version} semver${version} table_version${version} tdigest${version}
#------------------------------------------------------------------------------
# POSTGRES PROVISION
#------------------------------------------------------------------------------
# - identity - #
# pg_cluster: # [REQUIRED] cluster name (validated during pg_preflight)
# pg_seq: 0 # [REQUIRED] instance seq (validated during pg_preflight)
pg_role: replica # [REQUIRED] service role (validated during pg_preflight)
pg_hostname: false # overwrite node hostname with pg instance name
# - cleanup - #
# pg_exists_action, available options: abort|clean|skip
# - abort: abort entire play's execution (default)
# - clean: remove existing cluster (dangerous)
# - skip: end current play for this host
pg_exists: false # auxiliary flag variable (DO NOT SET THIS)
pg_exists_action: abort
# - storage - #
pg_data: /pg/data # postgres data directory
pg_fs_main: /export # data disk mount point /pg -> {{ pg_fs_main }}/postgres/{{ pg_instance }}
pg_fs_bkup: /var/backups # backup disk mount point /pg/* -> {{ pg_fs_bkup }}/postgres/{{ pg_instance }}/*
# - connection - #
pg_listen: '0.0.0.0' # postgres listen address, '0.0.0.0' by default (all ipv4 addr)
pg_port: 5432 # postgres port (5432 by default)
pg_localhost: /var/run/postgresql
pg_shared_libraries: pg_stat_statements, auto_explain
#------------------------------------------------------------------------------
# PATRONI PROVISION
#------------------------------------------------------------------------------
# - patroni - #
# patroni_mode, available options: default|pause|remove
# default: default ha mode
# pause: into maintenance mode
# remove: remove patroni after bootstrap
patroni_mode: default # pause|default|remove
pg_namespace: /pg # top level key namespace in dcs
patroni_port: 8008 # default patroni port
patroni_watchdog_mode: automatic # watchdog mode: off|automatic|required
pg_conf: patroni.yml # user provided patroni config template path
#------------------------------------------------------------------------------
# PGBOUNCER PROVISION
#------------------------------------------------------------------------------
# - pgbouncer - #
pgbouncer_port: 6432 # default pgbouncer port
pgbouncer_poolmode: transaction # default pooling mode: transaction pooling
pgbouncer_max_db_conn: 100 # important! do not set this larger than postgres max conn or conn limit
#------------------------------------------------------------------------------
# CLUSTER TEMPLATE
#------------------------------------------------------------------------------
pg_init: pg-init # init script for cluster template
# - system roles - #
pg_replication_username: replicator # system replication user
pg_replication_password: DBUser.Replicator # system replication password
pg_monitor_username: dbuser_monitor # system monitor user
pg_monitor_password: DBUser.Monitor # system monitor password
pg_admin_username: dbuser_admin # system admin user
pg_admin_password: DBUser.Admin # system admin password
# - default roles - #
pg_default_roles:
- username: dbrole_readonly # sample user:
options: NOLOGIN # role can not login
comment: role for readonly access # comment string
- username: dbrole_readwrite # sample user: one object for each user
options: NOLOGIN
comment: role for read-write access
groups: [ dbrole_readonly ] # read-write includes read-only access
- username: dbrole_admin # sample user: one object for each user
options: NOLOGIN BYPASSRLS # admin can bypass row level security
comment: role for object creation
groups: [dbrole_readwrite,pg_monitor,pg_signal_backend]
# NOTE: replicator, monitor, admin passwords are overwritten by separate config entries
- username: postgres # reset dbsu password to NULL (if dbsu is not postgres)
options: SUPERUSER LOGIN
comment: system superuser
- username: replicator
options: REPLICATION LOGIN
groups: [pg_monitor, dbrole_readonly]
comment: system replicator
- username: dbuser_monitor
options: LOGIN CONNECTION LIMIT 10
comment: system monitor user
groups: [pg_monitor, dbrole_readonly]
- username: dbuser_admin
options: LOGIN BYPASSRLS
comment: system admin user
groups: [dbrole_admin]
- username: dbuser_stats
password: DBUser.Stats
options: LOGIN
comment: business read-only user for statistics
groups: [dbrole_readonly]
# object created by dbsu and admin will have their privileges properly set
pg_default_privilegs:
- GRANT USAGE ON SCHEMAS TO dbrole_readonly
- GRANT SELECT ON TABLES TO dbrole_readonly
- GRANT SELECT ON SEQUENCES TO dbrole_readonly
- GRANT EXECUTE ON FUNCTIONS TO dbrole_readonly
- GRANT INSERT, UPDATE, DELETE ON TABLES TO dbrole_readwrite
- GRANT USAGE, UPDATE ON SEQUENCES TO dbrole_readwrite
- GRANT TRUNCATE, REFERENCES, TRIGGER ON TABLES TO dbrole_admin
- GRANT CREATE ON SCHEMAS TO dbrole_admin
- GRANT USAGE ON TYPES TO dbrole_admin
# schemas
pg_default_schemas: [monitor]
# extension
pg_default_extensions:
- { name: 'pg_stat_statements', schema: 'monitor' }
- { name: 'pgstattuple', schema: 'monitor' }
- { name: 'pg_qualstats', schema: 'monitor' }
- { name: 'pg_buffercache', schema: 'monitor' }
- { name: 'pageinspect', schema: 'monitor' }
- { name: 'pg_prewarm', schema: 'monitor' }
- { name: 'pg_visibility', schema: 'monitor' }
- { name: 'pg_freespacemap', schema: 'monitor' }
- { name: 'pg_repack', schema: 'monitor' }
- name: postgres_fdw
- name: file_fdw
- name: btree_gist
- name: btree_gin
- name: pg_trgm
- name: intagg
- name: intarray
# - hba - #
pg_hba_rules:
- title: allow meta node password access
role: common
rules:
- host all all 10.10.10.10/32 md5
- title: allow intranet admin password access
role: common
rules:
- host all +dbrole_admin 10.0.0.0/8 md5
- host all +dbrole_admin 172.16.0.0/12 md5
- host all +dbrole_admin 192.168.0.0/16 md5
- title: allow intranet password access
role: common
rules:
- host all all 10.0.0.0/8 md5
- host all all 172.16.0.0/12 md5
- host all all 192.168.0.0/16 md5
- title: allow local read-write access (local production user via pgbouncer)
role: common
rules:
- local all +dbrole_readwrite md5
- host all +dbrole_readwrite 127.0.0.1/32 md5
- title: allow read-only user (stats, personal) password directly access
role: replica
rules:
- local all +dbrole_readonly md5
- host all +dbrole_readonly 127.0.0.1/32 md5
# pgbouncer host-based authentication rules
pgbouncer_hba_rules:
- title: local password access
role: common
rules:
- local all all md5
- host all all 127.0.0.1/32 md5
- title: intranet password access
role: common
rules:
- host all all 10.0.0.0/8 md5
- host all all 172.16.0.0/12 md5
- host all all 192.168.0.0/16 md5
#------------------------------------------------------------------------------
# BUSINESS TEMPLATE
#------------------------------------------------------------------------------
# - business - #
# users that are ad hoc to each cluster
pg_users:
- username: dbuser_test
password: DBUser.Test
options: LOGIN NOINHERIT
comment: business read-write user
groups: [dbrole_readwrite]
pg_databases: # additional business database
- name: test # one object for each database
owner: dbuser_test
schemas: [monitor, public]
extensions: [{name: "postgis", schema: "public"}]
parameters:
search_path: 'yay,public,monitor' # set default search path
...
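Once such a cluster is provisioned, the business user and database defined above can be checked through the local pgbouncer. A sketch using the example values from this listing (pgbouncer listens on port 6432):
# connect to the example business database through pgbouncer
psql -h 127.0.0.1 -p 6432 -U dbuser_test -d test -c 'SELECT current_user, current_database();'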
Playbook
initdb.yml bootstraps PostgreSQL clusters according to the inventory, assuming the infrastructure is already provisioned (see the example run after the task list below):
tasks:
Create os group postgres TAGS: [instal, pg_dbsu, postgres]
postgres : Make sure dcs group exists TAGS: [instal, pg_dbsu, postgres]
postgres : Create dbsu {{ pg_dbsu }} TAGS: [instal, pg_dbsu, postgres]
postgres : Grant dbsu nopass sudo TAGS: [instal, pg_dbsu, postgres]
postgres : Grant dbsu all sudo TAGS: [instal, pg_dbsu, postgres]
postgres : Grant dbsu limited sudo TAGS: [instal, pg_dbsu, postgres]
postgres : Config patroni watchdog support TAGS: [instal, pg_dbsu, postgres]
postgres : Add dbsu ssh no host checking TAGS: [instal, pg_dbsu, postgres]
postgres : Fetch dbsu public keys TAGS: [instal, pg_dbsu, postgres]
postgres : Exchange dbsu ssh keys TAGS: [instal, pg_dbsu, postgres]
postgres : Install offical pgdg yum repo TAGS: [instal, pg_install, postgres]
postgres : Install pg packages TAGS: [instal, pg_install, postgres]
postgres : Install pg extensions TAGS: [instal, pg_install, postgres]
postgres : Link /usr/pgsql to current version TAGS: [instal, pg_install, postgres]
postgres : Add pg bin dir to profile path TAGS: [instal, pg_install, postgres]
postgres : Fix directory ownership TAGS: [instal, pg_install, postgres]
Remove default postgres service TAGS: [instal, pg_install, postgres]
postgres : Check necessary variables exists TAGS: [always, pg_preflight, postgres, preflight]
postgres : Fetch variables via pg_cluster TAGS: [always, pg_preflight, postgres, preflight]
postgres : Set cluster basic facts for hosts TAGS: [always, pg_preflight, postgres, preflight]
postgres : Assert cluster primary singleton TAGS: [always, pg_preflight, postgres, preflight]
postgres : Setup cluster primary ip address TAGS: [always, pg_preflight, postgres, preflight]
postgres : Setup repl upstream for primary TAGS: [always, pg_preflight, postgres, preflight]
postgres : Setup repl upstream for replicas TAGS: [always, pg_preflight, postgres, preflight]
postgres : Debug print instance summary TAGS: [always, pg_preflight, postgres, preflight]
Check for existing postgres instance TAGS: [pg_check, postgres, prepare]
postgres : Set fact whether pg port is open TAGS: [pg_check, postgres, prepare]
Abort due to existing postgres instance TAGS: [pg_check, postgres, prepare]
Clean existing postgres instance TAGS: [pg_check, postgres, prepare]
Shutdown existing postgres service TAGS: [pg_clean, postgres, prepare]
postgres : Remove registerd consul service TAGS: [pg_clean, postgres, prepare]
Remove postgres metadata in consul TAGS: [pg_clean, postgres, prepare]
Remove existing postgres data TAGS: [pg_clean, postgres, prepare]
postgres : Make sure main and backup dir exists TAGS: [pg_dir, postgres, prepare]
Create postgres directory structure TAGS: [pg_dir, postgres, prepare]
postgres : Create pgbouncer directory structure TAGS: [pg_dir, postgres, prepare]
postgres : Create links from pgbkup to pgroot TAGS: [pg_dir, postgres, prepare]
postgres : Create links from current cluster TAGS: [pg_dir, postgres, prepare]
postgres : Copy pg_cluster to /pg/meta/cluster TAGS: [pg_meta, postgres, prepare]
postgres : Copy pg_version to /pg/meta/version TAGS: [pg_meta, postgres, prepare]
postgres : Copy pg_instance to /pg/meta/instance TAGS: [pg_meta, postgres, prepare]
postgres : Copy pg_seq to /pg/meta/sequence TAGS: [pg_meta, postgres, prepare]
postgres : Copy pg_role to /pg/meta/role TAGS: [pg_meta, postgres, prepare]
Copy postgres scripts to /pg/bin/ TAGS: [pg_scripts, postgres, prepare]
postgres : Copy alias profile to /etc/profile.d TAGS: [pg_scripts, postgres, prepare]
Copy psqlrc to postgres home TAGS: [pg_scripts, postgres, prepare]
postgres : Setup hostname to pg instance name TAGS: [pg_hostname, postgres, prepare]
postgres : Copy consul node-meta definition TAGS: [pg_nodemeta, postgres, prepare]
postgres : Restart consul to load new node-meta TAGS: [pg_nodemeta, postgres, prepare]
postgres : Config patroni watchdog support TAGS: [pg_watchdog, postgres, prepare]
postgres : Get config parameter page count TAGS: [pg_config, postgres]
postgres : Get config parameter page size TAGS: [pg_config, postgres]
postgres : Tune shared buffer and work mem TAGS: [pg_config, postgres]
postgres : Hanlde small size mem occasion TAGS: [pg_config, postgres]
Calculate postgres mem params TAGS: [pg_config, postgres]
postgres : create patroni config dir TAGS: [pg_config, postgres]
postgres : use predefined patroni template TAGS: [pg_config, postgres]
postgres : Render default /pg/conf/patroni.yml TAGS: [pg_config, postgres]
postgres : Link /pg/conf/patroni to /pg/bin/ TAGS: [pg_config, postgres]
postgres : Link /pg/bin/patroni.yml to /etc/patroni/ TAGS: [pg_config, postgres]
postgres : Config patroni watchdog support TAGS: [pg_config, postgres]
postgres : create patroni systemd drop-in dir TAGS: [pg_config, postgres]
Copy postgres systemd service file TAGS: [pg_config, postgres]
postgres : create patroni systemd drop-in file TAGS: [pg_config, postgres]
postgres : Render default initdb scripts TAGS: [pg_config, postgres]
postgres : Launch patroni on primary instance TAGS: [pg_primary, postgres]
postgres : Wait for patroni primary online TAGS: [pg_primary, postgres]
Wait for postgres primary online TAGS: [pg_primary, postgres]
Check primary postgres service ready TAGS: [pg_primary, postgres]
postgres : Check replication connectivity to primary TAGS: [pg_primary, postgres]
postgres : Render default pg-init scripts TAGS: [pg_init, pg_init_config, postgres]
postgres : Render template init script TAGS: [pg_init, pg_init_config, postgres]
postgres : Execute initialization scripts TAGS: [pg_init, postgres]
postgres : Check primary instance ready TAGS: [pg_init, postgres]
postgres : Add dbsu password to pgpass if exists TAGS: [pg_pass, postgres]
postgres : Add system user to pgpass TAGS: [pg_pass, postgres]
postgres : Check replication connectivity to primary TAGS: [pg_replica, postgres]
postgres : Launch patroni on replica instances TAGS: [pg_replica, postgres]
postgres : Wait for patroni replica online TAGS: [pg_replica, postgres]
Wait for postgres replica online TAGS: [pg_replica, postgres]
Check replica postgres service ready TAGS: [pg_replica, postgres]
postgres : Render hba rules TAGS: [pg_hba, postgres]
postgres : Reload hba rules TAGS: [pg_hba, postgres]
postgres : Pause patroni TAGS: [pg_patroni, postgres]
postgres : Stop patroni on replica instance TAGS: [pg_patroni, postgres]
postgres : Stop patroni on primary instance TAGS: [pg_patroni, postgres]
Launch raw postgres on primary TAGS: [pg_patroni, postgres]
Launch raw postgres on primary TAGS: [pg_patroni, postgres]
Wait for postgres online TAGS: [pg_patroni, postgres]
postgres : Check pgbouncer is installed TAGS: [pgbouncer, pgbouncer_check, postgres]
postgres : Stop existing pgbouncer service TAGS: [pgbouncer, pgbouncer_clean, postgres]
postgres : Remove existing pgbouncer dirs TAGS: [pgbouncer, pgbouncer_clean, postgres]
Recreate dirs with owner postgres TAGS: [pgbouncer, pgbouncer_clean, postgres]
postgres : Copy /etc/pgbouncer/pgbouncer.ini TAGS: [pgbouncer, pgbouncer_config, postgres]
postgres : Copy /etc/pgbouncer/pgb_hba.conf TAGS: [pgbouncer, pgbouncer_config, postgres]
postgres : Touch userlist and database list TAGS: [pgbouncer, pgbouncer_config, postgres]
postgres : Add default users to pgbouncer TAGS: [pgbouncer, pgbouncer_config, postgres]
postgres : Copy pgbouncer systemd service TAGS: [pgbouncer, pgbouncer_launch, postgres]
postgres : Launch pgbouncer pool service TAGS: [pgbouncer, pgbouncer_launch, postgres]
postgres : Wait for pgbouncer service online TAGS: [pgbouncer, pgbouncer_launch, postgres]
postgres : Check pgbouncer service is ready TAGS: [pgbouncer, pgbouncer_launch, postgres]
postgres : Render business init script TAGS: [business, pg_biz_config, pg_biz_init, postgres]
postgres : Render database baseline sql TAGS: [business, pg_biz_config, pg_biz_init, postgres]
postgres : Execute business init script TAGS: [business, pg_biz_init, postgres]
postgres : Execute database baseline sql TAGS: [business, pg_biz_init, postgres]
postgres : Add pgbouncer busniess users TAGS: [business, pg_biz_pgbouncer, postgres]
postgres : Add pgbouncer busniess database TAGS: [business, pg_biz_pgbouncer, postgres]
postgres : Restart pgbouncer TAGS: [business, pg_biz_pgbouncer, postgres]
Copy postgres service definition TAGS: [pg_register, postgres, register]
postgres : Reload consul service TAGS: [pg_register, postgres, register]
postgres : Render grafana datasource definition TAGS: [pg_grafana, postgres, register]
postgres : Register datasource to grafana TAGS: [pg_grafana, postgres, register]
monitor : Create /etc/pg_exporter conf dir TAGS: [monitor, pg_exporter]
monitor : Copy default pg_exporter.yaml TAGS: [monitor, pg_exporter]
monitor : Config /etc/default/pg_exporter TAGS: [monitor, pg_exporter]
monitor : Config pg_exporter service unit TAGS: [monitor, pg_exporter]
monitor : Launch pg_exporter systemd service TAGS: [monitor, pg_exporter]
monitor : Wait for pg_exporter service online TAGS: [monitor, pg_exporter]
monitor : Register pg-exporter consul service TAGS: [monitor, pg_exporter]
monitor : Reload pg-exporter consul service TAGS: [monitor, pg_exporter]
monitor : Config pgbouncer_exporter opts TAGS: [monitor, pgbouncer_exporter]
monitor : Config pgbouncer_exporter service TAGS: [monitor, pgbouncer_exporter]
monitor : Launch pgbouncer_exporter service TAGS: [monitor, pgbouncer_exporter]
monitor : Wait for pgbouncer_exporter online TAGS: [monitor, pgbouncer_exporter]
monitor : Register pgb-exporter consul service TAGS: [monitor, pgbouncer_exporter]
monitor : Reload pgb-exporter consul service TAGS: [monitor, pgbouncer_exporter]
monitor : Copy node_exporter systemd service TAGS: [monitor, node_exporter]
monitor : Config default node_exporter options TAGS: [monitor, node_exporter]
monitor : Launch node_exporter service unit TAGS: [monitor, node_exporter]
monitor : Wait for node_exporter online TAGS: [monitor, node_exporter]
monitor : Register node-exporter service TAGS: [monitor, node_exporter]
monitor : Reload node-exporter consul service TAGS: [monitor, node_exporter]
proxy : Templating /etc/default/vip-manager.yml TAGS: [proxy, vip]
proxy : create vip-manager. systemd drop-in dir TAGS: [proxy, vip]
proxy : create vip-manager systemd drop-in file TAGS: [proxy, vip]
proxy : Launch vip-manager TAGS: [proxy, vip]
proxy : Set pg_instance in case of absence TAGS: [haproxy, proxy]
proxy : Fetch postgres cluster memberships TAGS: [haproxy, proxy]
Templating /etc/haproxyhaproxy.cfg TAGS: [haproxy, proxy]
Copy haproxy systemd service file TAGS: [haproxy, proxy]
Launch haproxy load balancer service TAGS: [haproxy, proxy]
Wait for haproxy load balancer online TAGS: [haproxy, proxy]
Copy haproxy service definition TAGS: [haproxy_register, proxy]
Reload haproxy consul service TAGS: [haproxy_register, proxy]
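Running it is similar to the infrastructure playbook. A minimal sketch, assuming a cluster group named pg-test is defined in the inventory:
# bootstrap the postgres cluster pg-test
ansible-playbook ./initdb.yml -l pg-test
# re-run selected steps with the tags listed above, e.g. refresh HBA rules
# or redeploy only the monitoring components
ansible-playbook ./initdb.yml -l pg-test -t pg_hba
ansible-playbook ./initdb.yml -l pg-test -t monitor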
5 - Cluster Templates
How to customize your database cluster with cluster templates
Templates [DRAFT]
Customize
Besides variables, there are two additional ways to customize Pigsty: the Patroni template and the initdb template.
Patroni Template
For the sake of unification, Pigsty uses Patroni for cluster bootstrap even if you choose not to enable it at all, so you can customize your database cluster through the Patroni configuration.
Pigsty ships with four pre-defined Patroni templates:
oltp.yml
Regular OLTP template, the default configuration.
olap.yml
OLAP template, tuned for higher throughput and long-running queries.
crit.yml
Template for critical clusters that value security and data integrity over availability.
tiny.yml
Template for tiny clusters running on small nodes or virtual machines, which is the default for this demo.
You can customize these templates or write your own, and specify the template path with the pg_conf variable.
Initdb Template
When a database cluster is initialized, there is a chance for the user to intercede, e.g. to create default roles and users, schemas, privileges, and so forth.
Pigsty uses ../roles/postgres/templates/pg-init as the default initdb script. It is a shell script run as the dbsu that can do anything to a newly bootstrapped database.
The default initdb script customizes the database according to the following variables:
pg_default_username: postgres # non 'postgres' will create a default admin user (not superuser)
pg_default_password: postgres # dbsu password, omit for 'postgres'
pg_default_database: postgres # non 'postgres' will create a default database
pg_default_schema: public # default schema will be create under default database and used as first element of search_path
pg_default_extensions: "tablefunc,postgres_fdw,file_fdw,btree_gist,btree_gin,pg_trgm"
Of course, you can customize the initdb template or write your own, and specify the script path with the pg_init variable.
6 - Service Discovery
How service discovery works and its configuration options
Service Discovery [DRAFT]
Service Overview
Pigsty integrates DCS-based service discovery. All services are automatically registered to the DCS, which eliminates manual maintenance work on the monitoring system, and you can see the health status of all nodes and services in an intuitive way.
Consul is currently the only supported DCS (etcd support will be added later). You can use Consul as a DNS service provider to achieve DNS-based traffic routing.
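For example, registered services can be resolved through Consul's built-in DNS interface. A minimal sketch, assuming a local consul agent listening on its default DNS port 8600:
# resolve all instances of the postgres service
dig @127.0.0.1 -p 8600 postgres.service.consul SRV
# consul also allows filtering by tag, e.g. the primary of the cluster
dig @127.0.0.1 -p 8600 primary.postgres.service.consul SRV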
Service Register
Services are registered by the local consul agent.
Services are defined in JSON format and placed under /etc/consul.d/;
each service has a JSON file named svc-<service>.json.
Take the postgres service as an example:
{
"service": {
"name": "postgres",
"port": {{ pg_port }},
"tags": [
"{{ pg_role }}",
"{{ pg_cluster }}"
],
"meta": {
"type": "postgres",
"role": "{{ pg_role }}",
"seq": "{{ pg_seq }}",
"instance": "{{ pg_instance }}",
"service": "{{ pg_service }}",
"cluster": "{{ pg_cluster }}",
"version": "{{ pg_version }}"
},
"check": {
"tcp": "127.0.0.1:{{ pg_port }}",
"interval": "15s",
"timeout": "1s"
}
}
}
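Registration can be verified against the consul agent's HTTP API; a sketch assuming the default HTTP port 8500:
# list instances of the postgres service known to the consul catalog
curl -s http://127.0.0.1:8500/v1/catalog/service/postgres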
Service Discovery
Prometheus can discover services from Consul directly:
- job_name: pg
# https://prometheus.io/docs/prometheus/latest/configuration/configuration/#consul_sd_config
consul_sd_configs:
- server: localhost:8500
refresh_interval: 5s
tags:
- pg
- exporter
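To confirm that the discovered instances actually show up as scrape targets, the Prometheus HTTP API can be queried. A sketch assuming Prometheus runs on the meta node at its default port 9090:
# list active scrape targets and their discovered labels
curl -s http://10.10.10.10:9090/api/v1/targets | jq '.data.activeTargets[].labels'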
Service Maintenance
Sometimes service metadata changes; a consul reload is required for the change to take effect.
The anti-entropy script /pg/bin/pg-register periodically checks and fixes the postgres role, and it is also triggered when a failover occurs.
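A minimal maintenance sketch:
# after changing service definitions under /etc/consul.d/, reload the local agent
consul reload
# list the services currently registered in the catalog to verify the change
consul catalog services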
Service Discovery
Pigsty has built-in DCS-based configuration management and automatic service discovery, so users can intuitively view all nodes and services in the system along with their health status. All services in Pigsty are automatically registered to the DCS, so when database clusters are created, destroyed, or modified, the metadata is corrected automatically and the monitoring system can discover its targets without manual configuration maintenance.
Currently only Consul is supported as the DCS. Users can also leverage the DNS and service discovery mechanisms provided by Consul to implement DNS-based automatic traffic switching.