Getting Started

Quickly pull up a demo system on your local machine with Vagrant.

This document describes how to launch the Pigsty demo sandbox locally with Vagrant and VirtualBox.

TL;DR

If you already have vagrant and virtualbox properly installed on your local machine, just clone this project, enter its directory, and run the following commands:

# run under the pigsty home dir
make up          # pull up all vagrant nodes
make ssh         # set up vagrant ssh access
make init        # init infrastructure and database clusters
sudo make dns    # write Pigsty's static DNS records to your /etc/hosts (sudo required; you may also skip this step and access services via IP:port directly)
make mon-view    # open the Pigsty home page locally (default username and password: admin / admin)

Alternatively, make new creates the four VirtualBox virtual machines with Vagrant and pulls up the Pigsty demo sandbox on them in one step.

There is no special requirement for the host operating system, as long as it can install and run Vagrant and VirtualBox. Environments verified by the author:

  • macOS 10.15, macOS 11.1, CentOS 7.8
  • Vagrant 2.2.10 / 2.2.14
  • VirtualBox 6.1.14
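Before running the commands above, you can check that the required tools are available. This is only a sketch; the tool list follows this document, and VBoxManage is VirtualBox's command-line interface:

```shell
# sketch: verify that the tools this guide relies on are on PATH
missing=""
for tool in vagrant VBoxManage git make; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
    echo "all prerequisites found"
else
    echo "missing:$missing"
fi
```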

Preparation

System Requirement

  • CentOS 7 / Red Hat 7 / Oracle Linux 7
  • CentOS 7.6/7.8 is highly recommended (fully tested under minimal installation)

Minimal setup

  • A self-contained single node running the singleton database cluster pg-meta
  • Minimal requirement: 2 CPU cores and 2 GB RAM

Standard setup (TINY mode, vagrant demo)

  • 4 nodes, including a single meta node, the singleton database cluster pg-meta, and the 3-instance database cluster pg-test
  • Recommended spec: 2 cores / 2 GB for the meta controller node, 1 core / 1 GB for each database node

Production setup (OLTP/OLAP/CRIT mode)

  • 200~1000 nodes, 3~5 meta nodes

Verified environment: Dell R740 / 64 Core / 400GB Mem / 3TB PCI-E SSD x 200

If you wish to run Pigsty on virtual machines on your laptop, consider using Vagrant and VirtualBox, which let you create and destroy virtual machines easily. Check Vagrant Provision for more information. Other virtualization solutions such as VMware also work.
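As an illustration, a Vagrantfile for such a four-node sandbox might look roughly like this. This is only a sketch, not the project's actual Vagrantfile; the base box name is an assumption, and the IP addresses follow the cluster examples later in this document:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"                  # assumed base box
  nodes = {
    "meta"   => "10.10.10.10",                # controller node
    "node-1" => "10.10.10.11",
    "node-2" => "10.10.10.12",
    "node-3" => "10.10.10.13",
  }
  nodes.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: ip
      node.vm.provider "virtualbox" do |vb|
        vb.cpus   = (name == "meta" ? 2 : 1)  # specs per the recommendations above
        vb.memory = (name == "meta" ? 2048 : 1024)
      end
    end
  end
end
```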

Get Started

Step 1: Prepare

  • Prepare nodes, either bare metal or virtual machines.

    Currently only CentOS 7 is supported and fully tested.

    You will need one node for a minimal setup, and four nodes for a complete demonstration.

  • Pick one node as the meta node, the controller of the entire system.

    The meta node runs essential services such as Nginx, the Yum repo, DNS server, NTP server, Consul server, Prometheus, AlertManager, Grafana, and other components. It is recommended to have 1 meta node in a sandbox/dev environment, and 3~5 meta nodes in a production environment.

  • Create an admin user on these nodes with password-less sudo privileges.

    echo "<username> ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/<username>
    
  • Set up password-less SSH access for the admin user from the meta node.

    ssh-copy-id <address>
    

    You can execute playbooks directly on your host machine instead of on the meta node when running Pigsty inside virtual machines, which is convenient for development and testing.

  • Install Ansible on meta node (or your host machine if you prefer running playbooks there)

    yum install ansible     # centos
    brew install ansible    # macos
    

    If your meta node does not have Internet access, you can perform an Offline Installation, or find your own way to install Ansible there.

  • Clone this repo to meta node

    git clone https://github.com/vonng/pigsty && cd pigsty 
    
  • [Optional]: download the pre-packaged offline installation resource tarball to ${PIGSTY_HOME}/files/pkg.tgz

    If you happen to have exactly the same OS (e.g. a CentOS 7.8 package), you can download it and put it there, so the first-time provisioning will be much faster.
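The admin-user steps above can be sketched as a small script. The username admin is an assumption, and the script writes to a demo path under /tmp; on a real node you would write to /etc/sudoers.d/ as root, then run ssh-copy-id from the meta node:

```shell
# sketch: grant password-less sudo to an (assumed) admin user
USERNAME=admin
SUDOERS_LINE="${USERNAME} ALL=(ALL) NOPASSWD: ALL"
TARGET="/tmp/sudoers_${USERNAME}"   # real target: /etc/sudoers.d/${USERNAME}
echo "$SUDOERS_LINE" > "$TARGET"
chmod 0440 "$TARGET"                # sudoers files must be read-only
cat "$TARGET"                       # prints: admin ALL=(ALL) NOPASSWD: ALL
```

On the real file, validate with visudo -c before relying on it, since a malformed sudoers entry can lock you out of sudo.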

Step 2: Configuration

Configuration is essential to pigsty.

conf/dev.yml is the configuration file for the vagrant sandbox environment, and conf/all.yml is the default configuration file path, which is a soft link to conf/dev.yml by default.

You can leave most parameters intact; only a small portion of parameters need adjustment, such as the cluster inventory definition. A typical cluster definition requires only 3 variables to work: pg_cluster, pg_role, and pg_seq. Check the configuration guide for more detail.

#-----------------------------
# cluster: pg-test
#-----------------------------
pg-test: # define cluster named 'pg-test'
  # - cluster members - #
  hosts:
    10.10.10.11: {pg_seq: 1, pg_role: primary, ansible_host: node-1}
    10.10.10.12: {pg_seq: 2, pg_role: replica, ansible_host: node-2}
    10.10.10.13: {pg_seq: 3, pg_role: replica, ansible_host: node-3}
  # - cluster configs - #
  vars:
    # basic settings
    pg_cluster: pg-test                 # define actual cluster name
    pg_version: 13                      # define installed pgsql version
    node_tune: tiny                     # tune node into oltp|olap|crit|tiny mode
    pg_conf: tiny.yml                   # tune pgsql into oltp/olap/crit/tiny mode

    pg_users:
      - username: test
        password: test
        comment: default test user
        groups: [ dbrole_readwrite ]
    pg_databases:                       # create a business database 'test'
      - name: test
        extensions: [{name: postgis}]   # create extra extension postgis
        parameters:                     # override the database's default search_path
          search_path: public,monitor
    pg_default_database: test           # default database will be used as primary monitor target

    # proxy settings
    vip_enabled: true                   # enable/disable vip (require members in same LAN)
    vip_address: 10.10.10.3             # virtual ip address
    vip_cidrmask: 8                     # cidr network mask length
    vip_interface: eth1                 # interface to add virtual ip
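For contrast with the full example above, a bare-minimum cluster definition only needs the three identity parameters pg_cluster, pg_role, and pg_seq; everything else falls back to defaults. A sketch (the IP address here is an assumption; adjust it to your environment):

```yaml
pg-meta:                                  # a minimal singleton cluster
  hosts:
    10.10.10.10: {pg_seq: 1, pg_role: primary}
  vars:
    pg_cluster: pg-meta
```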

Step 3: Provision

It is straightforward to materialize the infrastructure and database cluster configuration:

./infra.yml    # init infrastructure according to config
./initdb.yml   # init database cluster according to config

It may take around 5~30 minutes to download all necessary rpm packages from the Internet, depending on your network conditions. (This applies only to the first run; you can cache downloaded packages by running make cache.)

(Consider using another upstream yum repo if the default one is not applicable; check conf/all.yml, all.vars.repo_upstreams.)

Step 4: Explore

Start exploring Pigsty.

  • Main Page: http://pigsty or http://<meta-ip-address>

  • Grafana: http://g.pigsty or http://<meta-ip-address>:3000 (default username / password: admin / admin)

  • Consul: http://c.pigsty or http://<meta-ip-address>:8500 (consul only listens on localhost)

  • Prometheus: http://p.pigsty or http://<meta-ip-address>:9090

  • AlertManager: http://a.pigsty or http://<meta-ip-address>:9093

You may need to write DNS records to your host before accessing pigsty via domain names.

sudo make dns				   # write local DNS record to your /etc/hosts, sudo required
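What sudo make dns does can be approximated by hand: append one static record pointing each domain at the meta node. A sketch, writing to a demo file instead of the real /etc/hosts; the meta IP 10.10.10.10 is an assumption, and the domain list follows the services listed above:

```shell
# sketch: manually write Pigsty's static DNS records (demo file, not /etc/hosts)
META_IP=10.10.10.10
HOSTS_FILE=/tmp/hosts.pigsty-demo   # real target: /etc/hosts (needs sudo)
echo "${META_IP} pigsty g.pigsty p.pigsty a.pigsty c.pigsty" >> "$HOSTS_FILE"
grep pigsty "$HOSTS_FILE"
```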

Quick Start

This section describes how to quickly pull up a Pigsty sandbox environment. For more information, see Getting Started.

  1. Prepare machines

    • Use pre-allocated machines, or generate demo virtual machines locally from the predefined sandbox Vagrantfile, and pick one as the meta (control) node.

    • Configure password-less SSH access from the meta node to the other machines, and confirm that the SSH user has password-less sudo privileges on them.

    • If you have vagrant and virtualbox installed locally, you can run make up in the project root directory to pull up a four-node virtual machine environment. See Vagrant Provision for details.

    make up
    
  2. Prepare the project

    Install Ansible on the meta node and clone this project. If you are using a local virtual machine environment, you can also install ansible on the host machine and run commands there.

    git clone https://github.com/vonng/pigsty && cd pigsty 
    

    If the target environment has no Internet access, or the connection is slow, consider downloading the pre-packaged offline installation tarball, or building one on another machine with the same OS that does have Internet/proxy access. See the offline installation tutorial for details.

  3. Modify the configuration

    Modify the configuration file as needed. It uses YAML format with Ansible inventory semantics; see the configuration tutorial for details on configuration items and format.

    vi conf/all.yml			# default configuration file path
    
  4. Initialize infrastructure

    Run this playbook to materialize the infrastructure definitions. See Infrastructure Provisioning for details.

    ./infra.yml         # run this playbook to materialize the infrastructure definitions
    
  5. Initialize database clusters

    Run this playbook to pull up all database clusters. See Database Cluster Provisioning for details.

    ./initdb.yml        # run this playbook to materialize all database cluster definitions
    
  6. Start exploring

    You can access the Pigsty home page via the domain names customized in the nginx_upstream parameter (http://pigsty by default in the sandbox).

    The monitoring system's default domain name is http://g.pigsty, with default username and password admin.

    The monitoring system can also be accessed directly via port 3000 on the meta node. To access it locally via domain names, run sudo make dns to write the required DNS records to your host.


Last modified January 4, 2021: update zh doc (d400d32)