An example of replicating a PostgreSQL database with Slony-I

Environment

Master (update) side: 172.16.167.144
Slave (read) side:    172.16.167.130
OS: CentOS 6.5
PostgreSQL 8.4 (rpm packages)
$ cat /etc/redhat-release 
CentOS release 6.5 (Final)
$ rpm -qa | grep postgres
postgresql-libs-8.4.20-1.el6_5.x86_64
postgresql-devel-8.4.20-1.el6_5.x86_64
postgresql-server-8.4.20-1.el6_5.x86_64
postgresql-8.4.20-1.el6_5.x86_64
$

Build Slony-I

Build and install from source. Since PostgreSQL here comes from the rpm packages, pg_config lives in /usr/bin (installed by postgresql-devel), so point --with-pgconfigdir there.

$ bzcat slony1-2.2.3.tar.bz2 | tar xvf -
$ cd slony1-2.2.3
$ ./configure --prefix=/usr/local/slony --with-pgconfigdir=/usr/bin --with-perltools
$ make
$ su
# make install

Create the database

To keep the master and slave identical, dump the DB on the master and load the dump into the slave. Slony-I complains if the master DB does not have PL/pgSQL, so create it beforehand with CREATE LANGUAGE. (pg_dump -s dumps the schema only; Slony-I copies the existing table data itself when the slave subscribes to the set.)
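The CREATE LANGUAGE step mentioned above could be done like this on the master (a sketch; if plpgsql already exists, the command just reports an error that can be ignored):

```
$ psql testdb1 -c 'CREATE LANGUAGE plpgsql;'
```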

$ pg_dump -s testdb1 > testdb1.dump
$ cat testdb1.dump
--
-- PostgreSQL database dump
--

SET statement_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;

--
-- Name: plpgsql; Type: PROCEDURAL LANGUAGE; Schema: -; Owner: postgres
--

CREATE PROCEDURAL LANGUAGE plpgsql;


ALTER PROCEDURAL LANGUAGE plpgsql OWNER TO postgres;

SET search_path = public, pg_catalog;

SET default_tablespace = '';

SET default_with_oids = false;

--
-- Name: table1; Type: TABLE; Schema: public; Owner: postgres; Tablespace: 
--

CREATE TABLE table1 (
    id integer NOT NULL,
    uname text
);


ALTER TABLE public.table1 OWNER TO postgres;

--
-- Name: table2; Type: TABLE; Schema: public; Owner: postgres; Tablespace: 
--

CREATE TABLE table2 (
    id integer NOT NULL,
    uname text,
    date timestamp without time zone
);


ALTER TABLE public.table2 OWNER TO postgres;

--
-- Name: table1_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: 
--

ALTER TABLE ONLY table1
    ADD CONSTRAINT table1_pkey PRIMARY KEY (id);


--
-- Name: table2_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: 
--

ALTER TABLE ONLY table2
    ADD CONSTRAINT table2_pkey PRIMARY KEY (id);


--
-- Name: public; Type: ACL; Schema: -; Owner: postgres
--

REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM postgres;
GRANT ALL ON SCHEMA public TO postgres;
GRANT ALL ON SCHEMA public TO PUBLIC;


--
-- PostgreSQL database dump complete
--

On the slave side, load the dump:

$ psql testdb1 < testdb1.dump

PostgreSQL access configuration

Configure things so that the master and slave can talk to each other.

# iptables -I INPUT -s 172.16.167.0/24 -p tcp --dport 5432 -j ACCEPT
# iptables -L
# /etc/init.d/iptables save
# cd /var/lib/pgsql/data
# vi postgresql.conf

Change the listen_addresses line as follows:
listen_addresses = 'localhost,172.16.167.130,172.16.167.144'
#listen_addresses = '*'

# vi pg_hba.conf

Add the following line:
host    all         all         172.16.167.0/24       trust
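Note that trust lets anyone on the 172.16.167.0/24 subnet connect as any user without a password. That is acceptable for a closed test setup like this one, but anywhere else a password-based method would normally be used instead, e.g.:

```
host    all         all         172.16.167.0/24       md5
```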

# service postgresql restart

Master-side configuration

This is done with the Perl (altperl) tools: they read slon_tools.conf and generate the input to feed to slonik.
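For orientation, the whole sequence with these tools as used in the rest of this article is roughly the following. Each slonik_* script only prints slonik commands to stdout, so it is piped to slonik to actually execute them:

```
# on the master
./slonik_init_cluster  --config=../etc/slon_tools.conf | ./slonik
./slonik_create_set    --config=../etc/slon_tools.conf set1 | ./slonik
./slon_start           --config=../etc/slon_tools.conf node1
# on the slave
./slon_start           --config=../etc/slon_tools.conf node2
./slonik_subscribe_set --config=../etc/slon_tools.conf set1 node2 | ./slonik
```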

# cd /usr/local/slony/etc
# cp -ip slon_tools.conf-sample slon_tools.conf
# cat slon_tools.conf

# 
# Author: Christopher Browne
# Copyright 2004-2009 Afilias Canada
# Revised extensively by Steve Simms

# Keeping the following three lines for backwards compatibility in
# case this gets incorporated into a 1.0.6 release.
#
# TODO: The scripts should check for an environment variable
# containing the location of a configuration file.  That would
# simplify this configuration file and allow Slony-I tools to still work
# in situations where it doesn't exist.
#
if ($ENV{"SLONYNODES"}) {
    require $ENV{"SLONYNODES"};
} else {

    # The name of the replication cluster.  This will be used to
    # create a schema named _$CLUSTER_NAME in the database which will
    # contain Slony-related data.
    $CLUSTER_NAME = 'replication_test';

    # The directory where Slony store PID files.  This
    # directory will need to be writable by the user that invokes
    # Slony.
    $PIDFILE_DIR = '/var/run/slony1';

    # PID file prefix. Files will be generated as $PIDFILE_PREFIX_node#
    # in PIDFILE_DIR
    # If not defined, default to $CLUSTER_NAME
    # $PIDFILE_PREFIX =

    # The directory where Slony should record log messages.  This
    # directory will need to be writable by the user that invokes
    # Slony.
    $LOGDIR = '/var/log/slony1';

    # (Optional) If you would like to use Apache's rotatelogs tool to
    # manage log output, uncomment the following line and ensure that
    # it points to the executable.
    #
    # $APACHE_ROTATOR = '/usr/local/apache/bin/rotatelogs';

    # Log line suffix for Slony-I log. For options, look at date(1) 
    # man page.
    #
    # $LOG_NAME_SUFFIX = '%a';

    # SYNC check interval (slon -s option)
    # $SYNC_CHECK_INTERVAL = 1000;

    # Which node is the default master for all sets?
    $MASTERNODE = 1;

    # Which debugging level to use?  [0-4]
    $DEBUGLEVEL = 2;

    # WATCHDOG configuration
    # WatchDog inicialization (default 1 as in slon_start)
    # $START_WATCHDOG = 1;

    # Watchdog Sleep Time (default 30 as in slon_start)
    $SLEEP_TIME = 60;

    # Watchdog version to use 1 or 2 (default 1)
    # $WATCHDOG_VERSION = 1;
    # Be verbose (default 1)
    $WATCHDOG_VERBOSE = 0;

    # Include add_node lines for each node in the cluster.  Be sure to
    # use host names that will resolve properly on all nodes
    # (i.e. only use 'localhost' if all nodes are on the same host).
    # Also, note that the user must be a superuser account.

    add_node(node     => 1,
	     host     => '172.16.167.144',
	     dbname   => 'testdb1',
	     port     => 5432,
	     user     => 'postgres',
             password => '');

    add_node(node     => 2,
	     host     => '172.16.167.130',
	     dbname   => 'testdb1',
	     port     => 5432,
	     user     => 'postgres',
             password => '');

    # If the node should only receive event notifications from a
    # single node (e.g. if it can't access the other nodes), you can
    # specify a single parent.  The downside to this approach is that
    # if the parent goes down, your node becomes stranded.

}

# The $SLONY_SETS variable contains information about all of the sets
# in your cluster.

$SLONY_SETS = {

    # A unique name for the set
    "set1_name" => {

	# The set_id, also unique
	"set_id" => 1,

	# Uncomment the following line to change the origin
	# (a.k.a. master) for the set.  The default is $MASTERNODE.
	#
	"origin" => 1,

	# If this is set to 1, table and sequence names will be folded to lower-case
	# to match the way that PostgreSQL handles unquoted names.
	# For example, CREATE TABLE ACCOUNT(...) actually turns into CREATE TABLE account(...);
	# unless you put quotes around the table name
	# Slony always quotes object names, so you may get a mis-match between the table-name
	# as PostgreSQL understands it, and as Slony represents it.
	# default value is 0
	#
	# foldCase => 0,

	# The first ID to use for tables and sequences that are added
	# to the replication cluster.  This must be unique across the
	# cluster.
	# 
	# TODO: This should be determined automatically, which can be
	# done fairly easily in most cases using psql.  create_set
	# should derive it, and give an option to override it with a
	# specific value.
	"table_id"    => 1,
	"sequence_id" => 1,

	# This array contains a list of tables that already have
	# primary keys.
        "pkeyedtables" => [
			   'table1',
			   'table2',
			   ],

	# For tables that have unique not null keys, but no primary
	# key, enter their names and indexes here.
	"keyedtables" => {
	},


	# Sequences that need to be replicated should be entered here.
	"sequences" => [
			],
    },

};

# Keeping the following three lines for backwards compatibility in
# case this gets incorporated into a 1.0.6 release.
#
# TODO: The scripts should check for an environment variable
# containing the location of a configuration file.  That would
# simplify this configuration file and allow Slony tools to still work
# in situations where it doesn't exist.
#
if ($ENV{"SLONYSET"}) {
    require $ENV{"SLONYSET"};
}

# Please do not add or change anything below this point.
1;

Cluster setup

# cd ../bin
# ./slonik_init_cluster --config=../etc/slon_tools.conf

# INIT CLUSTER
cluster name = replication_test;
 node 1 admin conninfo='host=172.16.167.144 dbname=testdb1 user=postgres port=5432';
 node 2 admin conninfo='host=172.16.167.130 dbname=testdb1 user=postgres port=5432';
  init cluster (id = 1, comment = 'Node 1 - testdb1@172.16.167.144');

# STORE NODE
  store node (id = 2, event node = 1, comment = 'Node 2 - testdb1@172.16.167.130');
  echo 'Set up replication nodes';

# STORE PATH
  echo 'Next: configure paths for each node/origin';
  store path (server = 1, client = 2, conninfo = 'host=172.16.167.144 dbname=testdb1 user=postgres port=5432');
  store path (server = 2, client = 1, conninfo = 'host=172.16.167.130 dbname=testdb1 user=postgres port=5432');
  echo 'Replication nodes prepared';
  echo 'Please start a slon replication daemon for each node';

# ./slonik_init_cluster --config=../etc/slon_tools.conf | slonik
<stdin>:10: Set up replication nodes
<stdin>:13: Next: configure paths for each node/origin
<stdin>:16: Replication nodes prepared
<stdin>:17: Please start a slon replication daemon for each node

Create the set

Create the "set" that slony will manage.

# ./slonik_create_set --config=../etc/slon_tools.conf set1
cluster name = replication_test;
 node 1 admin conninfo='host=172.16.167.144 dbname=testdb1 user=postgres port=5432';
 node 2 admin conninfo='host=172.16.167.130 dbname=testdb1 user=postgres port=5432';

# TABLE ADD KEY

# CREATE SET
    create set (id = 1, origin = 1, comment = 'Set 1 (set1_name) for replication_test');

# SET ADD TABLE
  echo 'Subscription set 1 (set1_name) created';
  echo 'Adding tables to the subscription set';
  set add table (set id = 1, origin = 1, id = 1,
                 full qualified name = 'public.table1',
                 comment = 'Table public.table1 with primary key');
  echo 'Add primary keyed table public.table1';
  set add table (set id = 1, origin = 1, id = 2,
                 full qualified name = 'public.table2',
                 comment = 'Table public.table2 with primary key');
  echo 'Add primary keyed table public.table2';

# SET ADD SEQUENCE
  echo 'Adding sequences to the subscription set';
  echo 'All tables added';

# ./slonik_create_set --config=../etc/slon_tools.conf set1 | ./slonik
<stdin>:11: Subscription set 1 (set1_name) created
<stdin>:12: Adding tables to the subscription set
<stdin>:16: Add primary keyed table public.table1
<stdin>:20: Add primary keyed table public.table2
<stdin>:23: Adding sequences to the subscription set
<stdin>:24: All tables added

Starting slony

Start the slon daemon. It will not start if the directory for its pid files does not exist, so create it first.

# mkdir /var/run/slony1
# ./slon_start --config=../etc/slon_tools.conf node1
Invoke slon for node 1 - /usr/local/slony/bin/slon -p /var/run/slony1/replication_test_node1.pid -s 1000 -d2  replication_test 'host=172.16.167.144 dbname=testdb1 user=postgres port=5432' > /var/log/slony1/node1/testdb1-2014-10-17.log 2>&1 &
Slon successfully started for cluster replication_test, node node1
#
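The pid-directory requirement above can be folded into a small helper that makes sure every directory slon needs exists before it is invoked (a sketch; the real paths would be the $PIDFILE_DIR and $LOGDIR values from slon_tools.conf):

```shell
#!/bin/sh
# slon_start fails when the pid directory is missing, so create the
# directories before invoking it.  Creating the log directory too is
# harmless even if slon_start would create it by itself.
ensure_slon_dirs() {
    for d in "$@"; do
        [ -d "$d" ] || mkdir -p "$d" || return 1
    done
}

# The real call would be: ensure_slon_dirs /var/run/slony1 /var/log/slony1/node1
# A temporary base is used here so the sketch runs without root.
base="${TMPDIR:-/tmp}/slony1-demo"
ensure_slon_dirs "$base/run" "$base/log/node1" && echo "directories ready"
```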

Copy the config file to the slave

The same configuration file is used on both nodes, so copy it over.

# cd /usr/local/slony/etc
# scp -p slon_tools.conf 172.16.167.130:/usr/local/slony/etc

Slave-side setup

As on the master, create the directory for the pid files first.

# mkdir /var/run/slony1
# cd /usr/local/slony/bin
# ./slon_start --config=../etc/slon_tools.conf node2
Invoke slon for node 2 - /usr/local/slony/bin/slon -p /var/run/slony1/replication_test_node2.pid -s 1000 -d2  replication_test 'host=172.16.167.130 dbname=testdb1 user=postgres port=5432' > /var/log/slony1/node2/testdb1-2014-10-17.log 2>&1 &
Slon successfully started for cluster replication_test, node node2
PID [33233]
Start the watchdog process as well...
#

Register the set on the slave

Subscribe the slave to the "set" created on the master.

# ./slonik_subscribe_set --config=../etc/slon_tools.conf set1 node2
cluster name = replication_test;
 node 1 admin conninfo='host=172.16.167.144 dbname=testdb1 user=postgres port=5432';
 node 2 admin conninfo='host=172.16.167.130 dbname=testdb1 user=postgres port=5432';
 
    subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
  echo 'Subscribed nodes to set 1';
# ./slonik_subscribe_set --config=../etc/slon_tools.conf set1 node2 | ./slonik
<stdin>:6: Subscribed nodes to set 1
#

Checking that slony works

Replication should now be running. To verify, query sl_status in the "_<cluster name>" schema; reportedly things are fine as long as st_lag_time stays within ten-odd seconds.

# su - postgres
$ psql testdb1
testdb1=# select * from _replication_test.sl_status;
 st_origin | st_received | st_last_event |       st_last_event_ts        | st_last_received |     st_last_received_ts      |   st_last_received_event_ts   | st_lag_num_events |   st_lag_time   
-----------+-------------+---------------+-------------------------------+------------------+------------------------------+-------------------------------+-------------------+-----------------
         1 |           2 |    5000000076 | 2014-10-17 18:08:44.906713+09 |       5000000075 | 2014-10-17 18:08:40.31399+09 | 2014-10-17 18:08:34.891492+09 |                 1 | 00:00:11.549382
(1 row)
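For monitoring, the st_lag_time interval can be converted to seconds and compared against a threshold. A minimal sketch (assumes the lag is under a day; the 60-second threshold is an arbitrary example, and the lag value would come from the sl_status query above):

```shell
#!/bin/sh
# Convert an st_lag_time value of the form HH:MM:SS.ffffff into whole
# seconds and compare it against a threshold.
lag_seconds() {
    echo "$1" | awk -F: '{ printf "%d\n", $1 * 3600 + $2 * 60 + $3 }'
}

lag='00:00:11.549382'            # value taken from the sl_status output above
secs=$(lag_seconds "$lag")
if [ "$secs" -lt 60 ]; then
    echo "lag OK: ${secs}s"
else
    echo "lag too high: ${secs}s"
fi
```

Run as shown, this prints "lag OK: 11s".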

After this, run actual INSERTs and UPDATEs on the master and confirm that they show up on the slave.
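That final check might look like the following (a sketch; the table1 columns come from the dump above, the row values are arbitrary):

```
master$ psql testdb1 -c "INSERT INTO table1 (id, uname) VALUES (1, 'alice');"
(wait a few seconds for the sync)
slave$  psql testdb1 -c "SELECT * FROM table1 WHERE id = 1;"
```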