Postgres reading notes
Chapter 1
Write-ahead log
When a client writes data, it is first written to the write-ahead log (WAL); because this is a sequential write, it performs better by design.
WAL configuration.
At build time, --with-wal-segsize=size can be passed to configure
to change the default WAL segment file size.
wal_level=replica
fsync=on
max_wal_size = 1GB
min_wal_size = 80MB
wal_level has three levels:
minimal: the database cannot be recovered from a base backup plus the WAL.
replica: supports WAL archiving and replication.
logical: adds, on top of replica, the information needed for logical decoding.
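Checking and raising wal_level from psql can be sketched like this (ALTER SYSTEM writes to postgresql.auto.conf; the change only takes effect after a restart):

```sql
-- Check the current WAL level
SHOW wal_level;
-- Raise it to logical; a server restart is required for it to take effect
ALTER SYSTEM SET wal_level = 'logical';
```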
Logging configuration
# Linux defaults; the book recommends the csv format
#log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, jsonlog, syslog, and
# eventlog, depending on platform.
# csvlog and jsonlog require
# logging_collector to be on.
# This is used when logging to stderr:
logging_collector = on # Enable capturing of stderr, jsonlog,
# and csvlog into log files. Required
# to be on for csvlogs and jsonlogs.
# (change requires restart)
# These are only used if logging_collector is on:
log_directory = 'log' # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
# Rotate to a new log file once the current one reaches this size.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
1.5 Archived logs
Archived logs are archived copies of the WAL; with them the database can be restored to a specific point in time.
# Whether archiving is enabled; changing it requires a restart
archive_mode = on
# The archiving command
archive_command = 'cp %p /data/postgres/archive/%f'
# Force a log switch after this much time; 0 disables it
#archive_timeout = 0
The autovacuum process
PostgreSQL keeps row versions: after an update or delete, the old rows are not removed immediately but only marked as deleted. Once the transaction has committed, these versions have no further value, so they must be cleaned up periodically.
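The dead row versions left behind by update and delete can be observed through the standard statistics view; a quick sketch:

```sql
-- Tables with a large n_dead_tup are overdue for a vacuum
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```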
#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------
#autovacuum = on # Enable autovacuum subprocess? 'on'
# requires track_counts to also be on.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
#autovacuum_vacuum_insert_threshold = 1000 # min number of row inserts
# before vacuum; -1 disables insert
# vacuums
#autovacuum_analyze_threshold = 50 # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_vacuum_insert_scale_factor = 0.2 # fraction of inserts over table
# size before insert vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 2ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
Memory architecture
Local memory and shared memory.
Key parameters:
#work_mem = 4MB
#maintenance_work_mem = 64MB
#temp_buffers = 8MB
shared_buffers = 128MB
#effective_cache_size = 4GB
Summary of common commands
# List all databases
select datname from pg_database;
# or
\l
# Show information about the existing databases
select oid,datname,datistemplate,datallowconn from pg_database;
Changing the log format to csv
# Check the log destination setting
postgres=# show log_destination;
# vi postgresql.conf
log_destination = 'csvlog'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
# Reload the configuration without a restart
pg_ctl reload
Managing a database instance with pg_ctl
~]$ pg_ctl --help
pg_ctl is a utility to initialize, start, stop, or control a PostgreSQL server.
Usage:
pg_ctl init[db] [-D DATADIR] [-s] [-o OPTIONS]
pg_ctl start [-D DATADIR] [-l FILENAME] [-W] [-t SECS] [-s]
[-o OPTIONS] [-p PATH] [-c]
pg_ctl stop [-D DATADIR] [-m SHUTDOWN-MODE] [-W] [-t SECS] [-s]
pg_ctl restart [-D DATADIR] [-m SHUTDOWN-MODE] [-W] [-t SECS] [-s]
[-o OPTIONS] [-c]
pg_ctl reload [-D DATADIR] [-s]
pg_ctl status [-D DATADIR]
pg_ctl promote [-D DATADIR] [-W] [-t SECS] [-s]
pg_ctl logrotate [-D DATADIR] [-s]
pg_ctl kill SIGNALNAME PID
Two distributed-cluster implementations for Postgres
- greenplum: strong transactional capabilities.
- citus: Citus has no global transaction snapshot, the same as common sharding solutions such as MyCAT. This simplifies the design and improves performance; the trade-off is that globally consistent reads cannot be guaranteed, only eventual consistency.
Postgres object management
A database is itself a Postgres object, and one database can contain multiple schemas. A schema in turn contains database objects such as tables, functions, and operators. A new database gets a public schema by default.
1. Create a schema
mydb=# create schema gxl;
mydb=# \dn
List of schemas
Name | Owner
--------+-------------------
gxl | postgres
public | pg_database_owner
2. Basic table operations
Creating tables: by default, a plain create table statement creates the table in the public schema; to create it in a particular schema, use the schema.table_name(......) form.
mydb=# \dt gxl.*
Did not find any relation named "gxl.*" # returned when the schema contains no tables
# Create a table
mydb=# create table gxl.test2(id int,name varchar(32), age int);
CREATE TABLE
# List the tables
mydb=# \dt gxl.*
List of relations
Schema | Name | Type | Owner
--------+-------+-------+----------
gxl | test2 | table | postgres
(1 row)
# Describe the table
mydb=# \d gxl.test2;
Table "gxl.test2"
Column | Type | Collation | Nullable | Default
--------+-----------------------+-----------+----------+---------
id | integer | | |
name | character varying(32) | | |
age | integer | | |
# Alter the table
mydb=# alter table gxl.test2 add gender varchar(1) default 'M';
ALTER TABLE
mydb=# \d gxl.test2;
Table "gxl.test2"
Column | Type | Collation | Nullable | Default
--------+-----------------------+-----------+----------+------------------------
id | integer | | |
name | character varying(32) | | |
age | integer | | |
gender | character varying(1) | | | 'M'::character varying
# Modify or drop a column
alter table gxl.test2 alter gender type varchar(10);
alter table gxl.test2 drop column gender;
# Drop the table
drop table gxl.test2;
3. Table constraints
A constraint restricts the values a column may hold; for example, a primary key must be unique, so a duplicate key raises an error.
- Primary key
- Unique key
- Foreign key
- Check constraint
- Not-null constraint
- Default value
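A single table exercising all six constraint kinds might look like this (an illustrative sketch; it assumes the dept table shown later in the relation list has an id column):

```sql
CREATE TABLE emp (
    id      int PRIMARY KEY,                   -- primary key
    email   varchar(64) UNIQUE,                -- unique key
    dept_id int REFERENCES dept(id),           -- foreign key (assumes dept(id) exists)
    age     int CHECK (age BETWEEN 0 AND 120), -- check constraint
    name    varchar(32) NOT NULL,              -- not-null constraint
    gender  varchar(1) DEFAULT 'M'             -- default value
);
```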
# Primary key constraint
create table testprimarykey(id int primary key,name varchar(20));
mydb=# \d testprimarykey;
Table "public.testprimarykey"
Column | Type | Collation | Nullable | Default
--------+-----------------------+-----------+----------+---------
id | integer | | not null |
name | character varying(20) | | |
Indexes:
"testprimarykey_pkey" PRIMARY KEY, btree (id)
mydb=# insert into testprimarykey values(1,'tom');
INSERT 0 1
mydb=# insert into testprimarykey values(2,'jerry');
INSERT 0 1
mydb=# insert into testprimarykey values(3,'jimi');
INSERT 0 1
mydb=# insert into testprimarykey values(1,'defan');
ERROR: duplicate key value violates unique constraint "testprimarykey_pkey"
DETAIL: Key (id)=(1) already exists. # primary key conflict
4. Table bloat
Postgres and MySQL use the same principle of logical deletion: table space is not released immediately. The table must be vacuumed manually to reclaim the fragmented space.
While this runs, a vacuum process can be seen with ps -ef | grep vacu.
# Create a table, insert data, then check the table's size.
mydb=# create table testfragement(tid int,tname varchar(20));
CREATE TABLE
mydb=# insert into testfragement select n,'myname_'||n from generate_series(1,50000) n;
INSERT 0 50000
mydb=# select pg_size_pretty(pg_relation_size('testfragement'));
pg_size_pretty
----------------
651 MB
(1 row)
mydb=# delete from testfragement;
DELETE 55000
mydb=# select pg_size_pretty(pg_relation_size('testfragement'));
pg_size_pretty
----------------
651 MB
(1 row)
mydb=# vacuum testfragement;
VACUUM
mydb=# select pg_size_pretty(pg_relation_size('testfragement'));
pg_size_pretty
----------------
0 bytes
(1 row)
# Check the table's statistics
mydb=# \x
Expanded display is on.
mydb=# select * from pg_stat_user_tables where relname = 'testfragement';
-[ RECORD 1 ]-------+------------------------------
relid | 16773
schemaname | public
relname | testfragement
seq_scan | 2
seq_tup_read | 55000
idx_scan |
idx_tup_fetch |
n_tup_ins | 13085872
n_tup_upd | 0
n_tup_del | 55000
n_tup_hot_upd | 0
n_live_tup | 0 # both of these at 0 means the table space has been reclaimed
n_dead_tup | 0 # both of these at 0 means the table space has been reclaimed
n_mod_since_analyze | 0
n_ins_since_vacuum | 0
last_vacuum | 2024-10-25 11:39:54.195119+08
last_autovacuum | 2024-10-25 11:39:52.894711+08
last_analyze |
last_autoanalyze | 2024-10-25 11:39:53.22663+08
vacuum_count | 1 # incremented by the manual vacuum
autovacuum_count | 2
analyze_count | 0
autoanalyze_count | 2
5. Statistics in PostgreSQL
pg_class: stores per-table and per-index statistics such as row counts and page counts.
pg_statistic: stores the statistics created by the analyze command; the query optimizer uses them.
The pg_stats view exposes the database statistics in readable form.
# Get a table's statistics from the system catalog
mydb=# select oid,relname,relpages,reltuples from pg_class where relname='testprimarykey';
-[ RECORD 1 ]-------------
oid | 16768
relname | testprimarykey
relpages | 0
reltuples | -1
mydb=# select * from pg_stats where tablename='testprimarykey';
The result above is not accurate, because the table does contain rows (the statistics have simply not been collected yet).
6. Collecting statistics manually with analyze
Note: analyze takes a lock on the table (a ShareUpdateExclusiveLock, which does not block ordinary reads and writes).
mydb=# analyze verbose testprimarykey;
INFO: analyzing "public.testprimarykey"
INFO: "testprimarykey": scanned 1 of 1 pages, containing 3 live rows and 1 dead rows; 3 rows in sample, 3 estimated total rows
ANALYZE
# Check the catalog again
# The statistics are now correct.
7. Statistics-collector settings
mydb=# show track_
track_activities track_commit_timestamp track_functions track_wal_io_timing
track_activity_query_size track_counts track_io_timing
8. Temporary tables
A temporary table exists only in the current session; when the session ends it is automatically dropped and its space reclaimed.
Because it is session-local, name clashes with other tables are not a concern.
# Create a temporary table
create temporary table temptable(
tid int primary key,
tname varchar(10));
9. Indexes
pg_indexes is a view through which the index information of a schema can be obtained.
# List the indexes in the public schema.
mydb=# select schemaname,tablename,indexname from pg_indexes where schemaname = 'public';
schemaname | tablename | indexname
------------+----------------+---------------------
public | students | students_pkey
public | test_1 | test_1_pkey
public | dept | dept_pkey
public | testprimarykey | testprimarykey_pkey
# Query the detailed index information
mydb=# \x
Expanded display is on.
mydb=# select * from pg_index where indrelid in (select oid from pg_class where relname = 'testprimarykey');
-[ RECORD 1 ]-------+------
indexrelid | 16771
indrelid | 16768
indnatts | 1
indnkeyatts | 1
indisunique | t
indnullsnotdistinct | f
indisprimary | t
indisexclusion | f
indimmediate | t
indisclustered | f
indisvalid | t
indcheckxmin | f
indisready | t
indislive | t
indisreplident | f
indkey | 1
indcollation | 0
indclass | 1978
indoption | 0
indexprs |
indpred |
Indexes distinctive to PostgreSQL:
- Partial indexes
- Expression indexes
PostgreSQL cannot be forced to use a particular index, nor be prevented from using a sequential scan, so when debugging you can turn sequential scans off.
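Partial and expression indexes, plus the session switch for debugging, can be sketched like this (using the gxl.test2 table created earlier; index names are made up):

```sql
-- Partial index: only rows matching the predicate are indexed
CREATE INDEX idx_test2_adult ON gxl.test2 (age) WHERE age >= 18;
-- Expression index: indexes the value of an expression
CREATE INDEX idx_test2_lower_name ON gxl.test2 (lower(name));
-- Debugging only: discourage sequential scans in this session
SET enable_seqscan = off;
```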
9.1 Index types
Plain, unique, primary-key, composite, full-text, and hash indexes.
10. Cleaning up indexes
Find and clean up duplicate indexes:
select indrelid::regclass as tablename,array_agg(indexrelid::regclass) as indexes from pg_index group by indrelid,indkey having count(*) >1;
Find indexes that have gone unused for a long time:
mydb=# select relname,indexrelname,idx_scan from pg_catalog.pg_stat_user_indexes;
Find invalid indexes:
mydb=# select indexrelid,indisvalid from pg_index where indisvalid = 'f';
11. Views
A view is a virtual table and holds no data of its own; it is stored in the data dictionary as a select statement. The tables a view depends on are its base tables, and the view exposes part of their data.
# Create a view
create or replace view view1 as select * from testprimarykey where id <=1;
mydb=# select * from view1;
id | name
----+------
1 | tom
(1 row)
Parallel query
By default, a plain select runs as a sequential scan; with a where condition the planner may choose a parallel query.
The number of worker processes can be tuned to the actual hardware to speed up queries.
# Create a table
mydb=# create table testtable1(id int, tname varchar(20));
mydb=# insert into testtable1 select n,'myname_'||n from generate_series(1,5000000) n;
mydb=# explain analyze select * from testtable1 where tname='myname_10';
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
Gather (cost=1000.00..58878.99 rows=1 width=18) (actual time=0.920..579.144 rows=1 loops=1)
Workers Planned: 2 # workers the planner requested
Workers Launched: 2 # workers actually started
-> Parallel Seq Scan on testtable1 (cost=0.00..57878.89 rows=1 width=18) (actual time=375.012..566.492 rows=0 loops=3)
Filter: ((tname)::text = 'myname_10'::text)
Rows Removed by Filter: 1666666
Planning Time: 0.177 ms
Execution Time: 579.193 ms
(8 rows)
# Seq Scan: a plain sequential scan
mydb=# set max_parallel_workers_per_gather = 0;
mydb=# explain analyze select * from testtable1 where tname='myname_10';
QUERY PLAN
------------------------------------------------------------------------------------------------------------
Seq Scan on testtable1 (cost=0.00..94336.12 rows=1 width=18) (actual time=0.365..1828.760 rows=1 loops=1)
Filter: ((tname)::text = 'myname_10'::text)
Rows Removed by Filter: 4999999
Planning Time: 0.145 ms
Execution Time: 1828.814 ms
(5 rows)
# With the workers disabled, the query time grew more than threefold.
Parameters related to parallel query:
#max_worker_processes = 8 # (change requires restart)
#max_parallel_workers_per_gather = 2 # limited by max_parallel_workers
#max_parallel_workers = 8 # number of max_worker_processes that
#parallel_setup_cost = 1000.0 # same scale as above
#parallel_tuple_cost = 0.1 # same scale as above
#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
1. Parallel scans
- Parallel sequential scan: a select without a filter, on a table with no index.
- Parallel index scan: a where condition on an indexed column.
- Parallel bitmap heap scan: an indexed query whose condition contains the or keyword.
2. Inspecting table locks
Simulate a table lock.
# Look up the table's oid
# pg_class stores table metadata
mydb=# select oid,relname,relkind,relfilenode from pg_class where relname='testtable1';
oid | relname | relkind | relfilenode
-------+------------+---------+-------------
16787 | testtable1 | r | 16787
# Start a transaction and modify a row
postgres=# \c mydb
You are now connected to database "mydb" as user "postgres".
mydb=# start transaction;
START TRANSACTION
mydb=*# update testtable1 set tname = 'gongxiaoliao' where id = 946026;
UPDATE 1
# Altering the table now appears to hang
mydb=# alter table testtable1 add dno int;
# Inspect the lock information
postgres=# \x
Expanded display is on.
postgres=# select * from pg_locks where relation = '16787';
-[ RECORD 1 ]------+------------------------------
locktype | relation
database | 16397
relation | 16787
page |
tuple |
virtualxid |
transactionid |
classid |
objid |
objsubid |
virtualtransaction | 5/31
pid | 793183
mode | AccessExclusiveLock
granted | f
fastpath | f
waitstart | 2024-10-28 12:00:23.476624+08
-[ RECORD 2 ]------+------------------------------
locktype | relation
database | 16397
relation | 16787
page |
tuple |
virtualxid |
transactionid |
classid |
objid |
objsubid |
virtualtransaction | 4/305
pid | 793068
mode | RowExclusiveLock
granted | t
fastpath | f
waitstart |
# Once the update committed, the table could be altered normally and the response returned immediately.
mydb=# alter table testtable1 add dno int;
ALTER TABLE
# SQL to find statements waiting on locks
---------------------------
postgres=# with
t_wait as
(
select
a.mode,a.locktype,a.database,a.relation,a.page,a.tuple,
a.classid,a.granted,a.objid,a.objsubid,a.pid,
a.virtualtransaction,a.virtualxid,a.transactionid,a.fastpath,
b.state,b.query,b.xact_start,b.query_start,b.usename,
b.datname,b.client_addr,b.client_port,b.application_name
from pg_locks a,pg_stat_activity b where a.pid=b.pid and not a.granted
),
t_run as
(
select
a.mode,a.locktype,a.database,a.relation,a.page,a.tuple,
a.classid,a.granted,a.objid,a.objsubid,a.pid,
a.virtualtransaction,a.virtualxid,a.transactionid,a.fastpath,
b.state,b.query,b.xact_start,b.query_start,b.usename,
b.datname,b.client_addr,b.client_port,b.application_name
from pg_locks a,pg_stat_activity b where a.pid=b.pid and a.granted
),
t_overlap as
(
select r.* from t_wait w join t_run r on
(
r.locktype is not distinct from w.locktype and
r.database is not distinct from w.database and
r.relation is not distinct from w.relation and
r.page is not distinct from w.page and
r.tuple is not distinct from w.tuple and
r.virtualxid is not distinct from w.virtualxid and
r.transactionid is not distinct from w.transactionid and
r.classid is not distinct from w.classid and
r.objid is not distinct from w.objid and
r.objsubid is not distinct from w.objsubid and
r.pid <> w.pid
)
),
t_unionall as
(
select r.* from t_overlap r
union all
select w.* from t_wait w
)
select
locktype,datname,relation::regclass,page,tuple,
virtualxid,transactionid::text,classid::regclass,
objid,objsubid,
string_agg(
'Pid: '
||case when pid is null
then 'NULL' else pid::text end||chr(10)||'Lock_Granted: '
||case when granted is null
then 'NULL' else granted::text end||' , Mode: '
||case when mode is null
then 'NULL' else mode::text end||' , FastPath: '
||case when fastpath is null
then 'NULL' else fastpath::text end||' , VirtualTransaction: '
||case when virtualtransaction is null
then 'NULL' else virtualtransaction::text end||' , Session_State: '
||case when state is null
then 'NULL' else state::text end||chr(10)||'Username: '
||case when usename is null
then 'NULL' else usename::text end||' , Database: '
||case when datname is null
then 'NULL' else datname::text end||' , Client_Addr: '
||case when client_addr is null
then 'NULL' else client_addr::text end||' , Client_Port: '
||case when client_port is null
then 'NULL' else client_port::text end||' , Application_Name: '
||case when application_name is null
then 'NULL' else application_name::text end
||chr(10)||'Xact_Start: '
||case when xact_start is null
then 'NULL' else xact_start::text end||' , Query_Start: '
||case when query_start is null
then 'NULL' else query_start::text end||' , Xact_Elapse: '
||case when (now()-xact_start) is null
then 'NULL' else (now()-xact_start)::text end||' , Query_Elapse: '
||case when (now()-query_start) is null
then 'NULL' else (now()-query_start)::text end
||chr(10)||'SQL (Current SQL in Transaction): '||chr(10)
||case when query is null then 'NULL' else query::text end,chr(10)
||'--------'||chr(10)
order by
( case mode
when 'INVALID' then 0
when 'AccessShareLock' then 1
when 'RowShareLock' then 2
when 'RowExclusiveLock' then 3
when 'ShareUpdateExclusiveLock' then 4
when 'ShareLock' then 5
when 'ShareRowExclusiveLock' then 6
when 'ExclusiveLock' then 7
when 'AccessExclusiveLock' then 8
else 0
end ) desc,
(case when granted then 0 else 1 end)
) as lock_conflict
from t_unionall
group by locktype,datname,relation,page,tuple,virtualxid,
transactionid::text,classid,objid,objsubid;
3. Deadlocks
Parameters related to deadlocks:
- deadlock_timeout
- lock_timeout
- statement_timeout
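How the three parameters are typically used in a session (a sketch; the values are examples, not recommendations):

```sql
SET lock_timeout = '5s';        -- abort any statement waiting longer than 5s for a lock
SET statement_timeout = '30s';  -- abort any statement running longer than 30s
SHOW deadlock_timeout;          -- wait this long on a lock before running deadlock detection (default 1s)
```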
SQL for examining deadlocks
# SQL script to monitor blocking information
----------------------------------
SELECT blocked_locks.pid AS blocked_pid,
blocked_activity.usename AS blocked_user,
blocking_locks.pid AS blocking_pid,
blocking_activity.usename AS blocking_user,
blocked_activity.query AS blocked_statement,
blocking_activity.query AS current_statement_in_blocking_process
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity
ON blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks blocking_locks
ON blocking_locks.locktype = blocked_locks.locktype
AND blocking_locks.DATABASE IS NOT DISTINCT
FROM blocked_locks.DATABASE
AND blocking_locks.relation IS NOT DISTINCT
FROM blocked_locks.relation
AND blocking_locks.page IS NOT DISTINCT
FROM blocked_locks.page
AND blocking_locks.tuple IS NOT DISTINCT
FROM blocked_locks.tuple
AND blocking_locks.virtualxid IS NOT DISTINCT
FROM blocked_locks.virtualxid
AND blocking_locks.transactionid IS NOT DISTINCT
FROM blocked_locks.transactionid
AND blocking_locks.classid IS NOT DISTINCT
FROM blocked_locks.classid
AND blocking_locks.objid IS NOT DISTINCT
FROM blocked_locks.objid
AND blocking_locks.objsubid IS NOT DISTINCT
FROM blocked_locks.objsubid
AND blocking_locks.pid != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocking_activity
ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.GRANTED;
User management
create user and create role are nearly identical; a user created with create user simply has the login attribute by default.
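The difference between create user and create role can be shown directly (u1, r1, and r2 are hypothetical names):

```sql
CREATE USER u1;              -- has LOGIN by default
CREATE ROLE r1 WITH LOGIN;   -- equivalent to the line above
CREATE ROLE r2;              -- NOLOGIN by default: cannot connect
```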
When PostgreSQL is initialized, it always contains one predefined role. This role is always a superuser; by default its name matches the operating-system user and is usually postgres.
# List the existing users
postgres=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
admin2 | Superuser | {}
jgjadmin | | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
Attribute | Description |
---|---|
login | can be used as the initial role name for a database connection |
superuser | database superuser |
createdb | may create databases |
createrole | may create and drop other ordinary users |
replication | role attribute used for streaming replication |
password | specifies a password |
inherit | inherits the privileges of the roles it belongs to |
The user and role catalogs in the pg database
postgres=# select * from pg_user;
usename | usesysid | usecreatedb | usesuper | userepl | usebypassrls | passwd | valuntil | useconfig
----------+-----------+-------------+----------+---------+--------------+----------+----------+-----------
admin2 | 16388 | f | t | f | f | ******** | |
jgjadmin | 119922631 | f | f | f | f | ******** | |
postgres | 10 | t | t | t | t | ******** | |
(3 rows)
postgres=# select rolname from pg_roles;
rolname
---------------------------
pg_database_owner
pg_read_all_data
pg_write_all_data
pg_monitor
pg_read_all_settings
pg_read_all_stats
pg_stat_scan_tables
pg_read_server_files
pg_write_server_files
pg_execute_server_program
pg_signal_backend
pg_checkpoint
admin2
jgjadmin
postgres
(15 rows)
-- Change a user's password --
create user tom;
alter user tom with password 'pas123';
PostgreSQL's passwordcheck module can check password complexity.
# If the contrib sources were built beforehand:
[root@jgj-postgres passwordcheck]# ls
expected Makefile passwordcheck.c passwordcheck.o passwordcheck.so sql
[root@jgj-postgres passwordcheck]# pwd
/opt/postgresql-15.7/contrib/passwordcheck
[root@jgj-postgres passwordcheck]# make install
vi /data/postgresql/data/postgresql.conf
shared_preload_libraries = 'passwordcheck'
# For multiple modules, list them like this:
~]# grep shared_pr /data/postgres/postgresql.conf
shared_preload_libraries = 'file_fdw,passwordcheck' # (change requires restart)
postgres=# show shared_preload_libraries;
shared_preload_libraries
--------------------------
file_fdw,passwordcheck
# Password rules are now checked automatically
postgres=# alter user tom with password 'pas123';
ERROR: password is too short
1. Privilege management
Every database object has an owner; the owner needs no separate grants on its own objects, and can implicitly grant privileges on them to other users.
PostgreSQL privileges come in two kinds: attributes specified when the user is created, and privileges managed with grant and revoke.
# Create a user with a password, grant it the createdb privilege, and allow it to log in remotely.
postgres=# create role cdb createdb password 'abc@123aaa' login;
CREATE ROLE
# or
create user cdb;
alter user cdb with createdb password 'aaa@1234!';
postgres=# \du+ cdb;
List of roles
Role name | Attributes | Member of | Description
-----------+------------+-----------+-------------
cdb | Create DB | {} |
2. Managing privileges with grant and revoke
These two commands manage privileges such as:
- Creating schemas in a database.
- Creating temporary tables in a given database.
- Connecting to a given database.
- Creating objects such as tables, views, and functions in a database.
- Running DML on certain tables.
- Operating on sequences.
- Creating tables and indexes in a given tablespace.
# A freshly created user with no grants can still, if it can log in, list the databases and relations by default.
mydb=> \dn
List of schemas
Name | Owner
--------+-------------------
gxl | postgres
public | pg_database_owner
(2 rows)
mydb=>
mydb=> \d
List of relations
Schema | Name | Type | Owner
--------+----------------+-------+----------
public | dept | table | postgres
public | students | table | pguser
public | test_1 | table | pguser
public | testfragement | table | postgres
public | testprimarykey | table | postgres
public | testtable1 | table | postgres
public | view1 | view | postgres
(7 rows)
mydb=> select * from testtable1 limit 10;
ERROR: permission denied for table testtable1
# Grant cdb select/insert/update/delete on the tables in mydb's public schema.
mydb=# grant select,insert,update,delete on all tables in schema public to cdb;
# The user can now read the table
mydb=> select * from testtable1 limit 10;
id | tname | dno
----+-----------+-----
1 | myname_1 |
2 | myname_2 |
3 | myname_3 |
4 | myname_4 |
5 | myname_5 |
6 | myname_6 |
7 | myname_7 |
8 | myname_8 |
9 | myname_9 |
10 | myname_10 |
(10 rows)
# The grant only covers objects that already exist; objects created later are still inaccessible, so also give the user default privileges.
alter default privileges grant select,insert,update,delete on tables to cdb;
3. admin option and grant option
When a role is granted with admin option, the grantee may grant that role to other roles in turn.
When a privilege is granted with grant option, the grantee may grant that privilege to other users.
A demonstration of the logic:
# Create role1 and two users, A and B.
mydb=# create role role1;
CREATE ROLE
mydb=# create user A with password '126.comM';
CREATE ROLE
mydb=# create user B with password '126.comM';
# Grant privileges to role1, then grant role1 to user A
mydb=# grant select,update,delete on all tables in schema public to role1;
GRANT
mydb=# grant role1 to A with admin option;
GRANT ROLE
# Log in as A and pass the role A received on to B
# With default settings, PostgreSQL folds unquoted upper-case user names to lower case.
~]$ psql -W -h `hostname -I` mydb a
mydb=> grant role1 to b;
GRANT ROLE
# Revoking it again
mydb=# revoke role1 from a;
REVOKE ROLE
# Missing privileges will raise an error
# Privilege delegation
mydb=# grant select on testtable1 to a with grant option;
[postgres@pgsql-jgj-test ~]$ psql -W -h `hostname -I` mydb a
Password:
psql (15.8)
Type "help" for help.
mydb=> grant select on testtable1 to b;
# b now has a's select privilege
# To revoke just b's privilege:
revoke select on testtable1 from b cascade;
4. Group-based privilege management
With many users, manage them in groups: only the group needs grants, which simplifies administration.
Members of a group role automatically inherit the group role's privileges; group and member roles form a parent-child relationship.
# Create the group role
create role father login nosuperuser nocreatedb nocreaterole password '126.comM' inherit;
inherit # inherit the privileges of the roles this role is a member of
# Superuser status cannot be inherited through group membership
# 授权组的权限
mydb=# grant connect on database mydb to father;
GRANT
mydb=# \c mydb
You are now connected to database "mydb" as user "postgres".
mydb=# grant usage on schema public to father;
GRANT
mydb=# grant select on public.testtable1 to father;
GRANT
# Grant the group's privileges to a member
create role child1 login nosuperuser nocreatedb nocreaterole password '126.comM' inherit;
grant father to child1;
# Log in and test access
[postgres@pgsql-jgj-test ~]$ psql -W -h `hostname -I` mydb child1
Password:
psql (15.8)
Type "help" for help.
mydb=> select * from testtable1 limit 10;
id | tname | dno
----+-----------+-----
1 | myname_1 |
2 | myname_2 |
3 | myname_3 |
4 | myname_4 |
5 | myname_5 |
6 | myname_6 |
7 | myname_7 |
8 | myname_8 |
9 | myname_9 |
10 | myname_10 |
(10 rows)
mydb=> update testtable1 set tname='myname_gxl' where id=1;
ERROR: permission denied for table testtable1
5. Explicitly activating a role's privileges with set role
Use case:
Create two roles with different privileges, grant both to an intermediate role, and grant that role to a particular user. After logging in, within the same session the user can run set role
to switch roles and obtain the different privilege sets.
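The set role flow can be sketched as follows, assuming hypothetical roles role_read and role_write have both been granted to the logged-in user:

```sql
SET ROLE role_read;    -- the session now carries only role_read's privileges
SELECT current_user;   -- reports role_read
SET ROLE role_write;   -- switch to the other privilege set
RESET ROLE;            -- return to the privileges of the login role
```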
6. Audit logging
Audit logs store information about specific events in the database. The extra logging adds work for the database, so it must be configured sensibly to avoid hurting performance. Alternatively, the pgaudit extension can audit DDL and DML statements.
Recommended production settings:
https://www.postgresql.org/docs/16/runtime-config-logging.html
log_destination = 'csvlog'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_truncate_on_rotation = on
log_connections = on
log_disconnections = on
log_statement = ddl
log_min_duration_statement = 60s
log_checkpoints = on
log_lock_waits = on
deadlock_timeout = 1s
Backup and recovery
1. Setting up WAL archiving
Archived logs are backups of the redo (write-ahead) log.
# Create the directory for archived logs
mkdir -p /data/postgres/archlog/
# Edit the configuration file and restart pgsql
archive_mode = on
archive_command = 'cp %p /data/postgres/archlog/%f'
wal_level = replica
# Log in and check the settings
postgres=# show archive_mode;
archive_mode
--------------
on
(1 row)
postgres=# show archive_command;
archive_command
----------------------------------------
cp %p /data/postgresql/archlog/%f
(1 row)
postgres=# show wal_level;
wal_level
-----------
replica
(1 row)
# List the write-ahead log files
select * from pg_ls_waldir() order by modification desc;
# Switch the log manually
# checkpoint triggers a full checkpoint, flushing dirty data from memory to disk.
checkpoint;
select pg_switch_wal();
# List the write-ahead log files again
select * from pg_ls_waldir() order by modification desc;
2. Managing expired archived logs
Once archiving is on, archived logs accumulate quickly; they can be cleaned up in several ways.
- Turn the archive command into a script that cleans up as it archives.
- Use a system cron job, for example:
find /data/postgresql/data/archive -type f -name "00000001*" -mtime +7 -delete
- Use PostgreSQL's own tool.
Here we use the built-in management tool.
[postgres@pgsql-jgj-test data]$ pg_controldata /data/postgres
pg_control version number: 1300
Catalog version number: 202209061
Database system identifier: 7406166691963946602
Database cluster state: in production
pg_control last modified: Wed 30 Oct 2024 03:07:39 PM CST
Latest checkpoint location: 0/770000C0
Latest checkpoint's REDO location: 0/77000088
Latest checkpoint's REDO WAL file: 000000020000000000000077
# Everything before 000000020000000000000077 can be deleted: up to that point, all dirty data in memory has been persisted.
# Clean up the archived logs.
[postgres@pgsql-jgj-test archlog]$ pg_archivecleanup -d /data/postgres/archlog/ 000000020000000000000077
pg_archivecleanup: keeping WAL file "/data/postgres/archlog//000000020000000000000077" and later
pg_archivecleanup: removing file "/data/postgres/archlog//00000002000000000000006F"
pg_archivecleanup: removing file "/data/postgres/archlog//000000020000000000000070"
pg_archivecleanup: removing file "/data/postgres/archlog//000000020000000000000071"
pg_archivecleanup: removing file "/data/postgres/archlog//000000020000000000000072"
pg_archivecleanup: removing file "/data/postgres/archlog//000000020000000000000073"
pg_archivecleanup: removing file "/data/postgres/archlog//000000020000000000000074"
pg_archivecleanup: removing file "/data/postgres/archlog//000000020000000000000075"
pg_archivecleanup: removing file "/data/postgres/archlog//000000020000000000000076"
# Clean up the write-ahead logs
pg_archivecleanup -d /data/postgres/pg_wal/ 000000020000000000000077
3. Logical backup
- pg_dump
- pg_dumpall
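Typical invocations of the two tools (a sketch; the paths are assumptions):

```shell
# Dump one database in the custom format, then restore it with pg_restore
pg_dump -Fc -f /data/pg_backup/mydb.dump mydb
pg_restore -d mydb /data/pg_backup/mydb.dump
# Dump the whole cluster, including roles and tablespaces, as plain SQL
pg_dumpall -f /data/pg_backup/all.sql
```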
4. File-level backup
Stop the database and back up the data directory; when something goes wrong, roll back by restoring the directory. Since this requires stopping the database, it is rarely suitable in production.
5. Hot backup and restore with pg_basebackup
# A pg_basebackup hot backup requires archive mode to be enabled.
----------------------# Take a backup------------------------
mkdir /data/pg_backup
~]$ pg_basebackup -Ft -v -Xs -D /data/pg_backup/$(date +%F)/
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/78000028 on timeline 2
pg_basebackup: starting background WAL receiver
pg_basebackup: created temporary replication slot "pg_basebackup_1028668"
pg_basebackup: write-ahead log end point: 0/78000100
pg_basebackup: waiting for background process to finish streaming ...
pg_basebackup: syncing data to disk ...
pg_basebackup: renaming backup_manifest.tmp to backup_manifest
pg_basebackup: base backup completed
# Inspect the backup contents
~]$ ll /data/pg_backup/2024-10-31/
total 334312
-rw------- 1 postgres postgres 275368448 Oct 31 15:10 16396.tar
-rw------- 1 postgres postgres 274234 Oct 31 15:10 backup_manifest
-rw------- 1 postgres postgres 49908224 Oct 31 15:10 base.tar
-rw------- 1 postgres postgres 16780288 Oct 31 15:10 pg_wal.tar
----------------------# Restore a backup------------------------
# Remove the old data
postgres]$ pwd
/data/postgres
postgres]$ rm -fr ./*
# First extract the base backup into the pgsql data directory
tar -xf base.tar -C /data/postgres/
2024-10-31]$ ll /data/postgres/
total 60
drwxrwxr-x 2 postgres postgres 182 Oct 31 15:10 archlog
-rw------- 1 postgres postgres 227 Oct 31 15:10 backup_label
-rw-rw-r-- 1 postgres postgres 232 Sep 3 10:49 backup_label.old
drwx------ 6 postgres postgres 46 Sep 3 10:49 base
drwx------ 2 postgres postgres 4096 Oct 31 15:18 global
drwx------ 2 postgres postgres 4096 Oct 28 00:00 log
drwx------ 2 postgres postgres 6 Sep 3 10:49 pg_commit_ts
drwx------ 2 postgres postgres 6 Sep 3 10:49 pg_dynshmem
-rw------- 1 postgres postgres 4793 Sep 3 10:49 pg_hba.conf
-rw------- 1 postgres postgres 1636 Sep 3 10:49 pg_ident.conf
drwx------ 4 postgres postgres 68 Oct 31 15:10 pg_logical
drwx------ 4 postgres postgres 36 Sep 3 10:49 pg_multixact
drwx------ 2 postgres postgres 6 Sep 3 10:49 pg_notify
drwx------ 2 postgres postgres 6 Oct 31 15:10 pg_replslot
-rw-rw-r-- 1 postgres postgres 109 Oct 30 14:26 pg_rman_recovery.conf
drwx------ 2 postgres postgres 6 Sep 3 10:49 pg_serial
drwx------ 2 postgres postgres 6 Sep 3 10:49 pg_snapshots
drwx------ 2 postgres postgres 6 Oct 30 15:05 pg_stat
drwx------ 2 postgres postgres 6 Sep 3 10:49 pg_stat_tmp
drwx------ 2 postgres postgres 6 Sep 3 10:49 pg_subtrans
drwx------ 2 postgres postgres 6 Sep 3 10:52 pg_tblspc
drwx------ 2 postgres postgres 6 Sep 3 10:49 pg_twophase
-rw------- 1 postgres postgres 3 Sep 3 10:49 PG_VERSION
drwx------ 3 postgres postgres 28 Oct 31 15:18 pg_wal
drwx------ 2 postgres postgres 18 Sep 3 10:49 pg_xact
-rw------- 1 postgres postgres 88 Sep 3 10:49 postgresql.auto.conf
-rw-rw-r-- 1 postgres postgres 8230 Oct 30 15:05 postgresql.conf
-rw------- 1 postgres postgres 26 Oct 31 15:10 tablespace_map
-rw-rw-r-- 1 postgres postgres 26 Sep 3 10:49 tablespace_map.old
# Extract the archived WAL into a directory of our choosing
mkdir /data/postgres/back_wal_log
tar -xf pg_wal.tar -C /data/postgres/back_wal_log
# Set the recovery parameters
vi postgresql.conf
restore_command = 'cp /data/postgres/back_wal_log/%f %p'
recovery_target = 'immediate' # recover only to a consistent state, as quickly as possible
# Tell postgres that recovery should be performed.
touch /data/postgres/recovery.signal
# Start the database
pg_ctl start -D ${PGDATA}
# The server is read-only at this point
mydb=# create schema ns2;
ERROR: cannot execute CREATE SCHEMA in a read-only transaction
~]$ pg_controldata -D /data/postgres/data/
pg_control version number: 1300
Catalog version number: 202209061
Database system identifier: 7432142558057729866
# Recovery is in progress
Database cluster state: in archive recovery
----- # Alternatively
# call a function to end read-only mode, or promote:
select pg_wal_replay_resume();
-- this system function resumes WAL replay after it has been paused
# Complete the recovery; this step also removes the recovery signal file
~]$ pg_ctl promote -D /data/postgres/data/
waiting for server to promote.... done
server promoted
# The normal running state
[postgres@my-pgsql data]$ pg_controldata -D /data/postgres/data/
pg_control version number: 1300
Catalog version number: 202209061
Database system identifier: 7432142558057729866
Database cluster state: in production
6. Point-in-time recovery with continuous archiving
# Create a table and insert a row
create table bak_test(id int,time timestamp);
insert into bak_test values(1,now());
mydb=# select * from bak_test;
id | time
----+---------------------------
1 | 2024-11-01 15:49:07.18224
(1 row)
# Switch the log file and trigger a checkpoint; the select ensures the WAL gets archived promptly
# Alternatively, tune archive_timeout; 60s is recommended in production.
select pg_switch_wal();
checkpoint;
# An archived file is produced at this point
[root@my-pgsql archlog]# ll
total 32772
-rw------- 1 postgres postgres 16777216 Nov 1 12:00 000000010000000000000019
-rw------- 1 postgres postgres 16777216 Nov 1 15:50 00000002000000000000001A
-rw------- 1 postgres postgres 33 Nov 1 15:32 00000002.history
# Take a base backup
pg_basebackup -Ft -P -R -v -D /data/postgres/pgbackup/
# The backup and archive files at this point
[postgres@my-pgsql postgres]$ ll archlog/ pgbackup/
archlog/:
total 49156
-rw------- 1 postgres postgres 16777216 Nov 1 17:02 000000010000000000000001
-rw------- 1 postgres postgres 16777216 Nov 1 17:04 000000010000000000000002
-rw------- 1 postgres postgres 16777216 Nov 1 17:04 000000010000000000000003
-rw------- 1 postgres postgres 338 Nov 1 17:04 000000010000000000000003.00000028.backup
pgbackup/:
total 47076
-rw------- 1 postgres postgres 180806 Nov 1 17:04 backup_manifest
-rw------- 1 postgres postgres 31237120 Nov 1 17:04 base.tar
-rw------- 1 postgres postgres 16778752 Nov 1 17:04 pg_wal.tar
# Insert a second row
insert into bak_test values(2,now()); -- used later to test recovery to a point in time
select pg_switch_wal();
checkpoint;
select * from bak_test;
# Insert a third row
insert into bak_test values(3,now()); -- used later to test recovery to a named restore point
select pg_create_restore_point('my_restore_point');
select pg_switch_wal();
checkpoint;
mydb=# select * from bak_test;
id | time
----+----------------------------
1 | 2024-11-01 17:02:34.125168
2 | 2024-11-04 11:25:06.65344
3 | 2024-11-04 11:25:23.605243
(3 rows)
# Insert a fourth row
insert into bak_test values(4,now()); -- used later to test recovery to a specific transaction
# Get the current transaction ID
select txid_current(); -- returns 741
select pg_switch_wal();
checkpoint;
mydb=# select * from bak_test;
id | time
----+----------------------------
1 | 2024-11-01 17:02:34.125168
2 | 2024-11-04 11:25:06.65344
3 | 2024-11-04 11:25:23.605243
4 | 2024-11-04 11:26:03.3499 # id 4 was inserted in the recorded transaction (741)
(4 rows)
# Insert a fifth row
insert into bak_test values(5,now()); -- used later to test recovery to the end of WAL
select pg_switch_wal();
checkpoint;
select * from bak_test;
Recovery test: stop PostgreSQL and delete the data directory.
Point-in-time recovery
Restoring the data:
- First restore the full backup: extract the base backup into the data directory.
- Extract the WAL archive pg_wal.tar into the pg_wal directory.
20241104]$ tar -xf base.tar -C /data/postgres/data/
20241104]$ tar -xf pg_wal.tar -C /data/postgres/data/pg_wal/
# Create the recovery signal file (remove standby.signal if present)
rm -fr standby.signal
touch recovery.signal
# Edit postgresql.conf and set the target time.
restore_command = 'cp /data/postgres/back_wal_log/%f %p'
# The target time must be slightly later than the commit timestamp of the change you want to keep
# (here later than 11:25:06.65344), otherwise that row is not replayed.
recovery_target_time = '2024-11-04 11:25:07'
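Recovery replays commits up to and including the target time (recovery_target_inclusive defaults to on). Which rows survive can be checked with a plain timestamp comparison — a sketch using the bak_test rows above, treating the inserted timestamp as a stand-in for the commit time:

```python
from datetime import datetime

rows = [
    (1, datetime(2024, 11, 1, 17, 2, 34, 125168)),
    (2, datetime(2024, 11, 4, 11, 25, 6, 653440)),
    (3, datetime(2024, 11, 4, 11, 25, 23, 605243)),
]

def rows_after_pitr(rows, target):
    # Commits at or before recovery_target_time are replayed
    # (recovery_target_inclusive = on, the default).
    return [r for r in rows if r[1] <= target]

# A target just after row 2's commit keeps rows 1 and 2; row 3
# committed later and is discarded by the recovery stop.
print([r[0] for r in rows_after_pitr(rows, datetime(2024, 11, 4, 11, 25, 7))])
# [1, 2]
```

This is why the note above insists the target time be slightly later than the data's commit time: a target of exactly 11:25:06 would already exclude row 2.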
Recovering to a named restore point
# Edit postgresql.conf and set the restore point name.
restore_command = 'cp /data/postgres/archlog/%f %p'
recovery_target_name = 'my_restore_point'
# Create the recovery signal file in the data directory
touch recovery.signal
# Restart the database and verify the data
mydb=# select * from bak_test;
id | time
----+----------------------------
1 | 2024-11-01 17:02:34.125168
2 | 2024-11-04 11:25:06.65344
3 | 2024-11-04 11:25:23.605243
(3 rows)
Recovering to a specific transaction
restore_command = 'cp /data/postgres/archlog/%f %p'
recovery_target_xid = '741'
# Create the recovery signal file in the data directory
touch recovery.signal
# Verify
mydb=# select * from bak_test;
id | time
----+----------------------------
1 | 2024-11-01 17:02:34.125168
2 | 2024-11-04 11:25:06.65344
3 | 2024-11-04 11:25:23.605243
4 | 2024-11-04 11:26:03.3499
(4 rows)
Recovering to the end of the archived WAL (latest timeline)
# Edit postgresql.conf
restore_command = 'cp /data/postgres/archlog/%f %p'
recovery_target_timeline = 'latest'
# Create the recovery signal file in the data directory
touch recovery.signal
# Restart PostgreSQL and verify the data
mydb=# select * from bak_test;
id | time
----+----------------------------
1 | 2024-11-01 17:02:34.125168
2 | 2024-11-04 11:25:06.65344
3 | 2024-11-04 11:25:23.605243
4 | 2024-11-04 11:26:03.3499
5 | 2024-11-04 11:26:57.197616 # the row with id 5 is back
(5 rows)
The third-party backup tool pg_rman
pg_rman is a third-party tool downloaded from GitHub. It supports hot and incremental backups, requires WAL archiving to be enabled, and must run on the same host as PostgreSQL. It needs an initialization step before first use, and its backup directory should not live on the same disk as the PostgreSQL data directory.
https://github.com/ossc-db/pg_rman/releases/tag/V1.3.16
Download the package that matches your PostgreSQL version.
# Dependencies
wget https://download.postgresql.org/pub/repos/yum/15/redhat/rhel-7.9-x86_64/postgresql15-libs-15.8-1PGDG.rhel7.x86_64.rpm
yum localinstall -y postgresql15-libs-15.8-1PGDG.rhel7.x86_64.rpm
# pg_rman
wget https://github.com/ossc-db/pg_rman/releases/download/V1.3.16/pg_rman-1.3.16-1.pg15.rhel7.x86_64.rpm
yum localinstall -y pg_rman-1.3.16-1.pg15.rhel7.x86_64.rpm
# Check the installed files
rpm -ql pg_rman
/usr/pgsql-15/bin/pg_rman
# Initialize pg_rman
# Create the directories
[postgres@my-pgsql ~]$ mkdir -p /data/postgres/pg_rman_backup/{fullbak,archlog,srvlog}
# Configure pg_rman's paths: the backup destination and the log locations
~]# cat /etc/profile.d/pg_rman.sh
export PATH=$PATH:/usr/pgsql-15/bin/
# Path of the PostgreSQL server log files
export SRVLOG_PATH=/data/postgres/data/log
# Path of the archived WAL (loosely analogous to MySQL's binlog); pg_rman reads archives from here during recovery.
export ARCLOG_PATH=/data/postgres/archlog/
# Root directory for pg_rman backups
export BACKUP_PATH=/data/postgres/pg_rman_backup/fullbak
~]# source /etc/profile.d/pg_rman.sh
~]# pg_rman init
INFO: ARCLOG_PATH is set to '/data/postgres/archlog/'
INFO: SRVLOG_PATH is set to '/data/postgres/data/log'
Take a full backup
Every pg_rman backup must be validated (pg_rman validate) before it can be used for a restore.
pg_rman backups have the following features:
- A single command backs up the whole database cluster, including tablespaces.
- Incremental backups and backup compression are supported.
- Backup catalogs can be managed and listed.
- Storage snapshots are supported.
# Take a full backup
pg_rman backup --backup-mode=full --with-serverlog --progress -U postgres -h /tmp/
backup # the backup subcommand
--backup-mode=full # full backup
--with-serverlog # also back up the server logs
--progress # show backup progress
-h # PostgreSQL host name or socket directory
-U # user name to connect as
[postgres@my-pgsql ~]$ pg_rman validate
INFO: validate: "2024-11-05 10:19:29" backup, archive log files and server log files by CRC
INFO: backup "2024-11-05 10:19:29" is valid
[postgres@my-pgsql ~]$ pg_rman show
=====================================================================
StartTime EndTime Mode Size TLI Status
=====================================================================
2024-11-05 10:19:29 2024-11-05 10:19:31 FULL 20MB 3 OK
Take an incremental backup
# Current data
mydb=# select * from bak_test;
id | time
----+----------------------------
1 | 2024-11-01 17:02:34.125168
2 | 2024-11-04 11:25:06.65344
3 | 2024-11-04 11:25:23.605243
4 | 2024-11-04 11:26:03.3499
5 | 2024-11-04 11:26:57.197616
6 | 2024-11-04 13:39:03.510724
7 | 2024-11-04 14:16:04.582568
(7 rows)
# Insert some new rows
mydb=# insert into bak_test values(10,now());
INSERT 0 1
mydb=# insert into bak_test values(11,now());
INSERT 0 1
mydb=# insert into bak_test values(12,now());
INSERT 0 1
mydb=# insert into bak_test values(13,now());
INSERT 0 1
mydb=# select * from bak_test where id >= 10;
id | time
----+----------------------------
10 | 2024-11-05 11:19:55.079292
11 | 2024-11-05 11:19:57.647613
12 | 2024-11-05 11:19:59.919833
13 | 2024-11-05 11:20:02.551359
(4 rows)
# Take an incremental backup
~]$ pg_rman backup --backup-mode=incremental --progress --compress-data -U postgres -h /tmp/
--backup-mode=incremental # incremental backup
--compress-data, -Z # compress the backup data
~]$ pg_rman validate
~]$ pg_rman show
=====================================================================
StartTime EndTime Mode Size TLI Status
=====================================================================
2024-11-05 11:21:57 2024-11-05 11:21:59 INCR 676B 3 OK
2024-11-05 10:19:29 2024-11-05 10:19:31 FULL 20MB 3 OK
Restoring with pg_rman
pg_rman can restore in two ways: overwrite the existing $PGDATA in place, or restore into a new $PGDATA. PostgreSQL must be stopped during the restore.
# Stop the database
systemctl stop postgresql-15
~]$ pg_rman restore --recovery-target-time "2024-11-05 10:19:31" --hard-copy
--recovery-target-time # point in time to recover to; if omitted, recovers to the latest point
--hard-copy # copy the archived WAL from the backup directory straight into the archive directory
INFO: restore complete
HINT: Recovery will start automatically when the PostgreSQL server is started. After the recovery is done, we recommend to remove recovery-related parameters configured by pg_rman.
# Start the database; the data is now as of the full backup
mydb=# select * from bak_test;
id | time
----+----------------------------
1 | 2024-11-01 17:02:34.125168
2 | 2024-11-04 11:25:06.65344
3 | 2024-11-04 11:25:23.605243
4 | 2024-11-04 11:26:03.3499
5 | 2024-11-04 11:26:57.197616
6 | 2024-11-04 13:39:03.510724
7 | 2024-11-04 14:16:04.582568
(7 rows)
# Restore to the time of the incremental backup
systemctl stop postgresql-15
pg_rman restore --recovery-target-time "2024-11-05 11:21:59" --hard-copy
systemctl start postgresql-15
mydb=# select * from bak_test; -- rows with id 10-13 are back
id | time
----+----------------------------
1 | 2024-11-01 17:02:34.125168
2 | 2024-11-04 11:25:06.65344
3 | 2024-11-04 11:25:23.605243
4 | 2024-11-04 11:26:03.3499
5 | 2024-11-04 11:26:57.197616
6 | 2024-11-04 13:39:03.510724
7 | 2024-11-04 14:16:04.582568
10 | 2024-11-05 11:19:55.079292
11 | 2024-11-05 11:19:57.647613
12 | 2024-11-05 11:19:59.919833
13 | 2024-11-05 11:20:02.551359
(11 rows)
Monitoring, diagnosing and tuning the database
1. Benchmarking with pgbench
# Generate test data; this creates tables prefixed pgbench_
pgbench -i -s 5
-i # initialize
-s # scale factor; pgbench_accounts gets 100000 × s rows
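The scale factor sets the size of each generated table: per unit of scale, pgbench creates 100000 accounts, 10 tellers and 1 branch. A quick sizing helper:

```python
def pgbench_rows(scale: int) -> dict:
    """Row counts produced by `pgbench -i -s <scale>`
    (pgbench_history starts out empty)."""
    return {
        "pgbench_accounts": 100_000 * scale,
        "pgbench_tellers": 10 * scale,
        "pgbench_branches": 1 * scale,
    }

print(pgbench_rows(5))
# {'pgbench_accounts': 500000, 'pgbench_tellers': 50, 'pgbench_branches': 5}
```

The "scaling factor: 5" line in the pgbench output below reflects this value; it should be at least as large as the number of clients (-c), since each tpcb-like transaction updates one branch row.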
# Run a simple benchmark
~]$ pgbench
pgbench (15.8)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 5
query mode: simple
number of clients: 1
number of threads: 1
maximum number of tries: 1
number of transactions per client: 10
number of transactions actually processed: 10/10
number of failed transactions: 0 (0.000%)
latency average = 1.537 ms # average latency per transaction
initial connection time = 3.787 ms
tps = 650.787453 (without initial connection time) # tps
# 2 worker threads, 4 clients, 60 transactions per client
[postgres@my-pgsql ~]$ pgbench -r -j2 -c4 -t60
pgbench (15.8)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 5
query mode: simple
number of clients: 4
number of threads: 2
maximum number of tries: 1
number of transactions per client: 60
number of transactions actually processed: 240/240
number of failed transactions: 0 (0.000%)
latency average = 6.375 ms
initial connection time = 25.847 ms
tps = 627.436217 (without initial connection time)
statement latencies in milliseconds and failures:
0.005 0 \set aid random(1, 100000 * :scale)
0.001 0 \set bid random(1, 1 * :scale)
0.001 0 \set tid random(1, 10 * :scale)
0.001 0 \set delta random(-5000, 5000)
0.296 0 BEGIN;
0.929 0 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
0.845 0 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
0.910 0 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
1.224 0 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
0.712 0 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
1.206 0 END;
# Test with the built-in scripts
~]$ pgbench -b
pgbench: option requires an argument -- 'b'
pgbench: hint: Try "pgbench --help" for more information.
[postgres@my-pgsql ~]$ pgbench -b list
Available builtin scripts:
tpcb-like: <builtin: TPC-B (sort of)>
simple-update: <builtin: simple update>
select-only: <builtin: select only>
~]$ pgbench -b simple-update:
pgbench: error: no builtin script found for name "simple-update:"
Available builtin scripts:
tpcb-like: <builtin: TPC-B (sort of)>
simple-update: <builtin: simple update>
select-only: <builtin: select only>
[postgres@my-pgsql ~]$ pgbench -b simple-update
pgbench (15.8)
starting vacuum...end.
transaction type: <builtin: simple update>
scaling factor: 5
query mode: simple
number of clients: 1
number of threads: 1
maximum number of tries: 1
number of transactions per client: 10
number of transactions actually processed: 10/10
number of failed transactions: 0 (0.000%)
latency average = 2.164 ms
initial connection time = 5.652 ms
tps = 462.128564 (without initial connection time)
2. Monitoring and diagnosing with extensions
pg_top
pg_top is a top-like command-line tool: it can show the SQL each backend is currently running, the query plan of a SELECT, the locks a process holds, per-process I/O statistics, and replication information for downstream nodes.
Repository: https://gitlab.com/pg_top/pg_top
tar -xf pg_top-main.tar.gz
# Install dependencies; cmake >= 3.1.0 is required
wget https://cmake.org/files/v3.1/cmake-3.1.0-Linux-x86_64.tar.gz
yum -y install libbsd libbsd-devel libmd libmd-devel
cmake -DCMAKE_INSTALL_PREFIX=/usr/local/pg_top CMakeLists.txt
make install
echo 'export PATH=$PATH:/usr/local/pg_top/bin' > /etc/profile.d/pgtop.sh
source /etc/profile.d/pgtop.sh
last pid: 38255; load avg: 0.00, 0.02, 0.05; up 4+04:45:21 15:14:47
7 processes: 6 other background task(s), 1 active
CPU states: 0.2% user, 0.0% nice, 0.2% system, 99.6% idle, 0.0% iowait
Memory: 2829M used, 961M free, 0K shared, 100K buffers, 2513M cached
DB activity: 0 tps, 0 rollbs/s, 0 buffer r/s, 100 hit%, 100 row r/s, 0 row w/s
DB I/O: 0 reads/s, 0 KB/s, 0 writes/s, 1 KB/s
Swap: 0K used, 0K free, 0K cached, 0K in, 0K out
PID USERNAME SIZE RES STATE XTIME QTIME %CPU LOCKS COMMAND
38256 postgres 273M 7332K active 0:00 0:00 0.4 8 postgres: postgres postgres [local] idle
36906 postgres 271M 2064K 0:00 0:00 0.0 0 postgres: logical replication launcher
36676 270M 23M 0:00 0:00 0.0 0 postgres: checkpointer
36677 270M 2392K 0:00 0:00 0.0 0 postgres: background writer
36904 271M 2048K 0:00 0:00 0.0 0 postgres: autovacuum launcher
36903 270M 5304K 0:00 0:00 0.0 0 postgres: walwriter
36905 270M 1276K 0:00 0:00 0.0 0 postgres: archiver last was 000000040000000000000015
# Common options
-X # show per-process I/O information
-R # show replication information
-Z # monitor only a specific user
-o # sort by an output column
-x # write the output to a text file
The pg_stat_statements extension and configuration
Tracks the SQL statements executed in the database — useful for finding slow queries, for example.
pg_stat_monitor for query performance monitoring
This extension gives a more complete view of queries from the performance, application and analysis angles, simplifying query observability. It aggregates data into configurable time buckets, providing aggregate statistics, client information, plan details and histograms. The buckets let pg_stat_monitor capture load and performance information over short windows, so performance problems can be pinned down by time and workload.
https://github.com/percona/pg_stat_monitor
The two tools are used in much the same way; pg_stat_monitor supports a broader feature set.
Monitoring slow queries with auto_explain
Enabling any extension carries a performance cost, so weigh it against your actual needs. auto_explain logs the execution plans of slow statements.
pg_profile for database performance reports
When the database shows performance problems, the whole instance or cluster needs to be analyzed, including I/O, CPU and memory. pg_profile is a diagnostic tool built on PostgreSQL's standard statistics views.
With pg_profile you can take snapshots of the database at set intervals and generate HTML reports that interpret the statistics between snapshots, which helps analyze and diagnose performance problems.
https://github.com/zubkov-andrei/pg_profile
3. Partitioning in PostgreSQL
Partitioned tables are one way to address the performance problems of oversized single tables: the larger a table, the more a full scan costs, which leads to slow queries. As a rule of thumb, consider partitioning once a single table grows larger than memory.
PostgreSQL supports three partitioning strategies:
- range partitioning
- list partitioning
- hash partitioning
Range partitioning
Range partitioning divides a table into non-overlapping ranges of one partition key (or a set of keys). The key values are continuous; when a row is inserted, it is routed to the partition whose range matches its key. Typical examples are partitioning by date range or by ranges of an object identifier.
-- Create the table
create table sales(
prod_id int not null,
cust_id int not null,
sales_date date not null
) partition by range (sales_date);
-- Create the partitions
create table sales_y2024m01 partition of sales for values from ('2024-01-01') to ('2024-02-01');
create table sales_y2024m02 partition of sales for values from ('2024-02-01') to ('2024-03-01');
create table sales_y2024m03 partition of sales for values from ('2024-03-01') to ('2024-04-01');
-- Describe the table
mydb=# \d+ sales;
Partitioned table "public.sales"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
------------+---------+-----------+----------+---------+---------+-------------+--------------+-------------
prod_id | integer | | not null | | plain | | |
cust_id | integer | | not null | | plain | | |
sales_date | date | | not null | | plain | | |
Partition key: RANGE (sales_date)
Partitions: sales_y2024m01 FOR VALUES FROM ('2024-01-01') TO ('2024-02-01'),
sales_y2024m02 FOR VALUES FROM ('2024-02-01') TO ('2024-03-01'),
sales_y2024m03 FOR VALUES FROM ('2024-03-01') TO ('2024-04-01')
-- Insert some rows
insert into sales values(1,100,'2024-01-12');
insert into sales values(2,200,'2024-02-10');
insert into sales values(3,300,'2024-03-16');
-- Check the execution plan
mydb=# explain select * from sales where sales_date = '2024-03-10';
QUERY PLAN
-----------------------------------------------------------------------
Seq Scan on sales_y2024m03 sales (cost=0.00..35.50 rows=10 width=12) # only the sales_y2024m03 partition is scanned
Filter: (sales_date = '2024-03-10'::date)
(2 rows)
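Range bounds are lower-inclusive and upper-exclusive ([from, to)), which is why adjacent partitions above can share a boundary date without overlapping. A sketch of the routing rule, mirroring those partitions:

```python
from datetime import date

# Bounds mirror the DDL above: lower bound inclusive, upper bound exclusive.
PARTITIONS = [
    ("sales_y2024m01", date(2024, 1, 1), date(2024, 2, 1)),
    ("sales_y2024m02", date(2024, 2, 1), date(2024, 3, 1)),
    ("sales_y2024m03", date(2024, 3, 1), date(2024, 4, 1)),
]

def route(sales_date: date) -> str:
    """Return the partition that would receive a row with this key;
    PostgreSQL raises an error if no partition's range matches."""
    for name, lo, hi in PARTITIONS:
        if lo <= sales_date < hi:
            return name
    raise ValueError(f"no partition for {sales_date}")

print(route(date(2024, 3, 10)))
# sales_y2024m03
```

Note the boundary behavior: 2024-02-01 lands in sales_y2024m02, not sales_y2024m01, because the upper bound is exclusive.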
List partitioning
List partitioning divides a table by specific values: each partition explicitly lists the key values it accepts, so the partition key values are discrete.
CREATE TABLE students (
student_id SERIAL,
name VARCHAR(100) NOT NULL,
gender CHAR(1) CHECK (gender IN ('男', '女')),
grade VARCHAR(10) NOT NULL,
grade_level CHAR(1) CHECK (grade_level IN ('A', 'B', 'C', 'D')),
PRIMARY KEY (student_id, grade_level) -- grade_level added to the primary key (the partition key must be part of it)
) PARTITION BY LIST (grade_level);
CREATE TABLE students_grade_level_pA PARTITION OF students FOR VALUES IN ('A');
CREATE TABLE students_grade_level_pB PARTITION OF students FOR VALUES IN ('B');
CREATE TABLE students_grade_level_pC PARTITION OF students FOR VALUES IN ('C');
CREATE TABLE students_grade_level_pD PARTITION OF students FOR VALUES IN ('D');
mydb=# insert into students values(001,'job','男',98,'A');
INSERT 0 1
mydb=# insert into students values(002,'tom','男',96,'A');
INSERT 0 1
mydb=# insert into students values(003,'jerry','男',80,'B');
INSERT 0 1
mydb=# insert into students values(004,'baker','男',85,'B');
INSERT 0 1
mydb=# insert into students values(005,'limeimei','女',57,'D');
# The row comes from partition D
mydb=# explain select * from students where grade_level = 'D';
QUERY PLAN
-----------------------------------------------------------------------------------
Seq Scan on students_grade_level_pd students (cost=0.00..13.25 rows=1 width=276)
Filter: (grade_level = 'D'::bpchar)
(2 rows)
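List routing is a direct value lookup; a sketch mirroring the student partitions above:

```python
# Each partition explicitly lists the key values it accepts,
# matching the FOR VALUES IN (...) clauses above.
GRADE_PARTITIONS = {
    "A": "students_grade_level_pA",
    "B": "students_grade_level_pB",
    "C": "students_grade_level_pC",
    "D": "students_grade_level_pD",
}

def route_list(grade_level: str) -> str:
    """Return the partition holding this key; PostgreSQL rejects a
    row whose key appears in no partition's value list."""
    try:
        return GRADE_PARTITIONS[grade_level]
    except KeyError:
        raise ValueError(f"no partition holds {grade_level!r}")

print(route_list("D"))
# students_grade_level_pD
```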
Hash partitioning
Hash partitioning assigns partitions based on the hash of the partition key; rows whose keys hash to the same remainder are stored in the same partition.
create table emp_hash_by_job(
id int,
job varchar(20),
deptno varchar(20))
partition by hash(id);
-- Partitions are assigned by modulus and remainder. Note: with modulus 4, all four remainders (0-3) need a partition; an insert that hashes to the missing remainder 3 will fail.
create table hash_p1 partition of emp_hash_by_job for values with (modulus 4, remainder 0);
create table hash_p2 partition of emp_hash_by_job for values with (modulus 4, remainder 1);
create table hash_p3 partition of emp_hash_by_job for values with (modulus 4, remainder 2);
-- List the partitions
mydb=# \d+ emp_hash_by_job;
Partitioned table "public.emp_hash_by_job"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-----------------------+-----------+----------+---------+----------+-------------+--------------+-------------
id | integer | | | | plain | | |
job | character varying(20) | | | | extended | | |
deptno | character varying(20) | | | | extended | | |
Partition key: HASH (id)
Partitions: hash_p1 FOR VALUES WITH (modulus 4, remainder 0),
hash_p2 FOR VALUES WITH (modulus 4, remainder 1),
hash_p3 FOR VALUES WITH (modulus 4, remainder 2)
# Insert some rows
insert into emp_hash_by_job values(1,'wash',001);
insert into emp_hash_by_job values(2,'cut',002);
insert into emp_hash_by_job values(3,'blow',003);
# Check the execution plan
explain select * from emp_hash_by_job where id=1;
# The plan shows which partition the row comes from
mydb=# explain select * from emp_hash_by_job where id=3;
QUERY PLAN
--------------------------------------------------------------------------
Seq Scan on hash_p2 emp_hash_by_job (cost=0.00..16.88 rows=3 width=120)
Filter: (id = 3)
(2 rows)
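The modulus/remainder rule can be sketched as follows. PostgreSQL first hashes the key with its own internal hash function; the identity "hash" used here is a stand-in purely to make the remainder arithmetic visible, so real rows will not land in the same partitions as this sketch predicts:

```python
def route_by_hash(key: int, modulus: int = 4):
    """Illustrative stand-in for hash-partition routing: PostgreSQL
    computes remainder = hash(key) % modulus and stores the row in
    the partition declared WITH (modulus, remainder)."""
    h = key  # stand-in for PostgreSQL's internal hash function
    remainder = h % modulus
    # hash_p1 was declared with remainder 0, hash_p2 with 1, ...
    return "hash_p%d" % (remainder + 1), remainder

print(route_by_hash(5))
# ('hash_p2', 1)
```

Because routing depends only on the hash, hash partitioning spreads rows evenly but cannot support partition pruning for range predicates — only equality lookups, as in the EXPLAIN above.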
Tuning PostgreSQL
1. Tune the hardware: CPU, memory, disk, network.
2. Tune the operating system.
3. Set up primary/standby replication.
4. Use a distributed cluster.
5. Tune PostgreSQL's configuration parameters.
# Maximum connections; the book suggests up to 10000 in production
mydb=# show max_connections;
# Shared data buffer; set to about 25% of physical RAM
mydb=# show shared_buffers;
# Planner's estimate of the memory available for disk caching; usually set generously, which can improve plans.
mydb=# show effective_cache_size;
# Memory for maintenance tasks; raising it noticeably speeds up ALTER TABLE, vacuuming and index creation
mydb=# show maintenance_work_mem;
# Controls how fast checkpoints write out data.
# The closer checkpoint_completion_target is to 1, the more the checkpoint's writes are spread across the checkpoint interval, smoothing disk I/O.
mydb=# show checkpoint_completion_target; -- 0.9
mydb=# show checkpoint_timeout; -- 5min
# Default per-column statistics target; raising it increases ANALYZE time
mydb=# show default_statistics_target;
# Biases the planner between index scans and sequential scans
mydb=# show random_page_cost;
Lowering this value makes the planner favor index scans.
# Effective disk I/O concurrency; raising it can improve I/O throughput
mydb=# show effective_io_concurrency;
# Maximum memory per sort/hash operation before spilling to disk files
mydb=# show work_mem;
# WAL buffer: shared memory used for WAL. A modest increase can help write performance on a busy database; very large values bring no extra benefit.
mydb=# show wal_buffers;
# Minimum size WAL is shrunk down to
mydb=# show min_wal_size;
# Soft maximum size of WAL
mydb=# show max_wal_size;
# Maximum background worker processes; can only be set at server start.
mydb=# show max_worker_processes;
# Limit on the WAL retained by replication slots
mydb=# show max_slot_wal_keep_size;
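The memory rules of thumb above can be wrapped into a quick calculator. The 25% and 75% ratios follow the notes and common guidance; they are starting points for tuning, not hard rules:

```python
def suggest_memory_settings(ram_gb: int) -> dict:
    """Starting-point values from rules of thumb:
    shared_buffers ~ 25% of RAM; effective_cache_size ~ 75% of RAM
    (the latter is only a planner hint, not an allocation)."""
    return {
        "shared_buffers": f"{max(1, ram_gb // 4)}GB",
        "effective_cache_size": f"{max(1, ram_gb * 3 // 4)}GB",
    }

print(suggest_memory_settings(16))
# {'shared_buffers': '4GB', 'effective_cache_size': '12GB'}
```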
PostgreSQL's statistics views
mydb=# select relname from pg_class where relname like 'pg_stat_%';
pg_stat_database per-database statistics for the whole cluster
pg_stat_user_tables statistics for user-created tables
pg_stat_user_indexes statistics for user-created indexes
pg_statio_user_indexes I/O statistics for user indexes
pg_stat_bgwriter background writer activity
High-availability architecture for PostgreSQL
A reference configuration file
postgresql.conf
# Listen on all addresses
listen_addresses = '0.0.0.0'
# Maximum connections; on a standby this must be >= the primary's value
max_connections = 300
restore_command = 'cp /data/postgresql-12/archive/%f %p'
# archive_cleanup_command = '/usr/local/postgresql-12/bin/pg_archivecleanup -d /data/postgresql-12/data/pg_wal %r && /usr/local/postgresql-12/bin/pg_archivecleanup -d /data/postgresql-12/archive %r >> /data/postgresql-12/log/archive_cleanup.log 2>&1'
# Since 9.6 the separate hot_standby WAL level is gone (merged into replica)
wal_level = replica
# Up to 16 streaming replication connections.
max_wal_senders = 16
# Set larger than on the primary; 2x the primary's value is reasonable
wal_keep_segments = 512
max_logical_replication_workers = 10
autovacuum_max_workers = 2
# Keep the same value as on the primary
max_worker_processes = 16
# The standby not only replays WAL but can also serve read-only queries
hot_standby = on
# Maximum delay before WAL apply cancels conflicting queries on the standby
max_standby_streaming_delay = 30s
# How often the standby reports its status to the primary
wal_receiver_status_interval = 10s
# Send feedback to the primary so vacuum does not remove rows still needed by standby queries
hot_standby_feedback = on
# Logging
log_destination = 'stderr'
logging_collector = on
log_directory = '/data/postgresql-12/log'
log_filename = 'postgresql-%w.log'
log_file_mode = 0600
log_truncate_on_rotation = on
log_rotation_age = 1d
log_rotation_size = 1GB
log_min_messages = error
# Statements running longer than 300ms are written to the PostgreSQL log
log_min_duration_statement = 300
log_checkpoints = on
log_connections = on
log_disconnections = on
log_error_verbosity = verbose
log_hostname = on
log_line_prefix = '%m [%p] '
log_lock_waits = on
log_statement = 'ddl'
Primary/standby replication
Replication slots
If a standby is down for a long time, it may fail to reconnect to the primary after restart because the WAL it needs is gone. Replication slots (enabled via max_replication_slots) solve this.
With slots, the primary keeps each WAL file until every standby's slot has confirmed receipt of that segment; only then does the primary remove the file.
Create one slot per standby on the primary. Here there are two standbys, so two slots:
cndba=# select * from pg_create_physical_replication_slot('pgsql_db2');
-[ RECORD 1 ]--------
slot_name | pgsql_db2
lsn |
cndba=# select * from pg_create_physical_replication_slot('pgsql_db3');
-[ RECORD 1 ]--------
slot_name | pgsql_db3
lsn |
List the slots:
postgres=# select * from pg_replication_slots;
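pg_replication_slots and pg_stat_replication report positions as pg_lsn values like 0/2F0005B8: the part before the slash is the high 32 bits and the part after is the low 32 bits of the WAL byte position. Replication lag in bytes is then a simple subtraction — the same arithmetic pg_wal_lsn_diff() does server-side (the example LSN values are illustrative):

```python
def lsn_to_bytes(lsn: str) -> int:
    """Convert a pg_lsn string 'X/Y' (two hex fields) to an
    absolute WAL byte position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) + int(lo, 16)

def lag_bytes(sent_lsn: str, replay_lsn: str) -> int:
    """Bytes of WAL the standby still has to replay."""
    return lsn_to_bytes(sent_lsn) - lsn_to_bytes(replay_lsn)

print(lag_bytes("0/2F0005B8", "0/2F000000"))
# 1464
```

Comparing a slot's restart_lsn against the current WAL position this way shows how much WAL the slot is forcing the primary to retain, which is what max_slot_wal_keep_size caps.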
Hostname | IP
---|---
master | 192.168.21.92
slave | 192.168.21.178
# Set up /etc/hosts
~]# cat /etc/hosts
192.168.21.92 master
192.168.21.178 slave
Primary configuration
# Create the streaming replication user
postgres=# create user replicator replication password '126.comM';
# Edit the host-based authentication file
~]$ vi /data/postgres/data/pg_hba.conf
host replication replicator 192.168.21.178/32 scram-sha-256
systemctl restart postgresql-15
# The WAL sender process on the primary
ps -ef | grep sender
# Check replication status
select pid,state,client_addr,sync_priority,sync_state from pg_stat_replication;
Standby configuration
# On the standby, take a base backup of the primary
pg_basebackup -h master -D ./data/ -U replicator -P -v -R -X stream
-C -S slot1 # optionally also create a replication slot named slot1
# After the backup completes, the backup directory contains:
standby.signal # marks the instance as a standby
postgresql.auto.conf # holds the primary's connection info
master-bak]$ cat postgresql.auto.conf
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
primary_conninfo = 'user=postgres passfile=''/home/postgres/.pgpass'' channel_binding=disable port=5432 sslmode=disable sslcompression=0 sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=disable krbsrvname=postgres target_session_attrs=any'
primary_conninfo = 'user=replicator password=126.comM channel_binding=disable host=master port=5432 sslmode=disable sslcompression=0 sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=disable krbsrvname=postgres target_session_attrs=any'
primary_slot_name = 'slot1'
# Put the backup under the $PGDATA directory
[postgres@slave postgres]$ ll $PGDATA/
total 252
-rw------- 1 postgres postgres 227 Nov 6 16:32 backup_label
-rw-rw-r-- 1 postgres postgres 234 Nov 6 16:32 backup_label.old
-rw------- 1 postgres postgres 185196 Nov 6 16:32 backup_manifest
drwx------ 6 postgres postgres 46 Nov 6 16:32 base
drwxr-xr-x 2 postgres postgres 6 Nov 6 16:32 bin
drwx------ 2 postgres postgres 4096 Nov 6 16:32 global
drwx------ 2 postgres postgres 6 Nov 6 16:32 pg_commit_ts
drwx------ 2 postgres postgres 6 Nov 6 16:32 pg_dynshmem
-rw------- 1 postgres postgres 4805 Nov 6 16:32 pg_hba.conf
-rw------- 1 postgres postgres 1636 Nov 6 16:32 pg_ident.conf
drwx------ 4 postgres postgres 68 Nov 6 16:32 pg_logical
drwx------ 4 postgres postgres 36 Nov 6 16:32 pg_multixact
drwx------ 2 postgres postgres 6 Nov 6 16:32 pg_notify
drwx------ 2 postgres postgres 6 Nov 6 16:32 pg_replslot
-rw-rw-r-- 1 postgres postgres 155 Nov 6 16:32 pg_rman_recovery.conf
drwx------ 2 postgres postgres 6 Nov 6 16:32 pg_serial
drwx------ 2 postgres postgres 6 Nov 6 16:32 pg_snapshots
drwx------ 2 postgres postgres 6 Nov 6 16:32 pg_stat
drwx------ 2 postgres postgres 6 Nov 6 16:32 pg_stat_tmp
drwx------ 2 postgres postgres 6 Nov 6 16:32 pg_subtrans
drwx------ 2 postgres postgres 6 Nov 6 16:32 pg_tblspc
drwx------ 2 postgres postgres 6 Nov 6 16:32 pg_twophase
-rw------- 1 postgres postgres 3 Nov 6 16:32 PG_VERSION
drwx------ 3 postgres postgres 84 Nov 6 16:32 pg_wal
drwx------ 2 postgres postgres 18 Nov 6 16:32 pg_xact
-rw------- 1 postgres postgres 600 Nov 6 16:32 postgresql.auto.conf
-rw-rw-r-- 1 postgres postgres 29588 Nov 6 16:32 postgresql.conf
drwxr-xr-x 3 postgres postgres 17 Nov 6 16:32 share
-rw------- 1 postgres postgres 0 Nov 6 16:32 standby.signal
-rw------- 1 postgres postgres 0 Nov 6 16:32 tablespace_map.old
# Start the standby
systemctl start postgresql-15
postgres=# select pg_is_in_recovery();
pg_is_in_recovery
-------------------
t
(1 row)
Check whether the instance is in recovery: 't' means it is a standby, 'f' means it is a primary (or replication has broken and you may have two primaries).
# Check WAL receive status
mydb=# select * from pg_stat_wal_receiver;
# The WAL receiver process on the standby
ps -ef | grep recei
# Check the cluster state
Primary:
-bash-4.2$ pg_controldata | grep "Database cluster state:"
Database cluster state: in production
Standby:
-bash-4.2$ pg_controldata | grep "Database cluster state:"
Database cluster state: in archive recovery
-bash-4.2$
Manual switchover from standby to primary
# 1. Stop the old primary
[root@master data]# systemctl stop postgresql-15
# 2. Promote the standby to read-write
[postgres@slave data]$ pg_ctl promote
waiting for server to promote.... done
server promoted
-- after promotion, the standby's WAL-receiving process is gone.
# After promotion there is no longer a 'startup recovering' process; a 'postgres: walwriter' process appears instead.
# Rejoin the failed primary as a standby of the new primary
# A few points to note:
pg_hba.conf entry
host replication replicator 192.168.21.0/24 scram-sha-256
# Create the standby.signal file
# 1. On the new standby:
touch standby.signal
data]# cat postgresql.auto.conf # append at the end
primary_conninfo = 'user=replicator password=126.comM host=slave port=5432'
# Start the new standby
systemctl start postgresql-15
[postgres@master postgres]$ ps -ef | grep postgres
root 6458 6438 0 15:03 pts/0 00:00:00 su - postgres
postgres 6459 6458 0 15:03 pts/0 00:00:00 -bash
postgres 7695 1 0 16:06 ? 00:00:00 /opt/postgresql/bin/postgres -D /data/postgres/data
postgres 7696 7695 0 16:06 ? 00:00:00 postgres: checkpointer
postgres 7697 7695 0 16:06 ? 00:00:00 postgres: background writer
postgres 7698 7695 0 16:06 ? 00:00:00 postgres: startup recovering 00000007000000000000002F
postgres 7700 7695 0 16:06 ? 00:00:00 postgres: walreceiver streaming 0/2F0005B8
postgres 7715 6459 0 16:15 pts/0 00:00:00 ps -ef
postgres 7716 6459 0 16:15 pts/0 00:00:00 grep --color=auto postgres
[postgres@master postgres]$ pg_controldata | grep state
Database cluster state: in archive recovery # the standby is in read-only recovery
From: https://www.cnblogs.com/gshelldon/p/18555076