PostgreSQL high CPU usage analysis: finding the most IO-intensive SQL, the longest-running SQL, and blocked sessions (reposted from https://blog.csdn.net/sunny_day_day)


Posted: 2023-11-08 15:33:49

View the SQL currently being run by active DB sessions (running for more than 10 seconds):

SELECT now() - query_start as "runtime", usename,application_name, client_hostname, datname,  state, query
	    FROM  pg_stat_activity
	    WHERE now() - query_start > '10 seconds'::interval
	       and state!='idle'
	   ORDER BY runtime DESC;

List queries by total_time to see which query spends the most time in the database (note: on PostgreSQL 13 and later, pg_stat_statements reports total_exec_time instead of total_time):

SELECT round(total_time*1000)/1000 AS total_time,query
	FROM pg_stat_statements
	ORDER BY total_time DESC limit 5;

Check which tables have not been vacuumed or analyzed recently:

SELECT relname, n_live_tup, n_dead_tup, trunc(100*n_dead_tup/(n_live_tup+1))::float "ratio%",
	to_char(last_autovacuum, 'YYYY-MM-DD HH24:MI:SS') as autovacuum_date,
	to_char(last_autoanalyze, 'YYYY-MM-DD HH24:MI:SS') as autoanalyze_date
	FROM pg_stat_all_tables
	ORDER BY last_autovacuum;

Check whether any sessions are blocked:

select pid,
	usename,
	pg_blocking_pids(pid) as blocked_by,
	query as blocked_query
	from pg_stat_activity
	where cardinality(pg_blocking_pids(pid)) > 0;
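If a blocker has to be cleared manually, the blocking pid reported above can be cancelled or, failing that, terminated. A minimal sketch (pid 12345 is a placeholder; pg_terminate_backend drops the whole connection, so use it with care):

```sql
-- Try to cancel only the current query of the blocking backend:
SELECT pg_cancel_backend(12345);

-- If the backend cannot be cancelled (e.g. it is idle in transaction),
-- terminate the whole connection:
SELECT pg_terminate_backend(12345);
```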

It is also recommended to create the following monitoring views:

pg_stat_tables

CREATE OR REPLACE VIEW pg_stat_tables
	AS
	WITH s AS (
	SELECT *, cast((n_tup_ins + n_tup_upd + n_tup_del) AS numeric) AS total
	       FROM pg_stat_user_tables
	)
	SELECT s.schemaname,       s.relname,       s.relid,
	       s.seq_scan,         s.idx_scan,
	       CASE WHEN s.seq_scan + s.idx_scan = 0 THEN 'NaN'::double precision
	            ELSE round(100 * s.idx_scan::numeric/(s.seq_scan+s.idx_scan),2)  END AS idx_scan_ratio,

	       s.seq_tup_read,       s.idx_tup_fetch,

	       sio.heap_blks_read,       sio.heap_blks_hit,
	       CASE WHEN sio.heap_blks_read = 0 THEN 0.00
	            ELSE round(100*sio.heap_blks_hit::numeric/(sio.heap_blks_read+sio.heap_blks_hit),2)  END AS hit_ratio,

	       n_tup_ins,       n_tup_upd,       n_tup_del,
	       CASE WHEN s.total = 0 THEN 0.00
	            ELSE round((100*cast(s.n_tup_ins AS numeric)/s.total) ,2) END AS ins_ratio,
	       CASE WHEN s.total = 0 THEN 0.00
	            ELSE round((100*cast(s.n_tup_upd AS numeric)/s.total) ,2) END AS upd_ratio,
	       CASE WHEN s.total = 0 THEN 0.00
	            ELSE round((100*cast(s.n_tup_del AS numeric)/s.total) ,2) END AS del_ratio,

	       s.n_tup_hot_upd,
	       CASE WHEN s.n_tup_upd = 0 THEN 'NaN'::double precision
	            ELSE round(100*s.n_tup_hot_upd::numeric/s.n_tup_upd, 2) END AS hot_upd_ratio,

	       pg_size_pretty(pg_relation_size(sio.relid)) AS "table_size",
	       pg_size_pretty(pg_total_relation_size(sio.relid)) AS "total_size",

	       s.last_vacuum,       s.last_autovacuum,
	       s.vacuum_count,      s.autovacuum_count,
	       s.last_analyze,      s.last_autoanalyze,
	       s.analyze_count,     s.autoanalyze_count
	FROM s, pg_statio_user_tables AS sio WHERE s.relid = sio.relid ORDER BY relname;

pg_stat_indexes

CREATE OR REPLACE VIEW pg_stat_indexes
	AS
	SELECT s.schemaname,       s.relname,       s.indexrelname,       s.relid,
	       s.idx_scan,       s.idx_tup_read,       s.idx_tup_fetch,
	       sio.idx_blks_read,       sio.idx_blks_hit,
	       CASE WHEN sio.idx_blks_read  + sio.idx_blks_hit = 0 THEN 'NaN'::double precision
	       ELSE round(100 * sio.idx_blks_hit::numeric/(sio.idx_blks_read + sio.idx_blks_hit), 2) END AS idx_hit_ratio,
	       pg_size_pretty(pg_relation_size(s.indexrelid)) AS "index_size"
	FROM pg_stat_user_indexes AS s, pg_statio_user_indexes AS sio
	WHERE s.indexrelid = sio.indexrelid ORDER BY relname;

pg_stat_users

CREATE OR REPLACE VIEW pg_stat_users
	AS
	SELECT datname,       usename,       pid,       backend_start, 
	       (current_timestamp - backend_start)::interval(3) AS "login_time"
	FROM pg_stat_activity;
pg_stat_queries

CREATE OR REPLACE VIEW pg_stat_queries
	AS
	SELECT datname,       usename,       pid,
	       (current_timestamp - xact_start)::interval(3) AS duration,
	       wait_event_type,       wait_event,       query  -- "waiting" was removed in PostgreSQL 9.6
	FROM pg_stat_activity WHERE pid != pg_backend_pid();

pg_stat_long_trx

CREATE OR REPLACE VIEW pg_stat_long_trx 
	AS
	SELECT pid,
	    wait_event_type,        wait_event,  -- "waiting" was removed in PostgreSQL 9.6
	    (current_timestamp - xact_start)::interval(3) AS duration, query
	FROM pg_stat_activity
	WHERE pid <> pg_backend_pid();

pg_stat_waiting_locks

CREATE OR REPLACE VIEW pg_stat_waiting_locks
	AS
	SELECT l.locktype,       c.relname,       l.pid,       l.mode,
	       substring(a.query, 1, 6) AS query,
	       (current_timestamp - xact_start)::interval(3) AS duration
	FROM pg_locks AS l
	  LEFT OUTER JOIN pg_stat_activity AS a ON l.pid = a.pid
	  LEFT OUTER JOIN pg_class AS c ON l.relation = c.oid 
	WHERE  NOT l.granted ORDER BY l.pid;
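Once created, these views can be queried like any other relation. A sketch, assuming the pg_stat_tables and pg_stat_waiting_locks views above exist: large tables that are mostly sequentially scanned are good candidates for index tuning:

```sql
-- Tables ranked by sequential scans, with their index-scan ratio and size:
SELECT relname, seq_scan, idx_scan, idx_scan_ratio, table_size
FROM pg_stat_tables
WHERE seq_scan + idx_scan > 0
ORDER BY seq_scan DESC
LIMIT 10;

-- Sessions currently waiting on an ungranted lock:
SELECT * FROM pg_stat_waiting_locks;
```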

To enable pg_stat_statements, modify your existing custom parameter group (or postgresql.conf) and set the following values:

shared_preload_libraries = pg_stat_statements
track_activity_query_size = 2048
pg_stat_statements.track = ALL
pg_stat_statements.max = 10000

Apply the changes immediately, then restart the database instance. Then run a command similar to the following on the database you want to monitor:

Note: the following example installs the extension in the "demo" database.

demo=> select current_database();
current_database
------------------
demo
(1 row)
demo=> CREATE EXTENSION pg_stat_statements;

After pg_stat_statements is set up, you can monitor its output using one of the following methods:

List queries by total_time and see which query spends the most time in the database:

SELECT round(total_time*1000)/1000 AS total_time,query
FROM pg_stat_statements
ORDER BY total_time DESC limit 2;

List queries with their call counts, total time, rows returned, and shared-buffer hit percentage:

SELECT query, calls, total_time, rows, 100.0 * shared_blks_hit /
nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent
FROM pg_stat_statements ORDER BY total_time DESC LIMIT 5;

List per-execution averages, so queries can be sampled over time:

SELECT query, calls, total_time/calls as avg_time_ms, rows/calls as avg_rows,
temp_blks_read/calls as avg_tmp_read, temp_blks_written/calls as avg_temp_written
FROM pg_stat_statements
WHERE calls != 0
ORDER BY total_time DESC LIMIT 5;

After you identify the queries causing high CPU usage, you can use the following methods to reduce it further:

If a query can be tuned, use EXPLAIN and EXPLAIN ANALYZE to identify problems in its plan. For more information, see the PostgreSQL documentation on EXPLAIN.
If the same queries run repeatedly, use prepared statements to reduce the CPU cost of repeated planning.
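The two suggestions above can be sketched as follows (the table tbl and its predicate are placeholders):

```sql
-- EXPLAIN shows the plan; ANALYZE actually executes the query
-- and reports the real time and row counts for each plan node:
EXPLAIN ANALYZE SELECT * FROM tbl WHERE id = 42;

-- Prepared statement: plan once, execute many times with different parameters:
PREPARE get_row (int) AS SELECT * FROM tbl WHERE id = $1;
EXECUTE get_row(42);
EXECUTE get_row(43);
DEALLOCATE get_row;
```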

From: https://www.cnblogs.com/zclzc/p/17817507.html
