
Reason=Low socket*core*thread count, Low CPUs [slurm@2021-09-15T15:18:53]



Submit a job:

# srun hostname
srun: Required node not available (down, drained or reserved)
srun: job 58 queued and waiting for resources
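
As a quick side check (not in the original post), sinfo's -R option lists every down or drained node along with the recorded reason, user, and timestamp:

sinfo -R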

Check the job status:

squeue
JOBID PARTITION     NAME USER ST  TIME NODES NODELIST(REASON)
   58   compute hostname root PD  0:00     1 (Nodes required for job are DOWN, DRAINED or reserved for jobs in higher priority partitions)
That is: the nodes required by the job are down, drained, or reserved for jobs in a higher-priority partition.
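
To watch only this job, squeue can filter by job ID; for pending jobs, --start additionally reports the expected start time (standard squeue options):

squeue -j 58 -l
squeue -j 58 --start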

View detailed job information:
scontrol show jobs
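
To limit the output to the stuck job, pass its ID; the JobState and Reason fields repeat why it is still pending:

scontrol show job 58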

Show partition or node status:

sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
control      up   infinite      1 drain* m1
compute*     up   infinite      1  drain c1

A * after a partition name marks the default partition; a * after a node's state means the node is not responding.
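
For a per-node view that also shows the sockets:cores:threads (S:C:T) layout and the drain reason, use the long node-oriented listing:

sinfo -N -l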

Cancel a job:
Use the scancel command with the job ID to cancel it:

scancel 58
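
If several test jobs have piled up, scancel can also filter by user and state instead of taking a single job ID (standard scancel options):

scancel --user=root --state=PENDING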

Change the node state:

scontrol update NodeName=m1 State=idle
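
State=resume clears a drain flag in the same way, and when you drain a node yourself scontrol requires a reason (standard scontrol usage; the reason text here is only an example):

scontrol update NodeName=m1 State=resume
scontrol update NodeName=c1 State=drain Reason="maintenance"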

Check the logs:

/var/log/slurmctld.log

error: Nodes m1 not responding
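
"Not responding" usually means slurmd on the node has died or is unreachable, so also check the daemon on m1 itself (assuming a systemd install; SlurmdLogFile may point elsewhere on your system):

systemctl status slurmd
tail /var/log/slurmd.log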

Check the node state:

scontrol show node
The compute node's state shows: Reason=Low socket*core*thread count, Low CPUs [slurm@2021-09-15T15:18:53]
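
This reason means slurmd detected fewer sockets, cores, threads, or CPUs on the node than slurm.conf declares for it. Running slurmd in configuration-test mode on the node prints the hardware it actually sees, already in slurm.conf syntax, so you can copy the values:

slurmd -C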

A bit of troubleshooting pointed to the configuration:

vim /etc/slurm/slurm.conf

Just lower CPUs=1 CoresPerSocket=1 ThreadsPerCore=1 RealMemory=900 Procs=1 so the values do not exceed the node's actual hardware, and set them according to your own server's resources (Procs is an older alias for CPUs):

NodeName=m1 NodeAddr=192.168.8.150  CPUs=1 CoresPerSocket=1 ThreadsPerCore=1 RealMemory=900 Procs=1 State=UNKNOWN
NodeName=c1 NodeAddr=192.168.8.145 CPUs=1 CoresPerSocket=1 ThreadsPerCore=1 RealMemory=900 Procs=1 State=UNKNOWN
PartitionName=control Nodes=m1 Default=NO MaxTime=INFINITE State=UP
PartitionName=compute Nodes=c1 Default=YES MaxTime=INFINITE State=UP
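
After editing, slurm.conf must be identical on every node. Restart the daemons and return the nodes to service (a sketch assuming a systemd-managed cluster):

systemctl restart slurmctld    # on the control node m1
systemctl restart slurmd       # on each compute node
scontrol update NodeName=m1,c1 State=resume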



From: https://blog.51cto.com/u_15906694/5922718
