Part 1
Managing Memory and CPU Allocation
Allocating VM Memory
A VM believes it has 4 GB of memory and will never use more than 4 GB of physical memory. Memory can be overcommitted across VMs: for example, an ESXi host with only 8 GB of physical RAM can still give each of three VMs 4 GB.
ESXi's Four Advanced Memory-Management Technologies
1. Page sharing (transparent page sharing)
The first memory-management technology VMware ESXi uses is transparent page sharing, in which identical memory pages are shared among VMs to reduce the total number of memory pages needed. The hypervisor computes hashes of the contents of memory pages to identify pages that contain identical memory. If a hash match is found, a full comparison of the matching memory pages is made in order to exclude a false positive. Once the pages are confirmed to be identical, the hypervisor transparently remaps the memory pages of the VMs so they share the same physical memory page. This reduces overall host memory consumption. (Identical pages from different VMs are backed by a single physical copy.)
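The hash-then-compare idea can be illustrated with a minimal Python sketch. The function and variable names here are illustrative, not ESXi internals, and a real implementation works on hardware page tables rather than Python dicts:

```python
import hashlib

def share_pages(vm_pages):
    """Sketch of transparent page sharing: hash each page's contents,
    then map pages with identical contents to one physical copy.
    `vm_pages` maps (vm_id, page_no) -> page bytes."""
    hash_to_phys = {}   # content hash -> index of canonical physical page
    page_table = {}     # (vm_id, page_no) -> physical page index
    physical = []       # backing store: unique page contents only

    for key, contents in vm_pages.items():
        h = hashlib.sha1(contents).hexdigest()
        if h in hash_to_phys and physical[hash_to_phys[h]] == contents:
            # Hash matched; the full comparison above rules out a
            # false positive, so remap to the shared physical page.
            page_table[key] = hash_to_phys[h]
        else:
            physical.append(contents)
            hash_to_phys[h] = len(physical) - 1
            page_table[key] = hash_to_phys[h]
    return page_table, physical

# Two VMs whose identical zero-filled pages collapse to one physical page.
pages = {
    ("vm1", 0): b"\x00" * 4096,
    ("vm2", 0): b"\x00" * 4096,
    ("vm2", 1): b"\x01" * 4096,
}
table, phys = share_pages(pages)
# Three guest pages are backed by only two physical pages.
```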
2. Ballooning
Ballooning involves the use of a driver, referred to as the balloon driver, installed into the guest OS. This driver is part of VMware Tools and is installed along with it. Once installed into the guest OS, the balloon driver can respond to commands from the hypervisor to reclaim memory from that particular guest OS. The balloon driver does this by requesting memory from the guest OS, a process called inflating, and then passing that memory back to the hypervisor for use by other VMs. When the memory pressure on the host passes, the balloon driver deflates, returning memory to the guest OS. (Requires VMware Tools; it forces a guest to use less memory so that VMs that urgently need memory can get it.)
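The inflate/deflate cycle can be modeled with a toy Python class. This is only a bookkeeping sketch of the behavior described above; the class and method names are invented for illustration:

```python
class BalloonDriver:
    """Toy model of the balloon driver inside a guest OS."""

    def __init__(self, guest_free_mb):
        self.guest_free_mb = guest_free_mb  # memory the guest can spare
        self.inflated_mb = 0                # memory held by the balloon

    def inflate(self, mb):
        # Request memory from the guest OS and hand it back to the
        # hypervisor for use by other VMs.
        reclaimed = min(mb, self.guest_free_mb)
        self.guest_free_mb -= reclaimed
        self.inflated_mb += reclaimed
        return reclaimed

    def deflate(self, mb):
        # Host memory pressure has passed: return memory to the guest.
        released = min(mb, self.inflated_mb)
        self.inflated_mb -= released
        self.guest_free_mb += released
        return released

driver = BalloonDriver(guest_free_mb=1024)
reclaimed = driver.inflate(512)   # hypervisor reclaims 512 MB
driver.deflate(512)               # pressure gone: guest gets it back
```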
3. Swapping
a. Guest OS swapping
b. Hypervisor swapping
Hypervisor swapping means that ESXi swaps memory pages out to disk in order to reclaim memory that is needed elsewhere. ESXi's swapping takes place without any regard for whether the pages are being actively used by the guest OS. As a result, and because disk response times are thousands of times slower than memory response times, guest OS performance is severely impacted if hypervisor swapping is invoked. For this reason, ESXi won't invoke swapping unless it is absolutely necessary. (Like virtual memory: it backs RAM with disk space, which hurts overall performance.)
4. Memory compression
When an ESXi host gets to the point that hypervisor swapping is necessary, the VMkernel will attempt to compress memory pages and keep them in RAM in a compressed memory cache. Pages that can be compressed by at least 50 percent are put into the compressed memory cache instead of being written to disk, and can then be recovered much more quickly if the guest OS needs them. Memory compression can dramatically reduce the number of pages that must be swapped to disk, and thus can dramatically improve the performance of an ESXi host under heavy memory pressure. Compression is invoked only when the ESXi host reaches the point that swapping is needed. (Compression saves memory; only pages compressible by at least 50 percent are kept in the cache.)
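The 50 percent decision rule can be sketched in a few lines of Python using `zlib` as a stand-in compressor (ESXi uses its own algorithm; the function name is illustrative):

```python
import os
import zlib

PAGE_SIZE = 4096

def place_page(page, compressed_cache, swapped):
    """Sketch of the compression decision: a page that compresses to
    at most 50% of its size goes into the compressed memory cache;
    otherwise it is swapped out to disk."""
    compressed = zlib.compress(page)
    if len(compressed) <= PAGE_SIZE // 2:
        compressed_cache.append(compressed)  # fast to recover later
    else:
        swapped.append(page)                 # slow path: disk swap

cache, swap = [], []
place_page(b"\x00" * PAGE_SIZE, cache, swap)    # zero page: compresses well
place_page(os.urandom(PAGE_SIZE), cache, swap)  # random data: won't compress
# The zero-filled page lands in the cache; the random one must be swapped.
```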
Configuring Memory Allocation
Reservation
Limit
Shares
Configuring a Memory Reservation
1. The maximum amount of memory a VM can request is its configured size.
2. The reservation determines the minimum amount of physical RAM this VM is guaranteed; the remainder may be backed by swap or by physical RAM.
Configuring a Memory Limit
The limit sets the actual ceiling on how much physical RAM may be utilized by that virtual machine.
Let's now change the limit on this virtual machine from the unlimited default setting to 768 MB.
1. The virtual machine is configured with 1024 MB of RAM, so the guest operating system running inside it believes that it has 1024 MB of RAM available to use.
2. The virtual machine has a reservation of 512 MB of RAM, which means that the ESXi host must allocate 512 MB of physical RAM to the virtual machine. This RAM is guaranteed to the virtual machine.
3. Assuming the ESXi host has enough physical RAM installed and available, the hypervisor will allocate memory to the virtual machine as needed, up to 768 MB (the limit). Upon reaching 768 MB, the balloon driver kicks in to prevent the guest operating system from using any more memory. When the guest operating system's memory demands drop below 768 MB, the balloon driver deflates and returns memory to the guest. The effective result is that the memory the guest operating system uses remains below 768 MB (the limit).
4. The 256 MB "gap" between the reservation and the limit may be supplied by either physical RAM or VMkernel swap space. ESXi will allocate physical RAM if it is available.
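The arithmetic in the steps above can be checked with a small Python function. This is a simplification of the behavior described, with invented names, ignoring ballooning/swap mechanics and tracking only how much physical RAM is granted:

```python
def physical_ram_granted(demand_mb, reservation_mb, limit_mb, host_free_mb):
    """Sketch of the reservation/limit behavior: the VM always gets its
    reservation in physical RAM; above that, ESXi grants physical RAM
    as available, but never beyond the limit."""
    demand = min(demand_mb, limit_mb)        # the limit caps usable RAM
    extra = 0
    if demand > reservation_mb:
        # The "gap" above the reservation is backed by physical RAM
        # only if the host has it free (otherwise VMkernel swap).
        extra = min(demand - reservation_mb, host_free_mb)
    return reservation_mb + extra

# Configured 1024 MB, reservation 512 MB, limit 768 MB, plenty of host RAM:
granted = physical_ram_granted(demand_mb=1024, reservation_mb=512,
                               limit_mb=768, host_free_mb=4096)
# granted == 768: the VM never uses more than 768 MB of physical RAM.
```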
Configuring Shares
Shares are a way of establishing a priority setting for a virtual machine requesting memory that is greater than the virtual machine's reservation but less than its limit.
Example 1:
Physical host memory: 2000 MB
VM1: reservation 500 MB, limit 2000 MB, shares 1000
VM2: reservation 500 MB, limit 2000 MB, shares 1000
VM1 actually receives 1000 MB of physical memory.
VM2 actually receives 1000 MB of physical memory.
Example 2:
Physical host memory: 2000 MB
VM1: reservation 500 MB, limit 2000 MB, shares 2000
VM2: reservation 500 MB, limit 2000 MB, shares 1000
VM1 actually receives 1166 MB of physical memory.
VM2 actually receives 833 MB of physical memory.
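The figures in both examples follow from a simple rule: each VM first receives its reservation, then the remaining host memory is split in proportion to shares. A minimal Python sketch (illustrative names, ignoring overhead and limits below the host size):

```python
def allocate_memory(host_mb, vms):
    """Share-proportional allocation: each VM gets its reservation,
    then the leftover host memory is divided in proportion to shares,
    capped at each VM's limit."""
    remaining = host_mb - sum(v["reservation"] for v in vms)
    total_shares = sum(v["shares"] for v in vms)
    grants = []
    for v in vms:
        extra = remaining * v["shares"] / total_shares
        grants.append(min(v["reservation"] + extra, v["limit"]))
    return grants

# Example 2 above: 2000 MB host, reservations 500/500, shares 2000/1000.
vm1 = {"reservation": 500, "limit": 2000, "shares": 2000}
vm2 = {"reservation": 500, "limit": 2000, "shares": 1000}
g1, g2 = allocate_memory(2000, [vm1, vm2])
# g1 ≈ 1166.7 MB and g2 ≈ 833.3 MB, matching the figures above.
```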
Introduction to VM CPUs
How could a virtual machine emulate a CPU? The answer was "no emulation." Think of a virtual system board that has a "hole" where the CPU socket goes; the guest operating system simply looks through the hole and sees one of the cores in the host server.
Multi-vCPU scheduling:
The VMkernel simultaneously schedules CPU cycles for multi-vCPU virtual machines. This means that when a dual-vCPU virtual machine places a request for CPU cycles, the request goes into a queue for the host to process, and the host has to wait until there are at least two cores or hyperthreads (if hyperthreading is enabled) with concurrent idle cycles to schedule that virtual machine. A relaxed co-scheduling algorithm provides a bit of flexibility by allowing the cores to be scheduled on a slightly skewed basis, but even so, it can be more difficult for the hypervisor to find open time slots on at least two cores. This occurs even if the virtual machine needs only a few clock cycles to do some menial task that could be done with a single processor. It is therefore recommended to start every new VM with a single vCPU.
Introduction to CPU Affinity
In addition to shares, reservations, and limits, vSphere offers a fourth option for managing CPU usage: CPU affinity. CPU affinity allows an administrator to statically associate a VM with specific physical CPU cores. CPU affinity is generally not recommended; it has a list of rather significant drawbacks. (Without affinity, a VM can be scheduled on any core of a multi-core CPU.)
Drawbacks of CPU Affinity
1. CPU affinity breaks vMotion. (vMotion can no longer be used.)
2. The hypervisor is unable to load-balance the VM across all the processing cores in the server. This prevents the hypervisor's scheduling engine from making the most efficient use of the host's resources. (Without affinity, any core can service the VM, and load can be balanced across all cores.)
3. Because vMotion is broken, you cannot use CPU affinity in a cluster unless vSphere DRS is set to manual operation.
Configuring CPU Allocation
Reservation
Limit
Shares
Example Scenarios for CPU Shares
Example environment:
1. The ESXi host has dual single-core 3 GHz CPUs.
2. The ESXi host has one or more VMs.
Scenario 1: The ESXi host has a single VM running. The shares are set at the defaults for the running VMs. Will the shares value have any effect in this scenario? No. There's no competition between VMs for CPU time.
Scenario 2: The ESXi host has two idle VMs running. The shares are set at the defaults for the running VMs. Will the shares values have any effect in this scenario? No. There's no competition between VMs for CPU time because both are idle.
Scenario 3: The ESXi host has two equally busy VMs running (both requesting maximum CPU capacity). The shares are set at the defaults for the running VMs. Will the shares values have any effect in this scenario? No. Again, there's no competition between VMs for CPU time, this time because each VM is serviced by a different core in the host.
Scenario 4: To force contention, both VMs are configured to use the same CPU by setting the CPU affinity. The ESXi host has two equally busy VMs running (both requesting maximum CPU capacity). This ensures contention between the VMs. The shares are set at the defaults for the running VMs. Will the shares values have any effect in this scenario? Yes! But in this case, because all VMs have equal shares values, each VM has equal access.
Scenario 5: The ESXi host has two equally busy VMs running (both requesting maximum CPU capacity, with CPU affinity set to the same core). The shares are set as follows: VM1 is set to 2000 CPU shares, and VM2 is set to the default 1000 CPU shares. Will the shares values have any effect in this scenario? Yes. In this case, VM1 has double the number of shares that VM2 has. This means that for every clock cycle that VM2 is assigned by the host, VM1 is assigned two clock cycles. Stated another way, out of every three clock cycles assigned to VMs by the ESXi host, two are assigned to VM1 and one is assigned to VM2.
Scenario 6: The ESXi host has three equally busy VMs running (each requesting maximum CPU capacity, with CPU affinity set to the same core). The shares are set as follows: VM1 is set to 2000 CPU shares, and VM2 and VM3 are set to the default 1000 CPU shares. Will the shares values have any effect in this scenario? Yes. In this case, VM1 has double the number of shares that VM2 and VM3 have assigned. This means that for every two clock cycles that VM1 is assigned by the host, VM2 and VM3 are each assigned a single clock cycle. Stated another way, out of every four clock cycles assigned to VMs by the ESXi host, two cycles are assigned to VM1, one is assigned to VM2, and one is assigned to VM3. You can see that this has effectively watered down VM1's CPU capabilities.
Scenario 7: The ESXi host has three VMs running. VM1 is idle while VM2 and VM3 are equally busy (each requesting maximum CPU capacity, and all three VMs are set with the same CPU affinity). The shares are set as follows: VM1 is set to 2000 CPU shares, and VM2 and VM3 are set to the default 1000 CPU shares. Will the shares values have any effect in this scenario? Yes. But in this case VM1 is idle, which means it isn't requesting any CPU cycles. This means that VM1's shares value is not considered when apportioning the host CPU to the active VMs. In this case, VM2 and VM3 equally share the host CPU cycles because their shares are set to an equal value.
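The apportionment logic in scenarios 5 through 7 can be sketched in Python: contending (active) VMs receive cycles in proportion to their shares, and idle VMs' shares are simply ignored. The function name and VM records are illustrative:

```python
def apportion_cycles(total_cycles, vms):
    """Sketch of share-based CPU apportionment: active VMs split the
    cycles in proportion to their shares; idle VMs are excluded."""
    active = [v for v in vms if v["busy"]]
    total_shares = sum(v["shares"] for v in active)
    return {v["name"]: total_cycles * v["shares"] // total_shares
            for v in active}

# Scenario 6: three busy VMs with shares 2000/1000/1000.
vms = [
    {"name": "vm1", "shares": 2000, "busy": True},
    {"name": "vm2", "shares": 1000, "busy": True},
    {"name": "vm3", "shares": 1000, "busy": True},
]
out = apportion_cycles(4, vms)
# Out of every four cycles: two to vm1, one each to vm2 and vm3.
```

Marking `vm1` as idle (`"busy": False`) reproduces scenario 7: its 2000 shares drop out of the calculation and vm2/vm3 split the cycles evenly.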
Tags: shares, CPU, host, vSphere, memory, VMs
From: https://www.cnblogs.com/smoke520/p/18370168