|Priority:||4 - Normal|
|Created by:||Patrick Mooney|
|Reported by:||Patrick Mooney|
|Assigned to:||John Levon|
Fixed: A fix for this issue is checked into the tree and tested.
(Resolution Date: 2018-04-03T15:18:10.568Z)
2018-04-12 Promised Land (Release Date: 2018-04-12)
For the time being, bhyve on illumos directly allocates kernel memory to furnish as physical memory for hosted guests. Allocating out of the normal kmem arena would mean guest memory is included by default in OS crash dumps. For initial testing, the zio arena was used, since it isn't included in crash dumps. It would be nice to have an independent arena, so that ::memstat accounting is clear and future control over potential largepage allocation is easy.
First, a historical note: the current code (since illumos bug 6914, which introduced segzio) figures on 1.5x physmem for kernelheap, and 1.5x physmem for segzio. Ignoring loose change, this means we have space for around (COREHEAP_BASE - VALLOC_BASE == 1Tb) / 3 physmem. The current code actually uses 256Gb for this figure. (KPM lives below VALLOC_BASE, so it doesn't count against this window.)
Past this limit, startup_memlist() is supposed to move things down to make room. But it actually seems to mis-calculate the adjustment:
+	segkpm_base = -(P2ROUNDUP((4 * kpm_resv_amount),
+	    KERNEL_REDZONE_SIZE));
This worked when the physmem limit was 1Tb. But now, at e.g. 512Gb physmem, this comes out higher than the original SEGKPM_BASE (0xffffff8000000000 > 0xfffffe0000000000).
We only get away with this because a) the code ensures the new base is no higher than the original immediately after, and b) at least on my machine, plat_dr_physmax is huge, so the adjustment works out OK.
Regardless, the funny maths here seems to have no purpose at all, so we'll just rewrite this to make the adjustment normally.
We'll introduce a new segkvmm arena into the kernel address space, as described in i86pc/os/startup.c. Note that bhyve currently uses this arena, backed by segkmem_alloc, to build a set of (4Kb) kernel mappings for all the VM memory. This is in addition to segvmm, which maps the same physical pages into the bhyve userspace.
We'll reserve 4x physmem for this arena: we don't have any quantum caches, as we're expecting only relatively large VA allocations, so we want to over-provision the VA space to avoid fragmentation issues. It's also a convenient number as the default layout accounts for up to 256Gb physmem: 4x that is 1Tb.
To account for the new space, we will permanently move SEGKPM_BASE down by 1Tb (and hence KERNELBASE with it).
We also need to fix up the adjustment code in startup_memlist(). We will now have room for 2Tb/7 physmem - which we'll still call 256Gb - but when we do adjust, we need to make sure there's enough extra VA: 8x physmem (1.5x heap, 1.5x segzio, 1x kpm, 4x vmm).
This increase actually means we need to officially drop support for DR memory: on my test machine, plat_dr_physmax is 16Tb, and we actually don't have enough VA above the hole for this calculation!
This has been tested on two machines: a smaller (vmware) instance, and a machine large enough to cause an adjustment of kernelbase. Verified that the VA layout looks sensible on both systems.
Both DEBUG and non-DEBUG were taken for a spin, and KVM was sanity tested.
bhyve VMs of various sizes were booted to verify that the ::vmem / ::vmem_seg output was as expected, and dropped back to zero on de-allocation.
A bunch of random start/stop load was also placed on the system, though this regularly hits LAB-253.
OS-6606 want memory arena for vmm applications
OS-6835 memory DR should be disabled
Reviewed by: Jerry Jelinek <firstname.lastname@example.org>
Reviewed by: Patrick Mooney <email@example.com>
Approved by: Patrick Mooney <firstname.lastname@example.org>