Priority: 4 - Normal
Created by: Hans Rosenfeld
Reported by: Hans Rosenfeld
Assigned to: Hans Rosenfeld
Fixed: A fix for this issue is checked into the tree and tested.
(Resolution Date: 2018-12-05T23:59:16.100Z)
2018-12-06 Grizz (Release Date: 2018-12-06)
When a bhyve zone is destroyed, the resources used by the bhyve instance are released regardless of whether other processes still have the instance device open. This may be a very unlikely situation with bhyvectl (which never keeps the device open for long), but it's a real problem with the upcoming mdb bhyve target.
Instead of freeing the resources immediately, we can set a flag requesting that the instance resources be freed on the last close, and then do just that.
Testing: I have tested this in conjunction with the mdb bhyve target.
When mdb is attached to a VM, the VM can be halted either with vmadm or by halting the guest OS. This will actually finish faster than normal, since the work to free the VM resources isn't done at this time. When mdb calls into libvmm again it will notice the VM is gone, and when exiting it will actually wait for the resources to be freed. If a VM is started again before mdb exits, the start will fail, as its resources (including the softc and name) are still around.
Rebooting a VM is different: mdb won't notice it at all and will just continue to operate normally.
As requested by John, I did another round of stress testing. I booted 24 VMs running various Linux distributions and SmartOS, randomly halted and booted random VMs, and in a separate terminal attached mdb to random VMs for random periods of time. I kept that running for a few hours and didn't see any odd behavior.
illumos-joyent commit e0f08ef22a20cf816904f0b60a05b9dbe40cb836 (branch master, by Hans Rosenfeld)
OS-7394 defer bhyve instance destruction to last close
Reviewed by: Patrick Mooney <email@example.com>
Reviewed by: John Levon <firstname.lastname@example.org>
Approved by: John Levon <email@example.com>