| Issue Type | Improvement |
|---|---|
| Priority | 4 - Normal |
| Status | Open |
| Created at | 2019-03-11T22:13:32.138Z |
| Updated at | 2019-07-30T19:15:16.784Z |
| Created by | Former user |
| Reported by | Former user |
Currently, using `vmadm update $uuid < add_disks.json` to add a disk to a bhyve instance is likely to fail with an error saying there is no space available, even if the pool has plenty of space. This is because the zfs quota on `zones/$uuid` is set to prevent unlimited space consumption by an overzealous snapshotter.
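The failing command reads a JSON payload from stdin. A minimal, hypothetical `add_disks.json` might look like the following; the `size` (in MiB) and `model` values here are illustrative only, not taken from the report:

```json
{
  "add_disks": [
    {
      "size": 10240,
      "model": "virtio"
    }
  ]
}
```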
For the SmartOS user, this doesn't make much sense. One of the following approaches could be used to make this better.
1. When `flexible_disk_size` is not present in the zone configuration, `quota` and `reservation` should not be set on `zones/$uuid`. In this case it would be up to CloudAPI to ensure that it only allows snapshots, disk creation, disk removal, and disk resize when the instance has `flexible_disk_size` set.
2. When `flexible_disk_size` is not present in the zone configuration, any attempt to add, remove, or resize a disk should result in the appropriate adjustment to the zfs `quota` and `reservation` on `zones/$uuid`. In this case, it would be up to CloudAPI to ensure that it only allows disk creation, disk removal, and disk resize when the instance has `flexible_disk_size` set.

The first approach appears marginally better because it will also make the job of a SmartOS admin who wishes to use ZFS snapshots easier.
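Under the second approach, the quota adjustment on a disk add could be sketched as below. This is a hypothetical illustration only: it assumes the quota tracks the sum of instance RAM plus total disk size in MiB, which may not match vmadm's actual accounting, and all values are made up.

```shell
# Hypothetical existing allocation (MiB); real values would come from the
# instance's configuration, not hard-coded constants.
ram_mib=1024          # instance RAM
disk_total_mib=10240  # total size of existing disks
new_disk_mib=4096     # disk being added

# Recompute the quota so it also covers the new disk.
new_quota_mib=$(( ram_mib + disk_total_mib + new_disk_mib ))

# The command vmadm would then issue on the instance's dataset.
echo "zfs set quota=${new_quota_mib}m zones/\$uuid"
```

The same arithmetic, run in reverse, would shrink the quota on disk removal or downsize.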
Whichever approach is chosen, any surprising behavior and the "right way" to perform these operations need to appear in examples in the vmadm man page.