Currently, with DOCKER-40 and friends, we've implemented exit_status, but the value will be negative in cases where the zone was killed by a signal. This is because the value we get from lastexited (OS-3429) is the raw wait status (exit code in the high byte; terminating signal and core-dump flag in the low byte). vmadm converts this to an exit status as follows:
if (stat === -1) {
    // -1 is special status (OS-3429) that indicates normal exit
    result.exit_status = 0;
} else if ((stat & 0xff) == 0) {
    // WIFEXITED == true, so we can use exit status
    result.exit_status = (stat >> 8);
} else {
    // WIFEXITED != true, so we just pull out the non-exit bits * -1
    result.exit_status = ((stat & 0xff) * -1);
}
which means the exit status here will be, for example:
-15 when init was killed with SIGTERM
-9 when init was killed with SIGKILL
so we need to figure out how these should map to what Docker reports in these cases.
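For comparison, upstream Docker follows the shell convention and reports signal deaths as 128 plus the signal number (137 for SIGKILL, 143 for SIGTERM). A minimal sketch of that translation, using a hypothetical helper that is not part of vmadm:

// Hypothetical helper (not in vmadm): map vmadm's signal-negative
// exit_status to the 128 + signal convention upstream Docker reports.
function dockerExitCode(exitStatus) {
    if (exitStatus >= 0) {
        // normal exit: pass the exit status through unchanged
        return exitStatus;
    }
    // signal death: mask off the core-dump flag (0x80), then add 128
    return (128 + ((-exitStatus) & 0x7f));
}

console.log(dockerExitCode(5));   // 5   (exit 5)
console.log(dockerExitCode(-15)); // 143 (SIGTERM)
console.log(dockerExitCode(-9));  // 137 (SIGKILL)

Masking with 0x7f would also cover the core-dump edge case, where vmadm's value includes the 0x80 core flag (e.g. -139 for SIGSEGV with a core dump still maps to 139).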
Former user commented on 2015-03-27T15:22:35.000-0400:
joshw [12:22 PM]
Well, I think it actually mostly works. All that needs to happen there is making sure there are not edge cases that we're missing.
Todd Whiteman commented on 2015-03-30T13:53:47.000-0400 (edited 2015-05-07T19:47:30.000-0400):
It seems to mostly work, but the exit code is not propagated to the calling shell in sdc-docker:
$ docker run -t -i centos bash
$ exit 5
$ echo $?   # prints 0, rather than 5
Todd Whiteman commented on 2015-03-30T14:20:47.000-0400 (edited 2015-05-07T19:49:55.000-0400):
So the problem is that the container listing is done too early (before the container has fully died), so the ExitCode is always 0 and the Status is running; a subsequent container listing produces the expected result:
{
    "Id": "9e55df88ccec48438fe61b16f27e001a4522f0989d484eeb85eb967f23ce6db7",
    "Created": "2015-03-28T01:44:36.919Z",
    "Config": {
        "AppArmorProfile": "",
        "AttachStderr": true,
        "AttachStdin": true,
        "AttachStdout": true,
        "CpuShares": 8,
        "Cpuset": "",
        "Domainname": "local",
        "ExposedPorts": null,
        "Hostname": "9e55df88ccec",
        "MacAddress": "",
        "Memory": 1073741824,
        "MemorySwap": 2147483648,
        "NetworkDisabled": false,
        "OnBuild": null,
        "OpenStdin": true,
        "PortSpecs": null,
        "SecurityOpt": null,
        "StdinOnce": false,
        "Tty": true,
        "User": "",
        "Volumes": null,
        "WorkingDir": "",
        "Image": "centos",
        "Cmd": ["bash"],
        "Entrypoint": null,
        "Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
    },
    "Path": "bash",
    "Args": [],
    "Driver": "sdc",
    "ExecDriver": "sdc-0.1",
    "HostConfig": {
        "Binds": null,
        "CapAdd": null,
        "CapDrop": null,
        "ContainerIDFile": "",
        "Devices": [],
        "Dns": null,
        "DnsSearch": null,
        "ExtraHosts": null,
        "IpcMode": "",
        "Links": null,
        "LxcConf": [],
        "NetworkMode": "bridge",
        "PortBindings": {},
        "Privileged": false,
        "PublishAllPorts": false,
        "RestartPolicy": {
            "MaximumRetryCount": 0,
            "Name": ""
        },
        "VolumesFrom": null
    },
    "Volumes": null,
    "VolumesRW": null,
    "RestartCount": 0,
    "HostnamePath": "/etc/hostname",
    "HostsPath": "/etc/hosts",
    "Image": "88f9454e60ddf4ae5f23fad8247a2c53e8d3ff63b0bdac59fc17ceceab058ce6",
    "MountLabel": "",
    "Name": "/reverent_cori",
    "NetworkSettings": {
        "Bridge": "eth0",
        "Gateway": "10.88.88.2",
        "IPAddress": "10.88.88.12",
        "IPPrefixLen": 24,
        "MacAddress": "90:b8:d0:24:2e:88",
        "PortMapping": null,
        "Ports": {}
    },
    "ProcessLabel": "",
    "ResolvConfPath": "/etc/resolv.conf",
    "State": {
        "Error": "",
        "ExitCode": 5,
        "FinishedAt": "2015-03-28T01:45:03.322Z",
        "OOMKilled": false,
        "Paused": false,
        "Pid": 0,
        "Restarting": false,
        "Running": false,
        "StartedAt": "0001-01-01T00:00:00Z"
    }
}
So, is there a way to detect this race condition, or better, to wait until the container has fully stopped?
Note: Using "--rm" does the correct thing, as it waits for the container to stop before returning.
Former user commented on 2018-06-06T03:15:50.051-0400:
With the reduced investment in Docker, we won't spend time on this improvement. We'll file a new ticket or reopen this one if the need comes up again.