
Re: [ezjail] Race when attaching zfs datasets to jail?



Yaroslav Tykhiy wrote:
> Hi All,
> 
> Is it just me, or is there a race window between ezjail.sh invoking "zfs
> jail" from outside a jail and the jail running its /etc/rc.d/zfs?  In my
> case it manifests itself as follows: No ZFS file systems get mounted
> when the jail starts up but running "/etc/rc.d/zfs start" later on by
> hand gets all of them mounted.  Of course, "zfs_enable=YES" is set in
> jail and the required sysctls are set, too, for otherwise it wouldn't
> have worked at all.
> 
> If my reading of the code is right, "zfs jail" isn't synchronised with
> the jail and a race can take place so "zfs jail" runs later than
> rc.d/zfs.  Unfortunately, this doesn't look like a mere bug in ezjail.sh
> fixable by tweaking some shell code.  What is missing is a natural way
> to synchronise the execution of rc.d/zfs in a jail with the respective
> "zfs jail" in the host system.  So the real issue can be in the FreeBSD
> kernel API and the most robust, if complicated, solution would be to
> attach ZFS datasets right in the jail(2) syscall so that there is no
> race window.
> 
> However, a possible workaround in ezjail for the time being could be
> to check the zfs_enable setting in the jail and run "zfs mount -a" on
> the jail's behalf after its datasets have been attached.
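A minimal sketch of that workaround, assuming the jail root and dataset names shown (myjail, /jails/myjail, tank/jails/myjail are placeholders, not ezjail's actual variables):

```shell
#!/bin/sh
# Hypothetical workaround sketch: after "zfs jail" has attached the
# datasets from the host, mount them on the jail's behalf in case the
# jail's own rc.d/zfs already ran and lost the race.
JID=myjail
JAILROOT=/jails/myjail

# Attach the dataset to the running jail.
zfs jail "$JID" tank/jails/myjail

# If the jail has zfs_enable=YES in its rc.conf, mount its datasets
# for it from the host side.
if grep -q '^zfs_enable="\{0,1\}[Yy][Ee][Ss]' "$JAILROOT/etc/rc.conf"; then
    jexec "$JID" zfs mount -a
fi
```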

The right way would be for rc.d/jail first to create the jail, then
attach the ZFS dataset, and only then start the jail. That way,
rc.d/zfs inside the jail would run after the dataset is attached, and
everything would work as expected.
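That ordering could be sketched roughly as follows, using jail(8)'s parameter syntax; the names and paths are placeholders, not what rc.d/jail actually does today:

```shell
#!/bin/sh
# Hypothetical sketch of the create/attach/start ordering.

# 1. Create the jail persisted, but do not run its rc scripts yet.
jail -c name=myjail path=/jails/myjail host.hostname=myjail \
     persist allow.mount allow.mount.zfs=1 enforce_statfs=1

# 2. Attach the dataset while the jail exists but before rc runs,
#    so there is no window for rc.d/zfs to lose the race.
zfs jail myjail tank/jails/myjail

# 3. Now run the jail's startup scripts; its rc.d/zfs sees the
#    already-attached dataset and mounts it normally.
jexec myjail /bin/sh /etc/rc
```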

The same goes for all the allow.* parameters you can set using
jail -m.
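For example, such parameters could hypothetically be adjusted on the persisted-but-not-yet-started jail before its rc scripts run (jail name and parameters here are illustrative):

```shell
# Hypothetical: modify allow.* parameters on an existing jail with
# jail -m before starting its rc scripts, so the settings are in
# effect from the very first rc.d script onward.
jail -m name=myjail allow.raw_sockets=1 allow.sysvipc=1
```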

greetings,
philipp