Trying to figure out if there is a way to do this without `zfs send`ing a ton of data. I have:

`s/test1`, inside it are folders: `folder1`, `folder2`
I have this pool backed up remotely by sending snapshots.
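The backup is nothing fancy, just periodic incremental sends; the host and snapshot names here are made up:

```
# send only the delta between the last two snapshots to the backup host
zfs snapshot s/test1@2024-02
zfs send -i s/test1@2024-01 s/test1@2024-02 | ssh backuphost zfs receive backup/test1
```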
I’d like to split this up into:
`s/test1`, inside is folder: `folder1`
`s/test2`, inside is folder: `folder2`
I’m trying to figure out if there is some combination of `clone` and `promote` that would limit the amount of data needed to be sent over the network.
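Roughly what I’m imagining, with everything beyond the names above being hypothetical:

```
# snapshot the original dataset, then clone it; the clone shares blocks on disk
zfs snapshot s/test1@split
zfs clone s/test1@split s/test2

# promote the clone so it no longer depends on s/test1
zfs promote s/test2

# diverge the two datasets so each keeps only its own folder
rm -rf /s/test1/folder2
rm -rf /s/test2/folder1
```

Locally that should cost almost nothing thanks to copy-on-write. The part I can’t work out is the remote side: maybe a `zfs send -i` against the shared origin snapshot, received with `-o origin=`, would let the backup pool reuse blocks it already has, but I haven’t verified that.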
Or maybe there is some record/replay method I could do on snapshots that I’m not aware of.
Thoughts?
I can’t think of a way offhand to match your scenario, but I’ve heard ideas suggested that come close. This is exactly the type of question you should ask at practicalzfs.com.
If you don’t know it, that’s Jim Salter’s forum (author of sanoid and syncoid) and there are some sharp ZFS experts hanging out there.
what is your goal with this?
do you still want to keep all the data in a single pool?
if so, you could make datasets in the pool, and move the top directories into the datasets. datasets are basically dirs that can have special settings on how they are handled.

ninja edit: now that I think about it, moving across datasets probably causes that data to be resent.
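something like this, assuming the pool is mounted at /s (the mv is exactly the copy the edit is worried about):

```
# new dataset for folder2
zfs create s/test2

# mv across dataset boundaries is really a copy + delete, so all the data gets rewritten
mv /s/test1/folder2 /s/test2/
```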
it would be easier to give advice by knowing why you want to do this.

Yea, your edit is the problem unfortunately. Moving across datasets would incur disk reads/writes and resending terabytes of data.
The goal in separating them out is that I want to be able to independently `zfs send` `folder1` somewhere without including `folder2`. Poor choice of dataset layout when I built the array.
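Once split, each dataset could be replicated on its own, which is the whole point; the target here is hypothetical:

```
# back up folder1's dataset without dragging folder2 along
zfs snapshot s/test1@weekly
zfs send s/test1@weekly | ssh backuphost zfs receive -u backup/test1
```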