Trying to figure out if there is a way to do this without zfs send transferring a ton of data. I have:

  • s/test1, inside it are folders:
    • folder1
    • folder2

I have this pool backed up remotely by sending snapshots.

I’d like to split this up into:

  • s/test1, inside is folder:
    • folder1
  • s/test2, inside is folder:
    • folder2

I’m trying to figure out if there is some combination of clone and promote that would limit the amount of data that needs to be sent over the network.
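
Roughly what I have in mind, if a clone really can be sent incrementally from its origin snapshot (every name below is a placeholder: pool s, snapshot @split, remote host backup, remote pool backuppool, default mountpoints under /s/):

  # 1. Snapshot the existing dataset and clone it locally. The clone
  #    shares blocks with the snapshot, so nothing is copied on disk.
  zfs snapshot s/test1@split
  zfs clone s/test1@split s/test2

  # 2. Remove the redundant folder from each side. Space is only freed
  #    once older snapshots still referencing that data are destroyed.
  rm -rf /s/test1/folder2
  rm -rf /s/test2/folder1

  # 3. With the @split snapshot of test1 already on the remote (via the
  #    normal backup run), send the clone as an incremental from its
  #    origin snapshot, so only the differences since @split travel.
  zfs snapshot s/test2@initial
  zfs send -i s/test1@split s/test2@initial | ssh backup zfs receive backuppool/test2

  # 4. Optionally promote the clone afterwards so s/test2 no longer
  #    depends on s/test1@split. Promotion moves @split (and any older
  #    snapshots) over to s/test2 and makes s/test1 the dependent clone.
  zfs promote s/test2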

Or maybe there is some record/replay method I could do on snapshots that I’m not aware of.

Thoughts?

  • fmstrat@lemmy.nowsci.com (OP) · 4 months ago

    Yeah, your edit is the problem, unfortunately. Moving the data across datasets would incur disk reads/writes and mean sending terabytes of data over the network again.

    The goal in separating them is to be able to independently zfs send folder1 somewhere without including folder2. Poor choice of dataset layout on my part when I built the array.
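
    Once the split exists on both ends, each dataset could then be replicated on its own schedule with ordinary incremental sends; a minimal sketch, with @prev/@new as placeholder snapshot names and backup/backuppool as the placeholder remote host and pool:

      # folder1 travels on its own...
      zfs snapshot s/test1@new
      zfs send -i @prev s/test1@new | ssh backup zfs receive backuppool/test1

      # ...and folder2 no longer rides along with it.
      zfs snapshot s/test2@new
      zfs send -i @prev s/test2@new | ssh backup zfs receive backuppool/test2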