Shoutbox
post #16 by fafnir on 22.01.2025 19:39
As of the time of writing, I'm looking at 505 TiB of Total Storage Space. Anyone reached the PiB mark yet?
According to my local Whereisit listing, I'm at 1154 TB, but that may include ~10% duplicated files, since my collection is not on unified storage.
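For reference, assuming the Whereisit figure is decimal terabytes (10^12 bytes), that total already clears the binary PiB mark; a quick check:
Code:
# 1 PiB = 2^50 bytes; 1154 decimal TB works out to roughly 1.02 PiB
python3 -c 'print(1154e12 / 2**50)'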
post #15 by xemnarth on 22.01.2025 08:16
post #14 by gcs8 on 25.10.2024 11:13
Very nice setup you have. My layout approach looks a bit different: I use several separate pools, all RAIDZ3 with Optane NVMe log devices to avoid data loss on power outage. With separate pools I can swap disk generations and grow capacity steadily, and fragmentation can be reduced by moving data between pools.
Code:
root@fate ~# zpool list | sort -k 2 -h
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
misaka   72.8T  59.8T  13.0T        -         -     0%    82%  1.00x  ONLINE  -
kurumi   91.0T  70.2T  20.8T        -         -     9%    77%  1.00x  ONLINE  -
tohsaka   127T  41.9T  85.4T        -         -     0%    32%  1.00x  ONLINE  -
maomao    255T   104T   151T        -         -     1%    40%  1.00x  ONLINE  -
sakura    327T   162T   166T        -         -     0%    49%  1.00x  ONLINE  -
So, with ZFS, you don't have to move data between pools to prevent fragmentation; you can handle it at the dataset level, since each dataset is treated as its own space. What I do is keep staging datasets: a download goes from incomplete to a to-be-sorted/staging dataset, and once it's complete it moves into its final dataset, where it is written without any fragmentation.
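A rough sketch of that staging flow, with made-up dataset and path names (the actual layout isn't shown in the thread):
Code:
# one dataset per stage; each dataset manages its own space
zfs create The-Repository/incomplete
zfs create The-Repository/staging
zfs create The-Repository/anime
# once a download is complete, copy it out of staging into its final dataset;
# the copy is a fresh sequential write, so the final file lands unfragmented
rsync -a --remove-source-files /mnt/The-Repository/staging/Some-Show/ /mnt/The-Repository/anime/Some-Show/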
You can see below that I only have ~2% fragmentation; some of that is from an iSCSI LUN.
Code:
root@The-Archive:~ # zpool list
NAME             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
The-Repository   473T   257T   216T        -         -     2%    54%  1.00x  ONLINE  /mnt
I am not sure what your exact vdev layout is, but RAIDZ3 seems a bit excessive to me. Then again, I also replicate everything to an offsite replica server, so I am never more than ~24 hours out of sync with my local server.
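That kind of offsite replica is usually kept in sync with incremental snapshot replication; a minimal sketch, where the pool, dataset, snapshot, and host names are placeholders rather than the actual setup:
Code:
# take today's recursive snapshot, then send only the delta since yesterday's
zfs snapshot -r tank/media@2024-10-25
zfs send -R -I tank/media@2024-10-24 tank/media@2024-10-25 | ssh replica-host zfs receive -F backup/media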
The log devices only help if you are using a protocol that issues synchronous writes. With SMB or NFS running async you will still lose the in-flight data; you would need to set the pool/dataset sync policy to sync=always. If you can handle a redownload, snapshots cover most of it: just revert, or nuke the bad file and re-pull it. The log device plus sync=always is more for super-critical in-flight data, think running a VM off ZFS via iSCSI or NFS as its disk. ZFS being copy-on-write, once data is on the pool it's pretty safe, even during a move.
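For completeness, the sync policy mentioned above is a per-dataset property; a quick sketch, with a placeholder dataset name:
Code:
# default is sync=standard: only writes the client flags as synchronous go through the intent log
zfs get sync The-Repository/vmstore
# force every write through the log device (safer for VM/iSCSI backing stores, slower for bulk copies)
zfs set sync=always The-Repository/vmstore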
post #13 by realsenpai on 21.09.2024 23:17
This is where I am at.
Code:
root@The-Archive:~ # zpool list -v
NAME  SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
The-Repository  473T  256T  217T  -  -  2%  54%  1.00x  ONLINE  /mnt
  raidz2-0  98.2T  57.4T  40.8T  -  -  2%  58.5%  -  ONLINE
    gptid/74a76dda-fe51-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/74cf7687-fe51-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/74bf89e2-fe51-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/74bddae9-fe51-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/74b5b89c-fe51-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/74fa30a4-fe51-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
  raidz2-1  98.2T  57.2T  41.0T  -  -  2%  58.3%  -  ONLINE
    gptid/a85e917d-fe52-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/a88a3803-fe52-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/a8b3f64f-fe52-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/a8cad6ee-fe52-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/a8c042f7-fe52-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/a8e48839-fe52-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
  raidz2-2  98.2T  57.3T  40.9T  -  -  2%  58.4%  -  ONLINE
    gptid/8f8c482f-fe53-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/8fa39a65-fe53-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/8f6c4344-fe53-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/8fb5830a-fe53-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/8fadc7eb-fe53-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/90007b32-fe53-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
  raidz2-3  43.6T  21.6T  22.1T  -  -  2%  49.4%  -  ONLINE
    gptid/b3d9fe73-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b3ffe81c-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b37d5788-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b37c09b9-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b3b2b5bb-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b3eca4c4-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
  raidz2-4  43.6T  21.6T  22.1T  -  -  2%  49.5%  -  ONLINE
    gptid/b3bda5fc-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b33561b2-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b3431a8b-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b3639a1e-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b349723d-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b35fa144-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
  raidz2-5  43.6T  20.4T  23.3T  -  -  2%  46.7%  -  ONLINE
    gptid/24d4c4f0-0a1a-11ee-9d5c-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/b36bdfa2-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b3787c97-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b3ea2233-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b4b966de-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b4e61650-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
  raidz2-6  43.6T  20.4T  23.2T  -  -  2%  46.8%  -  ONLINE
    gptid/b4d65d8b-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b4df86c7-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b4eb92c4-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b4dd6cce-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b4ffd2b5-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b5063ba1-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
special  -  -  -  -  -  -  -  -  -
  mirror-8  3.48T  223G  3.27T  -  -  26%  6.26%  -  ONLINE
    gptid/95d2146b-45c5-11ed-9d5c-0cc47a8ff400  3.49T  -  -  -  -  -  -  -  ONLINE
    gptid/95ceca71-45c5-11ed-9d5c-0cc47a8ff400  3.49T  -  -  -  -  -  -  -  ONLINE
    gptid/95d46363-45c5-11ed-9d5c-0cc47a8ff400  3.49T  -  -  -  -  -  -  -  ONLINE
logs  -  -  -  -  -  -  -  -  -
  mirror-7  186G  4.45M  186G  -  -  0%  0.00%  -  ONLINE
    gptid/a5c37821-15e6-11ed-a8da-0cc47a8ff400  186G  -  -  -  -  -  -  -  ONLINE
    gptid/a5cc0ea0-15e6-11ed-a8da-0cc47a8ff400  186G  -  -  -  -  -  -  -  ONLINE
cache  -  -  -  -  -  -  -  -  -
  gptid/a5baac72-15e6-11ed-a8da-0cc47a8ff400  1.82T  1.57T  256G  -  -  0%  86.2%  -  ONLINE
  gptid/a5b88c4b-15e6-11ed-a8da-0cc47a8ff400  1.82T  1.56T  266G  -  -  0%  85.7%  -  ONLINE
spare  -  -  -  -  -  -  -  -  -
  gptid/5f8faa0b-0ba6-11ee-9d5c-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  AVAIL
Very nice setup you have. My layout approach looks a bit different: I use several separate pools, all RAIDZ3 with Optane NVMe log devices to avoid data loss on power outage. With separate pools I can swap disk generations and grow capacity steadily, and fragmentation can be reduced by moving data between pools.
Code:
root@fate ~# zpool list | sort -k 2 -h
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
misaka   72.8T  59.8T  13.0T        -         -     0%    82%  1.00x  ONLINE  -
kurumi   91.0T  70.2T  20.8T        -         -     9%    77%  1.00x  ONLINE  -
tohsaka   127T  41.9T  85.4T        -         -     0%    32%  1.00x  ONLINE  -
maomao    255T   104T   151T        -         -     1%    40%  1.00x  ONLINE  -
sakura    327T   162T   166T        -         -     0%    49%  1.00x  ONLINE  -
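For anyone curious, a pool along the lines of the ones quoted above (a RAIDZ3 data vdev plus a mirrored Optane SLOG) is created roughly like this; the pool name, vdev width, and device names are placeholders, since the quoted post doesn't give them:
Code:
# one 8-wide RAIDZ3 data vdev plus a mirrored NVMe log vdev for synchronous writes
zpool create -o ashift=12 tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 log mirror nvd0 nvd1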
post #12 by gcs8 on 14.09.2024 17:37
Code:
root@The-Archive:~ # zpool list -v
NAME  SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
The-Repository  473T  256T  217T  -  -  2%  54%  1.00x  ONLINE  /mnt
  raidz2-0  98.2T  57.4T  40.8T  -  -  2%  58.5%  -  ONLINE
    gptid/74a76dda-fe51-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/74cf7687-fe51-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/74bf89e2-fe51-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/74bddae9-fe51-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/74b5b89c-fe51-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/74fa30a4-fe51-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
  raidz2-1  98.2T  57.2T  41.0T  -  -  2%  58.3%  -  ONLINE
    gptid/a85e917d-fe52-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/a88a3803-fe52-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/a8b3f64f-fe52-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/a8cad6ee-fe52-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/a8c042f7-fe52-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/a8e48839-fe52-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
  raidz2-2  98.2T  57.3T  40.9T  -  -  2%  58.4%  -  ONLINE
    gptid/8f8c482f-fe53-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/8fa39a65-fe53-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/8f6c4344-fe53-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/8fb5830a-fe53-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/8fadc7eb-fe53-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/90007b32-fe53-11ec-b425-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
  raidz2-3  43.6T  21.6T  22.1T  -  -  2%  49.4%  -  ONLINE
    gptid/b3d9fe73-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b3ffe81c-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b37d5788-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b37c09b9-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b3b2b5bb-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b3eca4c4-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
  raidz2-4  43.6T  21.6T  22.1T  -  -  2%  49.5%  -  ONLINE
    gptid/b3bda5fc-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b33561b2-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b3431a8b-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b3639a1e-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b349723d-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b35fa144-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
  raidz2-5  43.6T  20.4T  23.3T  -  -  2%  46.7%  -  ONLINE
    gptid/24d4c4f0-0a1a-11ee-9d5c-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  ONLINE
    gptid/b36bdfa2-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b3787c97-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b3ea2233-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b4b966de-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b4e61650-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
  raidz2-6  43.6T  20.4T  23.2T  -  -  2%  46.8%  -  ONLINE
    gptid/b4d65d8b-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b4df86c7-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b4eb92c4-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b4dd6cce-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b4ffd2b5-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
    gptid/b5063ba1-1077-11ed-9ec6-0cc47a8ff400  7.28T  -  -  -  -  -  -  -  ONLINE
special  -  -  -  -  -  -  -  -  -
  mirror-8  3.48T  223G  3.27T  -  -  26%  6.26%  -  ONLINE
    gptid/95d2146b-45c5-11ed-9d5c-0cc47a8ff400  3.49T  -  -  -  -  -  -  -  ONLINE
    gptid/95ceca71-45c5-11ed-9d5c-0cc47a8ff400  3.49T  -  -  -  -  -  -  -  ONLINE
    gptid/95d46363-45c5-11ed-9d5c-0cc47a8ff400  3.49T  -  -  -  -  -  -  -  ONLINE
logs  -  -  -  -  -  -  -  -  -
  mirror-7  186G  4.45M  186G  -  -  0%  0.00%  -  ONLINE
    gptid/a5c37821-15e6-11ed-a8da-0cc47a8ff400  186G  -  -  -  -  -  -  -  ONLINE
    gptid/a5cc0ea0-15e6-11ed-a8da-0cc47a8ff400  186G  -  -  -  -  -  -  -  ONLINE
cache  -  -  -  -  -  -  -  -  -
  gptid/a5baac72-15e6-11ed-a8da-0cc47a8ff400  1.82T  1.57T  256G  -  -  0%  86.2%  -  ONLINE
  gptid/a5b88c4b-15e6-11ed-a8da-0cc47a8ff400  1.82T  1.56T  266G  -  -  0%  85.7%  -  ONLINE
spare  -  -  -  -  -  -  -  -  -
  gptid/5f8faa0b-0ba6-11ee-9d5c-0cc47a8ff400  16.4T  -  -  -  -  -  -  -  AVAIL
post #11 by fafnir on 06.09.2024 19:38
Hi all, still watching the old stuff you have? I still have the VHS collection I started with, and I am now at 50 TB of anime on 2 NAS systems; I need to look for something easier to watch and sort the anime series with.
Do you also still rip DVDs and Blu-rays to MKV, or is there a better way to encode now?
For me there are plenty of groups releasing files for every series, so I don't really see the point of encoding anything myself (besides the few random old titles that nobody wants to touch). Also, DVD subs are horrible and Blu-ray subs are barely better; some release groups do better work with softsubs than what is available by default.
I do buy physical, but that's just my collector's habit; most of the boxes are kept in their original packaging wrap. Maybe it will be worth something when I'm old, haha.
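On the disc-ripping question above: for anyone who does still rip their own DVDs/Blu-rays, the usual approach is a lossless remux to MKV first (MakeMKV is the common tool) and a re-encode only if space matters; a hedged ffmpeg example of the optional re-encode step, with illustrative file names and settings:
Code:
# keep all streams, re-encode video to HEVC, copy audio and subtitles untouched
ffmpeg -i disc_remux.mkv -map 0 -c:v libx265 -crf 18 -preset slow -c:a copy -c:s copy encoded.mkv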
post #10 by uni on 02.09.2024 18:58
Do you also still rip DVDs and Blu-rays to MKV, or is there a better way to encode now?
post #9 by ridojiri on 01.09.2024 00:45
As of the time of writing, I'm looking at 427 TB of Total Storage Space.
You're currently beating me then; I'm at 344, with a full capacity of 464.
I want to go all-flash. While solutions are now available, prices are still too high.
post #8 by nabiru3 on 31.08.2024 09:38
CGi did it before and it was way better at it!
post #7 by fafnir on 30.08.2024 11:37
Just kidding, but they really like to waste everyone's bandwidth and disk space.
post #6 by nabiru3 on 29.08.2024 09:44
As of the time of writing, I'm looking at 427 TB of Total Storage Space.
It is quite perplexing; how do you manage that?! I have personally been collecting stuff for about three decades now and haven't even gotten close to a quarter of that... I am guessing you store a lot of remuxes!
post #5 by xemnarth on 29.08.2024 09:29
post #4 by JimKi on 04.01.2019 08:17
Strangely, mylist is at 13.2 TB; I suppose it is because of deleted releases.
post #3 by uni on 30.09.2018 18:52
post #2 by Kuroibara on 29.09.2018 16:41
post #1 by fafnir on 24.02.2012 19:20