<div dir="ltr"><br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature">sudo systemctl status sanoid.service<br>● sanoid.service - Snapshot ZFS Pool<br> Loaded: loaded (/lib/systemd/system/sanoid.service; static; vendor preset: enabled)<br> Active: inactive (dead) since Sun 2019-12-29 12:00:03 EST; 5min ago<br> Process: 10396 ExecStart=/usr/sbin/sanoid --take-snapshots --verbose (code=exited, status=0/SUCCESS)<br> Main PID: 10396 (code=exited, status=0/SUCCESS)<br><br>Dec 29 12:00:01 XXXXX sanoid[10396]: INFO: taking snapshots...<br>Dec 29 12:00:01 XXXXX sanoid[10396]: taking snapshot homedirs@autosnap_2019-12-29_12:00:01_daily<br>Dec 29 12:00:02 XXXXX sanoid[10396]: cannot create snapshots : out of space<br>Dec 29 12:00:02 XXXXX sanoid[10396]: CRITICAL ERROR: /sbin/zfs snapshot homedirs@autosnap_2019-12-29_12:00:01_daily fai<br>Dec 29 12:00:02 XXXXX sanoid[10396]: taking snapshot homedirs@autosnap_2019-12-29_12:00:01_hourly<br>Dec 29 12:00:03 XXXXX sanoid[10396]: cannot create snapshots : out of space<br>Dec 29 12:00:03 XXXXX sanoid[10396]: CRITICAL ERROR: /sbin/zfs snapshot homedirs@autosnap_2019-12-29_12:00:01_hourly fa<br>Dec 29 12:00:03 XXXXX sanoid[10396]: INFO: cache expired - updating from zfs list.<br>Dec 29 12:00:03 XXXXX systemd[1]: sanoid.service: Succeeded.<br>Dec 29 12:00:03 XXXXX systemd[1]: Started Snapshot ZFS Pool.<br><br><br>--------------------------------------------------------------<br> Roger Federer Fanatic Extraordinaire :-)</div></div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Dec 29, 2019 at 11:54 AM Benjamin Tompkins <<a href="mailto:bjtompkins@gmail.com">bjtompkins@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">So this is actually expected. Once you get past 80% full on a ZFS pool, write performance starts to suffer. I don't have an article handy regarding this at the moment as I am on my phone.<div dir="auto"><br></div><div dir="auto">Once some space gets freed you should get performance back.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun., Dec. 29, 2019, 10:28 a.m. Federer Fanatic, <<a href="mailto:nafdef@gmail.com" target="_blank">nafdef@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Seems to be the same problem, no space on device: <div><br></div><div>sudo zdb -M homedirs<br> vdev 0 metaslabs 116 fragmentation 27%<br> 12: 185132 ****************************************<br> 13: 113286 *************************<br> 14: 72563 ****************<br> 15: 67800 ***************<br> 16: 59987 *************<br> 17: 30729 *******<br> 18: 13720 ***<br> 19: 7163 **<br> 20: 4157 *<br> 21: 1977 *<br> 22: 964 *<br> 23: 502 *<br> 24: 254 *<br> 25: 88 *<br> 26: 5 *<br> 27: 3 *<br> 28: 1 *<br> pool homedirs fragmentation 27%<br> 12: 185132 ****************************************<br> 13: 113286 *************************<br> 14: 72563 ****************<br> 15: 67800 ***************<br> 16: 59987 *************<br> 17: 30729 *******<br> 18: 13720 ***<br> 19: 7163 **<br> 20: 4157 *<br> 21: 1977 *<br> 22: 964 *<br> 23: 502 *<br> 24: 254 *<br> 25: 88 *<br> 26: 5 *<br> 27: 3 *<br> 28: 1 *<br><div><div dir="ltr"><br><br><br>--------------------------------------------------------------<br> Roger Federer Fanatic Extraordinaire :-)</div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Dec 29, 2019 at 9:09 AM Federer Fanatic <<a href="mailto:nafdef@gmail.com" rel="noreferrer" target="_blank">nafdef@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Ok. I think I get what's going on. Available space is being released, painfully slowly , <div><br></div><div>zfs get all is show very active processing albeit very slow...<br clear="all"><div><div dir="ltr"><br><br><br>--------------------------------------------------------------<br> Roger Federer Fanatic Extraordinaire :-)</div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Dec 29, 2019 at 8:35 AM Federer Fanatic <<a href="mailto:nafdef@gmail.com" rel="noreferrer" target="_blank">nafdef@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Looks like some fragmentation is occurring...presumably because disk was getting full, I read<div>that having dedup setting on can cause some issues, but my</div><div>zfs get all |grep dedup<br>homedirs dedup off default<br></div><div><br clear="all"><div><div dir="ltr"><br><br><br>--------------------------------------------------------------<br> Roger Federer Fanatic Extraordinaire :-)</div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Dec 29, 2019 at 8:23 AM Federer Fanatic <<a href="mailto:nafdef@gmail.com" rel="noreferrer" target="_blank">nafdef@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Also zfs get all has a constant stream of output...<br clear="all"><div><div dir="ltr"><br><br><br>--------------------------------------------------------------<br> Roger Federer Fanatic Extraordinaire :-)</div></div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Dec 29, 2019 at 8:19 AM Federer Fanatic <<a href="mailto:nafdef@gmail.com" rel="noreferrer" target="_blank">nafdef@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Specifically, zpool status<div><br></div><div><br> pool: homedirs<br> state: ONLINE<br> scan: scrub repaired 0B in 0 days 03:57:48 with 0 errors on Sun Dec 8 04:21:49 2019<br>config:<br><br><br></div><div><br></div><div>zpool status yields:</div><div><br></div><div> NAME STATE READ WRITE CKSUM<br> homedirs ONLINE 0 0 0<br> mirror-0 ONLINE 0 0 0<br> sda ONLINE 0 0 0<br> sdb ONLINE 0 0 0<br><div><br></div><div><br></div><div><br></div><div><br></div><div><br></div><div><br><div># zpool list homedirs</div><div> <br>NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT<br>homedirs 1.81T 1.76T 57.7G - - 27% 96% 1.00x ONLINE -<br></div><div><br></div><div>which show 1.76 TB. It looks like space is being locked down somehow and is unavailable and yet</div><div><br></div><div>and yet</div><div><br></div><div># df -h </div><div>homedirs 890G 890G 0 100% /homedirs<br></div><div><br></div><div>It's as if the volume is shrinking as a I remove files...</div><div><br></div><div>NOTE. It may have something to do with snapshots sanoid settings:</div><div>as I am also seeing the mount</div><div><br></div><div>Filesystem Size Used Avail Use% Mounted on<br></div><div>homedirs@autosnap_2019-12-26_05:00:22_hourly 1.7T 1.7T 0 100% /homedirs/.zfs/snapshot/autosnap_2019-12-26_05:00:22_hourly<br></div><div> </div><div>---this goes away if I run systemctl stop sanoid.timer however,</div><div>df -h still yes same output as above</div><div><br></div><div><br></div><div>NOTE. the 1.7TB</div><div><br></div><div>Also removing files from ZFS takes an eternity. </div><div><br></div><div>I am obviously not understanding something.</div><div><br></div><div><div><div dir="ltr"><br><br><br>--------------------------------------------------------------<br> Roger Federer Fanatic Extraordinaire :-)</div></div></div></div></div></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>
_______________________________________________<br>
kwlug-disc mailing list<br>
<a href="mailto:kwlug-disc@kwlug.org" rel="noreferrer" target="_blank">kwlug-disc@kwlug.org</a><br>
<a href="http://kwlug.org/mailman/listinfo/kwlug-disc_kwlug.org" rel="noreferrer noreferrer" target="_blank">http://kwlug.org/mailman/listinfo/kwlug-disc_kwlug.org</a><br>
</blockquote></div>
_______________________________________________<br>
kwlug-disc mailing list<br>
<a href="mailto:kwlug-disc@kwlug.org" target="_blank">kwlug-disc@kwlug.org</a><br>
<a href="http://kwlug.org/mailman/listinfo/kwlug-disc_kwlug.org" rel="noreferrer" target="_blank">http://kwlug.org/mailman/listinfo/kwlug-disc_kwlug.org</a><br>
</blockquote></div>