[kwlug-disc] ZFS pool says it's 100% full according to df -h, and yet I know there is space, as I have removed a lot of files...

Federer Fanatic nafdef at gmail.com
Sun Dec 29 16:44:20 EST 2019


OK. I disabled snapshots via systemctl, namely systemctl disable
sanoid.timer, and then proceeded to destroy
the snapshots via the command

  zfs destroy -f snapshot-instance

where the various instances are listed via zfs list -t snapshot.
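In full, the sequence looked roughly like this (a sketch; the snapshot name
is just one example of the kind sanoid generates):

  # stop sanoid from taking new snapshots (stops the timer immediately too)
  sudo systemctl disable --now sanoid.timer

  # list every snapshot together with the space it pins
  zfs list -t snapshot -o name,used

  # destroy one snapshot (dataset@snapname), for example:
  sudo zfs destroy homedirs@autosnap_2019-12-26_05:00:22_hourly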



This finally released the space needed. So I guess one has to be careful
to allow room for the snapshots, which I believe are not pools themselves but
read-only copies that pin the delta from the original data. Voilà, I have
space in the real pool. Some configuration issue with the sanoid.conf file,
perhaps.
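
For reference, retention lives in /etc/sanoid/sanoid.conf and looks something
like this (a sketch of sanoid's INI format; the numbers are illustrative, not
my actual settings):

  [homedirs]
          use_template = production

  [template_production]
          hourly = 24
          daily = 7
          monthly = 1
          yearly = 0
          autosnap = yes
          autoprune = yes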

P.S. The API to ZFS is weird when compared to the usual notion of
filesystems configured via fstab.
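
For instance (a sketch, using this pool as the example): where fstab has a
line per mount, ZFS keeps the mountpoint as a property of the dataset itself:

  # roughly what one would expect in /etc/fstab:
  #   /dev/sdX1  /homedirs  ext4  defaults  0  2

  # ZFS instead stores and manages it in the dataset:
  zfs get mountpoint homedirs
  sudo zfs set mountpoint=/homedirs homedirs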

FF



--------------------------------------------------------------
 Roger Federer Fanatic Extraordinaire :-)


On Sun, Dec 29, 2019 at 12:07 PM Federer Fanatic <nafdef at gmail.com> wrote:

>
> sudo systemctl status sanoid.service
> ● sanoid.service - Snapshot ZFS Pool
>    Loaded: loaded (/lib/systemd/system/sanoid.service; static; vendor
> preset: enabled)
>    Active: inactive (dead) since Sun 2019-12-29 12:00:03 EST; 5min ago
>   Process: 10396 ExecStart=/usr/sbin/sanoid --take-snapshots --verbose
> (code=exited, status=0/SUCCESS)
>  Main PID: 10396 (code=exited, status=0/SUCCESS)
>
> Dec 29 12:00:01 XXXXX sanoid[10396]: INFO: taking snapshots...
> Dec 29 12:00:01 XXXXX sanoid[10396]: taking snapshot
> homedirs@autosnap_2019-12-29_12:00:01_daily
> Dec 29 12:00:02 XXXXX sanoid[10396]: cannot create snapshots : out of space
> Dec 29 12:00:02 XXXXX sanoid[10396]: CRITICAL ERROR: /sbin/zfs snapshot
> homedirs@autosnap_2019-12-29_12:00:01_daily fai
> Dec 29 12:00:02 XXXXX sanoid[10396]: taking snapshot
> homedirs@autosnap_2019-12-29_12:00:01_hourly
> Dec 29 12:00:03 XXXXX sanoid[10396]: cannot create snapshots : out of space
> Dec 29 12:00:03 XXXXX sanoid[10396]: CRITICAL ERROR: /sbin/zfs snapshot
> homedirs@autosnap_2019-12-29_12:00:01_hourly fa
> Dec 29 12:00:03 XXXXX sanoid[10396]: INFO: cache expired - updating from
> zfs list.
> Dec 29 12:00:03 XXXXX systemd[1]: sanoid.service: Succeeded.
> Dec 29 12:00:03 XXXXX systemd[1]: Started Snapshot ZFS Pool.
>
>
> --------------------------------------------------------------
>  Roger Federer Fanatic Extraordinaire :-)
>
>
> On Sun, Dec 29, 2019 at 11:54 AM Benjamin Tompkins <bjtompkins at gmail.com>
> wrote:
>
>> So this is actually expected.   Once you get past 80% full on a ZFS pool,
>> write performance starts to suffer.  I don't have an article handy
>> regarding this at the moment as I am on my phone.
>>
>> Once some space gets freed you should get performance back.
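>>
>> A quick way to keep an eye on that threshold (assuming the pool is named
>> homedirs, per the thread):
>>
>>   # capacity and fragmentation at a glance
>>   zpool list -o name,capacity,fragmentation,health homedirs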
>>
>> On Sun., Dec. 29, 2019, 10:28 a.m. Federer Fanatic, <nafdef at gmail.com>
>> wrote:
>>
>>> Seems to be the same problem, no space on device:
>>>
>>> sudo zdb -M homedirs
>>> vdev          0 metaslabs  116 fragmentation 27%
>>> 12: 185132 ****************************************
>>> 13: 113286 *************************
>>> 14:  72563 ****************
>>> 15:  67800 ***************
>>> 16:  59987 *************
>>> 17:  30729 *******
>>> 18:  13720 ***
>>> 19:   7163 **
>>> 20:   4157 *
>>> 21:   1977 *
>>> 22:    964 *
>>> 23:    502 *
>>> 24:    254 *
>>> 25:     88 *
>>> 26:      5 *
>>> 27:      3 *
>>> 28:      1 *
>>> pool homedirs fragmentation 27%
>>> 12: 185132 ****************************************
>>> 13: 113286 *************************
>>> 14:  72563 ****************
>>> 15:  67800 ***************
>>> 16:  59987 *************
>>> 17:  30729 *******
>>> 18:  13720 ***
>>> 19:   7163 **
>>> 20:   4157 *
>>> 21:   1977 *
>>> 22:    964 *
>>> 23:    502 *
>>> 24:    254 *
>>> 25:     88 *
>>> 26:      5 *
>>> 27:      3 *
>>> 28:      1 *
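>>>
>>> If I read the zdb -M histogram right, the left column is the power-of-two
>>> size class of the free segments, and the count is how many there are:
>>>
>>>   # bucket 12 -> free segments of ~2^12 bytes = 4 KiB (185132 of them)
>>>   # bucket 28 -> free segments of ~2^28 bytes = 256 MiB (only 1 left)
>>>   # many small segments and few large ones = fragmented free space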
>>>
>>>
>>>
>>> --------------------------------------------------------------
>>>  Roger Federer Fanatic Extraordinaire :-)
>>>
>>>
>>> On Sun, Dec 29, 2019 at 9:09 AM Federer Fanatic <nafdef at gmail.com>
>>> wrote:
>>>
>>>> Ok. I think I get what's going on. Available space is being released,
>>>> painfully slowly.
>>>>
>>>> zfs get all is showing very active processing, albeit very slowly...
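>>>>
>>>> To watch it catch up (a sketch):
>>>>
>>>>   # bytes still queued for reclaim by the asynchronous destroy
>>>>   zpool get freeing homedirs
>>>>
>>>>   # re-check usage every 10 seconds
>>>>   watch -n 10 'zfs list -o name,used,avail homedirs'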
>>>>
>>>>
>>>>
>>>> --------------------------------------------------------------
>>>>  Roger Federer Fanatic Extraordinaire :-)
>>>>
>>>>
>>>> On Sun, Dec 29, 2019 at 8:35 AM Federer Fanatic <nafdef at gmail.com>
>>>> wrote:
>>>>
>>>>> Looks like some fragmentation is occurring... presumably because the disk
>>>>> was getting full. I read
>>>>> that having the dedup setting on can cause some issues, but mine is off:
>>>>>
>>>>> zfs get all | grep dedup
>>>>> homedirs  dedup  off  default
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --------------------------------------------------------------
>>>>>  Roger Federer Fanatic Extraordinaire :-)
>>>>>
>>>>>
>>>>> On Sun, Dec 29, 2019 at 8:23 AM Federer Fanatic <nafdef at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Also zfs get all has a constant stream of output...
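>>>>>>
>>>>>> To avoid the firehose, one can query just the relevant properties,
>>>>>> something like:
>>>>>>
>>>>>>   zfs get used,available,referenced,usedbysnapshots homedirs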
>>>>>>
>>>>>>
>>>>>>
>>>>>> --------------------------------------------------------------
>>>>>>  Roger Federer Fanatic Extraordinaire :-)
>>>>>>
>>>>>>
>>>>>> On Sun, Dec 29, 2019 at 8:19 AM Federer Fanatic <nafdef at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Specifically, zpool status yields:
>>>>>>>
>>>>>>>   pool: homedirs
>>>>>>>  state: ONLINE
>>>>>>>   scan: scrub repaired 0B in 0 days 03:57:48 with 0 errors on Sun
>>>>>>> Dec  8 04:21:49 2019
>>>>>>> config:
>>>>>>>
>>>>>>> NAME        STATE     READ WRITE CKSUM
>>>>>>> homedirs    ONLINE       0     0     0
>>>>>>>  mirror-0  ONLINE       0     0     0
>>>>>>>    sda     ONLINE       0     0     0
>>>>>>>    sdb     ONLINE       0     0     0
>>>>>>>
>>>>>>> # zpool list homedirs
>>>>>>>
>>>>>>> NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG   CAP  DEDUP  HEALTH  ALTROOT
>>>>>>> homedirs  1.81T  1.76T  57.7G        -         -   27%   96%  1.00x  ONLINE        -
>>>>>>>
>>>>>>> which shows 1.76 TB allocated. It looks like space is being locked down
>>>>>>> somehow and is unavailable, and yet:
>>>>>>>
>>>>>>> # df -h
>>>>>>> homedirs                                      890G  890G     0 100%
>>>>>>> /homedirs
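>>>>>>>
>>>>>>> If the missing space is locked up somewhere, something like this should
>>>>>>> break down where the bytes went (a sketch):
>>>>>>>
>>>>>>>   # USED split into dataset, snapshots, children, refreservation
>>>>>>>   zfs list -o space homedirs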
>>>>>>>
>>>>>>> It's as if the volume is shrinking as I remove files...
>>>>>>>
>>>>>>> NOTE: It may have something to do with sanoid's snapshot settings,
>>>>>>> as I am also seeing this mount:
>>>>>>>
>>>>>>> Filesystem                                    Size  Used Avail Use% Mounted on
>>>>>>> homedirs@autosnap_2019-12-26_05:00:22_hourly  1.7T  1.7T     0 100% /homedirs/.zfs/snapshot/autosnap_2019-12-26_05:00:22_hourly
>>>>>>>
>>>>>>> This mount goes away if I run systemctl stop sanoid.timer; however,
>>>>>>> df -h still yields the same output as above.
>>>>>>>
>>>>>>> NOTE the 1.7 TB size reported for the snapshot mount.
>>>>>>>
>>>>>>> Also removing files from ZFS takes an eternity.
>>>>>>>
>>>>>>> I am obviously not understanding something.
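>>>>>>>
>>>>>>> If the snapshots are pinning the blocks of the deleted files, that
>>>>>>> would explain both symptoms; I suppose something like this would
>>>>>>> confirm it (a sketch):
>>>>>>>
>>>>>>>   # total space held only by snapshots of this dataset
>>>>>>>   zfs get usedbysnapshots homedirs
>>>>>>>
>>>>>>>   # individual snapshots, biggest last
>>>>>>>   zfs list -t snapshot -o name,used -s used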
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --------------------------------------------------------------
>>>>>>>  Roger Federer Fanatic Extraordinaire :-)
>>>>>>>
>

