[kwlug-disc] ZFS pool says it's 100% full according to df -h and yet I know there is space, as I have removed a lot of files...

Federer Fanatic nafdef at gmail.com
Sun Dec 29 10:27:40 EST 2019


Seems to be the same problem, no space on device:

sudo zdb -M homedirs
vdev          0 metaslabs  116 fragmentation 27%
12: 185132 ****************************************
13: 113286 *************************
14:  72563 ****************
15:  67800 ***************
16:  59987 *************
17:  30729 *******
18:  13720 ***
19:   7163 **
20:   4157 *
21:   1977 *
22:    964 *
23:    502 *
24:    254 *
25:     88 *
26:      5 *
27:      3 *
28:      1 *
pool homedirs fragmentation 27%
12: 185132 ****************************************
13: 113286 *************************
14:  72563 ****************
15:  67800 ***************
16:  59987 *************
17:  30729 *******
18:  13720 ***
19:   7163 **
20:   4157 *
21:   1977 *
22:    964 *
23:    502 *
24:    254 *
25:     88 *
26:      5 *
27:      3 *
28:      1 *
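Fragmentation aside, the usual culprit when deletes don't free space is snapshots: ZFS keeps every block that any snapshot still references, so df can stay at 100% long after files are removed. A sketch for totalling what snapshots pin (the `homedirs` dataset name comes from the output above; the byte figures below are hypothetical stand-ins, since the real numbers need a live pool):

```shell
#!/bin/sh
# Sum the USED column of script-friendly `zfs list` output to estimate how
# much space snapshots are pinning. On the real system this would be fed by:
#   zfs list -Hp -t snapshot -o used -r homedirs | sum_bytes
sum_bytes() {
    total=0
    while read -r n; do
        total=$((total + n))
    done
    echo "$total"
}

# Hypothetical sample: three snapshots of 100 GiB, 200 GiB, and 50 GiB.
printf '%s\n' 107374182400 214748364800 53687091200 | sum_bytes
```

Destroying the oldest snapshots (or letting sanoid's pruning catch up) is what actually returns that space to the pool.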



--------------------------------------------------------------
 Roger Federer Fanatic Extraordinaire :-)


On Sun, Dec 29, 2019 at 9:09 AM Federer Fanatic <nafdef at gmail.com> wrote:

> OK, I think I get what's going on. Available space is being released,
> painfully slowly.
>
> zfs get all shows very active processing, albeit very slowly...
>
>
>
>
>
> On Sun, Dec 29, 2019 at 8:35 AM Federer Fanatic <nafdef at gmail.com> wrote:
>
>> Looks like some fragmentation is occurring... presumably because the disk
>> was getting full. I read that having the dedup setting on can cause some
>> issues, but dedup is off here:
>>
>> zfs get all | grep dedup
>> homedirs  dedup  off  default
>>
>>
>>
>>
>>
>>
>> On Sun, Dec 29, 2019 at 8:23 AM Federer Fanatic <nafdef at gmail.com> wrote:
>>
>>> Also, zfs get all produces a constant stream of output...
>>>
>>>
>>>
>>>
>>>
>>> On Sun, Dec 29, 2019 at 8:19 AM Federer Fanatic <nafdef at gmail.com>
>>> wrote:
>>>
>>>> Specifically,  zpool status
>>>>
>>>>
>>>>   pool: homedirs
>>>>  state: ONLINE
>>>>   scan: scrub repaired 0B in 0 days 03:57:48 with 0 errors on Sun Dec  8 04:21:49 2019
>>>> config:
>>>>
>>>> NAME        STATE     READ WRITE CKSUM
>>>> homedirs    ONLINE       0     0     0
>>>>  mirror-0  ONLINE       0     0     0
>>>>    sda     ONLINE       0     0     0
>>>>    sdb     ONLINE       0     0     0
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> # zpool list homedirs
>>>>
>>>> NAME      SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
>>>> homedirs  1.81T  1.76T  57.7G  -        -         27%   96%  1.00x  ONLINE  -
>>>>
>>>> which shows 1.76 TB allocated. It looks like space is being locked
>>>> down somehow and is unavailable, and yet
>>>>
>>>> # df -h
>>>> homedirs  890G  890G  0  100%  /homedirs
>>>>
>>>> It's as if the volume is shrinking as I remove files...
>>>>
>>>> NOTE: It may have something to do with the sanoid snapshot settings,
>>>> as I am also seeing this mount:
>>>>
>>>> Filesystem                                    Size  Used  Avail  Use%  Mounted on
>>>> homedirs@autosnap_2019-12-26_05:00:22_hourly  1.7T  1.7T  0      100%  /homedirs/.zfs/snapshot/autosnap_2019-12-26_05:00:22_hourly
>>>>
>>>> This goes away if I run systemctl stop sanoid.timer; however,
>>>> df -h still gives the same output as above.
>>>>
>>>>
>>>> Note the 1.7 TB reported there.
>>>>
>>>> Also, removing files from ZFS takes an eternity.
>>>>
>>>> I am obviously not understanding something.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
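The two numbers in the thread are consistent with that explanation: zpool list reports raw pool-wide allocation (snapshot blocks included), while df reports only the dataset's live data against its available space, so df hits 100% even though files were deleted. A back-of-envelope check on the figures above (approximate, and assuming the whole gap is snapshot-held):

```shell
#!/bin/sh
# Rough reconciliation of the two views, using GiB figures approximated from
# the thread's output: 1.76 TiB allocated ~= 1802 GiB; df reports 890 GiB used.
pool_alloc_gib=1802   # zpool list ALLOC: pool-wide, includes snapshot blocks
df_used_gib=890       # df -h Used: live (referenced) data of the dataset only
pinned_gib=$((pool_alloc_gib - df_used_gib))
echo "approx. space held by snapshots/metadata: ${pinned_gib} GiB"
```

Roughly 900 GiB pinned, which matches the 1.7 TB reported under the autosnap mount versus the 890G df shows for the live filesystem.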

