[kwlug-disc] Video about ZFS mishap(s?)

Mikalai Birukou mb at 3nsoft.com
Sun Jan 30 10:00:24 EST 2022


>> https://www.youtube.com/watch?v=Npu7jkJk5nM
> I don't think that has much to do with ZFS :-)

I didn't mean to suggest that ZFS itself has a problem, rather that one 
can make mishaps around it when admin attention lapses.

A while ago I added a script/check to an all-checks routine, manually 
triggered and run by Salt on all machines, that condenses zfs status 
into a thumbs up/down in the console. Over a year it caught one failing 
disk. Still, a story from the outside adds more weight to the need for 
at least some click-to-run checks.
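For illustration, a thumb up/down check of that sort could be sketched 
roughly like this in Python. This is my guess at the shape of such a 
script, not the actual one; it assumes `zpool status -x`, which prints 
the single line "all pools are healthy" when every pool is fine and 
details otherwise:

```python
import subprocess

def zfs_thumb(status_text: str) -> bool:
    """Thumb up (True) iff `zpool status -x` reported all pools healthy."""
    return status_text.strip() == "all pools are healthy"

def check() -> int:
    # Run the terse health summary; non-healthy pools produce full detail.
    out = subprocess.run(["zpool", "status", "-x"],
                         capture_output=True, text=True).stdout
    if zfs_thumb(out):
        print("ZFS: thumb up")
        return 0
    print("ZFS: thumb down")
    print(out)  # show the problem pools for the admin reading the console
    return 1
```

Salt could then invoke this on every machine and collect the exit codes, 
so a single glance at the console shows which hosts need attention.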

Also interesting is their finding of which systems had which scrub 
settings, and the historical root of those differences. History-related, 
though not ZFS: just last week we had a bizarre situation where we had 
to make a GitLab project internal before the runner would pull code to 
start any processing, while another project works fine with private 
visibility. My current hypothesis is that the working project was 
created under an older version of GitLab.

> I was slightly puzzled at their first move to remove the drives that
> were still working but had errors.  With that many failed disks in the
> array, wouldn't it have been a better strategy to try to get everything
> into a good state before replacing entire drives?





More information about the kwlug-disc mailing list