<div dir="ltr">This made me quickly go over my two TrueNAS systems configuration and check on the backup.  LOL.<div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Jan 29, 2022 at 10:33 PM L.D. Paniak <<a href="mailto:ldpaniak@fourpisolutions.com">ldpaniak@fourpisolutions.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
  
    
  
  <div>
    <p>The only fault of ZFS is it enables one to lose data at the PB
      scale with little experience.<br>
    </p>
>
> Even minimal monitoring of the array would have helped them avoid
> this episode.
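>
> Something as simple as a cron job wrapped around zpool status would
> have caught it. A minimal sketch (the alert address is a placeholder):
>
>     #!/bin/sh
>     # Mail an alert if any pool is degraded or faulted.
>     # zpool status -x prints "all pools are healthy" when nothing is wrong.
>     # admin@example.com is a placeholder address.
>     status=$(zpool status -x)
>     if [ "$status" != "all pools are healthy" ]; then
>         echo "$status" | mail -s "ZFS alert on $(hostname)" admin@example.com
>     fi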
>
> Once you are using mechanical drives of more than a handful of TB,
> RAIDZ3 vdevs are something to consider: rebuild times are long, and
> bit error rates have not gone down over time.
>
> The ServeTheHome RAID reliability calculator is a useful tool (one
> that might make you rethink your array):
> https://www.servethehome.com/raid-calculator/raid-reliability-calculator-simple-mttdl-model/
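>
> Moving to triple parity is a one-line decision at pool creation time;
> a sketch with placeholder pool and disk names:
>
>     # 8-wide RAIDZ3 vdev: any three disks can fail without data loss.
>     zpool create tank raidz3 sda sdb sdc sdd sde sdf sdg sdh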
>
> On 2022-01-29 21:16, Jason Eckert wrote:
      <div dir="ltr">I find it mind boggling that they have that much
        hardware, but were too lazy to spend time properly
        configuring/monitoring ZFS until it was too late.</div>
> >
> > On Sat, Jan 29, 2022 at 9:12 PM Chris Frey <cdfrey@foursquare.net> wrote:
        <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On
          Sat, Jan 29, 2022 at 08:36:43PM -0500, Mikalai Birukou via
          kwlug-disc wrote:<br>
          > <a href="https://www.youtube.com/watch?v=Npu7jkJk5nM" rel="noreferrer" target="_blank">https://www.youtube.com/watch?v=Npu7jkJk5nM</a><br>
          <br>
          I don't think that has much to do with ZFS :-)<br>
> > >
> > > I was slightly puzzled by their first move of removing the drives
> > > that were still working but had errors. With that many failed
> > > disks in the array, wouldn't it have been a better strategy to get
> > > everything into a good state before replacing entire drives?
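> > >
> > > In ZFS terms I'd have expected something like clearing the error
> > > counters and scrubbing first ("tank" is a placeholder pool name):
> > >
> > >     zpool clear tank   # reset error counters, bring devices back online
> > >     zpool scrub tank   # verify and repair everything still readable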
> > >
> > > - Chris

_______________________________________________
kwlug-disc mailing list
kwlug-disc@kwlug.org
https://kwlug.org/mailman/listinfo/kwlug-disc_kwlug.org