<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p><br>
</p>
<blockquote type="cite"
cite="mid:04273630-58a1-e188-3e4f-f02a75b4ab7c@3nsoft.com">On
2019-08-07 5:49 p.m., Paul Nijjar via kwlug-disc wrote:
<br>
<blockquote type="cite">ZFS with FreeNAS or Ubuntu (using ZFS on
Linux), raidz
<br>
------------------------------------------------------
</blockquote>
</blockquote>
<p>I want to provide a further illustration. Here,
<a class="moz-txt-link-freetext" href="https://stackoverflow.com/questions/44375657/how-to-remove-a-device-disk-from-zpool-where-there-is-no-redundancy">https://stackoverflow.com/questions/44375657/how-to-remove-a-device-disk-from-zpool-where-there-is-no-redundancy</a>
, there is a listing of someone's pool:</p>
<pre><code>NAME                                            STATE     READ WRITE CKSUM
pdx-zfs-02                                      DEGRADED     0     0     0
  raidz2-0                                      DEGRADED     0     0     0
    gptid/c459110a-a73c-1a49-b12c-f03fbec6eca6  FAULTED    158 25.3K     0  too many errors
    gptid/8c87e988-7832-1e44-9c45-abe95ee2d8f7  ONLINE       0     0     0
    gptid/3b4be4d0-136e-41e3-c546-d5c4ba2b3142  ONLINE       0     0     0
    gptid/209e8c9c-ff66-6f6a-e38b-9045c0b6c3ec  ONLINE       0     0     0
    gptid/ea8b834a-0692-464b-fd29-a877bf8f7bb9  ONLINE       0     0     0
    gptid/cf35d740-ea0b-bae6-9e4f-b7a31d66ab1d  ONLINE       0     0     0
    gptid/fe908e73-c93b-72ed-d4bb-9eae78bcc5b6  ONLINE       0     0     0
    gptid/bdf03e4d-ba71-a4cc-dd90-edfd6446bac3  ONLINE       0     0     0
    gptid/302bacc1-273a-54c9-c8f9-f458640b0d60  ONLINE       0     0     0
    gptid/d94ea326-d5aa-f062-9662-953908ce0b53  ONLINE       0     0     0
  raidz2-1                                      ONLINE       0     0     0
    gptid/3c1b1d3b-3977-11e6-b1f0-0025902b035a  ONLINE       0     0     0
    gptid/3ec0ba4a-3977-11e6-b1f0-0025902b035a  ONLINE       0     0     0
    gptid/40d8b781-3977-11e6-b1f0-0025902b035a  ONLINE       0     0     0
    gptid/43387eae-3977-11e6-b1f0-0025902b035a  ONLINE       0     0     0
    gptid/45800439-3977-11e6-b1f0-0025902b035a  ONLINE       0     0     0
    gptid/47df2694-3977-11e6-b1f0-0025902b035a  ONLINE       0     0     0
  da2                                           ONLINE       0     0     0
</code></pre>
<p>Note that pool <code>pdx-zfs-02</code> consists of two
<code>raidz2</code> vdevs with different numbers of disks in
them, plus a single disk <code>da2</code>. A pool can be very
heterogeneous.</p>
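<p>As an aside, a pool like that can be grown piecemeal with plain
<code>zpool</code> commands. A minimal sketch, assuming a
hypothetical pool <code>tank</code> and made-up disk names (not
the ones from the listing above):</p>
<pre><code># create a pool from a 10-disk raidz2 vdev
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9

# later, grow it with a 6-disk raidz2 vdev
zpool add tank raidz2 da10 da11 da12 da13 da14 da15

# a lone disk mismatches the pool's redundancy level, so zpool
# refuses unless forced; that is likely how a stray disk like the
# da2 above ends up in such a pool
zpool add -f tank da16
</code></pre>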
<p>Also note that when one vdev is not in an ideal state, i.e.
degraded, the whole pool is marked degraded. But repairing that
vdev only involves the disks within it.</p>
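<p>For example, swapping out the faulted disk triggers a resilver
only within <code>raidz2-0</code>; the other vdevs keep serving
normally. A minimal sketch, assuming a hypothetical replacement
disk <code>da17</code> (not a disk from the listing):</p>
<pre><code># replace the faulted member with a fresh disk; the resilver reads
# data and parity only from the other members of raidz2-0
zpool replace pdx-zfs-02 gptid/c459110a-a73c-1a49-b12c-f03fbec6eca6 da17

# watch the resilver progress
zpool status pdx-zfs-02
</code></pre>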
<p>The question above is about how to remove that <code>da2</code>.
In an ideal world, data from it should be drained to the rest of
the pool before it is removed. Is that implemented already? I
don't know. There are spots even on the Sun :) .</p>
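<p>For what it's worth, OpenZFS 0.8 (2019) added <code>zpool
remove</code> for top-level vdevs, which evacuates the device's
data onto the rest of the pool before detaching it. A sketch
only, and I may be wrong about whether it applies here: as I
understand it, removal refuses to run on pools that contain raidz
top-level vdevs, which this pool does.</p>
<pre><code># ask ZFS to drain da2 onto the remaining vdevs, then detach it;
# the copy runs in the background
zpool remove pdx-zfs-02 da2

# check evacuation / removal progress
zpool status pdx-zfs-02
</code></pre>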
<p>I have very high usability requirements at the CLI, and ZFS's
simplicity of Admin Experience (AE) sold me immediately. Of
course, this sort of assortment of disks is a server-world
scenario. I wonder what the UX benefits of ZFS on the desktop
are, where I don't know how to spell b-a-c-k-u-p :) .</p>
</body>
</html>