The question was asked in the meeting last night. Paraphrased, it was "Does the iSCSI protocol provide for some level of data integrity, i.e. does it checksum?"

The answer, according to RFC 3720, is that by default it relies on the transport layer (TCP/IP in this case) for error detection. The RFC goes on to say that every initiator and target must support two options, "No digest" and "CRC32C" digest, for both header and data digests.

Documentation for the iscsitarget package also states that the default setting for HeaderDigest and DataDigest is "None".

Typically, iSCSI is run over Ethernet. Both Ethernet and TCP check each packet: Ethernet with a 32-bit CRC, TCP with a 16-bit ones'-complement checksum. Ethernet's CRC is strong enough that only about 1 in 4 billion corrupted packets will slip through undetected. (That's one bad 512 B sector per 2 TB of data.) TCP's checksum is considerably weaker. Since both would be in use, the combination should be quite a bit stronger than either alone.

Does anyone know how that compares to the Reed-Solomon error detection used in hard disks?
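
Since the relative strength of the two checks came up, here is a small illustrative Python sketch (the test data and function names are mine, not from any spec). It shows one kind of error the TCP checksum is blind to: because the ones'-complement sum is order-independent, swapping two aligned 16-bit words leaves it unchanged, while a 32-bit CRC (the Ethernet FCS polynomial here; iSCSI's optional digest uses the different CRC32C polynomial) still catches it.

import binascii
import struct

def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum, the check TCP uses (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                                 # pad to an even length
    total = 0
    for (word,) in struct.iter_unpack("!H", data):      # big-endian 16-bit words
        total += word
        total = (total & 0xFFFF) + (total >> 16)        # fold the carry back in
    return ~total & 0xFFFF

payload = bytes(range(256)) * 2                          # a 512-byte "sector"

# Swap the first two adjacent 16-bit words -- a small reordering error.
corrupted = payload[2:4] + payload[0:2] + payload[4:]

print("TCP checksum changed:", internet_checksum(payload) != internet_checksum(corrupted))  # False
print("CRC-32 changed:      ", binascii.crc32(payload) != binascii.crc32(corrupted))        # True

So the checksum misses the swap entirely, while the CRC flags it; that is the sense in which the Ethernet/TCP combination is only as strong as its stronger layer for this class of error.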