<FONT face="Default Sans Serif,Verdana,Arial,Helvetica,sans-serif" size=2>-----kwlug-disc-bounces@kwlug.org wrote: -----<br><div><font color="#990099"><br></font>>From: "Andrew Kohlsmith (Mailing List Account)" <aklists@mixdown.ca><br>>I use NFS as well, but I'm always curious as to tuning for NFS<br>>performance. I <br>>know to play around with rsize/wsize to find the "sweet spot" for<br>>transfer <br>>speeds, but do you have any words of wisdom from your experience as<br>>to making <br>>it really shine?<br><br>We haven't had to address any performance problems. Working in home dirs feels like local disk until we do something with a large file, and even then the difference isn't that noticeable. I only really see a difference over wireless or some other slow connection.<br><br>>tcp vs udp -- is there a real benefit to one over the other? udp<br>>sounds like <br>>it'd be faster just because you don't have the connection<br>>setup/teardown and I <br>>would assume that the data integrity's being done at the application<br>>level <br>>irrespective of whether the protocol is reliable or not.<br><br>I would guess that UDP is faster. I've heard anecdotal evidence that TCP was faster, but I think that was with NetApp NAS devices, GigE and jumbo frames.<br><br>>rsize/wsize -- my own experience seems to show that 8k blocks are<br>>about right, <br>>even though this would involve fragmentation on everything but<br>>jumbo-framed <br>>gig-e networks.<br><br>When I worked with HP-UX servers that was the best value as well. I think this is tied more to the block size of the remote file system than to the network MTU.<br><br>>nolock -- I use this on read-only shares (mythtv media) -- may be a<br>>little faster at the cost of potential corruption on reads as the writers<br>>change what's there, but in my experience never a problem for media. 
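For reference, here's how the options discussed above might look combined in a single fstab entry. The server name and export path are made up; tune rsize/wsize for your own setup:

```
# /etc/fstab -- hypothetical server and export path
fileserver:/export/home  /home  nfs  rw,proto=tcp,rsize=8192,wsize=8192,hard,intr  0 0
# UDP variant for comparison testing:
# fileserver:/export/home  /home  nfs  rw,proto=udp,rsize=8192,wsize=8192,hard,intr  0 0
```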
:-)<br>><br>>soft -- I almost always use this since NFS is anal about not letting<br>>me get rid of a mount if I've taken the laptop to work and forgotten to<br>>umount first (laptop's an NFS server). Does soft mounting have any performance<br>>penalties?<br><br>'intr' is another standard option we used to use, although it doesn't seem to make much sense in combination with 'soft'. We don't use 'soft'. Since our home dirs are on the NFS server we can't do anything when it's down anyway. Fortunately it's an IBM blade with an IBM DS4500 SAN storage back-end, so it's pretty reliable.<br><br>>v3 vs v4 -- I've no idea here...<br><br>Me neither; I use the default, v4 I guess.<br><br>>What do you use to implement the directory integration? I've seen a<br>>few over the years but with my (tiny) home network I've never bothered. I<br>>would like to take a closer look at it though and would prefer looking at something<br>>that someone knowledgeable has already green-lighted for various reasons.<br><br>We use OpenLDAP for the directory and pam_ldap for the client. We use phpLDAPadmin for administration since it can be accessed by Windows clients (we like to use what our customers use). We don't have an AD server, so there's no integration there. Another option is the Fedora/Red Hat Directory Server, which is the old Netscape directory server. It supports multi-master replication and other neat stuff. We keep it simple with OpenLDAP since it meets our needs.<br><br>One component we still need to sort out is some form of directory caching for Linux notebooks, to allow notebook users to easily move on and off the network while syncing their data with the server.<br></div></FONT>
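For anyone wanting to try the client side, here's a minimal sketch of the pam_ldap/nss_ldap client configuration. The host and base DN are placeholders; substitute your own directory server:

```
# /etc/ldap.conf (pam_ldap / nss_ldap) -- placeholder host and base DN
host ldap.example.com
base dc=example,dc=com
ldap_version 3

# /etc/nsswitch.conf -- check local files first, then LDAP
passwd: files ldap
group:  files ldap
shadow: files ldap
```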