[kwlug-disc] The sweet(?) smell of power supplies
unsolicited at swiz.ca
Mon Apr 7 23:32:36 EDT 2014
> So, I'm not sure what they mean about "more robust on reconnect"
Probably just my reading at the time ... along the lines of: if you wait
long enough, and the moon's in the right position, it will come back.
Perhaps vis-à-vis CIFS timing out, or something. Just goes to show real
practical experience is so much more useful than vague memories of
readings in years past. (And thank you! Good to see you, even!)
Question, though - given all that you say for NFS, and CIFS not really
(seemingly) welcome Linux to Linux, what else if not NFS / why has
something less finicky not taken firm hold in the world?
Is it merely that it's such a Windows / SMB world, not a Linux / CIFS
world, that no effort is likely to be successful? If Windows doesn't do
it (as in, understand it), it doesn't get done?
(Not a big enough user base for such to provide a sufficiently large
test base for people to bet their businesses on? e.g. It's just wild how
long it takes filesystems to percolate in. Witness btrfs and
On 14-04-07 11:01 PM, Cedric Puddy wrote:
> On 2014-04-07, at 8:15 PM, unsolicited <unsolicited at swiz.ca> wrote:
>> It wouldn't, except ...
>> Seems to me some long time ago I was reading about pros/cons of
>> NFS. Including more robust at reconnection, but could be slow about
>> doing so.
> You can either mount NFS volumes sync or async -- in sync mode, if
> you get disconnected, then all attempts to access file handles get
> blocked on disk I/O, and hang until the NFS server becomes available
> again, or you reboot your computer. In async mode, if you get
> disconnected, then the file handles get dropped, and the corruption
> falls where it may (and good luck to you finding it later -- only
> mount async if you know your application(s) can handle it).
> If you reboot an NFS server with sync mounts on it, then all the file
> handles on the server side are gone, so be prepared to reboot every
> client as well.
> So, I'm not sure what they mean about "more robust on reconnect" but
> the above is my practical experience from running NFS in
> semi-conductor design shops, where NFS remains the standard method of
> sharing files out to build/simulation farms. Just never, ever,
> reboot the NFS server casually, keep the LAN more-or-less online, and
> your users will give you a decent grade.
> SMB is sort of stateless, in that it can have locks, and can have
> files open, and your access to those files can survive the reboot of
> a server without causing corruption (e.g.: you are working on a
> document, someone reboots the server, it comes back up, and then
> you go to save your document -- SMB? No problem. NFS sync-mounted?
> You're probably dead.)
> SMB, er... CIFS, is a much more baroque, complex protocol -- you want
> a simple listing of a directory with a handful of objects in it?
> That could require upwards of 60-100 Kb of data exchange. NFS?
> Probably less than 1 Kb.
> NFS has a lot of popularity in VM environments -- VMWare, XenServer,
> KVM, etc -- because it's ultra-low overhead, easy to implement, etc.
> iSCSI is similarly popular, again because it's low overhead, and
> mostly gets out of the way. The key to happiness in NFS (and
> networked filesystems generally) is just to make sure your file
> server is very, very, very stable (say, by getting a really nice
> filer that has redundant everything, and has been engineered to never
> Just a couple thoughts,
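[For the VM-environment use Cedric describes, the server side is usually just an /etc/exports entry. A minimal sketch -- the export path and subnet are invented for illustration:]

```shell
# /etc/exports sketch -- /srv/vmstore and 192.168.10.0/24 are made up.
# sync: the server commits writes to disk before replying (safer, slower).
# no_subtree_check: skips a per-request path check; usual for whole-FS exports.
# no_root_squash: lets hypervisor hosts access images as root (trusted LAN only).
/srv/vmstore  192.168.10.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing, `exportfs -ra` re-reads the file on most Linux NFS servers.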
>> Could also have been a comparison / commentary against Samba, at
>> the time, too. Heartbeat? Disk hit in the worst possible place?
>> But a disk is a disk, and a filesystem is a filesystem. If there
>> are problems, there are probably other issues in play, such as
>> timeouts. Client / server versions / happiness / glitch-free
>> connectivity. (Network saturation? It appears as an NFS issue, but
>> delving underneath may reveal non-NFS issues, like timeouts?)
>> I wonder if some connectivity / file systems beat upon hard drives
>> harder than others?
>> And ... hard drives have become problematic. In IDE days, most
>> seemed to just work forever. That hasn't seemed to be the case for
>> some years, though. I've wondered whether, since buying an SSD
>> builds in a requirement to replace it (in 4-ish years?), some of
>> today's hard drives, with their lack of quality, are all that
>> different.
>> On 14-04-07 04:17 PM, Andrew Kohlsmith (mailing lists account)
>> wrote:
>>> On Apr 7, 2014, at 3:23 PM, chaslinux at gmail.com wrote:
>>>> The one thing I've wondered is if using NFS might have
>>>> contributed to the hard drive's smart errors (after about 1
>>>> year of use)... I doubt it, but since I haven't done any NFS
>>>> since... The drive was a 1TB Seagate. Current 2TB has been
>>>> great for several years (bought it when hd prices were crazy
>>>> low - $69).
>>> Why would disk access through NFS stress a drive any more than
>>> local disk access, or access through something like Samba or
>>> another networking standard?
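[On the SMART-errors question above: the drive's own counters are the place to look, regardless of whether NFS or local access was the load. A sketch using smartctl -- the device name is an example, and the captured output below is a made-up two-line sample so the parsing step is self-contained:]

```shell
#!/bin/sh
# On a live system you would capture the attribute table with:
#   smartctl -A /dev/sda > smart.txt
# Here we parse a hypothetical two-line sample instead of real hardware.
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0'

# Pull the raw value (last field) of the reallocated-sector attribute;
# a non-zero, growing count here is the classic sign of a dying disk.
realloc=$(printf '%s\n' "$sample" | awk '$2 == "Reallocated_Sector_Ct" {print $NF}')
echo "reallocated sectors: $realloc"
```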
>> _______________________________________________
>> kwlug-disc mailing list
>> kwlug-disc at kwlug.org
> | CCj/ClearLine - Hosting and TCP/IP Network Services since 1997 |
> 118 Louisa Street, Kitchener, Ontario, N2H 5M3, 519-489-0478 x102
> \________________________________________________________
> Cedric Puddy, IS Director
> cedric at ccjclearline.com