nfs - interesting read

Story: The Future of NFS Arrives
Total Replies: 15
Author Content
herzeleid

Mar 27, 2008
11:13 AM EDT
I'm looking forward to the full nfs 4.1 functionality. I am wondering what sort of underlying filesystem will emerge in the meantime as the linux default, and whether it can avoid being the bottleneck.

Certainly ext3 is nowhere near ready for that, and sadly the future of reiser4 is somewhat murky at this point. Perhaps ext4 can get there, but I'm more intrigued by btrfs, or even ocfs2, which is a clustering filesystem but could nonetheless serve as the linux default fs, an idea floated by one kernel developer recently.

Interesting times ahead, for sure.
techiem2

Mar 27, 2008
12:19 PM EDT
Yeah, quite interesting. I love NFS. The only thing I don't love (besides the non-ideal speed, but I'm still running a 100Mbit network, so I probably don't notice it all that much anyway) is the way it handles... or more precisely DOESN'T handle... the case where the nfs server is down. Some sort of graceful fail state handling would be great rather than sitting there indefinitely trying to read the nfs mount when you issue the mount command... or df... or ls... or anything else that tries to look at it... when the server has gone down.
herzeleid

Mar 27, 2008
1:15 PM EDT
> Some sort of graceful fail state handling would be great rather than sitting there indefinitely trying to read the nfs mount when you issue the mount command.

Ah, I'd forgotten what life was like without autofs. Yes, raw nfs with hard mounts is not pleasant when connectivity to the server goes away, and that's one of the main things autofs was designed to address.
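
By raw nfs with hard mounts I mean the kind of static fstab entry below; the server and paths here are just examples, but the behavior is the point: a hard mount keeps retrying forever if the server vanishes, which is exactly the hang you're describing.

    fileserver:/export/data   /mnt/data   nfs   rw,hard,intr   0 0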

> when the server has gone down.

Server down? hmm, what server OS are you folks using? We use linux here ;)

root@ashpool:~> uptime
  1:15pm  up 1078 days  0:45,  1 user,  load average: 3.59, 3.33, 3.29
root@ashpool:~> uname -a
Linux ashpool 2.6.5-7.151-smp #1 SMP Fri Mar 18 11:31:21 UTC 2005 i686 i686 i386 GNU/Linux
root@ashpool:~>
gus3

Mar 27, 2008
8:15 PM EDT
herzeleid:

Running as root?!?
herzeleid

Mar 27, 2008
10:40 PM EDT
> Running as root?!?

We often ssh into servers in this manner to perform system administration tasks. The text I posted is indeed a copy and paste from a root shell session...

Does that answer your question?

Edit: gus3, you say that like it's a bad thing - feel free to share your insights!
techiem2

Mar 28, 2008
10:47 AM EDT
Quoting:Server down? hmm, what server OS are you folks using? We use linux here ;)


lol. It's not often, but occasionally one of the machines serving nfs will go down. My fileserver isn't on UPS at the moment (need to move it back down to its normal home), and my other box occasionally has an issue and chokes (it runs 2 qemu headless servers among other things, and occasionally I do something silly like fill up the hard disk - yeah, it needs a second one/bigger one...).

Is there a good site talking about autofs? Don't think I've messed with it...
herzeleid

Mar 28, 2008
11:29 AM EDT
> Is there a good site talking about autofs?

Well, I originally learned about the automounter daemon (amd) on the job while working at the university of california. I also learned some things from an o'reilly book on nfs and nis written by hal stern.

Linux autofs is basically an in-kernel version of amd. There's an autofs howto here, an oldie but goodie: http://www.linux-consulting.com/Amd_AutoFS/autofs.html
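
A minimal indirect-map setup looks roughly like this; the mount point, map file name and server are just examples, and the exact file locations vary a bit by distro:

    /etc/auto.master:
      /mnt/auto    /etc/auto.nfs    --timeout=60

    /etc/auto.nfs:
      hd001    -fstype=nfs,rw    fileserver:/export/hd001

Restart autofs and the share gets mounted the first time anything touches /mnt/auto/hd001, then unmounted again after the idle timeout.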
techiem2

Mar 28, 2008
12:59 PM EDT
Cool. I'll have to look into that. Thanks.
techiem2

Mar 28, 2008
3:14 PM EDT
Hey, that's pretty cool. I got it set up and running. For some reason the --ghost option doesn't work right (autofs apparently deletes the base mount dir on stopping and doesn't recreate the structure properly on start... or something, not really sure). But I got around it using the symlink tweak I saw elsewhere: set up your automount dir, then make symlinks to the mount dirs from where you actually want to access them. I.e. everything is automounted under /mnt/auto/, like /mnt/auto/hd001 (hard disk one on the fileserver), and then I made a symlink to it as /fs/hd001 (where I like to access it from). So I assume the fact that it mounts on-access and unmounts after X amount of non-access time saves a tad of bandwidth for samba/nfs/etc. mounts as it doesn't have to keep them alive?
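
For reference, the symlink tweak boils down to something like this with my paths:

    mkdir -p /fs
    ln -s /mnt/auto/hd001 /fs/hd001

Accessing /fs/hd001 just follows the link into the automount dir, which triggers the mount.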
herzeleid

Mar 28, 2008
3:45 PM EDT
> So I assume the fact that it mounts on-access and unmounts after X amount of non-access time saves a tad of bandwidth for samba/nfs/etc. mounts as it doesn't have to keep them alive?

Yes, at the university they found that lan traffic decreased dramatically after they set up autofs on the unix servers.

It's good solid stuff, which has been around for a while, but for some reason is still not widely known. Perhaps in 5-10 years, microsoft will announce some sort of dramatic new innovation, slapping a new name on automount, and trying to patent it.
techiem2

Mar 28, 2008
7:37 PM EDT
Yeah, I always saw automount in the kernel when I configured it, but never figured out how it worked.

Microsoft Live Filesystem? hehe
gus3

Mar 28, 2008
9:03 PM EDT
@herzeleid:

Without any context, I just had it in my head that you popped open an Xterm or some such, so you could type in those commands and post their results. Since you don't need root privs to run them, I couldn't imagine preceding them with "su"... which meant you were already root.

I ran as root for two years, and it's amazing I didn't smash my bytes to bits more often than I did. I found many, many ways to break Linux, not knowing why it broke, only learning "don't do that next time". I finally decided it was time to start letting the system protect me from myself, until I got a clue what I was doing.
herzeleid

Mar 28, 2008
11:49 PM EDT
> Without any context, I just had it in my head that you popped open an Xterm or some such, so you could type in those commands and post their results.

LOL, no way my workstation would be up that long; I'm too eager to try the latest and greatest. My desktops are usually lucky to get a month of uptime before I do a kernel update or try a new scheduler or something.

As far as being root on the server with the nice uptime, it's inside a protected lan, and root ssh login is allowed only with ssh keys. I don't ever have any reason to log in to that box except to do "root" things.
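
For the curious, that's just the standard openssh knob, something along these lines in sshd_config (the exact spelling of the option can vary with the openssh version):

    # allow root logins, but only with an ssh key, never a password
    PermitRootLogin without-password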

Point taken about the dangers of root login though. We are transitioning to a system where everybody has to use their user account and sudo, but old habits die hard ;)
gus3

Mar 29, 2008
12:38 AM EDT
Quoting:old habits die hard ;)
"Oh, new, shiny, gotta try it out!" *crash*

(And that's exactly why running as root is a Bad Idea™.)
herzeleid

Mar 29, 2008
8:20 AM EDT
> "Oh, new, shiny, gotta try it out!" *crash*

That only happens on my desktop. On the production servers I take a much more conservative approach, which is why you see a server approaching 1100 days uptime. We also have a few dozen linux servers approaching 600 days uptime.

> (And that's exactly why running as root is a Bad Idea™.)

For noobs, definitely, but in the hands of a skilled, battle-hardened unix admin, a root shell is a powerful and efficient tool.

So, about the use of root shell: I'm comfortable with it personally. The only problem in practice is, at a big company which has hired a bunch of unix admins, the quality of the admins' skills will be uneven. Some will be sharp and some will be dull, and to manage that risk, the damage that can be done by a dull admin needs to be limited, and there needs to be accountability and tracking of admin activity.

On the old school unix boxes they have installed this invasive kludge called powerbroker, which requires anyone doing "root" stuff to check out a temporary root password from the powerbroker server, and to log into the unix box with this awkward modified korn shell that talks to the powerbroker server and logs everything.

We're trying to avoid having to use that monstrosity on linux by leveraging the standard toolset, i.e. having everyone use sudo. The old school admins always try to log in as root, so we disabled ssh root password logins, and modified the motd to remind the admin to use sudo.

At first we just left sudo wide open, so as not to limit their ability to do their job. The problem was that when we tried to go back and see who did what and when, in the event of production issues, we'd find that the sa had logged in and immediately gotten a root shell with sudo, circumventing the whole accountability framework.

So we finally modified the sudoers file so that an sa can do absolutely anything, *except* get a root shell. There was a lot of confusion and grumbling at first ("Hey, why did you revoke my sudo privileges?") but eventually they caught on and adapted. The benefit is that every command run through sudo goes to syslog.
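
To give the flavor of it, an entry along these lines does the trick; the group name and the list of shells here are purely illustrative:

    Cmnd_Alias SHELLS = /bin/sh, /bin/bash, /bin/csh, /bin/ksh, /bin/su, /usr/bin/su
    %unixadmins ALL = (ALL) ALL, !SHELLS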

Are there still holes in the system? Sure, it's not watertight, it's just a quick and dirty first cut. Evil geniuses will find ways around it - but we're more concerned with keeping the rank and file sa in line.

If an evil genius is discovered doing evil among us, it becomes an HR issue anyway.

My, what a rant, didn't see that coming. At any rate, those are some random thoughts on root access, a sort of "stream of consciousness" narrative, so don't beat me up about style. Or do, if you like.
gus3

Mar 29, 2008
10:42 AM EDT
Quoting:The only problem in practice is, at a big company which has hired a bunch of unix admins, the quality of the admins' skills will be uneven. Some will be sharp and some will be dull...
...and once in a while, one will be a lunatic whose wild genius, bordering on insanity, leaves everyone scratching their heads "How did he do that?"

Of course, he knows the awe he commands, so he spices it up by pulling out a rubber chicken and waving it over the CPU right before he hits Return.
