Mount a remote file system through SSH using sshfs

If you want to access a remote file system through SSH, you need to install sshfs. sshfs is a filesystem client based on the SSH File Transfer Protocol. Since most SSH servers already support this protocol, it is very easy to set up: on the server side there is nothing to do. On the client side, mounting the file system is as easy as logging into the server with ssh.

sshfs Features

  • Based on FUSE (the best userspace filesystem framework for Linux)
  • Multithreading: more than one request can be on its way to the server
  • Allows large reads (up to 64k)
  • Caches directory contents
  • Runs entirely in user space: someone using sshfs does not need root access on the remote machine, whereas with NFS, Samba, etc., the admin of the remote machine has to grant access to those who will be using the services.

Install SSHFS in Debian

# apt-get install fuse-utils sshfs

Next, let’s make sure the following condition is met. On the local system, type (as root):

# modprobe fuse

This will load the FUSE kernel module. Besides SSHFS, the FUSE module allows you to do lots of other nifty tricks with file systems, such as the BitTorrent file system, the Bluetooth file system, the user-level versioning file system, CryptoFS, the compressed read-only file system and many others.
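
To verify that the module is actually loaded, you can check lsmod; on Debian you can also append fuse to /etc/modules so it is loaded automatically at every boot:

# lsmod | grep fuse

# echo fuse >> /etc/modules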

Now you need to make sure you have ssh installed on your Debian server, using the following command:

# apt-get install ssh
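
Before going further, it is worth confirming that you can actually log in over SSH; if an interactive login works, sshfs should too, since it runs over the same connection (user and host here are placeholders):

$ ssh user@host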

Using SSHFS

SSHFS is very simple to use. Run the following command:

$ sshfs user@host: mountpoint

This will mount the home directory of the user@host account on the local directory named mountpoint. It is as easy as that. (Of course, the mountpoint directory must already exist and have the appropriate permissions.)

Example

Create the mount point:

# mkdir /mnt/remote

# chown [user-name]:[group-name] /mnt/remote/

Add yourself to the fuse group (as root):

# adduser [your-user] fuse

Switch to your user and mount the remote file system:

$ sshfs user@host:/remote/directory /mnt/remote/
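
To confirm the mount succeeded, list the mount point and look for the sshfs entry in the mount table:

$ ls /mnt/remote

$ mount | grep sshfs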

If you want to mount a directory other than the home directory, you can specify it after the colon. The generic form of the sshfs command looks like this:

$ sshfs [user@]host:[dir] mountpoint [options]
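
A few options are worth knowing about. As a sketch (host, port and key path below are placeholders), the following connects on a non-standard SSH port, reconnects automatically if the connection drops, and uses a specific private key:

$ sshfs -p 2222 -o reconnect -o IdentityFile=~/.ssh/id_rsa user@host:/var/www /mnt/remote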

Unmount Your Directory

If you want to unmount your directory, use the following command:

$ fusermount -u mountpoint
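
If you want the mount to be set up automatically, you can add a line to /etc/fstab. A minimal sketch, assuming a mount version that understands the fuse.sshfs type and key-based (passwordless) SSH authentication for the mounting user (user, host and paths are placeholders):

user@host:/remote/directory /mnt/remote fuse.sshfs defaults,_netdev 0 0

After that, mount /mnt/remote will bring it up.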

9 thoughts on “Mount a remote file system through SSH using sshfs”

  1. Hi there, thanks a lot for this tip! I am using it in conjunction with mondo rescue so I can burn my ISOs from different computers (and also because the computer that I’m backing up doesn’t have a lot of disk space or a DVD burner).

    Once again, THANK YOU.

    John H =D

  2. This is incorrect.
    To ssh, one does
    ssh user@ip.add.re.ss
    but to sshfs one does
    sshfs ip.add.re.ss:/remote/dir /local/mount/point

    The username is not part of the sshfs command.
    I know. I use that almost daily.

    ./tony

  3. man sshfs:

    SYNOPSIS
    mounting
    sshfs [user@]host:[dir] mountpoint [options]

    tony’s reply is only true if your remote and local accounts use the same username.

  4. Here’s a tip for those who are puzzled as to why permissions might break: use the -o idmap=none option to keep remote user/group IDs from being mapped onto the local folder after mounting.

  5. Thanks – you made my day.
    I use pre-shared keys to mount projects using the following bash script:

    --- snip ---
    #!/bin/bash
    fusermount -u ~/my/mountpoint
    sshfs -o IdentityFile=~/my_keys/private_key user@host:/path/to/remote/dir/ ~/my/mountpoint
    --- snip ---

    I added the fusermount call at the beginning so the script can also be used to re-establish a connection.
    Hope it helps someone.

  6. The user part is necessary if the remote user has a different login name.
    If you are logged in as ‘user1’,

    ssh remote.location
    is analogous to
    ssh user1@remote.location

    AND

    sshfs remote.location:/
    is analogous to
    sshfs user1@remote.location:/

    However, if you need to log in to remote.location as ‘user2’, you need to specify the user regardless of which program you are using.

    ssh user2@remote.location (or ssh remote.location -l user2)
    sshfs user2@remote.location:/

    With Linux there are usually half a dozen different ways of achieving your goal 🙂

  7. Well, I tried “modprobe fuse”, only to get this:

    ERROR: could not insert ‘fuse’: Unknown symbol in module, or unknown parameter (see dmesg)

    How do I fix it? I’m testing on a preinstalled Debian virtual machine in VirtualBox, if that makes a difference…

  8. I have successfully mounted the remote directories on my local directories. I need folders and files to be created with 777 permissions so that others can access them, which is why I used the umask=000 option during the mount. The problem is that these newly created folders and files are not created with 777 permissions on the remote server (the NFS server); they are created with 775 permissions.

    I have two servers with the following users:

    1. NFS server
    nfs2
    root
    2. NFS client
    root
    stag

    The following is the export entry on the NFS server:

    /home/nfs2/mnt *(rw,sync,no_root_squash)

    I used the following command to mount the directories:

    sshfs -o uid=1005,gid=1005,umask=000 nfs2@<server-ip>:/home/nfs2/mnt /home/stag/mount_point -o allow_other

    After running the above command, the remote directory is successfully mounted on /home/stag/mount_point with 777 permissions.

    Now I create a directory using the stag user:

    stag@ubuntu:~/mount_point$ mkdir test
    stag@ubuntu:~/mount_point$ ls -l
    total 4
    drwxrwxrwx 1 stag stag 4096 Mar 5 14:24 test

    It is created with the 777 permissions I want, so on the NFS client side there is no issue.

    Now I check the /home/nfs2/mnt directory on the NFS server:

    root@192:/home/nfs2/mnt# ls -l
    total 4
    drwxrwxr-x 2 nfs2 nfs2 4096 Mar 5 14:24 test

    As you can see, on the NFS server the test directory was created with 775 permissions, which is not what I want; I want the test directory to be created with 777 permissions.

    I have tried everything to resolve this issue.

    I set umask 000 in the .bashrc file on the NFS server, but that did not resolve the issue either.

    I found the following link, in which someone says I have to apply a patch to sshfs-fuse to fix the permission issue on the server side, and after that use remote_umask and remote_fmask during the mount:

    http://andre.frimberger.de/index.php/linux/sshfs-fix-for-wrong-file-permissions-on-server/comment-page-1/#comment-211337

    But I don’t know how to apply this patch.

    If anyone knows how to resolve this problem, please help me.

    Thanks
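
One way to address the server-side permission problem described in comment 8, without patching sshfs, is to set the umask in the SSH server’s SFTP subsystem itself. This is only a sketch, assuming the remote machine runs OpenSSH 5.4 or later with the internal-sftp subsystem; edit /etc/ssh/sshd_config on the remote machine:

Subsystem sftp internal-sftp -u 0000

Then restart the SSH daemon on the server (as root):

# /etc/init.d/ssh restart

The -u option forces the umask applied to files and directories created through SFTP, which is the channel sshfs uses, so new files should come out with the permissions the client asked for.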
