Mount Remote Filesystems via SSH

eval writes "Ever wanted secure access to your files at work or school, but didn't have the necessary permissions or were thwarted by a firewall that allowed ssh access only? The SHFS kernel module allows you to mount directories from machines to which you have shell access. File operations are executed as shell commands on the server via SSH (or rsh). Caching keeps it reasonably fast, and remote commands are optimized based on the server's OS."
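For the curious, usage looks roughly like this. This is a minimal sketch assuming the shfs module is already built for your kernel; the mount type and option names here are assumptions rather than something taken from the project's documentation, and the host and paths are placeholders:

  # load the module (2.4-style), then mount a remote home directory over ssh
  insmod shfs.o
  mount -t shfs user@remote.example.com:/home/user /mnt/remote
  # detach when finished
  umount /mnt/remote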
  • Great! (Score:5, Funny)

    by Anonymous Coward on Sunday June 01, 2003 @11:25AM (#6090176)
    Now my web hosting company will probably take away ssh access. Thanks Linux hackers!
  • If you don't have permissions to use network connections other than SSH, are you going to have permissions to mount a filesystem on the computer? The computers at my school (a high school) won't let you access explorer (or at least you're not supposed to). I can see its use for machines at your job, though, because there you would be able to mount filesystems.
    • by wowbagger ( 69688 ) * on Sunday June 01, 2003 @11:31AM (#6090212) Homepage Journal
      If you don't have permissions to use network connections other than SSH, are you going to have permissions to mount a filesystem on the computer?


      Could be: for example, where I work I'm behind a corporate firewall, but I have admin rights on my workstation. As a result, I could very easily mount a remote file system via SSH. In fact, since I administer an FTP server that is outside the firewall, being able to mount it as a file system in a secure fashion would be quite useful.

      Just because network ingress is controlled does not mean that your workstation is controlled. In many ways, this is no different than you burning a CD of your files at home and bringing that into work - the infection/cracking risk is the same. If you are not allowed to mount an external file system then you should not be allowed to mount a local file system.

      However, just because you CAN access your home machine does not mean you SHOULD.
    • Well I think it sounds useful, and my situation's rather common:

      I have an account at the university [fys.ku.dk], and I like to work with the files there from home. (or the other way round). It's annoying to scp my files back and forth. (Even though konqueror can show sftp://me@my-uni as "just another folder"). If I can have it completely integrated, I'm all for it - then I could keep the relevant files at my nightly-backed-up university account, and it would seem like a folder on my harddrive at home. (slightly "W
  • LUFS! (Score:5, Informative)

    by Santabutthead ( 675941 ) on Sunday June 01, 2003 @11:26AM (#6090179) Homepage
    Big deal! I've been doing this for close to a year now, with lufs (http://lufs.sf.net). It's not really the easiest thing to automate but it sure works for day-to-day computing.
    • Re:LUFS! (Score:5, Informative)

      by TTimo ( 253584 ) <ttimo&ttimo,net> on Sunday June 01, 2003 @11:38AM (#6090252) Homepage
      Well .. lufs is the main player in userland filesystem stuff really. It has had sshfs functionality for months. Very slick.

      The difference seems to be that SHFS does some amount of caching, which lufs doesn't do afaik. This has a good chance to improve performance.
      • Re:LUFS! (Score:5, Informative)

        by clump ( 60191 ) on Sunday June 01, 2003 @11:57AM (#6090349)
        LUFS deserves a lot of credit. I now use LUFS's SSHFS to mount my remote file volumes, whereas I previously used a tunneled NFS setup. The latter is a bear to set up but wonderful when operating. LUFS's SSHFS, on the other hand, requires zero setup on the server, no portmapper on either client or server, and is much easier to automate and control.

        I am looking forward to trying SHFS, but currently very much enjoy LUFS and the hard work put in by its authors. And that means your work on it too, TTimo ;)
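        (For comparison, the LUFS route is roughly a one-liner once the package is installed; the sshfs:// URL syntax below is an assumption from memory and may not match lufsmount's exact arguments, and the host and paths are placeholders.)

        # mount a remote account over ssh into ~/remote
        lufsmount sshfs://me@shell.example.edu/home/me ~/remote
        # unmount with the usual umount when done
        umount ~/remote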
      • Re:LUFS! (Score:2, Insightful)

        by Nucleon500 ( 628631 )
        I think technically, I'd rather see LUFS add this caching and do better. I really think SSH isn't something that should be in the kernel, and it isn't really necessary for performance reasons.
    • by NoOneInParticular ( 221808 ) on Sunday June 01, 2003 @04:19PM (#6091526)
      Big Deal! Back in my day, we ran a filesystem over smtp: you sent your commands by email, had them executed remotely, and got the results mailed back to the sender. Imagine:

      To: user+bash@host.com

      ls /usr/bin

      And get the result back by email. The tricky part was doing an (insecure) copy: cat piped to uuencode, etc.

      To paraphrase: it's not really the easiest thing to automate but it sure worked for day-to-day computing

      • Bah, big deal!

        Back in my days, we used to eat gravel for breakfast. Cold gravel, out of a septic tank!

        And our dad would beat us to death with his belt every evening!

        (Credit where credit is due: Monty Python.)
    • I can't get localfs to work. Every time I mount the local filesystem, I can't view any files in the mount point. Typing ls just makes the shell hang.

      What am I doing wrong?
  • Good idea but... (Score:3, Insightful)

    by Vorgo ( 448106 ) on Sunday June 01, 2003 @11:26AM (#6090183) Homepage
    This is a good idea; however, there is one problem with the way it is presented above:
    If you're at work or school, are you really going to be able to insert a kernel module on the machine you're on? Generally I would think that you do not have sufficient permissions on the local machine.
    • Re:Good idea but... (Score:5, Informative)

      by skurken ( 58262 ) on Sunday June 01, 2003 @11:33AM (#6090231)
      No, I think it's meant to be used the other way around. This way, I can mount my UN*X school account (which allows shell access) on my Linux computer at home (where you usually have root access). /S
    • As noted by another poster, the main point of this type of thing is to be able to access your remote shell accounts from home, not necessarily the other way around. However, I'd like to note that I've been working on a version of Knoppix that includes the LUFS lkm and userland daemons; the next step is to get LUFS to automatically set up key exchange when Knoppix boots (set the remote server as boot options), then use the remote server as your /home/dir in Knoppix. I haven't gotten it working perfectly ye
  • Another option (Score:5, Informative)

    by Guiri ( 522079 ) on Sunday June 01, 2003 @11:28AM (#6090188) Homepage
    Just type fish://user@host in your Konqueror location bar ;). It works great!
    • Re:Another option (Score:5, Insightful)

      by mosch ( 204 ) * on Sunday June 01, 2003 @11:45AM (#6090297) Homepage
      fish is a great idea, implemented in the completely incorrect location. This kernel module gets it right.

      Filesystems should be handled by the operating system, not the window manager.

      • Re:Another option (Score:4, Interesting)

        by oliverthered ( 187439 ) <oliverthered@hot ... om minus painter> on Sunday June 01, 2003 @12:04PM (#6090388) Journal
        That's a shortfall of the kernel, not KDE.

        Why aren't all the kioslave protocols in the kernel?

        camera:\\
        ftp:\\
        http:\\
        fish:\\
        etc....
        • Re:Another option (Score:3, Informative)

          by Elbows ( 208758 )
          Implementing all of them in the kernel would bloat the kernel a lot. What KDE/Gnome should have done (and what LUFS, mentioned in another thread, seems to do), is have a small kernel module that calls a userspace daemon. All of the protocol code stays in userspace, but the kernel module makes the filesystem accessible to all programs. The overhead from the context switch doesn't matter when you're dealing with remote filesystems.

          It's a pretty slick idea, actually... maybe it will be integrated into the maj
        • Re:Another option (Score:2, Informative)

          by Nucleon500 ( 628631 )
          That's what LUFS does. Someone just needs to either port the kioslaves and gnome-vfs libraries to LUFS or FUSE or rewrite them, whichever is easier. The only advantage of kioslaves and gnome-vfs is that they don't need mounting, so they are more convenient.

          I think the same thing could be done with LUFS, though. Using either automount or a specifically designed LUFS filesystem, make a filesystem where references to a protocol name would cause it to be mounted. For example,

          gqview ~/lufs/camera/pics

          W

        • Those slashes should be forwards

          camera://
          ftp://
          http://
          fish://

          Which is a very good thing since it works on all the platforms KDE works on without having to have 5 or 6 different "kernel level" implementations of every filesystem out there.

          A kernel should be small. Why pile everything into a kernel if it can be handled in user space?
        • Re:Another option (Score:5, Interesting)

          by 73939133 ( 676561 ) on Sunday June 01, 2003 @04:00PM (#6091441)
          No, it's a shortfall of KDE developers: instead of spending time on writing Konqueror modules, they could be writing the equivalent kernel modules.

          But that isn't really anything new: a lot of the KDE effort could be written as more independent, stand-alone functionality, useful to lots of non-KDE software. Instead, KDE produces tightly integrated C++ modules that only work if you are running a large amount of KDE support infrastructure.
          • Re:Another option (Score:4, Insightful)

            by BusterB ( 10791 ) on Sunday June 01, 2003 @05:03PM (#6091724)
            Then again, KDE also works in *BSD, Solaris, HP-UX, and even Windows, to name a few. Writing kernel modules that work for every one of these systems would be a bit of a hassle, no? It might not even be possible in a few of these.

            If someone were to implement all of these systems in the kernel, then KDE/GNOME, etc. could already use them. In the meantime, this seems to be the best compromise between functionality and portability.
            • Then again, KDE also works in *BSD, Solaris, HP-UX, and even Windows, to name a few. Writing kernel modules that work for every one of these systems would be a bit of a hassle, no?

              And what good does that do? Yes, I can run KDE on all of those, but each non-KDE application on those platforms is going to see a different file system from KDE applications. So, you have a desktop that integrates poorly on a lot of platforms, as opposed to one that integrates well with the system on a single platform.
          • Re:Another option (Score:4, Informative)

            by be-fan ( 61476 ) on Sunday June 01, 2003 @06:31PM (#6092049)
            KDE is an application framework. If you make things independent of each other, you lose a lot of the consistency and integration that makes an application framework so nice to program for in the first place.
            • you lose a lot of the consistency and integration that makes an application framework so nice to program for in the first place.

              Yes. Too bad that KDE doesn't use the standard application framework on Linux: the Linux kernel and X11.
          • But the problem is that "writing the equivalent kernel modules" would take a lot longer.

            The fact that KDE code uses the "large amount of KDE support infrastructure" is what makes it easy to build a new I/O Slave (such as fish:// or cooler yet cdaudio:// which is a virtual mount of your cd audio as named .ogg vorbis files).

            So it's not a shortfall of KDE developers because KDE developers only have so much time, and building the equivalent kernel modules will be much more time consuming.
      • Re:Another option (Score:5, Insightful)

        by sfraggle ( 212671 ) on Sunday June 01, 2003 @12:10PM (#6090411) Homepage
        While I'm not a huge fan of microkernels, this is one area where a system like the Hurd has advantages over Linux.

        In the Hurd all the filesystems are done by userspace programs called "translators". So to access your local filesystem you have an ext2 translator which accesses it. You can write your own translators - I believe they have a system to access remote systems via FTP.

        Both fish/gnome-vfs and the kernel module system seem wrong to me. With kernel modules you have to be root to load them, but on the other hand it's bad to be reinventing the wheel by rewriting the filesystem in user space (plus, it only works with programs that are designed to use it).

        It would be nice if the kernel module was added to the main kernel and offered as a "standard" system where nonprivileged users can mount their own filesystems from userspace daemons - Linux is kind of paranoid about non-root users mounting FSes. It would appear to provide the advantages that the Hurd approach brings, while keeping the higher performance of a monolithic kernel (having all FSes in user space like Hurd does seems like a bad idea performance-wise).

        • Re:Another option (Score:3, Interesting)

          by amorsen ( 7485 )
          Imagine that you gave regular users permission to mount file systems. Then I, the evil user, mount my own /lib and invoke a suid dynamically linked program, say /bin/passwd. The program loads libc.so, which happens to be a link to libevil.so. *Poof*, root for me.

          Even if you only allow the user to mount in specially secured areas of the file system you would still have problems. Right now the Linux VFS places a lot of trust in the individual filesystems. A user-mounted file system could contain deliberate e

          • Re:Another option (Score:3, Insightful)

            by sfraggle ( 212671 )
            Yes, and these are points I've thought about. You'd definitely need some kind of secure interface to check the validity of the filesystem calls (to stop rogue processes from confusing the VFS), but that's pretty much true of all existing calls anyway.

            You'd have to have rules on how the mounting would be done: only mounting on directories owned by the user for example (to stop people doing things like mounting over /lib). And you'd have to place restrictions on the permissions in the mounted FS - like no SU

          • Imagine that you gave regular users permission to mount file systems.

            This is already commonly done, with the /net automount. Every Linux distribution comes with software to handle this.

            Then I, the evil user, mount my own /lib and invoke a suid dynamically linked program, say /bin/passwd.

            You don't give every user access to mount filesystems at any location. Such filesystems would also obviously be mounted nosuid and nodev, just like /net automounts always have been.

            Right now the Linux VFS places a l

          • Re:Another option (Score:3, Informative)

            by Minna Kirai ( 624281 )
            That's why you should never ever have dynamically linked setuid programs! (Or, if they do exist, they should give up root before calling any non-static functions. Some programs do that)

            This evil user could just set LD_PRELOAD to his own library [securityfocus.com], without needing to mess with new filesystems.

            A few years ago, several GNOME programs that were setuid were rearranged to no longer be root, because of this vulnerability. Xcdroast is one of the more famous ones.

            Additionally, in a system where normal users are
        • Re:Another option (Score:3, Insightful)

          by Nucleon500 ( 628631 )
          LUFS or FUSE would be the module you're talking about, which allows you to do filesystems in userspace, and can let users mount them. So Linux can do all three: kernel modules for lean, fast, "real" filesystems, LUFS or FUSE for "exotic" filesystems that shouldn't be in the kernel, and kioslaves and gnome-vfs for those who like reinventing the wheel and want to further bloat their favorite GUI library.
          • As long as the purpose only makes sense inside the window manager environment...

            I'm talking about the cdburning drag+drop, digital camera browsing, play-list "mounting", etc.

            These things make sense inside the window management space because they are useful abstractions for use inside a graphical file manager. However, they are probably not too useful for the general unix environment (i.e. command line, generic VFS calls, some of which may have no translation... fcntl on a candidate for including on a blan
      • Re:Another option (Score:5, Informative)

        by Spy Hunter ( 317220 ) on Sunday June 01, 2003 @12:43PM (#6090593) Journal
        I would say you're right, except the kernel does a lousy job of implementing filesystems in a user-friendly way. KDE IOSlaves are so much cooler for several reasons:
        1. They use URLs everywhere, which makes it easy to access local and remote files anywhere using any protocol from any application.
        2. New filesystems can be installed and activated by the user, you don't need a kernel module.
        3. You don't have to mount anything anywhere.
        4. Non-filesystem like protocols such as HTTP and POP3 can be easily implemented as IOSlaves and then used from any application.

        These features make IOSlaves much cooler than kernel filesystems IMHO.
        • You forgot: "more easily portable"
      • The fish IO-Slave is in the KDE application programming framework, not in the window manager (which is kwin). The KDE folks don't have a lot of say over what goes into the kernel, so putting it in KIO is the best they could do.
  • by yanestra ( 526590 ) * on Sunday June 01, 2003 @11:30AM (#6090205) Journal
    avfs [inf.bme.hu] and lufs [sourceforge.net] are much more common solutions to the "mount userland filesystems" problem. Yet, avfs makes it easy to construct your own whatever-you-want filesystem.
  • by Xpilot ( 117961 ) on Sunday June 01, 2003 @11:32AM (#6090221) Homepage
    ..margarine box at the bottom? Is it what the programmers ate during the creation of shfs? Like those apocryphal Java-drinking sessions at Sun? Does margarine have magical caffeine-like properties too?
  • by vlad_petric ( 94134 ) on Sunday June 01, 2003 @11:33AM (#6090230) Homepage
    LUFS [sf.net] - userland filesystem. It's a userland "teleportation" of the VFS infrastructure (a kernel module sends all the queries to a userland daemon, which takes care of the protocol, etc).

    The advantage of this approach is that adding a new filesystem type means modifying a user-space daemon, not the kernel. Besides sshfs, LUFS includes ftpfs, gnomefs, gnutellafs, and a few others.

    • by mosch ( 204 ) * on Sunday June 01, 2003 @11:49AM (#6090315) Homepage
      Not really. LUFS can access a machine to which you have sftp access, whereas this project lets you access the filesystem of a machine to which you have shell-only access, as is common in some university environments.
      • Not really. LUFS can access a machine to which you have sftp access, whereas this project lets you access the filesystem of a machine to which you have shell-only access, as is common in some university environments.

        If the admin refuses to install ssh, have him fired. If this doesn't work, install your own copy. It need not run on the default port.
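        Running your own sshd as an unprivileged user on a high port is not hard; a rough sketch, where the key and config file names are chosen here just for illustration:

        # generate a host key you control, then start sshd on an unprivileged port
        ssh-keygen -t rsa -f ~/.ssh/my_host_key -N ""
        /usr/sbin/sshd -p 2222 -h ~/.ssh/my_host_key -f ~/.ssh/my_sshd_config
        # then connect with: ssh -p 2222 user@host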

      • If you have ssh access, you have sftp access as well. That's the first thing an SSH admin should learn...

        You don't need sftp to transfer files... Just cat the file on the remote end and redirect the output to a file on the local end, or vise-versa.
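        Concretely, that is just redirection over the ssh connection (the paths are placeholders):

        # pull a remote file down
        ssh user@host 'cat /remote/path/file' > file
        # push a local file up
        cat file | ssh user@host 'cat > /remote/path/file'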
  • Yet another option (Score:5, Interesting)

    by BlueEar ( 550461 ) on Sunday June 01, 2003 @11:35AM (#6090241) Homepage
    This seems to be beta quality code. Thus you might want to try Secure NFS via SSH Tunnel [ualberta.ca], which provides, according to the author [ualberta.ca], Secure NFS (SNFS) via SSH2 tunneling of UDP datagrams, as suggested in the SSH FAQ [employees.org].
    • Been there before (Score:3, Interesting)

      by clump ( 60191 )

      This seems to be beta quality code. Thus you might want to try Secure NFS via SSH Tunnel, which provides, according to the author, Secure NFS (SNFS) via SSH2 tunneling of UDP datagrams, as suggested in the SSH FAQ.

      Currently, as indicated in the FAQ above, you cannot tunnel UDP. You can, however, tunnel NFSv3 as long as you make NFS run over TCP, and that is precisely how you tunnel NFS. Here is how I do it:

      Server:
      Put "/nfs_share_dir 127.0.0.1(rw,insecure,root_squash)" in /etc/exports
      Ensure yo
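      (The rest of the recipe is cut off above; as a rough sketch of the client side, assuming NFS is serving over TCP on 2049 and mountd has been pinned to port 4002 on the server - both port numbers are just examples:)

      # forward the NFS and mountd ports through ssh
      ssh -f -N -L 2049:127.0.0.1:2049 -L 4002:127.0.0.1:4002 user@server
      # mount through the tunnel, forcing TCP
      mount -t nfs -o tcp,port=2049,mountport=4002 127.0.0.1:/nfs_share_dir /mnt/remote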

  • by aldjiblah ( 312163 ) on Sunday June 01, 2003 @11:42AM (#6090272)
    An ssh connection forwarding the remote port 139 to 127.0.0.1:139, and then doing smbmount to //127.0.0.1/<mountpoint> - works great, and is practical considering Samba is often already running on the remote side.
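    A rough sketch of that setup, assuming no local Samba is already listening on port 139 and that you have root to bind a privileged port (the host and share name are placeholders):

    # forward local port 139 to the remote Samba server over ssh
    ssh -f -N -L 139:127.0.0.1:139 user@remote.example.com
    # mount the share through the tunnel
    smbmount //127.0.0.1/share /mnt/remote -o username=me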
    • Maybe it's just me, but I've tried this and gave up. Yes, it works, but performance was horrible, even for a WAN link. SMB was just never designed for WAN work. TCP, OTOH, is a different story.

  • Or... (Score:5, Insightful)

    by Greyfox ( 87712 ) on Sunday June 01, 2003 @11:42AM (#6090275) Homepage Journal
    Create VPN with freeswan or ppp over ssh, mount remote host from VPN.
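    The ppp-over-ssh variant can be a single (long) command, though it needs root or equivalent pppd rights on both ends; the addresses and host below are placeholders:

    # bring up a ppp link over an ssh connection (classic ppp-over-ssh recipe)
    pppd updetach noauth silent pty \
        "ssh root@remote.example.com pppd nodetach notty noauth" \
        10.0.0.1:10.0.0.2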
    • Create VPN with freeswan or ppp over ssh, mount remote host from VPN.

      Yes, but:

      1) This is FAR more difficult than just inserting a kernel module and calling "mount" with the appropriate parameters.

      2) It's total overkill if all you want to do is mount a remote filesystem securely, or through a strict firewall.
      • 1) Fair enough, but if you're going to do it anyway, you can do that too. Setting up an ssh/ppp VPN really isn't that hard and once you do the initial work to put it in place, it becomes as easy as calling mount with the appropriate parameters.

        2) True but if you're that much of a power user, that much is never enough.

        The down side to this is that if your home machine is compromised, the secure network becomes open to attackers. Network administrators don't like that. So you must put a lot of effort into

        • Isn't much harder? On the contrary, it's affected by FAR more things.

          What if you are behind a NAT box or proxy that won't allow you to use most VPN software?
          What if you don't have remote superuser access?
          etc..etc... etc...

          This is so you can mount anything you have ssh shell access to... that is a WORLD different than setting up a ssh/ppp vpn.

          As for "network adminsitrators" not liking it.. if we don't like it, we don't let you VPN into our secure netowrks in the first place.
  • Windows? (Score:3, Interesting)

    by dnoyeb ( 547705 ) on Sunday June 01, 2003 @11:49AM (#6090321) Homepage Journal
    I have been looking for something like this, however my computer is a windows 2000 box, and the computer I connect to is running ssh on RH8. I don't see any that do this for windows yet.
    • Re:Windows? (Score:2, Informative)

      by 'Aikanaka ( 581446 )
      There is a way to get UNIX/Linux functionality on your Windows box.

      cygwin (www.cygwin.com) has a full implementation of OpenSSH (it even includes sshd capability) - plus a whole pile of UNIX/Linux applications that will work on top of Win2K.

      HTH...
  • by cce ( 24686 ) on Sunday June 01, 2003 @11:51AM (#6090329) Homepage
    LUFS (Linux Userland Filesystem) already provides a nicely-developed interface to allow for userspace programs to implement filesystems over exotic protocols like SSH, FTP -- even Gnutella. Another project, FUSE (Filesystem in Userspace, part of AVFS) performs a similar task.

    Moreover, the SHFS project website admits that it's "partially based" on FTPFS; but the FTPFS website [sourceforge.net] says it's now obsolete and recommends using LUFS [sourceforge.net] instead.

    So the question: why did this merit an article? SHFS is just a proof-of-concept project for some kid's operating systems class, and I'll bet that despite the warning ("Warning: This is beta quality code. It was not tested on SMP machine. Backup data before playing with it!") tons of Slashdotters -- most without any kernel-hacking experience -- will have downloaded and perhaps installed it before I finish typing this post. This is dangerous.

    So -- if you want to play with (and implement your own, it's remarkably easy!) fun filesystems, try LUFS or FUSE instead.

    • by Donald Knuth ( 677635 ) on Sunday June 01, 2003 @12:27PM (#6090497) Homepage
      Why does any software announcement that's posted here always bring out a bunch of elitist trolls? Oh, that's right - because it's not good enough unless it's yours.

      Do you know the authors of shfs, their ages, and what classes they're taking? Have you even downloaded the driver? Compiled it? Actually used it? Have you tested it, experienced crashes, and therefore empirically come to the conclusion that it's "dangerous"? Or do you just like to play the role of Slashdot nanny?

      Wait, don't answer that. I really don't care.

      *You* should care, however, that you come off looking like a frustrated little prick by shitting on other people's work - for no reason other than to hear your own voice. Nobody wants to read your little pride-ridden, hyper-competitive, and overtly paternalistic little diatribes, no matter how much you think you enjoy writing them.

      Anyway, lighten up and take the post for what it is. And in the future, if you can't say at least one reasonably positive thing about someone else's hard work - do the world a favor and just shut the bloody fuck up.
  • by Polo ( 30659 ) * on Sunday June 01, 2003 @12:18PM (#6090452) Homepage
    I think a better implementation of this might use the sftp protocol on the server side. This has been recently implemented with SSH v2. It's a subsystem within SSH (sftp-server) that supports all the common filesystem operations (open, close, read, write, seek, stat, etc...).

    This is the protocol that scp uses to read and write files and is already part of ssh.
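    For reference, the subsystem is just one line in the server's sshd_config, and a client can invoke it directly (the sftp-server path varies by distribution):

    # server side, in sshd_config
    Subsystem sftp /usr/libexec/openssh/sftp-server
    # client side: plain "sftp user@host", or invoke the subsystem by hand
    ssh -s user@host sftp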
  • macos x (Score:3, Interesting)

    by hachete ( 473378 ) on Sunday June 01, 2003 @12:22PM (#6090462) Homepage Journal
    It would be nice if this worked with Mac OS X and Apple-type file systems. SSH works well on Mac OS X and I could do with an alternative to webdav and netatalk. Yes, I know that there are "issues" with Apple file resources, but I wish they would just *disappear into* the shell so I didn't have to worry about them :-)

    ah well. I can dream.

    h.
    • I think one of the biggest mistakes Apple made was not doing away with resource forks when going to OS X. I understand that they needed to preserve the API of resource forks for all the Classic and Carbon programs, but they could have had the back end implemented as separate files with well-specified names. When NeXTStep would mount HFS filesystems, you could access the resource fork by opening #rsrc#. This would allow easy POSIX access to all of the Mac filesystems, allowing 'rsync' and
  • big deal (Score:5, Interesting)

    by F2F ( 11474 ) on Sunday June 01, 2003 @12:24PM (#6090476)
    we've been doing this with Plan 9 [bell-labs.com] since 2000.

    from the ssh man page [bell-labs.com]:

    Sshnet establishes an SSH connection and, rather than execute a remote command, presents the remote server's TCP stack as a network stack (see the discussion of TCP in ip(3)) mounted at mtpt (default /net), optionally posting a 9P service descriptor for the new file system as /srv/service.
  • I have used LUFS http://lufs.sourceforge.net/lufs/ [sourceforge.net] for a few weeks and I have found it to be a very nice solution for LAN file sharing. It does not perform so well over high-latency links, and I am not yet completely convinced that it behaves well under heavy IO loads, although I have not proven the contrary either. So in a nutshell, LUFS is good for general-purpose file sharing in a LAN environment and it is giving me complete satisfaction - goodbye Samba!

  • by bgarrett ( 6193 ) <[garrett] [at] [memesis.org]> on Sunday June 01, 2003 @01:30PM (#6090820) Homepage
    I set up LUFS last night, and blithely opened a Nautilus window to a mount-point I'd created (to a VERY remote SSHFS-mounted machine). Big mistake.

    I had forgotten, of course, that I had Nautilus set to do all its previews, subdir counting, etc. on local files - which, of course, is how it was treating this mount point. And I cursed gnome-vfs2 for not automatically sensing the "remoteness" by reading the list of system-wide mount points and detecting which filesystem was handling the directory into which I'd gone.

    KDE faces a similar problem, ultimately. Until we see "kdesh" or some sort of LD_PRELOAD to offer ioslaves to traditional UNIX utilities, there will be a rift between the well-integrated solutions that KDE and GNOME offer, but which don't interact with lower-level utilities, and the kernel/hybrid solutions which don't provide information to any layer higher than they are (or worse, which are ignored by that higher layer, because of NIH syndrome).

    SHFS is not the wrong answer. If it has caching and LUFS lacks it, maybe some of that code will migrate into LUFS. That's the entire point of open source, people - let the better project win, not just the more established one. But ultimately, neither project is the answer I favor, until the people at work on these various layers of VFS switching start to accept that other peoples' work may be running on the same system their code is.
  • This idea seems somewhat inefficient and more complicated than it has to be... why not simply tunnel port 139 via ssh and mount the remote filesystem via SMB?

  • If you don't want to mount the filesystem, the bash completion project [freshmeat.net] works quite nicely with scp. By adding the public key on your computer to the server's authorized_keys file, you can use tab completion when traversing directories or copying files remotely. As a bonus, you get a lot of tab completions with other programs too.
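    Adding the key is a one-liner (the key file name depends on the key type you generated):

    # append your public key to the server's authorized_keys
    cat ~/.ssh/id_rsa.pub | ssh user@server 'cat >> ~/.ssh/authorized_keys'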
  • by fea ( 39853 )
    I think there is some unnecessary criticism in this thread. shfs is exactly what I had been looking for, for quite some time. I saw the article on nfs over ssh. That is sort of there, but requires knowledge of iptables, etc. Indeed, it took an entire article to explain how to use it. However, this package is very simple to use! And it serves the purpose of being able to mount remote drives over ssh. After trying it out, I did have some suggestions which I plan to post to the developers.
  • Damn. I have been using tramp [nongnu.org] for the last few years to do this with emacs. I'll have to try this and lufs out to see which method I like better.

  • Will this hurt ssh? (Score:2, Interesting)

    by mt-biker ( 514724 )
    A few years back I did software-support, and ended up remotely logging in to our customer's machines, often over a firewall.

    When the firewall only allowed telnet access and I needed to transfer files, I'd end up either building a .tar.gz.uu file and using cat & script to transfer it, or cutting and pasting between windows. What a pain!

    At that time, I started to work on a tool to allow me to transfer files over telnet. What stopped me was an ethical problem - if a company only allows telnet th
    • by sgifford ( 9982 ) <sgifford@suspectclass.com> on Monday June 02, 2003 @04:23AM (#6094143) Homepage Journal
      My experience has been that if a network is configured in an idiotic way (such as allowing telnet and not FTP), it's not because its operators have made careful and well-considered decisions about what to allow and disallow, but simply because they're idiots. That sort of eliminates the whole ethical dilemma. :-)
    • Just use a terminal client that supports Zmodem, and sz the files over the connection.
    • If my understanding of ssh is correct, then it's already possible to transfer files back and forth via ssh. In fact, I believe that any kind of internet connection can be tunneled through ssh, as long as both the client and server support ssh. I use sftp to transfer files all the time, and I frequently open X sessions remotely by tunneling through ssh. So these issues that you bring up, while certainly interesting, have probably already been addressed by those who are concerned/knowledgeable.

      If a company'
  • I'm behind quite an evil corporate firewall, and the only way I can get to the outside world is through an http proxy (5 or so ports open). I can FTP into my stuff to do some file management, but it stinks. Is there any way I can get something like this going through an http proxy?

    Maybe I should just tunnel somehow?
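    One approach that reportedly works is to tunnel ssh through the proxy's CONNECT method with a helper such as corkscrew, with sshd at home listening on a port the proxy is willing to connect to (often 443); the hosts and ports below are placeholders:

    # ~/.ssh/config
    Host home
        HostName myhome.example.org
        Port 443
        ProxyCommand corkscrew proxy.corp.example.com 8080 %h %p
    # then "ssh home" works, and the shfs/LUFS mounts above can ride over it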
