New Linux Syscall Enables Secret Memory Even the Kernel Can't Read (lwn.net)

RoccamOccam writes: After many months of development, the memfd_secret() system call was finally merged for the upcoming 5.14 release of Linux. There have been many changes during this feature's development, but its core purpose remains the same: allow a user-space process to create a range of memory that is inaccessible to anybody else -- kernel included. That memory can be used to store cryptographic keys or any other data that must not be exposed to others. Reportedly, it is even safe from processor vulnerabilities like Spectre, because secret memory can be mapped uncached.
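
Roughly, usage looks like the following minimal sketch. Assumptions: a 5.14 kernel with the feature enabled at boot (see the boot-time option discussed in the comments below), and the x86_64 syscall number 447, since glibc does not yet provide a memfd_secret() wrapper.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef SYS_memfd_secret
    #define SYS_memfd_secret 447   /* x86_64; assumption -- older headers lack it */
    #endif

    int main(void)
    {
        /* Create the secret-memory file descriptor (no glibc wrapper yet). */
        int fd = syscall(SYS_memfd_secret, 0);
        if (fd < 0) { perror("memfd_secret"); return 1; }

        /* Give it a size, then map it; the backing pages are removed from
           the kernel's direct map and locked in RAM. */
        size_t len = (size_t)sysconf(_SC_PAGESIZE);
        if (ftruncate(fd, (off_t)len) < 0) { perror("ftruncate"); return 1; }

        unsigned char *key = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
        if (key == MAP_FAILED) { perror("mmap"); return 1; }

        /* Store a secret; only this process's own mapping can reach it. */
        memcpy(key, "correct horse battery staple", 28);

        munmap(key, len);
        close(fd);
        return 0;
    }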
  • ...all I have to do is target the memfd_secret call with my 'patch', and I'm golden.

    Right?

    • Yeah, I don't see how this accomplishes anything. I guess it could make it harder to do an accidental memory read, like Heartbleed.

    • ...all I have to do is target the memfd_secret call with my 'patch', and I'm golden.

      If you have physical control of the server, you can bypass this protection.

      There is no such thing as a secure device in hostile hands.

      But if you have the ability to boot a custom kernel, people who don't trust you will not be running their applications on your server.

      This will protect you from other users who have found a flaw that allows them to read memory with kernel privileges.

      It isn't perfect, but it is an improvement.

      • by vbdasc ( 146051 )

        But if you have the ability to boot a custom kernel, people who don't trust you will not be running their applications on your server.

        This will protect you from other users

        Sounds like:

        But if you have the ability to boot a custom kernel, copyright holders who don't trust you will not be letting their media run on your server.

        This will protect MPAA and RIAA from piracy

        Personally, I'm fine with this. No DRM is welcome on my machines.

    • Re: So... (Score:4, Insightful)

      by Spit ( 23158 ) on Friday August 20, 2021 @08:04PM (#61713459)

      It's not a silver bullet, but it's still good as part of a defence in depth, zero trust model.

  • Finally (Score:5, Interesting)

    by Opportunist ( 166417 ) on Friday August 20, 2021 @04:34PM (#61712919)

    Now my malware doesn't even need root privs anymore to be shielded from detection.

    • It's a neat trick, I think: reserve memory as an application, then deny one kernel process that memory just to allow another kernel process access? I am really curious how this system does it.
      • Re:Finally (Score:5, Interesting)

        by DamnOregonian ( 963763 ) on Friday August 20, 2021 @05:31PM (#61713121)
        It's simple. The page given to the application simply is not mapped when in kernel mode.
        Can malicious kernel code re-map it? Sure.
        But any syscall run by any other process (even one running as root) will not cause the kernel to map that RAM, meaning it cannot read from or write to it. It can only unmap it.
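
        As a concrete sketch of that claim (my assumption of how the failure surfaces; the function choice and errno are not from the article): reading another process's secretmem region via process_vm_readv() should simply fail, because the kernel cannot map or pin those pages on the caller's behalf.

            #define _GNU_SOURCE
            #include <stdio.h>
            #include <sys/types.h>
            #include <sys/uio.h>

            /* Try to read len bytes at remote_addr inside process pid.  For a
               memfd_secret() region this is expected to fail: the kernel cannot
               pin or map the pages on the caller's behalf. */
            ssize_t peek_remote(pid_t pid, void *remote_addr, void *buf, size_t len)
            {
                struct iovec local  = { .iov_base = buf,         .iov_len = len };
                struct iovec remote = { .iov_base = remote_addr, .iov_len = len };

                ssize_t n = process_vm_readv(pid, &local, 1, &remote, 1, 0);
                if (n < 0)
                    perror("process_vm_readv");   /* assumed to fail, e.g. EFAULT */
                return n;
            }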
    • Precisely what I was thinking.
    • by Anonymous Coward

      In principle, seeing at a glance which processes use this feature should help set off flags for a detection system, even if you can't see what's in the allocated range.
      This still feels defeatable somehow, though. Is it possible to connect gdb to a running process and overwrite its non-secret code in memory to do something else with the secret range?

      • Is it possible to connect gdb to a running process and overwrite its non-secret code in memory to do something else with the secret range?

        Ding ding ding. You asked the right question.
        If you can ptrace a process, you can inject code that can read and write from this memory area, and feed it back through to whatever ptraced.

    • Re:Finally (Score:5, Informative)

      by DamnOregonian ( 963763 ) on Friday August 20, 2021 @05:30PM (#61713117)
      Negative. The pages cannot be mapped PROT_EXEC.
      It's useful as safe storage, and that's all.
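
      A quick user-space sketch of that restriction, assuming the kernel refuses executable protections on secretmem mappings as described (the exact errno is an assumption):

          #include <stddef.h>
          #include <stdio.h>
          #include <sys/mman.h>

          /* Given addr/len from an existing memfd_secret() mapping, an attempt
             to make it executable should be refused by the kernel. */
          void try_make_executable(void *addr, size_t len)
          {
              if (mprotect(addr, len, PROT_READ | PROT_EXEC) != 0)
                  perror("mprotect(PROT_EXEC)");   /* assumed: fails, e.g. EACCES */
              else
                  fprintf(stderr, "unexpected: secret page became executable\n");
          }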
      • Until driver developers decide to do something stupid.

        I don't like the idea of even diagnostic utilities under my control not being able to monitor what a piece of software is doing.

        • Until driver developers decide to do something stupid.

          I uhh, don't really know what you're trying to imply.
          If it is: That a driver is going to map some page of memory as PROT_EXEC, and "kernel can't see me"... then that was something they could always do.
          This just makes a syscall so that userspace can request a secure page of RAM.

          I don't like the idea of even diagnostic utilities under my control not being able to monitor what a piece of software is doing.

          I'm sympathetic to this- really, I am.
          But anything running in kernel mode was always able to do this.
          This syscall doesn't change that. It does give a userspace application the ability to hide data from you, but that is also an abi

      • Who said I want to execute code in it?

        • Now my malware doesn't even need root privs anymore to be shielded from detection.

          Your malware isn't doing anyone much harm if it's not executing, now is it?

          • ROP is a thing, after all.

            • Yes, it is.
              However, it doesn't apply here in the slightest.
              You can't return into a non-executable page. It's an immediate segfault.

              ROP is used when you're able to inject data into a non-executable page, but that page also happens to be a location which addresses are pulled out of and then jumped to- i.e., the stack.
              It simply isn't relevant here.
              Nobody is going to be using their secure page as stack, or as a list of pointers from which it is considered safe to arbitrarily jump to.
    • by AmiMoJo ( 196126 )

      It's marked as non-executable too, so the virus can't hide its code in there.

      • by HiThere ( 15173 )

        Well...sort of. But it could store code to be interpreted by some other thing...perhaps python or perl...or even a C interpreter. https://stackoverflow.com/ques... [stackoverflow.com]

        • Well...sort of. But it could store code to be interpreted by some other thing...perhaps python or perl...or even a C interpreter

          That's not really a problem. It's not really hiding what it's doing at that point. You've got something interpreting code that you can't see. That's evidence of guilt as far as any party is concerned.

      • So I can only shield my C&C server info in there. That's fine by me.

    • by gweihir ( 88907 )

      Now my malware doesn't even need root privs anymore to be shielded from detection.

      That was pretty much my first thought as well. However, on reading the actual announcement and discussion, it seems that all the kernel has to do to access this anyway is invest a bit more effort. For example, patching process code will allow access. You would have to prevent writing to the entire process space for the kernel to make this secure. That is clearly not possible. The whole feature looks somewhat bogus to me.

  • by GameboyRMH ( 1153867 ) <gameboyrmh@@@gmail...com> on Friday August 20, 2021 @04:34PM (#61712921) Journal

    Sounds like this feature would be more useful for DRM than anything else...probably not a good feature to add, even though we can avoid using software that makes use of it.

    • by mysidia ( 191772 ) on Friday August 20, 2021 @04:37PM (#61712931)

      Sounds like this feature would be more useful for DRM than anything else

      Up until the point where someone builds a custom kernel with the memfd_secret calls modified to only pretend that it's making a secure memory area, while providing a secondary means for the owner of the system to actually read the data arbitrarily.

    • Sounds like this feature would be more useful for DRM

      We should just give up on *all* encryption. Since any form of encryption is useful for DRM.

  • by mysidia ( 191772 ) on Friday August 20, 2021 @04:44PM (#61712963)

    So what happens when a program is written to stuff the majority of its code into this "secret memory"? Let's say, for the sake of argument, that the piece of code gets introduced as a security exploit that took over an existing program: a payload is injected into secret memory, and then the executing program is replaced with a stub which simply fetch-decode-executes the series of instructions from secret memory. The program's stack data and such would also be in secret memory, allowing the software to conceal its configuration. Sounds like something ripe for abuse anyway.

    • Excellent post. Also, what exactly is the definition of "can't" in "can't read"? I'm thinking that with all the side-channel attacks and cache robbing, we'd pretty much immediately see that this secret area can in fact be read.
      • It means that if you were to walk the entire address space, you wouldn't hit the segment of protected memory. This is because memory mapping is fundamental to how AMD64 accesses memory. If the real physical memory is "unmapped", then you can't construct a memory address that refers to the protected memory.

        It doesn't stop the kernel from subsequently changing the mapping, but it becomes impossible for a memory address to point at the memory with the current mapping.

      So what happens when a program is written to stuff the majority of its code into this "secret memory"? Let's say, for the sake of argument, that the piece of code gets introduced as a security exploit that took over an existing program: a payload is injected into secret memory, and then the executing program is replaced with a stub which simply fetch-decode-executes the series of instructions from secret memory. The program's stack data and such would also be in secret memory, allowing the software to conceal its configuration. Sounds like something ripe for abuse anyway.

      Or marks the location for another heap process to access? I don't know; did you pretty much say the same thing?

    • by DamnOregonian ( 963763 ) on Friday August 20, 2021 @05:35PM (#61713133)
      The pages cannot be remapped as PROT_EXEC.
      You cannot run code from it.
      • by mysidia ( 191772 )

        You can execute code, actually, in exactly the same way other bytecode interpreters work: run your own native code driven by bytecodes you have stored in the secret memory area, read and decoded in a loop one quad (or other small chunk) at a time.
        Supervisory programs such as antivirus will not be able to see what you are doing, nor scan or analyze the code, because it is not contained in mapped memory; they cannot "see" what you are loading from and running.

          You can execute code, actually, in exactly the same way other bytecode interpreters work: run your own native code driven by bytecodes you have stored in the secret memory area, read and decoded in a loop one quad (or other small chunk) at a time.

          Useless technicality.
          The bytecode interpreter is the malware to be detected, at that point.

          Supervisory programs such as antivirus will not be able to see what you are doing, nor scan or analyze the code, because it is not contained in mapped memory; they cannot "see" what you are loading from and running.

          They'll be able to see the bytecode interpreter. That is the target.
          This isn't new. Viruses that are little self-contained VMs have been around for over a decade.

          • by arQon ( 447508 )

            Useless technicality.

            The bytecode interpreter is the malware to be detected, at that point.

            And if the bytecode interpreter is eBPF? You know, the built-in one already running with kernel privs?

            How about the multiple interpreters in the browser you have running all day that already has a constant non-zero CPU load? Or, god forbid, a JVM?

            This is in no way a "useless technicality". That's exactly the same denial that leads to the problems that we've already seen over and over and over again in the past, from Flash, to JS miners and cred stealers, to even Spectre/Meltdown/etc: EVERY time, someone who

            • by arQon ( 447508 )

              That's "you" as in "people in general", not you personally, given your past work. :)

            • And if the bytecode interpreter is eBPF? You know, the built-in one already running with kernel privs?

              Kernel can't run code contained in said page.

              How about the multiple interpreters in the browser you have running all day that already has a constant non-zero CPU load? Or, god forbid, a JVM?

              Na. The browser runs JS. Antivirus solutions aren't currently looking for malicious JS bytecode anyway (particularly since the format changes with the JS engine). As for the JVM, the jar is still loaded locally, and even if it could load another jar into protected memory (it's not clear how it would do that; it's not something the JVM allows), there would still be evidence of what happened.
              The tell-tales would stink to high heaven.

              The reason this is a useless technicality is because

          • by mysidia ( 191772 )

            Useless technicality.
            The bytecode interpreter is the malware to be detected, at that point.

            No... the interpreter will be something like libpython, or bash, or one of the other legitimate interpreters contained in the many programs used by something such as busybox or systemd; that way, anything using that as a detection rule to auto-kill the interpreter as malware would also brick many systems and falsely detect many programs and files as malware, 99.9999% of which are legitimate.

            Just a common interpreter which is a ubiquitous construct within legitimate software, simply manipulated at runtime into reading its code from this memory address space instead of the address space that interpreter would normally read from.

              No... the interpreter will be something like libpython, or bash, or one of the other legitimate interpreters contained in the many programs used by something such as busybox or systemd; that way, anything using that as a detection rule to auto-kill the interpreter as malware would also brick many systems and falsely detect many programs and files as malware, 99.9999% of which are legitimate.

              This glosses over so many details that I'm unsurprised that it makes sense to you.
              Your injected libpython still needs heap. It's going to allocate it as it needs it. Even if the code is hidden away from the kernel.
              Infected process text and data segments will be riddled with tell-tales.

              Just a common interpreter which is a ubiquitous construct within legitimate software, simply manipulated at runtime into reading its code from this memory address space instead of the address space that interpreter would normally read from.

              See above.

              Furthermore, the fact that you could detect that it has happened after the fact is irrelevant - the point is not to make a theoretically undetectable event, but that those evil malware authors can now hide the contents of the malware payload to block possible analysis of its behavior AND to block determination of IF there is any malicious conduct by the payload, and what exactly the conduct and conditions of it are.

              Now this is just silly.
              You can always dump that segment. The fact that the kernel cannot read it doesn't mean much at all.

              • by mysidia ( 191772 )

                You can always dump that segment. The fact that the kernel cannot read it doesn't mean much at all.

                You won't be able to dump the segment - it sounds like even the system "hibernation" feature will have to be blocked; surely PTRACE and any system calls related to tracing or capable of killing or dumping a process must either be rejected with an EACCES or zero the secure memory after interrupting the process but before providing control to any kind of debugger or tracing tool. Otherwise the secure memory feature would not be able to accomplish its specified purpose.

                • You won't be able to dump the segment

                  Yes, you will.
                  The kernel doesn't have it mapped. That does not mean you can't dump the segment.
                  Via ptrace, you can inject code into the target process and dump whatever you like.

                  surely PTRACE and any system calls related to tracing or capable of killing or dumping a process must either be rejected with an EACCES or zero the secure memory after interrupting the process but before providing control to any kind of debugger or tracing tool. Otherwise the secure memory feature would not be able to accomplish its specified purpose.

                  This is incorrect. You're confused about what the purpose of this memory is.
                  The purpose isn't to prevent all outside access to the memory; it's to prevent accidental disclosure outside of system security mechanisms.
                  The easiest way to be sure of that, is to make sure that the mapping isn't live in kernel context.

                  ptrace

                  • by mysidia ( 191772 )

                    ptrace_* is not disabled.

                    Maybe check the code... The discussion of this feature says that mmap() and ptrace() by the root user are blocked, and the test cases contain tests specifically concerned with verifying that ptrace operations on the secret memory areas are blocked -- meaning there is no facility provided to map or read any of this memory from outside the process.

                    This means there is no facility to routinely read such memory areas. It would be necessary to resort to some destructive action such as tampering with the Linu

                      Maybe check the code... The discussion of this feature says that mmap() and ptrace() by the root user are blocked, and the test cases contain tests specifically concerned with verifying that ptrace operations on the secret memory areas are blocked -- meaning there is no facility provided to map or read any of this memory from outside the process.

                      Don't need to. I've been following this for a long time.
                      ptrace is not disabled. ptrace peeking will simply fail because the page is not mapped in the kernel; the same goes for process_vm_readv. It's not explicitly disabled; the page just isn't mapped, so the call will fail (see the sketch below).
                      However, ptrace does not need to directly observe memory via PTRACE_PEEKDATA.
                      Another way is to use the process you are attached to to read its own memory and report it back to you.
                      You can do this within gdb with some work. I imagine it
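
                      As a sketch of the direct path failing (assuming, per the above, that the word fetch fails because the kernel has no mapping to read through; the exact errno is an assumption):

                          #include <errno.h>
                          #include <stdio.h>
                          #include <sys/ptrace.h>
                          #include <sys/types.h>

                          /* Read one word at addr in an already-attached tracee.  Against a
                             memfd_secret() page this should fail: the fetch goes through the
                             kernel, which has no mapping to read through. */
                          long peek_word(pid_t pid, void *addr)
                          {
                              errno = 0;
                              long word = ptrace(PTRACE_PEEKDATA, pid, addr, NULL);
                              if (word == -1 && errno != 0)
                                  perror("PTRACE_PEEKDATA");   /* errno value is an assumption */
                              return word;
                          }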

    • by AmiMoJo ( 196126 )

      The Linux security model is not great at dealing with this kind of thing. If apps could simply declare some limits, e.g. no access to this function, or maximum 16 bytes of secret memory, it would mitigate a lot of other issues too.

      A robust permission system with fine-grained controls.

    • So what happens when a program is written to stuff the majority of its code into this "secret memory"? Let's say, for the sake of argument, that the piece of code gets introduced as a security exploit that took over an existing program...

      bzzt! Before you begin your hand-wringing, why don't you simply find out the answers? Basic details of what it is? So you know if you're spewing like an idiot, or raising a real concern?

      It would be a low bar. Surely even you can manage it? I hope?

  • ... and implement an emulation of Signetics WOM chips

    https://www.baldengineer.com/l... [baldengineer.com]

  • Padlock fallacy (Score:5, Interesting)

    by sphealey ( 2855 ) on Friday August 20, 2021 @04:48PM (#61712973)

    This seems to be edging close to the padlock fallacy that trapped one of my coworkers many years ago: if the maintenance team has the tools to install the indestructible padlock on the day shift, they also have the tools to uninstall the indestructible padlock on the night shift.

    • by Lanthanide ( 4982283 ) on Friday August 20, 2021 @05:44PM (#61713155)

      If the maintenance crew are locking your co-workers in their offices with unbreakable padlocks, you should probably find a new job.

    • But you're saying the padlock is there and needs to be removed, right? There's no such thing as perfect security. There are only ways to make something more difficult for an adversary, and your padlock does this. You can start by vetting your maintenance team. Security requires some element of trust. Always.

  • by fahrbot-bot ( 874524 ) on Friday August 20, 2021 @04:52PM (#61712979)

    From TFA:

    In version 17 (February 2021), memfd_secret() was disabled by default and a command-line option (secretmem_enable=) was added to enable it at boot time. This decision was made out of fears that system performance could be degraded by breaking up the direct map and locking secret-area memory in RAM, so the feature is unavailable unless the system administrator turns it on. This version also ended the use of CMA for memory allocations.

    And that leads to what is essentially the current state of memfd_secret(). It took 23 versions to get there, ...

  • I'm skeptical (Score:5, Interesting)

    by swillden ( 191260 ) <shawn-ds@willden.org> on Friday August 20, 2021 @04:58PM (#61713005) Journal

    I'm dubious of the value of this feature. I suppose it does ensure that secrets won't be swapped out, or sent to kernel syscalls, which makes it a little harder to inadvertently send secrets to other processes, but it can't actually hide the data from the kernel, which could easily just map it back in at any time. This means that an attacker who gets code execution in the kernel can always map it in and read it.

    But maybe there are cases in which this is the best that can be done... and it is better than nothing. Perhaps I'm spoiled; I expect to have either hardware crypto engines with write-only registers and the ability to derive secrets from hardware-bound secrets which are never exposed to RAM at all, or else an isolated execution environment or even a discrete secure processor where I can keep and use secrets, with appropriate usage restrictions. But if you for some reason don't have anything like that, memfd_secret() isn't completely useless. Just don't overestimate the level of security it gives you, because a kernel compromise, or possibly even just privesc to root, would allow an attacker to map all physical memory and scan it. Or to inject code into the owning process to retrieve and copy the data, or... the attack vectors are many.

    • In conjunction with other EPYC security features. [amd.com]

      • In conjunction with other EPYC security features. [amd.com]

        Those features are useful. I fail to see how this kernel feature usefully interacts with any of them, though. Can you expand on what attacks the combination thwarts?

    • Re:I'm skeptical (Score:5, Informative)

      by DamnOregonian ( 963763 ) on Friday August 20, 2021 @05:37PM (#61713143)

      but it can't actually hide the data from the kernel, which could easily just map it back in at any time.

      Sure it can. But it doesn't.
      A compromised kernel can absolutely get around this. A non-compromised kernel cannot.
      No mistakes in read authorization can make the kernel read this memory, because it can't. It's simply not mapped. And it's not cached, so it can't be leaked through a side channel, either.

      • but it can't actually hide the data from the kernel, which could easily just map it back in at any time.

        Sure it can. But it doesn't. A compromised kernel can absolutely get around this.

        This is my point. And the kernel is hardly difficult to compromise.

        • Then what's the problem?

          If the kernel is compromised, I'd say running code from a page that isn't mapped in.... the kernel... is hardly a problem, now is it?
        • Might as well set the root password to "password" then, right? Since perfect security isn't possible, why bother to take steps to make things more secure?

          There is an unplugged computer at the bottom of the ocean, that is partially secured, and then there is everything else that is just wide open. Right?

        • This is my point. And the kernel is hardly difficult to compromise.

          This is like asking what the value of vfs permissions or process caps is.
          If we are to take your standpoint on it, what's the value of selinux? What's the value of encrypted file systems? What's the value of passwords?

          There is no such thing as hiding things from a compromised kernel, obviously. That is not a reason to abandon security measures that still work short of that point.

      • Suppose I am looking for secrets. Perhaps I want to get that secret decryption key out of that proprietary blob some proprietary software vendor insists on using-- because fuck that shit, and fuck black boxes.

        So, I write software that asks the kernel "Is the currently mapped page actually the physically next page?"

        If the kernel says NO (because the kernel MUST be able to know this information for the accounting behind this syscall to function), then I know that one of two things has happened.

        1) T

        • Suppose I am looking for secrets. Perhaps I want to get that secret decryption key out of that proprietary blob some proprietary software vendor insists on using-- because fuck that shit, and fuck black boxes.

          So, I write software that asks the kernel "Is the currently mapped page actually the physically next page?"

          Now that's a trick. The kernel isn't going to give you that information. Physical address information is none of user-space's business.

          If the kernel says NO (because the kernel MUST be able to know this information for the accounting behind this syscall to function), then I know that one of two things has happened.

          Absolutely the kernel knows.
          If you've interpreted anything here to mean that the kernel itself doesn't know about the page... then it's time to start from the beginning and re-read.

          In either case, I can simply have a dumb routine that trawls the entire address space, and flags every instance where memory is noncontiguous (in the physical sense). That data is then fed to another routine, that then requests that the kernel map the page of physical memory into the address space, and give me a handle to it.

          This makes no fucking sense.
          I'm really trying to figure out what you're talking about, because you seem fairly knowledgeable.
          Are we talking about something running in kernel context?

          At that point, we can use a custom made kernel module to get around the kernel telling us no, and get that page mapped into the kernel module's memory space, request a copy of the page into an unprotected page, and get the super secret sauce.

          Wait, we're

          • 1) Userland does not need to know / userland cannot know.

            This is not true. See also /proc/(pid)/pagemap (a sketch of reading it follows after this comment):
            https://www.kernel.org/doc/Doc... [kernel.org]

            Using reverse deduction we can determine which physical pages are NOT mapped, quite easily, from userland.

            Additionally, we can get a very good guess at how many total physical pages there are to begin with, by querying the SMBus for the RAM modules' EEPROMs.

            https://damieng.com/blog/2020/... [damieng.com]

            The only final piece needed is the "mapped, but unallocated" memory. We could kludge
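
            For anyone who wants to experiment, here is a minimal sketch of reading the pagemap interface mentioned above. The entry format follows the kernel's pagemap documentation; note that the PFN bits read as zero for unprivileged callers on modern kernels.

                #include <fcntl.h>
                #include <stdint.h>
                #include <sys/types.h>
                #include <unistd.h>

                /* Return 1 if the page containing addr is present in RAM, 0 if not,
                   -1 on error.  Each pagemap entry is 64 bits; bit 63 is "present". */
                int page_present(const void *addr)
                {
                    int fd = open("/proc/self/pagemap", O_RDONLY);
                    if (fd < 0)
                        return -1;

                    uint64_t entry = 0;
                    long pagesize = sysconf(_SC_PAGESIZE);
                    off_t offset = (off_t)((uintptr_t)addr / (uintptr_t)pagesize)
                                   * (off_t)sizeof(entry);

                    ssize_t n = pread(fd, &entry, sizeof(entry), offset);
                    close(fd);
                    if (n != (ssize_t)sizeof(entry))
                        return -1;

                    return (int)((entry >> 63) & 1);   /* bit 63: page present in RAM */
                }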

              This is not true. See also, /proc/(pid)/pagemap https://www.kernel.org/doc/Doc... [kernel.org]

              Neat, I did not know about that. I guess it makes sense (in a way), but the PFN could just as well be an opaque reference used to identify uniqueness.

              Using reverse deduction we can determine which physical pages are NOT mapped, quite easily, from userland.

              It would seem that is indeed the case, but I'm not sure how it's relevant.

              It would take a while to do that, but being persistent would eventually give you a list of physical pages that the kernel is simply refusing to map, and you could get it from userspace.

              Agreed. We could derive never-map PFNs.

              2) Indeed. I don't know where you are getting your absurd notion that I think it would not-- as stated, it would NEED to know that, in order to have accounting to keep track of which pages it has earmarked for secret(), to prevent them being mapped by some other allocation method. The fact that the kernel does indeed know this is rather obvious-- but the behavior of the kernel due to it having this knowledge can be exploited-- see above.

              The absurd notion came from you stating the obvious. I said, if you felt the need to state the obvious, perhaps you're confused about what has been said.

              3) Asking for a physical page is something that userland applications really should never need. Virtual addressing is used for a reason. About the only time this would not apply is when you have shared memory on a PCI card, and even then, that is something that should not be handled by userland.

              Again- your thought process is all over the place. Who said userspace should be allowed to ask for a physica

  • by JoeyRox ( 2711699 ) on Friday August 20, 2021 @05:00PM (#61713011)
    That memory can be used to store cryptographic keys or any other data that must not be exposed to others. Reportedly, it is even safe from processor vulnerabilities like Spectre, because secret memory can be mapped uncached.

    FTA:

    Since the beginning, memfd_secret() has supported a flag requesting uncached mappings (memory that bypasses the memory caches) for the secret area. This feature makes the area even more secure by eliminating copies in the caches (which could be exfiltrated via Spectre vulnerabilities), but setting memory to be uncached will drastically reduce performance. The caches exist for a reason, after all. Andy Lutomirski repeatedly opposed making uncached mappings available, objecting to the performance cost and more; Rapoport finally agreed to remove it. Version 11 removed that feature, leading to the current state where there are no flags that are specific to this system call.
  • The article says that this capability also disables hibernation to avoid writing sensitive data to an external medium.

    • Which is strange, since some media are SEDs (self-encrypting drives). [crucial.com]

    • This will make it unusable on a large number of laptops/PCs.
      Is the Linux kernel able to re-enable hibernation once all secret memory areas have been deallocated?
  • Want Widevine level 3 support on Linux? I bet something like this would be required. Just guessing...

  • OK, this is a pretty crappy addition to the kernel in most respects. It suffers from the same problems that any security mechanism faces, which come back to the very old and very valid discussion of key exchange... except here public-key cryptography doesn't seem to be the answer.

    Let me also point out that reading the comments on the linked thread was like visiting a day-care center and listening to children argue over which superhero could beat up another one.

    Then I was thinking about this in terms of containers.

    This i
    • tldr; "This feature isn't needed for my use cases, so it must be crap."

      Weak sauce whining.

      • So, you didn't read it... and then commented like that... without seeing that I concluded it could be useful once the infrastructure supports it?

        Interesting.
  • At Last (Score:2, Interesting)

    Linux catches up with one of the features FreeBSD had over a decade ago.
  • Because it means malware can hide in there. Who had this bright idea?

    Well, after reading the description and discussion, it looks more like this is a bogus idea, because the kernel can still get to this memory. It just takes a bit more effort.

  • Couldn't a compromised kernel just inject executable code into the process and access it anyway? Or even re-map the page?

    I mean, it's harder to access, but it doesn't sound that much harder anyway.
