Linux Software

SuSE and Siemens Release Linux Memory Extension 150

hussar noted that SuSE and Siemens have developed a memory extension that will allow Linux to use up to 4GB of memory. Linus has reportedly approved its inclusion in kernel 2.3.15. The strangest part is that excite has taken to posting about Linux kernel patches. Pretty crazy stuff.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Not on my desktop sadly, but the latest UNIX server we bought here has 2GB of memory and that didn't cause too much excitement.

    That kind of memory is pretty commonplace - and Linux needs to support it.

    Has anyone played with SGI's journaled file system for Linux? I am glad to see this major shortcoming being looked at.
  • Hello,

    Yes, it was key to supporting IRIX. :) In the right crowds, I'm sure that's an important feature. :)

    What's important to note and understand is that everyone, and I mean EVERYONE, has a different idea of what is and isn't a good feature. And believe me, I get a fair amount of feedback on my articles and I rarely can please everyone. There's always the person that just isn't happy until the PC internal speaker is supported, or until the kernel can do ISA PnP (it can now), or until it can do some other completely bizarre task that seems trivial to us at first, but then you realize how critical this can be under /just/ the right circumstances.

    That's what makes Linux great: we have such a selection of great things that we have to please everyone a little; but we'll never please everyone entirely.

    As for me, this 4G thing seems pretty silly. :)

    Joe
  • Parts of the FreeBSD kernel are actually under GNU...

    Most certainly not!


    Berlin-- http://www.berlin-consortium.org [berlin-consortium.org]
  • HP-UX 10 can go up to 3.7 gigs.

    HP-UX 11, being a 64-bit OS, breaks the 4 Gig limit, but IIRC the actual limit varies a bit between "patch levels". They're moving it upwards all the time, I seem to recall.

    --

  • wow that's a user of my irc client! ;)
  • just a typo...
  • The issue may be one of whether the extra 4 physical-address lines are actually connected to the world off the chip; that might not be the case on all P6-series processors, even though the processor on the die might contain all the hardware necessary to put non-zero values on those lines.
  • No, that's what Linux has supported all along, and it limits you to 2 GB of _physical_ RAM.

    Yes, NT (unless you turn on the right boot flag, which might work even in non-enterprise editions) has 2GB of the virtual address space of the machine available to user-mode code and 2GB for kernel-mode code, just as had been the case for Linux all along.

    However, I'm not sure that limited you to 2GB of physical RAM on NT, as NT might not require that all of physical memory be mapped into the kernel's address space.

    The new Linux patch supports 4 GB of physical RAM and unlimited user space.

    Linux has, for a long time, supported "unlimited user space" in the sense that user-mode code can map stuff into and out of its address space, so that the no-more-than-4GB window you get on a processor with 32-bit linear addresses (e.g., an x86 processor) can be moved around a more-than-4GB set of data. The patch doesn't change that.

    However, that's true of most OSes that run on x86 processors, these days.

  • You don't need to use segmentation to get at >4GB of physical memory.

    In fact, segmentation doesn't even help, given that the x86 MMU maps 48-bit segmented addresses to 32-bit linear addresses before running said addresses through the page table and translating them to 32-bit or 36-bit physical addresses.

    However, for any single process to access more than 4GB, it does have to do something that amounts to bank switching, i.e. map stuff into and out of its no-more-than-4GB linear address space as necessary; the same applies to kernel-mode code and the kernel-mode portion of the address space.
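
    The bank-switching idea described above can be sketched in miniature. This is a toy model only - sizes and names are invented for illustration, and a real OS remaps the window by rewriting page-table entries rather than moving a pointer - but it shows the mechanism of a fixed-size window moved over a larger physical memory:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy model of bank switching: a fixed-size "linear" window that can be
     * pointed at different regions of a larger "physical" memory. */

    #define BANK_SIZE 256   /* toy window; think gigabytes in reality        */
    #define NBANKS    16    /* "physical memory" larger than any one window  */

    static uint8_t physical[NBANKS][BANK_SIZE];
    static uint8_t *window;  /* the only view the fixed address space affords */

    /* remap the window onto another bank, as an OS would by editing PTEs */
    static void map_bank(unsigned bank) {
        window = physical[bank % NBANKS];
    }

    int main(void) {
        map_bank(3);
        window[0] = 42;           /* write through the window into bank 3     */
        map_bank(7);
        window[0] = 99;           /* same "address", different physical bytes */
        map_bank(3);
        assert(window[0] == 42);  /* bank 3 kept its data while unmapped      */
        printf("bank 3 holds %d\n", window[0]);
        return 0;
    }
    ```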

  • The enterprise edition of Windows NT Server does make use of this.

    NT 4.0, or NT 5.0^H^H^H^H^H^HW2K? I don't think NT 4.0 supports more than 4GB of physical memory, but W2K, at least in the DataCenter Server edition, will.

    See, for example, this note on "Address Windows Extensions and Windows 2000 DataCenter Server" [microsoft.com], and this press-release-like document [microsoft.com] which indicates that this is new in W2K and not in NT 4.0, but also seems to imply that memory above 4GB will be available to the page pool (I'd seen stuff that gave me the impression that it would be wired-down memory that had to be specially mapped into a process's address space, but perhaps that's not the case).

  • The segment refers to a page table

    No, it doesn't. It just has a 32-bit base linear address, a 20-bit length (which can either be in units of bytes or 4K pages), and a bunch of other flags. A 48-bit far address gets translated to a 32-bit linear address, and that is what gets translated to a 32-bit or 36-bit physical address via the page table.

    See (this page that has a link to the PDF document) Intel Architecture Software Developers Manual Volume 3: System Programming [intel.com] for the full story.
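
    The first translation stage described here can be sketched as plain arithmetic: the descriptor holds a base and a limit (no page table), and a far address becomes a linear address by adding the segment base. The struct and names below are invented for illustration; only the field widths follow the description above:

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* A segment descriptor, per the description above: a 32-bit base linear
     * address and a 20-bit limit (in bytes or 4K pages) - no page table. */
    struct segment {
        uint32_t base;    /* 32-bit base linear address              */
        uint32_t limit;   /* 20-bit length, byte or 4K-page granular */
    };

    /* selector:offset -> 32-bit linear address; the result is what then
     * goes through the page table to become a physical address */
    static uint32_t to_linear(const struct segment *seg, uint32_t offset) {
        return seg->base + offset;
    }

    int main(void) {
        struct segment code = { 0x00400000u, 0xfffffu };  /* example values */
        assert(to_linear(&code, 0x1000u) == 0x00401000u);
        return 0;
    }
    ```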

  • thanks for the update - I think. Would be better to hear that it was working great and I should switch to it now. Not having a decent journaled FS is a major downside to Linux in my opinion. Who wants to put a lot of disk on a server and then have it take 3 weeks (ok a slight exaggeration) to fsck. Not me.

    Actually I do have a couple of servers with a small amount of disk on them (16GB each as I recall) and the main problem is how slow NFS is.

    still, kernel-level appletalk support is nice. It's all swings and roundabouts.
  • by Anonymous Coward
    Windows98 supports more than a gig.
  • God, I like FreeBSD (maybe even a bit more than I like Linux, in fact), and I use it in both my gateway and my laptop, but looking at some of its supporters, maybe they should rename it FreeBSE and be done with it...

    I really expected better from the user base than a bad case of Linux Envy...
  • yep. for all the fears people had (and FUD that got spread) about the commercialization of Linux, here we see the Open Source (and specifically the GPL) magic working just right. Large companies like Siemens suddenly find it (financially) worthwhile to contribute to an open project. this is a huge ball that got rolling; it's not just a bunch of hackers competing with MS (and Solaris and ...), now it's large companies and their resources, too. and to think that some MS drone was quoted saying that "the linux hype has peaked" just a few days ago... boy he's in for a surprise :)
  • by Anonymous Coward
    I didn't know that P6's had
    36-bit memory addressing...

    Yes, P6's have 36-bit physical memory addressing; this does not mean that virtual (linear) addresses are 36 bits. (Some other 32-bit processors, e.g. 32-bit SPARC processors or processor modules with the SPARC Reference MMU, also supported more than 32 bits of physical address. Heck, PDP-11s with MMUs often supported either 18 or 22 bits of physical address, even though they only supported 16 - or 17, sort of, if you count split I&D space - bits of virtual address.)

    Does *any* OS take advantage of this currently???

    Dynix, on Sequent's servers, might - they support much more than 4GB of physical memory, and if the NUMA part of "NUMA-Q" just means that accessing "other people's" memory is slower, not that you have to play bus-mapping games to get at it, they may use the 36-bit features to do that - although I thought I saw something indicating that some of their machines supported 128GB, which requires 37 bits.

  • what's the point of giving 2GB to the OS? I know the NT kernel is obese compared to Linux, but still... that sounds like some serious overkill to me, and using almost half of your machine's memory as page cache (in Linux terms, I'm sure NT has an equivalent that's called something else) doesn't strike me as a very balanced use of a large box.
  • according to some posts in linux-kernel, the BSDs have never made it a big priority to avoid TLB flushes, and Linux has. the BSDs do get performance in the same ballpark as Linux, so I guess it's a decent tradeoff either way (If I understand right, Linux goes more for latency, and the BSDs more for bandwidth).

    on a completely unrelated point, I wonder if this new 4GB support is a compile-time option, or if it can be somehow enabled at runtime without a performance hit when it's disabled. that would be neat, but it does sound a bit improbable for a change like this.

  • yeah it does. it has to involve mapping memory zones on demand instead of keeping the same lot of things mapped at the same time. the good old equation that 32bit = 4Gb doesn't mean that you can put 4Gb of RAM in your machine and use it: you need virtual addresses for all kinds of pages that are either paged out or mapped to disk and set to load on demand. and Linux so far has been mapping kernel memory while userlevel code runs too, with just the permissions changed, b/c it's cheaper to do this than to remap it for every syscall and unmap it on the way back. I have no idea how much of this scheme this new patch changes.
  • heh, so much for the "code freeze in 2 weeks" thing :) oh well we've known that Linux code freezes are quite fluid after all, and that's probably a good thing. good software comes out when it's ready, not when someone decides on a deadline.
  • woohoo! moderated up! i know, i know, pointless comment, moderate me down i dare ya. i'd like to point out however (before you do) that each app gets 2GB and not all apps sharing 2GB in userspace like my previous otherwise wonderful post *cough* seems to imply. somebody else posted somewhere that the PIII has 36-bit addressing letting it potentially use 64 gigabytes of RAM. who wants to help me write the linux kernel patch for that :)
  • We need 3rd party support by manufacturers for products that they sell to make linux more professional
  • ...and Andrea works for SuSE, you ... (ok, I don't wanna get downgraded for this posting)
    --
    Michael Hasenstein
    http://www.csn.tu-chemnitz.de/~mha/ [tu-chemnitz.de]
  • by Nemesys ( 6004 ) on Tuesday September 07, 1999 @09:44AM (#1696817)
    This is another of those tiny little telltale signs that the media has been won over. This story shows they even seem to "get" the kernel development process. However, they're putting it in terms that the mainstream can understand: Big computer company Siemens helps littler Linux computer company SuSE to write an extension for Linux. Kernel boffin also cooperating.

    My best example of this was when I saw a rehash of Pravenich's 2.4 kernel thing on a tech news site - they'd duplicated his mistakes, and were brainwashing the masses with the stuff! ;)

  • The GNU thing is certainly untrue.

    And *when did I ever say bsd has no drivers*.
  • Absolutely. With many other O/S' ability to support more than this, and the growing demand for the high end servers to require this much memory, maybe linux will start to be used there, and perhaps trickle through some corporate infrastructures. We can hope :)
  • That would be funnier if what you said about MS were true and didn't contradict.

    Office 2000 can now run on Linux with 4GB.
    W2K only supports 128M?
  • This is part of what any OS needs to stay competitive. First is to be able to use the existing technology not necessarily the bleeding edge, but what is out there and what is becoming popular. Bleeding edge is the stuff you add in developer's releases.

    The second and more important IMHO is app support. You need to be able to run things on this nifty OS (whatever OS that is). Now i favor a more specialized approach, using the right OS for the job. Realistically I could care less if I can use a word processor on a web server. And for a desktop only machine I can care less if I can run a web server.

    The 4GB memory capability is something that has been needed. A webserver or any server that needs high performance really doesn't need to be bogged down by the I/O subsystem.

  • None since w2k is no longer supported on non-intel architecture (Compaq recently stopped NT development on Alpha)
  • We especially need coverage in mainstream press. How many PHBs look at/go to /read excite? How many other sites get their summary news bites from excite?

    Things are on the up and up indeed.

  • I'm interested if anybody knows what are the limits of Win 9x/NT, BEOS, *BSD, MacOS,...

    Linux rox!!!
  • The current MacOS can access up to 1.5 gigs (even thought the new high end G4s will let you put 2 gigs in).

    I think NT will let you access 4 gigs. At least that is what I understood from the whole Mindcraft fiasco...

    I'm not familiar enough with the others to answer...

  • A big company does something, they send out a press release. Nothing new there, it is just that it is about Linux.

    Now, that someone has come up with a bigmem patch that Linus will live with, THAT is news!

    This is a big deal for some users. A real shocker will be if someone comes up with a patch to use the 36 bit addressing on the P6 cpu, for up to 64GB ram on Intel machines.
  • i'd guess it doesn't affect it at all. ia64 has plenty of address space, so there's probably no need to play remapping tricks there.
  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Tuesday September 07, 1999 @10:58AM (#1696829)
    what's the point of giving 2GB to the OS?

    I'm not sure, given that the "enterprise" edition lets it use only 1GB, but note that this is virtual memory, not physical memory - this doesn't mean NT is consuming 2GB of physical memory.

    and using almost half of your machine's memory as page cache (in Linux terms, I'm sure NT has an equivalent that's calle dsomething else)

    I infer from the stuff I've seen in the "Inside Windows NT" books that it has a common page/buffer cache; the first edition of Inside Windows NT says

    8.1.2.4 Mapped File I/O and File Caching

    ...

    ...When a thread opens and uses a file, the file system tells the cache manager to create an unnamed section object and maps the caller's file into it. As the caller uses the file, the VM manager brings accessed pages into the section object from disk and flushes them back to disk during paging. ...

    which is similar to what some UNIX systems (e.g., SunOS 4.x, SVR4-based systems, FreeBSD, etc.) do; I have the impression that if the Linux kernel doesn't yet have a unified page/buffer cache in a stable kernel, it's going into the 2.3[.x] line.

    Most of memory is, in effect, a page cache on these systems, unless you count only pages not currently being used as cached, in which case most of memory not used by pages currently being used is a page cache. In addition, the buffer cache (in the sense of the cache of pages read in from files) is just part of the page cache.

  • It's not unusual really. It fits into excite's usual MO.

    For the non-journalists/editors in the crowd, I'll point out that this story was A) from a newswire service (check the DATELINE--ITS IN ALL CAPS LIKE THIS) and B) the story was obviously a press release (witness the "About SuSE" section of the article -- an obvious shameless plug for the company putting out the press release).

    Excite's news section consists almost entirely of newswires because it's cheaper than writing your own stories and they generally don't require too much editing (because they are written by professionals who know what newspaper editors like to see in an article in terms of structure, content, syntax etc.)

    FWIW, I used to design/edit/publish several newspapers for non-profit veterans groups like the AMVETS and PVA...at least until I got a "real" job in the IT field. :)

  • by kijiki ( 16916 ) on Tuesday September 07, 1999 @11:03AM (#1696831) Homepage
    I find it somewhat telling that the article didn't mention Andrea Arcangeli and Gerhard Wichert, working at SuSE and Siemens respectively, who wrote this patch pretty much by themselves. I suppose with the corporatization of linux, the companies are more important now than the actual people who make linux what it is. At the very least, a link to Andrea's archived message on l-k would give credit where it is due.
  • Offhand I don't know about Solaris, SCO, or anyone else. I'd be a little surprised if Sun didn't do it, although I'd also understand if they left it out to encourage UltraSparc sales.

    I vaguely remember reading that Solaris now supports it, although finding detailed technical information on Sun's shiny new Marketing-Driven(TM) Web site looks as if it'd demand more patience than I have.

    It's probably a question of whether any "commodity" x86 machines support it; if not, then Solaris for Intel, and UnixWare, may not support it either.

  • In both cases Intel uses segmenting

    Not in the case of P6-core machines; segmentation turns a 48-bit segmented address into a 32-bit linear address. They're using paging, instead, i.e. the page table entries, in one of the 36-bit-physical-address modes, generate more than 32 bits of physical address from 32 bits of (linear) virtual address.

    Any one process would have to map stuff into and out of its address space to use more than 4GB (or 4GB minus what kernel-mode code takes) of physical memory, but

    1. the OSes that run on the sorts of machines that would have that much physical memory probably let processes do that (NT, and most, if not all, UNIX-flavored OSes, definitely do);
    2. those OSes also probably support more than one process.

    Kernel-mode code could also map stuff into and out of its part of the address space.

  • by Espressoman ( 8032 ) on Tuesday September 07, 1999 @11:17AM (#1696835)
    This is great work. Just think what will happen when the SGI big memory project is ready. Check out http://oss.sgi.com/projects/bigmem/. Wow. Two big memory solutions. I just don't know which to choose. Oh. Hang on. I don't have four *fricking* gigabytes of RAM....
  • You're right. Big thanks Andrea and Gerhard. That's some fine hacking. You is good people.
  • by cksmith ( 7566 ) on Tuesday September 07, 1999 @11:30AM (#1696838)
    I think this is yet another example of how excellent the open source model is:
    • The hardware vendor gets to show off their hardware.
    • The hardware vendors gains some purely positive publicity and goodwill from the community
    • Everyone else benefits from the contribution.
    Plus, a significant contribution can be merely the seed for further developments, since anyone in the world can read the patch and contribute their own. It's a win-win situation all around.
  • Wow, it's the author of my irc client ;)

    Ok, so I'm posting too much in this non-thread.

  • I've used a gig. Nothing special. Just never swap, and buffers up the wazooo.

    Well ... it depends what you do. Where I work (in the EDA industry), most of our boxes are 2Gb, and we have a few 4Gb boxes, and that's not enough... We still swap like crazy for some jobs.

    I unfortunately can't find more than 4Gb on UltraSparcs (our main platform) without going to a large server. 8(

  • However, it appears that most of the 64 bit architectures (Alpha and UltraSPARC) don't use 64-bit addressing. They use something more like 39-bit or 42-bit -- something in that general neighborhood.

    39-bit = 549755813888 = 512 gigabytes.
    42-bit = 4398046511104 = 4 terabytes.
    45-bit = 35184372088832 = 32 terabytes.

    You only need 4 terabytes of memory if you're altavista or fedex and you want to keep your entire database in memory at once. Otherwise a couple-hundred gigabytes should do you fine. :-)

    (I wish I could remember the specific figure...)
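
    The byte counts quoted above are just 2^bits; a two-line helper checks them:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* bytes addressable with a given address width */
    static uint64_t bytes_addressable(unsigned bits) {
        return (uint64_t)1 << bits;
    }

    int main(void) {
        /* the figures from the comment above */
        assert(bytes_addressable(39) ==   549755813888ULL); /* 512 GB */
        assert(bytes_addressable(42) ==  4398046511104ULL); /*   4 TB */
        assert(bytes_addressable(45) == 35184372088832ULL); /*  32 TB */
        printf("all three figures check out\n");
        return 0;
    }
    ```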
  • I believe the 36 bit addressing only allows you to use memory beyond 32 bits for paging; buffers, cache, etc. Logical addressing (program addresses) is still limited to 2GB or 3GB or whatever. So if you put 64G on your quad Xeon factory heater (tm), programs still max out at 4G. But you get plenty of disk buffers....

    --
  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Tuesday September 07, 1999 @11:56AM (#1696843)
    Logical addressing (program addresses) is still limited to 2GB or 3GB or whatever. So if you put 64G on your quad Xeon factory heater (tm), programs still max out at 4G.

    True, in the sense that at any given instant of time, the linear address space of a process can be no larger than 4GB (and, as segmented addresses get translated to 32-bit linear addresses before being run through the page table, segmentation doesn't help).

    However, as I noted in another post, a process could, if the OS lets it (and most OSes you'd run on big machines do, these days), map stuff into and out of its address space, and more than one process could exist on the machine, so you can use it for userland code, userland data, kernel code, kernel data, mapped files, buffer cache, whatever, just as you can use the first 4GB - it's just a question of what the OS does or lets applications do.

  • On a dual xeon with 2GB, I was able to run vmware sessions of 95, 98, and NT all at the same time under linux. Besides having the machine taking up four IP addresses, it all ran very fast.

    Unfortunately, my boss did not like the idea of running MSSQL server that way, and the ServerRAID would not work (IBM has since released drivers, however.)
  • >>...
    which it certainly couldn't this time last year.

    Not true. BW is a commercial service that distributes unedited press releases. Excite runs all of the BW content. (It may be crap, or it may not ... that's for the audience to decide. Investors find press releases, however self-serving, to be useful.)

    The avenue has always been open to Linux. All it takes is a public-relations department. This is not a case of Linux becoming important enough for the general media to cover; it is a case of a Linux company becoming sufficiently aware of How Things Work to get the word out through commonly available channels.
  • Solaris/x86 supports 4GB in versions 2.6 and (2.)7. 2.5 only supports 2GB. Having more hangs the system :-(

    I had read that recent revisions of 7 would support 36 GB on P6 machines; however I guess that functionality was removed.
  • by Hasdi Hashim ( 17383 ) on Tuesday September 07, 1999 @12:55PM (#1696852) Homepage
    Just a quick note to everybody. It is not about being able to use 4GB of physical memory. It is to enable a process to use more than 2GB of memory. The traditional Linux memory model is to split the address space: the lower half for user memory and the upper half for kernel memory. To check if a pointer is pointing to kernel memory you just need to check its MSB.

    testl $0x80000000,%eax
    jz user_mem_label

    I think they have worked on 3GB prior to this. Sorry, been a while since I checked the kernel lists.

    Anyhow, this is only of practical use to database developers. maybe some but not many. In any case, you might as well use a 64-bit architecture.

    Hasdi
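
    The MSB check above, rendered in C (note the mask needs all eight hex digits, 0x80000000). This assumes the 2GB/2GB split with the kernel in the upper half; the function name is invented:

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* bit 31 set => address is in the upper 2GB => kernel memory,
     * assuming a 2GB user / 2GB kernel split */
    static int is_kernel_address(uint32_t addr) {
        return (addr & 0x80000000u) != 0;
    }

    int main(void) {
        assert( is_kernel_address(0xC0000000u));  /* upper half: kernel */
        assert(!is_kernel_address(0x08048000u));  /* lower half: user   */
        return 0;
    }
    ```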
  • ...from Siemens, not a media story.
  • Has anyone played with SGI's journaled file system for Linux? Argh.. I submitted this the other day, but apparently Warcraft 3 was more interesting.. (You call it sour grapes, I'll call it wine...) 'ext3' (ext2 fs w/journaling) has been officially released as a testable beta. also, reiserfs is close (like, days) away from releasing a journaling patch. check out www.devlinux.org for more info. to paraphrase Hans Reiser, "XFS (on linux) is more of a well-funded press release than a well-funded software project." -- blue
  • This is offtopic I know, but I was wondering what came of the Mindcraft result that Linux's network code isn't multithreaded? I remember hearing that Andrea was working on something to do with the Mindcraft results, and I was wondering if that issue is being addressed.
  • Agreed, this is the lkml announcement:
    http://boudicca.tux.org/hypermail/linux-kernel/1999week34/0442.html

    Rob made an "Update" and gave credit to the people behind those companies.

    For Italian people: Andrea will talk in Padova check http://meeting.pluto.linux.it/ for more
  • IIRC, a swap partition or swap file can only be about 128 MB under linux, with a maximum of 16 swap spaces, leading to a total of 2GB of swap space. Since we can now use 4GB, how are we supposed to allocate enough swap space (I prefer 2 times the physical memory).
    Has this annoying restriction of 128MB been removed, or can we use more swap spaces (let's see, 4 GB divided by 128 MB gives far too many swap spaces to be practical), or are we not supposed to use virtual memory any more ?
    If the restriction of 128 MB per swap space still exists, is there anybody working on removing this so Linux can become practical for modern computers? If the restriction is removed, is it possible to create a single 8GB swap space ?
  • Well, from experience, win 98 won't be stable with 1 gig. I once put 512Meg in a Win 98 system (we had 4 128M DIMMS at the shop, and I just wanted to see). Strange things started to happen, it blue screened quite a bit. It wasn't pretty. I would say that you could stick 4 gigs in a 98 machine, but it wouldn't stay running for more than 5 minutes.
  • by Anonymous Coward
    Yes,

    Since kernel 2.2 the 128MB limitation is gone. You only need a recent mkswap/swapon to use the new big swap partitions. I don't know if there is another limit; I'm happily using 1GB swap partitions on computers with 1GB RAM.

    -- Anderl
  • by MattyT ( 13116 )
    640Mb should be enough for anybody!
  • So what was the difficulty in allowing Linux to use 4G? I understand there was some issue with flushing certain buffers, so was this an x86-only issue?

    How is it that FreeBSD handles more RAM? Do they merely take a performance hit or do they have better kernel architecture?
  • Umm. Only the Xeon CPUs have the 36-bit address bus (versus 32-bit for the standard PIIs - and probably the standard PIIIs also) allowing addressing up to 64 GB of RAM. Also, using those top 4 address bits takes special OS support, and AFAIK Linux doesn't support that as of yet, unless these Siemens/SuSE kernel mods also allow for that.
  • Well, don't discount hardware (in)stability. Very often, pc motherboards deal with (lots of) ram, especially in lots of slots, quite poorly.
    What type of motherboard were you using?
    Try the same exact setup with linux, you'll probably have problems there too.
  • the new boxes themselves can hold 1.5 gigs of ram but the actual os (least in my understanding) can use more.

    though apps themselves can only use 999 megs
  • Code freeze? They should more call it a "congeal" rather than a freeze...
  • Does *any* OS take advantage of this currently???

    The enterprise edition of Windows NT Server does make use of this. They are doing the bankswitching thing, as with EMM.
  • What am I missing.. I remember ages ago a friend of mine put Linux on two 4-gig quad-processor Xeon boxes which at the time needed kernel patches to have it recognize the rest of the mem, but they got it working.. We only run solaris on here (w/ only a gig) but don't have this problem on intel boxes.. but (i'm at home now) this is what my xconfig 2.0.38 says :: under "General Setup" subsection "Memory Configuration" there are three memory configurations available: the standard allows use of just under a gig of ram w/ 3 gigs of virt space per process, the enterprise uses 2 gigs of mem but limits process space to 2 gigs.. the custom option allows you to specify the split, subject to kernel constraints.. Linux/x86 can use up to ~3.4 gigs of physical memory... Anyway it sounded vaguely like the NT kernel-space/user-space splitting you could set.. I know, no one is reading this anymore :(
  • Hold your horses. Last time I asked Andrea (who helped to write the 4 GB patch), he was working on the PAE extensions as well...

    So I'll bet you'll see it SOONER than you think
  • Where do you get 64 Gb from 32 bits of address space? It'd be more like 32 Gb (4 GBytes).
  • actually i stand corrected, after double checking (as all good /.'rs should do when posting out their ass) you are indeed correct about the os's only handling 1.5 gigs of ram, but to save face, i was also correct about the apps only being able to use 999 megs
  • I suspect Cutler copied it from VMS, which also splits the address space in half. When VMS was designed, 4 megabytes of physical RAM was a lot of memory.
  • The current MacOS can only address 1 Gb of memory.
    But this is going to change in MacOS X.
    The maximum memory every G2 Mac (7500,7300,8500,7600,8600,9500,9600) and G3 Mac (G3, G3bw, iMac, G3series powerbook, G4) supports is 4 Gb.
    This is going to rise to 1 Tb when Apple is going to switch to the G5 which is 64 bits.

  • Ditto.

    I'm surprised to see this sort of thing again. I can remember the first time I saw this kind of stuff, on the old Atari 130xe: 128 Kbytes through 'bank switching'...

    chris
  • SCO the King? Troll or bad joke?
  • Comment removed based on user account deletion
  • NT 4GB, giving 2GB each to userspace and OS. Enterprise NT allows memory intensive boxen to be configured 3GB for apps and 1GB for the OS. blah. don't know about geography or any other OS.
  • Hello,

    I could give you a dozen more examples, I'm sure. :)

    I have found that, in my experience, the media always seems to pick up on the "errors" and less frequently on the real features. Ah well. I say that it's a "draft" for a reason.

    Joe :)
  • by meersan ( 26609 ) on Tuesday September 07, 1999 @09:58AM (#1696890) Homepage
    Top 5 Reasons Microsoft Is Glad Linux Will Support up to 4G of RAM

    5) Office 2000 memory requirements now supported by Linux, making port much easier.
    4) Enhances sales potential of Windows 2000 -- WINE now able to run W2K under Linux.
    3) Yet another fun Linux feature to deny and obfuscate.
    2) Can complain before tech-unclued journalists about Linux's memory requirements -- 4G compared to W2K's 128M.
    1) Now that Linux supports 4G of RAM, it will be competition on the everyday Joe's desktop, thus making MS-DOJ trial irrelevant.

    (Darn Excite, slashdotted again.)

  • I'm not sure what the actual OS limitations are, but each program under WinNT can only access 2 gigabytes. Under Win9x a program can access 4 gigs, but will likely crash itself or the system if it accesses anything other than 1 of the gigs.
  • FreeBSD has been able to do 4G for many years now.
  • by tjrw ( 22407 ) on Tuesday September 07, 1999 @09:59AM (#1696893) Homepage
    You are confusing the virtual and physical address spaces.
    The Intel P6 line has a 36-bit physical address bus allowing the chip to address up to 64GB of physical memory. However, since it is a 32-bit processor, you can only see 4GB of this memory at any time. You can change what you can see by playing games with PTEs (page table entries), and/or segment registers.
    To address >4GB of memory requires use of either wide (64-bit) PTEs, or the strange mode Intel added to the PII, that allows large pages with narrow PTEs.
    The Sequent Dynix/ptx OS supports up to 64GB of physical memory. The hard part is conserving enough kernel virtual address space.
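
    The "wide PTE" mode mentioned above is what Intel calls PAE. How a 32-bit linear address gets carved up in that mode can be sketched as arithmetic - a 2+9+9+12 bit split, with 64-bit entries whose frame field can exceed 32 bits. The helper names are invented; the field layout follows the published PAE scheme:

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* PAE splits a 32-bit linear address 2 + 9 + 9 + 12: */
    static unsigned pdpt_index(uint32_t lin) { return lin >> 30; }           /*  2 bits */
    static unsigned pd_index(uint32_t lin)   { return (lin >> 21) & 0x1ff; } /*  9 bits */
    static unsigned pt_index(uint32_t lin)   { return (lin >> 12) & 0x1ff; } /*  9 bits */

    /* a 64-bit PTE's frame base (bits 35:12 on 36-bit parts) plus the
     * 12-bit page offset gives a physical address wider than 32 bits */
    static uint64_t physical_address(uint64_t pte, uint32_t lin) {
        return (pte & 0xFFFFFF000ULL) | (lin & 0xfffu);
    }

    int main(void) {
        assert(pdpt_index(0xC0000000u) == 3);
        assert(pd_index(0xC0200000u) == 1);
        assert(pt_index(0xC0001000u) == 1);
        /* a frame above 4GB is reachable even though 'lin' is 32-bit */
        assert(physical_address(0x800001000ULL, 0x234u) == 0x800001234ULL);
        return 0;
    }
    ```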
  • I didn't know that P6's had 36-bit memory addressing... Does *any* OS take advantage of this currently???
  • since linux can't run with more than 960 megs....
    "Subtle mind control? Why do all these HTML buttons say 'Submit' ?"
  • Remember segments from 16-bit code?

    They're still there, albeit in a different form.

    In 32-bit code, near addresses are much more common than far addresses, but far addresses are possible. 32-bit far addresses consist of a 16-bit segment and a 32-bit offset. The segment refers to a page table, a begining, and an ending. The page tables then refer to physical ram, which has 36 address bits these days.

    Environments that have all segments more or less equivalent are called 32-bit flat mode (as opposed to 32-bit segmented mode).

  • Gahhhh! Segment registers, yuck. You'd think we'd be done with them by now, but they're creeping back.

    I did a little 8088 programming back in the Eighties, and all I can say is "Never Again". I don't mind at all on a machine with a flat memory model, but I don't want to touch segments ever again if I can help it. It didn't help that those segments on the 8088 were only 64K in size.

    Let's just move to 64-bit processors, shall we?

  • You are completely wrong about this.
    The issue is not enlarging the virtual address space of a process, but enlarging the total physical memory available to the kernel. Processes still see only 3GB of virtual memory.
    The difference is that previously all physical memory had to be mapped into the kernel virtual address space (hence a hard limit on physical RAM, since only 1GB of virtual space is left for the kernel once processes get their 3GB). Now that is no longer true. There are two sorts of memory: normal memory, which is always mapped into the kernel virtual address space, and high memory (everything above 3GB, or, to preserve the 3GB virtual address space for processes, usually everything above 1GB), which is mapped on demand into a special virtual address range in the kernel (which is also the overhead of this model).

    Richard.
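    The on-demand mapping Richard describes can be sketched as a toy in C. This is not kernel code: `kmap_high`/`kunmap_high` are merely named after the real interface, and the sizes are made up. It only shows the bookkeeping, where normal pages need no mapping step while high pages must borrow a slot in a small fixed kernel window and give it back when done.

    ```c
    #include <assert.h>

    #define WINDOW_SLOTS 2   /* tiny on-demand kernel window for high memory */

    static int window_page[WINDOW_SLOTS]; /* which high page each slot holds */
    static int window_used[WINDOW_SLOTS];

    /* Map a high page into the kernel window; returns slot or -1 if full. */
    static int kmap_high(int page) {
        for (int i = 0; i < WINDOW_SLOTS; i++) {
            if (!window_used[i]) {
                window_used[i] = 1;
                window_page[i] = page;
                return i;
            }
        }
        return -1; /* window exhausted -- the overhead of this model */
    }

    static void kunmap_high(int slot) {
        window_used[slot] = 0;
    }

    int main(void) {
        /* Normal memory needs no mapping step at all; high pages do. */
        int a = kmap_high(100);
        int b = kmap_high(101);
        assert(a >= 0 && b >= 0);
        assert(kmap_high(102) == -1);  /* window full: must unmap first */
        kunmap_high(a);
        assert(kmap_high(102) >= 0);   /* slot freed, mapping succeeds */
        return 0;
    }
    ```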
  • geez, you guys are as bad as the Mac people. Now the Linux kernel can do this too. We all know FreeBSD can access 4GB of memory. Who cares?
    "Subtle mind control? Why do all these HTML buttons say 'Submit' ?"
  • As for me, this 4G thing seems pretty silly. :)

    For probably 99.9% of people, it is pretty silly. Most people/companies can't afford that much memory even if they had a motherboard that could handle it (most motherboards are limited to somewhere between 512M and 2G). This limit matters strictly for the very highest end 32 bit systems.
    I think it is mainly being done to eliminate one of Microsoft/Mindcraft's 'checklist' items, and there is certainly a level of justification for doing that sort of thing.
    However, there is also always that 0.1% that it really does matter to, I guess. In general, I would imagine that this probably just won't matter to most of us, but it will remove one of the perceived barriers against high-end Linux deployment.
    The difficult thing may be getting people/press to realize that things have changed. Witness the ongoing examples of people who still claim that Linux can't support multiple CPUs at all, when it has had at least rudimentary SMP support for quite some time. And of course they would never mention that Linux SMP support has improved some recently, and is going to be improving further in the next couple of major kernel versions.

  • Supports or demands? :)
  • well i am pretty sure about the 2GB split memory model back in 1.x (or 0.x?) if I read the kernel hacker's guide correctly. I guess it has come a long way since then. Back then, 2GB per process seemed like a lot, so this is like a cheap efficiency trick.

    In any case, the limiting factor is the kernel/user memory model rather than the kernel per se. I still think we'd be better off with a 64-bit architecture.

    Hasdi
  • Last time I looked W2K was only offering scalability beyond the classic 4 CPUs with 4Gb of memory for their "Datacenter" Server range, and perhaps the big memory stuff alone on "Advanced" Server.

    That means your heavy-duty stand-alone applications require you to purchase a very expensive product, which is tuned exclusively for network server and database work. You'll get an expensive set of server apps and licenses "for free" which you'll never use, and wait several weeks after the initial release for each and every service pack (they're always delayed for the high-end NT versions).

    If you're interested in spending this kind of money, run Linux or *BSD on hardware that's designed to have 64Gb of memory and 12 CPUs (eg UltraSparc or Alpha) and don't worry about stupid kludges from Intel or any other 32-bit vendor. When even Intel are telling you that IA32 is a dead-end, it's time to get off.

    In fact, if you've really got this type of job to do, your priority is probably scalability and performance, in which case a proprietary Unix on its own native hardware is going to look much more attractive. Want to buy some E450s?

    Nick.

  • The point is that the technology is in Windows.
    The fact that it's in a different product is because of marketing.
    Hey, why not charge more for that sorta stuff... Sun etc. do already.
  • 95 a limit of 64 or 128? uh NO.

    2GB system, 2GB user...same with 98.

    I've run 95 on 256 - so it definitely isn't anywhere near as low as you think.
  • We are currently running with 4GB memory on our 4x500 Dell 6350s. We are using it to run Oracle8, which is a story in and of itself.

    We have worked with an engineer at SGI on implementing the SGI 4GB patch which has been out for some time.

    There were some other issues with Oracle shared memory, but we finally resolved those as well. In other words, 4GB is alive and well and in production on our web site right now. I'd expect a writeup in the Linux Journal in a couple of months on exactly how we accomplished this and the implications.

  • Under Win9x a program can access 4 gigs, but will likely crash itself or the system if it accesses anything other than 1 of the gigs.
    I am not sure it's the memory amount that will cause 9x to crash so much as the amount of time it has been running. *grin*

    -sporty
    I like to moo it moo it.

  • FreeBSD [freebsd.org] on Intel supports 4 GB of RAM. I don't know about Alpha. FreeBSD also supports files of up to 8 TB on FFS. Note that this is considerably larger than 2 GB.
  • Certain CPUs(1) have 36 address lines instead of the usual 32. The extra 4 address lines are controlled by registers in the MMU, effectively giving you a maximum address space of 16 4GB segments. This, however, requires operating system support to manage the extra 4 address lines.

    (1) PPro and Xeon for sure. Dunno about PII and PIII. Certain PowerPCs also have this.
  • Actually, Excite didn't "cover" this, Siemens & SuSE released a joint press release over BusinessWire, and Excite just passed it along, as they probably do with everything that comes over BusinessWire.

    Maybe it's interesting that Siemens would consider a kernel patch significant enough to warrant a press release, although this doesn't seem surprising to me. It's a pretty significant new feature.

  • I've used a gig. Nothing special. Just never swap, and buffers up the wazooo. How much ram does ol' 98 support - I know 95 had a 64 or 128 MB limit. And I'll bet 98 can't do a fat gig. But then again, WTFDIK (what the F do I know...)
    http://www.bombcar.com It's where it is at.
    • How is that even possible, since I thought 32-bit architecture made 4 Gigs the max addressable limit, period.

    If you remember back in the old days of the 8086 and its friends, you had two values to address everything. Segment register, and index. Now some braindead monkey decided that a segment would be equal to 16 bytes (4 bits!) effectively cutting 12 bits off of what could have been 32-bit addressing on those old 16-bit machines. (16-bit segment, 16-bit offset.) Of course the reason was it was much easier to write 16-bit code if you didn't have to worry about wrapping your segment register in the middle of arrays and stuff, so as long as we used under 1 meg of ram, it seemed like a good thing.

    When they made the 80386, the mistake wasn't made twice. Now the 16-bit segment register points to a virtual page table, which means each 16-bit segment register 32-bit index pair is theoretically capable of addressing 4 gigs. It would be a bitch and a half I think, to use more than 4 gigs in a single process, but that doesn't rule out different processes using drastically different address spaces. So think about having 4 gigs per process and see how well you could use that.

    Frankly, I have no freaking clue what you'd do with it. Maybe I'd increase the size of my memory cache in Netscape...

    Also, I don't know what the implications would be for writing an operating system, as I have never done so. But I assume it would make things tricky. I'll just wait for the 64-bit machines, and enjoy the [insert huge number I don't know here] bytes of directly accessible RAM.
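    The old 16-byte-paragraph scheme described above is easy to demonstrate: a real-mode linear address is just (segment << 4) + offset, which is exactly how 12 potential address bits got thrown away on the 8086. A small check in C (the helper name is mine, not from any real API):

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* 8086 real mode: the 16-bit segment is shifted left only 4 bits,
       so consecutive segments start a mere 16 bytes apart. */
    static uint32_t real_mode_addr(uint16_t seg, uint16_t off) {
        return ((uint32_t)seg << 4) + off;
    }

    int main(void) {
        /* Classic example: the reset vector F000:FFF0 */
        assert(real_mode_addr(0xF000, 0xFFF0) == 0xFFFF0);

        /* Many seg:off pairs alias the same byte -- 16-byte granularity */
        assert(real_mode_addr(0x0010, 0x0000) == real_mode_addr(0x0000, 0x0100));

        /* Top of the (1 MB plus change) real-mode address space */
        assert(real_mode_addr(0xFFFF, 0xFFFF) == 0x10FFEF);
        return 0;
    }
    ```

    The protected-mode fix was just what the poster says: make the segment select a descriptor instead of a raw 16-byte paragraph, so each segment can span the full 32-bit offset range.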

  • 4 Gigs the max addressable limit in a flat space, not period. Otherwise we wouldn't have seen more than 64K on 8086/80286 machines, either. In both cases the CPU has more memory addressing pins than bits in registers: 16-bit registers, 24-bit addressing on the 286; 32-bit registers, 36-bit addressing on the Pentium Pro, II, and III. In both cases Intel uses segmenting to push a cheaper technology to grow a little bigger because the next platform is going to arrive far too late.

    The (impossible?) trick is to make segmented memory access "clean". Neither Linux nor *BSD has chosen to embrace the hoops needed to use large memory P6 machines. NT doesn't get full advantage of >4G either, you know: that RAM does have a 32-bit string attached. Offhand I don't know about Solaris, SCO, or anyone else. I'd be a little surprised if Sun didn't do it, although I'd also understand if they left it out to encourage UltraSparc sales.

    The K5, K6, K6-2, K6-3, M2, C6, et al have no such extensions, of course -- just Intel's patented P6 processors and buses.
