Linux Software

PC Week Reviews 2.2

wardk wrote in to send us linkage to a PC Week article that reviews Linux 2.2. It starts off by saying that Linux is Enterprise Ready, then talks about corporate support and improvements in SMP. Check it out.
  • Heh, yeah, our company is currently migrating its Netware servers to NT. They have (I think) somewhere on the order of 4,000 Netware servers, and are already at about the 10,000 mark with NT, and they're not even finished yet!

    And that's on better hardware.

    *shudder*
  • Yah! I noticed that too and thought it was the one chuckle-worthy spot in an otherwise decent article.

    I think that GUI-dependent types may indeed think that tweaking textual config files is actually tweaking source. By this definition every Unix sysadmin tweaks source every day! I guess stepping up from there to editing Makefiles makes one a hacker eh?


  • and it's mighty snappy. Ever since installing the kernel and modules, I haven't once thought of going back to 2.0.x. 2.2.x is just too good! :)

    The only reason I'd upgrade to 2.2.1 at this point is to get around that core thing... not that big a deal as my machine is primarily a desktop.
  • Posted by Jeremy Allison - Samba Team:

    > We ALL know that the 'Linux 2.0 doesn't scale,
    > but 2.2 is on par with NT4' shit is crap.

    Well actually, no. The curves and benchmarks that Henry publishes in his article are *completely* correct. I know, as I helped him run the benchmark.

    Personally, apart from the "tweaking config files as source code" error (and what I actually tweaked was rc.local, to change the max open files, max inodes and bdflush parameters so the machine would reboot with the correct settings, not the smb.conf file), I thought the article was completely accurate and very positive. Thanks Henry!
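
    For the curious, these are plain /proc knobs on a 2.2 kernel; a minimal sketch of the sort of lines that go in rc.local (the values here are illustrative, not the exact ones from the benchmark):

    # raise the system-wide file and inode limits (illustrative values)
    echo 16384 > /proc/sys/fs/file-max
    echo 65536 > /proc/sys/fs/inode-max
    # retune the buffer-cache flushing; the nine fields are described
    # in the kernel source
    echo "40 500 64 256 15 3000 500 1884 2" > /proc/sys/vm/bdflush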

    Now the interesting question is why the NT Netbench numbers on the same machine in the same network configuration were not published as well......

    But that's another story (and should be covered more fully later :-) :-).

    Regards,

    Jeremy Allison,
    Samba Team.
  • Nope, you're right, they were trying to say that applications won't benefit from the new capabilities, when it's obvious that they will, if they are properly written.

    Examples:

    glibc (threads)

    make -j (fork :)

    The Gimp (plug-ins are separate processes)

    graphical mp3 players (fork off the GUI, the player, and sometimes even the oscilloscope)

    Compare to Photoshop on the Mac, the only application that supported *any* kind of extra processor usage for a long time. :)

    Or, for that matter, NT, which apparently has much the same problem unless you configure it properly and say the magic words and stuff... (I have yet to see this.)
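
    To make the point concrete, here's a minimal sketch of the fork idiom (the crunch_* functions are hypothetical stand-ins for real work); each half runs as its own process, so the 2.2 scheduler can put them on different CPUs:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* hypothetical halves of a job */
    void crunch_first_half(void)  { /* ... */ }
    void crunch_second_half(void) { /* ... */ }

    int main(void)
    {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {            /* child: one CPU */
            crunch_first_half();
            exit(0);
        }
        crunch_second_half();      /* parent: the other CPU */
        wait(NULL);                /* reap the child */
        return 0;
    }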
  • Always love the fact that they mention that Linux/*nix in general has a steep learning curve.

    Should we bring up the fact that there are GIGANTIC VOLUMES written about the registry? I know I've seen several ~2k page books... given that my Perl book is less than 1k pages, and that's for a complete programming language...

    We need to destroy this FUD tactic.
  • What initially starts out looking like a very nice article turns into something awful. What I don't understand is comments like "... now on a par with NT 4.0 on similar hardware..." when we all saw the Smart Reseller article pointing out that Linux _before_ 2.2 is faster than NT.

    Then it goes on to say "the community developing Linux moves at a snail's pace". What? When? Where? Last thing I knew I was installing Red Hat 5.1, and now I'm on the 2.2 kernel. When did any other OS vendor move that fast?

    Finally, they end on "Linux is still in flux". Who writes this FUD? Linux is far, far, far from being "in flux". NT2000, now there's an OS that's in flux. Linux 2.1.x was in flux. 2.2, by definition, isn't, barring bug fixes and minor improvements.

    Disappointing, although as they say - all press is good press.

    Matt.
    --
  • Hey - what's wrong with those Win32:: perl modules anyway :-)

    Matt - Win32:: module author.
    --
  • Sorry, I just don't totally get this statement:

    dearth of applications that can exploit
    new SMP capabilities.

    It's under the Linux Negative side at the bottom of the page. Is there some exploit that screws up SMP Linux boxes running 2.2.0 or something? Or is this his flowery way of saying that there aren't many things that really use SMP yet?

    Believing it's the latter, isn't that total BS, because:

    • Since in general source is available for almost every app, compile-time flags and SMP optimization can be done for just about any app? (Doesn't SAL [kachinatech.com] have a pretty big section about compilers for specific needs like this?)
    • Even applications that are not SMP optimized benefit from a good SMP implementation, since this is supposed to be a server and more than one job will be happening at a time, so there is more than one CPU to use?

    Yes, this isn't my subject and I could be totally wrong, but I didn't think that statement was totally true.

  • Or 'pico', if Emacs is still too much for them. :op
  • You know, it's really funny. Compaq mentions off-hand that it intends to rebrand Digital Unix to Tru64 Unix, and the whole world takes notice - lots and lots of mainstream articles in the week since then.
    Whereas, there's a groundbreaking new release of Linux, and hardly a peep. Meanwhile, Linux in the press in general is healthy, but nothing concentrating on the release. I wonder if it's because of incoherent media co-ordination inherent to the open-source model? Any ideas?
    --
  • > You gotta use threads or separate processes and that requires a whole different approach than some applications use. You'd basically have to rewrite the code to use parallel algorithms.

    Not true. There's an automatic speed increase when more than one job is running, since there's more CPU time to go around.

    > Note that netscape or quake are not threaded, therefore you wouldn't get any performance gain with the exception that you could be using the other processor for other processes.

    Netscape is threaded. Also, the X server runs in a separate process and can be scheduled on the other CPU, so that UI operations can run asynchronously on another CPU. While Quake isn't multithreaded, the extra CPU keeps routine system operations (like email and web serving) from slowing down your game. SMP makes for a massive improvement when you're playing quakeworld on the same machine that runs the server, since the server runs in a separate process and can be scheduled on the other CPU.

    > This might be good for something like ftp or http, where processes are forked off the main process (or inetd or whatever), but not for something like an SQL database.

    It makes for a great improvement with HTTP servers, especially when lots of CGI scripts are running. SQL databases like PostgreSQL fork a separate process for each client connected, and I believe MySQL uses threads. Both SQL servers can easily be scheduled across multiple CPUs to decrease execution time when multiple queries are run.

    > All this talk about smp starts me drooling for an smp motherboard to learn more about all this stuff :)

    Intel PR440FX, $100 on most discount hardware and auction sites. Has two PPro CPU sockets, 4 DIMM sockets (up to 512MB), USB, 4 PCI slots, onboard UW SCSI, 2 IDE ports, onboard sound, onboard temperature, voltage, and fan speed monitoring, and 100baseTX Ethernet.
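
    Once it's running, checking that 2.2 actually sees both CPUs is trivial, assuming you built an SMP kernel:

    $ cat /proc/cpuinfo    # should list both processors
    $ uname -v             # an SMP build includes "SMP" in the version string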

  • Hmm, let's see: pico, emacs, xemacs, jed, joe, axe, and the list goes on and on. I once counted, and SuSE had at least 10 text editors. Hell, you could use WordPerfect if you wanted to.

    Of course I use vi, because everywhere I go it exists. (At least on any machine I want to use.) It fits on a boot floppy, and once you know how to use it, it's damn fast. You could say the same for emacs.
  • by Palin ( 3182 )
    Since when has Windows been able to "scale on high-end PC servers"? From what I understand, unless the application was specifically written to take advantage of multiple processors on NT, the OS itself would make little use of the other processors. This is what I've been told by people who write SQL database software for Informix/MS SQL/etc.

    Now that I've read the whole article, they (PC Week) still missed the 'Linux' point. Oh well... Maybe one day we can all live in a free software world....
  • Why do these columnists always think it's important to mention what text editor they use? As if you can't modify a Samba config file with anything other than vi.
  • Did you not know that you only need to change your registry to make your workstation a server? I guess Microsoft does not want the world to know that they are not charging more for Server because it is different, but because they can.
    Here, go to this link and see the light; maybe even write a program that can bring others into the light.

    http://www.uni-duesseldorf.de/ftp/pub/doc/books/oreilly/examples/windows/win95.update/ntnodiff.html
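
    If memory serves, the whole Workstation-vs-Server distinction that page documents hangs on a single registry value (double-check it yourself before betting anything on my memory):

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\ProductOptions
        ProductType = "WinNT"      ; Workstation
        ProductType = "ServerNT"   ; Server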
  • If you can code, or have a friend who can code, you can write a program like they did that intercepts Microsoft's calls and keeps NT from turning the settings back. And that's that.
  • > I'm an expert C hacker. I can do C++ and perl too.

    Good for you.

    > I tolerate all the config files with inconsistent syntax. If forced, I can even deal with fucking useless Scheme or LISP.

    Personally I like LISP a lot; you should have a look at Haskell too.

    > Makefiles are another matter. I have a template one with explicit dependencies, so I normally just hack something out of that. For more advanced stuff though... Check this out:

    Makefiles are a very useful general purpose tool. Give functional (kinda LISP) and rule-based (Makefiles) programming styles a fair go; it'll expand your mind much more than learning yet another imperative language will.

    _Groovy_ horizontal rule.


    modules: $(patsubst %, _mod_%, $(SUBDIRS))

    $(patsubst %, _mod_%, $(SUBDIRS)) : include/linux/version.h
    $(MAKE) -C $(patsubst _mod_%, %, $@) CFLAGS="$(CFLAGS) $(MODFLAGS)" MAKING_MODULES=1 modules

    Let's not start with this one :-)



    > What is that shit? Here is some more, from a different package:


    %.o : %.c
    $(strip $(CC) $(CFLAGS) -c $^)

    Now %.o : %.c means this is a generic (also called implicit) rule for making any .o file (say foo.o) out of a .c file (foo.c).
    The left-hand %.o is the target and the right-hand %.c is the dependency. Make makes targets out of dependencies and saves (the computer's) time by only making the target if it is older than the dependency (i.e. if the target is out of date).

    Okay, consider $(strip $(CC) $(CFLAGS) -c $^); start from the inside and work out.
    $(CC) and $(CFLAGS) are make variables defined previously (probably at the top of the Makefile).
    $(CC) is your C compiler, say egcs, and $(CFLAGS) holds optional flags to be passed to your compiler (for optimization, debugging etc). Like I said it's optional, so let's assume it's blank.
    So $(strip $(CC) $(CFLAGS) -c $^) expands to $(strip egcs -c $^)

    Now $^ is a special makefile variable (also known as an automatic variable). It evaluates to the list of dependencies, in this case there is just one %.c, so if foo.o was being made it would evaluate to foo.c.
    So we are left with $(strip egcs -c foo.c)

    The strip function in make strips out redundant whitespace, so this evaluates to egcs -c foo.c

    Cool, onto the second rule:
    oldps w uptime tload free vmstat sessreg utmp: % : %.o
    $(strip $(CC) $(LDFLAGS) -o $@ $< $(LIB_TGT) $(EXTRALIBS))

    Now this is simple: it'll make any of the (multiple) targets (called %) by first making the file %.o and then executing the command
    $(strip $(CC) $(LDFLAGS) -o $@ $< $(LIB_TGT) $(EXTRALIBS))
    where $@ is the target being made and $< is the dependency. So executing make oldps will result in Make firing the second rule, with target oldps and dependency oldps.o. This will result in the first rule being fired, which will result in the command
    egcs -c oldps.c
    being executed if oldps.c is newer than oldps.o. Now that the first rule has succeeded in making all its dependencies, it executes
    egcs -o oldps oldps.o
    if oldps.o is newer than oldps.

    That wasn't so hard, was it? Now go change the makefile so it takes header (.h) files into account.
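
    One way to do that homework, assuming all the .c files share the same headers: list the headers as extra dependencies, and switch the recipe from $^ (all the dependencies, which would now include the .h files) to $< (just the first one, the .c file):

    HDRS = $(wildcard *.h)

    %.o : %.c $(HDRS)
        $(strip $(CC) $(CFLAGS) -c $<)

    (Remember the recipe line has to start with a tab.)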
  • Mr. Chowdhry,

    While it is most appreciated that you are testing Linux, esp. the new
    kernel, a few points need to be made.

    "The community developing for
    Linux moves at a snail's pace."

    Applications dealing with user friendliness have advanced much more
    quickly than their Microsoft counterparts. Keep in mind that Linux is a
    multi-user OS (as compared to Microsoft's single user OS, NT) and the
    kernel will not be released until it is ready to be. Commercial
    interests are nice on the desktop but they do not affect the development
    of the kernel, as the Linux community is under no commercial pressure to
    release beta quality products under the guise of being enterprise ready.
    Kernel development typically is much slower than application
    development. Already both the KDE and GNOME desktops have more
    functionality than the GUI of MS-Windows.

    Installing Linux from a raw kernel is something that IS difficult. But
    you cannot compare installing a raw development kernel with installing
    a commercial product release. I can install a Red Hat 5.2 server in 13
    minutes and have it network ready with only one reboot. The same cannot
    be said for Windows NT.
    My 8 year old daughter installed Red Hat Linux at her school for science
    day. No help from a grown-up needed. Red Hat 6.0 will have an even
    easier install.
    Again, the point is: do not paint Linux with a broad brush, as Linux is
    a comprehensive operating system with many facets.

    "However, I
    find it difficult to applaud a product
    for mimicking the capabilities of its
    competitors--"

    This is an interesting comment, as Linux has been 64-bit on the Alpha
    since 1995. Microsoft Windows NT is still not 64-bit. The clustering
    ability in NT is little more than a fallback for overload and does not
    compare with the PVM and SMP abilities of Linux. Clustering means
    something different to a Linux engineer than to a Windows MCSE.

    While it is true that many OS vendors have been providing SMP for a
    while, hardware specifications had been closed up until recently, making
    development much slower than needed.

    "Like Linux, Samba requires IT managers to dig into source code to
    master it. We could reconfigure Samba 2.0 servers using the simple new
    Web interface, but for more advanced configuration chores, such as the
    reconfiguration of file cache memory, we had to use VI editor to
    manually tweak our settings for optimal performance."

    Any tweaks that are needed now will be fixed in the upcoming commercial
    releases of Linux. By the way, the smb.conf file is not source code; it
    is simply an ASCII text configuration file.
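
    For instance, a minimal smb.conf sketch (share name and path made up):

    [global]
       workgroup = MYGROUP
       server string = Samba Server

    [public]
       path = /home/public
       read only = no

    Plain text, one parameter per line. Not a line of C in sight.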

    "Although administration lags, technical support option for Linux
    are improving."

    I'm not sure what you mean by administration. Linux (as any good
    OS) should have a technically competent individual to administer it.
    LinuxConf (available at http://www.solucorp.qc.ca/linuxconf/) is a
    fairly comprehensive administration solution allowing everything from
    email, users, firewall, ftp, Samba, Apache, virtual domains, etc.
    I haven't seen anything from Microsoft that comes close, either on
    NT 4.0 or the NT 5.0 beta with its management console.
    Caldera has the LISA tool as well.

    I commend you for your efforts at evaluating Linux; however, I would
    ask you to evaluate all the possibilities available for it.

    Good feedback on some of the technical aspects of your article can be
    found by viewing http://www.slashdot.org and looking at the comments
    section. Just a warning though: some people there can be rather harsh,
    so you may have to dig for the gold nuggets in the mudslinging. ;-)

    Take Care,

    Sincerely,

    Nicholas Donovan
    Linux Systems Group
  • This is a case where the author of the article in PC Week was most likely a Windows person who wants to be a Unix person, so throwing out neat words like vi makes him feel so important...;-)

    Nick
    LSG
  • Remember what I said about a month ago: M$ will say "Linux is a worthy competitor to NT 4.0."

    After the trial they will say
    "Linux is nothing compared to Windows 2000"
    and of course ZD-Net and its ilk will fall in lock step.

    Trust me. This is exactly how it will go down.

    Nick
    LSG

  • > Trust me. This is exactly how it will go down.

    Give me one reason why I should trust you...

  • Well, if you'd look at those books, you'd discover that 90% of what's in there are the keys for your desktop wallpaper or screen saver or whatever. I'm sure if you published a book with every parameter used for the Linux kernel, KDE, WindowMaker, GNOME, Wine, X, sendmail, etc., it would be 5 times as thick as those NT books.

    Personally, I don't think the idea of a central configuration database is all that stupid. What's dumb is the structure that MS chose to implement, with application settings scattered to and fro in a manner such that not even the installers can keep track of it all. However, even in the mess that it's in, it's a lot more consistent than the myriad of Unix config files with their widely differing structures.

    Furthermore, the stated design of the Registry was that it was not supposed to be user-editable. Of course, there's a ton of useful settings in there that MS provided no dialog box for. For example, MS DNS (joke) virtually requires that you dig around in the reg. Furthermore, enforcing policies requires detailed knowledge of the reg.

    (BTW, they are somewhat improving the situation with a scripting interface that exists now in IIS and is system-wide in Win2K. I also should add that I've never seen an NT registry go corrupt the way a 9x registry is certain to do.)



  • Actually there is one cache-related setting you can only change in the source: something about honoring the "write directly to disc" flag. It turns out Windows clients use this bit more often than they should, so there's a big performance gain in disabling it. I read it some time ago on the Samba site; it might not be valid in 2.0.

    (sorry about the English)
  • How can one magazine print that Linux is just now getting to NT's performance level (Linux on a 386 vs. NT on a PII 450, maybe), while another magazine, supposedly owned by the same company and using the same testing lab, says Linux "kicks NT's butt"?

    ??????
  • Now comes the millennium (well, almost) and all the trade rags are falling all over themselves to praise Linux (while they're at it, they should be praising *BSD; that's one of the pass/fail tests for these mainstream observers, to see if they understand what they are talking about rather than just repeating the latest pack-journalism buzz).

    But be careful what you wish for. Behind the buzz is the inevitable backlash when people discover that Linux is *not* like NT and that GUI-based program installation and system management is a bit funkier and requires a different mindset (thank Gopod for that).

    However, as anyone who has wrestled with the update-and-rebind-and-reboot SEVEN times process when trying to work on TCP/IP configuration in Win 95/98/NT will attest, this is where Linux has it all over the Microsoft stuff. I installed DHCP on my Debian box, hooked it up to my DSL router and it just . . . worked. Not so with NT, which required a fair bit of finagling to figure out a very simple change I needed to make.
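
    For comparison, the Linux side of "it just worked" can be as small as one command; a sketch assuming the ISC client (your distribution may ship pump or dhcpcd instead):

    # run at boot, e.g. from an init script; no reboot dance required
    dhclient eth0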

    phred@sunlight.portland.or.us

    -------
  • People just don't get it. The only market where NT Server dominates is the "workstation acting like a server" market. I know of no one that relies on NT for any SMP tasks. People use NT as a domain manager for other Windose boxes. It is not, never has been, and probably never will be an enterprise server like Sun, HP, AIX, and Linux. People spend too much time reading staged press releases. Win NT is a joke, and people should begin to realize this. IT IS NOT AN ENTERPRISE SERVER; it is a small workgroup domain server. All IT people know this, and all managers should learn this!
  • Too bad they wrote it before 2.2.1
  • "The advanced caching will make Web servers, such as those from the Apache Group, much faster once scalable versions of the software for Linux become available."

    Why would Apache need to be changed?
    Why wouldn't it be faster unchanged on 2.2?
    Did someone tell ZDnet this? Or did they just "pluck it from the air"?

    FUD, hidden in good press, obscured by compliments... but it's still FUD.
  • OK, so I am lame because I don't know all the acronyms; flame away. But could someone point me to some sort of document that gives meanings for a good # of them?

    Thanks

    "Trouble is, just because it's obvious doesn't mean it's true"

  • 2.2.0 has a really nice atyfb (the ATI framebuffer driver); too bad they didn't highlight that!
