
Microsoft Claims Firms 'Hitting a Wall' With Linux

maxifez writes to tell us that Microsoft has released yet another independent study downplaying the viability of Linux at the enterprise level. The study claims that Windows is "more consistent, predictable, and easier to manage than Linux." From the article: "The study, commissioned by the software giant from Security Innovation, a provider of application security services, claimed that Linux administrators took 68 per cent longer to implement new business requirements than their Windows counterparts." Vnunet.com has also provided a PDF of the original report.
This discussion has been archived. No new comments can be posted.


  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Wednesday November 16, 2005 @04:05PM (#14046517)
    The key, as always with these "studies", is to find the portion where it deviates from Reality. That is, where it uses some strange definition or where the sysadmins choose some bizarre action.

    In this "study", that step into UnReality begins where all systems are required to stay on the same time-line for upgrades.

    This means that what would otherwise be a normal upgrade from SLES 8 to SLES 9 instead becomes a strange mix of back-porting patches from SLES 9 to SLES 8. In other examples, the sysadmins are downloading code from the glibc and mysql sites and applying it to those servers WITHOUT TESTING. So, over time, the SLES systems become unstable.

    Meanwhile, no non-Microsoft supplied code is applied to the Windows boxes.

    Of course, the one who commissions the "study" gets to choose the criteria ...
  • I don't get it (Score:5, Informative)

    by krgallagher ( 743575 ) on Wednesday November 16, 2005 @04:17PM (#14046647) Homepage
    The article says:

    "The study compared two teams of experienced IT administrators running Windows Server 2000 and Novell SUSE Enterprise Linux 8, then monitored their progress as they upgraded to Windows Server 2003 and Novell SUSE Enterprise Linux 9."

    But the PDF says:

    "Specifically, for the database server role, we considered three configurations; Microsoft SQL Server 2000 on Windows Server 2003, Oracle 10g on Red Hat Enterprise Linux 3 and MySQL on Red Hat Enterprise Linux 3. In order to produce a meaningful comparison of platforms, the systems studied were manually installed and their configurations were verified."

    Red Hat Enterprise Linux 3 is the only Linux distribution listed in the PDF. Also the fact that "the systems studied were manually installed" is probably why the upgrade was problematic. If you want your upgrade to be easy, install from the distribution, not manually. I also wonder why they did not test MySQL and Oracle 10g on Windows. There are Windows versions of these software packages. When you are comparing systems running different software, you are not just doing an OS comparison. You are also comparing the software packages. They might just as well have compared Red Hat Enterprise Linux 3 running Oracle 10g to Windows Server 2003 running Microsoft Access 2003.

  • Actually, it does. (Score:5, Informative)

    by khasim ( 1285 ) <brandioch.conner@gmail.com> on Wednesday November 16, 2005 @04:22PM (#14046712)
    Weekly reboots.

    Get a copy of Win2K3 on your box. Create a directory that's 3 directories below the root.

    Put 200,000 files in that directory (size of each file does not matter).

    Now, watch the application that reads and writes files to that directory get slower and slower over time. Until you need to reboot the box.

    For an instant problem, open that directory in Explorer. All of your processor speed will be eaten by the "system" process. Even after you close Explorer. Rebooting is the only thing that will clear the problem.
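    For anyone who wants to try it, here is a scaled-down sketch of the setup described above. Note the slowdown itself is a Win2K3-specific claim; this POSIX shell version only builds the directory layout, and the file count is reduced from 200,000 so it finishes quickly. All paths are temporary and invented for the demo.

```shell
# Scaled-down sketch of the stress-test layout described above:
# a directory three levels below a root, filled with many empty files.
# N is cut from 200,000 to 500 so this runs in a moment.
base=$(mktemp -d)/one/two/three     # three directories below a root
mkdir -p "$base"
n=500
i=0
while [ "$i" -lt "$n" ]; do
    : > "$base/file$i"              # file size does not matter, per the post
    i=$((i + 1))
done
count=$(ls "$base" | wc -l)
echo "created $count files in $base"
```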
  • Do you have any idea what you are talking about? NO

    -I have never had any issues with corruption.
    -The kernel doesn't need a bi-weekly recompile. It's up to you.
    -I also have no issues with KDE, I like it more than Gnome.
    -I don't have problems compiling software from online either.
    -Games? It has plenty of fun games, but it's not a gaming system anyway, most people use it for serious work.
    -No future? They've been saying that for years, yet somehow, I have no problems finding mirrors to get it.

    Perhaps whoever set up Linux for you just sucks.

  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Wednesday November 16, 2005 @04:33PM (#14046807)
    Where did you find that information? The PDF at the website seems to be a completely different study.

    The problems start at page 25. Here's the beginning:
    For SLES 8, all required and recommended security patches were applied to the system. The same criteria was applied for Windows patches. These patches were applied in 1 month increments to the system. On the SuSE side, during the one year period under study, patches were released for the core components from multiple sources spanning package developers, individual contributors in the open source community, individuals and corporations. In this analysis, we only consider those patches issued by the operating system vendor (Novell/SuSE). From an enterprise management standpoint, this is the most common scenario given that the chief benefits of using an enterprise Linux distribution is the compatibility testing done by that Linux vendor on patches and the support extended to administrators. By going outside this channel for patches, both benefits are forfeited. In the period from July 1st, 2004 to June 30th 2005 there were 187 patches that were applied to the system. Of these patches, 13 affected the kernel. While kernel patches did not require an immediate reboot during installation, the majority of them need a system restart to immunize the system against a specific vulnerability. In general, patch application on SuSE proceeded well and most patches installed without error or conflict. Beginning at Milestone 1 however, some upgraded components were out of support from SLES 8 and updates for those components had to be obtained from the package distribution sites. As of Milestone 1, MySQL patches were obtained from the MySQL distribution site and as of milestone 2, glibc and directly related packages were maintained through manually applying SLES 9 patches. 3rd party component installations were performed according to the installation procedures specified by those vendors.


    Whitepaper location:
    http://www.securityinnovation.com/reliability.shtml [securityinnovation.com]
  • LOL WINDOWS CRASHES (Score:4, Informative)

    by Mancat ( 831487 ) on Wednesday November 16, 2005 @04:34PM (#14046822) Homepage
    My 2000 Advanced Server uptime:

    C:\Documents and Settings\wysoft>uptime office
    \\office has been up for: 121 day(s), 0 hour(s), 39 minute(s), 23 second(s)

    Estimate based on last boot record in the event log.
    See UPTIME /help for more detail.

    Bite it.
  • In summary... (Score:3, Informative)

    by mikael ( 484 ) on Wednesday November 16, 2005 @04:37PM (#14046849)
    According to the article they compared the following platforms:

    Windows Server 2003 with SQL Server 2000
    Red Hat Enterprise 3 with Oracle 10g
    Red Hat Enterprise 3 with MySQL 3.23

    They measure two items:

    (1) The number of vulnerabilities reported over a period of time and
    (2) The average number of days of risk

    For each platform they record the number of security advisories reported for the kernel, libraries and all related applications. These include all low, medium and high risk reports.

    The time period was between March 1, 2004 and February 28, 2005, and only included those vulnerabilities fixed in this period.

    Unfortunately, they don't go into the exact details of each advisory. But here is the summary count:

    Windows = 63 (16 Internet Explorer)
    RHEL/Oracle = 207 (Linux kernel = 38, Oracle = 30)
    RHEL/MySQL = 116

    They then count the number of days until each security risk (low/medium/high) was fixed. These get accumulated and then divided by the number of reports filed to give the average number of days at risk:

    Windows = 31.98
    RHEL/Oracle = 38.73
    RHEL/MySQL = 61.64

    Obviously there is a bias here, as they don't explicitly list the security advisories, and the result is based entirely on the number of components that are considered to be needed for each server.
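    The arithmetic behind that last metric is just a mean: total days-to-fix summed across advisories, divided by the advisory count. A quick sketch (the five per-advisory numbers below are invented for illustration, not taken from the report):

```shell
# Mean "days of risk": sum the days-to-fix over all advisories,
# then divide by the number of advisories. Sample values are made up.
days_to_fix="12 45 7 90 6"
avg=$(echo "$days_to_fix" | tr ' ' '\n' |
      awk '{ sum += $1; n++ } END { printf "%.2f", sum / n }')
echo "average days of risk: $avg"   # -> average days of risk: 32.00
```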
  • Re:Well (Score:5, Informative)

    by Karzz1 ( 306015 ) on Wednesday November 16, 2005 @04:38PM (#14046855) Homepage
    What similar technology even exists in windowsland?
    Not to be a MS fanboi, but sysprep works pretty well alongside Ghost.
  • Re:My servers . . . (Score:1, Informative)

    by Anonymous Coward on Wednesday November 16, 2005 @04:43PM (#14046906)
    What are you running, NT 4.0?

    You shouldn't have to reboot more than once a month, if that.

    Not all patch Tuesdays require a reboot so there may be times where your server can stay up for months until the next update is required.

    Windows 2003 is far better than 2000, and especially NT 4, when it comes to rebooting.

    IIS 6.0 can automatically restart application pools after periods of time or if their health decreases.

    You shouldn't have to reboot once a week. Also, patch Tuesday was just recently so of course your uptime values are low.

    And while 99.9% is quite different than 99.999%, if you schedule the reboots during early AM hours, very few if anyone will be affected by the reboot, which lasts only a few seconds to a few minutes depending on the complexity of the hardware.
  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Wednesday November 16, 2005 @04:47PM (#14046941)
    The link posted in the story is not correct.

    Just click through and don't give them any info. You can still download it.

    http://www.securityinnovation.com/reliability.shtml [securityinnovation.com]
  • by Anonymous Coward on Wednesday November 16, 2005 @04:53PM (#14047012)
    I can see this being true. This isn't really a "Linux" problem, but an issue with the open source program Asterisk. This was really our company's first dive into Linux/open source. We hired a Linux guy and planned on implementing VOIP throughout the company. After about a month's worth of problems we finally switched back to the old system. It's not really an issue of open source as an idea, but rather that there is no company out there taking responsibility for things working and nobody to call when they don't. I don't know the specifics of what the problem was, but 68% longer to set up Linux systems seems more than possible just due to the scattered nature of the community and how things are put together. However, this has certainly improved over the past couple of years; it's just not at the Microsoft "ease of use" level yet. Just my 2 cents.
  • I'll bite (Score:3, Informative)

    by Vainglorious Coward ( 267452 ) on Wednesday November 16, 2005 @04:54PM (#14047022) Journal
    [Linux uptimes : 468, 331, 664 ; Windows uptimes : 3, 9, 11]
    My work machine and home machine both have better uptimes. And I've seen (laid my hands upon) windows servers with uptimes orders of magnitudes higher.

    Better than his Windows uptimes, or his Linux uptimes? Even if it's the latter (and I doubt that, see below), all that says is that you never apply updates to Windows. So you never update, yet you have the temerity to question his "fucking" windows admin skills?

    As to "orders of magnitudes" higher uptime, that means at least one hundred times better - I am quite confident neither you nor anybody else has ever seen a Windows server with *tens of thousands* of days of uptime.

    Maybe you should change your nick to everphullofshitski ?

  • by TubeSteak ( 669689 ) on Wednesday November 16, 2005 @04:57PM (#14047049) Journal
    I don't know if you RTFA, but I did...
    then I looked at the linked PDF and got confused,
    because that PDF is about database security.

    The correct Link:
    MS Summary Page [microsoft.com]
    The PDF [microsoft.com]

    [Your Complaint About /. Editors Here]
  • by Hymer ( 856453 ) on Wednesday November 16, 2005 @05:03PM (#14047109)
    I am sorry for you... especially if you REALLY think that 4 months of uptime is much for a server... I've (my place of work) got 5 AlphaServers with Tru64 which have run for 3 years (that is OVER 1000 days) without a reboot... and that is considered NORMAL in UNIX/Linux, NetWare, VAX, AS/400 and S/390 environments...
    --
    anything is better than Windows... well, almost anything...
  • by nubbie ( 454788 ) on Wednesday November 16, 2005 @05:12PM (#14047203) Homepage
    FTA:
    Acknowledgements

    This study and our analysis were funded under a research contract from Microsoft


    o_0
  • Re:Well (Score:3, Informative)

    by metlin ( 258108 ) on Wednesday November 16, 2005 @05:46PM (#14047493) Journal
    That much is probably true. Implementing some new process on a Linux box probably does take a bit longer. But here's the thing: Once it's done, it's done.

    True, for the most part.

    I've seen enough gawd-awful in-house software and scripts in Microsoft shops to know better than to be impressed by how much "faster" it is to adapt their shit. If you count all the down-time and set-backs which can happen after implementation, you probably ultimately save a lot of time by going with a Linux-based enterprise.

    Now I've a bone to pick with this point - the poor quality of code is by Microsoft shops, which is not really Microsoft's fault. I can point you to equally God-awful pieces of code by several "Open Source" shops, if you get my drift.

    Sure, Microsoft encourages writing easy code, but don't blame them because some MS shop decided to hire an MCSE/D who learnt to write a few lines of ASP and VB code and called himself a "programmer".
  • by mfifer ( 660491 ) on Wednesday November 16, 2005 @05:55PM (#14047573)
    Two of my Windows 2003 servers for this calendar year...

    File server:

               System Availability: 99.9786%
                      Total Uptime: 316d 14h:11m:34s
                    Total Downtime: 0d 1h:37m:29s
                     Total Reboots: 21
         Mean Time Between Reboots: 15.08 days
                 Total Bluescreens: 0

    Mail server:

               System Availability: 99.9859%
                      Total Uptime: 319d 15h:45m:56s
                    Total Downtime: 0d 1h:4m:43s
                     Total Reboots: 13
         Mean Time Between Reboots: 24.59 days
                 Total Bluescreens: 0

    For a small biz, we'll take 99.97/98% uptimes and be DAMN glad about it!  ;-)

    I'm nobody's Windows fan either (OSX is my preferred), but the claims of wild instability need to be taken with a grain of salt, IMHO...

  • Re:Well (Score:5, Informative)

    by FatherOfONe ( 515801 ) on Wednesday November 16, 2005 @06:05PM (#14047660)
    Active Directory is integrated, but going with any type of directory service makes the overall design more complex. Does it help "some" organizations? Yes, but you pretty much have to use AD if you want to use Microsoft. Now could someone please explain to me why Microsoft still uses Domains with AD? Doesn't a true directory service not use Domains? Also, can you have two people in different OUs on the same "domain" with the exact same name? Something like
    ou=marketing,uid=myLogin
    ou=hr,uid=myLogin

    with only one server?

    NTFS vs Unix file permissions. This used to be true but no longer; read up on ACLs in Linux and Unix, they have been around for a while. I would point to SELinux and say that Microsoft doesn't have anything that competes in this arena. Granted, this is somewhat complex and a lot of shops don't need it.

    IIS is easy to configure, but then again using YaST or any of the webmin tools makes Linux/UNIX a snap to configure. I would argue it is easier to admin a server with webmin than it is to learn all the Microsoft admin tools.

    SMS is finally a decent package for Windows only shops. So is WinInstall and other products.

    Oracle vs SQL Server. Oracle is free for one processor, 2GB of RAM and a 4GB database size. It runs on multiple platforms and its target market is higher-end databases. It can mount XML, tab-delimited and other files natively as tables. That is very, very nice for developers. SQL Server has the DTS stuff. DTS is very nice for moving data around, but not as nice as actually mounting files as tables. Oracle's Enterprise Manager is very comparable to Microsoft's, and at least with Oracle's EM you can actually sort data after you view it AND you can see the SQL that is being generated by the query. I will say that the query builder in SQL Server is very nice. I can't comment on DB2... All in all I would say that both are very friendly to developers, but one is free for small to mid-size shops and one is not.

    Now I find the core difference between Windows and Linux is that most shops do a LOT more on one Linux/Unix box than one Windows box. Most Windows shops (ours included) have a Windows server for one specific task, perhaps two tasks. Most Linux and Unix boxes run many different tasks, and as such you need far fewer of them. Perhaps this is just the attitude of Windows users to purchase more servers because they are "cheap", but I can say that every place I have been this is the case. Most Unix/Linux guys you talk to mention two things: their uptime AND the amount of crap that is running on their boxes. Most Windows guys I talk to mention the number of servers they manage. So in short this needs to be factored in as well. This issue may also come from all the DLL hell that has plagued Microsoft for years, or the fact that it was difficult to impossible to run different versions of SQL Server on the same box.

    You are correct in mentioning security as a major concern. The constant stream of patches and reboots needs to also be factored in. You start to really need tools like SMS when you have 100 to 500 Windows servers that need to be patched as often as they do. Now if you replace those servers with say 10-20 high-end Linux boxes, then the need for an SMS type of application starts to diminish. This is not to say that you couldn't use a product like eDirectory and Red Carpet to manage those boxes, but the need isn't as great.

  • Well.... (Score:3, Informative)

    by einhverfr ( 238914 ) <chris...travers@@@gmail...com> on Wednesday November 16, 2005 @06:11PM (#14047706) Homepage Journal
    Ok, on some of my systems, I don't worry too much about local root exploits. These systems are extremely hardened and have very limited access to anything. After all, if all your box is doing is filtering packets, and you can only log in with public keys from a designated system, and no other services are exposed, then the uptime may be more important than the marginal security gain of a reboot.

    However, these are the exception rather than the rule. Once you have squid, apache, MySQL, PostgreSQL, BIND, or any other network service exposed then local exploits become important. Why? Imagine if I find a way to break BIND such that I can cause it to do something arbitrary. Now I can use the remote vulnerability in that service to attack the local root vulnerability and gain root access.

    In other words, remote code execution in *any* service plus local root vulnerability == remote root vulnerability. If you must prioritize, fixing the local vulnerabilities might well buy you more security.
  • Re:Well (Score:3, Informative)

    by zariok ( 470553 ) on Wednesday November 16, 2005 @06:12PM (#14047718)
    Kickstart - http://www.tldp.org/HOWTO/KickStart-HOWTO.html [tldp.org]

    Welcome to the new world.
  • Re:Nice to know (Score:5, Informative)

    by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday November 16, 2005 @06:51PM (#14048018) Journal

    The reason Windows locks an executable file that is in-use is that it uses it as a kind of mini-swap file. If you need to swap part of that binary's code out to disk, windows doesn't - it just forgets it. If it needs that code back in memory, it reads it directly from the file on disk.

    All modern Unix-type systems, including Linux, do the same thing. Yes, that means you can have a situation where:

    1. Program 'foo' is executed.
    2. Program 'foo' is swapped out (or perhaps just never loaded -- application code is paged in on-demand, so if there are big parts that were never executed, they were never loaded).
    3. Program 'foo' is deleted, while the process is still running.
    4. The running process needs to page in a portion of the deleted file.

    What happens? Nothing much. It works just fine. How? Because when I said the program was "deleted" in step three, I wasn't being precise. What really happened was that the program was "unlinked". That removes the directory entry and makes it so no process can create a new reference to the file. But any running processes already have a reference to the file, and the actual file stays in existence until all references (both filesystem references and process references) to it go away.

    This holds true for all files, too, not just executables. For example, it's not uncommon for me to start a download then, while the download is running, decide I don't like where it's being written. No problem. I just move it. As long as I'm not moving it to a different file system, the download process doesn't care, because it isn't writing to "/home/shawn/foo.tar.gz", it's writing to "the file handle referencing inode 274327". It doesn't matter a bit if that inode happens to get relinked into a different part of the file system.

    No, there's no excuse for this particular bit of Windows braindamage. The Unix solution is better in every way.
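    The unlink semantics described above are easy to demonstrate from a shell on any POSIX system (the file below is a temp file created just for the demo):

```shell
# Demonstrate POSIX unlink semantics: after rm removes the last
# directory entry, a process holding an open descriptor can still read
# the file; the inode is only freed when that descriptor is closed.
tmp=$(mktemp)
echo "still here" > "$tmp"
exec 3< "$tmp"                        # hold an open read descriptor
rm "$tmp"                             # unlink: the directory entry is gone...
[ -e "$tmp" ] || echo "no directory entry"
contents=$(cat <&3)                   # ...but the data is still readable
echo "$contents"                      # -> still here
exec 3<&-                             # last reference closed; blocks freed
```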

  • Re:Nice to know (Score:3, Informative)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday November 16, 2005 @06:53PM (#14048039) Homepage Journal

    think about it for a minute, Unix has the exact same issues with replacing in-use files as Windows does. At some point you have to stop using the old and start using the new, even with Unix, and you cannot delete the old until you've finished using it.

    Uh, I think you're the one who needs to think about it. You can delete all links to the file, and while the inodes are not freed until the last reference to the file is closed, the file is for all other intents and purposes deleted. The only discrepancy is in the realm of free space. (Immortalized in the eternal brain teaser, "df vs. du")

    Meanwhile, on NT, you can't even delete a file that is in use. You just can't do it! You can't rename it to move it out of the way either, like you [generally] could in DOS and Win9x. Finally, in the case of shared libraries, even if you could, Windows only allows a single instance of a DLL to exist, and the instances are identified with names.

    Neither of these problems exist on Unix. Hence, Unix does not have the same problems as NT. Thanks for playing, though.

  • by Anonymous Coward on Wednesday November 16, 2005 @06:55PM (#14048060)
    I tend to believe the report due to the fact that I had to work with a Linux advocate. Here is the story:

    We needed a new server to host+share files for internal use and host Apache/PHP/MySQL for our intranet application. This guy I work with decided to go with Linux. Having played around with RedHat/Mandrake/Knoppix, I thought, yes that sounds good, the whole process should only take a few hours. However, this guy decided to install Gentoo and compile everything from scratch. Instead of a few hours' install, it took one week to compile (and the server still has no GUI) and another week for this guy to learn about all the config files he has to alter to make things work!

    Personally I have nothing against Gentoo, their goals, etc. However, when you are working in a business, downtime equates to lost time, which equates to money. Just yesterday, an emerge update that was interrupted by a power failure meant our web server was down for 3 hours. This is utterly ridiculous (I know we should have had backups, however we are talking about someone willing to spend two weeks to set up a server in a 10 PC network).

    Contrast this to Windows Small Business Server (SBS) 2003. Install took a day, after some mistakes on my part, reinstall took another day, then by the third day we had remote access to Outlook, remote access to each users' desktops, a web server, a DNS server, active directory, incoming fax manager (that can route faxes to e-mail, intranet site, or print), print server, and more.

    While the cost of SBS 2003 may have been about the same as Mr. Gentoo's two weeks' salary, when you take into account the disruption to the business, what works with minimal intervention wins. I just hope this story can help others going down the path with a similar network admin: DO NOT USE GENTOO FOR SMALL BUSINESS!
  • Re:Nice to know (Score:2, Informative)

    by Trepalium ( 109107 ) on Wednesday November 16, 2005 @07:02PM (#14048107)
    I complained about this once, and someone directed me to a thread from Raymond Chen [msdn.com] on his blog which explains the rationale behind this design. The basic part of the argument is that there can be intercommunication between components, and replacing one could cause a running program to suddenly malfunction. For an example of this, try an online update of Firefox or Thunderbird without restarting the programs. The program will act very strangely (the About window won't work, options may not work, etc.) until you restart.

    Now, I don't fully agree with his conclusions, because if you take the argument to its logical conclusion, it's never safe to overwrite a file on the system without a reboot. Microsoft decided to be conservative in their approach to files in use, to protect the user from himself. In the Linux world, the ability to replace files that are in use does cause some problems. Replacing glibc and/or PAM can cause authentication problems without a restart of certain services. Replacing Mozilla products causes some of the problems I mentioned above. Replacing certain Gnome/KDE desktop components can occasionally cause failures to communicate between the old and new version. But for every one of these that causes problems, there are dozens more that don't. Letting you replace files for most services (Apache, MySQL, Samba, etc.) means you can limit your downtime to seconds rather than tens of minutes. Most desktop apps will continue to run the old version until you actually restart the programs in question.

    Raymond Chen's blog is probably one of the best sources of information on why some things are done the way they are in Windows, especially when they seem completely illogical. He talks about why Windows uses Ctrl-Z to end files [msdn.com], complaints about people wanting more ways to hide files [msdn.com], etc. He has some interesting tales to tell, and if you deal with Windows on a regular basis, it can also be quite revealing.

  • Re:Well (Score:5, Informative)

    by jimmyharris ( 605111 ) on Wednesday November 16, 2005 @07:31PM (#14048297) Homepage

    I don't have a lot of experience with Windows, but Kickstart [redhat.com] is one of the most impressive pieces of Linux software that I've used.

    Network PXE boot, enter a configuration file location and sit back while Kickstart configures and partitions your server, downloads and installs all your packages, runs post-installation scripts to install updates and start all your services, and finally reboots your completed server. All without any intervention.

    Not to mention that if you ever need to re-deploy that server, or deploy a similar server, you can reuse the configuration file to guarantee the server is identical.
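    For the curious, a kickstart file is just a flat text answer file. A minimal sketch might look like this (directives from the RHEL-era kickstart format; the mirror URL, password hash and package names are placeholders, not from any real config):

```text
# minimal kickstart sketch -- all values are illustrative placeholders
install
url --url http://mirror.example.com/rhel/
lang en_US
keyboard us
rootpw --iscrypted $1$examplesalt$examplehash
autopart
reboot

%packages
@ Base
httpd

%post
# post-install scripting runs here: pull updates, enable services
chkconfig httpd on
```

    Point the PXE boot line at a file like this (e.g. ks=http://server/ks.cfg) and the install runs end to end without prompts.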

  • by Anonymous Coward on Wednesday November 16, 2005 @09:38PM (#14049017)
    It seems all the 'analysis' posts so far have read a different PDF?

    My summary would be:

    1) They compared Windows Server 2000 with SuSE Linux Enterprise Server 8; the final step in the study was to upgrade/migrate to Windows Server 2003 and SuSE Linux Enterprise Server 9 respectively.
    2) Both systems were running a common e-commerce stack, MS-everything for Windows, LAMP for Linux. The e-commerce software used was available for both Windows and Linux (they didn't say what the software was).
    3) Both systems were patched each month - patches on both systems went ok, no major differences.
    4) They set 4 tasks spaced through the year, involving adding additional features to the e-commerce site by applying additional modules from the software supplier.
    5) The second module required a newer version of glibc. This is where it fell apart as they were not allowed to upgrade to SLES9. They attempted to upgrade glibc through three ways. One downloaded the latest version of glibc from a package distribution site, and ran into a large number of broken dependencies. One downloaded the glibc component from SLES9, and ran into the same problem, and the third downloaded individual files and replaced various components by hand, ending up in a working state but with RPM no longer reporting dependencies correctly.
    6) Aside from the glibc issue, all milestone tasks were completed more quickly on the Linux solution than the Windows solution.

    So my analysis is that the conclusion drawn by this report is excessively influenced by the requirement of the e-commerce software to have the glibc version updated at task 2, without allowing the upgrade of Enterprise Server. This is a major undertaking on most systems; the closest I can think of in the Windows world is a full OS upgrade, since a very large number of packages depend on glibc. That one administrator was able to do it by hand shows their skill imho. There was no comparable task for the Windows admins.

    Were I an admin faced with this situation, I would consider the following to be more suitable options:
    1) upgrade to SLES9 (it supported the required glibc version)
    2) use a different 3rd party tool - the authors of the report acknowledged that there were a wide range of alternatives
    3) change to a different e-commerce suite
    4) migrate rather than upgrade, provided that a version of SLES8 was available with the new version of glibc.
  • Re:Well (Score:3, Informative)

    by rifter ( 147452 ) on Wednesday November 16, 2005 @10:02PM (#14049139) Homepage

    easier and quicker to deploy? Compared to what? Any shop using, say, redhat enterprise, can deploy a box in a few minutes, including a full lockdown, using kickstart. What similar technology even exists in windowsland?

    It's called an unattended installation in windowsland. And they had it before redhat had kickstart. And yes you can apply a full set of patches and if you're wily enough you can get in lockdowns and such. The other people are touting Ghost because that is much more often the method used to deploy servers. This is because most of the things that make a windows machine useful are not and cannot be distributed with the operating system, even when they are free-as-in-beer things like acrobat or compression programs.

    Ghost essentially does what dd does, with a few extra things thrown in that make it worth buying, like allowing you to change SIDs, compressing the images, etc. It's an off-the-shelf product that works, whereas to come up with an equivalent solution with free tools there would definitely be some cobbling to do.

    But essentially kickstart == unattend.txt done the right way.
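    For comparison, the Windows side's answer file is the same idea in INI form. A minimal sketch (Win2K3-era section names; every value below is an illustrative placeholder, not from a real deployment):

```text
; minimal unattend.txt sketch -- all values are illustrative
[Unattended]
UnattendMode = FullUnattended
TargetPath = \WINDOWS

[GuiUnattended]
AdminPassword = "placeholder"
TimeZone = 4

[UserData]
FullName = "Example User"
OrgName = "Example Corp"
ComputerName = EXAMPLE01
```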

  • Re:Well (Score:3, Informative)

    by einhverfr ( 238914 ) <chris...travers@@@gmail...com> on Thursday November 17, 2005 @02:47AM (#14050239) Homepage Journal
    But is this really so different from self-proclaimed college-drop-out "Linux gurus" who whip together sucky and insecure "solutions" in MySQL and PHP using the "powerful open Enterprise OSS LAMP-stack" ? You can write good as well as bad code both on Linux and Windows, and there are more than enough examples for both on both platforms.

    True.

    But there are inherent differences that should not be overlooked.

    Windows is not particularly scriptable in the way that Linux is. Yes, you can do some basic things, but it is not a toolkit. It is a set of large blocks, and if you want to put them together a certain way, you have to do real programming.

    On Linux, one can often string a large number of components together with very light-weight scripting (i.e. nothing more than simple system commands, not even using anything as complex as sed or awk).
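    A concrete instance of that style, chaining nothing but plain system commands (the sandbox directories and sizes below are invented for the demo and cleaned up afterwards):

```shell
# Chain simple system commands to answer an ad-hoc admin question:
# which subdirectory of a tree is eating the most space?
work=$(mktemp -d)
mkdir -p "$work/logs" "$work/cache"
head -c 1048576 /dev/zero > "$work/logs/app.log"   # ~1 MB
head -c 10      /dev/zero > "$work/cache/item"     # tiny
top=$(du -sk "$work"/* | sort -rn | head -1)       # biggest entry first
echo "largest: $top"
rm -rf "$work"
```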

    This study mostly whines about Linux being unsupportable. Given how frequently it is used in ecommerce apps, how likely is this? On average I have found that I can implement new features *faster* on Linux than I can on Windows.

    I was very disappointed in this study. The GetThe"Facts" campaign is actually going downhill when they have gone from sponsoring surveys (as in the IDC document) to sponsoring simulations (as in this one). Well, at least they are up front with their bad methodology.
  • by leonbrooks ( 8043 ) <SentByMSBlast-No ... .brooks.fdns.net> on Thursday November 17, 2005 @06:06AM (#14050763) Homepage
    Four to six times as expensive if you go the SBS route.
