How To Create a Linux Network for Peanuts

securitas writes: "LinuxWorld has the first installment of a series on how to go from being a Windows-based shop to a Linux one." One of the article's points, and one that I strongly agree with, is how overpowered the machines that most people buy are.
  • Thinnet, yuck (Score:4, Informative)

    by BobandMax ( 95054 ) on Sunday August 26, 2001 @11:27AM (#2218322)
    I thought that I might never have to hear or read that word again. The bad memories of downed networks because some user unplugged his machine or knocked off the connector or removed the terminator are still way too fresh.

    Why can't we all just get along without it? Splurge the eleven dollars for a 10/100 NIC and put in CAT 5.

    This article is way off-base on several points. If my employer suggested that I maintain a garage sale network as described, I'd find another job. Yes, X-windows terminals are a perfectly valid way to go, but put a halfway decent machine on the job. You and your users will be much happier.
    • I thought that I might never have to hear or read that word again. The bad memories of downed networks because some user unplugged his machine or knocked off the connector or removed the terminator are still way too fresh.

      Why can't we all just get along without it? Splurge the eleven dollars for a 10/100 NIC and put in CAT 5.


      No kidding. Chances are the CAT5 is already in place and it would be cheaper all around to get some old ISA 10base-T cards from a bargain bin than to rewire with BNC.


      If my employer suggested that I maintain a garage sale network as described, I'd find another job. Yes, X-windows terminals are a perfectly valid way to go, but put a halfway decent machine on the job. You and your users will be much happier.


      Sure, for programming and other exotic uses of a computer. But if the only thing the users need to do is word processing, email, and net browsing, plus maybe a few other apps, then this is perfect. In my last year of high school we experimented with setting up half a dozen thin-client library computers for Internet access. And let me tell you, it was only about 10,000 times easier to keep those things running.

      • These things would be perfect even for programming, really. Especially if you're into server side development and use Perl or Python.

        If you need a sophisticated IDE then yeah, X terminals may not cut it. KDevelop should work on them. I'm not sure about Kylix.

        Even things like the GIMP work fine over X over ethernet. I've done it.

        Things like video editing, and of course games, would not be well suited to X terminals. But how many office users use (or should use) such things?
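For the curious, the remote-X setup these posters describe can be sketched like this; `appserver` and `myterm` are placeholder hostnames, and the exact commands varied by distribution:

```shell
# Safer variant: tunnel X over ssh and run the GIMP remotely,
# displayed on the local screen.
ssh -X appserver gimp &

# Classic unencrypted variant from the era of this thread:
xhost +appserver                        # let appserver draw on our display
ssh appserver "DISPLAY=myterm:0 gimp &"

# Full "X terminal" mode: the client runs nothing but an X server
# that requests a login screen from the application server via XDMCP.
X -query appserver
```

These are setup recipes rather than testable code: they assume a running X display and a reachable application server.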
    • Re:Thinnet, yuck (Score:2, Insightful)

      by Kalabajoui ( 232671 )
      No doubt! I can build a very powerful workstation with a decent 17-inch monitor, a keyboard, and a mouse that don't have twenty years of crud built up in them for under four hundred dollars. I don't care what you run on a 386DX-based computer; even the DOS prompt is so slow that I can type faster than my input will be displayed on the screen! The author of the article is probably the kind of guy who would look at a burned-out, eye-straining monitor and think that it's good enough. Then there is the graphics adapter, which will surely be inadequate in both visual quality and display speed, good monitor or not.

      He makes excellent points; however, I think he takes the cheap-hardware idea into cheap and obsolete realms where it doesn't need to go, at least not for a modern office. (Third-world countries, sweatshops, and businesses that don't use their workstations on a daily basis are a few possible exceptions.) Obsolete hardware leaves no room for upgrading to new and useful applications that require the additional horsepower of a modern CPU and graphics adapter. I would rather GNU/Linux and other free software be associated with frugality, not cheapness or parsimony.
    • no kidding. (Score:2, Troll)

      by No-op ( 19111 )
      I agree with you completely... I read that and went "my god." Why would anyone in their right mind consider using that technology again? I still cringe thinking about trying to maintain that stuff.

      For that matter, the disgustingly low cost of decent-quality 10/100 PCI NICs (Netgear comes to mind; I prefer Intel or 3Com, but cheap is cheap, right?) and the low cost of Cat 5, or at worst Cat 3, really makes thinnet an insane concept. And all those collisions are not really my idea of fun; investing in a few decently priced switches would improve his network performance by quite a bit. (There's such a thing as LATENCY, besides pure bandwidth :P)

      It seems to me that the guy writing this article is some kind of nutjob just out of school or something, who sees a piece-of-crap PC and says "Hey! That'd make a great (DNS/DHCP/SMTP/whatever) server," and then proceeds to build it and go from there.

      Now here's my take on it: if that guy worked for me, or I was hired to manage him, I'd fire his ass faster than you can say "GET OUT." People like that are dangerous, because they don't think about some of those important things... like stability, downtime costs, etc. I don't care if the bargain-basement box was super cheap; I'd prefer to spend a few hundred more and be sure the damn thing will always run and is something I can get parts for if it breaks.

      If I built my array of DHCP servers, or DNS servers, or something like that out of generic desktop 200-300 MHz boxes (like he suggests), I would be gone. And I would deserve to get canned. To do that when you need to guarantee that things work is just blatantly reckless.
      • So you're suggesting that new hardware is always going to be more stable? I'd like to find out where you're buying your stuff.

        Quality in the last few years is total crap. It's almost impossible to find retail equipment that doesn't cut corners wherever possible.

        Now, assuming speed and the latest technology isn't an issue at all, I'd certainly trust some of the old huge, heavy boxes I have here over just about anything that you can buy new in the sub-$1000 market.

        Power supplies that die way too soon, a CPU fan that craps out and burns your processor up in 3 minutes, flimsy cases that cut the back of your hand because they're too cheap to finish the edges, sorry case fans that start rattling after a few months... the list goes on.
        • First off, I'm not talking about consumer hardware, although having conformity in that would be good as well. I was mostly referring to his servers, which should be good, stable, quality boxes. If you read the article you would have noticed he suggested slapping together a crappy old desktop to use as an X server for the environment. It wouldn't hurt him to use an older server (if he's cost-constrained) to build a slightly slower (than the bleeding edge) but totally rock-solid and decent-performing box. There's more to putting things together than just using parts; you need to think about what happens if those parts break, etc. I find that using old Compaq equipment is good, since I can source those parts almost anywhere. (I do a lot of volunteer work building systems and environments for non-profits and schools, and nothing works better for a cheap server than a three-year-old Compaq 1600. Cheap and fast, with great subsystems.)

          Anyway, nobody doing infrastructure work gives much of a damn about the desktops anyway. They're just end devices :P
  • by JoeShmoe ( 90109 ) <askjoeshmoe@hotmail.com> on Sunday August 26, 2001 @11:33AM (#2218337)
    Oftentimes, you simply cannot find cheap hardware to purchase unless you want to build it yourself or go with refurbished units.

    Build it yourself is a poor option because it is very hard to find the quantities of parts you need, especially since business environments value similarity in desktop platforms. So you end up with groups of five or ten machines with whatever was on sale that week at Fry's Electronics.

    If you are like most Windows-based companies, you turn to vendors like Dell/Compaq/IBM, and then the problem is that the cheapest machine you can buy is still a 900 MHz Celeron with 256MB of RAM and a 20GB hard drive (granted, it's only $600, but still, what if you just need it to run training applications through a web browser?). Plus, since you are riding the tail end of the cost range, you again run into the problem of having a month go by and suddenly having completely different hardware.

    So it's a choice between

    * one vendor to resolve problems
    * one platform to support/rollout
    * one price that's not so great

    or

    * many vendors pointing fingers at each other
    * need a different image for every 5th system
    * a price hovering around the lowest possible

    For home/small business users I think the second choice is a valid one, but for large business and corporations I just don't think they'll ever see the value in it.

    - JoeShmoe
    • For home/small business users I think the second choice is a valid one, but for large business and corporations I just don't think they'll ever see the value in it.

      If you're talking about a large business, that savings could be quite sizable... say, $500 per terminal (which is probably less than the real savings) times 1000 terminals... that's half a million. I'd say you could pay one or two sysadmins' salaries to make sure all that different hardware worked, with that kind of cost savings. And really, once X is set up, you pretty much don't have to touch the system after that unless the hardware fails (since the user is not really running software locally).
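The poster's back-of-the-envelope math can be made explicit; all figures here are the comment's own assumptions (the salary is an invented round number for 2001), not real quotes:

```python
# Hypothetical savings model from the comment above.
terminals = 1000
savings_per_terminal = 500   # new desktop vs. recycled X terminal (assumed)
sysadmin_salary = 60_000     # assumed annual salary, circa 2001

total_savings = terminals * savings_per_terminal
admins_funded = total_savings // sysadmin_salary

print(total_savings)  # 500000
print(admins_funded)  # 8
```

Even with an extra admin or two on the payroll to babysit the mixed hardware, the arithmetic leaves most of the half-million intact.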

      Plus, when you need to upgrade in 2-3 years, you really only have to upgrade the main servers, which is a massive cost savings.

      I'd say that large businesses are exactly the ones who can benefit most from this. Especially places where the user base does not do system-intensive things (the government comes to mind here...;-)

      • You're not thinking like a corporation... you are falling into the same logic trap that real people (not companies) fall into.

        Let me put it in perspective for you: the cost of an average OSHA-compliant workspace chair is $500. Now, you can find perfectly usable chairs at OfficeMax for $200. So with 1000 users that would be a cost savings of, wow, $300,000, right?

        Wrong. The cost of a workman's comp lawsuit for back pain due to less-than-stellar lumbar support could end up being in the millions. Same with monitors. I'd like the idealistic author of this article to find a 21" monitor that fits his peanuts budget, because that's what any user with glasses an inch thick is going to demand. If you don't fill that request, prepare for a discrimination suit.

        To put it in perspective... the author is suggesting that companies spend LESS on computers than they spend on LIGHTING or TOILET PAPER. There are certain things that can be considered the cost of doing business... well-lit cubicles, ergonomic chairs, and stable name-brand computers are three of them.

        Even under your scenario where the company is saving half a million dollars... if just one of those less-than-top-of-the-line PCs fails while running a mission-critical, severity-one application, it could cost the company a hell of a lot more than the half-million in savings. Yes, any PC can and will fail, but if you buy name-brand components from a major computer manufacturer, you will literally have engineers trampling each other to get it back up before they lose a multi-million dollar customer.

        So, I stand by my earlier post. I see the value for home and small business applications (maybe even a department-wide deployment, particularly with smarter IT users) but that's it. Corporations love risk management a LOT more than they love penny-pinching.

        - JoeShmoe
    • Identical hardware is not so important when you are using the clients as X servers only and running applications on a server. Also, if your computers cost $100, we really are talking about disposable computing. If one fails, throw it out. Maybe keep the hard drive and save yourself 20 minutes reinstalling on a new PC. You can do that six times before you reach the price of the cheapest new machine you are likely to find.

      I don't know why you think you would need a different image for each machine. Unlike Windows, which has to reinstall drivers every time you move a card from one slot to another, a single kernel image can support just about any hardware you are likely to throw at it, at least for the purposes of an X terminal, where the only relevant devices are the keyboard, mouse, NIC, HDD, and VGA. Just about every NIC ever made is supported under Linux, along with most graphics cards you are likely to run into on obsolete machines. If it became a problem, you could probably find a large lot of obsolete VGA cards or NICs being sold for a few bucks per unit somewhere, and just drop those into every machine.

      If you want sound, things are more complicated, but sound is really just a nicety for most business settings.

      I personally think the latency when running apps over remote X is too high for comfort, but the management issues probably make that a worthwhile tradeoff for non-technical users.

      The only thing I strongly disagree with him on is the use of thinnet. Thinnet is fine for connecting a handful of machines in a single office, or even connecting computers in a large lab, but dealing with it on a larger or more spread-out scale than that is idiotic. Unless you have a large base of installed cable, use 10BaseT. You can probably pick up 24-port 10BaseT hubs pretty cheap these days, what with everyone migrating to 100 Mbit switches. Plus, you have a lot more flexibility to upgrade to 100BaseTX, or to repartition your network to keep from getting bandwidth-starved as you add more clients.
    • Here's a workable alternative: buy a bunch of near-identical machines from an educational institution or business that's doing an upgrade rollout. Rip the hard drives out, upgrade the RAM, and keep a one-line-per-machine config file that maps MAC address onto hostname, IP, and the kernel/filesystem components to mount for each machine (e.g. a different kernel for each network card, a different image mounted on /etc/X11 for each video card).
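A config file like the one described might look like the sample below, with a few lines of Python to load it; the field layout and every value are invented for illustration:

```python
# One line per machine: MAC, hostname, IP, kernel image, X11 image.
# All entries are hypothetical examples of the scheme the poster describes.
SAMPLE = """
# mac                hostname  ip          kernel           x11-image
00:40:05:1a:2b:3c    term01    10.0.0.11   vmlinuz-ne2000   x11-s3
00:40:05:4d:5e:6f    term02    10.0.0.12   vmlinuz-3c509    x11-cirrus
"""

def load_machine_map(text):
    """Parse the per-machine config into a dict keyed by MAC address."""
    machines = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        mac, host, ip, kernel, x11 = line.split()
        machines[mac.lower()] = {"host": host, "ip": ip,
                                 "kernel": kernel, "x11": x11}
    return machines

machines = load_machine_map(SAMPLE)
print(machines["00:40:05:1a:2b:3c"]["kernel"])  # vmlinuz-ne2000
```

A boot script on the server could look up the client's MAC in this map to choose which kernel and /etc/X11 image to serve.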

      In my case, I wind up with three different kernels and three different /etc/X11 images mounted in various combinations for thirty machines. Each machine (Digital Venturis FX-2, P2-133) cost $Oz100 (about USD$50) including screen, plus $Oz80 each for a 256MB SIMM, plus about 30 minutes per machine checking it out and recording the config (a total of $Oz5400, roughly USD$2700, plus two full days for the machines themselves, plus cables, cabling time, and two 16-port switches for 30 workstations). Swapping via the (Cat 5, 100 Mbit) LAN saves a machine from locking up when the user does something dim, but causes it to run slowly enough to let them know that they goofed.

      If you build bitzer machines, be prepared for endless headaches making everything work together. If you use no-name machines (and sometimes even if you don't), be prepared to discover that not all of those crashes were Windows' fault. <rant>If Mr Trey ``It-Will-Work-Next-Version-For-Sure Oh-This-Blue-Screen-Must-Be-Why-Its-Called-Beta'' Gates had taken the time and trouble to sell reliable, predictable software instead of pushing pretty rubbish out the door in a flurry of false reassurances, the machines we use wouldn't be so crappy: blame for failures would land fair and square where it belongs instead of being masked behind said crappy software, and the problems would actually get (gasp) fixed!</rant>
    • The article is talking about a level of computing that I would call "disposable." If you are dealing with terminals in the 300 MHz range, you can buy them in bulk on eBay or save them from being destroyed. These computers are out there and they are cheap... dirt cheap.

      So if something goes bad, you don't call the vendor. You stick another one in its place, because you could afford to buy ten times the amount of hardware you needed.

  • by VAXman ( 96870 ) on Sunday August 26, 2001 @11:36AM (#2218349)
    One of the article's points, one that I strongly agree with, is how overpowered the machines are that most people buy.

    Maybe if LinuxWorld got some decent powered machines, they wouldn't be Slashdotted already.
    • Maybe if LinuxWorld got some decent powered machines, they wouldn't be Slashdotted already


      Ha, ha. They didn't say to use cheapo equipment for web servers, just user desktops for simple users.

  • by ecalkin ( 468811 ) on Sunday August 26, 2001 @11:38AM (#2218354)
    too many people want the latest and greatest. there are several people that i work with who use 300-400 MHz machines with no problem. how do they do it? they haven't fallen into the MS/Corel/Intuit/'fill in the blank' propaganda trap of having to have the newest version.

    i use quickbooks 1999!

    it all comes down to understanding what you *need* to do.

    there are people out there that need/deserve powerful machines and there are people that could be just fine with second or third tier equipment.

    e.
    • Well, you could probably slap a "Pentium III Inside" label on the 486SX and most people wouldn't know the difference.
    • I'm a Windows developer, and I'm still doing fine with my Pentium II 300 (I did just upgrade to 256 megs of RAM, just 'cause RAM is so cheap right now). I have VS.NET installed, VS6, and SQL Server 2K. I have the same stuff installed at work (PIII 1GHz), and I only ever notice a slight lag when starting up applications at home compared to work. I try to keep my machine as lean and mean as possible: don't install crap you don't need or will never use, don't upgrade unless you have to, and if you want to try something out, install it by all means (but then get rid of it when you're finished having a look at it). Uninstall or disable any services you don't need (this helps security too). I think the games industry does a great job of pushing the boundaries of hardware. I'm sure if I tried to play any games on my PII 300 I'd be in for a pretty rude shock (except my favourite game is NetHack - I don't think that would tax my PII 300 too much). This means that when I upgrade next year, or whenever, I'll be able to get a kick-ass machine for not very much, because my new machine will already be at the bottom end of the curve for playing games.
  • Obviously, you could need a lot of power in the hardware when the operating system does not have enough power to do the job correctly.

    This could be for a lot of reasons: misconfiguration, misdesign, software load on the system, bloat, whatever. There are users who are proud of the number of open windows they can have on a desktop, like this makes them a power user or something.

    Of course, there is the old "it's not a bug, it's a feature" factor as well.

    Comparisons to known operating systems are obvious.

    - - -
    Radio Free Nation [radiofreenation.com]
    is a news site based on Slash Code
    "If You have a Story, We have a Soap Box"
    - - -

  • As someone pointed out, build-it-yourself generally sucks for any network with more than 20 computers or so. Finding antiquated parts in those quantities can be difficult. But for smaller networks, it's great. And as it happens, a smaller business, home, or organization needing a small network is more likely to need to pinch pennies than a mega-conglomerate with hundreds or thousands of machines, for whom such a setup would be too difficult.
  • by sourcehunter ( 233036 ) on Sunday August 26, 2001 @11:49AM (#2218389) Homepage
    Yo -

    Gotta love the /. effect. I had a chance to mirror it quickly here [sourcehunter.com].

    Make sure you try the original link first, please - it seems to come and go.

  • The author of this article seems to totally ignore the loss of productivity and employee morale caused by forcing people to use older equipment as Linux terminals.

    Let's cover the points on morale first:

    Do you want a four-year-old computer on YOUR desk? Of course not. You don't care if the IT manager says that it meets your needs; you just want to get your work done as quickly and easily as possible. If I tried this implementation in my shop, I'd expect to field complaints from dozens of users saying that "their e-mail and Netscape are taking too long to load." If they bitched loud and long enough, their boss would give them the 1.4 GHz machines they want, but not without everyone getting a bunch of headaches first.

    Many of these people have faster computers at home, so they're used to having better desktop performance than what a Pentium 200 with 128MB of RAM can offer.

    Now, the points on productivity:

    Not only will these workers be very annoyed when a slow computer is put on their desks, but their work output will suffer as they wait an extra thirty minutes each day for their applications to load and to save their information. Most of these people are being paid $20+ an hour, so the cost savings from buying cheap equipment will be sucked up quickly.

    Also, if the users are current Windows users, they'll need to be re-trained on both Linux and its office applications. It might take over a week for the less-skilled workers to get the hang of it. While they are learning, don't expect them to be happy about it, either.

    Older computers tend to break down more as well, and without warranties, that support cost is coming out of the company's pocket.

    In short, this article makes the critical mistake of not putting your users FIRST when planning an IT solution. The cost of keeping your employees/customers productive and happy is a LOT higher than most companies' IT costs. If you try to pass off cheap PCs on your workers, you'll pay for it tenfold by creating tons of new problems.

    • While I don't disagree with your points, the main idea of this article is to use old PC hardware *as X terminals*, and having a half-decent modern machine act as the application server for these terminals.

      I think this scheme could work, given a few amendments:

      - Use high-quality, modern video cards.
      - Use the highest-quality keyboard and mouse (you know, the latest and greatest Logitech optical stuff).
      - Buy the best monitors (at least 17", flat-screen, Trinitron-type monitors).
      - HIDE the ugly beige P100 from 1995 from the user.

      I agree that I would be bummed out if a dusty old 486 or early pentium was sitting at my desk. I probably wouldn't work as hard. But, this way, they never see this ugly machine, and to top it off the components that the user is actually exposed to are top notch.

  • In principle, I agree. For many office-related tasks, these new GHz P4s are ridiculous; something a quarter of the speed would be adequate. HOWEVER...

    Let's say you are setting up a new office. Where, exactly, are you going to buy those machines? You can't. If you buy old, used machines, your maintenance costs go way up. You want a bunch of machines that are the same; it makes support much easier. A problem in one applies to all, and so does the solution.
    So when you go out and buy 100 brand-new mid-level Dell workstations... sure, you're buying something faster than you need... but you're buying them because they will WORK.
  • As a high-schooler whose summer job involved (among other things) a small-scale Linux deployment inside a ~200-person office, the strategies of doing so interest me greatly. However, I've always seen a few issues with remote execution and thin clients that I hope someone here with more experience can address.

    There are three levels of remote management you can do: none, mounting certain directories remotely, and launching only an X server on the client. The main problems I've had with the second and third options are:
    Does it take a substantial amount of bandwidth to mount (for example) the /usr tree remotely? The senior admins won't let me do any of that if it will degrade the network.
    Will the users notice the delays substantially on a 100 Mb/s network? I understand that this may be OK for word processing, but some of our users (and the main reason why we have Linux in our company now) are running airport simulation models that have a complex, graphics-heavy UI and generate reams and reams of data. Would putting apps like that across the network impede their performance substantially?

    I can already ssh into our machines and make them run any program I've uploaded to a certain directory overnight. Are the maintenance savings really substantial enough to outweigh the speed/bandwidth issues? Thanks.
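The overnight-batch approach the poster already uses can be automated with a single cron entry; every hostname and path below is a made-up placeholder:

```shell
# Hypothetical root crontab entry on the admin box: at 2 a.m., run every
# executable that users have dropped into /var/batch on each workstation.
# Assumes passwordless ssh keys are already set up for these hosts.
0 2 * * *  for h in ws01 ws02 ws03; do ssh "$h" 'run-parts /var/batch'; done
```

This is a config fragment, not a full solution: error handling, logging, and host discovery would all need filling in.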

    • You bring up a good point, but the article was referring specifically to office automation (word processing, spreadsheet, email and web). For something like airport simulation, of course you need local execution. This article wasn't about that!
  • lots of ACs (I suppose school's out) and lots of bored lawyers (-;

    For X, get a card that has hardware acceleration; my advice is to pick one with good support.
    (me, I go for an S3 card every time, as the old ones are well supported in XFree86 3.x)

    Realistically you want a window manager that is light on graphics. If you can get people to run twm, so much the better, because it is rock solid and low-bandwidth.
    (less XPM to shove across the pipe makes john a happy boy)
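A minimal session along the lines john describes might be set up like this on the application server (per-user file; twm and xterm assumed installed):

```shell
#!/bin/sh
# ~/.xsession: a deliberately bare session for X terminals.
# One xterm, then twm as the window manager -- almost no pixmaps
# to push across the wire, which keeps the thin clients snappy.
xterm &
exec twm
```

Heavier desktops work over the network too, but every decorated widget and pixmap is more traffic per screen repaint.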

    Realistically this setup has been tried a lot, and it works. But really:

    how about stories on NIS and adding crypto into it?

    how about merging Win2K and Unix logins?

    Lots of things I would like to know, rather than beating the old TCO drum. Realistically, who cares? People go out and buy what they like in terms of cheapness. If you want bang for your buck, then go down to the tip, grab a machine, put a free word processor on it, and away you go.

    What really makes the difference is manageability. Why do you think everyone started going down the thin-client route? Because it's easy to manage and means fewer hassles, and fewer hassles = cheaper.

    please stop trying to pull these stunts and try something out

    regards

    john jones
  • um, yah, sure. (Score:2, Informative)

    by Mikesch ( 31341 )
    Here is why this doesn't work.

    1) If you only have a few workstations for a lot of people, you are going to end up paying people to twiddle their thumbs while someone else types a memo. This negates any savings from having cheap/fewer workstations within the first couple of months.

    2) Slower workstations are, well, slower. Those small 30-second waits add up. If you want an efficient office, you don't pay people to wait for their machines to load data, get email, etc. New hardware is dirt cheap right now. You can get a good 1 GHz system for 700 dollars or so, so why not get one?

    3) Older equipment breaks more and is harder to find parts for. Try to find 72-pin SIMMs that are guaranteed to work for a decent price. Didn't think so. How about BIOS issues with those old two-dollar motherboards when you try to slap a newer hard drive on them? Digging up AT power supplies? Yes, they do still exist, but they are getting a little harder to find. And the pain of working on older machines when they break is hellacious. Swapping the power supply in an ATX machine takes about 2 minutes; in an AT machine, it takes about half an hour, since I have to pull the entire machine apart. And yes, I do this regularly. I changed two AT power supplies last week. (I work at a uni; not everyone has new machines.)

    4) Old networking sucks. One of the major points of having a network is the ability to share files. This means you want switched 100 Mbit everywhere. Again, it is cheap enough; why pay people to wait? Our main fileserver is on gigabit fiber, and we use it constantly. Copper gigabit network cards are coming way down in price right now, and the switches are coming down soon, so you might as well be prepared to go gigabit when you need to.

    5) No office is in a vacuum. AbiWord and StarOffice may be great, but neither of them reads all Office file formats perfectly yet. You still need to use Microsoft products to communicate with other offices, for better or worse. Not a troll, just the truth.

    6) Outlook. Oh my god, Outlook is neat. I never saw the utility of Outlook and Exchange until I worked in an office that used them efficiently. It is at the point where it is indispensable. The ability to share calendars, email, move files around, schedule meetings, etc. is wonderful. Yes, this does mean you have to run NT and Exchange on a server, but we have made this concession. With the exception of our Exchange server and our PDC, we are all FreeBSD.

    In an office of 20 people, 1000-1200 bucks every 2 years (our average upgrade cycle) for each person isn't a huge cost compared to the salaries, electricity, water bills, etc. Why not spend that kind of cash to make sure that work can actually get done, so you don't have a sysadmin running around saying "it almost works!" or "here's a workaround"?

    I'm a Unix admin. I hate administering NT, but I have no doubt as to its current utility in most work environments. The benefits it provides outweigh the costs of maintaining it, at least until the Unix variants get up to speed on the capabilities.

    • You are missing the point, like a lot of other people in this thread. The old hardware isn't going to cause 30-second startup delays, because the applications are running on a server somewhere else, not on the terminal's cheapo CPU. Also, you don't bother upgrading the RAM or changing the power supply in a $25 computer; you throw it away and replace it. Finally, your users don't need a 100 Mbps network to share files, because their files aren't on their desktops; they are on a server. To share them, copy them over to /share.
    • If properly designed, network servers don't have to be particularly slow. As long as you don't have too many people sharing a 10 Mbit segment, bandwidth won't be a problem. Latency will be, but that is pretty much fixed. I would have gone with cheap 10/100 NICs (you can get them for $10), to at least allow the option of moving to 100BaseTX.

      If the application set is relatively small, a network server with >=256 MB of RAM is going to have them all loaded all the time. So users aren't going to have to wait that long to start up *office or netscape.

      Sharing files becomes really easy when everyone is on the same machine, or on a small cluster of servers. Presumably the server cluster would be connected with 100BaseT as well.

      A number of offices (Windows-based, too) use WordPerfect not because they migrated away from Office, but because they never migrated to it. This also dramatically reduces training, since, to be honest, the majority of training issues involve the applications, not the OS. If they start KDE and are presented with a button that says "Corel WordPerfect," even the most addle-brained users are going to figure out what to do.

      I wonder if you could set up an automated Office translation server? If the filter APIs of Office are exposed via COM or some such, someone should be able to whip up some Perl and/or VB that would do the filtering on a Windows machine transparently. Of course, you would have to pick an office suite whose native format MS Office had good import/export filters for.
  • by geophile ( 16995 ) <jao&geophile,com> on Sunday August 26, 2001 @12:27PM (#2218516) Homepage
    Quibble: a 486 is probably too slow to run StarOffice. That thing is a beast.

    I used to buy the very top of the line hardware and could never get enough power. A 386/33 was non-negotiable -- the 386/25 was just too weak. But now bottom of the line is more than enough.

    More serious point: WHY WHY WHY are fonts so fscking hard on Linux? I've installed RH 5.2, 6.0 and just recently 7.1, and setting up fonts was different on each one, and always a black art.
    StarOffice's cooperation with font servers actually seemed to take a step backwards at one point, and I simply stopped using it. Why don't modern Linux distributions just include the damned font server, at least in the "desktop" configurations? I understand they can't include the fonts themselves, but at least including the font server would be a great start. That is THE single biggest barricade to Linux on the desktop, given the existence of suites like StarOffice.
    • You are the 85th person in this discussion to completely misunderstand the technical details of this story. StarOffice is not running on these 386s and 486s with 24MB of RAM. AbiWord is not running there, either. The fonts aren't installed there. All of the applications are being executed on a more powerful workgroup server. The terminals are ONLY handling network packets and drawing things on the screen. That's it!
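      For anyone who hasn't seen it, that really is the whole terminal-side setup. A sketch, assuming the workgroup server (here called "appserver", a made-up name) runs an XDMCP-enabled display manager such as xdm:

      ```shell
      # On the 386/486 terminal: start a bare X server and request a
      # login session from the application server via XDMCP. The window
      # manager, StarOffice, fonts, etc. all execute on appserver; the
      # terminal only moves packets and draws pixels.
      X -query appserver

      # Or let the terminal take any willing server on the segment:
      X -broadcast
      ```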
  • I don't have anything against network computing, but I wouldn't buy 20 junkers for business use.
    Let's look at your typical 486 beater you can pick up at a garage sale:

    1. Nearing the end of its life cycle - that means you'd better buy some power supplies that fit that 486 chassis.

    2. You might not need much drive space, but that 250 meg drive will be sloooooooow. This may not be an issue depending on how much local drive use you expect.

    3. Video cards. Your users are going to want to run at 800x600 or higher, and those cheesy cards you find in a 10-year-old machine won't cut it. Better buy some cheapo modern cards.

    4. NIC, no biggie if you don't mind running at 10 mbit or using thinnet.

    5. No USB, may or may not be a problem.

    6. Floppy drives need cleaning/replacement if you want dependable reads and writes. Floppies suck on new machines with new media, let alone 10-year-old boxes.

    7. The keyboard and mouse may not be to the liking of your users. I'm using a keyboard from a 486 right now on my Duron box and love it. Cleaning/replacing mice is required in most cases.

    That being said, in a corporate environment just buy the cheapest Celeron or whatever to get some new equipment. For non-profits, hobbyists, communes, post-apocalyptic societies, etc. it's a good idea, but go for a Pentium-level machine with some decent video.
  • by tcc ( 140386 ) on Sunday August 26, 2001 @12:41PM (#2218565) Homepage Journal
    Too much ANTI-MS BS. If this article were an editorial, fine, I couldn't criticize that, but if it was targeted at system administrators or people about to deploy a network in a small company, it literally missed the target.

    1. Who cares how many resources MS apps suck up and how much they cost? If we are reading that article, chances are we already KNOW all that crap and are looking for an alternative.

    2. Almost no one uses 386/486 machines anymore; writing a paragraph on how a Pentium III is wasted horsepower to run all these apps and a 386 would do fine is pointless, unless you plan to deploy a network in a 3rd world country.

    3. It gives you pointers, nothing good for someone coming from a Windows environment. You want a step-by-step guide, sounding easy a-la-Windows install, to make it look simple and straightforward. That's the big problem with some Linux articles: the authors know their systems so well that they can't put themselves in the shoes of someone who installs Linux and doesn't know how to access his floppy from the shell because he's used to a:.

    This is *NOT* a rant, but constructive criticism about an article that attracted a lot of people (the server was half dead :) ) but unfortunately probably didn't achieve its own objective.

    • Or rather, especially not in the third world.

      The Brazilian (ok, Brazil is not a third world country, but we are far from rich) popular computer project uses an AMD K6-II 500 with 64 MB of RAM. Why? Probably because huge projects cannot depend on the availability of out-of-production parts.

      And the builders of this system agree with your third point also. They made an easy-to-install, stripped-down distribution based solely on a stripped-down KDE (only Konqueror and KOffice, plus the supporting packages and apps). The workstation is diskless, with a 16 MB flash card to boot from. All in all, it ends up being a nice Internet/office machine.
    • 3. It gives you pointers, nothing good for someone coming from a Windows environment. You want a step-by-step guide, sounding easy a-la-Windows install, to make it look simple and straightforward. That's the big problem with some Linux articles: the authors know their systems so well that they can't put themselves in the shoes of someone who installs Linux and doesn't know how to access his floppy from the shell because he's used to a:.

      What would be great is if someone were to put together a bootable CD ISO that had the ability to search for DHCP servers and then Windows domains via Samba.
      Something any MCSE could download and burn, then drop into any old PC with a NIC and a CD drive.
      Imagine you're a network admin with not a lot of time: you could hand one of these CDs to any new/visiting employee and just tell them to boot from it and use their normal password.
      All you'd need is a Linux box sitting on the network somewhere running Webmin, for the admin to add users to.
      In fact, I'd love to burn such an ISO onto one of those 50MB business card CDs.

      I can't be the first one to think of such a thing - I'm heading over now to ltsp.org to see if it's already available...
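      The boot script on such a CD wouldn't need to be fancy. A rough sketch, assuming the CD image ships dhclient, Samba's smbclient, and an X server (the interface and host names below are placeholders):

      ```shell
      #!/bin/sh
      # Hypothetical startup script for the drop-in client CD.
      dhclient eth0          # grab an address from whatever DHCP server answers

      # Poke around for visible SMB domains/shares (anonymous browse):
      smbclient -L //fileserver -N

      # Hand the user off to a graphical login on the admin's Linux box:
      X -query adminbox
      ```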

      Cheers,
      Jim in Tokyo
    • 3. It gives you pointers, nothing good for someone coming from a Windows environment. You want a step-by-step guide, sounding easy a-la-Windows install, to make it look simple and straightforward. That's the big problem with some Linux articles: the authors know their systems so well that they can't put themselves in the shoes of someone who installs Linux and doesn't know how to access his floppy from the shell because he's used to a:.

      Looks like he's covering that in the next article in the series. Nuff said.
  • What this guy is pushing is time-sharing using dumb terminals. The terminals happen to run Linux and X-server (i.e. client), but they're basically dumb terminals. Echoing characters across the net, giving you that sluggish feel from the bad old days.

    Yet he's using machines for terminals that are powerful enough to run StarOffice without any trouble. Why run the apps on the server?

    There's an opportunity here. One of the remaining Linux players should build up a "Linux for business desktops" install, as a boxed product. Designed to install on low-end machines, it should install just the stuff needed by non-programmer business users, along with a suitable predetermined configuration with good security. Offer it as a boxed product, with one CD and one good manual, covering both the system and the office app, that's all you need to get work done. Offer a matching "Linux for business servers", with a compatible configuration. Sell through places like Costco and Smart and Final. Push the simplicity aspect - computers for business, without the bells and whistles.

  • Running all apps over the network via an X server sounds like something everyone would want, until you count what happens when the network goes down! The older your network is, the more apt switches, hubs, patch panels and the like are to fail. Notice I said WILL, not MIGHT! Do you have the same powerline protection in all of your patch closets that you do in the computer room? Do you have a UPS in every patch closet? When power outages happen, and they do, the patch closet hardware gets hit hard.

    Also, X is chatty, as we all know. If your network is already chatty, imagine running X over a 10 megabit connection! My point is that desktops are overpowered. They are supposed to be. If you ran all programs on the server, you would need an even more expensive server with scads of RAM. With the desktops and some storage on the network via Network Attached Storage or a Storage Area Network, a few servers and a production system (database, web serving, etc.), you have a complete system that is still useful even if the infrastructure is down. When the network goes, the users can still type up a letter, do a spreadsheet, etc. They may have to save to hard disk and print, and move it to the server later, but at least that time was not wasted. Using older equipment DOES make sense, though. If the users ain't bitching about the computer being slow, then why replace it??

    • You're overestimating the amount of work most companies can do if the network goes down.

      If our corporate network went down, we would be able to type up a letter or do a spreadsheet, but we wouldn't be able to look at the documents referenced in the letter, find the client contacts, look up the details of the help desk call, find the statistics to put into the spreadsheet. We wouldn't be able to access the machines on which we do development, and unless you were one of the few employees with an analog line and a modem, you wouldn't be able to access a client site to fix one of their problems.

      For my specific job, I'd be able to do exactly nothing without the network. This is the same situation I was in at my last job.

      The fact is, for most or at least many companies, networked resources are already critical assets. So why not put a little bit more on there?
      • And I agree. But sometimes these critical things can go down. You are also in the minority. While there are many companies who want to go "paperless", many have not. We have rooms dedicated to files and files. Granted, many of our letters are processed with the help of the network and mainframe, but many aren't: a simple thank-you-for-your-donation letter (I work at a college; the development office gets many donations, and I suppose they may type a letter too), documenting what you're doing to fix a problem if you are the guy fixing it, and many other uses I can't think of at the moment. I am sure plain old users have plenty of things they can do without using the network.

        Now I AGREE that networks let you get work done too. I also agree that networks are critical to almost every company. That's not the issue. The issue is that there's always something you can do without a network.

        You ever had a boss that makes you check off a sheet of things that run on an automated schedule, or write down everything that you may end up typing anyway? Ever wonder why companies have files full of printouts from a mainframe stored off site? You guessed it: PARANOIA! You have GOT to be paranoid when dealing with your company data, because it's the company's life! All of the things that seem stupid are done for a reason. You can always glean the data needed to type a personal letter from a printout. Also, networks haven't been around for all that long when you think about it. PCs have been around for 20 years, and PCs were used for LOTS of stuff before networks were even thought of! I disagree that you could do NOTHING without a network. There's ALWAYS something to do. Clean old crap out of your desk, CLEAN your desk, sweep the floor, suck the dustbunnies out from under your desk, wash your coffee cup out, chat with your cube mates about things other than your kids' baseball games... all of this could be considered WORK! WORK is not classified as ONLY things done on computers, for god's sake! Unless you work in a hospital and are working on a patient who needs the help of another doctor that you are talking to over an internet connection, your whole world won't end if you lose the network (oh, except if it's YOUR JOB to keep it going! ...Way to go, dude, you forgot to put that closet on a UPS or forgot to plan for enough power on the breaker feeding your closet... :) ).

  • Nothing beats an S/390 with a big batch of dumb terminals attached. Sure, you only get text-mode apps, but they were fast text-mode apps and they worked. Currently my employer is getting rid of all those text-mode apps in favor of web-based interfaces which are noticeably less responsive, noticeably less featureful, and oftentimes haven't been implemented yet. If you look at the implementation schedules, many of the teams on the project are already years behind; many of them haven't even written a line of code yet. But we gotta get rid of the mainframes, because the GUI's more user-friendly and the web is the wave of the future...
  • Having perused the article, I find many of his suggestions, like using 10base2 and 386 computers, simply moronic.

    He's penny wise and pound foolish.

    The author would be a good name to put on a blacklist.. i.e. "Don't ever hire this guy to manage a network."
    • These days, 10 meg ethernet cards qualify as 'antique' and probably go for more than a decent 100 megger would.

      Remember the good old days when you actually had to worry about things like 'ethernet collisions' when hubs were dumb?
      • I probably shouldn't be so critical, but I went through much the same learning process when I was a youngin out of college 8 years back.

        Except then, a good 10baseT card (3c509) would set you back about $100 or so, and an 8-port hub cost around $300.

        So at that time, using 10base2 was actually somewhat excusable. RG58 cable was about the same price as CAT-5, just add a couple of terminators.

        Still it was a nightmare, and when we moved offices I said we should pull in CAT-5 and buy a hub.

        I guess perhaps the difference is that back in that day of my learning, I knew that I didn't know everything, so I went around asking questions of others. I certainly didn't spout my ill-informed opinion off in a column, like this guy did.

        Even today, I know the limits of my knowledge and I won't recommend to others that which I am not damn sure about.
  • by sharkey ( 16670 ) on Sunday August 26, 2001 @04:55PM (#2219319)
    Pretty simple, actually. Open up your Sunday paper, pull out a pencil and start drawing. Soon Charlie Brown, Lucy, Snoopy and all the others will have PCs with our favorite Penguin on them.
  • Just install it on a server with VMware or Win4Lin (server version); people who still need to run Windows software can then log in to these machines and run it.
  • "Cheap and easy" should set off alarm bells if it accurately describes the person you intend to marry, but may be precisely what you should look for in a computing environment.

    Or in a closing time companion. :-)

    But seriously folks, the catch in using older hardware is that the motherboard probably doesn't support large enough IDE hard drives for what you have in mind.

  • cost of distributed computing with the power of centralized computing.


    Original centralized machines are (believe it or not) cheaper and yes, you can provide Sally the Secretary with a Pentium 133 as an X-Windows station. It's possible to do this.


    But as several people have pointed out, just because you can doesn't mean you SHOULD. He makes a poor argument (other than cost) for a return to centralized computing, and several people have pointed out that, even if we ignore the advantages of distributed computing (there are several), companies are STILL willing to spend the revenue necessary for distributed computing.


    In short, cost alone isn't enough.


    A better argument would be to point out the advantages of centralized computing that are not cost related (mobile 'desktops', centralized administration (no more GHOSTING!), etc.). However, given management's previous experiences with centralized computing, this isn't likely to be a persuasive argument either.


    An argument needs to be found that shows that Linux is cheaper and still invokes the use of distributed computing. (The advantage of remote administration is a start, but there's a long way to go.)

  • In sucs [sucs.org] we have a bunch of aging SPARCstations with big monitors, all running remote X sessions off a Cyrix 200. The sparcs themselves run Red Hat 6.2 and the server runs Red Hat 7.1.

    Unfortunately, the sparcs can only run in 8-bit. Many apps look terrible. Have you seen Mozilla running in 8-bit? Its theme alone chomps all the colours, not to mention the problems there have been with the I-beam becoming invisible. The mere mention of Java or Shockwave is enough to send the CPU gauge completely into the yellow.

    Also, programs that seem fine on a local X seem to update so incredibly slowly they become practically unusable when run remotely on the sparcs (AbiWord when trying to wrap text to a new line was guilty here, but maybe it was just that early build).

    Worse still, getting StarOffice up and running was nothing short of a nightmare (it would just core dump whenever it was displayed on the sparcs). In the end I managed to find an IGNORE_XSESSIONERRORS envvar which let users start it up (with a core dump left behind).

    When it comes to defaults for new users there is trouble there too - I installed Ximian to let us run Galeon because Mozilla was too slow. Unfortunately Ximian's pretty installer defaults to using Nautilus, which completely overloaded the server when one session was running, let alone four or five (make it stop! I mean start)... I made a gmc setting, but getting rid of Nautilus as the default desktop manager once it's installed itself isn't as simple as it could be. In fact, it simply isn't (yet) all that easy to set up sensible Gnome desktop defaults for all new users - simple things like turning off thumbnail updating are important, because when several machines are doing it at the same time it drains precious CPU. Maybe KDE would be better, but that seems to run even slower than Gnome.

    I've had to eject countless disks remotely because users have put them into the sparcs (which have no eject button), not realising that they could only access the floppy drive on the server, and have then wondered exactly how to get their floppy back.

    The idea of using esound turned out to be a stumbling block due to broken esound on the sparcs (I've tried building cross-compilers but they never seem to completely work).

    I need serious convincing that a bunch of dumb terminals really are a better solution. Today's apps need more bandwidth and CPU than ever, and when it's being shared out over a comparatively slow connection everyone suffers. If everyone stuck to using xterms then it wouldn't be so bad...

    Maybe things don't feel so slow on 10/100Mbit networks, but people readily point it out here (why does it take so long to log in?), to the extent that I'm undecided whether using NT4 on a PII with 64Mb is actually any worse.

  • How To Create a Linux Network for Peanuts

    In any case, Linus must be well on his way.

  • by adamjone ( 412980 ) on Sunday August 26, 2001 @11:55PM (#2220115) Homepage
    I'm all for making Linux the enterprise standard, and I truly believe that there are a number of cases in which excessive computing power is used where it isn't needed, but this article is a bit extreme. The author leaves out a number of items which would be necessary to make this system work.

    - Monitors. How are we supposed to look at pretty X widgets? Dot matrix printout?

    - Network Equipment. A NIC card does not a network make. You are at least going to need some cable and hubs.

    - Cost of installing the network. In most places where this solution is viable (small service businesses, order entry, churches), a network infrastructure is not in place. Files are passed on the floppy-net. Running cables on open floor is not an option, as it is an OSHA and fire safety hazard. So you either need to purchase and install raised floors, or resituate your offices.

    - Scalability. The author never mentions the target number of users in this model. I can see this system comfortably supporting five users, possibly ten if all the employees need are simple text entry forms, but just try to run three instances of StarOffice and five of Netscape on the network, and watch your 300 MHz server grind to a halt.

    - Progress. This system is great... if you believe your companies needs will NEVER change. There is absolutely no room for improvement here. What happens when each clerk must scan a barcode along with an entry? Do we ask the clerks to enter the barcode by hand?

    - Customer/Employee satisfaction. No one likes to work on equipment that is known to be outdated and obsolete, even if it works well. That's why high school students bring graphing calculators to algebra courses. It would be very difficult to appeal to potential customers, no matter what business you are in, when you are using a system such as this. The same goes for employees.

    The $30 system not only lacks many components, but even when fleshed out would be hard pressed to find a viable business for implementation. The wiser systems administrator will allow for future growth, and be sure to catalogue ALL components of the system before making a proposal to management.
  • by x1pfister ( 260667 )
    Anyone remember "NC" (network computers) ? The Java-only device that would cost only a few hundred dollars? If you forgot, this was like X terminals, only you sent Java, rather than bitmaps over the network.


    SUN eventually decided that 100-base-T was the only way to effectively deploy that technology.


    I used an X terminal for about 5 years to develop software. If I ran a local window manager and stuck mostly to xterms, it ran smoothly -- graphics and complex GUIs are nowhere near as crisp and responsive. Four trips up and down a network stack for every mouse click is much slower than anything Microsoft puts out.
