
Red Hat Begins Testing Core 5

Robert wrote to mention a CBR Online article which reports that Red Hat has begun testing on Fedora Core 5. From the article: "The next version of Raleigh, North Carolina-based Red Hat's enterprise Linux distribution is not scheduled for release until the second half of 2006 but will include stateless Linux and Xen virtualization functionality and improved management capabilities. Fedora Core 5 Release 1 includes updated support for XenSource Inc's open source server virtualization software, as well as new versions of the Gnome and KDE user interfaces, and the final version of the application suite."
This discussion has been archived. No new comments can be posted.

  • 5? (Score:1, Interesting)

    by CastrTroy ( 595695 ) on Friday November 25, 2005 @01:20PM (#14113725) Homepage
    Doesn't it seem like they're advancing a little fast through the versions here? It won't be long before Fedora Core is beyond RHES in terms of version numbers. The kernel is only at version 2.6. Why is Fedora using a numbering scheme like this? Do they want to make it seem more mature?
  • by shane2uunet ( 705294 ) on Friday November 25, 2005 @01:45PM (#14113863) Homepage
    Why do a lot of the postings to articles boil down to

    "that is crap use this"

    Don't these people realize that no solution fits every situation? It boggles the mind.

    Anyway, I love Fedora Core. I use it on my desktop at work, running FC4 right now. Stable as can be, and it gives me the tools I need. See, I'm a system administrator: I have about 7 RHEL systems that I personally oversee. Fedora Core lets me see what will soon be included in RHEL and get familiar with it.

    Why Red Hat? If you have to ask, you don't know Linux or open source. They contribute millions of dollars to open source and to Linux development. Sure, they're making a buck off support, and I'm glad to pay it; in return I get a rock-solid OS that is guaranteed to be there for 7 years. Oh, and Red Hat seems to be doing pretty well financially too, as seen on Slashdot here recently.

    I just don't understand why they are upbraided for that. They're just trying to make a living at Linux, same as me. I mean, if you don't want to pay, RH has even allowed (by the GPL) others to make an almost identical OS (CentOS); the only thing missing is the Shadowman.

    I can't wait for FC5 to go live, I'll be upgrading.
  • Stateless Linux (Score:3, Interesting)

    by RichiP ( 18379 ) on Friday November 25, 2005 @02:47PM (#14114141) Homepage
    The description and whitepaper on Stateless Linux remind me of how lab computing used to be back in college (around 1996), when our lab computers didn't have hard disks but would boot from an image on a Novell NetWare server (via network PROM boot). All the programs and the user's data resided on the server, but the processing power used was the client workstation's. Seems to me Novell would be one of the companies interested in this approach and would get on the Fedora Stateless Linux bandwagon. It would be nice if the two companies actually worked on this together, since the Fedora project is neutral ground.

    I think Stateless Linux is a great idea. In fact, I think Gnome should be extended so that a session can span the several computers a person logs on to. Then we could couple distributed computing on top of that and make it part of the Stateless Linux-Gnome system.

    Exciting times!
  • Re:Stateless Linux (Score:4, Interesting)

    by Anonymous Coward on Friday November 25, 2005 @03:17PM (#14114282)

    > I think Stateless Linux is a great idea. In fact, I think Gnome should be extended so that a session can span several computers where the person logs on to. Then we could couple up distributed computing on top of that and make it part of the Stateless Linux-Gnome system.

    Gnome has had session-saving stuff for a while now... and it all sucks.

    What we need more is...

    Have you ever used 'screen'? It's a terminal multiplexer for the Unix shell. It lets you open multiple shell instances on different computers in the same terminal, then detach and leave everything running in the background. You can move from computer to computer, disconnecting and reconnecting over ssh and such, without losing anything.

    It's very handy.
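    A typical 'screen' round trip looks like this (the session name "work" and host name "devbox" are made up for illustration):

```shell
# On the remote box, start a named session and do your work in it
ssh devbox
screen -S work        # create a session named "work"
# ... run long jobs; press Ctrl-a d to detach, everything keeps running ...

# Later, from any other machine:
ssh devbox
screen -ls            # list attached/detached sessions
screen -r work        # reattach exactly where you left off
```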

    X Windows is a networking protocol. X clients are just programs like Firefox, Nautilus, AbiWord, or any game that runs on top of your X server; the X server is simply the program that controls your input devices and monitors, and displays the output of your X clients on your local machine.

    X clients can be anywhere on your network once networking is enabled (there are certain security considerations with X, which is why networking outside your local computer is disabled by default). They can be on your local machine, on a remote machine, anywhere on the internet... It doesn't matter.

    Think of it like this: your X server is your X browser, and the X clients are like the frames or websites that you interact with. They can be anywhere.

    What we need is a standard way for X Windows to have something like 'screen', where you can save your current output and move it to any computer that can handle X Windows.
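    A minimal way to see that client/server split in action is ssh's built-in X11 forwarding (the hostname here is hypothetical):

```shell
# From a machine running an X server:
ssh -X user@remotebox    # -X tunnels the X protocol over ssh
echo $DISPLAY            # on remotebox: points at the forwarded display
firefox &                # the X client runs on remotebox,
                         # but its window appears on your local screen
```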

    Sun already has this for their excellent X terminals that they sell.

    Not only that, we need a way to move programs from one X server to another. You can run multiple X servers on your machine; I do that all the time. I also run X servers on my laptop and the other computers I have available. I should be able to move an X client from machine to machine, from output device to output device, without stopping or restarting any programs.

    If you combine that with network-based home directories, some sort of networked sound system, and a network authentication and directory system, then you should be able to use any system transparently. It would be a roaming desktop... but on steroids. Not only could you have your home environment on every single computer in the system, you could also use any program on any computer in the system.

    Combine that with clustering capabilities, such as distributed file systems and the ability to migrate not only processes from computer to computer but, using Xen, entire running operating systems, and we would have a true network-based operating system.

    The entire computer network of a corporation, school, or other organization would be able to share processor, memory, and disk resources transparently. Any part of the system, any computer, would be a plug-n-play system.

    You buy a Dell. You format Windows off of it, you plug said Dell into the network. That's it. That's all it would take to install Linux on it and make it work with the rest of your networked computers.

    This is what Stateless Linux is working toward. Stateless Linux is the first major step in this direction.

  • Linux on the Desktop? Not if the user has a wireless card.

    The problem with the wireless hardware is that:
    1. Most of the manufacturers haven't released any specs so the driver writing has needed lots of reverse engineering.
    2. Much of the hardware has gone through rapid development cycles, meaning that by the time the drivers are available you probably can't get the hardware anymore.
    3. Linked with (2), many manufacturers sell their updated revisions under the same name, model number, and in some cases even FCC ID, even though the new revision is *completely* incompatible with the old one, so you may research which hardware will work only to find that what you buy is an incompatible revision.
    4. Most cards require uploadable firmware which the manufacturers won't release under good licences, so it can't be shipped with most Linux distributions as standard and you have to download it yourself.

    The Prism54 drivers are a good example of (2) and (3): the drivers were of good quality, but by the time they made it into the stock kernel, Intersil had stopped making the supported chipset and had replaced it with a completely incompatible SoftMAC-based chipset. A number of manufacturers, such as SMC, released cards using the SoftMAC chipset under the same name and model number as the old ones, and it was nigh on impossible to know which version you were going to end up with, because even the retailers didn't know there were two incompatible versions of the same card.

    I understand that the new Prism54 drivers now support the SoftMAC chipsets, so maybe I'll fetch the incompatible SMC card I ended up with off the shelf. Interestingly, the Prism54 website says they're working on an open GPL firmware, and I hope they succeed, as that means we can at last have some hardware *completely* supported by a vanilla kernel. GPLed firmware also opens up possibilities for new uses of the hardware, since interested parties can hack the firmware to do strange new things (enhanced mesh networking, etc.?)

    Speaking from experience setting up supported Prism54 802.11g cards under both Fedora 3 and 4, it's simply a case of grabbing the firmware and sticking it in the right place, and then it Just Works. You can't get much easier than that unless the distributor breaks the firmware licence and bundles the firmware illegally.
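    Concretely, "the right place" on FC3/FC4 is the hotplug firmware directory. A rough sketch; the exact firmware filename and download location vary by chipset, and the ones below are illustrative for the fullmac Prism54 parts:

```shell
# Fetch the firmware image the driver expects (name/URL vary by chipset)
wget -O isl3890 http://example.org/prism54-firmware   # illustrative URL
# udev/hotplug looks for requested firmware here on Fedora:
cp isl3890 /lib/firmware/
# Bringing the interface up triggers the firmware upload
ifconfig eth1 up
```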

    The last time I installed Fedora Core 4 off a boot CD, I was amazed that to do an FTP install I still had to manually punch in which mirror I wanted to install from. Computer games have been grabbing "master server lists" for some time now. Can't something similar be worked into the FTP install?

    Maybe you don't want to install off one of the official mirrors?
  • Re:Mature? (Score:2, Interesting)

    by saikatguha266 ( 688325 ) on Friday November 25, 2005 @04:00PM (#14114491) Homepage
    > upgrading from an already unstable FC2 to FC3

    I admit it is quite easy to break FC and make it unstable (even inadvertently). In my experience, instability has primarily been a result of installing software not packaged properly for FC. For instance, DRI nightlies are tarballs and not well-built RPMs, Sun's Java RPMs don't use the /etc/alternatives convention, NVIDIA's drivers are not RPMs, etc. There is absolutely a need to package this software properly (and there are efforts underway: JPackage for Java, ATrpms for NVIDIA, etc.).
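    For comparison, a JVM package that did follow the convention would register itself with something like this (the JDK path and priority are illustrative):

```shell
# Register this JVM as a candidate for the generic 'java' name
alternatives --install /usr/bin/java java /usr/java/jdk1.5.0/bin/java 100

# Interactively choose which registered JVM /usr/bin/java resolves to
alternatives --config java
```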

    I completely agree with you that FC is not perfect and has fewer software packages than Debian -- thus tempting FC users to install 3rd-party packages that haven't received as much attention or testing. But that is quite different from saying that FC itself is unstable. Of course, it would be much nicer if FC included that software in the core system in the first place. Perhaps someday.
  • Every other one... (Score:3, Interesting)

    by mcrbids ( 148650 ) on Friday November 25, 2005 @05:03PM (#14114813) Journal
    It looks like (for me) my use of Fedora Core is falling into the same pattern I always had with the earlier Red Hat releases: every other one.

    I started on RH 5.1. Briefly hit 6.2 on the way to 7.x. Still have a number of servers running 7.x.

    Never touched 8.x, and was moving into 9 when RedHat EOL'd their "RedHat Linux" product.

    Now, I'm using CentOS for most of my (smaller) servers, and Fedora for personal use. I used Fedora Core 1, never touched Core 2, now happy on Core 3. Haven't touched 4, but am considering 5.

    Why upgrade on each one, unless there's some OMFG Do0d feature you just gotta have...
  • by eno2001 ( 527078 ) on Friday November 25, 2005 @06:13PM (#14115136) Homepage Journal
    Just as a side note to those of you who are unaware, Xen is probably the coolest thing to ever happen to computing, evar. It is a paravirtualization system. How many times have you fired up VMware or VirtualPC and wished you didn't have to run such a heavy host OS? Well, Xen is your answer. Xen is a special kernel all its own that boots directly on x86 and presents a new virtual architecture to the guest OS. This new virtual architecture (think PPC vs. x86 vs. amd64) is called 'xen'. And when your OS is compiled to run on top of the Xen kernel, you get EXACTLY what was mentioned above: a system that boots a very minimal "OS" that plays host to your VMs. Not only that, but at speeds that are near native! So Red Hat is making the right move by incorporating this into Fedora (and eventually their commercial offerings). Now, the only other thing that needs to be done is to make Xen work for grandma. Then you'll never have to worry about fixing people's PCs ever again...
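    To give a flavour of what that looks like in practice on a Xen-era Fedora host, booting a paravirtualized guest went roughly like this (all names, paths, and sizes below are illustrative, not real defaults):

```shell
# /etc/xen/fc-guest -- a minimal domU config (illustrative values)
cat > /etc/xen/fc-guest <<'EOF'
kernel = "/boot/vmlinuz-2.6-xenU"              # guest kernel built for the 'xen' arch
memory = 256                                   # MB given to the guest
name   = "fc-guest"
disk   = ["file:/var/xen/fc-guest.img,xvda,w"]
root   = "/dev/xvda ro"
EOF

xm create fc-guest    # start the guest under the hypervisor
xm list               # shows Domain-0 plus running guests
xm console fc-guest   # attach to the guest's console
```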
  • Re:FC3 - FC4 - FC5 (Score:1, Interesting)

    by Anonymous Coward on Friday November 25, 2005 @09:54PM (#14116173)
    Keep in mind though that upgrades are NOT supported with Test releases and are discouraged.
    FC4 -> FC5. Sure, no problem.
    FC4 -> FC5 Test n (and then -> FC5 Final) is a no-no. You wanna run test releases, then be prepared to do fresh installs.
  • Re:100% FUD (Score:3, Interesting)

    by zerocool^ ( 112121 ) on Saturday November 26, 2005 @12:27AM (#14116811) Homepage Journal

    You're right, of course. And I'm sorry I went off in that last post.

    But, you gotta understand my perspective. I was deep into the small-business webhosting business when the Red Hat swing went down. There was no way out. At that time there were "other" Linux distros, but "other" mainly consisted of Mandrake (which was falling off the map, despite being based on RH), Debian (which most people considered a fringe distro), and Slackware (outdated and hard to administer, at least when time-to-learn is a factor). Red Hat was it in the Linux world. People distributed binaries in RPM format. Hardware proclaimed Red Hat support, not Linux support. It was Red Hat or it wasn't professional.

    And then they pulled the rug out. I mean... we were in the middle of a sign-up boom; we were adding a new server practically every week. We had to find something.

    A parallel I thought of after I posted a lot of these comments: it's a similar situation to Apache today. Apache is by far the most popular web server on POSIX OSes these days. Imagine if, all of a sudden, it cost $300/yr for Apache (hypothetical, bear with me). Yes, there are other HTTP servers, but none of them have the maturity, support, or general ubiquity of Apache. There would be a mad scramble for the leftovers to bring themselves up to maturity, and at the same time, applications that previously depended on and hooked into Apache would be facing a partial API rewrite and debug to work with other products. Eventually, two or three successors would emerge as the contenders, and maybe someone would take an old version of Apache and start re-developing it, but it would fragment the HTTP world and cause mass confusion.

    That's what happened in the small-business world. It was several months (or a year) until Fedora was usable, and even though it's relatively stable, it's not suitable for a production environment due to support issues. FC isn't perfect, either; there have been compatibility problems between applications in the same distro. And it was years before CentOS came out. We needed something immediately, and we simply could not afford RHEL. We eventually switched to Debian, which was (is) kind of the warm fuzzy of the Linux world. You know it's going to work, because it's been tested for so long, but as a result it's usually a little behind the times. Debian went through a lot of growing up in the months after RHEL happened.

    Anyway, it shook us pretty hard. We felt like the rug had been pulled out from under us, and that we simply were a market segment who just didn't matter, because we didn't have deep pockets.

    Again, sorry everyone for the explosion, but I think this post really explains where my feelings for RH come from.

