
Red Hat Begins Testing Core 5

Robert wrote to mention a CBR Online article which reports that Red Hat has begun testing on Fedora Core 5. From the article: "The next version of Raleigh, North Carolina-based Red Hat's enterprise Linux distribution is not scheduled for release until the second half of 2006 but will include stateless Linux and Xen virtualization functionality and improved management capabilities. Fedora Core 5 Release 1 includes updated support for XenSource Inc's open source server virtualization software, as well as new versions of the Gnome and KDE user interfaces, and the final version of the OpenOffice.org application suite."

  • Does anyone know what "final version of the OpenOffice.org application suite" means? Are they simply referring to whatever the current version of openoffice is at the time?
    • Thanks, I was just going to ask this. I'm hoping it was just poorly worded, because OOo has a ton of projects [openoffice.org] in the development pipeline.
    • I'm assuming they just mean the final version of OpenOffice.org 2.0, which had been in testing for quite some time.

    • The Fedora devs are pretty involved with OpenOffice. When Core 4 was released it shipped with OpenOffice.org 1.979 or something like that. Obviously Core 4 has since been updated to 2.0, but they are either referring to 2.0 or maybe 2.1x, which is still in development but will be more stable by release time (and Fedora will be undergoing a ton of testing and stability checks over the next 3 months now that the test releases are out). Fedora was the first distribution to have OpenOffice.org use a native int
  • This is hardly an "article". The submission above is 80% of the article itself, and short on details.

    But more importantly: can someone expound a little on what "stateless Linux" is?

    • Re:skimpy (Score:5, Informative)

      by un1xl0ser ( 575642 ) on Friday November 25, 2005 @12:40PM (#14113837)
      Stateless Linux (from http://fedora.redhat.com/projects/stateless/ [redhat.com])

      The Stateless Linux project is an OS-wide initiative to ensure that Fedora computers can be set up as replaceable appliances, with no important local state.

      For example, a system administrator can set up a network of hundreds of desktop client machines as clones of a master system, and be sure that all of them are kept synchronised whenever he or she updates the master system. We provide several technologies for doing this.

      The scope of the project is the entire OS, since we are trying to improve configuration throughout all packages. However, there are some packages which are specific to Stateless Linux:

              * readonly-root
              * stateless-common
              * stateless-client
              * stateless-server
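
      To make "no important local state" concrete, here is a rough sketch of the kind of configuration the readonly-root package drives. The file names and keys below are the ones used by later Fedora/RHEL initscripts, so treat them as illustrative rather than exact FC5 syntax:

              # /etc/sysconfig/readonly-root
              READONLY=yes           # mount the root filesystem read-only at boot
              TEMPORARY_STATE=yes    # give the few mutable paths a tmpfs to live on

              # /etc/rwtab -- paths bind-mounted read-write over the pristine image
              empty   /tmp
              dirs    /var/log
              files   /etc/resolv.conf

      With the OS image itself read-only, every client really is a disposable clone of the master, and "state" shrinks to whatever you deliberately keep on the network.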
      • Stateless Linux (Score:3, Interesting)

        by RichiP ( 18379 )
        The description and whitepaper on Stateless Linux reminds me of how lab computing used to be back in college (around 1996), where all of our lab computers didn't have hard disks but would boot from an image on a Novell Netware server (via network PROM boot). All the programs and the user's data would reside on the server but the processing power used would be the client workstation's. Seems to me Novell would be one of those companies who'd be interested in this approach and would get on the Fedora Stateless
        • Re:Stateless Linux (Score:4, Interesting)

          by Anonymous Coward on Friday November 25, 2005 @02:17PM (#14114282)

          I think Stateless Linux is a great idea. In fact, I think Gnome should be extended so that a session can span the several computers a person logs on to. Then we could couple distributed computing on top of that and make it part of the Stateless Linux-Gnome system.


          Gnome has had saved-session stuff for a while now... and it all sucks.

          What we need more is...

          Have you ever used 'screen'? It's a multiplexer for the unix shell. It allows you to open up multiple shell instances on different computers in the same terminal, and then to disconnect the shell and still leave everything running in the background. It lets you move from computer to computer, disconnecting and reconnecting over ssh and such, without losing anything.

          It's very handy.
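
          If you've never tried it, the basic flow is only a few commands (the session name is just an example):

              ssh somehost             # log in to the machine doing the work
              screen -S build          # start a named session
              # ...kick off long-running jobs...
              # detach with Ctrl-a d, log out, go home, then later:
              ssh somehost
              screen -r build          # reattach; everything is still running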

          X Windows is a networking protocol. The X clients are just programs like Firefox, or Nautilus, or AbiWord, or any game that runs on top of your X server, which is simply the program that controls your input devices and monitors, and displays the output of your X clients on your local machine.

          X clients can be anywhere (once networking is enabled... there are certain security considerations with X, which is why networking outside your local computer is disabled by default) on your network. They can be on your local machine, on a remote machine, anywhere on the internet. It doesn't matter.

          Think of it like this: your X server is your X browser and the X clients are like frames or websites that you interact with. They can be anywhere.
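
          For example, with nothing more than ssh's X11 forwarding you can run a client on one box and draw it on another (host names are made up):

              ssh -X devbox                    # from your desktop: tunnel X11 over ssh
              firefox &                        # runs on devbox, displays on your local X server

              # or, on a trusted LAN, without the ssh tunnel:
              xhost +devbox                    # on your desktop: let devbox draw here
              DISPLAY=mydesktop:0 nautilus &   # on devbox: point the client at your display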

          What we need is a standard way for X windows to have a thing like 'screen', where you can save your current output and move it to any computer that can handle X windows.

          Sun already has this for their excellent X terminals that they sell.

          Not only that, we need a way to move programs from one X server to another. You can run multiple X servers on your machine; I do that all the time. I also run X servers on my laptop and other computers that I have available. I should be able to move an X client from machine to machine, from output device to output device, without stopping or restarting any programs.

          If you combine that with network-based home directories, some sort of networked sound system, and a network authentication and directory system, then you should be able to use any system transparently. It will be the roaming desktop... but on steroids. Not only could you have your home environment on every single computer in the system, you would also be able to use any program on any computer in the system.

          Combine that with clustering capabilities, such as distributed file systems and the ability to migrate not only processes from computer to computer but, using Xen, entire running operating systems from computer to computer... then we would have a true network-based operating system.

          The entire computer network of a corporation, school, or other organization would be able to share processor, memory, and disk resources transparently. Any part of the system, any computer, would be plug-n-play.

          You buy a Dell. You format Windows off of it, you plug said Dell into the network. That's it. That's all it would take to install Linux on it and make it work with the rest of your networked computers.

          This is what stateless linux is working for. Stateless linux is the first major step in this direction.

          • Re:Stateless Linux (Score:3, Informative)

            by Coryoth ( 254751 )
            What we need is a standard way for X windows to have a thing like 'screen', where you can save your current output and move it to any computer that can handle X windows.

            You mean like xmove [debian.org]? Basically xmove starts up a pseudoserver which clients can connect to. At startup, clients connecting to the pseudoserver display on the default X server, but they can be moved to any other display on the network.

            I agree that a cleaned up easy to use xmove system would be a nice idea though.

            Jedidiah.
      • One can get largely the same results with cfengine or something like that. Well, except for the diskless support, which I guess can be useful.
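
        The crudest version of the same idea is a nightly pull of the master's tree from cron (paths here are invented for illustration; cfengine obviously gives you much finer-grained control than this):

            rsync -an --delete master:/srv/golden/etc/ /etc/   # dry run: see what has drifted
            rsync -a --delete master:/srv/golden/etc/ /etc/    # then actually converge on the master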
    • here [fedoraproject.org] you can find some more info on what is likely to go into FC5.
    • can someone expound a little on what "stateless Linux" is?
       
      It's a rogue Linux out in international waters.
  • My experience trying to set up wireless with Fedora Core 4 was brutal. Nothing I needed was in the initial install. With no net connection in linux I had to keep booting into my windows partition to search for any help at all on how to set things up and then download what I needed. And then go back into linux to toil and then fail. And then repeat the process. Eventually I got my card at least detected, but when I activated it the whole machine hung. So I gave up on Red Hat.

    Ubuntu detected my wireless
    • by spazimodo ( 97579 ) on Friday November 25, 2005 @12:42PM (#14113849)
      Ubuntu has WPA support - search in Synaptic for WPA_supplicant. (You may need to enable Universe/Multiverse)

      This post brought to you on a Dell D600 running Ubuntu Breezy Badger using WPA.
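
      For reference, once the package is on the box, a bare-bones WPA-PSK setup is only a few lines (the interface name and driver below are examples; yours will differ):

          # /etc/wpa_supplicant.conf
          network={
              ssid="HomeAP"
              psk="my passphrase"
              key_mgmt=WPA-PSK
          }

          # run it against the wireless interface, in the background
          wpa_supplicant -B -i eth1 -c /etc/wpa_supplicant.conf -D wext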
      • When you say Universe/Multiverse that means what exactly? Something on the install CDs but not on the Live CDs? Or something that is downloaded?

        I can't download when I don't have my wireless working. Why isn't WPA_supplicant included by default at the beginning? It's a 50k file! Couldn't cram it onto the CD?
        • Yes, something you download.

          In Synaptic, click Settings / Repositories, click Add, tick the Universe box, click OK. Now search for WPA again and you should see the package. Except if you don't have a working network connection :-(

          You'll also notice more packages available: my Synaptic has 17,000+ of them, heh.
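
          (The command-line equivalent, if you'd rather edit files, is adding a universe line to /etc/apt/sources.list and refreshing; the mirror URL is just the usual default:)

              deb http://archive.ubuntu.com/ubuntu breezy universe

              sudo apt-get update
              sudo apt-get install wpasupplicant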

    • Amen to that. Linux wireless is beyond pathetic.

      A few weeks ago I tried out Linux, downloading a couple of distros (Ubuntu and SUSE) that were recommended to me.

      I was pleasantly surprised that these distros detected the hardware on my Toshiba laptop as well as they did.

      Except for the wireless card (a Linksys WPC54G), since I wasn't about to run a cat5 cable across the apartment for a laptop.

      So I rebooted into Windows, saved to a flash drive all the instructions and files I supposedly needed to get wireless wo
    • I think you're being a little unfair. You can't simply install FC4 and expect everything to just go like Windows does, because the latter operating system often has vendor support — so Linux does damn well to get as far as it does! Sure, some post-install work is required, but once it's set up it works like a charm. I have WPA EAP/TLS working quite happily with my IPW2200. OK, I had to download and build the drivers and wpa_supplicant, but is that much less hassle than the rigmarole of sorting it all
      • I understand I need to grab the driver from the manufacturer. But that should essentially be it. Ndiswrapper should be good to go as well as WPA_supplicant. Why should I have to futz around with these things at all?

        Shouldn't getting a network up be somewhat high on the list of things a linux system should do automagically at the very beginning? If you don't have the networking then a user is plain dead in the water so far as grabbing updates to get other things working.
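
        For what it's worth, when it does work, the ndiswrapper side really is only a handful of commands (bcmwl5.inf is the usual Broadcom/Linksys Windows driver name, but use whatever your card shipped with):

            ndiswrapper -i bcmwl5.inf     # install the Windows driver
            ndiswrapper -l                # should report "driver installed"
            modprobe ndiswrapper          # load the kernel module
            iwconfig                      # the wireless interface should now show up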
      • Once NetworkManager is completed, with full WPA support, things will be much smoother.

        Does NetworkManager still do caching DNS (either builtin or using nscd)? Last time I tried using NetworkManager DNS was too slow. I like the interface it provides for configuring wireless, but I just couldn't handle the slow DNS.
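
        (If it's only the lack of local caching that hurts, enabling nscd on Fedora is a one-liner, assuming the nscd package is installed:)

            chkconfig nscd on && service nscd start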

    • Why don't you try Mandriva 2006?

      I tried to install FC4 on my laptop's USB HD without success. I first tried booting from CD, but FC would not recognize the USB disk.

      Then I installed using VMware (which only made FC see my USB disk as a normal HD) using a persistent native disk config. After that I tried to boot from the USB disk, and it was almost done until I got an FC kernel panic because it didn't find the USB disk (WTF, I had just booted from there, lol).

      Anyway, I got Mandriva 2006 and installed flawlessly from the D
    • There is a very good reason for having to tell the installer where you want to download the files from. In an organisation with several systems, you would be better off copying the RPMS directories from the CDs/DVDs to an FTP/NFS/HTTP server on your own network. Point the installer at that resource and you can install the whole lot a great deal faster than over the internet.

      Here is what I do.
      1) Install, say, FC4 on a server box. Select EVERYTHING.
      2) Then set up a cron job to do a daily "yum update". Add some log
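
      The nightly pull itself can be as small as one cron entry (paths are illustrative):

          # /etc/cron.d/yum-nightly
          30 3 * * * root yum -y update >> /var/log/yum-nightly.log 2>&1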
      • I'm not saying to include it in place of manually entering whatever you need, but alongside it.

        I don't even think you understood what I said. What are you talking about copying all sorts of junk to a server?

        You get a Boot ISO. You boot from it. You choose FTP install. As it exists now you type in some server that you copied down the info for from the mirror list that you grabbed from the web page with mirrors on it.

        What I would like to see is simply a list of the mirror sites during the install that I c
    • Linux on the Desktop? Not if the user has a wireless card.

      The problem with the wireless hardware is that:
      1. Most of the manufacturers haven't released any specs so the driver writing has needed lots of reverse engineering.
      2. Much of the hardware has gone through rapid development cycles, meaning that by the time the drivers are available you probably can't get the hardware anymore.
      3. Linked with (2), many of the manufacturers sell their updated revisions under the same name, model number and even FCC ID in some cases, even though the new revision is *completely* incompatible with the old revision, so you may end up researching which hardware will work only to find that when you buy it, the hardware is an incompatible revision.
      4. Most cards require uploadable firmware which the manufacturers won't release under good licences, so it can't be shipped with most Linux distributions as standard and you have to download it yourself.

      The Prism54 drivers are a good example of (2) and (3) - the drivers were of good quality, but by the time they made it into the stock kernel Intersil had stopped making the supported chipset and had replaced it with a completely incompatible SoftMAC-based chipset. A number of the manufacturers, such as SMC, released cards using the SoftMAC chipset under the same name and model number as the old ones, and it was nigh on impossible to know which version you were going to end up with because even the retailers didn't know there were two incompatible versions of the same card.

      I understand that the new Prism54 drivers now support the SoftMAC chipsets, so maybe I'll fetch the incompatible SMC card I ended up with off the shelf. Interestingly, the Prism54 website says they're working on an open GPL firmware, and I hope they succeed in producing it, as that means we can at last have some hardware *completely* supported by a vanilla kernel. Having GPLed firmware also opens up some possibilities for new uses for the hardware, since interested parties can hack the firmware to do strange new things (enhanced mesh networking, etc.?)

      Speaking from experience of setting up supported Prism54 802.11g cards under both Fedora 3 and 4, it's simply a case of grabbing the firmware and sticking it in the right place and then it Just Works - you can't get a lot easier than that unless the distributor breaks the firmware licence and bundles the firmware illegally.
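
      Concretely, "the right place" is the kernel's firmware directory, along these lines (the isl3890 file name is from the prism54 project's documentation as I remember it; check it against your kernel):

          cp isl3890 /lib/firmware/        # or /usr/lib/hotplug/firmware on older setups
          modprobe -r prism54 && modprobe prism54
          iwconfig                         # the 802.11g interface should now appear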

      The last time I installed Fedora Core 4 off a boot CD I was amazed that to do an ftp install I still had to punch in manually what mirror I wanted to do the install from. Computer games have been grabbing "master server lists" for some time now. Can't something similar be worked into the FTP install?

      Maybe you don't want to install off one of the official mirrors?
  • by shane2uunet ( 705294 ) on Friday November 25, 2005 @12:45PM (#14113863) Homepage
    Why do a lot of the postings to articles boil down to

    "that is crap use this"

    Don't these people realize that no solution fits every situation? It blows the mind.

    Anyway, I love Fedora Core. I use it on my desktop at work, running FC4 right now. Stable as can be, gives me the tools I need. See, I'm a system administrator. I have about 7 RHEL systems under my administration that I personally oversee. Fedora Core allows me to see what will soon be included in RHEL and get familiar with it.

    Why Red Hat? If you have to ask, you don't know Linux or open source. They contribute millions of dollars to open source and to Linux development. Sure, they're making a buck off support, and I'm glad to pay it; in return I get a rock-solid OS that is guaranteed to be there in 7 years. Oh, and Red Hat seems to be doing pretty well financially too, as seen on Slashdot here recently.
    http://linux.slashdot.org/article.pl?sid=05/11/15/1732235&tid=110&tid=187&tid=106 [slashdot.org]

    I just don't understand why they are upbraided for that. They're just trying to make a living at Linux, same as me. I mean, if you don't want to pay, RH has even allowed (by the GPL) others to make an almost identical OS (CentOS); the only thing missing is the shadowman.

    I can't wait for FC5 to go live, I'll be upgrading.
  • by Tim Ward ( 514198 ) on Friday November 25, 2005 @12:51PM (#14113889) Homepage
    I know this site is for technically literate people, but really!!

    "improved management capabilities" I can cope with, but "stateless Linux and Xen virtualization functionality" and "open source server virtualization software" are worthy of the worst type of social science academic paper or local government policy document!
    • Why not just look it up on, say, google?

      here I'll even link you, www.google.com [google.com].

      If you're technically literate enough to read slashdot you should know that google is your friend. I promise you that the first documents for search terms 'xen virtualization' and 'stateless linux' are very useful.
    • Calm down, dude. Stateless Linux [redhat.com] and Xen [xensource.com] are the actual names of projects included in Fedora Core. They are not buzzwords or marketspeak. "Open source server virtualization [redhat.com] software" was slightly redundant, but it is also a plain English description of Xen, which is exactly what you're asking for.
    • I was browsing a baseball site the other day and they kept using terms like "suicide squeeze" and "relief pitcher". Bastards.

      Clue : If you're reading a tech news site with a leaning to Linux, it'll probably help to have some idea of the latest major developments in technology, as they relate to Linux. If you don't know what Xen is, or what a virtual server is, it's not as if it's hard to find out [wikipedia.org]
      • Clue : If you're reading a tech news site with a leaning to Linux, it'll probably help to have some idea of the latest major developments in technology, as they relate to Linux.

        Oh, well, if the site is only intended to preach to the converted then that's fair enough of course. I was somehow under the mistaken impression that, as someone who is paid to work on Windows more often than I am paid to work on Linux, the site could be useful for me to keep up to date with "the latest major developments in technolo
        • Oh, well, if the site is only intended to preach to the converted then that's fair enough of course. I was somehow under the mistaken impression that, as someone who is paid to work on Windows more often than I am paid to work on Linux, the site could be useful for me to keep up to date with "the latest major developments in technology", rather than a pre-existing knowledge of "the latest major developments in technology" being essential before attempting to understand the site.

          Virtualization has been talk

  • One of the problems highlighted with OpenOffice.org is its slow load time. I wonder whether Red Hat will do the needful and preload most of the needed libraries at boot time in order to reduce the beast's load time. My hope is that they have not spoiled KDE with the Bluecurve theme.
    • I know Novell have at least one guy working on improving OO.o load times. Sun are still responsible for most of the OO.o development, with Novell a distant second. I don't know if RedHat has anyone working on it at all.
      • You can hide the load time by running /usr/lib/openoffice.org2.0/program/soffice -nodefault -nologo. I have a perl script running that restarts this after exiting OOo. This cuts the subsequent load time to almost nothing.
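
        A small shell loop does the same job as the perl script; something like this (the soffice path is the one from the parent post):

            #!/bin/sh
            # keep a warm OOo instance around so the next real launch is nearly instant
            while true; do
                /usr/lib/openoffice.org2.0/program/soffice -nodefault -nologo
            done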

  • Final Version? (Score:2, Informative)

    by SnarfQuest ( 469614 )
    and the final version of the OpenOffice.org application suite.

    Did I miss some news? Have they actually stopped development of Open Office?
  • I think the big question is how to get to that nice FC5 from an existing FC3 or FC4 install. Is there a clean/supported/documented upgrade path? Just get the ISO, burn it, boot from it, and FC N+1 will be smart enough to do the _upgrade_?
  • Every other one... (Score:3, Interesting)

    by mcrbids ( 148650 ) on Friday November 25, 2005 @04:03PM (#14114813) Journal
    It looks like (for me) my use of Fedora Core is falling into the same pattern I always had with the earlier Red Hat releases - every other one.

    I started on RH 5.1. Briefly hit 6.2 on the way to 7.x. Still have a number of servers running 7.x.

    Never touched 8.x, and was moving into 9 when RedHat EOL'd their "RedHat Linux" product.

    Now, I'm using CentOS for most of my (smaller) servers, and Fedora for personal use. I used Fedora Core 1, never touched Core 2, now happy on Core 3. Haven't touched 4, but am considering 5.

    Why upgrade on each one, unless there's some OMFG Do0d feature you just gotta have...
  • by eno2001 ( 527078 ) on Friday November 25, 2005 @05:13PM (#14115136) Homepage Journal
    Just as a side note to those of you who are unaware, Xen is probably the coolest thing to ever happen to computing, evar. It is a paravirtualization system. How many times have you fired up VMware or Virtual PC and wished you didn't have to run such a heavy host OS? Well, Xen is your answer. Xen is a special kernel all unto its own that boots directly on x86 and presents a new virtual architecture to the guest OS. This new virtual architecture (think PPC vs. x86 vs. amd64) is called 'xen'. And when your OS is compiled to operate on top of the Xen kernel, you get EXACTLY what was mentioned above: a system that boots a very minimal "OS" that plays host to your VMs. Not only that, but at speeds that are near native! So Red Hat is making the right move by incorporating this into Fedora (and eventually their commercial offerings). Now, the only other thing that needs to be done is to make Xen work for grandma. Then you'll never have to worry about fixing people's PCs ever again...
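
    To give a flavour of what "incorporating this into Fedora" means in practice: once you've booted on the Xen hypervisor, a guest is described by a small config file and started with the xm tool. Everything below (names, paths, sizes) is just an example of Xen 3-era syntax:

        # /etc/xen/fc5test
        kernel = "/boot/vmlinuz-2.6-xenU"
        memory = 256
        name   = "fc5test"
        disk   = ['file:/var/lib/xen/images/fc5test.img,sda1,w']
        root   = "/dev/sda1 ro"

        # boot the guest and attach to its console
        xm create -c fc5test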
