Red Hat Interviewed about Red Hat Linux 7

theridersofrohirrim writes "Linuxtoday has a very interesting summary and some interviews with Red Hat staff regarding Red Hat 7, gcc 2.96, and more. It also includes some embarrassing (but justified in my opinion) comments for Slashdot's redhat 7 bug story. Linuxtoday's article can be found on their site."
  • Hell, Mandrake 7.0 uses XFree86 3.6 and 7.1 uses 4.0, so X isn't a big enough change...
  • ...and doesn't RH70 have USB support? I don't know for sure (I wash my hands of RedHat), so go ahead and tell me (no, no, not all of you, homo flamiens!)
  • 6.2 has been running great for me

    I don't doubt it. I ran 6.0 (yeah, that's right, a "buggy, unstable x.0 release") on my machine at work 8-10 hours a day for over a year with no problems at all. If 300+ days of uptime is "unstable", I can't wait until someone finally releases a stable OS.

  • As everyone knows, "7" is an even number...

    Oh, really? ;-)

  • In reality if Redhat wants to do this and keep a business they are going to have to have 2 distributions. One would be bleeding edge, and the other would be stable and tried.

    What is the difference between this and choosing to use the last point release vs. the current major release? And isn't "brand new, stable and tried version" an oxymoron? Were you suggesting that they should have also released a stable and proven distro that's 2.4 kernel ready, includes XFree 4, etc..?

  • I'm just guessing here, but this seems something like the pain and suffering switching from the old to the new C libraries. The .0 release gives some warning that something fundamental is different. Expect some growing pains. Red Hat is just there a bit earlier than most everybody else.
  • People often want new. Usually the last point release does not include the bug fixes for that release. What I am talking about is maybe releasing a version of 6.2 and calling it 6.2.1, having all the fixes for 6.2 already applied, and releasing this at the same time as 7.0. In essence 6.2 with its updates is stable (I'm using it), however they don't sell it this way. Then they could market any x.x.1 release as stable with fixes.

    As far as a 2.4 release goes, we all know that is going to be buggy. There are lots of changes from 2.2 to 2.4: netfilter instead of ipchains. Yes, you can use ipchains, but then what is the sense in that? Just convert your rules. I imagine that when they do release 2.4, Redhat will have it in their distro and people will complain about problems. The thing is that from a business perspective it is not good to release unstable products. Look at M$ and how people think about them.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • That's a nice ad you have there.

    --
  • Well, answer this question. The article claims that there is a kernel compiler that is one of the stabler gcc versions. I still got errors compiling the kernel. Now, I didn't try especially hard to make it work. It was a friend's machine, and the bloated, wanna-dynamically-load-modules-like-Solaris-so-I-can-be-plug-and-play-like-M$ default redhat kernel worked fine. I gave up after gcc -v revealed that the boys at redhat decided to put in a buggy version of a compiler that won't compile most rpm's. I used to give newbies Redhat and tell them to learn to hate it on their own and switch to Slack or FreeBSD, but now I think I will be giving those people SuSE. Besides, German pages rock!!!
  • OK, I misunderstood what you meant by a stable release. When I suggested using the latest point release I was assuming that the updates would of course be applied before putting it to work.

    Releasing a 6.2.1 would definitely be a convenience, but would enough people be interested for Redhat to justify this? I hope most people would update long before something like this were available. BTW, I was running 6.0 updated up until this past weekend.

    Look at M$ and how people think about them.

    Yeah, but in this case, instability is just the tip of the iceberg. ;)

  • I've seen quite a transformation from a cute little Linux startup company to a potential monopoly. What Red Hat should do to prove this wrong is to, um... well, NOT SUCK. Seriously, piss on yer Red Hat distro and go out and buy Debian or Slackware. These are distributions that will never betray their user base like Red Hat has.
  • Next thing you know, you'll want us to RTFM!

    Nahh - with new releases, it's not RTFM, it's RTFS

    Liquor
  • It's hardly Red Hat's fault that ISO C++ doesn't specify an ABI. Expecting any two C++ compilers to generate the same binary layout, even if they are the same compiler at the same version, isn't right IMHO. Fix the standard, rather than rely on compiler specific "features", I'd say.
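
    For anyone wondering what "no specified ABI" means in practice, here is a minimal sketch (my own illustration, not from the article or the thread; all names are made up). Nothing in it is invalid ISO C++, yet almost everything about how it is represented in an object file -- mangled symbol names, vtable layout, how 'this' is passed -- is left to the implementation, which is why objects built by gcc 2.96 are not guaranteed to link against ones built by egcs or gcc 2.95:

    // abi_demo.cpp -- illustrative only; compile with any C++ compiler.
    #include <cstdio>

    struct Codec {
        virtual ~Codec() {}
        virtual int encode(const char* buf, int len) = 0;
    };
    // The mangled symbol for Codec::encode (something like
    // "_ZN5Codec6encodeEPKci" under the newer Itanium C++ ABI, a different
    // string under gcc 2.x's scheme) is not fixed by the standard, and
    // neither is the vtable layout or the exception-handling data.

    struct Identity : Codec {
        int encode(const char* buf, int len) { (void)buf; return len; }
    };

    int main() {
        Identity id;
        Codec* c = &id;
        std::printf("%d\n", c->encode("abc", 3));   // prints 3
        return 0;
    }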
  • RedHat are getting far too much criticism for the 7.0 release. Yes, it is buggy, but that's what you'd expect from a .0 release. If people want a less buggy version, simply wait for another to come out. If part of the point of Linux for you is to run imperfect distros and software, and you enjoy seeking out bugs and informing redhat of them in a helpful manner, then you shouldn't complain about the number of them. Finding bugs is fun. Admittedly it's slightly annoying if you actually want to get something done and you hit one, but ultimately you cannot expect a perfect system, and many people use linux precisely because you can find bugs and report them, thus helping everyone for future releases. It's like trial and error, and on a .0 version like this there is naturally more error.
    I suggest if people can locate these bugs they report them to RedHat in a sensible and uncomplaining fashion, as then they will be more willing to work harder for their users. Simply complaining doesn't help at all. Constructive suggestions will always be welcomed, and it is our duty as linux users to help improve and fix bugs for future, and ultimately better distros.
  • Well, you might think 7 is better than 6.2, and I might (or might not) agree with you. The point is, what does Red Hat think? The statement by Troan could be interpreted to mean that a .0 release is of lower quality than the previous non-.0 release, especially regarding the number of bugs. Sure, 7.0 might have more bugs fixed, but how many more has it introduced? Is there a net improvement in reliability, or is it worse?

    The point is, there seems to be a qualitative difference, in terms of the net number of bugs, between a .0 and a non .0 release. I guess we should expect this, but ordinary users aren't always going to realize that there's something magic about the .0 in the release -- that this implies risk, possibly more bugs. Since this seems to be the case, and Red Hat seems to be admitting this, why not make it more clear in the labelling of the release? Call it the "experimental, but really up to date" release or something. So people can know what kind of trade offs they're making.

  • Wow, I am amazed that my response got marked troll. I reread my comments to see if I inadvertently insulted the pope or RMS or something, but no, I only stated my own beliefs as to why I think that the current redhat release should have been labelled 6.3 rather than 7.0.

    I am grateful to those who marked it up as underrated. This has taught me a lot about the underlying fairness of the slashdot site.

    I want everyone to know that I use redhat, and have for years. I got my information about rh7.0 from several close friends who tried it out and told me what they had found.

    I will listen to my friends' evaluation of rh7.1 as well and pass it along at the proper time. If they say it is good, I will try it too. I would try every distribution that comes out, but I am too busy actually using my computers to get work done (including GPL projects on SourceForge) to install a new distribution every weekend.

    I saw no one give any convincing arguments as to why this shouldn't have been released as rh6.3. I have used the international security patch for years, and have used both the OpenSSL and OpenSSH packages for a while now as well. USB is being included in the 2.2 series of kernels, and installing XFree86 4.0.1 on a computer is as easy as downloading the X packages from XFree.org and installing them. I have been running XFree 4.0.x for the past three months on a rh6.1 installation.

    The linux2.4.0-test2 kernel worked perfectly with rh6.2, but the later 2.4 kernels have a different structure to their modules directory and you have to run insmod with the path to the module that needs loading. Evidently fixing this problem is as easy as loading a new modutils package.

    All in all this has been a learning experience. But I do think that in the end my voice was heard on this site, even if my voice was labelled as a troll. So I know that I was heard, but have a strange feeling that my voice was promptly ignored.

  • My thoughts exactly. Red Hat, in their official comments (especially the one by Troan saying that 7.0 is much better than 6.0 or 5.0, implying that 7.0 might be worse than 6.2) seems to be implying that there is something special ('innovative') about the .0 releases (in addition to binary incompatibility with the previous major release, which is understandable). That's fine, but they should let users know more explicitly.

  • The kernel depends on compiler-specific behaviour in many places, which is why e.g. 2.95 isn't blessed as a kernel compiler. Most of these dependencies are removed from the 2.4.0-test series, but there might be cases hiding in the code still.
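
    To make that concrete, here is a small sketch (mine, not from the thread) of the kind of GNU-specific constructs kernel code leans on -- statement expressions, __typeof__, packed structs -- which plain ISO C/C++ does not have, and which is part of why only particular compiler versions get blessed for kernel builds:

    // gnu_extensions_demo.cpp -- needs gcc/g++; these are GNU extensions,
    // not ISO C++.
    #include <cstdio>

    // Statement expression: a ({ ... }) block that yields a value, used in
    // kernel-style macros so each argument is evaluated exactly once.
    #define MIN(a, b) ({                \
        __typeof__(a) _a = (a);         \
        __typeof__(b) _b = (b);         \
        _a < _b ? _a : _b; })

    // __attribute__((packed)): no padding between members, the usual way
    // kernel code describes hardware registers and on-the-wire formats.
    struct on_wire {
        char type;
        int  length;
    } __attribute__((packed));

    int main() {
        std::printf("min = %d, sizeof(on_wire) = %u\n",
                    MIN(3, 7), (unsigned)sizeof(struct on_wire));  // 3 and 5
        return 0;
    }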
  • We had updated CDs in the past, but it was unpopular with sysadmins etc. who wanted to know exactly what was on a CD: Changes and updates could be applied after install. So we stopped years ago.
  • by bat'ka makhno ( 207538 ) on Monday October 09, 2000 @09:13AM (#720619)
    Freedom to innovate (through incompatibilities with existing standards), providing the product that the consumer wants even if it means releasing buggy beta software, discontinuing support for multiple platforms. I think I've already seen that somewhere.

    Thank God for visionaries (or mere realists?) like RMS.
    --
    Violence is necessary, it is as American as cherry pie.
    H. Rap Brown
  • ...though my 6.2 CD is currently serving as a coaster, along with the second disc of UT and the Daikatana CD halves (yes, I whipped out a pair of wire cutters and cut my Daikatana CD; I plan on mailing the fragments [along with a strongly worded letter] to John Romero).

    6.2 was annoyingly unstable; GNOME kept crashing, I couldn't makefile anything, smbclient wouldn't logon to my Win2K tower, et cetera, et cetera. I just got sick of it fast.

  • There were plenty of reasons to release a new RH major, as others have pointed out. Also, the release timing of kernel 2.4 has not been pinned down, and with AC there they may have better info than the general public as to a realistic release time. I don't want to have to wait for 2.4 to get XFree86 4.0. (Sure, I can update the packages myself, but what about the average user, etc.)
  • by g_mcbay ( 201099 ) on Monday October 09, 2000 @09:22AM (#720623)
    Well, I'm about to defend Microsoft (a rare thing for me). I have some karma to spare anyway. Don't try this at home kids.

    Eric Trohan said (quoted in this article as saying, anyway):

    "Moving Linux forward is important, however. Doing that requires changes that can make it difficult to move applications from newer systems to older ones. This is inevitable, and every platform vendor has this type of problem (applications built for Windows 2000 apps do not work on Windows 98, for example)"

    This, actually, is bullshit. Windows 2000 is fully binary compatible with Windows 98 (and Windows 95). I build software on Windows 2000 all the time that runs perfectly fine on the 'lesser' Microsoft OSes. There are some APIs that by default are only shipped with 2000/NT, and there can be API differences (true 32-bit GDI in NT/2000 as opposed to 16-bit thunked), but Trohan is vastly overstating incompatibilities to cover for his company's boneheaded move.

  • http://www.mandrakeforum.com/article.php3?sid=20001005082533
  • I'm not exactly sure what kind of I18N support gcc 2.95.2 is lacking, but whether or not it is worth having incompatible binaries requires a wait-and-see approach. Generally, I install things from the source, but it's not always possible to do that. Sometimes the software I want is only available in rpm. The real question is, is everybody going to provide Redhat 7.0 binaries AND normal binaries?

    I've run into a few developers who only test their software on the latest version of Redhat and claim it will only run on such a machine when in fact it works just fine using other distros. We will just have to wait and see if this new Redhat causes more people to take this approach.
  • Because it uses XFree86 4.0 as the default. That's a pretty significant change.
    It's more of a timing issue that X 4.0 got in a .0 release. Basically Redhat has a three-step release cycle: release a buggy .0, fix it with a .1, and get it almost right with a .2. Then continually support .2 releases with bugfixes three major versions back.
    XFree86 4.x is mostly stable, although there are major security issues, and I only use it with my ATI All-in-Wonder because the 3.3.6 driver for rage128 is terribly buggy with that particular card. However, I'm not going to use it on my other machines until OpenBSD includes it in their base distro, or says they've fully audited it and are only leaving it out for non-security reasons.
    Since the new kernel is significant as well, I'm curious to see if they switch to 8.0 when they add that. :-) Well, actually RH 7.0 is designed so that when 2.4.0 comes out all they gotta do is add all the patches they plan on adding, make an SRPM, compile it, and add it to the updates section of their ftp site. They were kinda banking on 2.4, but lived without it.
  • Ya know, it bothers me that people in the Linux community are acting in this manner. This is America, the most powerful, wealthy nation on the planet. We got this way as a result of something called capitalism. Red Hat, like many other companies, is trying to blend a socialistic software development methodology (open source) with a capitalistic business model. Making money is the name of the game. It ain't easy; give them credit, they're young. Linux, without RedHat, would be nothing more than a geek's after-work hobby and would never challenge the great behemoth you all despise in Redmond.
  • If you find a bug in the compiler, remember to put it in bugzilla [redhat.com]

    That way, we can fix it and even put the fix back in the gcc tree.

  • We're bringing the technology out for developers and users to use....

    ... and in the process of doing so, mature the technology and bring it forward. Note that most changes to gcc, gdb, rpm and good parts of other systems (gtk, GNOME, kernel, glibc) come from Red Hat employees - we are actually creating the technology. Another company which does much the same is SuSE. Other distributions, like Slackware and Mandrake, don't come close to this level of innovation and influence on the future of open source technologies (this doesn't apply to Debian, which by nature couldn't do such things - Debian users contribute as individuals to the different projects).

  • Red Hat Linux 7 has the USB backport (done at linux-usb [linux-usb.org], which seemed to stabilize this summer and is now even going into mainstream 2.2)

    We only support mice and keyboard, the rest of the modules are included "as is" - most seem to work, while others have problems (especially usb-storage, which to be really good would need a backport of the 2.4 SCSI layer)

  • When did you last install Internet Explorer, or Media Player? Heck, almost any "major" MS application has lots of files that are different depending on which OS you are installing it on. I don't see Media Player 7.0 for NT 4.0, do I?

  • RH was one of the first distros to start using glibc2 when it was considered unstable. They are just building a pattern for themselves. 5.0 through 5.2 were very buggy. 6.0 through 6.2 were moderately buggy but pretty stable. 7.0 is buggy, as 7.1 and 7.2 probably will be. 8.0 should be a stable release that will have a stable 2.4 kernel (a later 2.4, like 2.4.6 or later).

    This is redhat. They are known for being more on the edge than other distros. So what. They made a judgement call and they screwed up.

    Like windows did with 95. They made a call to release a piece of software that was not ready. Get over it!

    First, if you did not back up your system and just blindly upgraded, you're an idiot and I would not want to work with you! This means that you do not test your software, and you'd be willing to crash a site before testing it. Redhat is releasing a distribution. x.0 and x.1 releases are almost always pretty buggy. Give them till 7.2 before you gripe.

    Second, you probably did not try to find out anything about what you were installing. I.e. rpm 4.0 is incompatible with rpm 3.0, which is why I am not upgrading. They have lots of software that is not stable in this release: gcc, gtk 1.3, gimp, shall I go on? Yeah, someone here will argue that gimp is stable, but the 1.1 branch is not. That is what 1.1 stands for. Go to www.gimp.org and see for yourself. It is under development.

    In reality if Redhat wants to do this and keep a business they are going to have to have 2 distributions. One would be bleeding edge, and the other would be stable and tried. That is what I'd personally do if I were them. Then those that need stable reliable systems could get the tried and stable, while those who wanted the bleeding edge would get the bleeding edge.

    Lastly, this is Linux and this is a computer. Gee, you should be prepared to spend some time figuring out how things work and what to do. I've been using it long enough to know that if you manually change a file you are probably advised to back that file up.

    Here is what I typically back up: apache conf files (these don't change that much between 1.3.x revisions, so this is pretty safe); ipchains-save > ipchains is backed up; isapnp.conf, just in case; rc.custom from /etc/rc.d/, which is where I keep anything that I want my system to do on startup (hey, all the init scripts get replaced when you upgrade the initscripts package); apcupsd.conf; and several other config files, for sane, etc. I also back up any software I install - if I have the tar I keep that, and the srpm and rpm just in case - as well as the /home dirs, and my other dirs.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • Or Pay Alan Cox to work on the linux kernel...

    everyone just pretends to look the other way while they toss rocks at RedHat

  • I found 2 things disturbing about redhat 7:
    • xmms would hang on exit each and every time
    • jdk-1.2.2 (sun) would not run (segfault)
    Now, given that pinstripe (redhat 7 beta) was out for a long time, and given that redhat is a pretty huge company (by Linux-related companies' standards), why weren't these errors detected? BTW: Those bugs have just been fixed. They were glibc bugs. See http://linuxtoday.com/news_story.php3?ltsn=2000-10-09-009-04-NW-RH
  • every distro is buggy. that's why they have errata pages, update pages, etc.

    watch for patches, and install them when they come out. it's that simple.

  • Why do so many people feel the need to post this crap about RedHat x.0 releases being notoriously bad? All releases have their problems, hence the updates they release. Sure, the x.1 and x.2 releases are usually better, but the x.0 releases are not as bad as people like to point out.

    And why do so many slashdot people feel the need to bash RedHat and insult their users? Is there a reason other than it being a commercial company that is trying to make money in addition to putting out a good Linux distribution? Or is it because they are on top and it has become fashionable to bash the people on top? All it is doing is making me lose almost all respect for Debian and their users. And that is a shame since I am sure it is probably just as good as any other distro in certain respects. So why do all you Debian losers (and I am just referring to those vocal ones who spread FUD about RedHat and insult their users) out there feel the need to do this? Can someone point to some real facts instead of worthless anecdotal stories?

    ~Jazbo
  • "they changed the kernal hacking icon in the graphical install"

    yeah, who is that now? I don't recognize the picture....
  • ...but that it's just a Very Hard Thing to get a quality release out at the same time as doing new things. When your customers will be running hardware you don't have (and you *can't* test all possible PC hardware combos) and using the programs you provide in ways you didn't intend, it's all but impossible. Give 'em a break -- it happens to the best of us.
  • by idcmp ( 93227 ) on Monday October 09, 2000 @09:26AM (#720639) Homepage
    Linus has blessed an old version of gcc (ala egcs) as the stable version of gcc to build kernels, so RH includes it as "kgcc". But if you compile your own kernel, you knew that, right?

    The big GCC complaint is that object files, *.o, won't be compatible between 2.95.2 and 3.0 - except for C (and Fortran?), so basically C++ is affected. (A sketch of the usual workaround appears at the end of this comment.)

    This means you can't distribute *.o files made in C++ from one OS to another.

    Maybe some of you have had to do this before in your life, but I never have. And if you do, older versions of GCC are freely available for you to downgrade to (but if you're the type that sends C++ object files around, you knew that).

    The ELF binaries that GCC makes are still 100% portable.

    RH7 looks like it was made for 2.4.x kernels, but when they realized that 2.4.x was still down the road they decided to release it with 2.2.x.

    If I were RH, I would have released it for two reasons: (1) there comes a time when you must freeze something for QA and only do bug fixes, and from that point on there are no new features. If RH sat on 7.0 too long it would have been completely out-dated by the time 2.4.x came along.

    And (2) releasing now means that they can drop 2.2.x support and start focusing properly on 2.4.x, and getting DRI/AGPGART/GLX/etc/etc worked in properly.

    There have been 613 bugs logged against RH7 in Bugzilla (as of earlier today). Something like 287 of them are "NOTABUG" or "DUPE", and another 300 or so are "FIXED".

    This says to me that out of everyone calling Red Hat unstable, and unusable, etc, etc, less than 600 people have taken the time to do something productive about it.
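
    On the object-file point above: the usual workaround for shipping C++ code across compiler/ABI boundaries is to hide the C++ behind a C-linkage interface and an opaque handle, so the only symbols a client links against are unmangled C names. This is a generic sketch of that pattern (my own illustration; none of these names come from the thread):

    // c_boundary_demo.cpp -- hypothetical example of a C ABI boundary
    // around a C++ implementation.
    #include <cstdio>

    namespace {
        // Internal C++ class; its layout and mangled names never cross
        // the library boundary.
        class Counter {
        public:
            Counter() : total_(0) {}
            void add(int n) { total_ += n; }
            int  total() const { return total_; }
        private:
            int total_;
        };
    }

    // The exported interface: plain C names, no mangling, no C++ types.
    extern "C" {
        void* counter_create()            { return new Counter(); }
        void  counter_add(void* h, int n) { static_cast<Counter*>(h)->add(n); }
        int   counter_total(void* h)      { return static_cast<Counter*>(h)->total(); }
        void  counter_destroy(void* h)    { delete static_cast<Counter*>(h); }
    }

    int main() {
        void* c = counter_create();
        counter_add(c, 2);
        counter_add(c, 3);
        std::printf("total = %d\n", counter_total(c));  // prints 5
        counter_destroy(c);
        return 0;
    }

    In real use you would also have to keep exceptions and C++ objects from crossing that boundary, but the idea is the same.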

  • 6.2 was annoyingly unstable

    eh? .. works fine for me.

    GNOME kept crashing

    Did ya get the latest GNOME RPMS? .. and what crashed GNOME?

    I couldn't makefile anything,

    You what? .. you mean you couldn't compile anything? .. egcs works and has worked fine for me for ages.. you don't say what you had problems with..

    smbclient wouldn't logon to my Win2K tower,

    Did you get the latest SAMBA RPMS? .. and did you read the SAMBA FAQ? :)

    et cetera, et cetera. I just got sick of it fast.

    Sorry you had that experience.. 6.2 has been running great for me. I of course keep all the packages that I use up to date .. and that's good enough for me. Not like that takes a long time either with autorpm from Kirk Bauer [kaybee.org] .. just run it as often as you like and that'll help you keep things up to date. It's all nice and simple IMHO.

    --
  • ...charging that because of its inclusion of a compiler that was not binary compatible with anything else, Red Hat was beginning an attempt to create a proprietary distribution.
    Cox denied these charges in the discussion, reiterating his point that Red Hat's efforts were innovative, and not divergent.
    Wow. Shades of The Man.

    Bing Foo

    ---

  • They DIDN'T use a 2.4 test kernel. They used 2.2.16. They did however make sure all the libraries and support programs are up to date enough that a 2.4 kernel will drop in with an rpm.
  • by Anonymous Coward on Monday October 09, 2000 @09:28AM (#720643)
    you know what really ticks me off?? it's how EVERYONE loves to slam RH all day and all night. however NO ONE ever sees any of the good things that Redhat has done ... like how they gave major donations to XFree86 so that they could continue to work on X ... or how Redhat is what gets most people in this world interested in Linux in the very first place

    No, instead all you ever hear is how Redhat sucks and Debian is the best thing in the world

    Redhat may not compare to Debian in many aspects, however has anyone ever looked at how redhat may actually be BETTER than Debian??

    think about it ...

    it's sad to see that much of the Linux/BSD community are nothing but Anti-Microsoft and Anti-Redhat bigots

    maybe it's just me .. or does it seem politically correct to hate the "winner"?

    Sunny Dubey
    dubeys at bxscience dot edu

  • XFree 4.0, RPM 4, filesystem reorg, ...

    Most significantly, they have changed the kernel header files (to support the 2.4 version). That in itself should be enough to warrant a major version number increment. (That was how we did it in the good old VMS days!)

    Red Hat lists [redhat.com] the enhancements as

    • OpenSSL with 128-bit encryption for secure web communication
    • 2.4 kernel ready
    • USB support for mice and keyboards
    • XFree 4.0.1 for improved video performance
    • Cleaner, faster, more customizable GNOME desktop and Sawfish window manager
    • Graphical kernel tuning tool
    • Graphical firewall configuration tool
  • frustrated that by the time I post this will have been moderated up as "interesting" instead of down as flamebait

    Yeah, I guess all differences of opinion from yours should be considered flamebait.


    --

  • I used to like Redhat - long ago. Back when 4.2 was the latest version, and the Powertools distro was cheap. It had alpha + sparc + x86 in the one 6 CD set. It had scadloads of useful software (LyX / Gimp (motif!) / gimp (GTK!) / Lesstif / cbb ) . It had some simple but useful and documented GUI config tools - a simple installer. A semi decent package manager, and a good front end to it (glint)

    RANT ON
    It all went sour at 5.0 for two reasons. The libc 5 / glibc 2 switch, and another broken compiler. 1st of all, the libc5 compat libraries were not installed by default, and were too old anyway - StarOffice even then had binary problems on Redhat - wanting the current libc5 version. The glibc 2.0.5 version was also unstable - other distros held off longer for stability reasons.

    Secondly, they managed to ship a gcc version that failed to compile anything on any cyrix cpu based system. Nice. They also moved their GUI tools to using a snapshot GTK version - and it wasn't replaceable with a subsequent version - needed for tracking the GIMP releases.

    5.1 was just as bad. Broken image libraries, non standard file locations, and more. As well as Redhats travesty of a desktop - anyone remember fvwm-95?

    When Linux 3rd party binaries become common, some are for Redhat, some are for SuSE, some Debian, some Corel etc, and some target all. Making a binary work on more than one distro is hell - StarOffice is a large example, anything in C++ is another, WordPerfect 8 yet another. RPM is not a good package manager - it doesn't handle auto upgrades well, and is widely used in incompatible ways. Try a SuSE or Mandrake RPM in Redhat. Or rather, don't. RPM is fubared, and the only compatibility method is to track Redhat's latest and (ahem, greatest) version - and let the rest follow. This is Redhat's stealth move to monopolise the market for users running Linux who wish to also run purchased third party prepackaged binaries - such as an Office Suite, DTP package, industrial quality scalable DB, whatever.

    Redhat's shoddy quality control has always been a problem for KDE users, never mind their erratically principled stance against KDE / Qt (We want a friendly Linux / We won't distribute KDE / We will in Germany, cos our competition do / It's OK now, but we really want you to use our sponsored desktop (GNOME) / We'll test the install so much that we don't notice that installing KDE installs Gnome instead (6.1 - nice bug, in the spiffy rushed GUI installer) / We mess with the QTDIR variables to point to the wrong Qt to stop you compiling Qt based software) - and now they mess with the C++ compiler and libs needed for, say, C++ open source, KDE being the most prominent example.

    In short Redhat claim to be user friendly.

    In truth, they have several major faults.

    • Lack of QA
    • Poor Technical decisions
    • Core library and compiler instabilties
    • Randomly breaking admin tools
    • Rushing releases
    • Requiring massive amounts of updates

    To summarize - Redhat's QA, compatibility with other Linux distros and itself, scheduling, admin tools, packaging system, and software selection suck. (planetary bodies through straws)

    RANT OFF

    I no longer use Redhat or Linux, did you guess?

    FWIW, FreeBSD 4.1.1-STABLE

  • gcc 2.96 (C++) crashed on me, twice. Once was when it was compiling code with errors, which was mildly worrisome, and once while it was compiling perfectly good code, which was very worrisome.

    One has to wonder if this is a compiler problem or a hardware problem. RedHat says they used this version *a lot* before sending it out and I've used it a bit and haven't had a lick of trouble. Something tells me that is a memory problem, as GCC (and other compilers) will find them and crash on them before most other applications.

  • We're not discontinuing support for multiple platforms - just wait and see. We haven't announced anything (as in will do/won't do) yet, so saying we won't is way premature.
  • No, we made a conscious effort (that is, QA, testing, fixing) to include the best tool currently available. It's way better than egcs or gcc 2.95.2, which was why we did it in the first place.
  • FWIW, the compiler was used in compiling the entire Red Hat Linux 7 - this is quite a testbed.
  • [gelinas@theory gelinas]$ cat /etc/issue

    Red Hat Linux release 7.0 (Guinness)
    Kernel 2.2.16-22 on an i686

    [gelinas@theory gelinas]$ man kgcc
    No manual entry for kgcc
    [gelinas@theory gelinas]$


    I found little documentation regarding the second compiler and had to do a deja search to find out what the Redhat folks were up to WRT kernel compilation.

    As a long time Redhat user and supporter, I have to admit that I'm really unhappy with RH 7.0 because of the inclusion of a patched compiler and libc off the gcc main tree. Really, I want my x86 binaries to be compatible with the other distributions. I want a compiler that's being actively supported by the gcc community. I definitely want a compiler that won't be completely obsolete by the time RH-7.2 rolls around. Anyone want to bet on whether a RH distribution with gcc-3.0 is going to be a major release up? Betcha it will...
  • ...for debian, linus, and the others who would rather release a late, quality product than just wrestle with the market.

    RedHat 7.0 cost me hours of grief. It's an absolute piece of shit -- worse than 5.0, worse than 6.0, worse than the horrid 2.0-2.1 debacle.

    This was released for one reason only -- to try and stem the losses to Suse in the exploding european market. The number one seller of PCs in europe ain't dell, compaq or HP, it's Fujitsu, and they are selling millions through grocery stores.

    So they needed better internationalization. BFD. How many will stick with RH when it goes south on all their projects?

    AFAIK, the latest Suse has a lot of bugs, too. Y'all need to stop beating on each other and start serving the consumer.

    This is not to say I don't strongly appreciate RH's return to the community. Just go look at some of the Cygnus and native Java stuff they are funding -- that's great. But by continually releasing horrid product, you aren't helping anyone.

    But this release should have been pinstripe for a bit longer. And how dare you use the word "Innovate" to defend this POS.

    A new low for RHAT.

    Wanna fix your issues? Make a helixcode gnome release based on Debian, switch over to it at V8, and douse every magazine and university in free/freely distributable cds.

    But don't call it innovative.
  • We run quite a variety of Linux machines here. Binary incompatibilities waste everybody's time and complicate management because multiple releases have to be compiled and maintained. The change from libc5 to libc6 a few years ago was a right pain for a lot of people, and we certainly don't want to repeat it unnecessarily. Yet, we do want to evolve our systems progressively as new distros are released. No Big Bangs if they can be avoided, just gradual change.

    This puts us in a bit of a dilemma. Ordinarily we'd be installing new machines with RedHat 7, but if the object format is not backward compatible then this is impossible to do in an evolutionary way. It would be a major transition, roughly on a par with the libc5/6 one. I think I'll wait for RH7.2 before embarking on that kind of headache.

    Meanwhile, what's the most modern non-RH distro out there that is still object-compatible with the binaries in the pre-7 RedHat releases?
  • C'mon, Slashdot posters on crack?? Nah! That unpossible!
  • RedHat 7.0 cost me hours of grief. It's an absolute piece of shit -- worse than 5.0, worse than 6.0, worse than the horrid 2.0-2.1 debacle.


    Would you care to comment as to exactly what happened on your system? I've been using 7.0 on a workstation for a few days, and everything seems to be going ok--I'm going to bring up a test server today or tomorrow...


    Micro$oft(R) Windoze NT(TM)
    (C) Copyright 1985-1996 Micro$oft Corp.
    C:\>uptime

  • by ch-chuck ( 9622 ) on Monday October 09, 2000 @09:43AM (#720656) Homepage
    the market debugging windows for free is helping to fix their tightly held closed private property, whereas debugging a distro of open software is helping out the entire community of GPL users, including oneself. XFree86 ver 4 isn't RH 'property', nor are their own GPL'd creations. I'll help debug Msft property for free the day BG comes over and mows my grass and paints my windows for free. That's one of the secrets of their vast riches, and there's still a lot of MS suckers out there working for nothing, or maybe for the privilege of an early preview of upcoming products.
  • by Anonymous Coward
    ...more specifically, it's "riders of rohan" OR "rohirrim"
  • You can't be serious. How many 3rd graders do you know that don't already know worse language than this? Who is honestly offended by this game? I am so tired of this PC attitude in the world today (especially in the US) that says we have to sanitize our society for the sake of protecting the children, who already know every curse word there is by the age of 6 or earlier. I for one like being an adult and enjoy the freedom that comes with it. I don't want to live in a crippled and censored world just because some parents are too lazy to do their job if they want to try to protect their children from our decadent world and pretend to live in some imaginary, boring and perfect little PC world.

    ~Jazbo
  • Agreed! All are newbies originally. This newbie started by downloading Debian (wanted my first one to be FREE), and I was compiling things the first day.
  • The problem was repeatable. Hardware problems usually aren't.

    When I was compiling good code and it crashed, I was making use of a g++ specific feature, named return values. I expect this feature to disappear and be made into an automatically done optimization instead, but, until then, it makes vast improvements in code in the right places.

    I think the bad code that crashed the compiler was using deeply nested classes in combination with templates. This is also not commonly seen.
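
    Purely to show the shape of code being described -- a hypothetical sketch, not the poster's actual source -- "deeply nested classes in combination with templates" means something like a class template containing nested class templates, which a compiler should either accept or diagnose cleanly, not crash on:

    // nested_template_demo.cpp -- valid ISO C++; illustrative only.
    #include <cstdio>

    template <typename T>
    class Container {
    public:
        // A member class template...
        template <typename U>
        class Node {
        public:
            // ...with another nested type inside it.
            struct Payload {
                T key;
                U value;
            };
            Payload data;
        };

        Node<int> head;
    };

    int main() {
        Container<double>::Node<int>::Payload p;
        p.key = 1.5;
        p.value = 42;
        std::printf("%g %d\n", p.key, p.value);  // prints "1.5 42"
        return 0;
    }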

  • Why do you feel the need to upgrade?

    Because if one doesn't upgrade then managing the ever-older systems becomes more and more difficult over time.

    This is not all that relevant on systems which are static in function, but that's not the case here --- our services seem to change on a daily basis sometimes. If you're rolling out new functionality quite frequently then legacy systems add a huge amount of work, as each rollout also becomes a partial upgrade. Managing dependencies can become a nightmare.

    It's best to avoid all that trouble by keeping systems reasonably up-to-date, not necessarily on the very latest release (and definitely not bleeding edge) but fairly close by.
  • Oh, I remember...

    It was using a forward declared nested class. The crash happened when I forgot to put in the include file that defined the nested class.

    Again, I've not seen forward declared nested classes used much. Most of STL uses typedefs to rename a top level class to a nested type name, so the class definition itself is top level.

  • Yes, and thanks for letting me run it again.
    --
  • by photon317 ( 208409 ) on Monday October 09, 2000 @09:45AM (#720664)
    I checked the Redhat Bugzilla server the day the 2500 bugs story was released. Unresolved bugs against Redhat 7.0 came to a total of something around 120, not 2500.

    Slashdot was on crack that day.

  • Wow, you're defending Microsoft. And you're using selective quoting to do it, too.

    This, actually, is bullshit. Windows 2000 is fully binary compatible with Windows 98 (and Windows 95). I build software on Windows 2000 all the time that runs perfectly fine on the 'lesser' Microsoft OSes. There are some APIs that by default are only shipped with 2000/NT, and there can be API differences (true 32-bit GDI in NT/2000 as opposed to 16-bit thunked), but Trohan is vastly overstating incompatibilities to cover for his company's boneheaded move.
    Keep reading. You obviously missed this:
    "Actually, C and Fortran code will probably be compatible, but code in other languages, most notably C++ due to incompatibilities in symbol encoding ('mangling'), the standard library and the application binary interface (ABI), is likely to fail in some way," Pfeifer clarified in the GCC announcement, "Static linking against C++ libraries may make a binary more portable, at the cost of increasing file size and memory use."


    What's that Virginia? There is binary compatibility? Hell, if you cared you could have quoted earlier in the article where Troan (note the spelling. It's not Trohan.) talks about maintaining API compatibility as well. Your fabled Windows doesn't bother to do that, now does it?

  • by DG ( 989 ) on Monday October 09, 2000 @09:51AM (#720666) Homepage Journal
    I, for one, am very happy to see that *someone* is still compiling and releasing bleeding-edge distros.

    "Release early, release often" should be burned onto the foreheads of every single distro manager. It's the whole engine that powers Open Source.

    And as for "GCC 2.96"... That's a great idea. The GCC folks have a really nasty-bad habit of living behind their ivory walls, and tossing a release over the gate once every *year* or so. Rubbish! Get it out there, let us use it, let us find the bugs!

    And if the widespread adoption of a GCC newer than 2.7.8 finally convinces Linus to fix the hackery in the kernel that exploits non-standard GCC behaviour in older versions of the compiler, well, then so much the better.

    Hey, newbies! One of the whole points of this "Linux" thing is to actively find and report software bugs, so that they can be fixed. Linux is a participatory experience, not a "product". Red Hat's "product" is the service of gathering up all the bleeding-edge stuff, testing it for a certain level of usability, and then packaging it in a convenient format for you to get at. To expect a distro - any distro - to be bug-free is to miss the whole point!

    I *strongly* recommend starting with a RedHat *.0 release - you get to see the newest stuff, and you get to actually contribute to the process.

    Go Red Hat! The distribution for people with balls!
  • This is not a bug. They made a fundamental design decision to include a buggy development version of gcc and glibc. It was not as though this was some unknown bug that they couldn't do anything about. They made a conscious effort to include the buggy and compatibility-breaking development release of gcc. There is a BIG difference.
  • I'm not exactly sure what kind of I18N support gcc 2.95.2 is lacking

    I am having big problems with I18N support in my programs when using RH7, could you please point me to where you heard about this? I read the article, scanned the redhat bug database but I haven't found any info on what might be wrong with I18N.
  • by Palin Majere ( 4000 ) on Monday October 09, 2000 @10:00AM (#720669)
    So in summary RedHat seem to be saying that the 6.2 version is fine, but the 7.0 version enables people to use development sources ... is this going to be clearly labelled on their packaging? I bet not! If I walked in to a shop and saw RedHat 6.2 and RedHat 7.0 I'd (fairly reasonably) assume that RedHat 7.0 was the newer, better version. If I've understood the story correctly, RedHat 7.0 might actually pose *more* problems for a newbie!
    Stop and think a moment. What problems are a "newbie" going to run into? A newbie doesn't compile things. A newbie doesn't upgrade from a previous version of RedHat. A newbie grabs the shiny new 7.0 cd, installs it, and has an easier time of things because of the improved graphics support, easier installation, and the significantly more secure packaging of various software.

    RedHat 7.0 only poses a "problem" for those people interested in backwards compatibility. And I hate to break it to you folks, but backwards compatibility eventually goes the way of the dodo. It dies out. It goes away. And Linux is far more strenuous about removing old cruft than Microsoft is. See going from 2.0.x kernels to 2.2.x kernels and libc5->libc6 as some recent, blatant examples. Going from good ole egcs to gcc is another step up the evolutionary ladder. It's not like the other distributions aren't going to move to it when it's released. RedHat is just leading the pack to adopt it, and will have support for it already in the distribution when it comes out. Along with support for 2.4.x kernels waiting as well. All of these things benefit the newbie at the cost of backwards compatibility. The newbie gets to run the shiny apps. The newbie gets the spiffy performance figures. The newbie gets stuff that works because the newbie isn't trying to walk against the wind.

    Yes, backwards compatibility is nice. It's wonderful. You just have to be willing to pay the price to get it.
  • by philj ( 13777 ) on Monday October 09, 2000 @10:27AM (#720670)
    It also includes some embarrassing (but justified in my opinion) comments for Slashdot's redhat 7 bug story.

    I know that the slashdot Editors/Hemos/etc say they're overworked & don't have time to check their facts, but just how much damage do you think that the original story that slashdot posted did?

    The mainstream media (who don't know any better) often look to slashdot as a source for stories - I just wonder how much trouble it stirred up for no apparent reason? Isn't that FUD?

    I've been using RH7 for a few days & have found it to be a good, if a little bloated, distribution.

    Please, please, please start checking the stories before you post them.....
  • I challenge you to find the last piece of non-GPL'd code that RH wrote...
    I'll take that challenge!

    Can you show me the server-side code release for their fancy little Update-Agent?

    The last time I checked, there was no code available (although the protocol used is documented).

    I'm actually a RedHat user and have been quite happy with their releases. But since you issued the challenge, I couldn't resist answering it.

  • Hrm, good point.

    The included KDE1 libraries (which are C++) were built with the backwards-compatible egcs C++, so at least KDE is okay.

  • Posted by polar_bear:

    ...with the assertion that Red Hat's release cycle isn't too quick. From a commercial, never mind technical, standpoint it is a nightmare. In the Windows world, the idea of a new OS every two or three years is hellish for IT managers and folks who maintain production systems and desktops. For the home computer user it's annoying to have to upgrade (because you have to in order to use the latest and greatest version of program x) every two to three years. However, Microsoft has found people will put up with upgrades on a cycle of every two or three years, and that drives sales.

    To ask users to contend with new versions every four months is insane! It drives retailers nuts: almost as soon as you have Linux distro 7.0 stocked and promoted, version 7.1 is out. So, you're doing returns every couple months and consumers who are fence-sitting about Linux continue to do so because they perceive that there's never a right time to jump in. People who produce books and materials for Linux are finding it's extremely difficult to coordinate. Try writing a book, getting it through the editorial process and published before a distro goes through at least a point revision. It's nearly impossible. Even if you accomplish it, if you label it "Using Red Hat 6.2" you're nearly doomed because 7 will be out before the book has been on the shelves for six months. And who is going to buy "Using Red Hat 6.2" when 7.1 is out?

    I'm not arguing they shouldn't provide updates - the way the Debian project handles it is optimum, in my opinion. They do official releases slowly, but updated software and bugfixes are only an apt-get away. Slackware handles it pretty well too, by releasing very stable releases only once or twice a year. (I believe 7.1 has been the only release this year...)

    Since Red Hat is the market leader, and they've supposedly got all this marketing might behind them now, I'd hope they'd see the problems they're creating. If they must do a buggy .0 release, do it as a developer's release. Make subscriptions available for those who want to live on the bleeding edge, but stick to one boxed version a year, max.
  • by Big Jojo ( 50231 ) on Monday October 09, 2000 @10:39AM (#720674)

    Nah, please save the flamage. RH7 came up fine, "gcc 2.96" even compiles a decent kernel. (Though some of those CPP warnings are clearly kernel source bugs...) X11 update, Gnome update, ... lots and lots and lots of updates, it feels better than 6.2 already.

    You know, I've been wondering when the heck the GCC team would move past the 2.95.2 release ... considering that I've been wanting SOME release with GCJ support for a really long time. I know a lot about the C++ ABI problems, as does anyone who's developed production code in C++; and I just don't see RedHat as having worsened any of those problems. Frankly, more conformant C++ is a major step forward ... and didn't just a few compiler optimizations get out of the "research" world (of gcc developers) this way? We've been wanting better GCC code generation a LONG LONG TIME.

    Why is RedHat getting flamed, instead of the GCC folk? GCC created a problem ... and hasn't been seen to be fixing it. Where's even a draft schedule for "GCC.next" releases? Say, bugfixes to the 2.95.2 release of last year??

    I know why RedHat's getting flamed: Slashdot, and the flamers that keep the LKML noise content too high for me to tolerate. However, the signal in those flames is pretty much invisible.

  • Hey, newbies! One of the whole points of this "Linux" thing is to actively find and report software bugs, so that they can be fixed.

    But then we might have to actually contribute code to open source and actually work, rather than profit off the code of others!

    Next thing you know, you'll want us to RTFM!

  • Well, I have to return to my experiences with RH 5 and the many many bugs I endured. I put the worst one that was totally killing me (severity 1) on the bug tracking RH database. It sat there like many of the rest. Zero. No replies, zilch.

    As in the past I started searching for others who had the problem, and finally got the source and found the problem.

    Weeks had gone by. My report was still untouched, so I sent them the resolution. Guess what! They duped it to something else off the wall. BTW, I found that they basically let reports stew a loong time and then dup a bunch out. This is what you get when you don't really give a shit about your customers who are debugging your product for free.

    I lived and worked in RTP for many years as a pgmer/consultant. I knew the area well and thought that some of the high tech attitude and methods would ensure that RH would field worthy products. Wrong.

    I like RPM. I like Gnome. I don't like the way RH does biz with the users who buy their distribution.

    I see chickens winging in... tired chickens... chickens looking for a roosting pole.
  • If the signal is invisible, then just flame and troll away, you can't tell if you're missing or hitting anyone!
  • XMMS hanging is not Red Hat's fault. I have the same issue on Debian, pisses me off to no end.
  • This means you can't distribute *.o files made in C++ from one OS to another. Maybe some of you have had to do this before in your life, but I never have. And if you do, older versions of GCC are freely available for you to downgrade to (but if you're the type that sends C++ object files around, you knew that).

    IMHO, this is not the issue. The issue is with people who use Redhat in a development environment. With the inclusion of gcc 2.96, companies that distribute linux binaries will never use Redhat 7 because there would be binary incompatibilities with other flavors of linux.

    I know that I will be staying with 6.2 for a while.
  • Slashdot posters are looking more and more like CIO's and less like techies.

    You people are totally missing the point of the Linux creed. The GNU/Linux operating system is an operating system for people who like to tinker and hack and play around with operating systems! It's free software. There are certain freedoms that come with it. Say... the freedom to modify source. If you don't like some of the things that Red Hat does... use the force and change the source. It's totally customizable. rpm -e still works. Replace packages, downgrade to RH 6.2, I don't care. Just quit complaining about it. If your car needs some sort of modification, and the mechanic that you take it to tells you that he is going to do some innovative things that haven't been done before, you should make the decision whether or not to let him do it. If you let him make these modifications, you were the one who made that decision. You are at fault. Either take your car to another mechanic, or god forbid, do it yourself.

    I don't have a single problem with RH 7. Any incompatibilities that I have found, I have fixed. I have modified source, and made it work.

  • Redhat 7.0 will cause more problems for newbies who want to use compiled binaries from somewhere else (fairly common) or who want to make binaries for others (fairly uncommon among newbies).

    This isn't new. Every RHL *.0 release breaks binary compatibility with the previous version. That's why they upped the major version.

    And, among the problems that newbies will be likely to see, there are some big ones and some small ones:

    • Big: Corel WPO2k and Photopaint won't install without serious work. This isn't a binary compatibility problem, actually, it's caused by the change in locations of init files and Corel's method of packaging the RPMs.
    • Less big, but still matters to some: Libc 5 compatibility libraries aren't there. This will affect WP8 and other older apps (someone mentioned matlab).
    • Maybe big, depending on your needs: C++, it's changed. Things compiled with gcc 2.96 won't link vs. C++ compiled with egcs. This will be significant for some people, but all I can think of that would affect newbies is KDE, and KDE1 on RHL7 was compiled with egcs, so it should run with any binary downloads for KDE. (I don't know how happy it will be if you try to compile vs. it, though.)


    These are all pretty similar to the problems with 6.0 (remember Star Office and RealPlayer headaches?) and are to be expected when you have a distribution using software released after commercial software. But even still, the vast majority of software I've used (including old binaries hanging around) has worked fine. I don't compile things on my RHL7 box and expect to run it on older boxes, but then... I never did.
  • Don't real geeks download source and compile everything themselves? I mean, it's a hotrod OS, so why would I go buy the Chrysler wannabe muscle car when I can build my own from a classic frame, body, and engine block, fitting the other parts myself? And RPM/apt-get? What a wuss way to get "packages"! You people make me sick! Is tar broken? Has years of reinstalling Micros~1 binaries softened your brains?
  • by devphil ( 51341 ) on Monday October 09, 2000 @10:59AM (#720694) Homepage

    First, in your statement:

    ...a "2.96" version of GCC, one which GNU would obviously like to see waiting until a more publicly acceptable major release
    Let me try to clear this up a bit. Taking a snapshot, heavily testing it, and releasing it with a slightly changed version string ("experimental" -> "RH 7.0") isn't a major problem by itself. Yeah, it's binary incompatible, but GCC 3.0 was going to be so anyway.

    What annoys many of us on the gcc-bugs mailing list is that RH did not also change the bug reporting email address, or anything else, to indicate that this is a technically unstable release. So the list gets all these messages complaining about an unstable release that should have gone to Bugzilla instead. The GCC team is not RedHat's front-line helpdesk system.

    Second problem, from the article itself:

    Pfeifer could not be reached for clarification on which specific distributions the announcement addressed.
    *boggle* Which ones do you think they were addressing?!? There's only been one distribution so far to do this...
  • I could care less about C++ binary compatibility. It largely doesn't exist now. C++ is a moving target from that perspective, and is likely to remain so for awhile longer.

    I do care about my compiler's stability. I expect my compiler to work. Any hint that it does not work worries me deeply. It makes me wonder if any bug I find is a code generation error, or my problem.

    gcc 2.96 (C++) crashed on me, twice. Once was when it was compiling code with errors, which was mildly worrisome, and once while it was compiling perfectly good code, which was very worrisome.

    Above all else, I want a stable compiler I can trust. gcc-2.96 has lost my trust. I want gcc-2.95.2 back, and kgcc is not gcc-2.95.2, it's gcc-2.91.66 (which I bet has much worse C++ support).

  • by Morgaine ( 4316 ) on Monday October 09, 2000 @11:08AM (#720698)
    Half the people seem to want "innovation" (or bleeding at the edges -- what you call it is a matter of opinion) and the other half want "stability" (or lagging behind the times, ditto). It's just not possible to satisfy both factions at the same time.

    Instead of compromising, how about adopting a 3-digit release scheme? I.e. let the label "7.0" be the thing that appears on pretty RH box fronts, but let it be 7.0.X in reality, depending on the date of production. If the X is available as a patch upgrade on the RH website, nobody loses.

    This would satisfy the release-early-and-often brigade, while at the same time it would reduce the software sellers' nightmare of carrying rapidly obsolescing stock, and production houses would be more likely to upgrade to new .0 releases if they knew that in a couple of weeks' time there would be a .0.1 patch to deal with some of the running wounds.
  • by Dionysus ( 12737 ) on Monday October 09, 2000 @11:09AM (#720699) Homepage
    Can you show me the server-side code release for their fancy little Update-Agent?

    The GPL doesn't specify that you have to give the source to anyone who asks for it, only to those you distribute your software to. So if you distribute a piece of software, you distribute the code too.

    As far as we know, the server-side Update-Agent could be GPL, and RedHat would still have no obligation to give out the source code.

  • So in summary RedHat seem to be saying that the 6.2 version is fine, but the 7.0 version enables people to use development sources ... is this going to be clearly labelled on their packaging? I bet not! If I walked into a shop and saw RedHat 6.2 and RedHat 7.0 I'd (fairly reasonably) assume that RedHat 7.0 was the newer, better version. If I've understood the story correctly, RedHat 7.0 might actually pose *more* problems for a newbie!

    Perhaps Redhat should consider labelling them RedHat 6.2 and RedHat Experimental?

  • Hey. We've been waiting for this distro to come out for how long? Seriously, it's a Red Hat point-0 release, it's gonna be buggy. I personally am of the belief that if you can't make a better one yourself, don't whine about someone else's distro.

    Either that, or just be satisfied waiting for 7.1 or so. It's fresh, it's not gonna be perfect. Isn't that what open source is all about? Companies who can release a product that's less than perfect and admit it (ahem, micro$oft, ahem)?

  • by evilphish ( 128599 ) on Monday October 09, 2000 @09:03AM (#720712) Homepage
    Historically, haven't Red Hat's .0 releases been buggy? I installed 7.0 the other day and am pleased to see some small new things like a graphical LILO menu (hey, it's prettier than NT's boot loader menu) and the changed kernel hacking icon in the graphical install (hehe, I know it isn't important). I'll definitely wait before using it for anything important.

    One thing I am disappointed in is the lack of SPARC support in 7.0, but I guess you can't have everything.

    I've noticed that some readers of Slashdot have turned their noses up at Red Hat for various reasons, shouting things like "Debian is better" or "Slackware is superior", but I have to admit that I was weaned off of Windows with Red Hat, and a lot of my skills came from using and tinkering with it. It will always have a place on one of my boxen.
    Sorry about the fragmented comment, but I'm tired.

  • Red Hat's release of "gcc 2.96" will provide wonderful ammunition for those who claim that Open Source software is too unpredictable for prime time.

    What's to prevent some other distro from releasing a snapshot of Apache, or GNOME, or of any other package? Not one d@mned thing, is the answer... and as distros diverge in the packages they include, it becomes harder and harder to develop *one* solution that runs on "Linux" (whatever that happens to be!)

    I'm a developer; I don't have time to be debugging the d@mned compiler when I'm trying to get real work done. I don't want to fight bleeding-edge, beta-level software; my customers need solid solutions, not experiments.

    Red Hat is no longer an option for this coyote...

  • Well, I am a newbie still, and I sure do compile things, so I think you MAY want to reconsider that pretty stupid statement. I'm sure I'm not the only one.


    You're not the only one who thinks they're a newbie when they really aren't. Your average "newbie" has never heard of the word "make", and doesn't know a thing about this strange thing called "compiling". A "newbie" is a representative of the masses. You know, the ones currently responsible for the monopolistic market share Windows has in the PC world? A "newbie" is someone who's just been introduced to the big wide world of Linux, and much like Alice tumbling down the rabbit hole, has no clue where they've wound up, what to do, or how to go about doing it.

    If you know how to compile things, you've moved beyond being a newbie and into the realm of being a "user". Are you a power user? Most likely not, but newbie, my friend, you are most certainly not.
  • This is nearly the same release as RH 6.2 with the updates added. Why not just call it 6.3?

    Because it uses XFree86 4.0 as the default. That's a pretty significant change.

    Since the new kernel is significant as well I'm curious to see if they switch to 8.0 when they add that. :-)

  • by Talonius ( 97106 ) on Monday October 09, 2000 @09:04AM (#720730)
    I don't know what the ruckus is all about. With the release of 7.0 there were more "I hope it's not as buggy as 6.0!" and "I'll wait for the .1 release!" posts than for most other releases, yet people still insist on installing it and then complaining (loudly, rudely)?

    The introduction of gcc 2.96 was justified to me. Perhaps not to you, but I understand their reasoning. The inclusion of kgcc should have been enough; RTFM (a rough sketch of the kgcc route for kernel builds follows this comment).

    I understand that without complaints problems may not get solved, but there are probably better ways to spend your energy -- utilizing Bugzilla and the like. If all you do is cause the RedHat crew to expend energy answering allegations of abuse and bug complaints, then you're taking them away from SOLVING the problems.

    And in either case, it simply goes to prove that you should *ALWAYS* wait for a service pack or release greater than 0.. *grin*

    The LinuxToday article just goes to prove that those of us who post on Slashdot have greater impact than we may realize, for good or ill.

    -- Talonius
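    To make the "RTFM" concrete: the intent, as described, is that 2.2-series kernels on 7.0 get built with kgcc rather than with the 2.96 snapshot. Here is a rough sketch of one way to drive that, assuming the kernel source sits in /usr/src/linux and that overriding CC on the make command line is acceptable for your tree (both are assumptions; editing CC in the top-level Makefile is the other commonly described route):

        # build_with_kgcc.py -- rebuild a 2.2-series kernel with kgcc instead of
        # the gcc 2.96 snapshot. The source path and the CC= override are
        # assumptions for this sketch; adjust for your own tree.
        import os
        import sys

        KERNEL_SRC = "/usr/src/linux"   # assumed location of the kernel source

        def run(cmd):
            print("+ " + cmd)
            if os.system("cd %s && %s" % (KERNEL_SRC, cmd)) != 0:
                sys.exit("failed: " + cmd)

        # Passing CC=kgcc on the command line is meant to override the Makefile's
        # compiler choice so the whole tree builds with the older compiler.
        run("make CC=kgcc dep")
        run("make CC=kgcc bzImage")
        run("make CC=kgcc modules")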
  • Wrong. The things I can think of off the top of my head (and considering I use Debian, I'm sure there are lots more that I can't think of) include XFree86 4, a brand-spanking-new RPM (4.0, IIRC), and a filesystem reorganization to make things FHS-compliant. All of these are major changes, striking at the core of how the distro works and is installed, and as a result they merit a new release. Please take the time to actually find things out before you post.
    ~luge (frustrated that by the time I post, this will have been moderated up as "interesting" instead of down as flamebait)
  • As someone close to the product cycle of a large software product, I can say these guys are going in the right direction.

    Their techies are fighting a losing battle: to include a feature, or not. You see, the product/build/whatever manager has to commit to a delivery date so that the marketing, delivery, and sales efforts can come together at a reasonably coherent time.

    The techies then have to put nose to grindstone to meet the expectations of the product/build/whatever manager and get the features they promised out the door in time -- even if they have an even cooler feature that didn't make the list but that they feel makes the grade.

    By releasing a "2.96" version of GCC, one which GNU would obviously like to see waiting until a more publicly acceptable major release, they're showing they have a structure in place that favours "release early, release often" more than "wait for the proper product". We're seeing here the coming together of all the best ideas of Open Source with the best management control. Alan Cox approves; he's even allowing the inclusion of a patch that hasn't made it into the mainstream kernel...

    Having said that though... I can't wait for version 7.1 :-)

    /prak
    --
    We may be human, but we're still animals.
  • Does this mean that Red Hat 7 is of lower quality than Red Hat 6.2? I think that is what the original poster was getting at. Not (just) that 7.0 is binary incompatible with 6.2. That is to be expected, as you pointed out.
    The term "quality" is extremely subjective. What you consider a high quality distro might be considered a POS by someone else. Not everyone's standard of quality are the same.

    RedHat 7.0 offers more software packages than 6.2. It offers a more up-to-date system base, better package optimizations, more fixed bugs, enhanced and improved tools, etc...

    Imo, these improve the 'quality' of the distribution.

    Now, what's the catch here? The catch is that since this is a new major version number, there are new packages that have not been in previous versions of RedHat. Ever.

    In much the same way that you can compare two different major releases of kernels, you can compare two different major releases of RedHat. Was the 2.2.1 kernel a 'higher quality' kernel than, say, 2.0.34? It depends on who you ask.

    It's a lot like comparing two different kinds of apples. Yes, they're both apples. Do they taste different? Sure. Different texture and qualities? Yup. But are they both apples? Of course.
  • by Tony Shepps ( 333 ) on Monday October 09, 2000 @12:34PM (#720745)
    Can you find a qualified big-5 consultant who would tell clients to install the latest version of HP-UX, two weeks after its release, without running a parallel system and without testing any of the old applications for compatibility -- much less compiling legacy code to test that?

    Can you find one quality hacker who wouldn't put the latest gcc up on their own system, disregarding what it might do until it does it -- and who wouldn't then tell you that knowledge of the fix is just part of what any self-respecting admin should know anyway?

    The upset stomach that /. has had over 7.0 is just fanatical kiddies trying to put down one distro over another. Too bad their wanking is going to hurt the entire Linux community. Red Hat trading at $5/share doesn't mean anything to these jokers, until they graduate and are forced to slave over a Win2003 "console".

    And the value of /. is diminished by such ranting. I agree with Taco's recent chat denying any actual level of responsibility. But that means all the rest of us have to be *very* responsible. *Mostly* moderators and meta-moderators.

    If the community is going to reward anti-Red Hat group-think, or even anti-MS group-think, the community will pay dearly. People, please moderate and meta-moderate well...
    --

  • Okay, this is not a flame or a troll, but here we go: why do you feel the need to upgrade? You can update the packages on each person's machine yourself with a bit of perl scripting. This is what confuses me the most. Our workstations at the office JUST upgraded to 2000 after it's been out for going on a year. Want to know why? I wanted to research and see how many bug reports came out before we moved over. (We have to run MS products here because of Outlook.) Just because it's out there doesn't mean you have to upgrade. I still have a nameserver running 5.2 and I don't feel ANY need to upgrade IT. I keep the kernel up and patch what few packages have bug reports released, but that's it. ANYBODY who puts something untested on a production machine is an idiot and asking for trouble. I learned early on in my corporate career... you DO NOT fuck with production boxen. It's that simple.
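    For what it's worth, the "bit of perl scripting" boils down to pushing updates you have already vetted to each box on your own schedule, rather than reinstalling a new release. A rough sketch of that idea -- written here in Python rather than Perl, with the host names, the update directory, and passwordless ssh/scp access all assumed for illustration:

        # push_updates.py -- freshen already-installed RPMs on a set of machines
        # once the errata have been vetted. Host names, the update directory, and
        # passwordless ssh/scp access are assumptions for this sketch.
        import os
        import sys

        HOSTS = ["ws01", "ws02", "ws03"]        # hypothetical workstation names
        UPDATE_DIR = "/var/spool/updates"       # hypothetical dir of vetted RPMs

        def run(cmd):
            print("+ " + cmd)
            if os.system(cmd) != 0:
                sys.exit("failed: " + cmd)

        for host in HOSTS:
            # Stage the vetted packages, then freshen (-F) so only packages that
            # are already installed get upgraded -- nothing new gets pulled in.
            run("ssh %s 'mkdir -p /tmp/updates'" % host)
            run("scp %s/*.rpm %s:/tmp/updates/" % (UPDATE_DIR, host))
            run("ssh %s 'rpm -Fvh /tmp/updates/*.rpm'" % host)

    Nothing in that loop touches a production nameserver unless you put it in the host list, which is rather the point.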
  • by Cire LePueh ( 26571 ) on Monday October 09, 2000 @09:09AM (#720750)
    Seems that the folks at RH have some very good reasons for what they did. The gcc bit is still a bit bothersome though. They should have communicated better with the GCC dev team, otherwise...

    The thing that gets me is the bit about 2,500 bugs, etc. -- especially the reporting on /. According to Troan: "First of all, the 2,500 number was all of the open bugs in our ticket system for all releases of all products. It includes engineering requests as well as our engineers' internal todo lists for future releases." Although I didn't follow the original slash posting (thought it was pretty low S:N ratio) and haven't checked the bug list personally, if what he says is true then the poster, and in part the /. community, has done a disservice to RH. Sensationalistic journalism is all too prevalent elsewhere; we shouldn't support it here. Unfortunately too many people, especially the other news mongers out there, don't delve very deep into comments (if at all), which just spreads the misinformation.
