
Red Hat's Linux Changes Raise New Questions

itwbennett writes "Last month two Red Hat developers proposed to replace the 30-year-old syslog system with a new Journal daemon. Initial reaction was mostly negative and 'focused on the Journal's use of a binary key-value form of data to log system events,' says blogger Brian Proffitt. But now, says Proffitt, it seems that the proposal to replace syslog has less to do with fixing syslog's problems than with Red Hat's desire to go its own way with Linux infrastructure."
  • by Todd Knarr ( 15451 ) on Thursday December 01, 2011 @02:08PM (#38229710) Homepage

    That's one of the advantages of Linux: RedHat can go their own way without needing the rest of us to buy in, and without really messing things up for us. If they provide a reasonable API, it'll either be compatible with syslog with a simple library substitution or we'll quickly see a wrapper library that allows programs to use either syslog or Journal without needing code changes.
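    A shim like that could even be as small as an LD_PRELOAD interposer. A sketch, assuming a hypothetical library that reimplements syslog(3) on top of the Journal API:

    # run an unmodified daemon, routing its syslog(3) calls to the Journal
    LD_PRELOAD=/usr/lib/libsyslog2journal.so /usr/sbin/crond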

    I think going to binary's a bad idea, myself. The fewer tools you need working to find out what the error is, the easier it is to debug and fix the problem. But let RedHat try this and see how it works, and then we can decide once we've got some real-world data to compare.

    • by LordLimecat ( 1103839 ) on Thursday December 01, 2011 @02:12PM (#38229758)

      There are advantages to not having everything in ASCII text, or else we would never see relational databases used for anything. You are right that we will see. I like plain-text logs because I am still learning the ins and outs of the major Linux breeds, and not having to learn a special tool for every config file and log makes things easier; but I won't say that there couldn't be benefits to a more robust system.

      • by MightyMartian ( 840721 ) on Thursday December 01, 2011 @02:23PM (#38229952) Journal

        No matter your experience, plain-text logs make more sense, especially in *nix operating systems. You have a vast array of tools to search log files with, my favorites being tail and grep. The minute you go to binary logging, your options shrink or you end up having to use additional tools to reconvert it to text (e.g. the Windows event log).

        • by LordLimecat ( 1103839 ) on Thursday December 01, 2011 @02:34PM (#38230116)

          As has been pointed out, there is no reason you couldn't use a new tool to get the output you want out of the database.

          You cannot, for example, convince me that no one is able to script MySQL databases, despite their binary nature.

          • by rsilvergun ( 571051 ) on Thursday December 01, 2011 @02:44PM (#38230266)
            Yeah, but you add a bunch of overhead to get back to text.
          • by DiegoBravo ( 324012 ) on Thursday December 01, 2011 @02:56PM (#38230448) Journal

            Many times with a (semi)broken operating system, you don't have all the usual tools... sometimes your only clue is a syslog-driven console text message.

            • Also if it's broken enough that you can't actually boot, but you can still get access to the files on the disk, then plaintext will be readable by any other OS. If it's a weird binary format, then you have to worry about whether you have the correct viewer installed, if there's even one available.
          • You can script mysql databases, assuming the data files are intact, mysql is intact and runs on the server, and you have functioning tools and the skill to use them. ASCII logfiles will be readable in spite of spot corruption and can be processed with grep / head / tail -- if these primitives don't work it probably doesn't matter what's in the log.
          • tail -f /var/log/messages

            In mysql? How?

            • by bill_mcgonigle ( 4333 ) * on Thursday December 01, 2011 @05:49PM (#38232404) Homepage Journal

              tail -f /var/log/messages

              In mysql? How?

              You missed a requirement: in a form that's still usable when the machine keeps going down hard in the middle of a boot. 'tail messages' still works; there's nothing to get corrupted, and no worrying about a write-ahead log that can't get consistent.

              Not that I spent the day today troubleshooting one of those or anything...

              • by mabhatter654 ( 561290 ) on Thursday December 01, 2011 @08:17PM (#38233546)

                Wow... I forget there are OSes that don't have serial panel output for error codes! Or operator "key" codes to force diagnostic modes?

                Coming from an AS400 background this article shows how silly "normal" system management is. To answer the parent, the system can be issued boot time commands in HARDWARE (which are also available for virtual machines) that will bring the system to a minimal "restricted" console state. That's like a cornerstone of the system and IBM doesn't mess with that.

                Next, the proposal Red Hat has is a very AS400 concept. The History Log (QHST) and the Security Audit Journal (QAUDJRN) are both binary structures that have hard-coded readers built into the kernel. The system maintains the internal integrity of the files with extra fields you never see. Of course, the AS400 native file system is "DATABASE"-based, so any command that reads from these can output to a screen or to something that can be instantly searched with SQL.

                I'm certain that is what they are trying to make here. Ultimately, in security you care about the integrity of the logs more than even recovering the system... especially when you don't have to restart for a year at a time or more. Frankly, they should add an output option for SQLite file types and everybody can be happy.

                Realize that when places like banks use AS400's, they mirror the raw audit journals off to another system, often hundreds a day. Because they are binary journals, they are difficult to tamper with: they interlink with each other. Yet at the same time, because they are a data type, programs can monitor them for specific events automatically, and it's trivial to set up actions to take.

                • by swalve ( 1980968 )
                  Linux can feed errors to serial. My problem with switching to database models for configs and logs is that there is no point to it. It seems like it might be one of those deals where it saves the programmers some time, but costs the users tons of time.
          • by MightyMartian ( 840721 ) on Thursday December 01, 2011 @05:14PM (#38232058) Journal

            MySQL requires the daemon to be running, or at least access to some utility with the MySQL library. If a system has crashed or has reduced functionality due to system problems, a text log that can be scanned with the basic *nix stdio tools is a helluva lot more useful than a binary log.

            I hate the Windows eventlog and binary logs in general precisely because they become rapidly less accessible the more issues a system has, which is quite often why you need to delve into syslog anyways. What exactly is the point of reinventing the wheel?

            • I hate the Windows eventlog and binary logs in general precisely because they become rapidly less accessible the more issues a system has

              Not really; the requirements for reading event logs boil down to one or two DLLs, and MMC.exe. I've been on systems where the whole Windows folder disappeared (due to a wigging-out drive and filesystem), and was still able to read data out of the event log by pulling those one or two files over.

              And whoever said they were using mysql?

        • by epiphani ( 254981 ) <epiphani@dal . n et> on Thursday December 01, 2011 @02:45PM (#38230276)

          Agreed. I submitted this post [gerhards.net] yesterday, written by the lead developer of rsyslogd (the most common syslog daemon on Linux these days). He makes the point that most of the complaints are simply wrong, and would be obviously so if the complainers had bothered to look at the last 10 years of development and IETF work around syslog.

          • by jgrahn ( 181062 ) on Thursday December 01, 2011 @03:55PM (#38231178)

            Agreed. I submitted this post [gerhards.net] yesterday, written by the lead developer of rsyslogd (the most common syslog daemon on Linux these days). He makes the point that most of the complaints are simply wrong, and would be obviously so if the complainers had bothered to look at the last 10 years of development and IETF work around syslog.

            But about this part of what he wrote:

            "Ages ago (2006?) I implemented high-precision timestamps (including TZ info) in rsyslog, and RFC5424 has brought them to the on-the-wire protocol. As far as I know, syslog-ng supports them for quite a while as well (but I am not a syslog-ng expert ;)). However, all distributions turn high precision timestamps off and set the dumb old format as this is a requirement to keep old tools working."

            I enabled high-precision timestamps on my Debian system to get a feel for them. But I had to turn them off again: not readable enough, and they took too much screen space, making more log lines wrap. The tools weren't the problem; I just couldn't eyeball the damned things!
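
            (For anyone who wants to repeat the experiment: on Debian the toggle is a single stock directive in /etc/rsyslog.conf; comment it out to get the high-precision RFC 5424-style timestamps back.)

            # Use traditional timestamp format.
            # To enable high precision timestamps, comment out the following line.
            $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat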

            • by cas2000 ( 148703 ) on Thursday December 01, 2011 @05:46PM (#38232354)

              Yeah, me too. While it's probably better to have the high-precision timestamps, for me it's more useful to have them readable.

              I have the same problem with squid logs - they use Unix time_t with milliseconds for the timestamp. More precise, but less readable. I filter the lines through a small Perl script to reformat the dates when I need to tail or process them:

              #! /usr/bin/perl -p
              # rewrite squid's epoch.millis timestamp into a readable date
              use Date::Format;
              # e.g. "1322779413.527" at the start of a line becomes "2011-12-02/09:43:33"
              s/^\d+\.\d+/time2str("%Y-%m-%d\/%H:%M:%S", $&)/e;

              this is similar to what is mentioned in http://wiki.squid-cache.org/Features/LogFormat [squid-cache.org], but with the improvement (IMO) that the timestamp still only takes one column (compared to localtime() making it take 5 columns), so it doesn't mess up other processing scripts that depend on the detail being in specific columns.

              from this:
              1322779413.527
              to this:
              2011-12-02/09:43:33
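
              Used as a filter, it drops straight into the usual pipelines, e.g. (script name hypothetical):

              tail -f /var/log/squid/access.log | squid-dates.pl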

              It would be annoying to have to do that for syslog logs too. I don't really need millisecond precision for my system logs anyway; near enough is good enough. All I need is accuracy and consistency across multiple systems - and NTP gives me that.

        • Re: (Score:3, Insightful)

          by lucm ( 889690 )

          No matter your experience, plain-text logs make more sense, especially in *nix operating systems. You have a vast array of tools to search log files with, my favorites being tail and grep. The minute you go to binary logging, your options shrink or you end up having to use additional tools to reconvert it to text (e.g. the Windows event log).

          The more complex a system becomes, the more one needs to see events as part of a whole and do some kind of analysis and correlation. This type of work is done more easily with databases. I like grep like everyone else, but if I want a nice rollup of events based on time and source, I will get the info much more easily with a SQL query than with a regex piped into a reporting utility piped into a paging utility.
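
          For instance, against a hypothetical events(ts, host, severity, message) table in SQLite, that rollup is a one-liner:

          # events per host per hour, straight from the shell
          sqlite3 logs.db "SELECT strftime('%Y-%m-%d %H:00', ts) AS hour, host, COUNT(*) FROM events GROUP BY hour, host ORDER BY hour, host;"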

          Also I think one has to adapt to a technology, not try to make it work like what was there befo

          • by ThePhilips ( 752041 ) on Thursday December 01, 2011 @04:34PM (#38231610) Homepage Journal

            The more complex a system becomes, the more one needs to see events as part of a whole and do some kind of analysis and correlation. This type of work is done more easily with databases. I like grep like everyone else, but if I want a nice rollup of events based on time and source, I will get the info much more easily with a SQL query than with a regex piped into a reporting utility piped into a paging utility.

            Typing 'grep <whatever>' is much, much faster than connecting to the DB, typing a query, and realigning rows/columns on screen for readability.

            I have to dig quite often through audit-log-like tables in a DB created by our software, and let me tell you that SQL doesn't make any correlations easy. Especially if we are talking about some production system where you end up self-joining a table with a few dozen million rows (which is what you need to display even something as trivial as the time to the next/previous interesting event).

            Nor are the usual SQL tools any good at displaying the data, as opposed to displaying the SQL itself (e.g. syntax highlighting). On the text side of things, it takes minutes to create a custom syntax for Vim for the problem at hand.

            Why would you want to "reconvert" the Windows event log to text?

            How many 3rd party applications actually use the Windows Event Log? I have seen probably one or two.

            You know why? Because using it is a PITA. I have tried twice, as a software developer, back in the days of NT4 and W2K. (I was hoping to simplify critical error reporting of our Windows applications (including one GUI-less one) and thought to myself, "WEL is just like syslog!" Oh gosh, the Windows API proved me wrong.)

            On Windows there are a lot of built-in capabilities for log exploration in PowerShell or even in VBS/WMI. A toolbox contains many tools, not just grep.

            Oh, so you like all that stuff over something as fool-proof, robust and simple as grep? OK.

            • by alcourt ( 198386 ) on Thursday December 01, 2011 @05:51PM (#38232440)

              For a moment, I thought you were one of the people on our log analysis team that I work with.

              There are places where a database is good. I recommend them heavily, but the original log output shouldn't be a database. If you want a database of logs (appropriate in many cases), send a copy of the logs off-host (which you have to do anyway for PCI and many other audits) and store that copy in a database. That makes it easier to comply with other requirements.

              Best of both worlds. Plain text local to box in case Things Go Wrong, and database for cross-system forensic analysis.

            • by lucm ( 889690 )

              On Windows there are a lot of built-in capabilities for log exploration in PowerShell or even in VBS/WMI. A toolbox contains many tools, not just grep.

              Oh, so you like all that stuff over something as fool-proof, robust and simple as grep? OK.

              I am curious: are you using a fool-proof, robust and simple approach such as print-fax-scan-ocr instead of that crazy thing called email? And when you assemble IKEA furniture, are you sandblasting the screws so they are easier to hammer down?

          • The issue is what you need in different scenarios, and to cover that I can't see anything wrong with writing both to syslog and to a database.

            Why do both? In larger systems the amount of data is difficult to cross-reference and analyse as files, due to the number of sources, the size of the data, the tools to visualise it all, etc. Writing syslog data to centralised syslog services that use database backends to centralise logs and query/report against them is a key tool in these scenarios. It's one of a
          • by alcourt ( 198386 )

            Actually, if you are at that point, you need to centralize your logs or use a proper alerting system beyond traditional logging because you are at the point you need to correlate events across systems. "Alert if this event occurs on system A unless you see this other event on system B or C"

            Putting a database to manage logs on each system causes grief. We tried that once with wtmp. It hasn't worked out well, and we still don't have a clean replacement for wtmp that records logout times. So instead, I'm f

        • by Crudely_Indecent ( 739699 ) on Thursday December 01, 2011 @02:50PM (#38230350) Journal

          What I don't understand is why you can't achieve both log security and log usefulness with the existing tools.

          In a previous job (seems like a different life) I set up all of the servers to utilize remote syslog. The syslog server then offered the log directory as a read-only NFS export to each of the servers.

          It was quick, it was easy, and it was secure. You could view the local logs on individual servers, but you couldn't alter them in any way except by generating log output.
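
          The moving parts were roughly this (hostnames and paths hypothetical):

          # on every server, /etc/rsyslog.conf: ship everything to the loghost (@@ = TCP, @ = UDP)
          *.* @@loghost.example.com:514

          # on the loghost, /etc/exports: offer the collected logs back read-only
          /var/log/hosts 192.168.1.0/24(ro,root_squash)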

        • by DrXym ( 126579 )

          No matter your experience, plain-text logs make more sense, especially in *nix operating systems. You have a vast array of tools to search log files with, my favorites being tail and grep. The minute you go to binary logging, your options shrink or you end up having to use additional tools to reconvert it to text (e.g. the Windows event log).

          Except you already see binary log files on Unix. Log files are frequently compressed with gzip. I don't see a big difference between someone typing (for example) "zcat file.gz | grep somestring" and "redhat-log-cat | grep somestring" assuming that was the name of the tool they used to crap out a logfile from the binary db.

          I'd also note that tools like git are happy to store things in binary objects yet still present stuff in textual format to the user.

          • by hawguy ( 1600213 ) on Thursday December 01, 2011 @04:40PM (#38231682)

            No matter your experience, plain-text logs make more sense, especially in *nix operating systems. You have a vast array of tools to search log files with, my favorites being tail and grep. The minute you go to binary logging, your options shrink or you end up having to use additional tools to reconvert it to text (e.g. the Windows event log).

            Except you already see binary log files on Unix. Log files are frequently compressed with gzip. I don't see a big difference between someone typing (for example) "zcat file.gz | grep somestring" and "redhat-log-cat | grep somestring" assuming that was the name of the tool they used to crap out a logfile from the binary db.

            The difference is that one zcat tool can handle any gzipped file; it doesn't need to know a thing about the data in the file.

            But when developers start using binary data, your redhat-log-cat parser will need to understand how each developer writes their data.

            Some developers may just use ASCII key-value pairs. Some may write out numbers as binary ints and floats. Some may serialize Java objects and write those to the logs.

            With ASCII readable files you can usually (but not always) decipher the data and write a parser for it. Once you've got a stream of binary data, this becomes much harder.

            Which would you rather try to decode:

                    Perfdata: 0 100 123123123 20.1

            or

                    Perfdata=00000000000000640756b5b35468a000

            Of course, with proper discipline, this can be prevented from becoming an indecipherable mess, but the same could be said for the current syslog.

      • by Tomato42 ( 2416694 ) on Thursday December 01, 2011 @03:46PM (#38231092)
        Just yesterday I was debugging why my sysprepped Win 7 images weren't installing properly. So I go to the sysprep directory and start reading the text log. It didn't have any useful info. So I grabbed the binary log and tried to import it on the same machine in the pre-exec environment. It couldn't do it. So I copied it to a different computer and tried to open it there; the system claimed that the file was damaged. After 4 hours of struggling to read, copy, or convert the bloody thing, I went the "Microsoft recommended way" (seriously, that's the solution they suggest in the MSKB) and bisected which program caused the install failure. In "only" 6 reinstalls I finally found the culprit.

        If it was a Linux distro, a simple cat or tail would have sufficed and it would have been a 15-minute job, not 2 days. If they want my text logs they will have to pry them from my cold, dead hands.
      • by Todd Knarr ( 15451 ) on Thursday December 01, 2011 @06:41PM (#38232822) Homepage

        The problem here though is that the whole reason for the logs being plain text is that the time you most need to be able to read the logs is exactly when things are broken, most services won't start because of the breakage, and your special tools may not be working because most of the system just isn't there. With plain text files, if you can boot into the single-user maintenance shell (not even single-user mode, literally running the shell as PID 1) and get the filesystem the logs are on mounted, you can read the logs and see what happened. With a more complicated system you end up in a catch-22 where you need to fix the breakage to get the tools working to find out what you need to fix to get the tools working.
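
        Concretely: boot with init=/bin/sh on the kernel command line and you have that bare shell as PID 1; from there (device name hypothetical):

        mount -o remount,rw /            # root comes up read-only in this mode
        mount /dev/sda3 /var             # whatever partition holds the logs
        tail -n 100 /var/log/messages    # plain text: a shell and coreutils suffice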

        This is, BTW, why /sbin exists separate from /bin. You couldn't always guarantee that libc was OK, so /sbin had statically-linked copies of critical tools that you could use to fix the system after something had trashed the critical system libraries.

    • Because if they don't go their own way, the next time you're trying to compile something a bit more complex it won't screw everything up. I like Red Hat-based distros because they tend to stay more or less the same over the years, leading to easily configurable systems. The moment they deviate it'll be their death.

    • by skids ( 119237 ) on Thursday December 01, 2011 @02:17PM (#38229840) Homepage

      That's one of the advantages of Linux: RedHat can go their own way without needing the rest of us to buy in, and without really messing things up for us.

      Not quite true. If PHB insists on RHEL, you're stuck coping with whatever poor choices they make.

      Why do I get the sense that all the chafing at the "restrictions" of the LSB/linux-instinct/unix-way/common-sense is just the bellyaching that happens when you realize you're short the talent/energy/whatever to progress and start looking for ways to re-arrange the deck chairs?

      • If PHB insists on RHEL, you're stuck coping with whatever poor choices they make.

        And that's RHEL's bread and butter: corporate support & name recognition, always a requirement at businesses with a NEGFFBIBM attitude. Anyone who doesn't need it will just go with CentOS or some other distro. Those who do need it... well, I guess when the troubles become too great they'll have to make a hard decision.

      • by Anonymous Coward on Thursday December 01, 2011 @02:32PM (#38230100)

        You will also be stuck with all the good choices they make.

        Reading what they are proposing, it seems that this is actually a very good idea. When you get out of hobbyist and small environments and into ones with more demanding security-auditing requirements, traditional syslog hasn't cut it for years. The first step in many such environments is usually to rip most of it out and replace it with some more or less proprietary system.

        The new ideas, such as improving the reliability of log shipping, reducing the possibilities for tampering, and improving the chances for more advanced log analysis, are really awesome things - especially for people who are serious about their logging. Syslog and its text format are legacy poison and it will be good to see them die and vanish. Hopefully that happens fast.

        Also, keep in mind that RedHat is still open-sourcing that stuff. They will provide tools and APIs - as they require those themselves.

        • by mlts ( 1038732 ) * on Thursday December 01, 2011 @02:46PM (#38230312)

          Even if the log is in a binary format, it would be nice to have it stored as text as well. For example, on some sensitive machines, I would have the syslog redirected to an IBM 3151 serial terminal for real-time monitoring. This way, I could immediately tell if a job started at its appropriate time, finished, or caused issues.

          IMHO, the best way RedHat should implement this is similar to how AIX does logging. It has its own format for logs that are read using the errpt command. However, one can turn on plain old syslog logging and have that able to be stored in a file, forwarded to a log server, or shipped via a serial connection to a secure log drop that has no network access. It would be nice to have a signed, secure format for logs, but also nice to have plain text to watch in realtime and search from without requiring specialized commands.

          • by LWATCDR ( 28044 ) on Thursday December 01, 2011 @03:38PM (#38231000) Homepage Journal

            Well, yes you can.
            If you read the post from the developer:
            "My application needs traditional text log files on disk, can I configure journald to generate those?
            No, you can’t. If you need this, just run the journal side-by-side with a traditional syslog implementation like rsyslog which can generate this file for you."
            Just run the Journal and rsyslog and you have both systems in place.
            Until journald does everything, odds are it will be run with rsyslog in parallel - a belt-and-suspenders type of solution. If you want it to also generate a traditional syslog, I would think adding that functionality wouldn't be all that hard. If needed, someone will add that feature.
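
            If journald ends up working as described, the pairing should be a one-line affair (a sketch, using the ForwardToSyslog option from systemd's journald.conf):

            # /etc/systemd/journald.conf
            [Journal]
            ForwardToSyslog=yes    # hand every message to rsyslog as well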

            • by Cyberax ( 705495 )

              "My application needs traditional text log files on disk, can I configure journald to generate those?"

              Well, I'm going to write a FUSE filesystem that lets you read Journal files as if they were simple text log files, with features like customizable representation.

        • by alcourt ( 198386 ) on Thursday December 01, 2011 @06:03PM (#38232526)

          Actually, I work in security logging in a very large environment. The last thing we permit is ripping out syslog on generic systems. We do send a copy of the logs to a central system, but we don't allow the client systems to be touched.

          The central copy cannot be tampered with. The local copy is not for security, but stability and immediate usage.

          There is no such thing as a secure local log, and pretending otherwise is shameful. As to reliability, you get to pick between two evils. The possibility of logs not being delivered, or the possibility of logs not being delivered. The more complex the protocol, the more likely it is that a message gets devoured by the system. Simple protocols may not have guaranteed delivery, but their simplicity has actually helped ensure things just Don't Go Wrong.

        • by sjames ( 1099 )

          The most secure logging of all is to send it out to a line printer behind a vault door. Try editing that after you root the machine!

      • by LordLimecat ( 1103839 ) on Thursday December 01, 2011 @02:35PM (#38230132)

        Not quite true. If PHB insists on RHEL, you're stuck coping with whatever poor choices they make.

        Package management: use it. I would be very surprised if RedHat prevented you from installing whatever logging facility you wanted on your server.

    • by esocid ( 946821 )
      Maybe entrenchment has some bearing on it. The article mentions the arguments for switching from syslog, and that syslog can already do some of what's proposed - but nobody does it.

      However, I agree with you. Let RH try it out. It's not like they can't revert if they find that it was a bad choice. If we wanted all distros to be the same, we'd run windows instead.
    • by CAIMLAS ( 41445 ) on Thursday December 01, 2011 @02:26PM (#38230004)

      Looks like they're pulling the same shit Ubuntu pulled with upstart (init replacement). "Let's replace something simple and elegant with something complex, incomplete, and very difficult to fix when it goes wrong".

      Sorry, but no thanks. I can see the need for something else, in a limited/special purpose role, but these assholes are aggregately destroying the very basis of what makes Linux a good, robust server choice:

      * you can use traditional unix tools from ssh to manipulate and analyze the system
      * there are literally thousands of tools for analyzing, manipulating, and storing syslog data
      * init is purely linear, whereas upstart is threaded, increasing the possible ways in which it can fail as well as increasing the difficulty of troubleshooting
      * KISS means broken things are more obvious.
      * KISS means there's less that can go wrong.
      * Most Windows guys don't even read the logs, from what I've seen. This could quite possibly be related to the complexity and lack of utility of Event Viewer itself, granted, but even Event Logs can be exported to syslog...

      While we're at it, why don't we start using XML or SQLite as a replacement for /etc?

      • by Compaqt ( 1758360 ) on Thursday December 01, 2011 @02:36PM (#38230160) Homepage

        I agree in general with "if it's not broken, don't fix it". Witness /. opinion regarding Unity/Gnome changes.

        About Upstart, my lowly sysadmin opinion is this: it seems different from the other stuff Ubuntu's been doing in that, AFAIK, Ubuntu isn't alone here. I think Fedora's going that way too.

        Also, with Upstart I know that if the webserver crashes for some reason, it'll restart without intervention. Yeah, I know, you're not getting to the root of the problem, but it beats being stuck at a top display watching for something to burn.

      • by LordLimecat ( 1103839 ) on Thursday December 01, 2011 @02:38PM (#38230190)

        Looks like they're pulling the same shit Ubuntu pulled with upstart (init replacement). "Let's replace something simple and elegant with something complex, incomplete, and very difficult to fix when it goes wrong".

        One could make that argument about solid-state electronics, the move away from punch-cards, the move from paper-based filing, the move to journaled filesystems, etc.

        Sometimes progress means letting go of the past, and sometimes it takes a while to fully bake; that's why RedHat doing the QA, testing, and development for the rest of us is a good thing. If it sucks, it will die, and no one really has to acknowledge that it ever existed.

        • by Hatta ( 162192 ) on Thursday December 01, 2011 @03:12PM (#38230688) Journal

          If it sucks, it will die.

          On what do you base this assumption? History is littered with sucky technologies that became standard because someone important was pushing it.

          • Because the beauty of RPM and the Linux CLI is that you are 2 commands away from having the sucky technology replaced with the syslog you love. It's really hard for software to become entrenched and hard to get rid of on Linux when it can be managed by a package manager.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      That's one of the advantages of Linux: RedHat can go their own way without needing the rest of us to buy in, and without really messing things up for us. If they provide a reasonable API, it'll either be compatible with syslog with a simple library substitution or we'll quickly see a wrapper library that allows programs to use either syslog or Journal without needing code changes.

      I disagree. In fact, I'd call this a real disadvantage of the FOSS world (bazaar-style development in particular): change comes

    • by Iphtashu Fitz ( 263795 ) on Thursday December 01, 2011 @02:36PM (#38230166)

      RedHat can go their own way without needing the rest of us to buy in

      The only problem with your argument is that Red Hat has a huge base of paying customers, and money talks.

      I manage a small research cluster at a university. It's running Red Hat Linux on over 100 nodes. The university has a site license for Red Hat, so licensing for the cluster isn't an issue. The decision to go with Red Hat had to do mainly with what distros are directly supported by commercial products like Matlab, Mathematica, Abaqus, Maple, Comsol, Ansys, etc. All these vendors sell lots of software & services to universities, research labs, etc., and they all support Red Hat Linux.

      I've personally dealt with support departments when trying to run commercial software on non-RH distros, and in some cases they pretty much tell you you're on your own if you're not using RH or one of the other top two or three distros. Most commercial vendors will only state that they support RedHat, SUSE, and maybe Ubuntu and/or Debian.

      If/when Red Hat comes out with a new way of doing things then customers like us will start pushing on the vendors to support those new ways. After all, we're tied into using Red Hat, and we need their products to run on it. So the commercial software vendors will start supporting the Red Hat way of doing things to appease their customers. And once the commercial vendors start supporting it then it will slowly but surely make its way into other distributions as well so that these apps can run on distros that other people want to use.

    • by mlts ( 1038732 ) *

      I'd disagree. RedHat is one of only two Linux distros that are FIPS and Common Criteria certified. Of course, to people who work with Linux, this doesn't mean much. However, when it comes audit time, the auditors either want to see certifications of the OS, or one better has to have a damn good reason (and not a technical one -- one that will appeal to a bean counter who is looking for any excuse to shut your operation down) why the OS isn't certified. Usually one has to explain in great detail why the OS

  • First post (Score:2, Offtopic)

    WTF!? First post and the linked article is already slashdotted?

  • by Anonymous Coward on Thursday December 01, 2011 @02:08PM (#38229722)

    When everything else is failing... you still need to be able to dig into the syslogs reliably no matter what! One little hiccup and you can easily lose everything in most binary-type implementations, while at worst you see a little garbage in the syslogs!

    • The problem is, they can be easily cleaned. I see where they are coming from with this, to be honest.

      Instead of going binary they could just use a database system, with queries and whatnot. Each new application would bring a new table.

      This would deal with two things: with proper access configuration it'd be safe from tampering, and it'd be easy to dig through. The amount of garbage that ends up in the logs is mind-numbing. (Yeah, yeah, I know grep works, and I still think this would be a better solution.)
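
      As a sketch of what that could look like (schema entirely hypothetical), each application gets its own table and stock sqlite3 does the digging:

      sqlite3 /var/log/events.db "SELECT ts, severity, msg FROM sshd WHERE severity <= 3 ORDER BY ts DESC LIMIT 20;"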

  • by tomhudson ( 43916 ) <barbara@hudson.barbara-hudson@com> on Thursday December 01, 2011 @02:09PM (#38229724) Journal

    Keep on fragmenting each distro ... at a certain point, people will just get tired of distro-hopping and dump the whole mess.

    And people ask when the Year of the Linux Desktop will be. It's things like this, and the constant breakage because of change for the sake of change or to "be different", rather than focusing on stability, that drive people to non-free vendors.

    • Not to mention everybody who is running Red Hat in production, who will probably be forced to accept the new features if they like security updates.

      • Have you ever used RHEL / CentOS? I highly doubt they will push out a completely different logging system as a normal update in an existing version.
    • by LordLimecat ( 1103839 ) on Thursday December 01, 2011 @02:14PM (#38229794)

      Keep on fragmenting each distro

      The whole point of a distro is that it is DIFFERENT from the others around it, not that it is similar. They all have their strengths and weaknesses, and the various things they try can be pulled into other projects.

      For instance, Canonical has been talking about rolling Wayland in as a replacement for X in Ubuntu. It might be a phenomenal failure, or it might be incredibly successful. If it works well, I'm sure RedHat, CentOS, Debian, etc. will all pull it in as well, and some bit of progress will have been made. If it sucks and dies, well, that too is progress.

    • by guruevi ( 827432 )

      You can also simply substitute your own binaries on the same platform. Just because RH ships syslog by default doesn't mean you can't install the Journal, and vice versa. You can even have them running concurrently (where necessary). Most people and even sysadmins don't care what is running the logs as long as it's readable, unless they're running a dedicated syslog server where any such changes will be monitored.

      The standard daemon for mail used to be Sendmail, later Postfix. Yet people still run either out of necess

    • Hey before Ubuntu went off the deep end it really seemed like the Year of the Linux Desktop was this close!

    • by mlts ( 1038732 ) *

      Believe it or not, this is one selling point for AIX. IBM touts that if it runs in an earlier version of AIX, it almost certainly will run on AIX 7. In fact, IBM actually has a binary compatibility guarantee that anything running on AIX 5.x or 6.x will run on the latest version.

      Of course, change is important, because an OS that ends up stagnant will eventually fall to the wayside, but in a production environment, people want to have to change as little as possible during an upgrade cycle.

  • Error prevention? (Score:4, Insightful)

    by esocid ( 946821 ) on Thursday December 01, 2011 @02:10PM (#38229738) Journal
    Not that it bothers me, but in forums people are quick to point out that they think Fedora's choice of kernel numbering is stupid. I mention I'm on 2.6.41.1-1.fc15.x86_64, and the first response is, "that kernel doesn't exist." (And yes, Fedora will move to the standard numbering scheme with 17 if I'm not mistaken)
    I've found most of RH's decisions to do things their own way are meant to prevent problems down the road. Same with the kernel numbering: it was supposedly to prevent repo errors. I don't know for certain, but I'd expect this to also be the case here.
    • by broken_chaos ( 1188549 ) on Thursday December 01, 2011 @02:29PM (#38230066)

      I can understand the kernel numbering issue -- the 3.0+ kernels are functionally unchanged from the late 2.6 series, but the version number change did break some userspace tools/scripts that (in a poor, but understandable, decision) relied upon the kernel version, often as being in the 2.6 series. Of course, this was a holdover from the old huge swap from 2.4 to 2.6, where almost everything changed.

      With Linus suddenly deciding, "Hey, this version is now 3.0, even though there are no compatibility-breaking changes from the last 2.6 kernel.", some of those userspace tools/scripts broke in unexpected ways with version checks, but didn't actually break in effect. So re-numbering the kernel to stave off needing to immediately fix them without much warning was a fairly reasonable thing to do, for the short term.

      Overall, I agree with the decision to move to the 3.0 version numbering, though a bit more warning may have helped. Considering just how much the kernel development cycle changed from the 2.4/2.6 transition (namely in there never being a 'full' development branch), making a very clear, albeit arbitrarily timed, version number swap seems sensible to me.

      • Overall, I agree with the decision to move to the 3.0 version numbering, though a bit more warning may have helped.

        "A bit more warning" is why we're still on IPv4 (though im grossly over-simplifying).

  • by Animats ( 122034 ) on Thursday December 01, 2011 @02:15PM (#38229806) Homepage

    This is just whining by some guy who wrote a log analyzer that will no longer be necessary.

    QNX has had a simple structured log daemon [qnx.com] for years. Reading their log never tails off into junk; you always get a clean, current last record. Their solution even works on diskless systems. In many real-time applications, logs are transmitted to some remote location, rather than being kept on each local machine.

    • Excuse me, it's true their logging facility might be good, but I was under the impression that the junk you see is the junk applications send, so I have no clue how QNX deals with that.

      And sending them to a remote location is already possible in Linux.

      Never tried to use linux in a diskless system though, so I have no idea how syslog would react.

    • by CAIMLAS ( 41445 )

      I don't see you describing anything which isn't possible (and common) with the syslog format. Most environments have remote syslog servers, you can filter events by event type, etc. and you can get a "clean, current last record" in a number of ways quite trivially (tac or last -f, pick your poison).

      Changing from syslog format will cause all sorts of problems. There are literally thousands of syslog analyzers in use, custom scripts, and people who use common utilities (such as the above, plus grep/sed/awk/et

  • Is he not aware? (Score:3, Insightful)

    by C_Kode ( 102755 ) on Thursday December 01, 2011 @02:20PM (#38229902) Journal

    Is he not aware how terrible syslog is? syslog is ancient and has several serious flaws, from security to just plain stupid limitations. It should have been replaced ages ago.

    • Re:Is he not aware? (Score:4, Informative)

      by Nos. ( 179609 ) <andrew@nOSPAm.thekerrs.ca> on Thursday December 01, 2011 @02:29PM (#38230062) Homepage

      syslog the application or syslog the protocol? syslog the application? Yes, it's past due, and things like rsyslog are much better.

      syslog the protocol is fine.

      The problem with this proposed replacement is that it does not fix anything. The only advantage it gives is to be able to tell if the logs were altered. That's it. You're far better off with a secondary/centralized logging system. Store your logs in text, compressed, encrypted, in a database, it doesn't matter. Just get them to a different location and then not only can you tell that the originals were altered, you can tell what was removed. All while using existing tools.
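
      The "tell what was removed" part really is just existing tools (hostnames and paths hypothetical):

      # compare the central copy against the local log; any output is
      # exactly the lines that were altered or removed on the local box
      ssh loghost cat /srv/syslog/web01/messages | diff - /var/log/messages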

  • by bored ( 40072 ) on Thursday December 01, 2011 @09:03PM (#38233862)

    I didn't see anyone else mention this, but on Windows and AIX, one of the reasons for using a binary log format is internationalization. Log messages are little more than an application/facility ID, a log ID, and parameters. When the user displays the message, the IDs are looked up in a localization table and formatted according to the attached parameters.
