Red Hat's Linux Changes Raise New Questions 433

itwbennett writes "Last month two Red Hat developers proposed to replace the 30-year-old syslog system with a new Journal daemon. Initial reaction was mostly negative and 'focused on the Journal's use of a binary key-value form of data to log system events,' says blogger Brian Proffitt. But now, says Proffitt, it seems that the proposal to replace syslog has less to do with fixing syslog's problems than with Red Hat's desire to go its own way with Linux infrastructure."
  • by Todd Knarr ( 15451 ) on Thursday December 01, 2011 @03:08PM (#38229710) Homepage

    That's one of the advantages of Linux: RedHat can go their own way without needing the rest of us to buy in, and without really messing things up for us. If they provide a reasonable API, it'll either be compatible with syslog with a simple library substitution or we'll quickly see a wrapper library that allows programs to use either syslog or Journal without needing code changes.

    I think going to binary's a bad idea, myself. The fewer tools you need working to find out what the error is, the easier it is to debug and fix the problem. But let RedHat try this and see how it works, and then we can decide once we've got some real-world data to compare.

  • by Anonymous Coward on Thursday December 01, 2011 @03:08PM (#38229722)

    When everything else is failing ... you still need to be able to dig into the syslogs reliably no matter what! One little hiccup and you can easily lose everything in most binary-type implementations, while at worst you see a little garbage in the syslogs!

  • Error prevention? (Score:4, Insightful)

    by esocid ( 946821 ) on Thursday December 01, 2011 @03:10PM (#38229738) Journal
    Not that it bothers me, but in forums people are quick to point out that they think Fedora's choice of kernel numbering is stupid. I mention the kernel version I'm on, and the first response is, "that kernel doesn't exist." (And yes, Fedora will move to the standard numbering scheme with 17, if I'm not mistaken.)
    I've found that most of RH's decisions to do something their own way are meant to prevent problems down the road. Same with the kernel numbering: it was supposedly to prevent repo errors. I don't know for certain, but I'd expect that to also be the case here.
  • by LordLimecat ( 1103839 ) on Thursday December 01, 2011 @03:14PM (#38229794)

    Keep on fragmenting each distro

    The whole point of a distro is that it is DIFFERENT from the others around it, not that it is similar. They all have their strengths and weaknesses, and the various things they try can be pulled into other projects.

    For instance, Canonical has been talking about rolling Wayland in as a replacement for X in Ubuntu. It might be a phenomenal failure, or it might be incredibly successful. If it works well, I'm sure RedHat, CentOS, Debian, etc. will all pull it in as well, and some bit of progress will have been made. If it sucks and dies, well, that too is progress.

  • by Animats ( 122034 ) on Thursday December 01, 2011 @03:15PM (#38229806) Homepage

    This is just whining by some guy who wrote a log analyzer that will no longer be necessary.

    QNX has had a simple structured log daemon for years. Reading their log never tails off into junk; you always get a clean, current last record. Their solution even works on diskless systems. In many real-time applications, logs are transmitted to some remote location, rather than being kept on each local machine.

  • by skids ( 119237 ) on Thursday December 01, 2011 @03:17PM (#38229840) Homepage

    That's one of the advantages of Linux: RedHat can go their own way without needing the rest of us to buy in, and without really messing things up for us.

    Not quite true. If PHB insists on RHEL, you're stuck coping with whatever poor choices they make.

    Why do I get the sense that all the chafing at the "restrictions" of the LSB/linux-instinct/unix-way/common-sense is just the bellyaching that happens when you realize you're short the talent/energy/whatever to progress and start looking for ways to re-arrange the deck chairs?

  • Is he not aware? (Score:3, Insightful)

    by C_Kode ( 102755 ) on Thursday December 01, 2011 @03:20PM (#38229902) Journal

    Is he not aware how terrible syslog is? syslog is ancient and has several serious flaws, from security holes to just plain stupid limitations. It should have been replaced ages ago.

  • by MightyMartian ( 840721 ) on Thursday December 01, 2011 @03:23PM (#38229952) Journal

    No matter your experience, plain-text logs make more sense, especially in *nix operating systems. You have a vast array of tools to search log files with; my favorites being tail and grep. The minute you go to binary logging your options shrink or you end up having to use additional tools to reconvert it to text (ie. the Windows event log).

  • by GameboyRMH ( 1153867 ) <gameboyrmh@gmai l . c om> on Thursday December 01, 2011 @03:25PM (#38229986) Journal

    Or just use a network log server, which is both better from a security standpoint and lets you keep your plaintext logs.

  • by CAIMLAS ( 41445 ) on Thursday December 01, 2011 @03:26PM (#38230004) Homepage

    Looks like they're pulling the same shit Ubuntu pulled with upstart (init replacement). "Let's replace something simple and elegant with something complex, incomplete, and very difficult to fix when it goes wrong".

    Sorry, but no thanks. I can see the need for something else, in a limited/special purpose role, but these assholes are aggregately destroying the very basis of what makes Linux a good, robust server choice:

    * you can use traditional unix tools from ssh to manipulate and analyze the system
    * there are literally thousands of tools for analyzing, manipulating, and storing syslog data
    * init is purely linear, whereas upstart is threaded, increasing the possible ways in which it can fail as well as increasing the difficulty of troubleshooting
    * KISS means broken things are more obvious.
    * KISS means there's less that can go wrong.
    * Most Windows guys don't even read the logs, from what I've seen. This could quite possibly be related to the complexity and lack of utility of Event Viewer itself, granted, but even Event Logs can be exported to syslog...

    While we're at it, why don't we start using XML or sqlite as a replacement for /etc.

  • by broken_chaos ( 1188549 ) on Thursday December 01, 2011 @03:29PM (#38230066)

    I can understand the kernel numbering issue -- the 3.0+ kernels are functionally unchanged from the late 2.6 series, but the version number change did break some userspace tools/scripts that (in a poor, but understandable, decision) relied upon the kernel version, often as being in the 2.6 series. Of course, this was a holdover from the old huge swap from 2.4 to 2.6, where almost everything changed.

    With Linus suddenly deciding, "Hey, this version is now 3.0, even though there are no compatibility-breaking changes from the last 2.6 kernel.", some of those userspace tools/scripts broke in unexpected ways with version checks, but didn't actually break in effect. So re-numbering the kernel to stave off needing to immediately fix them without much warning was a fairly reasonable thing to do, for the short term.

    Overall, I agree with the decision to move to the 3.0 version numbering, though a bit more warning may have helped. Considering just how much the kernel development cycle changed from the 2.4/2.6 transition (namely in there never being a 'full' development branch), making a very clear, albeit arbitrarily timed, version number swap seems sensible to me.

  • by Anonymous Coward on Thursday December 01, 2011 @03:32PM (#38230100)

    You will also be stuck with all the good choices they make.

    Reading what they are proposing, it seems it's actually a very good idea. Once you get out of hobbyist and small environments and into environments with more demanding security-auditing requirements, traditional syslog hasn't cut it for years. The first step in many such environments is usually to rip it mostly out and replace it with some more or less proprietary alternative.

    The new ideas such as improving the reliability of log shipping, reducing possibilities towards tampering, and improving chances for more advanced log analysis are really awesome things - especially for people who are serious about their logging. Syslog and its text format are legacy poison and it will be good to see them die and vanish. Hopefully that happens fast.

    Also, keep in mind that Red Hat is still open sourcing that stuff. They will provide tools and APIs - as they require those themselves.

  • by LordLimecat ( 1103839 ) on Thursday December 01, 2011 @03:34PM (#38230116)

    As has been pointed out, there is no reason you couldnt use a new tool to get the output you want out of the database.

    You cannot, for example, convince me that no one is able to script MySQL databases, despite their binary nature.

  • by Anonymous Coward on Thursday December 01, 2011 @03:34PM (#38230120)

    That's one of the advantages of Linux: RedHat can go their own way without needing the rest of us to buy in, and without really messing things up for us. If they provide a reasonable API, it'll either be compatible with syslog with a simple library substitution or we'll quickly see a wrapper library that allows programs to use either syslog or Journal without needing code changes.

    I disagree. In fact, I'd call this a real disadvantage of the FOSS world (bazaar-style development in particular): change comes so slowly because no one else will commit to it. I think it's hard to really take advantage of changes if others just slap an abstraction layer on top of them and otherwise ignore them. I haven't studied this specific example in detail, but off the top of my head:

    • software written by people with that attitude won't take advantage of the key/value pair format because syslog doesn't have that
    • likewise nothing will switch from a proprietary format to the new common infrastructure
    • log analysis software probably won't do anything too interesting with the output files because god forbid someone be left behind

    and so this won't be as useful as it would be otherwise.

    To take another example I have paid a little more attention to, init was around for far too long, and yet I'm amazed init->upstart happened at all. Even when something is clearly superior, if it takes buy-in from people who want their software to work on the greatest possible range of systems, it's hard to make that happen. init->upstart probably worked because the startup is a relatively unobtrusive piece of the software, relatively easily replaced by distributors.

    In contrast, proprietary or cathedral-style development of a whole system does much better at this. They have the sort of unity of purpose where they can agree that something is valuable and exploit it through the whole system. I've seen this at work. Maybe the best example is something that hasn't happened in the Linux world, though: a widely used high-level language. The GNOME people in particular won't commit to anything but plain C, with their painful object model on top of it.

  • by LordLimecat ( 1103839 ) on Thursday December 01, 2011 @03:35PM (#38230132)

    Not quite true. If PHB insists on RHEL, you're stuck coping with whatever poor choices they make.

    Package management: use it. I would be very surprised if RedHat prevented you from installing whatever logging facility you wanted on your server.

  • by assertation ( 1255714 ) on Thursday December 01, 2011 @03:36PM (#38230152)

    It seems like every time a distro tries to innovate they get a lot of screaming from the linux community.

    There is this change, the screaming about Ubuntu going with Unity, screaming with every change GNOME makes.

    Is FOSS really about innovation, or just mouthing the words?

  • by Iphtashu Fitz ( 263795 ) on Thursday December 01, 2011 @03:36PM (#38230166)

    RedHat can go their own way without needing the rest of us to buy in

    The only problem with your argument is that Red Hat has a huge base of paying customers, and money talks.

    I manage a small research cluster at a university. It's running Red Hat linux on over 100 nodes. The university has a site license for Red Hat so licensing for the cluster isn't an issue. The decision to go with Red Hat had to do mainly with what distros are directly supported by commercial products like Matlab, Mathematica, Abaqus, Maple, Comsol, Ansys, etc. All these vendors sell lots of software & services to universities, research labs, etc. and they all support Red Hat linux.

    I've personally dealt with support departments when trying to run commercial software on non-RH distros, and in some cases they pretty much tell you you're on your own if you're not using RH or one of the other top two or three distros. Most commercial vendors will only state that they support RedHat, SUSE, and maybe Ubuntu and/or Debian.

    If/when Red Hat comes out with a new way of doing things then customers like us will start pushing on the vendors to support those new ways. After all, we're tied into using Red Hat, and we need their products to run on it. So the commercial software vendors will start supporting the Red Hat way of doing things to appease their customers. And once the commercial vendors start supporting it then it will slowly but surely make its way into other distributions as well so that these apps can run on distros that other people want to use.

  • Good (Score:2, Insightful)

    by magamiako1 ( 1026318 ) on Thursday December 01, 2011 @03:40PM (#38230210)
    It's a good move. Parsing syslog sucks. And I don't care how awesome you think you are as a developer--you need to use the system logging facilities to make it easier on those of us who administer systems.

    At the very least a unified format similar to Microsoft's format would be nice.

    ID / DATE-Time / Severity / BLOB OF TEXT
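    A minimal sketch of parsing such a unified line; the tab separator and exact field layout below are my own guesses for illustration, not anything Red Hat or Microsoft has specified:

```python
# Parse a hypothetical unified log line: ID, date-time, severity, text.
# The tab-separated layout is an assumption for this sketch.
import re

LINE = re.compile(r"^(\d+)\t(\S+ \S+)\t(\w+)\t(.*)$")

def parse(line):
    m = LINE.match(line)
    if not m:
        raise ValueError("malformed log line: %r" % line)
    event_id, when, severity, text = m.groups()
    return {"id": int(event_id), "time": when,
            "severity": severity, "text": text}

rec = parse("4624\t2011-12-01 15:08:00\tINFO\tsession opened for user root")
print(rec["severity"])   # severity is a real field, not a guess from grep
```

    The win over free-form syslog lines is that severity and ID are first-class fields instead of conventions each daemon invents for itself.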
  • by rsilvergun ( 571051 ) on Thursday December 01, 2011 @03:44PM (#38230266)
    Yeah, but you add a bunch of overhead to get back to text.
  • by lucm ( 889690 ) on Thursday December 01, 2011 @03:46PM (#38230298)

    No matter your experience, plain-text logs make more sense, especially in *nix operating systems. You have a vast array of tools to search log files with; my favorites being tail and grep. The minute you go to binary logging your options shrink or you end up having to use additional tools to reconvert it to text (ie. the Windows event log).

    The more a system becomes complex, the more one needs to see events as part of a whole and do some kind of analysis and correlation. This type of work is done more easily with databases. I like grep like everyone, but if I want to have a nice rollup of events based on time and source, I will get the info much more easily with a SQL query than with a regex piped into a reporting utility piped into a paging utility.

    Also I think one has to adapt to a technology, not try to make it work like what was there before (unless he is a one-trick pony). Why would you want to "reconvert" the Windows event log to text? On Windows there is a lot of built-in capabilities for log exploring in Powershell or even in VBS/WMI. A toolbox contains many tools, not just grep.
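    The kind of rollup a database buys you can be sketched with nothing more than SQLite; the schema and sample events below are invented for illustration:

```python
# Roll up log events by hour and source with one SQL query,
# instead of a grep | awk | sort pipeline. Schema and data invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (ts TEXT, source TEXT, message TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [
        ("2011-12-01 03:10:00", "sshd", "Failed password"),
        ("2011-12-01 03:12:30", "sshd", "Failed password"),
        ("2011-12-01 03:15:00", "cron", "job started"),
        ("2011-12-01 04:01:00", "sshd", "Accepted publickey"),
    ],
)

# substr(ts, 1, 13) truncates "2011-12-01 03:10:00" to the hour.
rows = conn.execute(
    """SELECT substr(ts, 1, 13) AS hour, source, COUNT(*)
       FROM events GROUP BY hour, source ORDER BY hour, source"""
).fetchall()
for hour, source, n in rows:
    print(hour, source, n)
```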

  • by mlts ( 1038732 ) * on Thursday December 01, 2011 @03:46PM (#38230312)

    Even if the syslog is in a binary format, it would be nice to have it stored as text as well. For example, on some sensitive machines, I would have the syslog redirect to an IBM3151 serial terminal for real-time monitoring. This way, I could immediately tell if a job started at its appropriate time, finished, or caused issues.

    IMHO, the best way RedHat should implement this is similar to how AIX does logging. It has its own format for logs that are read using the errpt command. However, one can turn on plain old syslog logging and have that able to be stored in a file, forwarded to a log server, or shipped via a serial connection to a secure log drop that has no network access. It would be nice to have a signed, secure format for logs, but also nice to have plain text to watch in realtime and search from without requiring specialized commands.

  • by DiegoBravo ( 324012 ) on Thursday December 01, 2011 @03:56PM (#38230448) Journal

    Many times with a (semi)broken operating system, you don't have all the usual tools.... sometimes your only clue is a syslog driven console text message.

  • by Hatta ( 162192 ) on Thursday December 01, 2011 @04:12PM (#38230688) Journal

    If it sucks, it will die.

    On what do you base this assumption? History is littered with sucky technologies that became standard because someone important was pushing it.

  • by jd ( 1658 ) <imipak AT yahoo DOT com> on Thursday December 01, 2011 @04:29PM (#38230894) Homepage Journal

    Innovation is fine. Invention is better, but if you can't have that then innovation makes a decent replacement. However, Unity isn't really inventive or innovative, and attempting to force someone to use one DWM is definitely a regression.

    You are confusing change/novelty with creativity. They're not the same.

    And, yes, there SHOULD be push-back. Once it goes past the early adopters, it will make its way to the Real World(tm) where the REAL critics hold multi-million dollar contracts in one hand and a fine sherry in the other. Those critics know nothing about the value of technology, but they know the price of everything, especially that of technology. You WANT the flaws ironed-out before then. You WANT to have put the software not just through the reliability and quality tests but also through the user acceptability tests and the PR tests. You WANT well-tempered systems, honed to damn-well near perfection.

    Because, in the end, without those multi-million dollar contracts, the Ubuntus and the Red Hats of the world simply aren't going to bother. There won't be any development at all if we lose the big players at this stage. Linux isn't a garage development project any more, or hadn't you seen the kernel contribution stats on LWN? We NEED the corporations to want to invest not just the time and money they're spending now but more of it. And we won't get that without the PHBs.

    Do the PHBs care about Unity or loggers? Directly, no. They care about image and if the unwashed masses turn away from Linux, that's bad image. If there's a security flaw, that's major bad image. If it costs more for the developers to do the same amount of work because of added inefficiencies, especially when the shareholders are baying at the door, that's lethal image. Doesn't matter if Windows would be worse, PHBs won't think like that. Linux is a gamble and it HAS to pay and pay big.

  • by jmitchel! ( 254506 ) on Thursday December 01, 2011 @04:45PM (#38231086)
    The wise professional is intensely mistrustful of innovation. Innovation introduces new failure modes and deprecates tested methods and experience. Innovation is always born half-baked, unreliable and unsupported. By the time all the problems are solved with a given innovation, its design will typically prove to be just as compromised and unsexy as its predecessor - and then the cycle of "innovation" repeats. Some innovations have sufficient value to overcome their costs - many don't.

    And who said that OSS was about innovation?
  • by Tomato42 ( 2416694 ) on Thursday December 01, 2011 @04:50PM (#38231128)
    If your system needs special tools to see the correlations in errors then your system is way past its usefulness and should be re-designed with greater modularity in mind.

    As for Windows logging, sorry to burst your bubble, but it fails when it's most needed; see my previous post.
  • by TheNinjaroach ( 878876 ) on Thursday December 01, 2011 @05:00PM (#38231236)
    I think the move to binary storage for syslog files could be great for efficiency all the way around. A very simple CLI tool that dumps the ASCII syslog equivalent would make for a very nice transition piece.

    You could continue using your existing syslog-based tools to monitor / alert / debug / whatever without having to change much at all. As an added bonus, the tool could accept optional search & filter parameters that are applied to the binary form before dumping ASCII output. That would save the CPU a bit of time to grep through thousands of lines of unrelated logs just to report on the one or two system services that you want to monitor.
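    A sketch of what such a transition tool could look like. The length-prefixed record format below is entirely made up (nothing like the actual Journal layout); the point is that the filter is applied to the binary form before any text is produced:

```python
# Hypothetical binary log reader that filters records while still in
# binary form, then emits the plain-text syslog equivalent.
import struct, io

def write_record(buf, service, message):
    s, m = service.encode(), message.encode()
    # Invented record layout: two 16-bit lengths, then the two strings.
    buf.write(struct.pack("<HH", len(s), len(m)) + s + m)

def dump_ascii(buf, service_filter=None):
    """Return 'service: message' lines, skipping non-matching records
    before their message text is ever decoded."""
    out = []
    buf.seek(0)
    while True:
        header = buf.read(4)
        if len(header) < 4:
            break
        slen, mlen = struct.unpack("<HH", header)
        service = buf.read(slen).decode()
        if service_filter and service != service_filter:
            buf.seek(mlen, io.SEEK_CUR)   # cheap skip, no text decoding
            continue
        out.append("%s: %s" % (service, buf.read(mlen).decode()))
    return out

log = io.BytesIO()
write_record(log, "sshd", "Accepted publickey for root")
write_record(log, "cron", "(root) CMD (run-parts)")
write_record(log, "sshd", "Connection closed")

print(dump_ascii(log, service_filter="sshd"))
```

    Piped into grep-based tooling, the output of such a dumper would be indistinguishable from a flat syslog file.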
  • by hawguy ( 1600213 ) on Thursday December 01, 2011 @05:40PM (#38231682)

    No matter your experience, plain-text logs make more sense, especially in *nix operating systems. You have a vast array of tools to search log files with; my favorites being tail and grep. The minute you go to binary logging your options shrink or you end up having to use additional tools to reconvert it to text (ie. the Windows event log).

    Except you already see binary log files on Unix. Log files are frequently compressed with gzip. I don't see a big difference between someone typing (for example) "zcat file.gz | grep somestring" and "redhat-log-cat | grep somestring" assuming that was the name of the tool they used to crap out a logfile from the binary db.

    The difference is that one zcat tool can handle any gzipped file, it doesn't need to know a thing about the data in the file.

    But when developers start using binary data, your redhat-log-cat parser will need to understand how each developer writes their data.

    Some developers may just use ascii key-value pairs. Some may write out numbers as binary ints and floats. Some may serialize java objects and write those to the logs.

    With ASCII readable files you can usually (but not always) decipher the data and write a parser for it. Once you've got a stream of binary data, this becomes much harder.

    Which would you rather try to decode:

            Perfdata: 0 100 123123123 20.1



    Of course, with proper discipline, this can be prevented from becoming an indecipherable mess, but the same could be said for the current syslog.
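    The contrast can be made concrete: the same four perfdata values as a readable line and as packed binary. The struct layout below is invented, which is exactly the problem; without knowing it, the bytes are undecodable:

```python
# Same four values, text vs. packed binary. The "<iiif" layout
# (three little-endian ints plus a float) is an arbitrary assumption.
import struct

values = (0, 100, 123123123, 20.1)

text_form = "Perfdata: %d %d %d %s" % values
binary_form = struct.pack("<iiif", *values)

print(text_form)            # self-describing enough to guess a parser
print(binary_form.hex())    # opaque without the exact struct layout

# Decoding works only if you already know "<iiif":
assert struct.unpack("<iiif", binary_form)[:3] == (0, 100, 123123123)
```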

  • by MightyMartian ( 840721 ) on Thursday December 01, 2011 @06:16PM (#38232082) Journal

    So now on top of a crippled system, you've got to move the logs over to a system so you can read them? This is exactly what you're faced with when a Windows system takes a dive, and it sucks.

  • by rathaven ( 1253420 ) on Thursday December 01, 2011 @06:35PM (#38232250)
    The issue is what you need in different scenarios, and to meet that I can't see anything wrong with doing both: writing to syslog and to a database.

    Why do both? In larger systems the amount of data is difficult to cross-reference and analyse as files, due to the number of sources, the size of the data, the tools needed to visualise it all, etc. Centralised syslog services that use database backends to collect logs and query/report against them are a key tool in these scenarios. It's one of a number of interfaces you have for analysing what is taking place on your systems.

    However, I'd rather use the simplest method of getting log information out of a system if I'm going to use it for debugging an odd situation. There are situations where a database write or a remote write might fail and cause no debug information to be written at all. I'd rather have a simple local logging system.
  • by alcourt ( 198386 ) on Thursday December 01, 2011 @06:51PM (#38232440)

    For a moment, I thought you were one of the people on our log analysis team that I work with.

    There are places where a database is good. I recommend them heavily, but the original log output shouldn't be database. If you want a database of logs (appropriate in many cases), send a copy of the logs off host (which you have to do anyway for PCI and many other audits) and store that copy in a database. That makes it easier to comply with other requirements.

    Best of both worlds. Plain text local to box in case Things Go Wrong, and database for cross-system forensic analysis.
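    A toy sketch of that split, assuming classic "host service: message" lines: the flat files stay the primary record on each box, and a copy of each line is loaded into a database for cross-system queries:

```python
# Plain text stays the local record; a parsed copy goes into SQLite
# for cross-system forensics. Line format is an assumption.
import sqlite3

local_log = [
    "web01 sshd: Failed password for invalid user admin",
    "web02 sshd: Failed password for invalid user admin",
    "web01 cron: (root) CMD (logrotate)",
]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE logs (host TEXT, service TEXT, message TEXT)")
for line in local_log:
    host, rest = line.split(" ", 1)
    service, message = rest.split(": ", 1)
    db.execute("INSERT INTO logs VALUES (?, ?, ?)", (host, service, message))

# The query flat files make painful: which hosts saw the same attack?
hosts = [h for (h,) in db.execute(
    "SELECT DISTINCT host FROM logs "
    "WHERE message LIKE 'Failed password%' ORDER BY host")]
print(hosts)   # the text files on each box remain untouched
```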

  • by LordLimecat ( 1103839 ) on Thursday December 01, 2011 @08:17PM (#38233082)

    Um, no, I'm assuming your bin folder hasn't been corrupted, right? Is there any reason to assume that 'cat', 'tail', and 'grep' would function and 'readJournalLog' would not? Are the cat binaries more robust or something?

    Or are you just making assumptions about what would be required to read the new journal?

  • by mabhatter654 ( 561290 ) on Thursday December 01, 2011 @09:17PM (#38233546)

    Wow.. I forget there are OSes that don't have serial panel output for error codes! Or Operator "key" codes to force diagnostic modes?

    Coming from an AS400 background this article shows how silly "normal" system management is. To answer the parent, the system can be issued boot time commands in HARDWARE (which are also available for virtual machines) that will bring the system to a minimal "restricted" console state. That's like a cornerstone of the system and IBM doesn't mess with that.

    Next, the proposal Red Hat has is a very AS400 concept. The History Log (QHST) and the Security Audit Journal (QAUDJRN) are both binary structures that have hard-coded readers built into the kernel. The system maintains internal integrity of the files with extra fields you never see. Of course the AS400 native file system is "DATABASE"-based. So any command that outputs from these displays to a screen or to something that can be instantly searched with SQL.

    I'm certain that is what they are trying to make here. Ultimately in security you care about the integrity of the logs more than even recovering the system... especially when you don't have to restart for a year at a time or more. Frankly, they should add an output option for SQLite file types and everybody can be happy.

    Realize that when places like banks use AS400s, they mirror the raw audit journals off to another system, often hundreds of times a day. Because they are binary journals that interlink with each other, they are difficult to tamper with. Yet at the same time, because they are a data type, programs can monitor them for specific events automatically, and it's trivial to set up actions to take.

  • by LordLimecat ( 1103839 ) on Friday December 02, 2011 @05:13AM (#38235472)

    Also text is very disk corruption resistant compared to binary

    That's a gross oversimplification; text is a subset of binary. Binary allows for things like journaling and integrity checking.
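    One concrete example of the integrity checking a structured journal enables is a hash chain, where each record's digest covers the previous record's digest; this is a minimal sketch, not the actual Journal design:

```python
# Minimal hash-chained log: each record stores a digest of the
# previous digest plus its own message, so edits are detectable.
import hashlib

def append(chain, message):
    prev = chain[-1][1] if chain else b"\x00" * 32
    digest = hashlib.sha256(prev + message.encode()).digest()
    chain.append((message, digest))

def verify(chain):
    prev = b"\x00" * 32
    for message, digest in chain:
        if hashlib.sha256(prev + message.encode()).digest() != digest:
            return False
        prev = digest
    return True

log = []
append(log, "session opened for user root")
append(log, "session closed for user root")
assert verify(log)

# Editing an earlier record invalidates its digest and everything after:
log[0] = ("session opened for user nobody", log[0][1])
assert not verify(log)
```

    With flat text you can silently edit any line with sed; with a chained format, tampering shows up the moment the chain is re-verified.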
