Interviews: Red Hat CEO Jim Whitehurst Answers Your Questions (redhat.com)

You asked, he answered!

For Slashdot's 20th anniversary -- and the 23rd anniversary of the first release of Red Hat Linux -- here's a special treat.

Red Hat CEO Jim Whitehurst has responded to questions submitted by Slashdot readers. Read on for his answers...


What...
by Master5000

...is your day like?

JW: I can tell you this: no two days are the same. Broadly speaking, I strive to prioritize time with customers, partners, and Red Hat associates above other meetings.

When I'm in town, my day starts at 5:30 am with a run. I'll scan email and the news during breakfast and take my kids to school. My first calls usually start at 8 am as I'm driving to the office. Today for instance, I'll meet with a few members of our Corporate Leadership team. I'll then sit down with our chief technologist to hear what's happening in the Office of the CTO.

I usually grab lunch around 11:30 am. I tend to bring my lunch, but will occasionally head to our cafeteria for a sandwich or salad. In the afternoon, I'll get briefed on my schedule for some upcoming events, which will include meetings with partners, customer panels, press, and analysts. I usually spend a few hours a day responding to emails and coordinating activity through email. I try to get home by 6 pm to eat dinner with my family and spend time with my kids. I'll usually jump back on email once everyone is asleep before knocking out around 10 pm.


The plans for CentOS?
By Anonymous Coward

Now that CentOS has received a more official status in the Red Hat world, what are the plans for the project?


JW: The ecosystem around Red Hat Enterprise Linux is sprawling and complex, and that's one of our strengths. You have midnight hobbyists working together with multinational corporations. You have people working on GPU hardware, and you have people working on Ruby apps. Some want the latest-and-greatest, and some want to keep everything exactly the same for years and years. So lots of different kinds of people are doing lots of different kinds of work, and all of them are contributing to this massive project called "Red Hat Enterprise Linux". It's not surprising that we can't accommodate all of that innovation in a single project.

That's one of the reasons we split Fedora and Red Hat Enterprise Linux: we freed up Fedora to be innovative and move quickly, which freed Red Hat Enterprise Linux to be more careful, more conservative, and handle the very important and difficult work of stability and security for code that upstream communities have long since moved past. Fifteen years later, we're still very happy with how that's worked out, and Fedora remains a thriving engine for new ideas that make their way down into Red Hat Enterprise Linux and many other projects.

CentOS solves a very different problem for us. First, there are some people we can't serve with Red Hat Enterprise Linux today but who still want to participate in the Red Hat ecosystem. Folks using Xen, for example, may not be able to run today's Red Hat Enterprise Linux, but they can absolutely work with the CentOS project and still participate in the broader ecosystem. Second, there are people and partners who are building software that needs a more stable, Red Hat Enterprise Linux-like lifecycle but who want to experiment at the kernel level, stuff that would be impossible for us to support in Red Hat Enterprise Linux. Open vSwitch and DPDK are a perfect example of this, and the CentOS SIG process has served them really well. They can do all the things they need to do in development and with their partner communities, and their innovations still pass from the upstream communities into Fedora, and ultimately into Red Hat Enterprise Linux, Red Hat Virtualization, and OpenStack.

Meanwhile, changes in hardware and software are changing how we think about a traditional operating system distribution. Things are more automated, hardware is moving faster and less predictably, and containers force us to differentiate between bringing up hardware and creating a stable platform for applications. To address all of these changes, Red Hat is going to need every element of our ecosystem -- Fedora, CentOS, and Red Hat Enterprise Linux -- to respond.


Systemd, WTF???
by rknop

As I understand it, one of the stated goals was to speed up boot times. It's had exactly the opposite effect on my Ubuntu system -- that is, when the boot doesn't die altogether when I try to mount NFS shares. (Also, thanks to systemd, I can't even *reboot* or shut down the machine when there's a hung NFS process. I am forced to hard-reset it.)

For years, warning flags have been raised about systemd. It more or less seems that we're bringing all the disadvantages of the Windows architecture to Linux, without any of the advantages of running Windows.

So, again: systemd, wtf???


JW: We had a lot of systemd questions, so I am replying to them all collectively.

==========================================================================================================
My question is related: is Red Hat, as an organization, at all concerned about the damage that systemd has done to Linux's usability, its reputation, and its community? Is Red Hat concerned with how systemd has driven so many Linux users to FreeBSD?

................................................................................................

And a follow-up: why not spend some of Red Hat's money on a sane init system?

I'm sure you can put a few dollars and bright minds on a system that works reliably. The last thing I want my embedded system to do is get hung up on an init failure.

................................................................................................

This begs the question, so I'll just ask it: Have any customers ever moved away from Red Hat because of systemd?

==========================================================================================================

JW: First, allow me to address why Red Hat adopted and invested in systemd as it helps to address many of the other questions. Traditional init systems, like System V init, served the UNIX and Linux communities well for decades, but that is a long time and it is not surprising that they have their limitations. The problems an init system needs to solve today are different from the ones that traditional init systems were solving in the 70's, 80's and even the 90's.

Red Hat considered many available options and even used Canonical's Upstart for Red Hat Enterprise Linux 6. Ultimately we chose systemd because its architecture provides the extensibility, simplicity, scalability, and well-defined interfaces needed to address the problems we see today and foresee in the future. For all the passionate debates and disagreements, the fact remains that systemd has become the cornerstone of nearly all Linux distributions on its own merits.

Any change like systemd is going to be disruptive. We understand that many were not happy with this change, and we appreciate the passion of the community. The continued growth and adoption of Red Hat Enterprise Linux, as well as other systemd-based distributions, tells us that most users have embraced systemd and that there was not a large exodus to FreeBSD or alternatives. We partner with the largest embedded vendors in the world, particularly in the telecom and automotive industries, where stability and reliability are the number one concern. They adapted to systemd easily.

We see that new users (both those new to Linux and prior SysV init users) who truly take the time to learn systemd embrace the simplicity of the interface and its capabilities. We also hear that, for a new user, it is no more difficult to learn than the complexities of init and rc scripts. It's simply different.
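
For readers who know init scripts but not systemd, here is a rough side-by-side of everyday tasks; httpd is just an example service name.

    # SysV-style command (shown as a comment) and its systemd equivalent.
    # service httpd start        -> start the service now
    systemctl start httpd

    # chkconfig httpd on         -> enable the service at boot
    systemctl enable httpd

    # service httpd status       -> check its state (systemd also shows recent log lines)
    systemctl status httpd

    # tail /var/log/messages     -> read just this service's logs from the current boot
    journalctl -u httpd -b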

The Debian community provides a thorough, independent evaluation of the systemd init system debate. Additionally, the systemd developers provide a list of the biggest myths around systemd.

There are some real advantages, too. Because systemd tracks processes at the service level, daemons can be properly killed, rather than trusting them to do the right thing. This also makes it easy to use cgroups to configure SLAs for CPU, memory, etc. Likewise, security with SELinux and sandboxing becomes much simpler. The dependency resolution between services is a significant improvement over the sequential ordering of the init rc script mechanism.
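
To make the cgroup and process-tracking points concrete, here is a minimal sketch; the unit name, the limits, and the /usr/bin/batch-job path are invented for the example.

    # Run a one-off job in its own transient unit, with resource limits enforced
    # through cgroups (unit name, limits, and binary path are illustrative only).
    systemd-run --unit=batch-job -p CPUQuota=25% -p MemoryLimit=512M /usr/bin/batch-job

    # Cap an already-running service the same way, without editing its unit file.
    systemctl set-property httpd.service CPUQuota=50%

    # Because all of a service's processes live in its cgroup, this reliably kills
    # the daemon and its children, rather than trusting the daemon to clean up.
    systemctl kill batch-job.service

    # Per-service CPU, memory, and I/O usage, grouped by cgroup.
    systemd-cgtop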

Looking forward to all of the exciting innovation taking place around large-scale cloud computing, OpenStack, Kubernetes, and containers, we see continued integration and innovation with systemd that would be either impossible or very difficult with init-based systems.

So we'll continue to invest in systemd, as it meets our customers' expectations around capabilities, stability, maturity, and community momentum. There's no realistic alternative today that comes close in terms of adoption and functionality. That said, we're always watching how projects and communities evolve, and in that way systemd is no different from any other component that we ship.

Lastly, I wouldn't dare to debug anyone's setup here, but mounting NFS at boot time is notoriously problematic if you do not have highly available NFS servers. This is a problem that existed before systemd, and I think it's much safer to use autofs to mount those volumes on demand, or to use mount options such as nofail or nobootwait. It is best not to blame systemd for issues that also affect init or are misconfigurations. Ironically, systemd provides more troubleshooting and debug options than init, so that might be helpful to you.
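
For anyone hitting the boot-time NFS hang described above, here is a rough sketch of the on-demand alternatives he mentions; the server name, export path, and mount point are placeholders.

    # /etc/fstab: mount the share on first access instead of at boot, and don't
    # let a failed mount block startup (server name and paths are placeholders):
    #   nfsserver:/export/data  /mnt/data  nfs  nofail,x-systemd.automount,x-systemd.mount-timeout=30  0 0

    # Or use autofs so the volume is mounted on demand:
    #   /etc/auto.master:  /mnt   /etc/auto.nfs
    #   /etc/auto.nfs:     data   -fstype=nfs   nfsserver:/export/data
    systemctl enable autofs
    systemctl start autofs

    # If boot does hang, these help identify the unit responsible:
    systemd-analyze blame
    systemctl list-units --type=mount --state=failed
    journalctl -b -u mnt-data.mount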


==========================================================================================================
Why isn't Linux on the desktop more widespread?
by snooo53

I'm curious your thoughts on why Linux hasn't grabbed more laptop/desktop marketshare from Windows and MacOS over the years? It seems that with the privacy concerns around Windows 10 and Apple's lack of focus on MacOS there may be a huge opportunity in the near future. What things need to happen in the consumer marketplace and within the OSS community for it to really take off? Can 2017 be the year of the Linux desktop?

................................................................................................

Why not have a consumer desktop?
by Danathar

Given Ubuntu's success at providing a stable, developed and popular desktop environment for non-technical consumer users, why doesn't Red Hat provide the same thing? Why is that right for Ubuntu but not Red Hat?

................................................................................................

Strategy
by olau

Red Hat is big and getting bigger. Where are you heading at the moment? Would Red Hat ever try to move into the more consumer-focused places where Ubuntu has ventured, or is that just not profitable enough?

................................................................................................

Why does GNOME have such an unusable UI?
by Anonymous Coward

GNOME is in effect a Red Hat project, given the number of people and the amount of funding it gets from Red Hat. So why does GNOME have such an unusable UI, particularly for the major audience of your products? The UI makes basic tasks such as switching between windows a chore unless you install shell extensions, which break frequently and cause instability.

................................................................................................

Proprietary driver support
by ARos

Many proprietary hardware vendors continue not to take the Linux desktop and workstation markets seriously. Recall, e.g., Linus's rant against NVIDIA. As a leader in the Linux and FOSS communities, what will you do to persuade major vendors to write and maintain functional drivers for Red Hat Enterprise Linux and Fedora?

==========================================================================================================

JW: We also had a lot of great questions on the Linux desktop - let me try to answer collectively:

A functioning, useful desktop is obviously critical to the success of the Linux community. A nice GUI makes Linux more accessible and approachable, and that's why we continue to make investments in projects like GNOME, Wayland, and nouveau. Everyone benefits from improvements in this area, so let's call that the baseline. The primary driver for that work is in Fedora, and I was really glad to see such great reviews of F25. If you haven't tried Fedora in a while, now's a good time to jump in. Personally, I love it.

Of course, one of the perils of the desktop is that "desktop success" is so specific to each individual, since everyone has their own opinion about what a desktop should or must do. That means that even when we think about our "baseline" investment in the Linux desktop, someone's going to be disappointed. What's worse, it's very difficult to make money on a "baseline," since it's something that people just expect to have in the first place. Nevertheless, we spend a lot of time and money on getting these projects right because it is so important to the broader community and the success of our own products.

There's another category of desktop, let's call it the "enterprise desktop". This category requires features that just don't come naturally through a community, and they need some additional investment. The "enterprise desktop" customers who pay for a Linux desktop want that same functioning, useful "baseline" desktop, of course. They also want things like enterprise management features, security tools, compliance tools, identity management, and even simple things, like having the windowing system scale correctly when it's run in a VM on Windows.

You've probably already read my comments on the future of the desktop, and you know that I think the "enterprise desktop" market is changing dramatically. You can see this in how Microsoft has changed their own strategy. Among other things, tablets and phones are far more important than they were just five years ago. We don't think about the software on tablets and phones as part of our core business, so we've left that space alone. But their influence is still there, so the "enterprise desktop" features people are willing to pay for have changed, and that has an influence on how we invest our resources.

There's a third category, which is the "technical workstation". These are power-hungry users with domain-specific applications, like 3D visualization, animation, fluid dynamics simulations, stuff like that. They naturally gravitate to Linux because that's where the tools and research that make them successful start. We've had great success in that space, and we continue to make investments here.


How do you monetize Open Source?
by mykepredko

What would you recommend to somebody who feels they have a great application idea and are probably ready to go for angel/first round funding but feels that the application should be Open Source?

Do you put in customization/support as the way to fund the endeavor long term or is there another approach for the OSS conscious entrepreneur?


JW: Open sourcing an idea is great because you will be able to innovate faster with the community than you would by going it alone. There are many, many open source startups doing exciting things, and many with VC backing. So, there is clearly a path for the OSS conscious entrepreneur. Red Hat chose a subscription model for our business; others have gone the customization/open core route. We believe in an upstream first development model, so open core/customization does not work for us. But, there are certainly many successful open source companies that use this model, and the true answer here is that there are likely a lot of variables depending on what your app is focused on.

Most importantly, recognize that the value of the open source development model lies in user participation. So building a business model around open source starts with a clear, deliberate strategy on how to get others with different perspectives and expertise involved in writing the code. If you don't have others actively involved in writing the code, then it's hard to get the leverage you need for an open source model to work.

Building a new community is hard. We've started a few at Red Hat, but most of the time we look for existing projects that already have a robust community. Where a robust community exists, open source always wins. From a business model perspective, recognize that you can't sell the value of the functionality, because the functionality is free. So think hard about how you add value around that functionality. For Red Hat products it's typically a combination of a commitment to a defined life-cycle for the bits, downstream certifications and ecosystem, the ability to drive upstream roadmaps to meet our customers' needs, and support.


Open source?
by martiniturbide

What is Red Hat's current commitment to open source for 2017? Red Hat may be the most profitable software company that endorses open source for its products. What is your recommendation for other companies that want to be profitable and at the same time remain good open source citizens?

JW: Red Hat's commitment to open source has never wavered. We are committed to having a 100% open source product portfolio, with an upstream first development model. This means that we do our work to get features integrated into open source projects before we integrate them into Red Hat products. Dave Neary from Red Hat's Open Source and Standards team wrote a good blog post about this approach. And we have followed through on this commitment even with the technologies we acquire, something I think is pretty unique to Red Hat. In the last few months, we've open-sourced Ansible Tower and Codenvy.

My recommendation to other companies: contribute. In the last few years, we've seen a lot of new voices championing open source. That's great to see, even when it's your competitors. Faster innovation and more choice is always a good thing. But, open source is a commitment, not a buzz phrase. Companies that want to be good open source citizens need to walk the walk. Another must-read on Red Hat's commitment here is this blog post from Paul Cormier.


Building a strong company
by resplin

Red Hat has distinguished itself through its commitment to open source and its ability to remain profitable.

Mike Olson famously said "you can't build a successful stand-alone company purely on open source." He argues that you cannot scale an open source model that does not rely on selling proprietary components because it is too easy for competitors to undercut a vendor's services offerings when they don't have to pay for R&D.

How do you feel about that assessment? Is Red Hat's success impossible to replicate by other open source companies?


JW: First off, let me say that Mike is a great guy. I've known him for many years, since I first joined Red Hat. And I want to applaud him for his work in driving Cloudera to where it is today. I'm thrilled to see their success. But in regards to open source business models, we've agreed to disagree.

I'd argue that Red Hat is a successful company by many metrics, built purely on open source. My contention is that too few open source companies follow the Red Hat model. I don't want to overly bash open core models. Some will be successful, but competitively, I'd argue that there's no faster way to innovate at scale than through open source communities. We've said before that half open is still half closed. I think it's too easy for early adopters to find workarounds to open core offerings, which can hurt a business when it moves past the early adopter phase.

I refer to this a bit earlier in the Q&A, but the important thing to remember in an open source business model is that YOU CAN'T SELL FUNCTIONALITY because it is available for free. If you just think about functionality, then Mike is probably right - you need to add proprietary code that you can sell. But implementing a piece of software in an enterprise context is about so much more than the functionality.

Red Hat is successful because we obsess about finding ways to add value around the code for each of our products. We think of ourselves as helping make open source innovation easily consumable for enterprise customers. Just one example: for all of our products, we focus on life-cycle. Open source is a great development model, but its "release early, release often" style makes implementing it in production difficult. One important role we play in Linux is backporting bug fixes and security updates into supported kernels for over a decade, all while never breaking ABI compatibility. That has huge value for enterprises running long-lived applications. We go through this type of process for every project we choose to productize, to determine how we add value beyond the source code.

I would agree that this type of business model won't work across every technology category. At Red Hat, we look very deeply at the categories we've expanded into to ask ourselves whether our model can be effective and make an impact in a given space.

What advice do you have for building a sustainable business, especially one that is driven by open source values?

JW: Start off by reading a couple of answers above. To summarize:

1. Start (or find) an open source project that truly benefits from broad participation, and work to build (or become involved in) that project. Projects where participation benefits the quality and innovation of the code are inherently advantaged over proprietary code. So you can check the first box: a technology that is superior to competitors'.

2. Identify how you can uniquely add value to that technology in ways that transcend the code. This is what I talk about above. The code is free. It's better because of your and others' contributions. But those are freely given and free to use, and therefore are very hard to monetize. Focus on how customers might implement the technology. For Red Hat, we like layers in the stack that are runtimes, where enterprises will likely want long-lived support. We also like layers where hardware touches software, because there is huge value in standardization and certifications, which are not attached to the code but to the products that we rigorously test and build joint support mechanisms for with the hardware vendors. If you identify this, you are well on your way: you have a project that is superior to competitors' and a vehicle to uniquely add value to that project in your product.

3. Surround yourself with like-minded, passionate people. Culture always trumps strategy. That's a short paraphrase of a famous quote. Companies too often fail because of internal strife, ethical failings, or simply losing their way. I know that startups have to begin with a product and business model, but durable success happens via people working together to make it a reality. And that's all about culture and leadership.


Recruiting open source talent
by resplin

As Red Hat has scaled, it has had to remain staffed with all types of non-technical business professionals. How do you help these professionals learn to "sell free software"? Has it been difficult to train these professionals on the open source business model?

JW: I think that anyone can pretty easily put themselves in our customers' shoes and understand the benefits of open source. For one, no one wants to feel locked into a proprietary solution or data format. We all want choice and flexibility, and open source is a great way to enable that.

For another, everybody wants access to rapidly innovating technology that helps solve their business problems, and our model gives them the ability to consume the latest and greatest technology, but in a way that's stable and secure for the enterprise.

And finally, everybody's experienced the frustration of having something in their car break and not having access to fix it. It seems like many companies deliberately make it difficult for their customers to tinker with or improve their products. Open source is the exact opposite -- we welcome people to take a look under the hood, see how things work or why they're broken, and roll up their sleeves to contribute if they want to make it better. All in all, it's a pretty simple and compelling value proposition that even someone brand new to our company can understand.


Coding Chops
by CrashNBrn

So who wins in a "code off" ?

Jim Whitehurst, Mark Shuttleworth, Tim Cook, Larry Page, or Satya Nadella.


JW: That's a tough one, but I think I could at least compete! I wasn't new to Linux when I joined Red Hat. I'm actually working towards my Red Hat Certified System Administrator (RHCSA) certification now. It's not an easy certification to get; if I'm successful, I'll hopefully have proven my chops. I can compile a Linux kernel and kernel modules and can build pretty decent apps. Though OpenShift makes building apps so easy, I'm not sure that's a huge distinction. (Note: Shameless plug!)

But the actual answer to your question is Linus Torvalds. He really should be on that list!


A long term view on IoT security?
by mlts

Are there any plans or products to help with IoT security?

Red Hat is one of the few companies that can step in and do something with regard to device security, even when device makers have little to no interest in this topic; to them, security has no ROI, or as one IoT company exec told me, "the only person that has ever made money from a padlock is the lock maker."

Being able to lure IoT vendors to use secure tools wouldn't just benefit them, it would benefit the Internet as a whole. Even something like manifest lists that interact with FirewallD to ensure a device can only communicate with authorized devices and cannot take input/output from rogue sources would improve the IoT ecosystem tremendously.


JW: We are already helping with IoT security indirectly. Open source and Linux power nearly every IoT device that exists. This is an example of open source winning: you can't escape its reach any longer. That said, Red Hat has always been a substantial contributor to open source projects, and security is always a part of this collaboration. We were doing security before security was cool.

Rather than putting a focus on individual IoT devices, our focus is on the open source ecosystem as a whole. This is an instance where a rising tide lifts all boats. The goal is not to help a single device or vendor, but to work on features that will affect the entire industry. By focusing on improving security in the kernel, the compiler, glibc, the libraries used, and even the graphical user interfaces, we are helping build the future of IoT device security. IoT is changing the rules and perception around security. There is a lot of opportunity to get IoT security right, which means we have to focus on getting open source security right. We all win or we all lose when it comes to IoT security.


OpenStack vs AWS
by resplin

How can we improve the future of OpenStack? The dominance of Amazon has challenged the relevance of well funded players like Microsoft, Google, and IBM. How can OpenStack compete? The network effects around a dominant cloud platform threaten to relegate OpenStack to be a long term niche player, like Linux on the desktop. How can we avoid this fate?

JW: Most important is that the hybrid cloud is real, and it's increasingly part of the dialog we have with users and customers. Cloud isn't either-or. You can have a multi-cloud deployment where you are using OpenStack for some workloads and AWS for others. We consistently hear from our customers and users that they are in public clouds like AWS *and* their on-premises cloud deployments. The public cloud providers are all great partners of ours, and I view OpenStack as a complementary technology to them.


As corporate IT loads shift to public clouds...
by Anonymous Coward

...does this marginalize the role of operating system vendors? I would imagine that most AWS customers would lean on Amazon for technical support rather than Red Hat.

JW: On the contrary, the emergence of public cloud has made the operating system even more relevant. There are several reasons why:

The first is around application mobility. The vast majority of customers I speak with plan to use more than one public cloud, so portability becomes a major requirement. And since the OS is where the application ultimately touches computing resources, having an OS that can run consistently across all major platforms becomes even more important. As with any single platform provider, OS optimizations for provider-specific hardware, architectures, or services may address specific situations, but we have all seen how that played out in the single-source, vertically integrated Unix stacks; hence Linux. So we remain dogged in working with all our cloud, hardware, and software partners to ensure that RHEL (and all our products) enable as many platforms as possible, to reinforce customer choice and application mobility.

Second, much of the value we provide in Linux is around life-cycle. We commit to a decade-plus life-cycle of patching and support for RHEL. That allows enterprises to confidently run long-lived applications on RHEL, and it requires a massive engineering investment in skills, tools, and processes. I guess others (like public clouds) could ultimately choose to do that, but it's a very different business than they are in today, and I'm not sure why they would choose to do that versus the many other areas of opportunity that more closely match their current capabilities.

Finally, new application models like containers and microservices are bringing the operating system to the forefront. Each and every container carries its Linux user-space dependencies inside it, and therefore requires management of those components in the container regardless of where that container runs. As the leading Linux vendor and as a leader in many of the projects around containers, Red Hat is uniquely positioned to help customers as they build and deploy containers on public clouds or on premises.
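
As a concrete illustration of those in-container user-space dependencies, here is a minimal sketch; the image name is a placeholder, and any RHEL-based image with rpm inside would behave similarly.

    # Every container image carries its own user space, so the packages inside it
    # need the same inventory and patching discipline as a host.
    docker run --rm rhel7-base rpm -q glibc openssl bash

    # Those packages only get security fixes when the image is rebuilt, e.g. a
    # build step along the lines of:
    #   yum -y update && yum clean all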


Product vs Engineering
by Nite_Hawk

Hi Jim,

Thank you for answering our questions! How do you view top-down product driven development vs bottom-up engineering driven development? Are there situations where one excels vs the other?


JW: To be honest, I'm not sure I'm the right person to answer that question. I've had the great fortune of having a very strong engineering leadership team at Red Hat, so I have allowed them to drive how we engage with communities and build our products.

In a broad sense, Red Hat does a bit of both. Our business model is built from the project out to the product, because we so strongly believe in the power of user driven innovation. So I guess you could say that we are more bottom-up engineering developed. But a big part of our value is taking customer needs and driving those into upstream projects so that they end up in our products. So we really are a hybrid.


Puppet versus Ansible?
by waveclaw

Where do you see the configuration management market going in the next year or two? Orchestration is the hot topic right now for automation, versus last year's configuration management tools. Ansible is more orchestration than configuration management; Puppet and Chef require tools like MCollective to pick up the orchestration piece. Red Hat now runs Tower, and Tower now ships as part of the Red Hat Ceph Storage product. Red Hat's Satellite product is based on Foreman, which includes Salt, Puppet, Chef, and Ansible support.

But where is this market heading? Are we likely to see consolidation? Integrations? Or even a flood of vendor products tied to particular configuration management systems?


JW: First things first, it's interesting to note that Ansible started as an orchestration platform that also happens to be able to do configuration management as well.

Orchestration isn't a natural capability of many of the other tools on the market, but if you think about it, the ability to orchestrate configurations is really pretty critical. As it turns out, the order in which you provision IT applications and environments is really, really important. And Ansible handles this by design.
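
A rough illustration of that ordering point, using ad-hoc Ansible commands; the inventory file, group names, and service names are invented for the example.

    # Orchestrate an ordered rollout: bring up the database tier before touching
    # the web tier (inventory, groups, and packages here are examples only).
    ansible dbservers  -i hosts -b -m service -a "name=postgresql state=started"
    ansible webservers -i hosts -b -m yum     -a "name=httpd state=latest"
    ansible webservers -i hosts -b -m service -a "name=httpd state=restarted"

    # The same sequence written as a playbook keeps the ordering explicit and repeatable:
    #   ansible-playbook -i hosts site.yml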

That being said, we have a number of customers that use other configuration management platforms like Puppet and Chef, and they use Ansible to deploy and manage agents, and then to orchestrate application deployments by deploying configurations as defined by these other tools. So really, it's easily a "yes, and" story, not an "either or".

Then we have Ansible Tower; Red Hat was actually a paying Ansible Tower customer before we acquired them. Tower helps organizations operationalize automation across all their teams and IT environments in ways other tools cannot easily match. It's also key to plumbing automation into DevOps workflows.

There is some possible consolidation, but there's still a lot of market adoption to be had. We come across customers every day that have never used any configuration management solution at scale. This is a problem for companies that want to scale, because running workloads in the cloud or with containers is nearly impossible without a mature automation and configuration management posture. So while some consolidation is possible, there's still a lot of growth out there. As for config management being tied to vendors, I suspect that you'll continue to see other organizations mirror our hybrid approach here. An IT org that is trying to juggle deployments both on-premises and in the cloud needs tools that work just as well in either location. This is a particular strength of tools like Ansible.


Are there plans to tighten Ansible Integration
by waveclaw

We use and love Ansible, but it still seems to be a separate product. Are there plans to integrate it more? Having it as an integrated deployment option for JBoss Operations Network (JON) would be good.

JW: When we acquired Ansible, we knew we had to be careful not to immediately crush them with all of our scaling requirements. At this point, roughly 18 months post-acquisition, we can say that the Ansible team is heavily engaged with nearly every Red Hat product team. So whether you're talking about Red Hat Enterprise Linux, OpenStack, OpenShift Container Platform, Ceph Storage, CloudForms, Insights, or many of our other offerings, Ansible is either already integral to those offerings, or is being planned for a near release. It's an important piece across our portfolio.

Specifically to JBoss and our middleware offerings, several of our consulting teams came together to create Ansible roles to ease the deployment and management of various JBoss offerings. And I think that illustrates perfectly what Ansible means to us: even our services teams are engaging in the Ansible community and getting involved. That is a testament both to what Ansible enables customers to do and to the love that so many different teams across Red Hat have for Ansible.


If meritocracy over democracy...
by turkeydance

If meritocracy over democracy is his choice, who decides what counts as "merit"?

JW: Great question. One thing that's important to me is that we continually question how well we apply the principle of meritocracy in practice. In general, we try to define our business goals and the problems we're trying to solve in clear and objective terms, so that it's obvious to everyone what the best and most feasible ideas are. You can get a feel for what kinds of information and detail we share internally by checking out the Open Decision Framework, which is a collection of our best practices for making open and inclusive decisions. We think of meritocracy as a leadership behavior, and you can see how we define it here. (PS: You can also find it on GitHub under a Creative Commons license.)

In practice at Red Hat, people with a long history of contribution and good ideas build their reputations as people to be listened to. It's not a perfect process, but because it is a "multi-round game" with reputations built over many interactions, it's a pretty good way for informal leaders to emerge.


Enterprise Desktop Market / Emerging / Demand
by GioMac

I am running more than 250 Linux desktops at my company and could deploy even more, but there are no centralized management solutions for that, which is an issue for customization and security too. The KDE desktop is quite good up to a point, with its ability to use strict configuration files and immutable options; that covers maybe 25% of what we can get with Microsoft's Group Policy Objects, and it looks like only a little more effort would be needed to make things work.

Can we expect Red Hat to enter that market in the near (3-4 year) future?


JW: I appreciate the feedback and idea for a new market for us. Let me take that feedback back to our desktop team. I really can't talk about future product plans in this venue, but I'll make sure they know that you see an opportunity here.


RHCA Exams
by kamilyunis

My question is about the Red Hat Certified Architect exams. Red Hat's new subscription-based training is very good and we are very happy about it. But when it comes to the RHCA, it is limited by location.

RHCA-level exams are very expensive, and travel and accommodations make them even more expensive. I am a 2x RHCE because those exams are available in my location, but Baku, Azerbaijan, and the wider Middle East and Caucasus region have no center where the RHCA exams can be taken. Please take this into consideration. VMware, Cisco, Microsoft, AWS, and OpenStack make their exams available everywhere online, so it is easy for everyone to take them. Why should an open source company limit people's passion by location?

I believe that people like me could become multi-level RHCAs if we had the chance to take the exams in our own location, and this would also help Red Hat's recognition and value in these regions. Please make this available to us the way Cisco does, or at least make it possible at a kiosk in Georgia or Azerbaijan so we can take the exams too. I am from Baku, Azerbaijan. With love to the best open source company in the world.


JW: We recognize the need to reach people who are interested in certification throughout the world. We are constantly expanding our global testing options, and increasing the number of ways we offer testing for our certifications, including adding secure, preconfigured kiosks and laptops with our Red Hat Training Partners.


==========================================================================================================
Red Hat Enterprise Linux is too static to keep pace with kernel devel.
by nbritton

I have found that Red Hat Enterprise Linux is too stagnant/static to keep pace with the rate at which the kernel is now developed. The 3.10 kernel is four years old at this point, and the fact that Red Hat Enterprise Linux 7 will be in production support until 2024 is disheartening, because the enterprise industry will be a decade behind the latest kernel developments and updates from associated projects. Compared to other vendors' Linux offerings, when I use Red Hat Enterprise Linux I get the same feelings I got when I was forced to use AIX, HP-UX, and Solaris. I hated administering those products because they were stuck with defaults like ksh from a decade ago.

My question is, would Red Hat ever consider releasing a Linux distribution with a shorter development cycle and more aggressive tracking of upstream projects? I see a place for a distribution that sits somewhere between Red Hat Enterprise Linux and Fedora. Perhaps you could morph or fork CentOS into the upstream development for Red Hat Enterprise Linux? For example: Upstream --> Fedora (Bleeding Edge) --> CentOS (Next Release of Red Hat Enterprise Linux) --> Red Hat Enterprise Linux. This would give system engineers and architects a greater range of products to choose from, and it could help stabilize Red Hat Enterprise Linux even more than it already is.

In short, the Linux kernel is the largest and the fastest moving software project in the world, so what changes are you going to make to keep up with it?

.....................................................................................................................................................................................................................................

The Price of Reliability
by hcs_$reboot

I've worked on SunOS, Solaris, macOS, Red Hat, CentOS, and, more recently, Ubuntu. CIOs choose Red Hat mainly for support and reliability; reliability is the word that comes to most engineers' minds when the RH and CentOS operating systems are mentioned (certainly for good reasons). That reliability mainly relies on using older kernels and features that have been patched over and over; sure, that works, reliability-wise. But on a number of recent projects comparing Ubuntu Server and RH/CentOS, it appears setting services up (e.g. Samba) was far easier on the newer Ubuntu releases than on the latest RH/CentOS (not to mention the many issues migrating from 6 to 7). Also, using newer kernels, Ubuntu performs well, taking advantage of the newest internals, memory management and sharing, IPC, etc., with no specific reliability issues (IMHO, reliability-wise, Ubuntu and the like are as solid as RH nowadays).

Question: in 2017, does reliability still mean using long-tested, but older kernels and features?

==========================================================================================================

JW: There's a common misperception that Red Hat Enterprise Linux pulls a Fedora kernel and stays on it for 10+ years while the world moves onto newer kernel versions with better features and newer hardware. It's true that we standardize on a specific kernel version for the life of a major release, but that's not the whole story.

Our stability is actually in our kernel ABI (kABI), which is a promise of stability that a kernel developer can rely on for the life of a Red Hat Enterprise Linux release. When we release a major version of Red Hat Enterprise Linux, we actually backport many key features, bugfixes, and more from newer kernels, and we do it in a surgical way that not only delivers new features and hardware support on an older kernel, but also preserves the kABI. For example, Red Hat Enterprise Linux 6 was based on the 2.6.32 kernel, but when we released Red Hat Enterprise Linux 6 it also carried an additional 2,600 patches (features, hardware enablement, bugfixes, CVEs), and this continues for the development life of the release. The stats for Red Hat Enterprise Linux 7 are similar. This provides the balance between stability and innovation that customers expect. We also have a driver update program (DUP) that makes it easy to add kernel drivers prior to the public availability of the next minor release.

So don't take the kernel version at face value -- we spend a lot of time backporting newer kernel features into every major release! If you want the latest and greatest and don't care about kABI, ABI and long-term stability, Fedora is ready for you.
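
One quick way to see that backporting on a running RHEL 7 system; the kernel release shown in the comment is just an example, and the counts will vary by minor release.

    # The base kernel version stays fixed for the life of the major release...
    uname -r                                  # e.g. 3.10.0-693.el7.x86_64

    # ...but the package changelog records the stream of backported fixes.
    rpm -q --changelog kernel | head -n 40

    # A crude count of CVE fixes that have been backported into this kernel build.
    rpm -q --changelog kernel | grep -c 'CVE-'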

Enterprise customers continue to expect stability, security, scalability, and reliability, but they also want higher levels of automation and ease of use, multiple deployment methods (bare metal, virtualization, containers, and cloud), and new features as they appear upstream, along with the hardware and userspace tools that help exploit them. If we see critical new features going upstream that would break kABI and can't be backported, we will plan accordingly.

Comments:
  • by Anonymous Coward

    A complete cop-out over systemd; we're hurting from the bugs and the architecture, not the change itself. Unfortunate; I'd hoped for more than a standard systemd marketing blurb cut-and-paste.

    • by ShanghaiBill ( 739463 ) on Monday October 30, 2017 @03:25AM (#55456273)

      A complete cop-out over systemd

      He answered the question. Just because you don't agree with him doesn't make his answer a "cop out". What were you expecting? Red Hat created Systemd. It is their baby. They are not going to abandon it. If it is so important to you, then install Slackware, and you will not only be able to tweak your init system, but you can tweak anything else you want, and experience pure raw Linux.

      • Re: (Score:1, Flamebait)

        by drinkypoo ( 153816 )

        It's a cop out because it's lies and misdirection. "The problems an init system needs to solve today are different from the ones that traditional init systems were solving in the 70's, 80's and even the 90's." No, it's doing the same goddamned job. "This also makes it easy to use cgroups to configure SLAs for CPU, memory, etc." No, since Red Hat used boilerplate for all initscripts, it would have been easy to insert the simple shell commands to create cgroups and put daemons into them in every initscript.

        • by Zero__Kelvin ( 151819 ) on Monday October 30, 2017 @05:22AM (#55456477) Homepage
          You have a reading comprehension problem combined with literally zero knowledge of systemd. I have investigated it thoroughly as well as put in the relatively minimal effort to understand and leverage it. Everything he said was spot on.
          • You haven't explained anything Zero Kelvin - please try again to prove that what you claim is accurate.
            • Why would I re-explain it? Whitehurst broke it down quite nicely, and even provided links you are clearly too lazy to follow. Every myth that you hear over and over again from the people who are claiming to be Linux experts, and anti-systemd, is exposed in that write-up. The truth, I suspect, is that these idiots are anti-Linux; the other option is that they are woefully incompetent. People aren't leaving Linux for the BSDs, and if anyone would know Whitehurst would. And despite the claims that they
        • Comment removed (Score:5, Insightful)

          by account_deleted ( 4530225 ) on Monday October 30, 2017 @08:53AM (#55457037)
          Comment removed based on user account deletion
          • CEO is copping out and you are spewing in ignorance.

            macos is a desktop system, and I'll agree systemd might be fine for a desktop system.

            The other Unixes have superior init systems to systemd that actually address enterprise needs and also still provide traditional functionality/compatibility. Bringing them up is a fine way to start a discussion of how systemd went off the rails as a failed attempt to make something better

            Systemd does not solve any problem nor address any need for the hundreds of servers I a

          • And, let's be clear here, init was always shit too. I had numerous problems back in the 1990s with it if, say, a key service couldn't be started.

            It wasn't init that was shit, it was sysvinit (i.e. all the crap under /etc/init.d and /etc/rc?.d) that was shit.

            Poor old init(1) didn't have a chance, buried under piles of crapulous /bin/sh scripts.

          • by rastos1 ( 601318 )

            Since the 1990s, we've heavily moved to hotpluggable hardware thanks to USB, networking has gone from "Basic and optional" to "Ubiquitous and complex" thanks to high speed Internet, wireless mobile Internet (be it cellular or multiple WiFi hotspots), software firewalls, etc

            Number of USB devices I've plugged in during last 10 years that are not HID keyboard, HID mouse or mass storage: 0
            Number of wifi networks in range at home: 13
            Number of wifi networks in range at work: 2
            Number of wifi networks to whic

        • by HiThere ( 15173 )

          I'm not sure how much of it is lies, but it left me feeling that it was adopted because of some agenda that wasn't being revealed.

          Personally, I have seen *NO* advantage in systemd, and as a Debian user I felt the adoption of it was an unpleasant surprise and never justified.

          There are several features of it that I do not like, particularly the lack of transparency. With shell scripts I could figure out what was happening; with systemd it's "depend on the developers".

          That said, I can see use cases wher

        • and much of the community has rejected it outright.

          Yeah considering all the major distros use it now, you are talking bullshit. Gentoo lets you choose to not use it, but the default is SystemD.
          Slackware is the only holdout left for any distro that has ever been of any consequence. SystemD is the standard for Linux, it already won that fight.

      • Redhat's business model is selling support so creating something that easily fucks up your system is their cash cow. Expect /etc to be morphed into a single binary database next. Just you wait...

        • by xkahn ( 8544 )

          Red Hat may make money from support, but consider that every time a support case is opened, Red Hat is losing money. Most businesses work very hard to reduce the number of support cases that get opened because of this. Red Hat won't be an exception there.

        • Expect /etc to be morphed into a single binary database next. Just you wait...

          Expect Gnome 4 to build and run as a kernel module.

      • "Red Hat created Systemd. It is their baby. They are not going to abandon it."

        This isn't true, though. We would happily abandon it if something better for our purposes showed up. We've abandoned other things written by Red Hat for superior alternatives before; sometimes the alternatives were RH projects, sometimes not. Jim even said this in his answer.

      A complete cop-out over systemd; we're hurting from the bugs and the architecture, not the change itself. Unfortunate; I'd hoped for more than a standard systemd marketing blurb cut-and-paste.

      I would go further and characterize every one of Jim's responses as completely and utterly devoid of inspiration. Worse, actually: he gives me reason to worry about the deleterious effect that Red Hat has had and will have on the community if he continues shitting in the nest. But that's what you get when you hire an MBA to run a technology company: a nest full of shit. Enough shit, and there will no longer be room for birds. I say, it is high time for a change of "White" hats.

  • Never comment but... (Score:5, Informative)

    by Anonymous Coward on Monday October 30, 2017 @03:30AM (#55456287)

    Read Slashdot since 2003, but I almost never post. Sure, I'm still posting as AC. Posting here in the (hopeless) hope that somebody will read this and reconsider...

    I've been an (advanced but uncertified) RHEL and CentOS system administrator and user for 15 years now. My first work started with Red Hat Linux 9. I am now holding on to RHEL 6 for dear life despite spending significant effort evaluating RHEL 7. I find systemd distributions to be buggier, less reliable in general, and harder to administer. I won't go into details in this post, because they are amply present elsewhere on the web. I find the community surrounding systemd, especially with respect to its "principle of least surprise"-violating behavior, to be disheartening to say the least.

    I viewed RHEL 4, RHEL 5, and RHEL 6 with excitement, and I embraced the changes that came with each distribution as mostly improvements. I am not afraid of learning a new system for the sake of improvement. With RHEL 6 end-of-life fast approaching, however, I am preparing to switch all the institutions and environments I support to FreeBSD. This isn't meant as any sort of threat but just as a fact that I have sadly resigned myself to accepting. In the early days of RHEL 7, I vehemently hoped that RHEL 8 would move off systemd. As the years marched on without any sign of such a decision, and with an apparent doubling down on systemd, I feel like I have no options for a reliable init system without the wealth of bugs and breaking decisions systemd developers seem to feel comfortable routinely making.

    With all due respect, and choosing my words very carefully, I think you are somewhat mistaken about the nature of the broad acceptance of systemd across distributions. I think its somewhat viral, component-level nature has forced its adoption in many cases, and it seems quite apparent that this has produced far more antagonism and technical pitfalls than advantages.

    This is deliberately not written as a technical critique. Those can be found elsewhere. This is written as an entreaty from a random anonymous coward on the Internet, on behalf of many anonymous cowards, to say that, in my experience, systemd may not be as well-received as it seems. Definitely too late for RHEL 8, I know, and I don't approach a FreeBSD transition lightly. I know how silly it must sound to transition from the Linux to the BSD philosophy and environment over an init system, but reliability is key for me and for those I serve.

    Thanks for your responses.

    • by Anonymous Coward

      SYSTEMD IS SHIT.

    • by bluelip ( 123578 )

      systemd is 'popular' for the same reason IE is. They're included with the OS. Neither would be chosen on their own merits.

      • Except that, if you want Windows, you take what Microsoft offers, which includes built-in IE. (It's perfectly usable for downloading a decent browser. Don't knock it too much.)

        Every Linux distro is a different but similar OS. It's entirely possible to keep systemd out of a distro, but nobody actually does. If it were really that horrible, someone would have a non-systemd distro that would become popular. Until this actually happens, I don't see how systemd can be that bad.

    • As init systems go, I actually like systemd, far more than Upstart or, especially, Solaris SMF. The XML-laden can of worms known as SMF is particularly something I hope I never have to work with again (then again, with Solaris being barely on life support now, that's a pretty good bet). The only thing I'd wish is for systemd to confine itself to being an init system. Tying important system components tightly into systemd, on the other hand, is something I think is a Bad Idea.

      I've ripped-and-replaced several

    • I viewed RHEL 4, RHEL 5, and RHEL 6 with excitement,

      You need to get out more.

    • Perfect is the enemy of good. For all of my use-cases, good is good-enough. Then again, my use-cases are not your use-cases.

      Now, you have piqued my interest. What sort of use-cases do you have that are dependent on a specific init system? The only thing I can think of is something so integrated into the low-level OS stuff, like starting/stopping the process, or system logging, that it will require a lot of development work to rewrite for systemd.

      Care to fill me in?

  • GNOME 3 and others are focusing on local desktop use, whereas most of the enterprise use of RHEL I have encountered relies on remote display. I feel as if the desktop team is killing the main use case.
  • Thanks! (Score:4, Insightful)

    by Mike Frett ( 2811077 ) on Monday October 30, 2017 @04:05AM (#55456353)

    Thanks Mr. Whitehurst for taking the time to answer our questions! Personally I'd like to see Red Hat or Canonical make a push to switching Government, Health Care and School systems over to something besides Windows.

    • Personally I'd like to see Red Hat or Canonical make a push to switching Government, Health Care and School systems over to something besides Windows.

      They could join Debian, Gentoo and OpenSUSE at the Public Money, Public Code [publiccode.eu] campaign.

      It sounds like you might want to sign the petition there.

  • A CEO of a multi-billion corporation who actually tries to keep up with his technology down to the nitty gritty details. No wonder Red Hat is doing well!
    • A CEO of a multi-billion corporation who actually tries to keep up with his technology down to the nitty gritty details.

      You are assuming he wrote all his own replies with no help from any of his 10,700 employees.

      • He probably did. Quite a lot of Red Hatters have a story about getting an email from Jim out of the blue asking for help with some crazy-ass nerd project he's working on. He once mailed me asking about getting the out-of-tree Poulsbo drivers working (which is something I was messing around with at the time) for some funky device he was trying to get Fedora running on.

        He's not a full-time engineer or anything, but he's as much of a tinkerer as many /. readers. That was one of the things that got him hired as

  • by rknop ( 240417 ) on Monday October 30, 2017 @05:29AM (#55456487) Homepage

    "[b]It is best not to blame systemd for problems that go away when you stop using systemd.[/b]"

    Do I [i]know[/i] that systemd was the problem? Nope. Have things gone much more smoothly for me since I moved away from an extremely popular distribution that uses systemd? Yep.

    What's more, the argument that everybody is using systemd on its merits is specious. It's the same as the argument that Windows and Word and the like dominate the corporate world on their merits. When your octopus wraps its tentacles around so many different things, it eventually becomes impossible to ignore the octopus. You can keep trying to move away, but sooner or later it's just easier to give in and deal with it than to keep fighting it. This is what happened with Internet Explorer back when it had loads of incompatible extensions that sellers of corporate software relied on; you had no [i]choice[/i] but to use it. And now that systemd has sucked in so many subsystems, it has become more effort to keep using the non-systemd versions of those subsystems than to just give in and use systemd. That's not adopting systemd on its merits. It's giving in to a company that has used embrace-and-extend as a bully tactic, based on its strength and control over key pieces of the ecosystem.

    • by Gravis Zero ( 934156 ) on Monday October 30, 2017 @07:01AM (#55456649)

      "It is best not to blame systemd for problems that go away when you stop using systemd."

      That reminds me of the new book from O'Reilly. [imgur.com] ;)

    • by emil ( 695 ) on Monday October 30, 2017 @07:30AM (#55456723)
      I would actually use a cron @reboot entry for NFS mounts. I generally have a /root/afterboot.sh where I start Oracle database units, manipulate the firewall, then *lastly* mount NFS volumes. This method never hangs a boot, and it's more portable and less work than trying to configure an rc.local. I suppose that I could use systemd timer units to accomplish this, but I've never felt motivated. Vixie cron runs on a lot of other init systems.
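
      For anyone who wants to copy the idea, it boils down to one crontab line plus a script. The paths, the Oracle start command, the port, and the server name below are placeholders, not my actual setup:

        # root's crontab -- run the script once, after the init system has finished booting
        @reboot /root/afterboot.sh >>/var/log/afterboot.log 2>&1

        #!/bin/sh
        # /root/afterboot.sh (sketch)
        su - oracle -c 'dbstart'                             # start the database first (placeholder command)
        iptables -I INPUT -p tcp --dport 1521 -j ACCEPT      # adjust the firewall
        mount -t nfs -o soft,bg nfshost:/export/home /home   # NFS last, so a dead server never hangs the boot

      Nothing in it cares which init system booted the box, which is rather the point.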
      • by Anonymous Coward

        I would actually use a cron @reboot entry for NFS mounts. I generally have a /root/afterboot.sh where I start Oracle database units, manipulate the firewall, then *lastly* mount NFS volumes. This method never hangs a boot, and it's more portable and less work than trying to configure an rc.local.

        Longtime Unix admin here. You need to stop making static NFS mounts and use an automounter. It is much safer and much less error prone.
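
        If it helps, a minimal autofs setup looks roughly like this (server name, mount points, and options are examples only):

          # /etc/auto.master -- keys under /nfs come from the indirect map below
          /nfs  /etc/auto.nfs  --timeout=300

          # /etc/auto.nfs -- each entry is mounted on first access, unmounted when idle
          home      -fstype=nfs,soft  nfsserver:/export/home
          rpmpatch  -fstype=nfs,ro    nfsserver:/export/rpmpatch

        Because nothing is mounted until somebody actually touches /nfs/home, the boot order of the NFS server and its clients stops mattering.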

        Back in the day (early 1990s), if we had to reboot the data center (which happened for maintenance, etc.), we had to remember the particular order in which to reboot systems. We always had to bring the NFS server up first so the NFS clients could mount it. Otherwise, a client would come up first, fail to mount the server, and now part of your filesystem would be unavailable.

        Then we had the

        • My introduction to UNIX was SCO on a 386 in '87, so I might outrank you. In any case, I have two mounts among thirty servers, and I don't think the automounter is worth my time. I need to worry about /home and /rpmpatchdir. I ripped out the HP-UX automounter when I started my current job, and it was more trouble than it was worth even then. All of my NFS mounts are set noauto, and I'm essentially using pdsh to control their mount status, roughly as sketched below. Look at Linux Journal on 11/1 for more details.
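
          (The pdsh part is nothing fancy; the hostnames and range below are made up:)

            # mount both noauto NFS filesystems across all thirty boxes
            pdsh -w server[01-30] 'mount /home; mount /rpmpatchdir'
            # see which boxes currently have them mounted, grouped by identical output
            pdsh -w server[01-30] 'mount -t nfs' | dshbak -c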
    • Comparing it to Internet Explorer? Why don't you just start an open source project to give it some competition, like Firefox did to IE?
      • by hey00 ( 5046921 )

        Comparing it to Internet Explorer? Why don't you just start an open source project to give it some competition, like Firefox did to IE?

        You are dishonest and you know it, but on the off chance you actually think you are honest, let me break it down for you:

        SysV, Runit, OpenRC, Upstart, etc. There is competition, but the playing field is not even.

        Since those have already been dropped by most distributions (and since the ease of supporting systemd was a primary factor for its adoption by most distributions), there is absolutely no way that a traditional init system will be supported by a distribution currently supporting systemd.

        I could make

        • Systemd does a number of important things. It lets software installations drop in service files and start them immediately. It lets services run as non-root users, where inittab does not. It lets services run chroot()ed, where inittab does not. It provides a (bad) interface to inotify/cron/socket/nspawn/etc., where inittab does not. To argue for inittab is to argue for a straitjacket - that, in a nutshell, is the case for systemd.
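
          For what it's worth, the non-root and chroot pieces are just two lines in a unit file. This is a made-up example (service name, user, and paths are invented), not anything shipping anywhere:

            # /etc/systemd/system/example-app.service  (hypothetical)
            [Unit]
            Description=Example daemon running unprivileged in a chroot
            After=network.target

            [Service]
            User=appuser
            Group=appuser
            RootDirectory=/srv/example-chroot
            # Note: with RootDirectory= set, this path is resolved inside the chroot
            ExecStart=/usr/bin/example-daemon --foreground
            Restart=on-failure

            [Install]
            WantedBy=multi-user.target

          Drop the file in, run systemctl daemon-reload, and systemctl start example-app brings it up right away - which is the "drop in service files and start them immediately" part.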
        • but the playing field is not even.

          The playing field is exactly as even as a Luxembourg/Germany world cup qualifying match.

          • by hey00 ( 5046921 )

            but the playing field is not even.

            The playing field is exactly as even as a Luxembourg/Germany world cup qualifying match.

            No. The playing field is as even as if Luxembourg decided to play a new game of football-curling, and FIFA, the referees, the broadcasters, etc. decided to support and cover only that new football-curling.

            Germany is still good at playing football, but that doesn't matter if no one with power organizes football tournaments or broadcasts football matches, and instead shows only football-curling, where there is only one team.

            The state of init is the same. Systemd decided to do things radically differently (which isn't an issu

  • Missing the point (Score:4, Insightful)

    by MadTinfoilHatter ( 940931 ) on Monday October 30, 2017 @05:31AM (#55456491)

    "Any change like systemd is going to disruptive."

    And that's where he completely misses the point. In the UNIX world, swapping out one component for another that does the same thing should be like swapping out a Lego (tm) brick for a different-colored one. It doesn't have to be disruptive, and if it is, YOU'RE FUCKING DOING IT WRONG!!!

    • by AmiMoJo ( 196126 )

      If nothing was disruptive there would be no progress. Part of the reason Windows was so buggy and crap was all the legacy compatibility stuff in there.

    • by gosand ( 234100 )

      "Any change like systemd is going to disruptive."

      And that's where he completely misses the point. In the UNIX world, swapping out one component for another that does the same thing should be like swapping out a Lego (tm) brick for a different-colored one. It doesn't have to be disruptive, and if it is, YOU'RE FUCKING DOING IT WRONG!!!

      I haven't done extensive research on systemd, but I did notice stability issues with my distro of choice (Mint) when they switched to it. I can't cleanly shut down or restart my system. I have seen no advantages with it, only annoyances. But despite all of that, what I don't understand is the flippant attitude around systemd and distros, where systemd is referred to as "just another component". It's not! If it were, then I, or someone with much more skill than me who works on it for a living, would be abl

  • by rknop ( 240417 ) on Monday October 30, 2017 @05:37AM (#55456495) Homepage

    Also, since he links to the partially-good-argument, partially-prevarication "systemd Biggest Myths" article, it's also worth linking to this few-years-old "biggest fallacies" [blogspot.com] article.

    • by AmiMoJo ( 196126 )

      I think a lot of the technical arguments about systemd are really just because Poettering is an asshat and poisoned the well early on. Debian are mostly correct about it: it's basically a good idea, and while there are implementation issues from time to time, it's not like any other init system is perfect either.

      The real problem is that the developers are dismissive and don't always take good advice on board. Maybe that's partly because of all the hostility and every minor issue being turned into a huge drama,

      • Re: (Score:1, Insightful)

        by Anonymous Coward

        Poettering created the PulseAudio sound system, and that's pretty awful as well.

          • No it's not. PulseAudio is far better than what we had before. The ability to control audio volume per application, and the viewer that shows all inputs and outputs, is really, really good.
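
            If you've only ever seen it through a GUI mixer, the same per-application control is available from the command line too (the stream index 42 below is just whatever pactl reports for the app on your box):

              pactl list sink-inputs               # one "sink input" per playing application
              pactl set-sink-input-volume 42 75%   # change only that application's volume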
           

          • I guess software volume control is enough for the same kids who think lossy-compressed music over white iBuds is the beez kneez.
          • Where would it be "better"? It added a lot of complexity, along with its own new failures.

            The ability to control audio volume per application, and the viewer that shows all inputs and outputs, is really, really good.

            That is cherry-picking one thing and ignoring the rest. How about the systems where PulseAudio does not work, where the daemon does not start and cannot be started? Or the increasing complexity of keeping your system up to date - which systemd also pushed onto people. So, no - your comme
      • I think a lot of the technical arguments about systemd are really just because Poettering is an asshat and poisoned the well early on.

        This was never my problem. Poettering may be an asshat or not; I have no idea. What I do know is that he has been wrong numerous times and still isn't learning from his past mistakes. But this is also not the interesting part. Systemd itself is also not interesting. The, to me, MUCH more interesting part is how distributions adopted systemd without asking the use
  • You know, the whole time that meme was in force, I interpreted "petrified" as "physically turned to stone" - which made it harmless.

    But with the Harvey W. stuff that has finally been dragged to light, it seems that "petrified" was more likely to actually mean "frozen in place with fear"... which is just icky and creepy. Who'd wish that on anybody?

    Not me for sure.

    Anyway, Red Hat - I still have my RH 5 floppies. I transitioned to Ubuntu a few years ago, but I still have a soft spot for RH. You never forget your

  • Boy, I bet you wish you had backported Samba2 support to the 2.6 kernel.

  • My first calls usually start at 8 am as I'm driving to the office.

    Scum.
