For Slashdot's 20th anniversary -- and the 23rd anniversary of the first release of Red Hat Linux -- here's a special treat.
Red Hat CEO Jim Whitehurst has responded to questions submitted by Slashdot readers. Read on for his answers...
What is your day like?
JW: I can tell you this, no two days are the same. Broadly speaking, I strive to prioritize time with customers, partners, and Red Hat associates above other meetings.
When I'm in town, my day starts at 5:30 am with a run. I'll scan email and the news during breakfast and take my kids to school. My first calls usually start at 8 am as I'm driving to the office. Today for instance, I'll meet with a few members of our Corporate Leadership team. I'll then sit down with our chief technologist to hear what's happening in the Office of the CTO.
I usually grab lunch around 11:30 am. I tend to bring my lunch, but will occasionally head to our cafeteria for a sandwich or salad. In the afternoon, I'll get briefed on my schedule for some upcoming events, which will include meetings with partners, customer panels, press, and analysts. I usually spend a few hours a day responding to emails and coordinating activity through email. I try to get home by 6 pm to eat dinner with my family and spend time with my kids. I'll usually jump back on email once everyone is asleep before knocking out around 10 pm.
The plans for CentOS?
By Anonymous Coward
Now that CentOS has received a more official status in the Red Hat world, what are the plans for the project?
JW: The ecosystem around Red Hat Enterprise Linux is sprawling and complex, and that's one of our strengths. You have midnight hobbyists working together with multinational corporations. You have people working on GPU hardware, and you have people working on Ruby apps. Some want the latest-and-greatest, and some want to keep everything exactly the same for years and years. So lots of different kinds of people are doing lots of different kinds of work, and all of them are contributing to this massive project called "Red Hat Enterprise Linux". It's not surprising that we can't accommodate all of that innovation in a single project.
That's one of the reasons we split Fedora and Red Hat Enterprise Linux: we freed up Fedora to be innovative and move quickly, which freed Red Hat Enterprise Linux to be more careful, more conservative, and handle the very important and difficult work of stability and security for code that upstream communities have long since moved past. Fifteen years later, we're still very happy with how that's worked out, and Fedora remains a thriving engine for new ideas that make their way down into Red Hat Enterprise Linux and many other projects.
CentOS solves a very different problem for us. First, there are some people we can't serve with Red Hat Enterprise Linux today but who still want to participate in the Red Hat ecosystem. Folks using Xen, for example, may not be able to run today's Red Hat Enterprise Linux, but they can absolutely work with the CentOS project and still participate in the broader ecosystem. Second, there are people and partners who are building software that needs a more stable, Red Hat Enterprise Linux-like lifecycle but who want to experiment at the kernel level, something that would be impossible for us to support in Red Hat Enterprise Linux. Open vSwitch and DPDK are a perfect example of this, and the CentOS SIG process has served them really well. They can do all the things they need to do in development and with their partner communities, and their innovations still pass from the upstream communities into Fedora, and ultimately into Red Hat Enterprise Linux, Red Hat Virtualization, and OpenStack.
Meanwhile, changes in hardware and software are changing how we think about a traditional operating system distribution. Things are more automated, hardware is moving faster and less predictably, and containers force us to differentiate between bringing up hardware and creating a stable platform for applications. To address all of these changes, Red Hat is going to need every element of our ecosystem -- Fedora, CentOS, and Red Hat Enterprise Linux -- to respond.
As I understand it, one of the stated goals was to speed up boot times. It's had exactly the opposite effect on my Ubuntu system -- that is, when the boot doesn't die altogether when I try to mount NFS shares. (Also, thanks to systemd, I can't even *reboot* or shut down the machine when there's a hung NFS process. I am forced to hard-reset it.)
For years, warning flags have been raised about systemd. It more or less seems that we're bringing all the disadvantages of the Windows architecture to Linux, without any of the advantages of running Windows.
So, again: systemd, wtf???
JW: We had a lot of systemd questions, so I am replying to them all collectively.
My question is related: is Red Hat, as an organization, at all concerned about the damage that systemd has done to Linux's usability, its reputation, and its community? Is Red Hat concerned with how systemd has driven so many Linux users to FreeBSD?
And a follow-up: why not spend some of Red Hat's money on a sane init system?
I'm sure you can put a few dollars and bright minds on a system that works reliably. The last thing I want my embedded system to do is get hung up on an init failure.
This raises an obvious question, so I'll just ask it: Have any customers ever moved away from Red Hat because of systemd?
JW: First, allow me to address why Red Hat adopted and invested in systemd, as that helps answer many of the other questions. Traditional init systems, like System V init, served the UNIX and Linux communities well for decades, but that is a long time, and it is not surprising that they have their limitations. The problems an init system needs to solve today are different from the ones traditional init systems were solving in the '70s, '80s, and even the '90s.
Red Hat considered many available options and even used Canonical's Upstart for Red Hat Enterprise Linux 6. Ultimately we chose systemd because its architecture best provides the extensibility, simplicity, scalability, and well-defined interfaces needed to address the problems we see today and foresee in the future. Despite all the passionate debates and disagreements, the fact remains that systemd has become, on its own merits, the cornerstone of nearly every major Linux distribution.
Any change like systemd is going to be disruptive. We understand that many were not happy with this change, and we appreciate the passion of the community. The continued growth and adoption of Red Hat Enterprise Linux, as well as of other systemd-based distributions, tells us that most users have embraced systemd and that there was not a large exodus to FreeBSD or other alternatives. We partner with the largest embedded vendors in the world, particularly in the telecom and automotive industries, where stability and reliability are the number one concern. They adapted to systemd easily.
We see that new users (both those new to Linux and prior SysV init users) who truly take the time to learn systemd embrace the simplicity of its interface and its capabilities. We also hear that, for a new user, it is no more difficult to learn than the complexities of init and rc scripts. It's simply different.
The Debian community provides a thorough, independent evaluation of the systemd init system debate. Additionally, the systemd developers maintain a list of the biggest myths around systemd.
There are some real advantages, too. Because systemd tracks processes at the service level, daemons can be properly killed rather than trusted to do the right thing. This also makes it easy to use cgroups to configure SLAs for CPU, memory, and so on. Likewise, security with SELinux and sandboxing becomes much simpler. And the dependency resolution between services is a significant improvement over the sequential ordering of the init rc script mechanism.
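To make that concrete, here is a minimal sketch of a unit file showing both the cgroup-backed resource controls and the declarative dependency ordering described above. The service name, binary path, and limits are all hypothetical:

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=Example service illustrating systemd dependencies
# Declarative ordering: start only after the network is up
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp
# systemd keeps every process the service spawns in one cgroup,
# so stopping the unit reliably kills all of its children
KillMode=control-group
# cgroup-backed resource limits: an SLA in two lines
CPUQuota=50%
MemoryMax=512M
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With a unit like this, `systemctl status myapp` shows the service's entire cgroup process tree. (On cgroup v1 systems, `MemoryLimit=` plays the role of `MemoryMax=`.)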
Looking forward to all of the exciting innovation taking place around large cloud scalability, OpenStack, Kubernetes, and Containers, we see continued integration and innovation with systemd that would either not be possible or very difficult with init based systems.
So we'll continue to invest in systemd, as it meets our customers' expectations around capabilities, stability, maturity, and community momentum. There's no realistic alternative today that comes close in terms of adoption and functionality. That said, we're always watching how projects and communities evolve, and in that way systemd is no different from any other component we ship.
Lastly, I wouldn't dare to debug anyone's setup here, but mounting NFS at boot time is notoriously problematic if you do not have highly available NFS servers. This problem existed before systemd, and I think it's much safer to use autofs to mount those volumes on demand, or to use other mount options such as nofail or nobootwait. It is best not to blame systemd for issues that also affect init or that are misconfigurations. Ironically, systemd provides more troubleshooting and debugging options than init, so that might be helpful to you.
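For anyone hitting the boot-hang symptom described above, a sketch of what such an fstab entry might look like (the server name and paths are made up for illustration):

```
# /etc/fstab -- hypothetical NFS entry
# nofail:                   boot continues even if the mount fails
# x-systemd.automount:      mount lazily, on first access, not at boot
# x-systemd.mount-timeout:  give up after 30s instead of hanging
nfs01.example.com:/export/data  /mnt/data  nfs  nofail,x-systemd.automount,x-systemd.mount-timeout=30,_netdev  0  0
```

With `x-systemd.automount`, systemd places an automount unit on `/mnt/data`, so an unreachable NFS server delays only the first process that touches the path, rather than the whole boot or shutdown.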
Why isn't Linux on the desktop more widespread?
I'm curious about your thoughts on why Linux hasn't grabbed more laptop/desktop market share from Windows and macOS over the years. It seems that with the privacy concerns around Windows 10 and Apple's lack of focus on macOS, there may be a huge opportunity in the near future. What needs to happen in the consumer marketplace and within the OSS community for it to really take off? Can 2017 be the year of the Linux desktop?
Why not have a consumer desktop?
Given Ubuntu's success at providing a stable, developed and popular desktop environment for non-technical consumer users, why doesn't Red Hat provide the same thing? Why is that right for Ubuntu but not Red Hat?
Red Hat is big and getting bigger. Where are you heading at the moment? Would Red Hat ever try to move into the more consumer-focused places where Ubuntu has ventured, or is that just not profitable enough?
Why does GNOME have such an unusable UI?
by Anonymous Coward
GNOME is effectively a Red Hat project, given the number of people and the funding it gets from Red Hat. So why does GNOME have such an unusable UI, particularly for the major audience of your products? The UI makes basic tasks such as switching between windows a chore unless you install shell extensions, which break frequently and cause instability.
Proprietary driver support
Many proprietary hardware vendors continue not to take the Linux desktop and workstation markets seriously. Recall, e.g., Linus's rant against NVIDIA. As a leader in the Linux and FOSS communities, what will you do to persuade major vendors to write and maintain functional drivers for Red Hat Enterprise Linux and Fedora?
JW: We also had a lot of great questions on the Linux desktop - let me try to answer collectively:
A functioning, useful desktop is obviously critical to the success of the Linux community. A nice GUI makes Linux more accessible and approachable, and that's why we continue to make investments in projects like GNOME, Wayland, and nouveau. Everyone benefits from improvements in this area, so let's call that the baseline. The primary driver for that work is in Fedora, and I was really glad to see such great reviews of F25. If you haven't tried Fedora in a while, now's a good time to jump in. Personally, I love it.
Of course, one of the perils of the desktop is that "desktop success" is so specific to each individual, since everyone has their own opinion about what a desktop should or must do. That means that even when we think about our "baseline" investment in the Linux desktop, someone's going to be disappointed. What's worse, it's very difficult to make money on a "baseline," since it's something that people just expect to have in the first place. Nevertheless, we spend a lot of time and money on getting these projects right because it is so important to the broader community and the success of our own products.
There's another category of desktop; let's call it the "enterprise desktop". This category requires features that just don't come naturally through a community, and those features need some additional investment. The "enterprise desktop" customers who pay for a Linux desktop want that same functioning, useful "baseline" desktop, of course. They also want things like enterprise management features, security tools, compliance tools, identity management, and even simple things, like having the windowing system scale correctly when it runs in a VM on Windows.
You've probably already read my comments on the future of the desktop, and you know that I think the "enterprise desktop" market is changing dramatically. You can see this in how Microsoft has changed their own strategy. Among other things, tablets and phones are far more important than they were just five years ago. We don't think of the software on tablets and phones as part of our core business, so we've left that space alone. But their influence is still there, so the "enterprise desktop" features people are willing to pay for have changed, and that has an influence on how we invest our resources.
There's a third category, the "technical workstation". These are power users with domain-specific applications: 3D visualization, animation, fluid dynamics simulations, things like that. They naturally gravitate to Linux because that's where the tools and research that make them successful originate. We've had great success in that space, and we continue to make investments here.
How do you monetize Open Source?
What would you recommend to somebody who feels they have a great application idea and are probably ready to go for angel/first round funding but feels that the application should be Open Source?
Do you put in customization/support as the way to fund the endeavor long term or is there another approach for the OSS conscious entrepreneur?
JW: Open sourcing an idea is great because you will be able to innovate faster with the community than you would by going it alone. There are many, many open source startups doing exciting things, and many with VC backing. So, there is clearly a path for the OSS conscious entrepreneur. Red Hat chose a subscription model for our business; others have gone the customization/open core route. We believe in an upstream first development model, so open core/customization does not work for us. But, there are certainly many successful open source companies that use this model, and the true answer here is that there are likely a lot of variables depending on what your app is focused on.
Most importantly, recognize that the value of the open source development model lies in user participation. So building a business model around open source starts with a clear, deliberate strategy for getting others with different perspectives and expertise involved in writing the code. If you don't have others actively involved in writing the code, it's hard to get the leverage you need for an open source model to work.
Building a new community is hard. We've started a few at Red Hat, but most of the time we look for existing projects that already have robust communities. Where a robust community exists, open source always wins. From a business model perspective, recognize that you can't sell the value of the functionality, because the functionality is free. So think hard about how you add value around that functionality. For Red Hat products, it's typically a combination of commitment to a defined life-cycle for the bits, downstream certifications and ecosystem, the ability to drive upstream roadmaps to meet our customers' needs, and support.
What is Red Hat's current commitment to open source for 2017? Red Hat may be the most profitable software company that endorses open source for its products. What is your recommendation for other companies that want to be profitable and at the same time remain good open source citizens?
JW: Red Hat's commitment to open source has never wavered. We are committed to having a 100% open source product portfolio, with an upstream first development model. This means that we do our work to get features integrated into open source projects before we integrate them into Red Hat products. Dave Neary from Red Hat's Open Source and Standards team wrote a good blog post about this approach. And we have followed through on this commitment even with the technologies we acquire -- something I think is pretty unique to Red Hat. In the last few months, we've open-sourced Ansible Tower and Codenvy.
My recommendation to other companies: contribute. In the last few years, we've seen a lot of new voices championing open source. That's great to see, even when it's your competitors. Faster innovation and more choice is always a good thing. But, open source is a commitment, not a buzz phrase. Companies that want to be good open source citizens need to walk the walk. Another must-read on Red Hat's commitment here is this blog post from Paul Cormier.
Building a strong company
Red Hat has distinguished itself through its commitment to open source and its ability to remain profitable.
Mike Olson famously said "you can't build a successful stand-alone company purely on open source." He argues that you cannot scale an open source model that does not rely on selling proprietary components because it is too easy for competitors to undercut a vendor's services offerings when they don't have to pay for R&D.
How do you feel about that assessment? Is Red Hat's success impossible to replicate by other open source companies?
JW: First off, let me say that Mike is a great guy. I've known him for many years, since I first joined Red Hat. And I want to applaud him for his work in driving Cloudera to where it is today. I'm thrilled to see their success. But in regards to open source business models, we've agreed to disagree.
I'd argue that Red Hat is a successful company by many metrics, built purely on open source. My contention is that too few open source companies follow the Red Hat model. I don't want to overly bash open core models. Some will be successful, but competitively, I'd argue that there's no faster way to innovate at scale than through open source communities. We've said before that half open is still half closed. I think it's too easy for early adopters to find workarounds to open core offerings, which can hurt a business when it moves past the early adopter phase.
I refer to this a bit earlier in the Q&A, but the important thing to remember in an open source business model is that YOU CAN'T SELL FUNCTIONALITY because it is available for free. If you just think about functionality, then Mike is probably right - you need to add proprietary code that you can sell. But implementing a piece of software in an enterprise context is about so much more than the functionality.
Red Hat is successful because we obsess about finding ways to add value around the code for each of our products. We think of ourselves as helping make open source innovation easily consumable for enterprise customers. Just one example: for all of our products, we focus on life-cycle. Open source is a great development model, but its "release early, release often" style makes implementing it in production difficult. One important role we play in Linux is backporting bug fixes and security updates in supported kernels for over a decade, all while never breaking ABI compatibility. That has huge value for enterprises running long-lived applications. We go through this type of process for every project we choose to productize, to determine how we add value beyond the source code.
I would agree that this type of business model won't work across every technology category. At Red Hat, we look very deeply at the categories we've expanded into to ask ourselves whether our model can be effective and make an impact in a given space.
What advice do you have for building a sustainable business, especially one that is driven by open source values?
JW: Start off by reading a couple of answers above. To summarize:
1. Start (or find) an open source project that truly benefits from broad participation, and work to build (or become involved in) that project. Projects where participation benefits the quality and innovation of the code are inherently advantaged over proprietary code. So you can check the first box - a technology that is superior to competitors'.
2. Identify how you can uniquely add value to that technology in ways that transcend the code. This is what I talk about above. The code is free. It's better because of your and others' contributions. But those are freely given and free to use, and therefore very hard to monetize. Focus on how customers might implement the technology. For Red Hat, we like layers in the stack that are run-times, where enterprises will likely want long-lived support. We also like layers where hardware touches software, because there is huge value in standardization and certifications, which are attached not to the code but to the products that we rigorously test and build joint support mechanisms for with the hardware vendors. If you identify this, you are well on your way - you have a project that is superior to competitors' and a vehicle to uniquely add value to that project in your product.
3. Surround yourself with like-minded, passionate people. Culture always trumps strategy. That's a short paraphrase of a famous quote. Companies too often fail because of internal strife, ethical failings, or simply losing their way. I know that startups have to begin with a product and business model, but durable success happens via people working together to make it a reality. And that's all about culture and leadership.
Recruiting open source talent
As Red Hat has scaled, it has had to staff up with all types of non-technical business professionals. How do you help these professionals learn to "sell free software"? Has it been difficult to train them on the open source business model?
JW: I think that anyone can pretty easily put themselves in our customers' shoes and understand the benefits of open source. For one, no one wants to feel locked into a proprietary solution or data format. We all want choice and flexibility, and open source is a great way to enable that.
For another, everybody wants access to rapidly innovating technology that helps solve their business problems, and our model gives them the ability to consume the latest and greatest technology, but in a way that's stable and secure for the enterprise.
And finally, everybody's experienced the frustration of having something in their car break and not having access to fix it. It seems like many companies deliberately make it difficult for their customers to tinker with or improve their products. Open source is the exact opposite -- we welcome people to take a look under the hood, see how things work or why they're broken, and roll up their sleeves to contribute if they want to make it better. All in all, it's a pretty simple and compelling value proposition that even someone brand new to our company can understand.
So who wins in a "code off" ?
Jim Whitehurst, Mark Shuttleworth, Tim Cook, Larry Page, or Satya Nadella.
JW: That's a tough one, but I think I could at least compete! I wasn't new to Linux when I joined Red Hat. I'm actually working toward my Red Hat Certified System Administrator (RHCSA) certification now. It's not an easy certification to get - if I'm successful, I'll hopefully have proven my chops. I can compile a Linux kernel and kernel modules, and I can build pretty decent apps. Though OpenShift makes building apps so easy, I'm not sure that's a huge distinction. (Note: Shameless plug!)
But the actual answer to your question is Linus Torvalds. He really should be on that list!
A long term view on IoT security?
Are there any plans or products to help with IoT security?
Red Hat is one of the few companies that can step in and do something about device security, even when device makers have little to no interest in the topic; to them, security has no ROI, or as one IoT company exec told me, "the only person that has ever made money from a padlock is the lock maker."
Being able to lure IoT vendors into using secure tools wouldn't just benefit them; it would benefit the Internet as a whole. Even something like manifest lists that interact with firewalld to ensure a device can only communicate with authorized devices, and cannot take input/output from rogue sources, would improve the IoT ecosystem tremendously.
JW: We are already helping with IoT security, indirectly. Open source and Linux power nearly every IoT device that exists. This is an example of open source winning; you can't escape its reach any longer. That said, Red Hat has always been a substantial contributor to open source projects, and security is always a part of this collaboration. We were doing security before security was cool.
Rather than putting a focus on individual IoT devices, our focus is on the open source ecosystem as a whole. This is an instance where a rising tide lifts all boats. The goal is not to help a single device or vendor, but to work on features that will affect the entire industry. By focusing on improving security in the kernel, the compiler, glibc, the libraries, even the graphical user interfaces, we are helping build the future of IoT device security. IoT is changing the rules and perception around security. There is a lot of opportunity to get IoT security right, which means we have to focus on getting open source security right. We all win, or we all lose, when it comes to IoT security.
OpenStack vs AWS
How can we improve the future of OpenStack? The dominance of Amazon has challenged the relevance of well funded players like Microsoft, Google, and IBM. How can OpenStack compete? The network effects around a dominant cloud platform threaten to relegate OpenStack to be a long term niche player, like Linux on the desktop. How can we avoid this fate?
JW: The most important thing is that the hybrid cloud is real, and it's increasingly part of the dialog we have with users and customers. Cloud isn't either-or. You can have a multi-cloud deployment where you use OpenStack for some workloads and AWS for others. We consistently hear from our customers and users that they are in public clouds like AWS *and* in their on-premise cloud deployments. The public cloud providers are all great partners of ours, and I view OpenStack as a technology complementary to them.
As corporate IT loads shift to public clouds...
by Anonymous Coward
...does this marginalize the role of operating system vendors? I would imagine that most AWS customers would lean on Amazon for technical support rather than Red Hat.
JW: On the contrary, the emergence of public cloud has made the operating system even more relevant. There are several reasons why:
The first is application mobility. The vast majority of customers I speak with plan to use more than one public cloud, so portability becomes a major requirement. And since the OS is where the application ultimately touches computing resources, having an OS that runs consistently across all major platforms becomes even more important. As with any single-platform provider, optimizations for provider-unique hardware, architectures, or services may address specific situations in the OS, but we have all seen how that played out in the single-source, vertically integrated Unix stacks - hence Linux. So we remain dogged in working with all our cloud, hardware, and software partners to ensure that RHEL (and all our products) enable as many platforms as possible, reinforcing customer choice and application mobility.
Second, much of the value we provide in Linux is around life-cycle. We commit to a decade-plus life-cycle of patching and support for RHEL. That allows enterprises to confidently run long-lived applications on RHEL, and it requires a massive engineering investment in skills, tools, and processes. I suppose others (like public clouds) could ultimately choose to do that, but it's a very different business from the one they are in today, and I'm not sure why they would choose it over the many other areas of opportunity that more closely match their current capabilities.
Finally, new application models like containers and microservices are bringing the operating system to the forefront. Every container carries its Linux user-space dependencies inside it, and therefore requires management of those components regardless of where the container runs. As the leading Linux vendor and as a leader in many of the projects around containers, Red Hat is uniquely positioned to help customers as they build and deploy containers on public clouds or on premise.
Product vs Engineering
Thank you for answering our questions! How do you view top-down product driven development vs bottom-up engineering driven development? Are there situations where one excels vs the other?
JW: To be honest, I'm not sure I'm the right person to answer that question. I've had the great fortune of having a very strong engineering leadership team at Red Hat, so I have allowed them to drive how we engage with communities and build our products.
In a broad sense, Red Hat does a bit of both. Our business model is built from the project out to the product, because we so strongly believe in the power of user driven innovation. So I guess you could say that we are more bottom-up engineering developed. But a big part of our value is taking customer needs and driving those into upstream projects so that they end up in our products. So we really are a hybrid.
Puppet versus Ansible?
Where do you see the configuration management market going in the next year or two?
JW: First things first, it's interesting to note that Ansible started as an orchestration platform that also happens to do configuration management.
Orchestration is the hot topic in automation right now, versus last year's configuration management tools. Ansible is more orchestration than configuration management; Puppet and Chef require tools like MCollective to pick up the orchestration piece. Red Hat now runs Tower, and Tower now ships as part of the Red Hat Ceph Storage product. Red Hat's Satellite product is based on Foreman, which includes Salt, Puppet, Chef, and Ansible support.
But where is this market heading? Are we likely to see consolidation? Integrations? Or even a flood of config management system tied products from vendors?
JW: Orchestration isn't a natural capability of many of the other tools on the market, but if you think about it, the ability to orchestrate configurations is really pretty critical. As it turns out, the order in which you provision IT applications and environments is really, really important. And Ansible handles this by design.
That being said, we have a number of customers that use other configuration management platforms like Puppet and Chef; they use Ansible to deploy and manage agents, and then to orchestrate application deployments by deploying configurations as defined by those other tools. So it's easily a "yes, and" story, not an "either-or".
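As a sketch of the orchestration point above, an Ansible playbook runs its plays in order, so cross-tier sequencing is expressed directly. The group names, package names, and the playbook itself are hypothetical:

```yaml
# site.yml -- hypothetical playbook for illustration.
# Plays run top to bottom, so the database tier is fully
# configured before any app server touches it.
- hosts: dbservers
  become: yes
  tasks:
    - name: Install PostgreSQL
      yum:
        name: postgresql-server
        state: present
    - name: Ensure the database service is running
      service:
        name: postgresql
        state: started

- hosts: appservers
  become: yes
  serial: 2          # rolling update: two app servers at a time
  tasks:
    - name: Deploy the application package
      yum:
        name: myapp
        state: latest
      notify: restart myapp
  handlers:
    - name: restart myapp
      service:
        name: myapp
        state: restarted
```

The `serial` keyword is what makes rolling updates an ordering decision rather than an afterthought; agent-based configuration management tools typically converge each node independently, which is why they need a separate orchestration layer for this kind of sequencing.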
Then we have Ansible Tower -- in fact, Red Hat was a paying Ansible Tower customer before we acquired Ansible. Tower helps organizations operationalize automation across all their teams and IT environments in ways other tools cannot easily match. It's also key to plumbing automation into DevOps workflows.
There is some consolidation possible, but there's still a lot of market adoption to be had. We come across customers every day that have never used any configuration management solution at scale. That's a problem for companies that want to scale, because running workloads in the cloud or in containers is nearly impossible without a mature automation and configuration management posture. So while some consolidation is possible, there's still a lot of growth out there. As for configuration management being tied to vendors, I suspect you'll continue to see other organizations mirror our hybrid approach here. An IT org juggling deployments both on-premises and in the cloud needs tools that work just as well in either location. That is a particular strength of tools like Ansible.
Are there plans to tighten Ansible integration?
We use and love Ansible, but it still seems to be a separate product. Are there plans to integrate it more? Having it as an integrated deployment option for JBoss Operations Network (JON) would be good.
JW: When we acquired Ansible, we knew we had to be careful not to immediately crush them with all of our scaling requirements. At this point, roughly 18 months post-acquisition, we can say that the Ansible team is heavily engaged with nearly every Red Hat product team. So whether you're talking about Red Hat Enterprise Linux, OpenStack, OpenShift Container Platform, Ceph Storage, CloudForms, Insights, or many of our other offerings, Ansible is either already integral to those offerings or is planned for an upcoming release. It's an important piece across our portfolio.
Specific to JBoss and our middleware offerings, several of our consulting teams came together to create Ansible roles that ease the deployment and management of various JBoss offerings. And I think that illustrates perfectly what Ansible means to us -- even our services teams are engaging in the Ansible community and getting involved. That's a testament both to what Ansible enables customers to do and to the love that so many teams across Red Hat have for it.
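Consuming a role like the ones described above might look something like this sketch (the role name, inventory group, and variables here are made up for illustration, not the actual consulting deliverables):

```yaml
# Hypothetical playbook applying a JBoss EAP role.
# "example.jboss_eap", the "middleware" group, and the vars are placeholders.
- name: Deploy JBoss EAP via an Ansible role
  hosts: middleware
  become: true
  roles:
    - role: example.jboss_eap
      vars:
        eap_version: "7.0"
        eap_bind_address: 0.0.0.0
```

Packaging the deployment logic as a role means the same playbook can be reused across environments by swapping inventory and variables.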
If meritocracy over democracy...
If meritocracy over democracy is his choice, who decides what is "merit"?
JW: Great question. One thing that's important to me is that we continually question how well we apply the principle of meritocracy in practice. In general, we try to define our business goals and the problems we're trying to solve in clear and objective terms, so that it's obvious to everyone what the best and most feasible ideas are. You can get a feel for what kinds of information and detail we share internally by checking out the Open Decision Framework, which is a collection of our best practices for making open and inclusive decisions. We think of meritocracy as a leadership behavior, and you can see how we define it here. (PS: You can also find it on GitHub under a Creative Commons license.)
In practice at Red Hat, people with a long history of contribution and good ideas build their reputations as people to be listened to. It's not a perfect process, but because it is a "multi-round game" with reputations built over many interactions, it's a pretty good way for informal leaders to emerge.
Enterprise Desktop Market / Emerging / Demand
I am running more than 250 Linux desktops at my company and could run even more, but there are no centralized management solutions for that, which is an issue for customization and security too. The KDE desktop is good up to a point, with its ability to enforce strict configuration files and immutable options; that covers about 25% of what we get with Microsoft and Group Policy Objects, and we find that a little effort is required to make things work.
Can we expect that Red Hat will enter that market in the nearest (3-4 Y) future?
JW: I appreciate the feedback and idea for a new market for us. Let me take that feedback back to our desktop team. I really can't talk about future product plans in this venue, but I'll make sure they know that you see an opportunity here.
My question is about the Red Hat Certified Architect exams. Red Hat's new subscription-based training is very good and we are very happy with it. But when it comes to the RHCA, it is limited by location.
RHCA-level exams are very expensive, and travel and accommodations make them more expensive still. I am a 2x RHCE only because those exams are available in my location; Baku, Azerbaijan -- and the Middle East and Caucasus generally -- has no center where one can take the exam. Please take this into consideration. VMware, Cisco, Microsoft, AWS, and OpenStack make their exams available everywhere online, so it is easy for everyone to take them. Why does an open source company limit people's passions by location?
I believe that I, and people like me, could become multi-level RHCAs if we got the chance to take the exams in our own locations. This would also help the recognition and value of Red Hat in these regions. Please make this available for us, as Cisco does. At least make it possible at a kiosk in Georgia or Azerbaijan so we can take the exams too. I am from Baku, Azerbaijan. With love to the best open source company in the world.
JW: We recognize the need to reach people who are interested in certification throughout the world. We are constantly expanding our global testing options and increasing the number of ways we offer testing for our certifications, including adding secure, preconfigured kiosks and laptops through our Red Hat Training Partners.
Red Hat Enterprise Linux is too static to keep pace with kernel devel.
I have found that Red Hat Enterprise Linux is too stagnant/static to keep pace with the rate at which the kernel is now developed. The 3.10 kernel is four years old at this point, and the fact that Red Hat Enterprise Linux 7 will be in production support until 2024 is disheartening, because the enterprise industry will be a decade behind the latest kernel developments and updates from associated projects. Compared to other vendors' Linux offerings, when I use Red Hat Enterprise Linux I get the same feelings I got when I was forced to use AIX, HP-UX, and Solaris. I hated administering those products because they were stuck with decade-old defaults like ksh.
My question is, would Red Hat ever consider releasing a Linux distribution with a shorter development cycle and with more aggressive tracking of upstream projects? I see a place for a distribution that is somewhere in between Red Hat Enterprise Linux and Fedora. Perhaps you could morph or fork CentOS into the upstream development for Red Hat Enterprise Linux? For example: Upstream --> Fedora (Bleeding Edge) --> CentOS (Next Release of Red Hat Enterprise Linux) --> Red Hat Enterprise Linux. This would give system engineers and architects a greater range of products to choose from, and it could help stabilize Red Hat Enterprise Linux even more than it already is.
In short, the Linux kernel is the largest and the fastest moving software project in the world, so what changes are you going to make to keep up with it?
The Price of Reliability
I've worked on SunOS, Solaris, MacOS, Red Hat, CentOS, and, more recently, Ubuntu. CIOs choose Red Hat mainly for support and reliability. Reliability is the word that comes to most engineers' minds when the RH and CentOS operating systems are mentioned (certainly for good reasons). That reliability relies mainly on using older kernels and features that have been patched over and over; sure, that works, reliability-wise. But on a number of recent projects comparing Ubuntu Server and RH/CentOS, setting services up (e.g. Samba) was far easier on the newer Ubuntu releases than on the latest RH/CentOS (not to mention the many issues migrating from 6 to 7). Also, using newer kernels, Ubuntu performs well, taking advantage of the newest internals (memory management and sharing, IPC, etc.) with no specific reliability issues (IMHO, reliability-wise, Ubuntu and the like are as solid as RH nowadays).
Question: in 2017, does reliability still mean using long-tested, but older kernels and features?
JW: There's a common misperception that Red Hat Enterprise Linux pulls a Fedora kernel and stays on it for 10+ years while the world moves on to newer kernel versions with better features and newer hardware. It's true that we standardize on a specific kernel version for the life of a major release, but that's not the whole story.
Our stability is actually in our kernel ABI (kABI), which is a promise of stability that a kernel developer can rely on for the life of a Red Hat Enterprise Linux release. When we release a major version of Red Hat Enterprise Linux, we backport many key features, bugfixes, and more from newer kernels, and we do it in a surgical way that not only delivers new features and hardware support on an older kernel, but also preserves the kABI. For example, Red Hat Enterprise Linux 6 was based on the 2.6.32 kernel, but when we released it, it already carried an additional 2,600 patches (features, hardware enablement, bugfixes, CVE fixes), and this continues for the development life of the release. The stats for Red Hat Enterprise Linux 7 are similar. This provides the balance between stability and innovation that customers expect. We also have a Driver Update Program (DUP) that makes it easy to add kernel drivers prior to the public availability of the next minor release.
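A quick way to see this backporting in practice on a Red Hat Enterprise Linux machine (the exact version string shown is illustrative, and these commands assume an RPM-based system):

```shell
# The reported kernel version stays at 3.10.0 for the life of RHEL 7,
# but the build number after the dash climbs with every errata release:
uname -r          # e.g. 3.10.0-693.el7.x86_64

# The RPM changelog records the individual backported fixes and features:
rpm -q --changelog kernel | less

# Count how many changelog entries mention CVE fixes:
rpm -q --changelog kernel | grep -c CVE
```

The changelog, rather than the version number, is where the kernel's real history on a given release lives.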
So don't take the kernel version at face value -- we spend a lot of time backporting newer kernel features into every major release! If you want the latest and greatest and don't care about kABI, ABI and long-term stability, Fedora is ready for you.
Enterprise customers continue to expect stability, security, scalability, and reliability, but they also want higher levels of automation and ease of use, multiple deployment methods (bare metal, virtualization, containers, and cloud), and new features as they appear upstream, along with the hardware support and userspace tools that help exploit them. If we sense critical new features going upstream that will break kABI and can't be backported, we will plan accordingly.