Linux Needs Resource Management For Complex Workloads
storagedude writes: Resource management and allocation for complex workloads have long been needed in open systems, but no one has ever followed through on making open systems look and behave like an IBM mainframe, writes Henry Newman at Enterprise Storage Forum. Throwing more hardware at the problem is a costly solution that won't work forever, he notes.
Newman writes: "With next-generation technology like non-volatile memories and PCIe SSDs, there are going to be more resources in addition to the CPU that need to be scheduled to make sure everything fits in memory and does not overflow. I think the time has come for Linux – and likely other operating systems – to develop a more robust framework that can address the needs of future hardware and meet the requirements for scheduling resources. This framework is not going to be easy to develop, but it is needed by everything from databases and MapReduce to simple web queries."
From the "is it 2005? department" (Score:3)
That generation has been going on for a while, storagedude. People have been scaling according to load to deal with it.
Re: (Score:1)
That generation has been going on for a while, storagedude. People have been scaling according to load to deal with it.
He just woke up from a coma, you insensitive clod.
Re: (Score:2)
Re: (Score:2)
Fusion-io's ioDrive has been around since 2007. It's been in regular use for those who need it - like 4k video editing.
The original 7-year-old drive is still faster than any SATA SSD you can find today.
Re: (Score:3)
Re: (Score:2)
Yeah, but how many people were editing 4k video in 2007? I'm sure the 3 people at the time weren't worrying about scheduling their Fusion ioDrives across workloads, either, just pounding them into submission. Wider adoption usually means mixed workloads where scheduling scarce resources matters more and is more complicated.
FWIW I don't know if I agree with the article premise -- it seems like most of these resource scheduling decisions/monitoring/adjustments are being made in hypervisors now (think VMware
Re: (Score:2)
OCZ seems to have been selling them via retail outlets for three years or more - never mind high-end use.
There were various PCI cards before the PCIe interface came into use.
Re: (Score:2)
IBM has DIMMs with flash memory already.
www-03.ibm.com/systems/x/options/storage/solidstate/exflashdimm/
Re: (Score:2)
This belongs in the cluster manager (Score:5, Informative)
That level of control probably belongs at the cluster management level. We need to do less in the OS, not more. For big data centers, images are loaded into virtual machines, network switches are configured to create a software defined network, connections are made between storage servers and compute nodes, and then the job runs. None of this is managed at the single-machine OS level.
With some VM system like Xen managing the hardware on each machine, the client OS can be minimal. It doesn't need drivers, users, accounts, file systems, etc. If you're running in an Amazon AWS instance, at least 90% of Linux is just dead weight. Job management runs on some other machine that's managing the server farm.
Re: (Score:3)
Re: (Score:3)
If you're running in an Amazon AWS instance, at least 90% of Linux is just dead weight
Which 90% would that be, and in what way would it be dead weight? If you don't mind my asking.
Re:This belongs in the cluster manager (Score:5, Interesting)
Yes and no.
No, large (Linux-using) companies like Google, Facebook and Twitter have always used some kind of Linux container solution, not virtualization.
Yes, policy is controlled by the cluster manager.
But, for example, Google uses nested cgroups for implementing those policies, controlling resources/priorities on their hosts.
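As a rough illustration (not Google's actual setup - this sketch assumes a cgroup v1 mount at /sys/fs/cgroup, root privileges, and made-up group names), carving out a nested group and capping a job might look like:

    import os

    # sketch only: nested directories under the cpu/memory controllers are nested cgroups
    CPU_ROOT = "/sys/fs/cgroup/cpu"
    MEM_ROOT = "/sys/fs/cgroup/memory"

    def make_cgroup(root, *parts):
        path = os.path.join(root, *parts)
        os.makedirs(path, exist_ok=True)   # creating the directory creates the cgroup
        return path

    def limit_job(job, pid, cpu_shares=256, mem_bytes=512 * 1024 * 1024):
        cpu = make_cgroup(CPU_ROOT, "batch", job)
        mem = make_cgroup(MEM_ROOT, "batch", job)
        with open(os.path.join(cpu, "cpu.shares"), "w") as f:
            f.write(str(cpu_shares))           # relative CPU weight for this group
        with open(os.path.join(mem, "memory.limit_in_bytes"), "w") as f:
            f.write(str(mem_bytes))            # hard memory cap for this group
        for path in (cpu, mem):
            with open(os.path.join(path, "tasks"), "w") as f:
                f.write(str(pid))              # move the task into the group

    # limit_job("report-gen", 12345)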
Virtualization is very inefficient, and Docker/Linux containers are a perfect example of how people are starting to see that again:
https://www.youtube.com/watch?... [youtube.com] / https://www.youtube.com/watch?... [youtube.com]
Supposedly, CPU utilization on AWS is very low, maybe even only 7%:
http://huanliu.wordpress.com/2... [wordpress.com]
The reason for that is that VMs get allocated resources they never end up using, because the host kernel/hypervisor doesn't know what the VM (kernel) is going to do or need.
For its own services Google doesn't use VMs, but it does offer VMs to customers, and to control the resources used by a VM they run the VM inside a container.
Here are some talks Google did at DockerCon that mention some of the details of how they work:
https://www.youtube.com/watch?... [youtube.com]
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Linux Cgroups (Score:4, Informative)
Is this not what Linux Cgroups is for?
From wikipedia (http://en.m.wikipedia.org/wiki/Cgroups):
cgroups (abbreviated from control groups) is a Linux kernel feature to limit, account, and isolate resource usage (CPU, memory, disk I/O, etc.) of process groups.
From what I understand, LXC is built on top of Cgroups.
I understand the article is talking about "mainframe"- or "cloud"-like build-outs, but for the most part what he is talking about is already coming together with cgroups.
Re: (Score:2, Informative)
the article is not about "mainframe" or "cloud"... it is "advertising" for IBM... a company in the middle of multi-billion-dollar deals with Apple, all the while fighting to remain even slightly relevant.
IBM has the magic solution to finally allow the world to run simple web queries.
FUCK OFF
Re:Linux Cgroups are a good subset of this (Score:4, Informative)
Look better it's already there (Score:2)
Re: (Score:1)
KVM, Xen and other hypervisors make Linux systems look like IBM mainframes. The whole "Virtual Machine" hype, where we have guest operating systems running on hypervisors, is just like IBM's z Series.
IBM had the System Resource Manager back in the 1980s, when "z/OS" was still OS/MVS.
More recently, Solaris had resource tuning features, although in my experience people preferred throwing cheap hardware at resource consumption over having tuning specialists or tuning-aware system operations.
The recent addition of cgroups to Linux means that it also has the potential to become tunable in terms of business goals, but again the question is, are people going to pay for the required expertise or are t
Vista got this (Score:1)
Is this real or fantasy? (Score:4, Interesting)
I read the article and I can't tell if this is a real problem that is really affecting thousands of users and companies, or a fantasy that the author wrote up in 30 minutes after having a discussion with an old IBM engineer.
Sure, IBM has all this resource prioritization in mainframes because mainframes cost a lot of money. Nowadays, hardware is so cheap you don't have to do all that stuff.
If some young programmer undertook the challenge and created the framework, would anyone use it and test it? Will there be an actual need for something like this?
My point is: is this insider information about what is really going on in cutting-edge usage of Linux, or just some smoke being blown around for an obligatory write-up?
Re: (Score:2)
These resources are all being managed today: there are already priorities for CPU, QoS for network bandwidth, ionice and quotas for storage, and so on, with a lot of specialization in each. He wants to build some kind of comprehensive resource management framework where everything from CPU time, memory, storage, network bandwidth etc. is being prioritized. It sounds extremely academic to me, particularly when I read the line:
I will make the assumption that everything at every level is monitored and tracked (...)
Besides, resource management isn't something that happens only on this level, for exa
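To make the "already managed piecemeal" point concrete, here is a purely illustrative wrapper (my own sketch, not something from the article): CPU priority and block-I/O priority are today set with separate, unrelated knobs.

    import subprocess

    # illustrative only: "nice" handles CPU scheduling priority, "ionice" handles
    # block-I/O priority, and nothing ties the two together into one policy.
    def run_low_priority(cmd):
        # CPU niceness 10; disk best-effort class (-c 2) at lowest priority (-n 7)
        wrapped = ["nice", "-n", "10", "ionice", "-c", "2", "-n", "7"] + cmd
        return subprocess.call(wrapped)

    # run_low_priority(["tar", "czf", "backup.tgz", "/var/log"])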
Re: (Score:1)
Ha, your SQL server scenario is similar to one I've heard from IBM engineers (and IBM fellows) but with a priority inversion twist that requires SLAs and monitoring. That periodic consolidated report can become a nightmare when it finally grows to take longer than one period to complete! Enterprises come crashing down when these overlooked/implied invariants get violated. Eventually, increasing the job priority won't even work because it will squeeze out all the line of business workload, and what you re
Re: (Score:1)
Nowadays, hardware is so cheap you don't have to do all that stuff.
Instead of spending a bit of those resources to allocate the rest with good efficiency, the standing assumption is that resources are effectively free anyway and so wasting them with gay abandon is worth it. This is the assumption, but it's not really true.
At sufficient scale even the smallest cost becomes non-negligible. This isn't just for the few of us who write "truly web-scale" or whatever the term is today. Even in something as simple as an end-user application like, oh, a video player, "saving" progr
Straw Proposals? (Score:1)
I thought the title promised something revolutionary, so I read through the details.
What I discovered was that the title was bullshit, and so were the concerns surrounding Linux's capabilities. Some of them make sense for general all-purpose computation; some of them don't. I don't see why anybody should take these proposals too seriously for kernel inclusion.
The portion on primary memory management is perfect. Hadoop does suffer from a lack of cache-aware code; so far, only modified kernels have bee
complex application example (Score:5, Insightful)
i am running into exactly this problem on my current contract. here is the scenario:
* UDP traffic (an external requirement that cannot be influenced) comes in
* the UDP traffic contains multiple data packets (call them "jobs") each of which requires minimal decoding and processing
* each "job" must be farmed out to *multiple* scripts (for example, 15 is not unreasonable)
* the responses from each job running on each script must be collated then post-processed.
so there is a huge fan-out where jobs (approximately 60 bytes) are coming in at a rate of 1,000 to 2,000 per second; those are being multiplied up by a factor of 15 (to 15,000 to 30,000 per second, each taking very little time in and of themselves), and the responses - all 15 to 30 thousand - must be in-order before being post-processed.
so, the first implementation is in a single process, and we just about achieve the target of 1,000 jobs per second, but only with about 10 scripts per job.
anything _above_ that rate and the UDP buffers overflow and there is no way to know if the data has been dropped. the data is *not* repeated, and there is no back-communication channel.
the second implementation uses a parallel dispatcher. i went through half a dozen different implementations.
the first ones used threads, semaphores through python's multiprocessing.Pipe implementation. the performance was beyond dreadful, it was deeply alarming. after a few seconds performance would drop to zero. strace investigations showed that at heavy load the OS call futex was maxed out near 100%.
next came replacement of multiprocessing.Pipe with unix socket pairs and threads with processes, so as to regain proper control over signals, sending of data and so on. early variants of that would run absolutely fine up to some arbitrary limit, then performance would plummet to around 1% or less, sometimes remaining there and sometimes recovering.
next came replacement of select with epoll, and the addition of edge-triggered events. after considerable bug-fixing a reliable implementation was created. testing began, and the CPU load slowly cranked up towards the maximum possible across all 4 cores.
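for the curious, a stripped-down sketch of that edge-triggered epoll receive loop (the port and buffer sizes below are placeholders, not the real values):

    import select
    import socket

    # sketch only: non-blocking UDP socket registered edge-triggered with epoll
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9999))        # placeholder port
    sock.setblocking(False)

    ep = select.epoll()
    ep.register(sock.fileno(), select.EPOLLIN | select.EPOLLET)

    def drain(s):
        # with edge-triggered events you must read until EAGAIN or you stall
        jobs = []
        while True:
            try:
                data, _addr = s.recvfrom(2048)
            except BlockingIOError:
                return jobs
            jobs.append(data)

    while True:
        for fd, events in ep.poll():
            if events & select.EPOLLIN:
                batch = drain(sock)
                # decode each ~60-byte job and hand it off to a worker process here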
the performance metrics came out *WORSE* than the single-process variant. investigations began and showed a number of things:
1) even though it is 60 bytes per job, the pre-processing required to make the decision about which process to send the job to was so great that the dispatcher process was becoming severely overloaded
2) each process was spending approximately 5 to 10% of its time doing actual work and NINETY PERCENT of its time waiting in epoll for incoming work.
this is unlike any other "normal" client-server architecture i've ever seen before. it is much more like the mainframe "job processing" that the article describes, and the linux OS simply cannot cope.
i would have used POSIX shared memory Queues but the implementation sucks: it is not possible to identify the shared memory blocks after they have been created so that they may be deleted. i checked the linux kernel source: there is no "directory listing" function supplied and i have no idea how you would even mount the IPC subsystem in order to list what's been created, anyway.
i gave serious consideration to using the python LMDB bindings because they provide an easy API on top of memory-mapped shared memory with copy-on-write semantics. early attempts at that gave dreadful performance: i have not investigated fully why that is: it _should_ work extremely well because of the copy-on-write semantics.
we also gave serious consideration to just taking a file, memory-mapping it and then appending job data to it, then using the mmap'd file for spin-locking to indicate when the job is being processed.
after all of these crazy implementations i basically have absolutely no confidence in the linux kernel or the GNU/Linux POSIX-compliant implementation of the OS on top - i have no confidence that it can handle the load.
so i would be very interested to hear from anyone who has had to design similar architectures, and how they dealt with it.
Re: (Score:2)
Try putting a load balancer (Cisco ACE, Citrix NetScaler) on a virtual IP and load balancing the UDP packets across several nodes behind the balancer.
Re: (Score:2)
Re:complex application example (Score:5, Insightful)
> the first ones used threads, semaphores through python's multiprocessing.Pipe implementation.
I stopped reading when I came across this.
Honestly - why are people trying to do things that need guarantees with python?
The fact you have strict timing guarantees means you should be using a realtime kernel and realtime threads with a dedicated network card and dedicated processes on IRQs for that card.
Taking the incoming messages from UDP and posting them on a message bus should be step one, so that you don't lose them.
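For illustration only, a minimal pyzmq sketch of that hand-off (the endpoint, port and the choice of zeromq's PUSH/PULL pattern are my assumptions, not a prescription):

    import zmq

    # sketch: the receiver PUSHes raw jobs to workers over a local PULL endpoint
    ctx = zmq.Context()
    push = ctx.socket(zmq.PUSH)
    push.bind("tcp://127.0.0.1:5557")

    def hand_off(job_bytes):
        try:
            # non-blocking, so the receiver itself never stalls on a slow consumer
            push.send(job_bytes, zmq.NOBLOCK)
        except zmq.Again:
            pass  # queue full: count the drop rather than blocking the receiver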
Re:complex application example (Score:5, Informative)
> the first ones used threads, semaphores through python's multiprocessing.Pipe implementation.
I stopped reading when I came across this.
Honestly - why are people trying to do things that need guarantees with python?
because we have an extremely limited amount of time as an additional requirement, and we can always rewrite critical portions or later the entire application in c once we have delivered a working system, which means that the client can get some money in and can therefore stay in business.
also i worked with david and we benchmarked python-lmdb after adding in support for looped sequential "append" mode and got a staggering performance metric of 900,000 100-byte key/value pairs per second, and a sequential read performance of 2.5 MILLION records per second. the equivalent c benchmark is only around double those numbers. we don't *need* the dramatic performance increase that c would bring if right now, at this exact phase of the project, we are targeting something that is 1/10th to 1/5th the performance of c.
so if we want to provide the client with a product *at all*, we go with python.
but one thing that i haven't pointed out is that i am an experienced linux python and c programmer, having been the lead developer of samba tng from 1997 to 2000. i simply transferred all of the tricks that i know involving while-loops around non-blocking sockets and so on over to python. ... and none of them helped. if you get 0.5% of the required performance in python, it's so far off the mark that you know something is drastically wrong. converting the exact same program to c is not going to help.
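(for reference, the append-mode write loop we benchmarked looked roughly like the sketch below - the path, map size and record count are made-up placeholders:)

    import lmdb

    # rough sketch only. append=True lets lmdb skip the page search because the
    # keys arrive in sorted order.
    env = lmdb.open("/tmp/bench.lmdb", map_size=2 * 1024 ** 3)
    with env.begin(write=True) as txn:
        for i in range(1000000):
            key = b"%016d" % i                      # monotonically increasing keys
            txn.put(key, b"x" * 100, append=True)   # ~100-byte values, as in the benchmark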
The fact you have strict timing guarantees means you should be using a realtime kernel and realtime threads with a dedicated network card and dedicated processes on IRQs for that card.
we don't have anything like that [strict timing guarantees] - not for the data itself. the data comes in on a 15-second delay (from the external source that we do not have control over) so a few extra seconds of delay is not going to hurt.
so although we need the real-time response to handle the incoming data, we _don't_ need the real-time capability beyond that point.
Taking the incoming messages from UDP and posting them on a message bus should be step one, so that you don't lose them.
.... you know, i think this is extremely sensible advice (which i have heard from other sources) so it is good to have that confirmed... my concerns are as follows:
questions:
* how do you then ensure that the process receiving the incoming UDP messages is high enough priority to make sure that the packets are definitely, definitely received?
* what support from the linux kernel is there to ensure that this happens?
* is there a system call which makes sure that data received on a UDP socket *guarantees* that the process receiving it is woken up as an absolute priority over and above all else?
* the message queue destination has to have locking otherwise it will be corrupted. what happens if the message queue that you wish to send the UDP packet to is locked by a *lower* priority process?
* what support in the linux kernel is there to get the lower priority process to have its priority temporarily increased until it lets go of the message queue on which the higher-priority task is critically dependent?
this is exactly the kind of thing that is entirely missing from the linux kernel. temporary automatic re-prioritisation was something that was added to solaris by sun microsystems quite some time ago.
to the best of my knowledge the linux kernel has absolutely no support for these kinds of very important re-prioritisation requirements.
Re:complex application example (Score:5, Informative)
First - the problem with python is that because it's a VM you've got a whole lot of baggage in that process out of your control (mutexes, mallocs, stalls for housekeeping).
Basically you've got a strict timing guarantee dictated by the fact that you have incoming UDP packets you can't afford to drop.
As such, you need a process sat on that incoming socket that doesn't block and can't be interrupted.
The way you do that is to use a realtime kernel and dedicate a CPU using process affinity to a realtime receiver thread. Make sure that the only IRQ interrupt mapped to that CPU is the dedicated network card. (Note: I say realtime receiver thread, but in fact it's just a high priority callback down stack from the IRQ interrupt).
This realtime receiver thread should be a "complete" realtime thread - no malloc, no mutexes. Passing messages out of these realtime threads should be done via non-blocking ring buffers to high (regular) priority threads who are in charge of posting to something like zeromq.
Depending on your deadlines, you can make it fully non-blocking but you'll need to dedicate a CPU to spin lock checking that ring buffer for new messages. Second option is that you calculate your upper bound on ring buffer fill and poll it every now and then. You can use semaphores to signal between the threads but you'll need to make that other thread realtime too to avoid a possible priority inversion situation.
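As a minimal sketch of the pinning part (this assumes root, Linux, Python 3.3+ for the os.sched_* calls, and that CPU 3 has been isolated for this purpose, e.g. via isolcpus and IRQ affinity):

    import os
    import socket

    os.sched_setaffinity(0, {3})                                  # run only on CPU 3
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))   # realtime scheduling class

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9999))    # placeholder port

    while True:
        data, _ = sock.recvfrom(2048)
        # push the raw datagram into a non-blocking ring buffer / queue here;
        # do no decoding, allocation-heavy work or locking in this loop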
> how do you then ensure that the process receiving the incoming UDP messages is high enough priority to make sure that the packets are definitely, definitely received
As mentioned, dedicate a CPU mask everything else off from it and make the IRQ point to it.
> what support from the linux kernel is there to ensure that this happens
With a realtime thread the only other thing that could interrupt it would be another realtime priority thread - but you should make sure that situation doesn't occur.
> is there a system call which makes sure that data received on a UDP socket *guarantees* that the process receiving it is woken up as an absolute priority over and above all else
Yes, IRQ mapping to the dedicated CPU with a realtime receiver thread.
> the message queue destination has to have locking otherwise it will be corrupted. what happens if the message queue that you wish to send the UDP packet to is locked by a *lower* priority process
You might get away with having the realtime receiver thread do the zeromq message push (for example) but the "real" way to do this would be lock-free ring buffers and another thread being the consumer of that.
> what support in the linux kernel is there to get the lower priority process to have its priority temporarily increased until it lets go of the message queue on which the higher-priority task is critically dependent
You want to avoid this. Use lockfree structures for correctness - or you may discover that having the realtime receiver thread do the post is "good enough" for your message volumes.
> to the best of my knowledge the linux kernel has absolutely no support for these kinds of very important re-prioritisation requirements
No offense, but Linux has support for this kind of scenario, you're just a little confused about how you go about it. Priority inversion means you don't want to do it this way on _any_ operating system, not just Linux.
Re: (Score:2)
hi mr thinly-sliced, thank you this is awesome advice, really really appreciated.
Re: (Score:2)
You're welcome - I hope you get it sorted out.
The only other thing I'd mention - you perhaps noticed I kept saying "threads like.." and "with regular threads" because it's basically introduced a number of single points of failure. Due to the lack of back channel or retransmission, things can go silently wrong without notice (network cable failure etc). In an ideal world you'd double up on some of that infrastructure and networking.
I know you need to get something up and running, but it's perhaps something t
Re: complex application example (Score:1)
Re: (Score:2)
Given this problem, there are several options for fanout... I'm assuming that hardware can be added, so adding a load balancer and then three or four machines to cope with the load behind the load balancer might be the quickest (least code change) way to address the issue. Especially if there is no global state needed, this is likely the most expedient.
An option that might be a bit more flexible on a single box, while still scalable, would be to have a task that parses each incoming job and posts it to a
Re: (Score:2)
You'll need a bit of C, but consider using sched_setscheduler on the receiver process to make sure you get the packets before the buffer fills. That process can have a big buffer and keep a queue stuffed for the actual handling. Probably one thread to receive and one to stuff the queue will work.
The worker processes can remain as python processes at that point. As long as your queue is lossless and the workers are on average fast enough AND their jitter is smaller than your buffer in the high performance C
Re: (Score:2)
Honestly - why are people trying to do things that need guarantees with python?
Oh, you got that far at least? What I wonder is, why are people trying to do things that need guarantees using UDP with no back-communication, no redundancy built in to the protocol, and not even detection of lost packets? External requirement my ass, why do you accept a contract under those conditions? The correct thing to say is "this is broken, and it's not going to work". If they still want the turd polished, it should be und
Re: (Score:2)
FWIW I agree vis-a-vis using UDP for a business-critical thing. I'd want exemption from responsibility for any missed packets purely due to the infrastructure in between.
Re: (Score:2)
Totally agreed. The lack of guarantees re: UDP is built into the UDP spec; it's not a failing of the Linux kernel (nor any other OS) that it won't tell you about dropped packets. Luke, you should know better than this.
Re: (Score:2)
Re: (Score:2)
Of course it's technically possible to transmit packets with essentially 0% loss, and I'm sure there are set-ups that would work under the right circumstances. That's not the point. The point is that each and every component involved, from hardware through firmware to software, is designed under the premiss that it is okay to drop a packet at any time for any reason, or to duplicate or reorder packets. Even if you get it to work, the replacement of any single component, or the triggering of some corner case
Re: (Score:2)
The point is that each and every component involved, from hardware through firmware to software, is designed under the premiss that it is okay to drop a packet at any time for any reason, or to duplicate or reorder packets.
That entire sentence is damn near a lie. Those issues can happen, but they shouldn't happen. You almost have to go out of your way to make those situations happen. Dropping a packet should NEVER happen except when going past line rate. Packets should NEVER be duplicated or reordered except in the case of a misconfiguration of a network. Networks are FIFO and they don't just duplicate packets for the fun of it.
As for error rates, many high-end network devices can achieve error rates of 10E-18 or better, which pu
Re: (Score:2)
Absolutely. Soooo doomed. You cannot even guarantee that the UDP packets get across the wire to your NIC, so what difference does it make whether your software gets them all out of the NIC
Re: (Score:2)
Re: (Score:2)
Honestly - why are people trying to do things that need guarantees with python?
Because they don't actually know how to do what they claim the requirements are, and they refuse to turn it over to someone who does.
I'd have thought that was pretty clear. Trying to do real-time work in python made it clear to me.
Re: (Score:2)
a) Your UDP buffers probably suck. Out of the box, RedHat gives you 128K, and each packet takes up 2304 bytes of buffer space. Try 100MB, or whatever YOUR_RATE/2304 works out to (see the sketch after this list).
b) Pull off the queue and buffer in RAM as fast as you can
c) Have a second thread read from RAM
d) Don't invoke scripts to process each packet; you're spinning all your time in process creation. In fact, don't use interpreted scripts at all.
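Regarding (a), a minimal sketch of requesting the bigger buffer (the port is a placeholder; note that the kernel silently caps the request at net.core.rmem_max, so that sysctl has to be raised as well):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 100 * 1024 * 1024)
    print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))   # what you actually got
    sock.bind(("0.0.0.0", 9999))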
Re: (Score:2)
Interesting. It sounds a bit like an application I have.
Like yours, it involves UDP and Python.
I have 150,000 "jobs" per second arriving in UDP packets. "Job" data can be between 10 and 1400 bytes and as many "jobs" are packed into each UDP packet as possible.
I use Python because, intermixed with the high performance job processing, I also mix slow but complex control sequences (and I'd rather cut my wrists than move all that to C/C++).
But to achieve good performance, I had to reduce Python's contribution to
Re: (Score:2)
Not being able to ack important message packets seems like a design
This is a job for QNX (Score:2)
Consider trying QNX, the message-passing real-time OS, for this. This is a message-passing problem, and Linux doesn't do message passing well. QNX has a scheduler optimized for message passing. You should be able to handle the UDP front end and fan-out without any problems. You can give the front-end process a higher priority than the other processes, which should let you get all the UDP packets into the fan-out program without losing any. That's what real-time OSs are for.
Trying to do anything high-per
Re: (Score:2)
> the first ones used threads, semaphores through python's multiprocessing.Pipe implementation. the performance was beyond dreadful, it was deeply alarming. after a few seconds performance would drop to zero. strace investigations showed that at heavy load the OS call futex was maxed out near 100%.
uhhm... wait what?
You are aware that python has a global interpreter lock [python.org], right? And because of that, multi-threaded performance in python is actually *worse* [dabeaz.com] than single-threaded? But this is an inherent flaw in
Re: (Score:2)
You need to make larger batches.
1) UDP/Job comes in; write it to a single-writer/many-reader queue (large circular queues can be good for this) along with an order number, maybe a 64-bit incrementing integer. If the run time per job is quite constant, then you could use several single-reader/writer queues a
Re: (Score:1)
...If you're trying to achieve maximum performance I'm wondering why you're coding with python...
That was my Daily WTF too
1) even though it is 60 bytes per job, the pre-processing required to make the decision about which process to send the job to was so great that the dispatcher process was becoming severely overloaded
So the OP is using 1 thread, even though each incoming UDP packet can be "pre-processed" in embarrassingly parallel fashion? The main issue I see with the OP's design is that each UDP packet is being worked on by at least 17 threads/processes:
1) the dispatcher (pre-processing) thread
2) all the ## "scripts" (OP said 15)
3) then the "post-processing" thread
That is a hell of a lot of inter-process (or inter-thread) communi
Re: (Score:2)
* the UDP traffic contains multiple data packets (call them "jobs") each of which requires minimal decoding and processing
anything _above_ that rate and the UDP buffers overflow and there is no way to know if the data has been dropped. the data is *not* repeated, and there is no back-communication channel.
How are you planning on handling UDP checksum errors without a backchannel or EC? The physical ethernet layer is lossy, so you're screwed even before the packet hits the NIC.
Lossy?
I just logged into my switch at home and it has 146 days of uptime with 20,154,030,043 frames processed and 0 frame errors. I can even run iperf at 1Gb/s in each direction, for a total of just under 2Gb/s at once, and have 0 packets dropped.
Let the network group worry about QoS. But yes, errors will eventually happen; they're just very rare. And when they do happen, it's probably pathological and you'll get a lot of them. Still, I wouldn't go so far as to say "the physical ethernet layer is lossy" as a general statemen
disk, memory access and cpu usage (Score:2)
Weren't they added in Linux 0.01 around 1991?
Where is the market demand? (Score:2)
There is a solution that does this: it's called a mainframe, and they're hideously expensive. Cooked a motherboard recently? 1.2 million. Want a 10G network card? $20,000. Now you can buy an awful lot of commodity hardware for much cheaper, so that you have excess resources: need a dedicated system for a database, buy one; run the other applications on a shared resource; you'll still end up with spare change if you dump a mainframe contract. You can replace a mainframe with commodity items, you just need to plan for it
Re: (Score:2)
Linux ALREADY has it! (Score:3)
Linux has cgroups support, which allows you to partition a machine into multiple hierarchical containers. Memory and CPU partitioning works well, so it's easy to give only a certain percentage of CPU, RAM and/or swap to a specific set of tasks. Direct disk IO is getting into shape.
Lots of people are using cgroups in production at very large scales. There are still some gaps and inconsistencies around the edges (for example, buffered IO bandwidth can't be metered) but kernel developers are working on fixing them.
_why_ can't we keep throwing hardware at it? (Score:2)
Moore's Law speaks to computational horsepower per unit cost. But even if the computational abilities do not continue to increase, the costs will keep coming down.
Hardware is cheap. It's not an elegant solution, but it's cheap. And getting cheaper.
Focus on the UX, because without that, who cares what your kernel can do? Machines are plenty powerful enough; what you want to do is get your OS into the hands of the most users possible .... right?
Re: (Score:2)
Hardware is cheap. It's not an elegant solution, but it's cheap. And getting cheaper.
Right, but if your company comes up with an elegant solution that gets 10x better performance out of a given piece of hardware, and your competitors cannot (or do not) do the same, then you've got a cost advantage over your competitors and can use that to get customers to choose to buy your product rather than theirs.
That will always be true, no matter how fast and cheap the hardware gets. Either your customers will be able to do 10 times more work with your product, or (if there isn't 10 times more work t
tuned (Score:2)
I don't have hard data yet, but I'm finding that EL7 is much much faster than EL6 on the same hardware for the workloads I've tried so far.
I don't know that tuned [fedorahosted.org] is most responsible, but I can see that it's running and that's what it's supposed to do.
I realize that the kernel is better and perhaps XFS helps, but those alone seem insufficient to realize the difference.
Anyway, it's somewhat along the direction people are talking about, even if only minimally.
I'll volunteer... (Score:2)
... but no one has ever followed through on making open systems look and behave like an IBM mainframe, ...
But I'll need a punch-card station and reader, build out my server room with a glass service window, hire a disinterested, snarky guy to retrieve printouts ... Or have IBM mainframes changed since my college days back in the late '80s?
Unnecessary micromanagement. (Score:2)
Not going to say that task management over the greater picture is a bad idea, but you have to make it more coarse (per server, approximations) rather than fine if one is still to be able to effectively use many of Linux's performance improvements over IBM mainframe approaches. Mind, I've built a couple of systems like that for proprietary infrastr
Re:This obsession with everything in RAM needs to (Score:5, Insightful)
I know you're afraid of the garbage collector, but it won't bite. I promise.
Yes, it will. It's not common, but it happens - and when it happens, it's nasty. Pretty nasty.
But not so nasty as micromanaging the memory by myself, so I keep licking my wounds and moving on with it.
(but sometimes it would be nice to have fine control over it)
Re: (Score:2)
Re:This obsession with everything in RAM needs to (Score:4, Insightful)
Garbage collector with no overhead, hmm? Easy peasy with no satanic complexity I suppose. And of course no obnoxious corner cases. Equivalently in engineering, when your bridge won't stay up you just add a sky hook. Easy.
Re: (Score:3)
Not sure what you're getting at, but the Azul collector is well known for pulling off apparently magical GC performance. They do it with a lot of very clever computer science that involves, amongst other things, modifications to the kernel. I believe they also used to use custom chips with extended instruction sets designed to interop well with their custom JVM. Not sure if they still do that. The result is that they can do things like GC a 20 gigabyte heap in a handful of milliseconds. GC doesn't have to s
Re: (Score:3)
I believe they also used to use custom chips with extended instruction sets designed to interop well with their custom JVM. Not sure if they still do that.
I could've sworn I'd read that they'd stopped with their hardware work, but I think I was wrong: Appendix A of this page [infoq.com] gives the impression (though I can't see it explicitly stated) that they're still doing custom hardware, but their software will work on ordinary Intel/AMD chips as well.
GC doesn't have to suck.
Indeed. It's Sturgeon's Law [wikipedia.org], but I think the '90%' part might be too low in this case. Major interpreters/'VMs' - even the ones with optimised native-code compilation - have awful GCs. Up until quite recently, Mono was us [mono-project.com]
Re: (Score:3)
And yes, a garbage collector with zero overhead. Who would have thought? Well, pretty much anyone in the know, I guess.
MARK / RELEASE from the Pascal days used to work pretty well - this is the lowest-overhead "garbage collector" possible.
It's impossible to have a Garbage Collector without some kind of overhead - all you can do is try to move the overhead to a place where it's not noticed.
There's no such thing as Free Lunch.
Re: (Score:2)
If you're going to MARK/RELEASE why not malloc/free? Same goes for languages like Java - if you have to null a reference for it to get collected, how is that different from free() or delete? It's still a line of code you have to remember to put in your program at the right place.
For two reasons:
1) It's easier to MARK the heap at the beginning of the task, use it as if there's no tomorrow, and then just RELEASE everything at once at the end (nothing prevents you from deleting some pointers mid-job to save memory).
2) You avoid HEAP fragmentation, easing the memory manager's life.
Anyway, it appears to me that you missed the point. I was criticizing the pretense of a "no overhead garbage collector" from Azul.
Re: (Score:1)
Why not map everything in RAM? These days even Windows gives every process 128 terabytes of address space. TERA BYTES.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: This obsession with everything in RAM needs to (Score:2)
Garbage collection necessarily wastes memory by a factor of 1.5 to 2.
The collection itself also slows down the program, and in some languages cannot even happen asynchronously.
Finally, the most important aspect for program performance is locality and memory layout, something you cannot even optimize for in a language where every object is a pointer to some memory on a garbage-collected heap.
Re: (Score:2, Insightful)
Garbage collection necessarily wastes memory by a factor of 1.5 to 2.
And manual memory management on a similar scale wastes CPU time. And the techniques that alleviate one also tend to help the other, or not?
Finally, the most important aspect for program performance is locality and memory layout, something you cannot even optimize for in a language where every object is a pointer to some memory on a garbage-collected heap.
There's not a dichotomy here. Oberon and Go are garbage collected without everything being a heap pointer.
Re: This obsession with everything in RAM needs to (Score:5, Funny)
Boobs.
Re: (Score:1)
So, rounded corners then.
Re: (Score:2)
wow, who knew boobs could be so controversial
Re: This obsession with everything in RAM needs to, posted to Linux Needs Resource Management For Complex Workloads, has been moderated Insightful (+1).
It is currently scored Normal (2).
Re: This obsession with everything in RAM needs to, posted to Linux Needs Resource Management For Complex Workloads, has been moderated Informative (+1).
It is currently scored Insightful (3).
Re: This obsession with everything in RAM needs to, posted to Linux Needs
Re: (Score:1)
Cloud???
Isn't that a mainframe connected over the internet with dumbed-down terminals, which require little complexity because the real complexity is located at a central point?
To clarify, cloud services act as the modern equivalent of the classic mainframe, and the communication channels between the core system and the terminals have changed.
Re: (Score:2)
Re:mainframe is old crap for geezers (Score:4, Informative)
On the contrary, if you can increase the performance of each node by 2x with 100,000 nodes, you've just saved 50,000 of them.
That's a pretty big cost saving.
The larger the installation, the more important resource management is. If you need to add more nodes, not only do you need to buy them, increase network capacity and power them; you also need to increase your cooling capacity and floor space. Your failure rate goes up too. The higher the failure rate, the more staff you need to replace things.
Re: (Score:3)
Re: (Score:2)
or you do both.
Re: (Score:2)
Re: (Score:2)
a) when was the last time you saw a single threaded node?
b) it was obviously an illustrative example. don't be a dick.
Re: (Score:1)
Re: (Score:2, Informative)
Yeah - the sky is the limit!!!
Use your Microsoft cloud capabilities without hesitation....
This message was brought to you by your friendly NSA...
Re: (Score:1)
Are you being intentionally obtuse? It would seem so, but sometimes it's hard to tell on /.
Re: (Score:2)
Re: (Score:1)
because maintaining lists of blocks and having algorithms to coalesce them and flush to disk from time to time sounds simple, but is actually very complicated, almost as complicated as the rest of the driver. It is basically implementing garbage collection in a disk driver, which introduces all sorts of asynchrony and plays havoc with latencies. Love doing that sort of thing in kernel space, no? The spec is fine, but doing that sort of thing in a driver is asking too much. It should be done in user spa
Re: (Score:2)
Re: (Score:2)
right smack dab between the previous two(!) existing standards in size
That reminds me of the (rejected) compromise that suggested that we index arrays starting with 0.5. :)
Re: (Score:2)
Yes. I just answered a call on my Samsung S3 server a little while ago in fact. I also watched some TV on my Comcast Server Set-top box. I'm thinking you either don't know very much about Linux, or what a server is.
Re: (Score:2)
> That's so painfully true because Linux still has choppy playback of Flash/HTML5 video on low-performance hardware. It still is mostly a server OS (a very good one though).
It's bullshit because EVERY platform has choppy playback of Flash video on low-performance hardware. It's a feature of how lame Flash is. It has nothing to do with Linux.
Low performance hardware will happily decode much more interesting video so long as the coders in question have bothered to hook into relevant "shortcuts".
Adobe can't
Re: (Score:1)
Re: (Score:3)
2% may be the desktop share for Linux, but when it comes to servers and handheld devices like Android it's a different story.
Re: (Score:1)
I mean, 2^64 could address all atoms in the solar system.
False. It could almost address all atoms in a milligram of matter, though.
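For the curious, the back-of-the-envelope check (assuming a milligram of carbon-12 as the reference material):

    # rough arithmetic only
    atoms_per_mg = 6.022e23 / 12 * 1e-3   # Avogadro / molar mass (g/mol) * mass (g), ~5.0e19
    addresses = 2 ** 64                   # ~1.8e19
    print(addresses / atoms_per_mg)       # ~0.37: 2^64 covers only about a third of them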
Re: (Score:2)