Linux 3.7 Released
The wait is over; diegocg writes "Linux kernel 3.7 has been released. This release adds support for the new ARM 64-bit architecture; ARM multiplatform (the ability to boot different ARM systems using a single kernel); support for cryptographically signed kernel modules; Btrfs support for disabling copy-on-write on a per-file basis using chattr; faster Btrfs fsync(); a new experimental 'perf trace' tool modeled after strace; server-side support for the TCP Fast Open feature; experimental SMBv2 protocol support; stable NFS 4.1 and parallel NFS; the vxlan tunneling protocol, which allows Layer 2 Ethernet packets to be transferred over UDP; and support for the Intel SMAP security feature. Many small features, new drivers, and fixes are also available. Here's the full list of changes."
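Of the headline features, server-side TCP Fast Open is one you can try from userspace in a couple of lines. A minimal sketch in Python, assuming a Linux kernel with the feature (the helper name, port handling, and queue length of 5 are arbitrary illustrative choices; `socket.TCP_FASTOPEN` is the real constant on Linux):

```python
import socket

def fastopen_listener(host="127.0.0.1", port=0, qlen=5):
    """Create a listening socket with TCP Fast Open enabled (Linux >= 3.7)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, port))
    # qlen bounds how many Fast Open requests may sit in the queue
    # before their three-way handshakes complete.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, qlen)
    s.listen(16)
    return s
```

Clients then need to send data with the SYN (sendto with MSG_FASTOPEN on Linux) to actually save the round trip.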
Improved SAMBA client support? (Score:5, Informative)
experimental SMBv2 protocol support;
This can't come soon enough for Linux clients. SAMBA already has SMBv2+ server-side support, with SAMBA 4 apparently even supporting SMB 3.0. It matters especially for high-latency connections through a VPN, where the reduced chattiness of the newer SMB protocols gives a nice performance bump.
You can post all day & all night about how NFS/CODA/GlusterFS/etc./etc. is better, but at the end of the day the CIFS protocols are supported by every Windows machine out there and should be supported by Linux too. Plus, if you are a free-software purist, you could set up a 100% GPL'd installation with SAMBA servers and Linux clients, so it would make total sense for the Linux clients to actually support the modern protocols.
Re:Improved SAMBA client support? (Score:4, Funny)
This is exactly what Linux was missing: Super Mario Brothers version 2.
Re: (Score:3)
the ending's disappointing, though...
Re: (Score:2)
Are you talking about "The Lost Levels" Mario 2, or "Doki-Doki Panic" Mario 2?
SMB2: The name sounds so dirty (Score:3)
Re:Improved SAMBA client support? (Score:5, Insightful)
Purists can also get Linux in the door at clients with Windows desktops; the basics of authentication, file sharing, and print sharing are enough for most small/medium businesses. I've done that a few times over the last five years. Clients are still happy, as the server just works, and they are adopting more Linux boxes, including some desktops.
Re: (Score:2)
Why do you state that CIFS "should be supported by Linux"? And why should I, as a *nix user, care about what Windows supports?
Integrate *nix with Windows (Score:2)
And why should I, as a *nix user, care about what windows supports.
Because you may end up having to integrate the *nix that you use with the Windows that an employer, client, etc. uses.
Re: (Score:2)
Windows has supported WebDAV since Windows 98, IIRC. And I think *nix users tend to avoid Windows employers/clients. There are plenty of jobs, so you can get picky about the ones you choose.
Jobs not evenly distributed geographically (Score:2)
There's plenty of jobs to get picky about the ones you choose.
Unless you happen to have grown up in an area where there aren't plenty of jobs and need a job to save money so that you can move to where there are plenty of jobs.
Re: (Score:2)
You mean people *aren't* like virtual machine instances? You can't just kill one here and bring up another in a different availability zone?
Re: (Score:3)
Uhm, since deployed Windows systems largely don't support SMB 2.x, much less SMB 3.x, I fail to see how this is a major failing on the part of Linux. Although I am, of course, entirely for supporting the current protocols.
Re: (Score:3)
Uhm, since deployed Windows systems largely don't support SMB 2.x, much less SMB 3.x, I fail to see how this is a major failing on the part of Linux. Although I am, of course, entirely for supporting the current protocols.
Windows 8 supports SMB3, and MS claims to have sold 40 million copies already. Sources: http://www.reuters.com/article/2012/11/27/us-microsoft-windows-idUSBRE8AQ18W20121127 [reuters.com] and https://en.wikipedia.org/wiki/Server_Message_Block#SMB_3.0 [wikipedia.org]
Re: (Score:2)
And everything since Vista/Server 2008 supports SMB 2.x. Unless you're still running XP machines (in which case their time is quickly approaching), your systems are probably already using SMB 2.x.
Re: (Score:2)
Windows XP is on "extended support" until April 2014 [microsoft.com]. There are plenty of businesses that won't even think about upgrading to a later version until that deadline is only a few months off. Global market share for XP is still 20 to 35% [wikipedia.org].
Re: (Score:2)
I don't care much about native Linux support for Windows. However, the sad thing is that in many ways SMB is probably still the best networked filesystem on Linux, even though it doesn't support half of POSIX. The closest competitor is NFS, and that is full of security issues.
Linux really needs a SIMPLE network filesystem solution that is secure and functional in all routine modes of operation. No, I don't want to set up a kerberos realm and openafs/etc.
Re: (Score:2)
Lovely - Kerberos...
I just love the thought of getting Linux to boot with an NFS root filesystem using Kerberos for authentication... If you don't implement Kerberos, then it is insecure, which was half my complaint with NFS in the first place.
DRM (Score:2)
Signed modules? Yay for tivoization!
Re: (Score:1)
except you control the keys
Re:DRM (Score:5, Interesting)
Only when you control the kernel/boot loader. I have a feeling that this will be used a lot by vendors to lock you out of your own devices, e.g. Android phones etc.
I'm as paranoid as the next geek, and the idea of secure boot etc. appeals a lot to me if done correctly. As in, if it's MY device, then I get to decide what runs on it, and no one else. But it's a tool, and as such it can be used both for you and against you. There can't be a technical solution, technology is dumb. We need a legal solution, either in the form of regulation or widespread adoption (and enforcement) of the GPLv3.
Re: (Score:2)
Unfortunately, secure booting is linked so tightly with vendor lockdown, tracking, and DRM concerns that I never expect it to be embraced by any open-source community. Hysteria over treacherous computing [gnu.org] has so far been overblown. For example, the potential abuse of the unique-ID features of TPM chips was not sufficient reason for the boycott they generated against using them when available--especially if you're booting into an open-source OS.
It's pretty ridiculous that software like trusted grub [sourceforge.net] isn'
Re: (Score:2)
It has so far been overblown only because it is just now becoming a real threat. This is mainly due to the introduction of new platforms that have become insanely popular. Think about it: all the new computing devices we have are, for the most part, locked out of the box. And while most people don't care to mess with their devices, they're still affected negatively, because they benefit from the efforts of those who do.
Re: (Score:2)
I think there can easily be a technical solution. Just put a switch on every computer. If it's in UNLOCKED position you may install a new operating system, if it is in LOCKED position you may not install a new OS and the whole boot process is locked down.
Re: (Score:2)
the same people who think that the way to get out of a fiscal cliff that is caused by massive debt, is to take on more debt
Almost as dumb as the people who think a complicated economic problem which is being worked on by many very smart people with many postgraduate degrees, can be dismissed with a pithy one-liner and some informal layperson reasoning on Slashdot. Tell me, when global warming comes up, do you involuntarily regurgitate that old "maybe it's caused by the SUN, duh" chestnut too?
To address your specific objection; the lawmakers don't need to be smart here, it suffices that the people advocating for their own rights
Re: (Score:2)
> It's not complicated
I cannot find a single person with actual qualifications in the field of economics who agrees with this assessment. The only people who insist that it's all simple enough for laypeople to address in 1 paragraph, are laypeople. Dunning-Kruger is everywhere when federal budgets are discussed.
Re: (Score:2)
Signed modules are a two-edged sword. They can be used for Tivoization, as you say. They can also be used by you to secure your own system.
Really, it's too bad that none of the major distributions have set this up. I've had TPMs on my past two work laptops. I've rather wanted to "take ownership" of them, principally to prevent anyone else from doing so. But it's rather a pain: supported, but in more of an expert-only mode, so I've never had the time.
Module signing would be same type of thing. If RedHat
Re:DRM (Score:5, Informative)
Module signing has been in place in Fedora 18 and Ubuntu 12.10, as it's required to be compliant and get a signature on the bootloader for Secure Boot. I assume the code was backported.
Re: (Score:1)
If root is inserting untrusted modules into his kernel, he has bigger problems than module signing can fix.
Re: (Score:2)
You're absolutely correct that if an attacker is performing actions as root you have a big problem, but if that attacker is able to inject modules into the kernel you have much bigger problems. Root's actions can still be monitored, logged, etc., whereas a malicious kernel module can hide any evidence of its existence from the running system.
Having this feature enabled (and of course keeping the private key elsewhere if you build your own modules) means that a root exploit turning in to a rootkit
kernel in c++? (Score:1, Funny)
kernel in c++? no? I'll move on then.
Re:kernel in c++? (Score:5, Informative)
And you need a kernel in C++ why? Because you can't get your head around objects that aren't enforced by the language? Or you can't get your head around doing error cleanup without exceptions enforced by the language? The Linux kernel even does reference counting without explicit support from the language.
Just to get a complete picture, I looked at some competing kernels (I skimmed over the source really quickly):
FreeBSD kernel - C, with objects and refcounts, similar to Linux
OpenBSD kernel - C, but I have a hard time finding their equivalent to objects and refcounts, and I gave up looking
GNU Hurd - C, and I'm not even going to bother looking around too much
XNU - C, but with I/O Kit in C++ - works only with Apple software?
Haiku kernel - C++, which is interesting in itself - but supports only IA-32?
Plan9 kernel - C
OpenSolaris kernel - C
I think it's pointless to look at the rest. All the others listed by Wikipedia are even more obscure than some of the above.
C seems to dominate the kernel arena, so next time you post, I'd like to know what you think C++ would bring to the party. No, really. I've seen many complain that Linux isn't written in C++, but I haven't seen a single one of these trolls (yes, I'm feeding you) say what that would accomplish, and I'm really, really curious. I'll throw a bone from the XNU Wikipedia article: "helping device drivers be written more quickly and using less code" - and that seems to be the only bit written in C++, yet Linux does pretty well without it, and apparently so does the majority (see above).
Re:kernel in c++? (Score:5, Informative)
IIRC, modern Windows is a mixture of C and C++.
As for what C++ achieves, it's the automation of tedious and error-prone boilerplate. Rather than manually incrementing and decrementing reference counts, you can have it happen automatically as values are copied and overwritten. Rather than manually building procedure address tables for polymorphism, you can get the compiler to do it for you.
Re: (Score:3, Insightful)
For any reasonable C++ compiler and well-written program, the cost is exactly the same as if you do it manually.
In some cases it will even be less because the compiler knows what's going on and can use that knowledge in optimization, e.g. replace indirect calls by direct calls where it knows exactly the dynamic type of an object, which is generally not possible for hand-written call tables.
Re:kernel in c++? (Score:4, Informative)
Haiku has active ports to PowerPC, ARM, and x86-64 in progress.
Next up, 64-bit Raspberry Pi? (Score:1)
Re: (Score:2)
Nah, but 64-bit gets work done twice as fast as 32-bit! Didn't you know? ;)
-l
Re: (Score:3)
There is some interest in ARM for low-power servers and server appliances. Support for more than 4GB of RAM would come in useful there.
Re: (Score:2)
Raspberry Pi is just the meme. Consider what the Raspberry Pi can do for 1/8 the cost the big players were charging us. Now imagine a 64-bit server for 1/8 of what one costs now.
Re: (Score:2)
I'm thinking NAS boxes. You want low-power, so they are mostly ARM already - but with 64-bit ARM, you could also throw lots and lots and lots of RAM in for disk cache.
Squid box, small business mail server etc (Score:2)
A 1GHz ARM system with bucketloads of RAM (16GB is cheap these days) woul
Re: (Score:2)
My BeagleBone with 256MB is dandy serving as DNS, DHCP server, cvs server, web and some other stuff.
Re: (Score:2)
It depends on the workload.
IIRC, on AMD64 most programs are about five to ten percent larger if they are compiled for 64-bit instead of 32-bit, with a slight slowdown. However, SSL and other programs that extensively use numbers larger than 32 bits tend to be about twice as fast on 64-bit. So if you are doing mostly authentication or SSL on your Pi, then 64-bit would make sense.
Is Btrfs for real yet? (Score:2, Interesting)
Does it "Just Work" (tm)? I really want rolling snapshots a la NetApp.
Sorry to be obtuse. Not much time for experiments.
Re: (Score:3, Informative)
SUSE Enterprise Linux has offered Btrfs as a supported option since February.
Conservative folk won't touch it until they know it's been used by millions of people for many years.
I use it, with backups on ext4.
Re: (Score:2, Interesting)
The SUSE implementation of Btrfs is quite good. It's quite a bit ahead of the Btrfs support I've seen on other distributions, and setting it up is pretty much automated by the installer. I agree Btrfs isn't stable and so shouldn't be used in production yet, but it looks like it is getting closer.
Re: (Score:2)
https://www.suse.com/releasenotes/x86_64/SUSE-SLES/11-SP2/#fate-306585 [suse.com]
http://www.novell.com/documentation/oes11/stor_posixvol_lx/?page=/documentation/oes11/stor_posixvol_lx/data/posixvol_new_oes11sp1.html [novell.com]
I think there is the possibility that they might be planning to replace NSS on Open Enterprise Server with BTRFS.
Re: (Score:2)
I forgot to add this funny piece of information:
SUSE Linux Enterprise SP2 doesn't support ext4. It will mount ext4 volumes read-only in order to facilitate migration from ext4 to supported file systems, including Btrfs.
Not entirely true (Score:2)
Re: (Score:2)
Have you considered Ceph?
Btrfs finally ready? (Score:2, Interesting)
Is it finally ready for prime time? Anyone with experiences/horror stories?
Re: (Score:2)
I ran btrfs for half a year, until roughly a year ago, and had no issues with data integrity etc. whatsoever. The downside at the time was that performance when working with loads of small files was noticeably worse than with ext4. The result was that a dist-upgrade took more than 4 hours instead of the expected 1.5 to 2 hours it takes with ext4. Apart from that I had no issues whatsoever; performance on other loads was decent.
I occasionally look for benchmarks showing that the small-files performance
Re:Btrfs finally ready? (Score:4, Interesting)
a dist-upgrade took more than 4 hours instead of the expected 1.5 to 2 hours it takes with ext4.
That's not due to poor small file performance in Btrfs, it's due to poor fsync() performance (which package tools like rpm and dpkg use quite a lot). In this new kernel version the Btrfs fsync() implementation is a lot faster.
Re: (Score:2)
You can use libeatmydata to disable the many fsyncs in dpkg, which will obviously solve that problem. It might be smart to make a btrfs snapshot first, so if something bad does happen, you can go back to a working snapshot.
Re: (Score:2)
How important are the fsyncs? I think a lot of software uses them due to some implementation decisions with ext4 (though Linus's decision to override the default settings set by the ext4 team alleviated many of them). However, with btrfs being copy-on-write I would think that you'd be far less vulnerable to issues if you modify a file in place without fsyncing. With btrfs you'll end up with either the original file intact or the modified file intact. With ext4 and some journal settings I think you could
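For reference, the safe-update pattern this discussion revolves around (ending up with either the old file or the new file intact, on any POSIX filesystem) is write-to-temp, fsync, then rename. A sketch; the function name and structure are mine, not from any package tool:

```python
import os
import tempfile

def atomic_write(path, data):
    # Write to a temp file in the same directory, fsync it, then rename
    # over the target; rename is atomic on POSIX filesystems, so readers
    # see either the old or the new contents, never a mix.
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d)
    try:
        os.write(fd, data)
        os.fsync(fd)          # data must be on disk before the rename
    finally:
        os.close(fd)
    os.rename(tmp, path)
    # fsync the directory so the rename itself is durable
    dfd = os.open(d, os.O_DIRECTORY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

Skipping the fsyncs makes this fast but means the rename can hit disk before the data does, which is exactly the crash window people argued about with ext4's journal modes.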
Re: (Score:2)
Apparently SUSE Enterprise Linux [linux.com] thinks so, as of last week.
How fractured is ARM? (Score:3)
Re:How fractured is ARM? (Score:5, Informative)
There are variants in the instruction set (just as there are in the x86 world, where i686 is a superset of i386, for example). However, that isn't the big problem with ARM; there isn't a single, standard way of booting like there is with x86 (where most things are IBM PC BIOS compatible, with some now moving to EFI/UEFI). Also, there's no device enumeration like ACPI; lots of ARM vendors build their own kernel with a static, compiled-in list of devices, rather than having an easy way to probe the hardware at run time.
Pffft (Score:1, Funny)
Windows is up to 8. Obviously, it is more than twice as good.
Re: (Score:2)
Not to mention the massive regression from 2000.
The Kernel Newbies site isn't accessible for me (Score:2)
The Kernel Newbies site isn't accessible for me, clearly they're using 3.7. :)
Support for trim on software raid? (Score:2)
Does "MD: TRIM support for linear (commit), raid 0 (commit), raid 1 (commit), raid 10 (commit), raid5 (commit)"
mean that if I run a software RAID-1 on SSDs, then Linux can do TRIM on the disks?
Re: (Score:2)
Yup. Just in time for us all to avoid using md in favor of btrfs it seems. :)
Re: (Score:2, Funny)
Just proves what a wanker you are, then
Re:UDP ... (Score:5, Interesting)
Why does vxlan transfer L2 packets using UDP and not TCP? I have also seen this on other L2 protocols like L2TP and PPTP ... just curious ...
TCP has a feedback loop when packets are lost... So you'd have that at both layers, the actual session and the tunnel.
It's an engineering thing: if you embed a feedback loop inside a feedback loop, things will be OK if you're VERY careful, but most are not, and you'll make a lovely oscillator and just blow it all to bits.
Fundamentally, UDP doesn't guarantee delivery, so it's OK to shove it inside UDP, and TCP has its own repair mechanism, so you don't need to guarantee its sub-layers; it's not like you're missing anything.
Finally, it just kills performance, because TCP loves big buffers for each connection, so you need megatons of RAM until you start dropping packets and letting TCP police itself, which meanwhile results in horrific latency. But if you tunnel over UDP, you don't really need much of a buffer on the tunneler itself, and you'll end up with better overall latency. So it's cheaper and works better. Hard to beat that combo...
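To make the "L2 frame inside UDP" idea concrete, here is a toy sketch of vxlan-style encapsulation. The 8-byte header layout (an "I" flag plus a 24-bit VNI, with reserved bytes) follows the VXLAN draft; the function names and everything else are illustrative, and a real implementation would hand the result to an ordinary UDP socket:

```python
import struct

VXLAN_FLAG_VNI = 0x08  # "I" bit: the VNI field is valid

def vxlan_encap(vni, inner_frame):
    # 8-byte header: flags(1) | reserved(3) | VNI(3) | reserved(1),
    # prepended to the raw inner Ethernet frame.
    header = struct.pack("!B3s3sB", VXLAN_FLAG_VNI, b"\x00" * 3,
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

def vxlan_decap(packet):
    flags, _, vni_bytes, _ = struct.unpack("!B3s3sB", packet[:8])
    if not flags & VXLAN_FLAG_VNI:
        raise ValueError("VNI flag not set")
    return int.from_bytes(vni_bytes, "big"), packet[8:]
```

Note there is no sequencing or acknowledgment anywhere in the header: loss and reordering are left entirely to whatever the inner frame carries, which is exactly the point of running over UDP.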
Re:UDP ... (Score:4, Informative)
I agree with your comments but want to add a clarification to your last paragraph for the benefit of all /. readers.
TCP needs enough buffer to hold a copy of each packet sent until it receives an acknowledgment, because it may need to re-transmit a packet that gets lost. Once a packet is acknowledged as received, TCP frees up the space. As such, there is a straightforward way of computing how much buffer TCP needs if you want to fully utilize the bandwidth of the bottleneck link along the path.
The amount of buffer is twice the bandwidth-delay product, i.e. twice the round-trip time multiplied by the bandwidth of the bottleneck link. More than this is a waste, as it won't be used. Now, the effective round-trip time will increase if you have packet loss along the path, and congestion in the network (possibly made worse by the buffer bloat the previous post points out) will also increase the round-trip time. And the bandwidth of the bottleneck link is probably not directly knowable by the end hosts (although it can be reasonably estimated). Thus the amount of buffer space can be estimated a priori.
Note: you will still need this much buffer space to achieve full performance even if you tunnel TCP through UDP; it is just that you won't need much more than that amount. Also, having inner and outer TCP connections results in them fighting each other, as you point out. (That is why it is not a good idea to tunnel TCP over TCP, not primarily because of buffer concerns.)
Note: you do need to have sufficient space for the inner TCP or it won't be able to operate at full speed. But you won't need double the space as you would with TCP within TCP (assuming you could solve the fighting among themselves issue).
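The sizing rule above, as plain arithmetic (the 100 Mbit/s and 50 ms figures are made up for illustration):

```python
def buffer_bytes(bandwidth_bps, rtt_s, factor=2):
    # Bandwidth-delay product: the bits in flight on the bottleneck
    # link over one round trip, converted to bytes. 'factor' is the
    # safety multiple discussed above (2x the BDP).
    bdp = bandwidth_bps * rtt_s / 8
    return factor * bdp

# e.g. a 100 Mbit/s bottleneck with a 50 ms round trip:
# BDP = 100e6 * 0.05 / 8 = 625000 bytes, so ~1.25 MB of send buffer
```

Double the RTT or the link speed and the required buffer doubles with it, which is why long fat pipes need large socket buffers.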
Re: (Score:2)
TCP needs enough buffer that it can hold a copy of each packet sent until it receives an acknowledgment because it may need to re-transmit the packet if it gets lost.
Thanks for the explanation!
Re: (Score:2)
TCP also needs to buffer on the receive side, since it guarantees in-order delivery. If a packet gets lost, then every later packet that gets through is buffered until the lost packet is retransmitted and received. Also, if the odd packet gets delivered out of order, a few packets need to be buffered to sort them back into order.
Re: (Score:2)
UDP port 53 might be redirected to a local server in many places though.
Re: (Score:2)
I haven't RTFA, but looking at the things you want to transport, it looks as if you're tunneling other stuff - potentially including TCP.
Tunneling TCP over TCP is generally a Bad Thing. The flow control of the tunnel and the flow control of the tunneled can interact in really ugly ways. By using UDP to create the tunnel, when you send TCP over that tunnel there will be only one flow control.
This is from the ancient days of "PPP over SSH/Telnet", when it used to be possible to get a shell account, but not
Re: (Score:2)
At that point you don't need the reliability and retransmission features of TCP. Once you stack the layers up, TCP will take care of that anyway, without running it over TCP again. Think IP: unreliable datagrams; you put TCP on it and presto: reliable, ordered, everything. Run a VPN over UDP and you end up with something like IP -> UDP -> TCP, and then TCP again does its thing, without a care in the world about the layers below. The same principles apply to this new thing too. If your unde
Re:UDP ... (Score:5, Interesting)
I forgot to mention one real-life situation where UDP over TCP does not work: UDP conceptually works pretty well with real-time live streaming. "Here's 5 seconds of audio of the ball game." Five seconds later, if that packet was lost, it is meaningless; don't bother re-sending it, the receiver will just output 5 seconds of silence or whatever. TCP does not understand that at all, so you can get serious problems with live streaming if you try to stick it inside TCP and experience significant network congestion. Buffers get bigger until they pop, and "live" becomes randomly "tape delayed" per recipient... Also, TCP doesn't understand variable bit rates, so its ideas about buffer allocation bear little resemblance to what the codec actually wants to do.
Re:UDP ... (Score:4, Insightful)
TCP tries to (and usually succeeds in) transferring a stream of bytes reliably, and in the right order, over an unreliable packet-based system.
To achieve this, two things have to happen:
1: the sender must resend lost packets
2: the recipient must hold packets that arrive after a lost packet until the gap is filled
However, there is no way for a sender to determine whether a packet has actually been lost or just delayed. So the sender must use a timeout to deem a packet lost and retransmit it.
Now suppose someone builds a tunnel using TCP and runs TCP over that tunnel, so the stack looks something like:
Application
TCP (inner)
IP
Tunneling protocol
TCP (outer)
IP
underlying network
Everything works fine as long as no packets are lost. However, when a packet is lost by the underlying network, the outer TCP layer freezes all transmissions through the tunnel until it has retransmitted the packet. During this time it is likely that the inner TCP layer will also deem the packet(s) lost and try to retransmit them (possibly more than once, due to the auto-adjusting timeouts used by TCP). Then, when the outer TCP does recover, it will deliver both the original packet and the retransmission from the outer TCP. This behaviour is very similar to what happens when a network is congested and may cause the inner TCP to unnecessarily back off the data rate.
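The timer interaction above can be sketched with a toy model (real TCP also adapts its timeout from measured RTTs, so this is only illustrative): count how many times the inner TCP's retransmit timer fires, doubling after each attempt, while the outer TCP is still stalled repairing the loss.

```python
def spurious_inner_retransmits(inner_rto, outer_recovery_time):
    # Inner TCP times out and retransmits every 'rto' seconds, doubling
    # the timeout after each attempt (exponential backoff), while the
    # outer tunnel delivers nothing until outer_recovery_time.
    t, rto, count = 0.0, inner_rto, 0
    while t + rto < outer_recovery_time:
        t += rto
        count += 1
        rto *= 2
    return count
```

Each of those counted retransmits is wasted traffic the tunnel must eventually deliver, and each one also pushes the inner TCP toward backing off its send rate for no real reason.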
Re: (Score:2)
it will deliver both the original packet and the retransmission from the outer TCP
That should have said
it will deliver both the original packet and the retransmission from the inner TCP
Re: (Score:2)
Now let's work on making Linux work on a desktop/workstation ;-)
Works quite well already on my workstation. Any particular areas of interest where it needs improvement in order to work?
Re: (Score:3)
Ironically enough, the problematic area was games, and the Linux detractors never brought it up. Let's see what Valve comes up with.
Re: (Score:2)
For many classes of device, the choice comes down to either proprietary firmware in a ROM on the card or proprietary firmware included with the operating system. Do you really believe the former is better for freedom? If so, why?
Re: (Score:2)
"Do you really believe the former is better for freedom? If so, why?"
Certainly it's better, because once bought, the device is static while the OS is not.
In other words: you don't want to be unable to upgrade to 3.8 just because the vendor dropped support for your otherwise perfectly working device.
Re: (Score:2)
The kernel does not support AD; you should look at Samba 4.
Re: (Score:3)
1. thoroughly wash your bedsheets (or get new ones)
2. throw out your mattress, and any carpeting underneath
3. thoroughly vacuum the frame
4. buy a new mattress
5. learn personal hygiene.
6. most important: lose the neckbeard
Re: (Score:2)
NAT is not, and never was, a stupid hack. People assume it stemmed from the exhaustion of IPv4, but that isn't true... like many people, you've made the same invalid assumption as everyone else - and that's not meant as an insult, just a fact of life. The reality is that we've been working with NAT so long that it really doesn't break much any more, and what it does break is work-around-able. Personally, I come from the corporate enterprise world, and NAT is and always will be a reality there (for