Linus on Subversion, GPL3, Microsoft and More

victor77 writes "Linus has repeatedly slammed Subversion and CVS, questioning their basic architecture. The Subversion community has responded... how valid is Linus's criticism?" This and many other subjects are covered in this interview with Linus.
  • by Anonymous Coward on Sunday August 19, 2007 @09:21AM (#20285261)
    Microsoft OLE DB Provider for ODBC Drivers error '80004005'

    [Microsoft][ODBC SQL Server Driver][SQL Server]Transaction (Process ID 182) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. /efytimes/lefthome.asp, line 193
  • Article (Score:3, Informative)

    by Anonymous Coward on Sunday August 19, 2007 @09:29AM (#20285299)
    Sunday, August 19, 2007: Did Microsoft's Men In Black ever meet Linus Torvalds? Why is he so critical of GPLv3? Why does he slam Subversion? What would happen to kernel development if he chose to do something else more important? These are some of the questions the Linux/open source community from around the globe wanted to ask Linus. And here he is: candid and blunt, and at times diplomatic. Check whether the question you wanted to ask the father of Linux is here, and what he has to say...
    Q: What are the future enhancements/paths/plans for the Linux kernel? --Subramani R

    Linus: I've never been much of a visionary -- instead of looking at huge plans for the future, I tend to have a rather short timeframe of 'issues in the next few months'. I'm a big believer in that the 'details' matter, and if you take care of the details, the big issues will end up sorting themselves out on their own.

    So I really don't have any great vision for what the kernel will look like in five years -- just a very general plan to make sure that we keep our eye on the ball. In fact, when it comes to me personally, one of the things I worry about the most isn't even the technical issues, but making sure that the 'process' works, and that people can work well with each other.

    Q: How do you see the relationship of Linux and Solaris evolving in the future? How will it benefit the users?

    Linus: I don't actually see a whole lot of overlap, except that I think Solaris will start using more of the Linux user space tools (which I obviously don't personally have a lot to do with -- I really only do the kernel). The Linux desktop is just so much better than what traditional Solaris has, and I expect Solaris to move more and more towards a more Linux-like model there.

    On the pure kernel side, the licensing differences mean that there's not much cooperation, but it will be very interesting to see if that will change. Sun has been making noises about licensing Solaris under the GPL (either v2 or v3), and if the licence differences go away, that could result in some interesting technology. But I'm taking a wait-and-see attitude to that.

    Q: Now that the GPLv3 has been finalised and released, do you foresee any circumstance that would encourage you to begin moving the kernel to it? Or, from your perspective, is it so bad that you would never consider it? -- Peter Smith / Naveen Mudunuru.

    Linus: I think it is much improved over the early drafts, and I don't think it's a horrible licence. I just don't think it's the same kind of 'great' licence that the GPLv2 is.

    So in the absence of the GPLv2, I could see myself using the GPLv3. But since I have a better choice, why should I?

    That said, I try to always be pragmatic, and the fact that I think the GPLv3 is not as good a licence as the GPLv2 is not a 'black and white' question. It's a balancing act. And if there are other advantages to the GPLv3, maybe those other advantages would be big enough to tilt the balance in favour of the GPLv3.

    Quite frankly, I don't really see any, but if Solaris really is to be released under the GPLv3, maybe the advantage of avoiding unnecessary non-compatible licence issues could be enough of an advantage that it might be worth trying to re-license the Linux kernel under the GPLv3 too.

    Don't get me wrong -- I think it's unlikely. But I do want to make it clear that I'm not a licence bigot, per se. I think the GPLv2 is clearly the better licence, but licences aren't everything.

    After all, I use a lot of programs that are under other licences. I might not put a project I start myself under the BSD (or the X11-MIT) licence, but I think it's a great licence, and for other projects it may well be the right one.

    Q: Currently are there any Indians who you'd like to highlight as key contributors to the Linux kernel?

    Linus: I have to admit that I don't directly work with anybody that I actually realize as being from India. That said, I should clarify a bit: I've very consciously tried to set up the kernel development so that I don't end up working personally with a huge number of people.

    I have this strong conviction that most humans are basically wired up to know a few people really well (your close family and friends), and I've tried to make the development model reflect that: with a 'network of developers', where people interact with maybe a dozen other people they trust, and those other people in turn interact with 'their' set of people they trust.

    So while I'm in occasional contact with hundreds of developers who send me a random patch or two, I've tried to set up an environment where the bulk of what I do happens through a much smaller set of people that I know, just because I think that's how people work. It's certainly how I like to work.

    Also, in all honesty, I don't even know where a lot of the people I work with live. Location ends up being pretty secondary. So while I'm pretty sure that none of the top 10-15 people I work with most closely are in India, maybe after this goes public it might get pointed out that there is actually somebody from there!

    Q: Since the Linux Kernel Development depends so heavily on you, how do you plan to organise/reorganise it for it to continue progressing without you, in case you decide to dedicate more time to your own life and family?

    Linus: I've long since come to the realisation that Linux is much bigger than me. Yes, I'm intimately involved in it still, and I have a fairly large day-to-day impact on it, and I end up being the person who, in some sense, acts as the central point for a lot of kernel activities; but no -- I wouldn't say that Linux 'depends heavily' on me.

    So if I had a heart attack and died tomorrow (happily not likely: I'm apparently healthy as anything), people would certainly notice, but there are thousands of people involved in just the kernel, and there're more than a few that could take over for me with little real confusion.

    Q: India is one of the major producers of software engineers, yet we don't contribute much to the Linux domain. What do you think is keeping Indians from becoming proactive on that front? How do you feel we could encourage Indians to get involved and contribute heavily? You have a fan following in India; could your iconic image be used to inspire enthusiasts? -- Bhuvaneswaran Arumugam.

    Linus: This is actually a very hard question for me to answer. Getting into open source is such a complicated combination of both infrastructure (Internet access, education, you name it), flow of information and simply culture that I can't even begin to guess what the biggest stumbling block could be.

    In many ways, at least those with an English-speaking culture in India should have a rather easy time getting involved with Linux and other open source projects, if only thanks to the lack of a language barrier. Certainly much easier than many parts of Asia or even some parts of Europe.

    Of course, while that is a lot of people, it's equally obviously not the majority in India, and I personally simply don't know enough about the issues in India to be able to make an even half-way intelligent guess about what the best way forward is. I suspect that an enthusiastic local user community is always the best way, and I think you do have that.

    As to my 'iconic image', I tend to dislike that part personally. I'm not a great public speaker, and I've avoided travelling for the last several years because I'm not very comfortable being seen as this iconic 'visionary'. I'm just an engineer, and I just happen to love doing what I do, and to work with other people in public.

    Q: What would be a good reason for you to consider visiting India? -- Frederick [FN] Noronha.

    Linus: As mentioned in the first answer, I absolutely detest public speaking, so I tend to avoid conferences, etc. I'd love to go to India for a vacation some day, but if I do, I'd likely just do it incognito -- not tell anybody beforehand and just go as a tourist to see the country!

    Q: Recently, you seemed to slam Subversion and CVS, questioning their basic architecture. Now that you've got responses from the Subversion community, do you stand corrected, or are you still unconvinced? B Arumugam.

    Linus: I like making strong statements, because I find the discussion interesting. In other words, I actually tend to 'like' arguing. Not mindlessly, but I certainly tend to prefer the discussion a bit more heated, and not just entirely platonic.

    And making strong arguments occasionally ends up resulting in a very valid rebuttal, and then I'll happily say: "Oh, ok, you're right."

    But no, that didn't happen on SVN/CVS. I suspect a lot of people really don't much like CVS, so I didn't really even expect anybody to argue that CVS was really anything but a legacy system. And while I've gotten a few people who argued that I shouldn't have been quite so impolite against SVN (and hey, that's fair -- I'm really not a very polite person!), I don't think anybody actually argued that SVN was 'good'.

    SVN is, I think, a classic case of 'good enough'. It's what people are used to, and it's 'good enough' to be used fairly widely, but it's good enough in exactly the sense DOS and Windows were 'good enough'. Not great technology, just very widely available, and it works well enough for people and looks familiar enough that people use it. But very few people are 'proud' of it, or excited about it.

    Git, on the other hand, has some of the 'UNIX philosophy' behind it. Not that it is about UNIX, per se, but like the original UNIX, it had a fundamental idea behind it. For UNIX, the underlying philosophy was/is that, "Everything is a file." For git, it's, "Everything is just an object in the content-addressable database."
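
    You can poke at that object database directly from the command line; a minimal sketch (any scratch directory will do, and the hash shown is simply what this blob content hashes to):

    > mkdir /tmp/demo && cd /tmp/demo && git init
    > echo 'hello world' | git hash-object -w --stdin
    3b18e512dba79e4c8300dd08aeb37f8e728b8dad
    > git cat-file -t 3b18e512
    blob
    > git cat-file -p 3b18e512
    hello world

    Commits, trees and tags are stored the same way: as objects named by the hash of their content.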

    Q: Is having so many distros a good or bad idea? Choice is fine, but one does not need to be pampered with choices. Instead of so many man hours being spent in building hundreds of distros, wouldn't it be easier to get into the enterprise and take on the MS challenge if people could come together and support fewer distros (1 for each use maybe)? What's your view on that? -- Srinivasan S.

    Linus: I think having multiple distros is an inevitable part of open source. And can it be confusing? Sure. Can it be inefficient? Yes. But I'd just like to compare it to politics: 'democracy' has all those confusing choices, and often none of the choices is necessarily what you 'really' want either, and sometimes you might feel like things would be smoother and more efficient if you didn't have to worry about the whole confusion of voting, different parties, coalitions, etc.

    But in the end, choice may be inefficient, but it's also what keeps everybody involved at least 'somewhat' honest. We all probably wish our politicians were more honest than they are, and we all probably wish that the different distros sometimes made other choices than they do, but without that choice, we'd be worse off.

    Q: Why do you think CFS is better than SD?

    Linus: Part of it is that I have worked with Ingo [Molnar] for a long time, which means that I know him, and know that he'll be very responsive to any issues that come up. That kind of thing is very important.

    But part of it is simply about numbers. Most people out there actually say that CFS is better than SD. Including, very much, on 3D games (which people claimed was a strong point of SD).

    At the same time, though, I don't think any piece of code is ever 'perfect'. The best thing to happen is that the people who want to be proponents of SD will try to improve that so much that the balance tips over the other way -- and we'll keep both camps trying interesting things because the internal competition motivates them.

    Q: In a talk you had at Google about git, someone asked you how you would take an extremely large code base that is currently handled with something centralised and transition to git without stopping business for six months. What was your response to that? -- Jordan Uggla.

    Linus: Ahh. That was the question where I couldn't hear the questioner well (the questions were much more audible in the recordings), and I noticed afterwards, when I went back and listened to the recorded audio, that I didn't answer the question he asked, but the question I thought he'd asked.

    Anyway, we do have lots of import tools, so that you can actually just import a large project from just about any other previous SCM into git. But the problem, of course, often doesn't end up being the act of importing itself, but just having to 'get used to' the new model!

    And quite frankly, I don't think there is any other answer to that 'get used to it' but to just start out and try it. You obviously do not want to start out by importing the biggest and most central project you have; that would indeed just make everything come to a standstill, and make everybody very unhappy indeed.

    So nobody sane would advocate moving everything over to git overnight, and forcing people to change their environment. No. You'd start with a smaller project inside a company, perhaps something that just one group mostly controls and maintains, and start off by converting that to git. That way you get people used to the model, and you start having a core group with the knowledge about how git works and how to use it within the company.

    And then you just extend on that. Not in one go. You'd import more and more of the projects -- even if you have the 'one big repository' model at your company, you almost certainly also have that repository as a set of modules, because having everybody check out everything is just not a workable mode of operation (unless 'everything' is just not very large).

    So you'd basically migrate one module at a time, until you get to the point where you're so comfortable with git that you can just migrate the rest (or the 'rest' is so legacy that nobody even cares).

    And one of the nice features of git is that it actually plays along pretty well with a lot of other SCMs. That's how a lot of git users use it: 'they' may use git, but sometimes the people they work with don't even realise, because they see the results of it propagated into some legacy SCM.
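
    To make that concrete, git's usual bridge to Subversion is git-svn; a rough sketch, with a hypothetical repository URL (the people still on the SVN side just see ordinary SVN commits):

    > git svn clone -s http://svn.example.com/project project
    (a one-time import; -s assumes the standard trunk/branches/tags layout, and the full history comes along)
    > cd project
    (...edit, then commit locally with git as often as you like...)
    > git commit -a -m 'my change'
    > git svn dcommit
    (replays your local git commits back into the SVN repository, one SVN commit per git commit)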

    Q: Did they ever experiment with alternate instruction set implementations at Transmeta? [Transmeta Crusoe chip seemed like a very soft CPU -- reminding one of Burroughs B1000 interpretive machine, which actually implemented multiple virtual machines. There was one for system software, another for Cobol, another for Fortran; If that is correct, then one could implement Burroughs 6/7000 or HP3000 like stack architecture on the chip or an instruction set suitable for JVM, etc] -- Anil Seth.

    Linus: We did indeed have some alternate instruction sets, and while I still am not really supposed to talk about it, I can say that we did have a public demonstration of mixing instruction sets. We had a technology showcase where you could run x86 instructions side-by-side with Java byte code (actually, it was a slightly extended pico-Java, iirc).

    I think the app we showed was DOOM running on top of Linux, where the Linux parts were a totally standard x86 distribution, but the DOOM binary was a specially compiled version where part of the game was actually compiled to pico-Java. And the CPU ended up running them both the same way -- as a JIT down to the native VLIW instruction set.

    (The reason for picking DOOM was just that source code was available, and the core parts of the game were small enough that it was easy to set it up as a demonstration -- and it was obviously visually interesting.)

    There were more things going on internally, but I can't really talk about them. And I wasn't actually personally involved with the Java one either.

    Q: 386BSD, from which NetBSD, FreeBSD and OpenBSD were derived, was there well before Linux, but Linux spread much more than 386BSD and its derivatives. How much of this do you attribute to the choice of the licence and how much to the development process you chose? Don't you think that the GPLv3 protects the freedom that has bred Linux better than the BSDs till now, more than the GPLv2 can? -- Tiziano Mosconi from Italy.

    Linus: I think there's both a licence issue, and a community and personality issue. The BSD licences always encouraged forking, but also meant that if somebody gets really successful and makes a commercial fork, you cannot necessarily join back. And so even if that doesn't actually happen (and it did, in the BSD cases -- with BSDi), people can't really 'trust' each other as much.

    In contrast, the GPLv2 also encourages forking, but it not only encourages the branching off part, it also encourages (and 'requires') the ability to merge back again. So now you have a whole new level of trust: you 'know' that everybody involved will be bound by the licence, and won't try to take advantage of you.

    So I see the GPLv2 as the licence that allows people the maximum possible freedom within the requirement that you can always join back together again from either side. Nobody can stop you from taking the improvements to the source code.

    So is the BSD licence even more 'free'? Yes. Unquestionably. But I just wouldn't want to use the BSD licence for any project I care about, because I not only want the freedom, I also want the trust so that I can always use the code that others write for my projects.

    So to me, the GPLv2 ends up being a wonderful balance of 'as free as you can make it', considering that I do want everybody to be able to trust so that they can always get the source code and use it.

    Which is why I think the GPLv3 ends up being a much less interesting licence. It's no longer about that trust about "getting the source code back"; it has degenerated into a "I wrote the code, so I should be able to control how you use it."

    In other words, I just think the GPLv3 is too petty and selfish. I think the GPLv2 has a great balance between 'freedom' and 'trust'. It's not as free as the BSD licences are, but it gives you peace of mind in return, and matches what I consider 'tit-for-tat': I give source code, you give me source code in return.

    The GPLv3 tries to control the 'use' of that source code. Now it's, "I give you my source code, so if you use it, you'd better make your devices hackable by me." See? Petty and small-minded, in my opinion.

    Q: Slowly but steadily, features of the -rt tree are getting integrated into the mainline. What are your current thoughts regarding a merger of the remaining -rt tree into the mainline (and I'm not talking about the CFS)? -- Wal, Alex van der.

    Linus: I won't guarantee that everything from -rt will 'ever' be merged into the standard kernel (there may be pieces that simply don't end up making sense in the generic kernel), but yes, over the years we've actually integrated most of it, and the remaining parts could end up making it one of these days.

    I'm a big fan of low-latency work, but at the same time I'm pretty conservative, and I pushed back on some of the more aggressive merging, just because I want to make sure that it all makes sense for not just some extreme real time perspective, but also for 'normal' users who don't need it. And that explains why the process has been a pretty slow but steady trickle of code that has gotten merged, as it was sufficiently stable and made sense.

    That, by the way, is not just an -rt thing; it's how a lot of the development happens. -rt just happens to be one of the more 'directed' kernel projects, and one where the main developer is pretty directly involved with the normal kernel too. But quite often the migration of other features (security, virtual memory changes, virtualisation, etc) follows a similar path: they get written up in a very targeted environment, and then pieces of the features get slowly but surely merged into the standard kernel.

    Q: I'm very curious about what the future holds for file systems in the kernel. What do you think about Reiser4, XFS4, ZFS and the new project founded by Oracle? ZFS has been receiving a lot of press these days. Reiser4 delivers very good benchmarks, and xfs4 is trying to keep up, whereas the one by Oracle has a lot of the same specs as Sun's ZFS. Where are we heading? Which FS looks the most promising in your opinion? -- Ayvind Binde.

    Linus: Actually, just yesterday we had a git performance issue, where ZFS was orders of magnitude slower than UFS for one user (not under Linux, but git is gaining a lot of traction even outside of kernel development). So I think a lot of the 'new file system' mania is partly fed by knowing about the issues with old filesystems, and then the (somewhat unrealistic) expectation that a 'new and improved' filesystem will make everything perfect.

    In the end, this is one area where you just let people fight it out. See who comes out the winner -- and it doesn't need to be (and likely will not be) a single winner. Almost always, the right choice of file system ends up depending on the load and circumstances.

    One thing that I'm personally more excited about than any of the filesystems you mention is actually the fact that Flash-based hard disks are quickly becoming available even for 'normal' users. Sure, they're still expensive (and fairly small), but Flash-based storage has such a different performance profile from rotating media, that I suspect that it will end up having a large impact on filesystem design. Right now, most filesystems tend to be designed with the latencies of rotating media in mind.

    Q: The operating system is becoming less and less important. You have said several times that the user is not supposed to 'see' the operating system at all. It is the applications that matter. Browser-based applications, like Google's basic office applications, are making an impact. Where do you think operating systems are headed?

    Linus: I don't really believe in the 'browser OS', because I think that people will always want to do some things locally. It might be about security, or simply about privacy reasons. And while connectivity is widely available, it certainly isn't 'everywhere'.

    So I think the whole 'Web OS' certainly is part of the truth, but another part that people seem to dismiss is that operating systems have been around for decades, and it's really a fairly stable and well-known area of endeavour. People really shouldn't expect the OS to magically change: it's not like people were 'stupid' back in the 60s either, or even that hardware was 'that' fundamentally different back then!

    So don't expect a revolution. I think OSs will largely continue to do what they do, and while we'll certainly evolve, I don't think they'll change radically. What may change radically are the interfaces and the things you do on top of the OS (and certainly the hardware beneath the OS will continue to evolve too), and that's what people obviously care about.

    The OS? It's just that hidden thing that makes it all possible. You really shouldn't care about it, unless you find it very interesting to know what is really going on in the machine.

    Q: The last I heard, you were using a PPC G4/5 for your main personal machine -- what are you using now, and why?

    Linus: I ended up giving up on the PowerPC, since nobody is doing any workstations any more, and especially since x86-64 has become such an undeniable powerhouse. So these days, I run a bog-standard PC, with a normal Core 2 Duo on it.

    It was a lot of fun to run another architecture (I ran with alpha as my main architecture way back then, for a few years, so it wasn't the first time either), but commodity CPUs is where it is at. The only thing that I think can really ever displace the x86 architecture would come from below, i.e., if something makes us not use x86 as our main ISA in a decade, I think it would be ARM, thanks to the mobile device market.

    Q: What does Linux mean to you -- a hobby, philosophy, the meaning of life, a job, the best OS, something else...?

    Linus: It's some of all of that. It's a hobby, but a deeply meaningful one. The best hobbies are the ones that you care 'really' deeply about. And these days it's obviously also my work, and I'm very happy to be able to combine it all.

    I don't know about a 'philosophy', and I don't really do Linux for any really deeply held moral or philosophical reasons (I literally do it because it's interesting and fun), but it's certainly the case that I have come to appreciate the deeper reasons why I think open source works so well. So I may not have started to do Linux for any such deep reasons, and I cannot honestly say that that is what motivates me, but I do end up thinking about why it all works.

    Q: Did Microsoft's 'Men in Black' ever talk to you? -- Zidagar - Antonio Parrella

    Linus: I've never really talked to MS, no. I've occasionally been at the same conferences with some MS people (I used to go to more conferences than I do these days), but I've never really had anything to do with them. I think there is a mutual wariness.
  • Alternate link (Score:4, Informative)

    by MythMoth ( 73648 ) on Sunday August 19, 2007 @09:30AM (#20285305) Homepage
    This one is not (yet) slashdotted:
    http://www.efytimes.com/archive/144/news.htm [efytimes.com]
  • Re:Can't RTFA... (Score:5, Informative)

    by Oddscurity ( 1035974 ) * on Sunday August 19, 2007 @09:39AM (#20285343)
    He did indeed slam CVS, and SVN likewise. In Linus's talk at Google about Git [google.com] [video] he mentions SVN's credo at one time being something along the lines of "CVS done right", and comments that "there is no way to do CVS right."

    The article linked here is light on details concerning SCM, though.
  • Re:Can't RTFA... (Score:5, Informative)

    by nwbvt ( 768631 ) on Sunday August 19, 2007 @10:19AM (#20285539)
    Site seems to be back up; here is what he had to say:

    I suspect a lot of people really don't much like CVS, so I didn't really even expect anybody to argue that CVS was really anything but a legacy system. And while I've gotten a few people who argued that I shouldn't have been quite so impolite against SVN (and hey, that's fair -- I'm really not a very polite person!), I don't think anybody actually argued that SVN was 'good'.

    SVN is, I think, a classic case of 'good enough'. It's what people are used to, and it's 'good enough' to be used fairly widely, but it's good enough in exactly the sense DOS and Windows were 'good enough'. Not great technology, just very widely available, and it works well enough for people and looks familiar enough that people use it. But very few people are 'proud' of it, or excited about it.

    And here is the reaction from the Subversion team [tigris.org]. For those of you who don't want to RTFA, they basically say they agree: it's not appropriate for something like Linux.

    BTW, isn't this all old news? His original comment on Subversion dates from '05.

  • by nwbvt ( 768631 ) on Sunday August 19, 2007 @10:26AM (#20285573)

    Think that could be because it's an Indian news site and the guy himself is Indian?

    Believe it or not, just because something is published on the world wide web doesn't mean it has to cut out everything of local interest.

  • Re:Article (Score:3, Informative)

    by fimbulvetr ( 598306 ) on Sunday August 19, 2007 @10:41AM (#20285661)
    Mod parent down, he has altered the article to include things like "What do you think of penis?"
  • by dknj ( 441802 ) on Sunday August 19, 2007 @11:14AM (#20285897) Journal
    and if you're a programmer or an admin that knows sql server, then you know to disable this before you go into production. again, this is not a problem with the product. saying such would be like saying solaris is trash because it enables everything plus the kitchen sink, unless you tell it not to...

    oracle is all great and fun if you have the money to cough up for it. sql server has great performance at a fraction of oracle's cost. of course, a competent architect will know when to use sql server and when to use oracle.
  • by Antique Geekmeister ( 740220 ) on Sunday August 19, 2007 @11:38AM (#20286045)
    There are other issues: the Subversion authors have made a very real mistake here in keeping unencrypted passwords on disk, by default, in every Linux or UNIX client compiled from Subversion's stock source code.

    I just had to have a polite conversation with a professional peer who kept his home directory on his laptop, then turned on NFS shares "to get work done". I waited, very politely, until he put his laptop on the DMZ with his NFS shares turned on. Then I pulled his SSH keys for a set of sourceforge projects from his directory, and his password from his other Subversion repositories. Voila! I now have write access to his Sourceforge Subversion repositories.

    I'm patient. But crackers aren't, and scan for this sort of vulnerability constantly. The Subversion authors should never have bothered to include the ability to store the password, at all.
  • PARADIGM SHIFT! (Score:5, Informative)

    by StCredZero ( 169093 ) on Sunday August 19, 2007 @11:56AM (#20286153)
    Damnit, it's a paradigm shift that Linus is talking about. True distributed source code management brings an entirely new way of working. It enables very fast merging at a very fine granularity, which lets you casually use this information (about what changed, and when) in a way that changes the nature of how you work! It's the same sort of difference that code completion or Google search made. Once a certain kind of very useful information -- which has always been available, but a bit inconveniently -- becomes like running water out of the tap, it enables ways of working that just wouldn't have been practical before.

    If you really want to know what Linus is talking about from the man himself, watch this Google Tech Talk. It's over an hour, but there's nothing like hearing it straight from the horse's mouth.

    http://video.google.com/videoplay?docid=-2199332044603874737&q=git+google+tech+talk&total=3&start=0&num=10&so=3&type=search&plindex=1 [google.com]
  • Re:Can't RTFA... (Score:3, Informative)

    by chthon ( 580889 ) on Sunday August 19, 2007 @12:00PM (#20286167) Homepage Journal

    Too bad Continuus costs too much to try; I think he would want to return to SVN after using that piece of shit.

  • Re:Can't RTFA... (Score:2, Informative)

    by coryking ( 104614 ) on Sunday August 19, 2007 @12:00PM (#20286173) Homepage Journal
    easy

    - Not mess up my working directory with a bunch of .svn hidden directory junk.
    - As somebody else said, proper branching & tagging
    - Related to the .svn directory stuff, it is *way* too easy to ruin a working copy. Why related? You ever try to version a 3rd party tree (say, the ports tree)? It is virtually impossible, because when you update the ports tree it will mess with the filesystem enough to de-sync the .svn directories and ruin the entire working copy.
    - While getting better, it isn't very fast dealing with large working copies (say, 200+ megs)
  • Re:Can't RTFA... (Score:5, Informative)

    by smenor ( 905244 ) on Sunday August 19, 2007 @12:08PM (#20286221) Homepage

    I used to use CVS (and still do for some projects). Then I switched over to SVN. It was remarkably unremarkable.

    Then, a few months ago, there was a /. article on git [git.or.cz]. It sounded interesting so I tried it... and was thoroughly impressed.

    I was up and running in about 20 minutes. You can use cvs/svn-like commands, *but* you get local/decentralized repositories with fast forking and merging.

    Start a project. Type "git init" and you've got a repository in place (you don't have to initialize and then check it out). "git add ." and "git commit" and you've got your first revision.

    It took a little bit more effort to figure out how to push/pull from a remote repository, but it's fairly straightforward. A bunch of people can work in a group, have their own local repositories, and then merge their changes (along with the revision history). It's awesome.
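
    End-to-end, that workflow looks roughly like this (hostnames and paths made up, and assuming an empty bare repository has already been created on the server):

    > mkdir project && cd project
    > git init
    > git add .
    > git commit -m 'first revision'
    > git remote add origin ssh://server.example.com/srv/git/project.git
    > git push origin master
    (collaborators then clone it, work in their own local repositories, and pull in each other's changes)
    > git clone ssh://server.example.com/srv/git/project.git
    > git pull origin master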

    The only reason I haven't switched all of my projects over to it is that the IDEs I use (Xcode and Eclipse) don't have good git integration (as far as I know).

  • by someone1234 ( 830754 ) on Sunday August 19, 2007 @12:11PM (#20286239)
    You preach to the choir.
  • by Serpent Mage ( 95312 ) on Sunday August 19, 2007 @01:07PM (#20286583)

    I waited, very politely, until he put his laptop on the DMZ with his NFS shares turned on. Then I pulled his SSH keys for a set of sourceforge projects from his directory, and his password from his other Subversion repositories.

    Considering that the ssh keys folder and the subversion authorization folder are both chmod 700 by default, it doesn't matter if he tosses up an NFS share. You still cannot access it without being him or root. And of course, if you already had his password, pulling it off his shared home directory would be pointless anyway, as you could simply ssh into his computer and grab it. I call shenanigans on this one.

    The Subversion authors should never have bothered to include the ability to store the password, at all.

    As I mentioned above, by default, without the user changing permissions manually, the passwords are accessible only to that user. Even the group and world are not allowed into the folder, much less the files in it. And for those of us who live in the real world, working on real enterprise-grade software that spans a dozen different repositories with at least six different authentication systems, remembered passwords are a godsend.

    Then again, all my "file sharing" happens with a special user account that is nothing but filesharing and I just have symlinks into that user. And it is all samba based filesharing not NFS and it is locked down with a user/password to even access the samba share.

    Subversion *does* have many flaws but the storing of passwords is not one of them. That is almost a mandatory feature requirement to work with repositories in most development organizations.
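
    You can verify those defaults easily enough (output trimmed, username obviously illustrative):

    > ls -ld ~/.ssh ~/.subversion/auth
    drwx------ 2 alice alice 4096 Aug 19 2007 /home/alice/.ssh
    drwx------ 5 alice alice 4096 Aug 19 2007 /home/alice/.subversion/auth

    drwx------ is mode 700: neither group nor world can even enter the directories.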
  • by gerddie ( 173963 ) on Sunday August 19, 2007 @01:45PM (#20286805)

    Considering that the ssh keys folder and the subversion authorization folder are both chmod 700 by default, it doesn't matter if he tosses up an NFS share. You still cannot access it without being him or root.
    In the "simple" setup of an nfs server mounting the nfs share is usually independent of the user doing so, and if the other guy didn't restrict the allowed hosts of the shares, anyone on the net can do it, if he only knows the proper name, no passwords required. After you got this far, being him on nfs is just a matter of having the same user id - at least until nfs v3. Of course there are measures to restrict access, but someone exporting his home "to get work done" might not think that far ...
  • Re:Can't RTFA... (Score:2, Informative)

    by coryking ( 104614 ) on Sunday August 19, 2007 @03:34PM (#20287425) Homepage Journal

    I set up a small (two line) shell script so I wouldn't have to manually do the -rx:x+1 each time
    You had me staring at that for a minute until I realized you weren't talking about file permissions (a la chmod) and instead about command-line switches! :-) In truth, it isn't the command-line stuff that is hard; it's that svn's merging is conceptually very hard to understand. Instead of the task-based "I want the changes from this version to merge with that version", native svn forces you to think "I want to merge the difference between two points in time into this other point in time". The "difference between two points in time" thing is really hard to wrap your head around...
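
    To illustrate that "two points in time" model with made-up revision numbers: applying just the change committed in r42 of trunk means expressing it as the difference between r41 and r42:

    > svn merge -r 41:42 svn://svn.example.com/repo/trunk .

    And "everything the feature branch did between r30 and r35" is likewise:

    > svn merge -r 30:35 svn://svn.example.com/repo/branches/featureA .

    Nothing records that you did either merge, which is why running the same command twice cheerfully applies the same diff twice.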

    The trick is to get svnmerge. It handles almost all of the nitty-gritty details so you can actually do what you really want to do, which is "take the changes from branch A and put them into branch B". Say you want to take the junk in changesets 1, 2 and 5, plus all the junk between 30 and 35, and merge it into your working copy:

    > svnmerge merge -r 1,2,5,30-35

    First, it remembers where you want to pull stuff in from -- you set that when you initialize a branch, by telling it "link the branch in this working copy to branch svn://somewhere/else/". Every time you run it, it looks at what hasn't been merged into your working copy. You can get a list of the junk it hasn't merged with:

    > svnmerge avail

    Once you do the "svnmerge merge" it will pull in all the changes and automatically keep track of what it just merged it. You then do a regular commit on the changes and off you go. I usually do this:

    > svnmerge merge -r 4,5,6 -f commit.txt && svn commit -F commit.txt && rm commit.txt

    That command pulls in the revisions to your working copy and creates a commit log "commit.txt". It then commits your crap and removes the log.

    The whole thing is a hackjob for sure; for starters, TortoiseSVN doesn't know about it, and I'd love to do the whole process in some GUI thing. But it does show you that SVN itself is more than capable of handling merges in a "proper" way. Subversion just needs to get this high-level stuff into its own codebase so it knows what is up. I know they are working on it; I just hope that I can migrate from svnmerge to whatever native stuff they cook up in an elegant fashion.
  • by Slashdot Parent ( 995749 ) on Sunday August 19, 2007 @08:03PM (#20288833)

    Hopefully, the merge tracking being implemented for SVN 1.5 will make SVN a real/complete source code control system.
    Don't get your hopes up.

    "Merge Tracking in Subversion 1.5.0 is roughly equivalent in functionality to svnmerge.py, recording and using merge history to avoid common cases of the repeated merge problem, and allowing for cherry-picking of changes." -- http://subversion.tigris.org/merge-tracking/ [tigris.org]
  • Re:Can't RTFA... (Score:4, Informative)

    by TemporalBeing ( 803363 ) <bm_witness@yah o o . c om> on Sunday August 19, 2007 @08:52PM (#20289113) Homepage Journal

    No, Linus wrote Linux as a reimplementation of BSD, during the period when AT&T sued to stop the distribution of BSD. Had BSD not been held up in court, there would have been no need to rewrite BSD from scratch using inferior networking code.
    Actually, if you read Linus' own book - Just For Fun: The Story of an Accidental Revolutionary [amazon.com] - you'd find out that he wrote Linux as (a) a method for learning x86 Assembly for the i386 processor, (b) as a way to get into his school account over dial-up, and (c) as a re-implementation of Minix. It was also highly coupled with Minix for a while until around version 0.10, or shortly thereafter.

    See also: 0.10 history [kerneltrap.org], 0.02 & 0.03 history [kerneltrap.org], 0.01 history [kerneltrap.org]
  • by Lord Bitman ( 95493 ) on Monday August 20, 2007 @12:11PM (#20293675) Homepage
    Please correct me if I'm wrong, but when I first looked into git, I was left with the impression that there was /no way/ to use git as if it were centralized: every user had to have the full history, even if 99.99% of that history was irrelevant. I recall reading something where Linus noticed that if everyone used git with full history, they would all wind up with needlessly huge local copies. His solution: rather than fixing this obvious flaw in git, he chose instead to simply not import old version information. Did I read this wrong? Has something changed? These are not rhetorical questions, I have asked them previously and have yet to receive an answer. I just don't know. Why is git superior when it seems that it was fundamentally incapable of handling the full depth of the very project for which it was written?

    My goal of a "perfect" version control system is one that is decentralized, but lets me decide how much history I want, and lets me decide if something is so old as to be irrelevant, not worth having locally. If older versions can be discarded without impacting day-to-day work, why have I not seen this as an option for any decentralized systems?
    It is one of those "seems obvious enough to me that I am probably just using the wrong keywords in Google" things.

    SVN lets me check out just the "most recent" copy, and I can pull whatever I need from the remote repository if I need it.
    git, from what I've read, does not.
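
    For reference, the SVN behaviour in question, with a hypothetical URL and file name: the working copy only ever holds the newest revision, and anything older is fetched from the server on demand:

    > svn checkout http://svn.example.com/repo/trunk project
    > cd project
    > svn log -r 100:110 .
    > svn diff -r 104:105 foo.c
    (both of the last two go back over the network to the repository; no old revisions are stored locally)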

    I am not trying to be arrogant here; I would love to be corrected. Given the history of other times I've asked this question, though, I don't expect to be.
  • by Lord Bitman ( 95493 ) on Monday August 20, 2007 @10:20PM (#20299783) Homepage
    a brief googling reveals hard drive space was not the issue - it was bandwidth. My point is more related to archives in general, not the linux kernel itself, though (that Linus chose not to put the whole tree into git I still think is very telling, though). My primary concern is: if it wasn't worth it to have 3-year-old history THEN, why not three years from then? It just seems like a fundamental design issue that I've never heard explained other than "hard disks are cheap and bandwidth is infinite nowadays!", which is an outright lie and pretty much just says "git: not meant to be portable"
    Perhaps I use SVN in situations where something simpler would suffice, but I /like/ to be able to check out/in from resource-limited systems without resorting to YetAnotherTool.

    mostly, though, it's just the idea of "we don't want to import 3 years of history, it's not worth it right now" + "And this is good for the long term!"
  • by Lord Bitman ( 95493 ) on Tuesday August 21, 2007 @08:30AM (#20303045) Homepage
    This was very quick googling, of the type "I think I remember reading something like...", typing "linus git kernel import", and clicking until I found something similar to my memory, total time 20 seconds:
    http://kerneltrap.org/node/5014 [kerneltrap.org]

    Just because I'm the type of person who uses version control and often have access to high-speed internet and large hard drives doesn't mean I'm /ALWAYS/ in a situation where I have a lot of bandwidth and hard disk space at my disposal. When I'm in a resource-limited situation, I still like to be able to check in/out, do other "what went wrong?" type of things without using a second system.

    As for "give it a try", I did, but very early in its history so I don't think enough niceties were there at the time.
    Mostly, it comes down to: sometimes my SVN repository grows very quickly due to all the sometimes unneeded history. There's no "svn obliterate", so we just put up with it. This causes (actual, not hypothetical) storage issues even with centralized version control. I wouldn't want to multiply this problem by the number of developers, while adding bandwidth issues previously not dealt with because some things they just didn't care about, just to allow them access to histories /when they wanted it/.
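
    There is no obliterate, but the standard (if clumsy) workaround is to rebuild the repository through svndumpfilter; paths here are made up:

    > svnadmin dump /var/svn/repo > repo.dump
    > svndumpfilter exclude big/generated/stuff < repo.dump > slim.dump
    > svnadmin create /var/svn/repo-slim
    > svnadmin load /var/svn/repo-slim < slim.dump

    Everyone then has to check out fresh from the new repository, which is exactly the kind of disruption that makes people "just put up with it".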
