
OpenShot Video Editor Reaches Version 1.0

An anonymous reader writes "After only one year of development, Jonathan Thomas has released version 1.0 of his impressive NLE for Linux. Based on the MLT Framework, OpenShot Video Editor has taken less time to reach this stage of development than any other Linux NLE. Dan Dennedy of Kino fame has also lent a helping hand, ensuring that OpenShot has the stability and proven back-end that is needed in such a project."
  • Re:Yes but... (Score:2, Informative)

    by Max(10) ( 1716458 ) on Saturday January 09, 2010 @09:20PM (#30711344)

    "Is this one usable, unlike the other ones for linux?"

    IMO, it already features [openshotvideo.com] everything that most people will ever need [youtube.com] and it seems quite stable, too, but I prefer Kdenlive [kdenlive.org].

  • by Anonymous Coward on Saturday January 09, 2010 @09:26PM (#30711394)
    NLE = NonLinear Editor, MLT = Media Lovin' Toolkit, and TLA = Three Letter Acronym
  • by RobertLTux ( 260313 ) <robert AT laurencemartin DOT org> on Saturday January 09, 2010 @09:45PM (#30711548)

    Throw that question at Wikipedia for the full details, but in an NLE program you can do things like grab a clip from 2:45 to 5:32 in a 3-hour recording without actually making a copy until you are done (and this can be accurate down to the frame level), sort of like they used to do with film reels but without the nasty problem of physically cutting the film.
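    To make the idea concrete, here is a minimal sketch (hypothetical class and file names, not OpenShot's actual API): an NLE just stores lightweight references into the source files, and nothing is decoded or copied until export.

        from dataclasses import dataclass

        @dataclass
        class Clip:
            """A lightweight reference into a source file; no media is copied."""
            source: str   # path to the (possibly huge) source file
            start: float  # in-point, seconds from the start of the source
            end: float    # out-point, seconds from the start of the source

        # Trimming is pure bookkeeping: the 3-hour file on disk is untouched.
        timeline = [Clip("three_hour_recording.dv", start=165.0, end=332.0)]  # 2:45-5:32

        def export(timeline):
            # Only here would the referenced ranges actually be decoded,
            # joined, and re-encoded into a new file.
            for clip in timeline:
                print(f"render {clip.source} [{clip.start:.0f}s..{clip.end:.0f}s]")

        export(timeline)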

  • Re:Yes but... (Score:3, Informative)

    by amiga3D ( 567632 ) on Saturday January 09, 2010 @10:31PM (#30711908)
    I must be doing something wrong; I can't get Kdenlive to crash. Cinelerra did crash on me a couple of times.
  • by cheesybagel ( 670288 ) on Saturday January 09, 2010 @10:41PM (#30711970)

    People were used to film and analog video tape editing systems. The simplest editing setup for video on, say, VHS was two video decks: one for playing, the other for recording. You had to wind/rewind the source tape, press play on the source deck, wait for the right moment to press the record button on the destination deck, and so on. It was a pain.

    There were more sophisticated editing systems, but it was difficult to get frame-accurate editing even then. You needed an embedded timecode in the video signal; some camcorders came with this built in. You also needed special video decks that ensured frame accuracy. Some decks came with a jog/shuttle wheel for easier editing control.

    Initial software video editing systems did not store the video on the computer; computers were too slow and had too little storage for that. I mean, can you remember 20 MB hard disks being standard? Imagine storing and playing back video on a system like that. Or worse. Just not feasible, especially when a VHS tape could store something like four hours of video.

    So the software just controlled the tape decks. The tape still needed to wind/rewind, so this was not a non-linear editing system. The term NLE only started being used once you could actually store the video on the computer itself.

  • Re:Feaking Sweet! (Score:5, Informative)

    by __aasqbs9791 ( 1402899 ) on Sunday January 10, 2010 @12:04AM (#30712420)

    I just installed it on my Ubuntu 9.10 system and threw together some short clips I had lying around. Not only did it work exactly the way I expected, but when I exported them in a couple of different formats it was very fast. (I tried Kino a while back, and not only did it take a long time to import clips, the export was also very slow.) I'm really glad I read this story today.

  • by Purity Of Essence ( 1007601 ) on Sunday January 10, 2010 @12:31AM (#30712542)

    All these replies miss the mark.

    Before video there was film. Editing film means finding the strip of film with the shot you want, cutting it out, and splicing it with tape or cement to some other footage. That's what's meant by "cutting film" and is where the editing term "cut" comes from. A cut is the simplest form of edit. Clip by clip you splice together the story. You can start anywhere you want, but when it's done, the beginning of the movie is at one end, the head, and the end of the movie is at the other, the tail. Shot by shot your story plays out from beginning to end on your edited reel of celluloid. If you decide you want a shot between two others, you cut the splice between the two shots and splice the new strip of film between them. It's easy to understand and very flexible.

    When video came along, editing changed and things got very inflexible. It is not practical to splice video tape, because the image is not human readable and the video signal is too complex to make a simple, noise-free edit. The only way to edit video tape is to copy shots from a source tape to your master tape, assembling the video from first shot to last, in order. If you make a mistake, you back up to the mistake and begin again. In video tape editing you can overwrite, but you can never insert. Once a shot is down it can't be shifted around in time. You can't insert a shot in the middle of an edited program without overwriting something. This is what is meant by linear editing.

    You've edited your 30 minute masterpiece. Every cut is perfect. It just needs one thing: 7 seconds of sunrise before the scene starting at the 10 minute mark. Inserting the shot means having to re-assemble the entire remaining 20 minutes. More than likely you'll decide to give up 7 seconds in a nearby shot to limit the amount of re-editing you'll have to do, or live without the shot.

    When computers came along, it became possible to control video tape decks and video switchers. Such a computer can be programmed with an edit decision list (EDL): your entire program described shot by shot, referencing source tapes and in/out times for each shot. With that information the computer can automatically assemble a video from source tapes in multiple decks. If you later decide you want to insert a shot between two others, you can change your EDL as easily as you would edit something in a word processor and tell the computer to assemble the entire video again, shot by shot, from start to finish. It's automated, but it's still linear.
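    As a rough illustration (a simplified stand-in, not a real CMX-style format), an EDL is just an ordered list of events, each naming a source tape plus in/out timecodes; the "word processor" edit above is nothing more than a change to that list before re-running the assembly:

        from dataclasses import dataclass

        @dataclass
        class EdlEvent:
            reel: str        # which source tape to load
            source_in: str   # timecode to start copying from
            source_out: str  # timecode to stop copying at

        edl = [
            EdlEvent("TAPE01", "00:02:45:00", "00:05:32:00"),
            EdlEvent("TAPE03", "00:10:00:00", "00:10:07:00"),  # inserted later: just a list edit
        ]

        def assemble(edl):
            # The edit controller re-runs the whole program, shot by shot,
            # cueing each deck to the listed timecodes -- automated, but still linear.
            for number, event in enumerate(edl, 1):
                print(f"event {number:03}: cue {event.reel} "
                      f"{event.source_in}..{event.source_out}, record")

        assemble(edl)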

    Today, with digital video, we can easily and inexpensively import video into our computer editing systems. We can cut it up, arrange it, and rearrange it as much as we want, in realtime. It's a lot more like working with film, but much faster and more powerful. These editing systems have completely removed the linear aspect of traditional video editing, and this is the reason we call them non-linear editors.

  • Re:Yes but... (Score:4, Informative)

    by Kjella ( 173770 ) on Sunday January 10, 2010 @03:45AM (#30713240) Homepage

    There are really only two codecs to speak of, IMO: MPEG-2 (HDV in, DVD out) and H.264 (AVCHD in; online or Blu-ray out), plus the simpler intra-only DV codec from MiniDV tapes. However, neither MPEG-2 nor H.264 is trivial to edit in its most effective form, and there are a lot of optional encoding modes to cover on top of that.

    For example, MiniDV is quite easy because every frame is stored independently, but both HDV and AVCHD use IPB encoding [wikipedia.org], which is really nasty to edit. You can't just cut the video stream at arbitrary points; you may need frames from both before and after the cut point to decode it. You can't jump to a random frame either; you must find the nearest I-frame and work your way forward from there. That creates a lot of complexity: to get frame-accurate editing you must keep a whole different set of indexes than the one the user sees, plus a lot of decode logic to produce only the intended frames while discarding the extras, and so on.
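    A sketch of the seek-back bookkeeping that implies (hypothetical helper names, and it ignores B-frame reordering for simplicity): to display frame N, you jump to the nearest preceding I-frame and decode forward, discarding everything before N.

        import bisect

        def frame_accurate_seek(iframe_index, decode_from, target):
            # iframe_index: sorted frame numbers of the I-frames (a second,
            # hidden index alongside the frame numbers the user sees).
            position = bisect.bisect_right(iframe_index, target) - 1
            gop_start = iframe_index[position]
            # Decode forward from the I-frame, discarding frames until the target.
            for frame_number, frame in decode_from(gop_start):
                if frame_number == target:
                    return frame

        # Toy decoder standing in for the real thing:
        def decode_from(start):
            frame_number = start
            while True:
                yield frame_number, f"<decoded frame {frame_number}>"
                frame_number += 1

        print(frame_accurate_seek([0, 12, 24, 36], decode_from, target=30))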

    Pro editing tools DO have this mostly sorted out, so if you're going for the "no tool is perfect, therefore the OSS tools are as good as the commercial tools" argument, it fails. It's not that there are many combinations that are really useful; it's that the few most important ones are really, really hard to do right. The decoding libs have this straight: I never have a problem playing back MPEG-2 or H.264. But there sure is a problem editing them.

  • by Pecisk ( 688001 ) on Sunday January 10, 2010 @06:29AM (#30713678)

    "It looks like the author of this program spent(wasted?) a lot of time trying to use Gstreamer as the back-end for his project but basically ran into a brick wall."

    He didn't run into a brick wall; he just felt that MLT would serve his project better, and he lacked the initiative to communicate with the GStreamer/Gnonlin people (I have done it many times, and I can say the GStreamer guys are the most accessible in the Linux multimedia playground). The problem is also that GStreamer and Gnonlin are complex for beginners who just want to drop in some code and go. They actually require insight and planning your app around the framework. Some devs don't like that. Well, it's their choice.
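    To give a feel for what "planning your app around the framework" means, here is about the smallest useful GStreamer pipeline from Python (a sketch using the modern GObject-introspection bindings; videotestsrc and autovideosink are standard elements): you describe a graph and then hand control to the framework, rather than calling decode functions yourself.

        import gi
        gi.require_version("Gst", "1.0")
        from gi.repository import Gst

        Gst.init(None)

        # Describe a processing graph; GStreamer owns the dataflow from here on.
        pipeline = Gst.parse_launch("videotestsrc num-buffers=120 ! autovideosink")
        pipeline.set_state(Gst.State.PLAYING)

        # Block until the stream finishes or errors out.
        bus = pipeline.get_bus()
        bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                               Gst.MessageType.EOS | Gst.MessageType.ERROR)
        pipeline.set_state(Gst.State.NULL)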

    "If I remember correctly the developers of another Linux NLE called diva finally gave up on Gstreamer after years of struggling with it and subsequently abandoned their project altogether. Didn't the Diva developers also clash with the Gstreamer developers?"

    First, Diva was written in C#, which is not exactly a powerhouse, and it was written at a time when Gnonlin wasn't fully developed and wasn't ready for prime time. They also rewrote a lot of stuff internally, and in the end, IMHO, it was scrapped because of Novell's financial problems. And I really didn't see them clash with the GStreamer guys.

    "So it appears that the above developers put a lot of effort in writing Linux NLE's using Gstreamer but still ultimately failed at their attempts. Is there something inherently flawed with Gstreamer/Gnonlin? If Video software using Gnonlin as its back-end(Pitivi) can only be written by its author(Edward Hervey), Gstreamer must be too cryptic for mere mortal programmers. I wonder if anything formidable will ever come of Pitivi."

    Gnonlin is used in at least one other media editor that uses GStreamer as its backend: Jokosher. I have been personally involved in it and can say only kind words about Edward. Sometimes he is sharp, but he helped with more or less every problem we came across using Gnonlin. Jokosher was glitchy for some time too, but the last few releases have been quite stable.

    And most important: Pitivi has serious commercial backing now, and there are four core coders (including Edward, of course), all paid by commercial entities, writing it. I really put my money on GStreamer stuff and apps because of the long-term strategy the GStreamer community and app devs have. They are serious about what they are doing.

    "Gstreamer must be too cryptic for mere mortal programmers"

    Well, I know hundreds of commercial coders who develop GStreamer solutions for today's systems, like TVs, DVRs, mobile phones, etc. They must be zombies, because mortals can't handle it. Yeah, right :)

  • by RAMMS+EIN ( 578166 ) on Sunday January 10, 2010 @12:03PM (#30714816) Homepage Journal

    ``This thread made me read up on video compression, and I can now articulate more precisely why my favorite video codec is Motion-JPEG - It uses 100% I-frames, which makes editing easy, and which makes fast motion scenes look better than codecs which use P and B frames. The only downside is that Motion JPEG doesn't offer the best compression, but it's still reasonably sized.''

    For some value of "reasonably sized", I'm sure. But you are including a lot of redundant information in your stream if you represent each frame independently (which is what I frames do). By storing only the differences between frames (which is what B and P frames do), you can reduce the amount of data without losing any information. To achieve the same reduction using MJPEG, you would have to reduce the quality of your frames a lot. In short: if you use only I frames, you get larger files, reduced quality, or both.

    The reason you observe that fast motion scenes look better using MJPEG than using other codecs you have tried is not that the other codecs use B and/or P frames, but that they are throwing away too much information. What is likely happening is that they have a limited bit budget per frame, which is enough to encode scenes with few changes between frames, but not enough to encode scenes with many changes between frames. The solution, then, isn't to use only I frames (that would probably make the problem even worse!), but to allow more bits per frame for frames that require it.

    A little thought experiment to make it all a little easier to understand: suppose you have two frames that are very similar. Given the choice between storing each frame independently (I frames) or storing one frame completely (I frame) and the other as a diff against it (B or P frame), I think it should be clear that the latter will allow for a better bits:quality ratio. If you are only allowed to store full frames, you will have to sacrifice quality, increase the number of bits, or both. So allowing frames to be encoded as B or P frames is always a good idea. In those cases where it isn't beneficial, you can always still use I frames.
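    A quick way to put numbers on that thought experiment (a toy sketch: zlib stands in for a real codec's entropy coder, and a real frame pair would be natural images rather than noise): compress the second frame whole, then compress only its difference from the first.

        import zlib
        import numpy as np

        rng = np.random.default_rng(0)
        frame1 = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)

        # Second frame: identical except for one small changed region.
        frame2 = frame1.copy()
        frame2[100:120, 200:240] ^= 0xFF

        whole = zlib.compress(frame2.tobytes())             # "I-frame": stored independently
        delta = zlib.compress((frame2 - frame1).tobytes())  # "P-frame": only the change

        print(f"whole frame: {len(whole)} bytes, diff: {len(delta)} bytes")
        # The diff is almost all zeros, so it compresses to a tiny fraction
        # of the whole frame at identical reconstruction quality.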

"I've seen it. It's rubbish." -- Marvin the Paranoid Android

Working...