Does Anyone Make a Photo De-Duplicator For Linux? Something That Reads EXIF? 243

postbigbang writes "Imagine having thousands of images on disparate machines. Many are dupes, even across those machines. It's impossible to delete all the dupes manually and build a single, accurate photo image base. Is there an app out there that can scan a file system, perhaps a target sub-folder system, and suck in the images-- WITHOUT creating duplicates? Perhaps by reading EXIF info or hashes? I have eleven file systems saved, and the task of eliminating dupes seems impossible."
  • exactly what you mean by deduplication is kind of vague, but whatever you decide on, it could probably be done in a hundred lines of perl (using CPAN libraries of course).

    • by Anonymous Coward on Thursday January 23, 2014 @06:38PM (#46051341)

      ExifTool is probably your best start:

      http://www.sno.phy.queensu.ca/~phil/exiftool/

      • by shipofgold ( 911683 ) on Thursday January 23, 2014 @07:01PM (#46051619)

        I second exiftool. Lots of options to rename files. If you rename files based on create time and perhaps other fields like resolution, you will end up with unique filenames, and then you can filter the duplicates.

        Here is a quick command which will rename every file in a directory according to createDate

          exiftool "-FileName<CreateDate" -d "%Y%m%d_%H%M%S.%%e" DIR

        If the files were all captured with the same device it is probably super easy since the exif info will be consistent. If the files are from lots of different sources...good luck.
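
        A hedged follow-up, based on exiftool's documented renaming examples: adding a copy number to the date format keeps files whose timestamps collide instead of skipping them, and any numbered copy can then be checked against its namesake with cmp to confirm it really is a byte-for-byte duplicate. The filenames in the cmp line are only an illustration.

          # keep colliding names by appending a copy number (e.g. 20140123_183000.jpg, 20140123_183000-1.jpg, ...)
          exiftool "-FileName<CreateDate" -d "%Y%m%d_%H%M%S%%-c.%%e" DIR

          # a "-1" copy that compares equal to its namesake is a true duplicate
          cmp 20140123_183000-1.jpg 20140123_183000.jpg && echo duplicate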

      • Re: (Score:3, Informative)

        by Anonymous Coward
        I use VisiPics for Windows. It's free software that actually analyses the content of images to find duplicates. This works very well because images may not have EXIF data, or the same image may exist in different file sizes or formats.

        I don't know if it will work under Wine, but it's worth a try.
        • I use VisiPics for Windows. It's free software that actually analyses the content of images to find duplicates. This works very well because images may not have EXIF data, or the same image may exist in different file sizes or formats.
          I don't know if it will work under Wine, but it's worth a try.

          Visipics is the only tool I have ever found that will reliably use image matching to dedupe; it is Windows only but I have used it on my own collections & it works very well indeed: http://www.visipics.info/ [visipics.info]

          Now (v1.31) understands .raw as well as all other main image formats & can handle rotated images; brilliant little program!

          • I've used Duplicate Photo Finder (http://www.duplicate-finder.com/photo.html) for a while, but VisiPics looks like it's probably better. That said, I have tested it, and Duplicate Photo Finder worked for me with WINE.
      • by niftymitch ( 1625721 ) on Thursday January 23, 2014 @11:31PM (#46053421)

        ExifTool is probably your best start:

        http://www.sno.phy.queensu.ca/~phil/exiftool/

        find . -print0 | xargs -0 md5sum | sort -flags | uniq -flags

        There are flags in uniq to let you see pairs of identical md5sums as a pair.

        With multiple machines, drag the full checksum file over to the next machine and concatenate it with the
        local file....

        Yes, EXIF helps, but some editors carry over the EXIF data from the original...
        The serious might cmp files as well before deleting.
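
        For what it's worth, a concrete set of flags for the pipeline above (a minimal sketch, assuming GNU coreutils):

        # -w32 makes uniq compare only the 32-character md5 field;
        # --all-repeated=separate prints each group of identical hashes, blank-line separated
        find . -type f -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate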

    • Imagine tons of iterative backups of photos. Generations of backups. Now they need consolidation. Something that can look at file systems and vacuum up the files-- but only one of each photo, even if there are many copies of that photo, as in myphoto(1).jpg, etc.

      • General case (Score:5, Informative)

        by xaxa ( 988988 ) on Thursday January 23, 2014 @07:00PM (#46051603)

        For the general case (any file), I've used this script:


        #!/bin/sh

        OUTF=rem-duplicates.sh;

        echo "#! /bin/sh" > $OUTF;

        find "$@" -type f -print0 |
            xargs -0 -n1 md5sum |
                sort --key=1,32 | uniq -w 32 -d --all-repeated=separate |
                    sed -r 's/^[0-9a-f]*( )*//;s/([^a-zA-Z0-9./_-])/\\\1/g;s/(.+)/#rm \1/' >> $OUTF;

        chmod a+x $OUTF; ls -l $OUTF

        It should be straightforward to change "md5sum" to some other key -- e.g. EXIF Date + some other EXIF fields (one possible variant is sketched below).

        (Also, isn't this really a question for superuser.com or similar?)
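
        Following on from that suggestion, the find/md5sum/uniq stages of the script above could become something like this; a hedged sketch that assumes exiftool is installed, and the particular tags (DateTimeOriginal, Model, ImageSize) are only an illustration:

        # key = hash of a few EXIF fields rather than of the file content;
        # output stays in "hash  filename" form so the sed/rm stage above still applies
        find "$@" -type f | while read -r f; do
            key=$(exiftool -s3 -DateTimeOriginal -Model -ImageSize "$f" | md5sum | cut -d' ' -f1)
            printf '%s  %s\n' "$key" "$f"
        done | sort | uniq -w 32 -d --all-repeated=separate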

      • by vux984 ( 928602 )

        If the files are in fact identical internally, just backups and backups of backups then it should be pretty straightforward.

        Simplest would be to:

        start with an empty destination

        Compare each file in the source tree(s) against each file in the destination by filesize in bytes; if there is a match, do a file compare using cmp. Copy it to the destination if it doesn't match anything, otherwise move on to the next file. Seems like something that would take 10-20 lines of command line script tops. Its a one
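
        A minimal sketch of that copy-if-nothing-identical-exists loop (assumes GNU stat/find; the SRC and DEST paths are placeholders):

        #!/bin/sh
        SRC=/path/to/sources
        DEST=/path/to/destination
        find "$SRC" -type f | while read -r f; do
            size=$(stat -c%s "$f")                    # size in bytes (GNU stat)
            dup=$(find "$DEST" -type f -size "${size}c" | while read -r d; do
                cmp -s "$f" "$d" && { echo yes; break; }   # byte-for-byte confirmation
            done)
            # copy only if nothing identical is already there (note: this flattens names)
            [ -z "$dup" ] && cp "$f" "$DEST/"
        done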

    • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Thursday January 23, 2014 @06:40PM (#46051377)

      If they are identical then their hashes should be identical.

      So write a script that generates hashes for each of them and checks for duplicate hashes.

    • by thorgil ( 455385 )

      or python, using 10 lines.

    • by nemesisrocks ( 1464705 ) on Thursday January 23, 2014 @06:47PM (#46051473) Homepage

      whatever you decide on, it could probably be done in a hundred lines of perl

      Funny you mention perl.

      There's a tool written in perl called "findimagedupes" in Debian [debian.org]. Pretty awesome tool for large image collections, because it could identify duplicates even if they had been resized, or messed with a little (e.g. adding logos, etc). Point it at a directory, and it'll find all the dupes for you.

      • by msobkow ( 48369 ) on Thursday January 23, 2014 @07:12PM (#46051739) Homepage Journal

        Why do I have this sneaking suspicion it runs in exponential time, varying as the size of the data set...

        From what this user is talking about (multiple drives full of images), they may well have reached the point where it is impossible to sort out the dupes without one hell of a heavy hitting cluster to do the comparisons and sorting.

        • What you want is a first pass which identifies some interesting points in the image, similar to Microsoft's Photosynth. Then you can compare this greatly simplified data for similar sets of points, allowing you to ignore the effects of scaling or cropping.

          A straight hash won't identify similarities between images, and would be totally confused by compression artefacts.

          • What you want is a first pass which identifies some interesting points in the image.

            There is an algorithm for that called SIFT (scale-invariant feature transform), but it's patented and apparently unavailable for licensing in free software.

        • Re: (Score:3, Informative)

          Why do I have this sneaking suspicion it runs in exponential time, varying as the size of the data set...

          It's actually pretty nifty how findimagedupes works. It creates a 16x16 thumbnail of each image (it's a little more complicated than that -- read more on the manpage [jhnc.org]), and uses this as a fingerprint. Fingerprints are then compared using an algorithm that looks like O(n^2).

          I doubt the difference between O(2^n) and O(n^2) would make a huge impact anyway: the biggest bottleneck is going to be disk read and seek time, not comparing fingerprints. It's akin to running compression on a filesystem: read speed is an order of magnitude slower than the compression.

          • Why do I have this sneaking suspicion it runs in exponential time, varying as the size of the data set...

            It's actually pretty nifty how findimagedupes works. It creates a 16x16 thumbnail of each image (it's a little more complicated than that -- read more on the manpage [jhnc.org]), and uses this as a fingerprint. Fingerprints are then compared using an algorithm that looks like O(n^2).

            I doubt the difference between O(2^n) and O(n^2) would make a huge impact anyway: the biggest bottleneck is going to be disk read and seek time, not comparing fingerprints. It's akin to running compression on a filesystem: read speed is an order of magnitude slower than the compression.

            O(n^2) vs O(2^n) is a huge difference even for very small datasets (hundreds of pictures).
            You have to read all the images and generate the hashes, but that's Theta(n).
            Comparing one hash to every other hash is Theta(n^2).

            If the hashes are small enough to all live in memory (or enough of them that you can intelligently juggle your comparisons without having to wait on the disk too much), then you'll be fine for tens of thousands of pictures.
            But photographers can take thousands of pictures per shoot, hundreds of

              • O(n^2) vs O(2^n) is a huge difference even for very small datasets (hundreds of pictures).

              Hopefully it's actually something like O(p * 2^n) vs O(p * n^2) where n is the thumbnail size and p is the number of images.

        • I've used findimagedupes. IIRC, it rescales each image to a standard size (64x64 or something), then filters and normalizes it down to a 1-bit-depth image.

          It then builds a database of these 'hashes'/'signatures' and can output a list of files that have a threshold of bits in common.

          That's how it can ignore small changes, it loses most detail and then can ignore a threshold of differences.

          It would fail if an image was cropped or rotated, for instance. It could handle picture orientation if it was modified
      • The real answer is to make a hash over the image content. The ImageHash python package [python.org] comes with a program to discover duplicate images. It is more powerful than what is needed here: it can find images that look similar (different format, resolution, etc.).

        I think the ImageHash package uses a better algorithm than findimagedupes (description here [github.com], actually you can choose between several), and is shorter in code.

    • Why use perl when a bash script will do?
    • I wrote a file deduplicator. Build a table of file size ---> name. If two files have the same size, run md5sum on them or just use cmp -s. It's a trivial program.

      But if you have photos which you consider duplicates but which have different sizes or checksums, then it's a visual gig and lots of boring, tedious work.

  • fdupes -rd (Score:5, Informative)

    by Anonymous Coward on Thursday January 23, 2014 @06:37PM (#46051337)

    I've had the same problem as I stupidly try to make the world a better place by renaming or putting them in sub-directories.

    fdupes will do a bit-wise comparison. -r = recurse. -d = delete.

    fdupes would be the fastest way.

    • by Xolotl ( 675282 )

      fdupes is excellent and I second that (please mod the parent up!)

      The only drawback to fdupes is that the files must be bit-for-bit identical, so two otherwise-identical images where one has some additional metadata (e.g. inside the EXIF) won't be deduplicated.

  • fslint (Score:3, Informative)

    by innocent_white_lamb ( 151825 ) on Thursday January 23, 2014 @06:40PM (#46051385)

    fslint is a toolkit to find all redundant disk usage (duplicate files
    for e.g.). It includes a GUI as well as a command line interface.

    http://www.pixelbeat.org/fslin... [pixelbeat.org]

  • by Oceanplexian ( 807998 ) on Thursday January 23, 2014 @06:41PM (#46051399) Homepage
    I would try running all the files through ssdeep [sourceforge.net].

    You could script it to find a certain % match that you're satisfied with. The only catch is that it could be a very time-intensive process to scan a huge number of files. EXIF might be a faster option which could be cobbled together in Perl pretty quickly, but that wouldn't catch dupes that had their EXIF stripped or have slight differences due to post-processing.
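
    A rough sketch of that scripting (hedged: only ssdeep's -r and -m options are used, and the paths are placeholders):

    # record fuzzy hashes for the whole tree, then match the tree against that list;
    # each reported match carries a similarity score (100 = effectively identical),
    # and a file will also match its own entry, so those lines can be ignored
    ssdeep -r /photos > fuzzy.txt
    ssdeep -m fuzzy.txt -r /photos
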
  • I'm pretty sure I wrote something like this in perl/bash in like 20 minutes.
    1 - do an md5sum of each file and toss it in a file
    2 - sort
    3 - perl (or your language of choice) program, basically:
    sum = "a"
    newsum = next line
    if newsum == sum delete file
    else sum = newsum
    • Re: (Score:3, Insightful)

      by Cummy ( 2900029 )

      Why do people on this site believe that everyone who is interested in tech is a programmer? This "just write it" is foolishness of the highest order. For many of us non-programmers, "just write it" is like telling someone living in Florida to "just build a plane and fly to that concert in Vienna after work tomorrow". If that seems like a ridiculous ask, then so is asking a person without the skill to write a script for that. So it can be done in 20 minutes; use that 20 minutes to help someone by writing the p

      • Re: (Score:2, Informative)

        This "just write it" is foolishness of the highest order. For many of us non-programmers, "just write it" is like telling someone living in Florida to "just build a plane and fly to that concert in Vienna after work tomorrow".

        Computer literacy used to involve typing a terminal command. All the PC folks in the 80's and 90's did it. I can't be fucked to care if folks are too stupid to learn how to use their computers. If you can't "write it yourself" in this instance, which amounts to running an operation across a set of files, then sorting the result, then you do not know how to use a computer. You know how to use some applications and input devices. It's a big difference.

        This is part of the reason Windows is successful: think of a problem, and there is likely a program out there that solves it already, and if there isn't one, someone will soon write one

        Which is why it's a nightmare to administer windows.

        • by kesuki ( 321456 )

          "some of those would have adware, some would have malware. At least the ones in the FLOSS repositories wouldn't."

          Repositories are a layer of security, yet malware repos are widely promoted on some so-called help websites for things like playing back movies, configuring firewalls, etc. Also, trusted repos are sometimes compromised, like http://www.techrepublic.com/blog/linux-and-open-source/linux-repository-hit-by-malware-attack/2989/ [techrepublic.com]
          I remember one site (no link, as I forgot where I found it) was a gu

        • by tftp ( 111690 )

          Computer literacy used to involve typing a terminal command. All the PC folks in the 80's and 90's did it.

          Yes, all the 0.07% of the population. The rest were afraid of the computer, for good reason. Back then computers were not very useful unless you were a programmer, or your specific need was covered (MS Word, Excel, WP).

          If you can't "write it yourself" in this instance, which amounts to running an operation across a set of files, then sorting the result, then you do not know how to use a computer

        • Image deduplication is a much harder problem than you (and many of the posters here) seem to think. It's certainly not terrifically hard, but it's not as simple as comparing file size and content hash.
          What if the image was resized?
          What if a watermark was added?
          What if the image was saved in a different format, eg PNG and JPEG?
          What if the image had its lighting curves adjusted?
          etc.
          You may still want to find these duplicates, but size/hash methods will fail.
          The findimagedupes tool works well in most of these
      • by jedidiah ( 1196 )

        > This is part of the reason Windows is successful: think of a problem, there is likely program out there that solves it already

        No. Windows is successful because it's the followup to a product that already owned the market: MS-DOS.

        Now you want to talk about nasty user hostile shit, MS-DOS has "script it yourself" Unix beat by a wide margin.

      • "Why do people on this site believe that everyone who is interested in tech is a programmer?"

        A Bash one-liner or even a 100-line script doesn't make you a programmer.

        On the other hand, if asked "how do I move this car from here to a town 100 miles away", the answer is "the cheapest and most efficient way is for you to drive it there", and whining "why do people on this site believe that I should learn to drive" is just that: whining.

        Oh, and learning to drive will help you a lot of times, not only on this task

    • Step one is to compare file sizes. Since file sizes need to be identical in order for the files to be identical, and file sizes are already calculated and stored as metadata, this will greatly reduce the time needed.

      1. List all files with their respective sizes.
      2. Sort
      3. For each consecutive file in the list with the same size as the previous file, compare the MD5 hashes.
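
      A minimal sketch of those three steps (assuming GNU find, coreutils and xargs; /photos is a placeholder):

      # 1 + 2: list "size path" for every file, sorted numerically by size
      find /photos -type f -printf '%s %p\n' | sort -n > sizes.txt
      # 3: hash only files whose size occurs more than once, then group identical hashes
      awk '{print $1}' sizes.txt | uniq -d | while read -r s; do
          grep "^$s " sizes.txt | cut -d' ' -f2- | xargs -d'\n' md5sum
      done | sort | uniq -w32 -D
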
  • Geeqie (Score:5, Informative)

    by zakkie ( 170306 ) on Thursday January 23, 2014 @06:42PM (#46051425) Homepage

    Works excellently for this.

    • +1. The reason: it has a fuzzy-matching dedupe feature. It'll crawl all your images, then show them grouped by similarity and let you choose which ones to delete. It seems to do a pretty good job with recompressed or slightly cropped images.

      Open it up, right click a directory, Find Duplicates Recursive.

      fdupes is also good to weed out the bit-for-bit identical files first.

  • by Jody Bruchon ( 3404363 ) on Thursday January 23, 2014 @06:43PM (#46051443)
    fdupes will work and is faster than writing a homemade script for the job. The big problem is "across multiple machines", which might require use of, say, sshfs to bring all the machines' data remotely onto one temporarily for duplicate scanning. fdupes checks sizes first, and only then starts trying to hash anything, so obvious non-duplicates don't get hashed at all. Significant time savings. Across multiple machines, another option is using md5deep to build recursive hash lists (a sketch follows after this comment).

    The only tool so far that I've used for image duplicate finding that checks CONTENT rather than doing bitwise 1:1 duplicate checking is GQview on Linux. It works fairly well; though it's a bit dated by now, it's still a good viewer program. Add -D_FILE_OFFSET_BITS=64 to the CFLAGS if you compile it yourself on a 32-bit machine today though.
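
    The md5deep sketch mentioned above (hedged: only md5deep's -r recursion is assumed, and the paths are placeholders):

    # on each machine: one recursive hash list, named after the host
    md5deep -r ~/Pictures > "$(hostname).md5"
    # after copying the .md5 lists onto one machine, any hash appearing more than once is a dupe
    sort *.md5 | uniq -w32 -D
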
    • by rwa2 ( 4391 ) * on Thursday January 23, 2014 @07:04PM (#46051655) Homepage Journal

      Yeah, this Ask Slashdot should really be about teaching people how to search for packages in aptitude or whatever your package manager is...
      Here are some others:

      findimagedupes
      Finds visually similar or duplicate images
      findimagedupes is a commandline utility which performs a rough "visual diff" to
      two images. This allows you to compare two images or a whole tree of images and
      determine if any are similar or identical. On common image types,
      findimagedupes seems to be around 98% accurate.
      Homepage: http://www.jhnc.org/findimaged... [jhnc.org]

      fslint :

      kleansweep :
      File cleaner for KDE
      KleanSweep allows you to reclaim disk space by finding unneeded files. It can
      search for files basing on several criterias; you can seek for:
      * empty files
      * empty directories
      * backup files
      * broken symbolic links
      * broken executables (executables with missing libraries)
      * dead menu entries (.desktop files pointing to non-existing executables)
      * duplicated files ...
      Homepage: http://linux.bydg.org/~yogin/ [bydg.org]

      komparator :
      directories comparator for KDE
      Komparator is an application that searches and synchronizes two directories. It
      discovers duplicate, newer or missing files and empty folders. It works on
      local and network or kioslave protocol folders.
      Homepage: http://komparator.sourceforge.... [sourceforge.net]

      backuppc : (just in case this was related to your intended use case for some reason)
      high-performance, enterprise-grade system for backing up PCs
      BackupPC is disk based and not tape based. This particularity allows features
      not found in any other backup solution:
      * Clever pooling scheme minimizes disk storage and disk I/O. Identical files
          across multiple backups of the same or different PC are stored only once
          resulting in substantial savings in disk storage and disk writes. Also known
          as "data deduplication".

      I bet if you throw Picasa at your combined images directory, it might have some kind of "similar image" detection too, particularly since it sorts everything by EXIF timestamp.

      That said, I've never had to use any of this stuff, because my habit was to rename my camera image dumps to a timestamped directory (e.g. 20140123_DCIM ) to begin with, and upload it to its final resting place on my main file server immediately, so I know all other copies I encounter on other household machines are redundant.

  • Requires WINE but should work fine on Linux.

    http://www.anti-twin.com/ [anti-twin.com]
  • Under *buntu
    sudo apt-get install fdupes
    man fdupes:
    fdupes - finds duplicate files in a given set of directories
  • by MrEricSir ( 398214 ) on Thursday January 23, 2014 @06:53PM (#46051515) Homepage

    As a former Shotwell dev I might point out that most photo manager apps can do this.

  • In addition to the other methods (ZFS, fdupes, etc), I personally use git-annex.

    Git annex can even run on android, so I keep at least two copies of my photos spread throughout all of my computers and removable devices.
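
    A rough outline of that setup, using git-annex's standard workflow (the repo paths and remote names here are hypothetical); git-annex stores content keyed by checksum, so adding the same photo twice within a repo only stores it once:

    # on the first machine
    cd ~/photos && git init && git annex init "laptop"
    git annex add . && git commit -m "add photos"
    # on another machine: clone the repo, then sync the actual file contents
    git clone ssh://laptop/home/me/photos photos && cd photos
    git annex init "desktop"
    git annex sync --content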

  • See http://www.librelogiciel.com/s... [librelogiciel.com]

    I haven't modified or used it in years (I don't own a digital camera anymore...) so I don't know if it still works with up-to-date libraries, but its "--nodupes" option does what you want, and its numerous other command line options (http://www.librelogiciel.com/software/DigicaMerge/commandline) help solve the main problems of managing directories full of pictures.

    It's Free Software, licensed under the GNU GPL of the Free Software Foundation.

    Hoping this helps

  • by Khopesh ( 112447 ) on Thursday January 23, 2014 @07:07PM (#46051689) Homepage Journal

    This will help find exact matches by exif data. It will not find near-matches unless they have the same exif data. If you want that, good luck. Geeqie [sourceforge.net] has a find-similar command, but it's only so good (image search is hard!). Apparently there's also a findimagedupes tool available, see comments above (I wrote this before seeing that and had assumed apt-cache search had already been exhausted).

    I would write a script that runs exiftool on each file you want to test, removes the items that refer to timestamp, file name, path, etc., and makes an md5 of the rest.

    Something like this exif_hash.sh:

    #!/bin/sh
    for image in "$@"; do
        echo "`exiftool "$image" |grep -ve 20..:..: -e 19..:..: -e File -e Directory |md5sum` $image"
    done

    And then run:

    find [list of paths] -type f -print0 |xargs -0 exif_hash.sh |sort > output

    If you have a really large list of images, do not run this through sort. Just pipe it into your output file and sort it later. It's possible that the sort utility can't deal with the size of the list (you can work around this by using grep '^[0-8]' output |sort >output-1 and grep -v '^[0-8]' output |sort >output-2, then cat output-1 output-2 > output.sorted or thereabouts; you may need more than two passes).

    There are other things you can do to display these, e.g. awk '{print $1}' output |uniq -c |sort -n to rank them by hash.

    On Debian, exiftool is part of the libimage-exiftool-perl package. If you know perl, you can write this with far more precision (I figured this would be an easier explanation for non-coders).

  • fslint is the tool you are looking for.

  • This will find duplicate files in the general sense:
    http://packages.debian.org/sid... [debian.org]

  • fdupes.

    Done :)

  • by Anonymous Coward

    http://en.wikipedia.org/wiki/List_of_duplicate_file_finders

  • They use a database of hashes of kiddie porn to identify offending material without forcing anyone to look at the stuff. Seems like it would be easy to use Perl to crawl your filesystem and identify dupes.

    • Wrong. OSI explained to us that a person is "victimized" again every time someone looks at an image of them in child porn, and the hash of images is used so that they don't feel that pang in their stomach when an FBI investigator double-clicks 0FEDCABE1.jpg.

  • I wrote a shell script that looked at the datestamp for each photo and then moved it to a directory called YYYY/MM/DD (so 2000/12/25). I'm going off the assumption that there weren't two photos taken on the same day with the same filenames. So far that seems to be working.
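
    For what it's worth, exiftool (mentioned earlier in the thread) can do the same sort into dated directories in one pass; a hedged variant of an example from its documentation, assuming the shots carry DateTimeOriginal:

    # move each image into YYYY/MM/DD based on its EXIF date; existing files are not overwritten,
    # so name collisions surface as errors instead of silently clobbering anything
    exiftool -r '-Directory<DateTimeOriginal' -d %Y/%m/%d /path/to/photos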

  • by alantus ( 882150 ) on Thursday January 23, 2014 @09:08PM (#46052645)
    #!/usr/bin/perl
    # $Id: findDups.pl 218 2014-01-24 01:04:52Z alan $
    #
    # Find duplicate files: for files of the same size compares md5 of successive chunks until they differ
    #
    use strict;
    use warnings;
    use Digest::MD5 qw(md5 md5_hex md5_base64);
    use Fcntl;
    use Cwd qw(realpath);

    my $BUFFSIZE = 131072;  # compare these many bytes at a time for files of same size

    my %fileByName;         # all files, name => size
    my %fileBySize;         # all files, size => [fname1, fname2, ...]
    my %fileByHash;         # only with duplicates, hash => [fname1, fname2, ...]

    if ($#ARGV < 0) {
        print "Syntax: findDups.pl <file|dir> [...]\n";
        exit;
    }

    # treat params as files or dirs
    foreach my $arg (@ARGV) {
        $arg = realpath($arg);
        if (-d $arg) {
            addDir($arg);
        } else {
            addFile($arg);
        }
    }

    # get filesize after adding dirs, to avoid more than one stat() per file in case of symlinks, duplicate dirs, etc
    foreach my $fname (keys %fileByName) {
        $fileByName{$fname} = -s $fname;
    }

    # build hash of filesize => [ filename1, filename2, ... ]
    foreach my $fname (keys %fileByName) {
        push(@{$fileBySize{$fileByName{$fname}}}, $fname);
    }

    # for files of the same size: compare md5 of each successive chunk until there is a difference
    foreach my $size (keys %fileBySize) {
        next if $#{$fileBySize{$size}} < 1;  # skip filesizes array with just one file
        my %checking;
        foreach my $fname (@{$fileBySize{$size}}) {
            if (sysopen my $FH, $fname, O_RDONLY) {
                $checking{$fname}{fh}  = $FH;                # file handle
                $checking{$fname}{md5} = Digest::MD5->new;   # md5 object
            } else {
                warn "Error opening $fname: $!";
            }
        }
        my $read = 0;
        while (($read < $size) && (keys %checking > 0)) {
            my $r;
            foreach my $fname (keys %checking) {  # read buffer and update md5
                my $buffer;
                $r = sysread($checking{$fname}{fh}, $buffer, $BUFFSIZE);
                if (! defined($r)) {
                    warn "Error reading from $fname: $!";
                    close $checking{$fname}{fh};
                    delete $checking{$fname};
                } else {
                    $checking{$fname}{md5}->add($buffer);
                }
            }
            $read += $r;
            FILE1: foreach my $fname1 (keys %checking) {  # remove files without dups
                my $duplicate = 0;
                FILE2: foreach my $fname2 (keys %checking) {  # compare to each checking file
                    next if $fname1 eq $fname2;
                    if ($checking{$fname1}{md5}->clone->digest eq $checking{$fname2}{md5}->clone->digest) {
                        $duplicate = 1;
                        next FILE1;  # skip to next file
                    }
                }
    • I'm replying to you because one of my two solutions has the same name :)
      https://github.com/caluml/find... [github.com]

      I have another solution, written in Python. It is pretty efficient but very limited. It walks two folders, sorting files into bins according to size. If any bins match between the two folders, it does a hash once on each file in each bin and then compares them. That way, the files are not read repeatedly and hashes are only done if necessary. It could be sped up further by only doing partial file matches

      • by alantus ( 882150 )

        My script can save some I/O and CPU cycles, but it has to keep more files open at a time (it could run out of file descriptors in extreme cases).
        The script you describe must be shorter and easier to understand, but I would only use it for smaller files, where discarding duplicates before reading the whole file doesn't make a big difference.
        The next step is to create some UI that allows deleting duplicates easily.

        • I was using it where one of the directories was mounted over the network, so I didn't want to read the files unless I had to... a directory listing is a pretty cheap operation. One problem that I ran into was that Macs can add resource forks to some files, so if one of the folders was on a Mac you could have weird file sizes. For photos and pdfs and such, the resource fork is disposable so it was driving me nuts... some "unique" files were not unique at all.

    • by fisted ( 2295862 )
      what a long and convoluted pain.
      consider the POSIX shell variant [slashdot.org]
      • by alantus ( 882150 )

        It is long and convoluted in the same way that an airplane is long and convoluted compared with a bicycle ;)

  • DigiKam will do everything you want. It works by creating hashes. You set your level of similarity and digiKam will find the files. It can handle multiple locations, and even "albums" on removable media. If you have a lot of images it can be slow, but if you limit any particular search you can greatly improve performance. It is available for Linux and Windows both.
  • However it's fairly easy to do with a unix shell and only standard tools...
    Something along the lines of:

    find /path/to/pics -type f -print0 | xargs -0 md5sum | sort | while read hash rest; do
        if [ "$lasthash" = "$hash" ]; then
            echo "$rest"
        fi
        lasthash="$hash"
    done | while read dupe; do
        echo rm -- "$dupe"
    done

    That would, once the echo is removed, delete all files that are dupes (except one of each).
    Typed it right into the /. comment box, though, so it's probably wrong somewhere. Only intended t
  • I have used http://www.duplicate-finder.com/photo.html (MS Windows only) because I could not find anything on Linux with similar functionality. It does work very well; it can find similar but not identical images, such as the same picture saved in a different format or with different compression settings. It tends to slow down when working on directories with multiple thousands of images.
  • No need to roll your own. If the redundant files are identical (the
    problem as stated lets me assume that), use fdupes.

    "Searches the given path for duplicate files. Such files are found by
    comparing file sizes and MD5 signatures, followed by a byte-by-byte
    comparison.
    "

    It's fast, accurate, and generates a list of duplicate files to handle
    yourself - or automatically deletes all except the first of duplicate
    files found.

    I've used it myself with tens of thousands of pictures to exactly do
    what the OP wan
