Juggernaut GPL'd Search Engine
real bio pointed us to Juggernautsearch, which actually looks interesting. It's GPL'd. It can index 800 million pages every 3 months and deliver 10 million pages a day on a Pentium II. So I guess if you want to run your own Altavista, you can.
Re:Distributed effort ? (Score:1)
I'm wondering if this would be THE weak link in this idea. It seems to me that, given the speed at which the spidering engines work, you'd need a huge number of processors at the "server" level just to eliminate the doubles. I use a freeware tool which collects data from 10-12 different search engines and attempts to eliminate the duplicates, only to get the same page numerous times from differently sourced origin points, not even counting sites which are mirrored.
Any thoughts out there on how to solve this problem?
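One rough starting point (an illustration of the general technique, not anything these engines actually ship): canonicalize every URL before comparing, so trivial variants of the same address collapse to a single key. Something like:

    # Crude URL canonicalization for de-duplicating results.
    # Real engines need far more rules (mirrors, session IDs, redirects).
    from urllib.parse import urlsplit

    def canonical(url):
        parts = urlsplit(url)
        host = parts.netloc.lower()
        if host.startswith("www."):
            host = host[4:]              # treat www.foo.com and foo.com alike
        path = parts.path.rstrip("/") or "/"
        return (host, path)

    seen = set()
    def is_duplicate(url):
        key = canonical(url)
        if key in seen:
            return True
        seen.add(key)
        return False

That catches the easy cases (www vs. no-www, trailing slashes); mirrored sites are a much harder problem, since nothing in the URL tells you the content is identical.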
The end of PortalMania (Score:1)
These are all big-name, big-money companies living on borrowed time.
Just as every ISP provides DNS, mail and usenet services to their clients, the time is rapidly approaching where they will provide search/indexing services based on open industry standards. Products that integrate the search process into the OS, like Copernic or Apple's Sherlock are a clear indication of where the technology will go.
All it takes is a co-operatively networked "juggernaut search" system, the logical successor/complement to DNS, to topple the search/portal companies.
SoupIsGood Food
Actually, that's even better. (Score:1)
And most crawlers can easily be limited to a particular site, or set of sites. Even wget does that.
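For example (the URL and depth are placeholders), a recursive wget stays on the starting host by default and can be kept inside a single directory tree:

    # Recurse up to 5 levels, never ascending above /docs/;
    # wget won't wander off to other hosts unless told to.
    wget -r -l5 --no-parent http://www.example.com/docs/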
Re:Juggernaut: Ouch. (Score:1)
Moderate this down, please.
Re:Distributed effort ? One way how (Score:1)
The idea of a client reporting its crawling is interesting, but I have two issues with it. First, it's essentially what is being done with services like AllAdvantage or free ISPs: they monitor the sites you go to in order to build a database for advertisers, instead of for searching. The second issue is that net surfing would be bogged down considerably unless the project had high bandwidth.
One way someone could do this, though, is to create an open proxy server on a big pipe, which would log all the sites users went to. This would be voluntary, of course, and the database of sites could be added to the findings of a crawling bot.
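A sketch of the log-mining half (assuming the proxy is Squid and its native access.log format, where the requested URL is the seventh whitespace-separated field):

    # Harvest crawl seeds from a proxy access log (Squid native format assumed).
    def seed_urls(logfile):
        seeds = set()
        with open(logfile) as f:
            for line in f:
                fields = line.split()
                if len(fields) >= 7 and fields[6].startswith("http"):
                    seeds.add(fields[6])
        return seeds

The resulting seed set could then be merged into a conventional crawler's queue.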
Food for thought...
Re:Distributed effort ? One way how (Score:1)
Free, scalable search engines (Score:1)
- the IB project at Basis Systeme netzwerk [bsn.com],
- the former commercial products from PLS [pls.com] that AOL is now giving away,
- the ASF [etymon.com] project,
- the Webglimpse [webglimpse.net] pages,
- the pages for the mg [rmit.edu.au] system.
For a comprehensive presentation on the subject, see the searchtools [searchtools.com] site.
Okay, post-IPO, /. needs some changes... (Score:3)
I mean, really! This search engine hardly works at all, only the search part is free (and that's the no-brainer part of any search engine), it certainly doesn't index 800 million pages (I rarely got any results on any queries) and yet they still appear on here like some news item.
Did they pay Slashdot? Are they a major stockholder now? What's the deal? Or was a story once again posted without being checked first?
Give me seven million dollars, I'll double check my stories...
Re:Juggernaut: Ouch. (Score:1)
Puts me in mind of GPLTrans. "Tests have shown it's more reliable than Babelfish and InterTran," they claimed. Which was actually correct, provided you only want to translate the one sentence they tested it with :-).
Davenet (Score:1)
SoupIsGood Food
Re:Juggernaut: Ouch. (Score:2)
Clearly not obvious to the casual observer, and the entire page just doesn't reflect the claimed quality of the engine itself.
It's a botched launch, and right after GPLTrans too.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Juggernaut: Ouch. Google, yea!! (Score:2)
Google rocks!!
Re:Hideous (Score:1)
down to the second (Score:2)
bumppo
Re:Juggernaut: Ouch. Google, yea!! (Score:2)
No really. I can't even use my work's internal search engine anymore--I use Google, which finds more useful content in the public documentation than our lousy engine can find in the private!
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Re:Good Things To Come! - How many zillions (Score:1)
Not to mention the hardware expertise that comes with 10 years of upgrades.
There need to be more games on Linux; it helps draw in that next gen. of hackers (ones that will start using *real* OSes at 12). If you really want the tide to turn (faster, I guess), support Linux gaming.
Re:Make wild claims; get free /. publicity (Score:1)
We'd obviously love feedback in how well it scales since we rarely get such reports. It's an area that we'd like to improve (since many of the developers don't run "mini-Altavistas" themselves).
I haven't been able to check out the Juggernaut code since it's heavily slashdotted right now. But suffice to say, we'll be checking out whatever code they've made available to see if there are any interesting optimizations.
Dizz-net (Score:3)
http://www.dizz.net/ [dizz.net]
Basically, we need to get down exactly what to do and how to do it. More developers would be nice too...
Here's part of one of my messages on the list:
You can get on the list at http://www.egroups.com/group/dizz-net [egroups.com].
Filtering (Score:2)
Re:Filtering (Score:1)
Re:Fine Print (Score:1)
Ultimately, their method of business may change in unexpected ways. Let's say someone reverse-engineers their database. Suddenly their revenue stream will disappear (unless they have some sort of patent, but that's another story). So they'll have to make money on support and/or hosting the indexing/searching for people w/o the hardware.
Let's not look a gift horse in the mouth. Ultimately the community will derive benefit from this code, either through cross-pollination with projects like ht://Dig, or simply by getting people interested in the concept of an open-source version of large search engines.
Re:Make wild claims; get free /. publicity (Score:1)
I consider the claims of indexing 800 million pages to be somewhat exaggerated, since no one else is doing this at the moment, simply because the hardware required is so expensive. The figure is no doubt based entirely on an "estimate" performed on a much smaller sample and extrapolated, and these are often wrong.
Re:Distributed effort ? (Score:1)
An extract:
Databases, by definition, mean dealing with huge amounts of data, but they often have very small computational requirements (although this is not always the case). The bottleneck for database operations usually isn't CPU horsepower but disk bandwidth, which means that distributed.net would be ill suited to help.
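A quick back-of-the-envelope shows why (the page size is a guess; both figures are illustrative):

    # Why crawling/indexing is I/O-bound, not CPU-bound: rough numbers.
    pages = 800 * 10**6
    bytes_per_page = 10 * 1024            # assume ~10 KB of text per page
    seconds = 90 * 24 * 3600              # three months
    rate = pages * bytes_per_page / seconds
    print(rate / 1024, "KB/s sustained")  # roughly 1 MB/s, around the clock

That's about 8 TB of raw text per pass: trivial for a CPU to chew through, but a serious, continuous load on disks and network links.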
Re:Distributed effort - That's how Inktomi works (Score:1)
And other posters have pointed out Harvest.
Re:Suggestion... (Score:1)
Re:Uh, why? (Score:1)
Of course this idea will raise questions about privacy and such, and the most popular pages would get evaluated most frequently (which isn't such a bad feature, actually; some search engines work this way: they index cnn.com multiple times a day but put mypage.com on a lower priority).
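One illustrative way to implement that (the intervals are invented for the example): keep a priority queue of sites ordered by next-due time, with popular sites on short revisit intervals.

    # Popularity-weighted recrawl scheduler (sketch).
    import heapq, time

    INTERVALS = {"cnn.com": 3600, "mypage.com": 7 * 24 * 3600}  # seconds

    queue = [(time.time(), site) for site in INTERVALS]
    heapq.heapify(queue)

    def next_site():
        due, site = heapq.heappop(queue)
        # ... fetch and reindex `site` here ...
        heapq.heappush(queue, (due + INTERVALS[site], site))
        return site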
Re: spidering (Score:1)
Would this work for database driven sites as well?
Re:Uh, sorry (Score:1)
NO CHANCE (Score:2)
Anyone have experience with it? (Score:2)
--
A little song, a little dance, a little seltzer down your pants.
Juggernaut: Ouch. (Score:3)
Don't say you can index 800 million pages in three months when your database gives fewer results than Lycos did circa 1996.
Hyperbole is rife in the computer world in general, and it's one of the genuine strengths of the Open Source community that we're very results oriented--Apache gets *results*. Samba *works*, and actually *does* knock NT out of the park in terms of flexibility and feature sets. And so on.
There are exceptions, granted, but we don't stretch our credibility to the breaking point nearly as much as stock-price-manipu^H^H^H^H^H^Hmaximizing corporations practically have to.
My problem with Juggernaut is that, while their technology might be awesome, their online index *isn't*. When you don't even get enough hits back to compare whether the hits are delivered in an optimum order, you know there's a problem. That, combined with the fact that the site looks decidedly 1996-ish (sorry, I know there's a webmaster out there who doesn't like me right now), tarnishes the otherwise excellent announcement that we now ostensibly (pending testing) have an extremely high-quantity and high-quality search engine system, not to mention the birth of a new business model--the internal search engine of external content.
Honestly, I must admit there's something to be said about companies purchasing internal versions of large search engines, just so no outside source can watch the unencrypted stream of queries coming from a given company to deduce what projects they're working on.
The Juggernaut guys may be on to something, but I'm still a Google addict.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
not too impressed (Score:3)
>>
first fully automated crawler that can reindex all 800 million World Wide Web pages every three months fully available to the public for a nominal two year subscription fee.
Does that mean that they give away the search engine but you have to purchase the database???
I think that there are better options out there right now. One GPL'd search engine that I have liked a lot is HTDIG (http://www.htdig.org). It does not have the horsepower that juggernautsearch "claims", but it is great for intranet/corporate/university website search.
If you are looking for a good search engine, you may also want to read the ask slashdot thread from last year on this topic. (http://slashdot.org/askslashdot/98/10/24/1756224
Hideous (Score:2)
Are the claims true? (Score:1)
I'd like to see a site that really uses the software to see if it is any good. Also, I noticed that they seem to be selling a URL list for the search engine... It would seem to me that the engine should be able to find its own URLs...
Re:Search Engine: GPL - Database/Crawler: $$$$ (Score:1)
Anybody want to contribute one ?
Search Engine: GPL - Database/Crawler: $$$$ (Score:4)
-
<SIG>
"I am not trying to prove that I am right... I am only trying to find out whether." -Bertolt Brecht
Progress (Score:1)
Distributed effort ? (Score:4)
Hmmm... maybe I should patent this...
Bandwidth issues? (Score:1)
Sure, it will run on a PII, but does it need a T3 to run efficiently?
Josh
Good Things To Come! (Score:2)
You can run your own altavista. . . and as the open source 'canon' grows, folks will also be able to have an amazon.com, a slashdot, and whatever else you want to do on the Web.
But why just the Web? With enough open-source game engines, applications, and other code to build on . . .
Well, just imagine what happens when the first Open Source 'killer App' is released. (Not that sendmail, apache, and others aren't already -- I'm talking userland, here.) What if the Next Big Computer Game was Open Source? How many zillions would install Linux to play it?
What if Open Source was suddenly the dominant software paradigm?
Can I just say, 'Oh, YEAH!'?
-Omar
It would be great, if it were true. (Score:2)
Try searching on "slashdot". You get one link, which is at least two years old.
Dazzle them with bullshit.
What the hell? (Score:2)
So you can only search on one keyword at a time, it has a butt-ugly page, it doesn't return relevant links, and it has a horrible domain name to boot. What a waste.
Oh wait, it's GPL'ed! Hooray! Down with the software monopolies! We'll take over the world!
Groan...
Re:Hideous (Score:1)
Re:Good Things To Come! - How many zillions (Score:1)
No zillions. Linux is hard, people. Maybe not to the regular
Re:Anyone have experience with it? (Score:1)
Re:Anyone have experience with it? (Score:1)
We have the indexer running on a cron job twice a week in the middle of the night. It does kinda screw up webalizer results, but you can work around that.
There's also one called glimpse, but my experience with it a few years ago showed it to be less useful than htdig. Things might have changed, though, and YMMV.
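For reference, the crontab line for something like that is simple (path and times are just an example; rundig is ht://Dig's bundled reindexing script):

    # Reindex at 3:30am every Monday and Thursday.
    30 3 * * 1,4 /usr/local/htdig/bin/rundig >/dev/null 2>&1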
Re:Good Things To Come! - How many zillions (Score:1)
getting off topic here, alas
Mirrors? (Score:2)
Re:Distributed effort ? (Score:2)
It sucks! (Score:2)
Not one single hit for "wide open beavers"! And the colors are just awful.
Stupid question (Score:3)
Oh, I get it, I got EXACTLY what I searched for!
Make wild claims; get free /. publicity (Score:4)
Examination of their ftp distribution site reveals this is an early work in progress...most docs are "under construction," and even their helpers.txt (supposedly giving credit to others) is basically empty.
I'll post more if/when their src tarball ever finishes downloading (54M - whew!...and the site is getting /.'ed right now). My guess is they drew heavily from ht://dig, WAIS, SMART and other public-source search engines and spiders.
For those who can't get through to the site: they hope to sell subscriptions to their database, so that you can run their search engine internally. It's not clear whether they intend to license the spider/crawler or just the database.
Meanwhile, to those who have complained that easy searches turn up nil results: read the page, dudes! It says clearly that you're searching a minimal test collection, but can search the whole thing (on your local system, it seems) for a subscription fee.
Credibility break: I'm an information science professor and design/evaluate alternate information retrieval systems.
,,, (Score:1)
-The bandwidth that most users have is not commensurate with their processing capability
-The index might become stale if a site is not visited repeatedly, but in the distributed spidering case, this risks either duplication of work or gaps where nobody visited the pages in a large web site (one mitigation is sketched below)
-The ability to rank pages based on "relevance" or "linkability" (a la Google) is decreased in this scenario
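A sketch of that mitigation (my illustration, not anything the project proposes): hash each hostname to exactly one participating client, so no two clients duplicate work and every host has exactly one owner.

    # Deterministic work partitioning for distributed crawling (sketch).
    import hashlib

    def owner(hostname, num_clients):
        digest = hashlib.md5(hostname.encode()).hexdigest()
        return int(digest, 16) % num_clients

    def is_mine(url, my_id, num_clients):
        host = url.split("/")[2]          # crude hostname extraction
        return owner(host, num_clients) == my_id

Note this doesn't fix staleness: if a client disappears, its hosts go unvisited until the partition map changes.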
--
Flames? Think I'm a karma whore?
Fine Print (Score:2)
Although the search engine is GPL'd, it still relies on a subscription service in order to function. The page mentions nothing about the crawler needed to create the database, but it does say that you are free to create your own database. Is it just me, or is this a contradiction?
The smallest subscription gives 1.6 million URLs at $100 a year; the price goes up to $500 for 10 million URLs.
For such a useful program, it is limited to its own database, which costs money to use.
Just my two cents.
Another nice GPL'd indexing system: MG (Score:2)
BTW, I'm not associated with the university, the book or whatever. I just enjoyed reading it.
Re:Distributed effort ? (Score:1)
Uh, why? (Score:3)
Making this a distributed effort would only be useful in a clustering environment a la Beowulf, where tight synchronization would be needed to prevent machines from revisiting the same websites. Other than that, distributed processing for web crawlers is... dubious.
Re:Distributed effort ? (Score:2)
The main problem, as other posters have commented, with doing anything like this in a co-operative fashion is the large commercial value of the results. It also requires those taking part to have a significant amount of bandwidth (to pull in all of the content and then to exchange indexes).
The spidering part of the process is one of the least processor-intensive; once you've completed it, you're left with a large glob of data. You then need to convert that into an inverted index, which would still be large and would then need passing to a central server, which would have to do further processing in order to actually merge it into the whole.
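The inverted-index step itself is conceptually simple; it's the scale that hurts. A toy version (real systems compress postings and merge them incrementally):

    # Toy inverted index: map each term to the set of documents containing it.
    from collections import defaultdict

    def build_index(docs):                # docs: {doc_id: text}
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for term in text.lower().split():
                index[term].add(doc_id)
        return index

    index = build_index({1: "open source search", 2: "search engine"})
    print(sorted(index["search"]))        # -> [1, 2]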
The Harvest Indexing system (http://www.tardis.ed.ac.uk/harvest) sought to develop a system like this. It separated the searching and crawling tasks, so it was possible to have a large number of crawlers (probably topologically close to the sites they were indexing), which then gave their results to an indexing system that collated them and presented them to the world.
The problem here is that you've still got one large, monolithic system at the indexing end. TERENA, as part of the TF-CHIC project, developed a referral system (based on WHOIS++) to allow one central gateway which then passed search requests to a large number of individual engines, each of which could run different software. Kind of like a fancy metasearch engine.
Originally the plan for devolving things locally was that if the indexes were generated by people who know the pages, then you'll get a higher standard of index. Aliweb, for instance, had a file per server which contained an index of all of the objects available on that server.
The problem with this is easily shown up by metatag abuse. If the person running the spider has a commercial interest in the sites they're indexing, they'll often fabricate the index so that their sites appear higher in searches.
Cheers.
Simon.
But it's a crawler... (Score:2)
However, it really does not work when you would like it to find pages that no one links to. Those unique pages are well hidden from crawlers, even if you e-mail all of your friends about them. Until one of your friends puts a link on his start page, you're invisible to the spiders.
For an organization, it's the wrong avenue of approach. Organizations tend to keep their web files on a small set of machines, in very specific directory structures. The best search engine for those machines should have permission to look at the directory structures and go through every file in them when it updates its database. This ensures that every file in that organization is collected and that no links going outside the organization are followed.
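A minimal sketch of that filesystem-walking approach (paths and extensions are placeholders):

    # Walk the document roots directly instead of following links,
    # so even unlinked files get indexed and no external links are followed.
    import os

    def collect(roots=("/var/www/htdocs",)):
        for root in roots:
            for dirpath, dirnames, filenames in os.walk(root):
                for name in filenames:
                    if name.endswith((".html", ".txt")):
                        yield os.path.join(dirpath, name)

    for path in collect():
        pass  # feed each file to the indexer here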
Ken Boucher
Re:Are the claims true? (Score:1)
I highly doubt any of the claims that they make... why? Well the facts just don't seem to add up.
The biggest flaw I found was that these guys claim they have been 'programming' for 'the last four years straight' to be 'able to provide the most efficiently coded and fastest search engine on the Internet', and yet the engine requires that you have Perl (leading me to believe that's what the whole thing is written in), as stated on the support page [165.90.48.2] (which isn't more than one small paragraph saying 'this is a work in progress').
And if these guys did have the world's greatest search engine, would they not have enough pride to design a website a little better than one using a tiled background and a pink coloured table?
Finally, why would the online demo suck and not have many hits? Well, they admit to that at least...
But why only have a few sites indexed only? Why not have the whole thing but only show us a few? If I search for slashdot I should get more than 1 hit...
I usually don't like to be a skeptic, but the whole thing just smells funny, especially when there are very few concrete details about any of it.
-dr
Re:Distributed effort ? (Score:1)
Wanna bet? (Score:1)
Those other sites aren't just search engines/indexes, that's why they're called portals. Have you seen Yahoo! or MSN lately? Shopping, auctions, ticketing, weather, scores, communities, calendaring, gaming, stock quotes, maps, make-your-own web pages, chat, news, e-mail, messaging, etc., etc. etc.? And you think Juggernaut or anything similar is going to make these companies go away? Borrowed time? Good God, man, I've got some beachfront property to sell ya! :)
Cheers,
ZicoKnows@hotmail.com
Re:Anyone have experience with it? (Score:1)
Bandwidth? (Score:2)
On a side note, I was very disappointed when a search for "deez nuts" came up dry... oh well.
//Phizzy
Re:Anyone have experience with it? (Score:1)
He was clearly not talking about just trying a simple search, he was asking if anyone had actually downloaded the code and gotten it running. This, I'm sure you'll agree, is likely to be a process that takes a bit longer than typing his message did.
Re:Distributed effort ? (Score:1)
Re:Search Engine: GPL - Database/Crawler: $$$$ (Score:3)
Personally... (Score:3)
Unlike Juggernaut, it's a complete search engine system (crawler, database & front-end), it was developed over a long time, and has capabilities that even most modern search engines don't (such as relaxed spelling).
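By "relaxed spelling" I mean roughly this kind of fuzzy matching; Harvest's actual algorithm may differ, and difflib here is just to illustrate the idea:

    # Match a misspelled query term against the index vocabulary.
    import difflib

    vocabulary = ["juggernaut", "search", "engine", "harvest"]
    print(difflib.get_close_matches("serach", vocabulary))  # -> ['search']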
IMHO, it would be better for the Open Source community, as a whole, if someone picked up Harvest, modernised it and maintained it. At present, it's the best "openish" source Search Engine out there, and it's going to waste.
Re:Fine Print (Score:1)
BUT...
When you release a piece of software under the GPL, there have come to be certain expectations about how things will work. One expectation that I have is that the software will be fully functional.
This search engine is GPL in only the most technical sense. It is certainly not GPL in spirit. What we have here is just good old-fashioned shareware. It's free enough to test it out, but if you really want to make use of it, it's going to cost you. That's certainly not how I've come to expect things to work when I run GPL'd software.
These guys have every right in the world to make some cash from their hard work. I just wish they'd use a licensing scheme that more closely reflected the way they are doing business.
Or am I just nuts?