Searching with Xapian

The Xapian search engine, and associated topics

Xapian 1.2

It’s been a long time since I posted here, and there are a lot of recent developments in Xapian to talk about, not least the release of Xapian 1.2.0, the first in the new stable release series, which brings a lot of new features over the last stable series (the 1.0 series). Of particular interest in the 1.2 series are the ability to get faceted classifications for search results, and the ability to use custom “PostingSource” subclasses to perform custom weighting operations or to bring in external data to modify the set of matching documents. It also includes a replication framework to allow a search installation to be scaled horizontally, and a new default backend database format which is considerably smaller on disk while storing some additional statistics that are used to optimise searches further. We’re still working on writing up the full set of changes since the last release in the 1.0 series, which will ultimately appear at http://trac.xapian.org/wiki/ReleaseOverview/1.2.0.

Each of these features deserves a blog post of its own, and I’ll try to write some of those over the coming weeks; in the meantime, I encourage you to download 1.2.0 and play with the new features.

Written by richardboulton

May 24, 2010 at 3:55 pm

Xapian talk slides

Last night, I gave a brief 13-minute talk at the “Cambridge Geek Night”, a group of around 30 very switched-on and interesting technology people, meeting above a pub in Cambridge. The atmosphere was wonderful, and I had a great time. There were also talks from Taylor Vinters solicitors about intellectual property, and from Michael Brunton-Spall about many of the Guardian’s electronic services and experiments.

I’ve put the slides up on Slideshare here; I’ll try to find some time to record an audio track to go with them soon.

Written by richardboulton

July 30, 2009 at 2:35 pm

Ultrafast Xapian

I spent the weekend away from my computer, visiting a sunny and pleasant Sheffield. This left the back of my mind free to churn, and now that I’m back at a computer I need to write down some of the conclusions I came to, so that I can focus on what I’m meant to be working on (… about which, more soon). I should warn you that I’ve not thought through all of the following as carefully as I usually might; I’d welcome discussion based on it, though!

New backend performance measurements

Performance tests I’ve been running lately with Xapian have been a mix of encouraging and dispiriting. I’ve got performance figures which vary, but they go up to around 8000 searches per second for a 100,000 document collection, which is pretty reasonable (and comparable with, or slightly better than, Lucene, depending on the kind of searches being done). We’ve also got database sizes down to about the same size as Lucene’s. So some things are “okay”. With this size of database we’re largely CPU bound, and I’ve tried various code optimisations which have quite an effect (none of these are applied yet, since I was just feeling my way around to see what made a difference, but there are various patches in ticket #326 in Xapian’s tracker).

However, the new “chert” backend isn’t performing as well as I’d earlier hoped; its databases are considerably smaller than the old “flint” backend’s, but search speeds are much worse. This is because it stores the document length information separately from the postings, reducing duplication but adding an overhead for seeking to the right place in the document-length list and looking up each document. The current code in SVN is rather pathological: one of the patches halves the average seek time, but even if we managed to remove the seek time entirely, the additional call overhead of unpacking the document length list means that chert would still be slower than flint for searches with a single term (it should start beating flint for searches with two or more terms, according to my calculations). Of course, this is with small databases (around 600Mb for my main current test), which rapidly get cached fully in memory, so CPU overhead dominates. Chert should do much better than flint when IO dominates, thanks to its smaller database size.

Indexing figures are less encouraging; we’re really quite slow compared to (Java) Lucene for indexing. My current understanding (based on watching block accesses, and knowledge of the code) is that, for small collections, each time we flush we update pretty much every block in the posting table, and therefore incur a cost proportional to the current size of the database. We currently flush every D documents (where D defaults to 10,000), so indexing time is the number of flushes multiplied by the size of the database at each flush, which works out as O(N*N). This is bad: we should be able to achieve O(N*log(N)) at least.

The problem is that our strategy is optimised for very large databases. Once the databases get nice and large we do much better, because we start skipping large parts of the database as we pass through it during each update; the cost of a flush becomes proportional to the size of the changes rather than the size of the database, which in the long run means the total time should be something like O(N). What we really want is either a separate strategy for small databases (and by small I mean up to a few Gb of data), or a new strategy which works well for both small and large databases. A hierarchical merge process for the posting lists might fit the bill (similar to what Lucene does). My plan is to prototype such a technique in Xappy by building multiple sub-databases and calling xapian-compact to merge them, along the lines of the sketch below; that should give some idea of whether this is a worthwhile approach.
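Here’s a minimal sketch of that prototype idea, assuming the Python bindings and the command-line xapian-compact tool; the batch size, paths and tokenisation are made-up placeholders rather than Xappy’s actual code.

```python
# Sketch of hierarchical merging: index into several small
# sub-databases, then merge them with the xapian-compact tool.
# Batch size, paths and tokenisation are illustrative assumptions.
import os
import subprocess

import xapian

BATCH = 10000  # documents per sub-database (made-up value)

def index_in_batches(documents, workdir="./subdbs"):
    if not os.path.isdir(workdir):
        os.makedirs(workdir)
    subdbs = []
    db = None
    for i, text in enumerate(documents):
        if i % BATCH == 0:
            if db is not None:
                db.flush()  # commit the finished sub-database
            path = os.path.join(workdir, "sub%d" % len(subdbs))
            db = xapian.WritableDatabase(path, xapian.DB_CREATE_OR_OPEN)
            subdbs.append(path)
        doc = xapian.Document()
        doc.set_data(text)
        for term in text.split():
            doc.add_term(term.lower())
        db.add_document(doc)
    if db is not None:
        db.flush()
    # xapian-compact accepts several source databases and a single
    # destination, merging the postings as it goes.
    merged = os.path.join(workdir, "merged")
    subprocess.check_call(["xapian-compact"] + subdbs + [merged])
    return merged
```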

Indexing for small databases also appears to be quite CPU heavy, so the same kind of code optimisations which work well for searching may well help here.

So, the picture is of a new backend which improves some things considerably (50% smaller databases, less IO, etc), but has some major drawbacks which need to be addressed before it’s production ready. When these are addressed it will hopefully be a large improvement over flint, perhaps boosting speeds by up to 50%, but there’s still a fair way to go to get there.

Radical new approaches

A long time ago, search engines used to have a list of documents for each term in a query, and they’d perform a search by reading each of these lists in turn, keeping a big array in memory holding the score-so-far of each document, and adding values to this array from each list. This approach doesn’t work well for “web-scale” databases: you need to keep an entry for every potential match in memory at once, which can be impractical, and you also can’t avoid any work, since you must read the whole of each list.
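As a toy illustration (plain Python, nothing to do with any real engine’s internals), that old approach looks something like this:

```python
# Toy term-at-a-time scoring: one pass over each term's posting list,
# accumulating a score per document in a big in-memory map.
import heapq
from collections import defaultdict

def search_taat(postings, query_terms, topn=10):
    """postings maps term -> list of (docid, weight) pairs."""
    scores = defaultdict(float)
    for term in query_terms:
        for docid, weight in postings.get(term, []):
            scores[docid] += weight
    # Only now can the best documents be picked: every candidate had
    # to be kept in memory, and every list was read in full.
    return heapq.nlargest(topn, scores.items(), key=lambda kv: kv[1])
```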

You normally only want the top few results for each search, so “modern” search engines (like Xapian and, as far as I know, Lucene) work the other way around: they open each list in parallel and move through them in lockstep, keeping track of the best results seen so far and immediately discarding any results which aren’t good enough to get into the top N. This requires much less memory, but also allows various optimisations to be performed. For example, for an AND operator where one term is frequent and the other is rare, it is only necessary to check a few entries in the frequent term’s list to see whether the rare term’s documents appear there, rather than having to read the whole list; Xapian implements an efficient skipping mechanism to take advantage of this kind of optimisation. Also, when a list has no entries left, the maximum possible weight of any remaining documents goes down, and it can be possible to shortcut the search this way; even when it isn’t, it’s often possible to convert an OR operator into an AND operator, once both terms need to be present for a document to get enough weight to reach the top 10. Xapian’s matcher has been pretty heavily optimised over the years; there are still plenty of cases where it could probably do a better job, but it’s pretty decent at taking advantage of whatever information is available.
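To show the shape of the skipping optimisation with a toy example (Xapian’s real matcher works on posting-list iterators with a skip_to operation, not Python lists):

```python
# Toy illustration of skipping during an AND: walk the rare term's
# list and jump forward within the frequent term's list, rather than
# reading the frequent list in full.
import bisect

def and_with_skipping(rare, frequent):
    """rare, frequent: sorted lists of docids.  Returns their
    intersection, touching only a few entries of the long list."""
    matches = []
    pos = 0
    for docid in rare:
        # The skip: jump to the first entry >= docid in the frequent
        # list, instead of stepping through it one entry at a time.
        pos = bisect.bisect_left(frequent, docid, pos)
        if pos == len(frequent):
            break
        if frequent[pos] == docid:
            matches.append(docid)
    return matches
```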

However, the Xapian matcher is restricted by its need to read all the lists in step. We’ve discussed changing this in certain cases in the past. For example, a phrase search can be split into two parts: a (fairly cheap) AND search to require that all the terms are present, followed by a check in the position lists to see if the terms occur in close proximity, and in the right order. It would be nice to skip the position list lookup for documents which wouldn’t have enough weight to get into the result set anyway, but Xapian’s matcher can’t currently accommodate this.

What would be nice would be to relax the processing of the term lists, so that they’re only “roughly” processed in lockstep. We’d keep a list of “candidate” documents: ones which contain all the terms, but still need the phrase check to be performed. When the candidate list gets full, we’d check the positions for the highest-weighted item in the list, and either discard it or add it to the result set. This way, we’d avoid checking the positions for documents which, seen early in the matching process, currently look as if they have a chance of getting into the “top-documents-so-far” list but would later be displaced, which could be a big win.
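Roughly, the idea looks like the sketch below; the weight_of and phrase_occurs_in helpers are hypothetical stand-ins for the real weight calculation and position-list check.

```python
# Sketch of deferring phrase checks: collect AND matches as weighted
# candidates, and only consult the (expensive) position lists for the
# best-weighted ones.
import heapq

def lazy_phrase_match(and_matches, weight_of, phrase_occurs_in,
                      topn=10, max_candidates=100):
    """and_matches: iterable of docids containing all the terms.
    weight_of and phrase_occurs_in are hypothetical stand-ins for
    the weight calculation and the position-list check."""
    candidates = []  # max-heap (negated weight) of unchecked matches
    top = []         # min-heap of (weight, docid) that passed the check

    def verify(weight, docid):
        # The expensive position-list lookup happens only here.
        if phrase_occurs_in(docid):
            heapq.heappush(top, (weight, docid))
            if len(top) > topn:
                heapq.heappop(top)  # drop the weakest verified doc

    for docid in and_matches:
        heapq.heappush(candidates, (-weight_of(docid), docid))
        if len(candidates) > max_candidates:
            # Candidate list full: check only its best-weighted entry.
            negw, d = heapq.heappop(candidates)
            verify(-negw, d)

    # Drain the leftovers in decreasing weight order, stopping once no
    # remaining candidate could displace anything in the top set.
    while candidates:
        negw, d = heapq.heappop(candidates)
        if len(top) == topn and -negw <= top[0][0]:
            break
        verify(-negw, d)
    return sorted(top, reverse=True)
```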

More information

To get radically better search performance, though, I think we’d need to start storing some specialised information in the database. Various things spring to mind:

  • Storing the entries in each list with the best combination of wdf and doclength separately, so they can be accessed without iterating through the whole list.
  • Storing information about the term frequency ranges in each chunk of the posting lists, which might allow chunks to be skipped more often.
  • Storing extra “index” terms in the database – for example, for phrases, storing terms which represent each pair of words.

The first of these seems the most likely to be useful, to me. It would allow single-term searches to look only at the top documents, which would make them return more quickly. It could also allow the matcher to skip more entries in multi-term searches, since the list of “non-top” documents for a term would have a lower “maximum possible weight”, and could perhaps be terminated early more often than the full list of documents for the term.

Generalisation

To take best advantage of the opportunity presented by having a list of the best entries for each term, though, I think we’d want a redesigned matcher (which is obviously not a quick-hack project)!

My basic idea is that at indexing time we’d split the posting lists for each term into several pieces: the highest-weighted documents in one, lower-weighted documents in the next, and possibly several levels, where each level contains entries with a higher weight than all lower levels. (Note: this would only be possible for fixed values of the BM25 weight parameters, so we’d need to fix these at index time. We could even store the resulting weight in each list, rather than the wdf/doclength combination, though it might be more space efficient to store them separately!)
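A toy sketch of a two-level split; the 5% cut is an arbitrary assumption, and a real implementation would build the tiers as the posting lists are written.

```python
# Toy split of one term's postings into weight tiers at index time.
def split_into_tiers(postings, top_fraction=0.05):
    """postings: list of (docid, weight) pairs for one term.
    Returns (top, rest), each re-sorted by docid so they can still
    be merged in docid order at search time."""
    by_weight = sorted(postings, key=lambda p: p[1], reverse=True)
    cut = max(1, int(len(by_weight) * top_fraction))
    top = sorted(by_weight[:cut])   # the best-weighted entries
    rest = sorted(by_weight[cut:])  # lower maximum possible weight,
                                    # so the matcher can give up on
                                    # this tier earlier
    return top, rest
```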

The matcher would have a working set of documents (perhaps around 10,000; experimentation would be required to find the optimal size), and would start by reading the top-level lists for each term to populate this set. The key thing is that the matcher wouldn’t always know the exact weight of the documents in the working set, because only some of the terms would have been checked; instead, it would know an upper and lower bound for the weight, and have a list of terms which haven’t been checked for that document yet.

After processing the top-level lists for each term, it’s possible that the working set would contain enough documents for which all terms have been seen that the search could finish. Alternatively, it might be possible to derive a high “minimum weight” based on the lower bounds for the documents in the working set, which would allow some of the other posting lists to be discarded. It might be tricky to combine this with the boolean operators which Xapian supports, but it doesn’t seem implausible to me that this approach could work well.
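The pruning step might look something like this toy sketch, where each working-set entry tracks a lower and upper bound on its final weight:

```python
# Toy bound-based pruning for the working-set matcher idea.
def prunable(working_set, topn=10):
    """working_set maps docid -> (lower, upper) weight bounds.
    Returns the docids which can no longer reach the top-n."""
    lowers = sorted((lo for lo, hi in working_set.values()), reverse=True)
    if len(lowers) < topn:
        return set()
    # At least topn documents are guaranteed at least this weight...
    threshold = lowers[topn - 1]
    # ...so anything whose upper bound falls below it can be dropped.
    return {d for d, (lo, hi) in working_set.items() if hi < threshold}
```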

This might be an area in which Whoosh could be useful for prototyping – being written in Python, it might be easier to experiment with these ideas there.

Tag searching

Another thing I’ve been pondering is the best way to calculate the list of tags attached to the results of a search. For this problem, a tag could be a user-entered tag, used to generate a tag cloud, or a category or facet used for faceted browsing. The fundamental search problem is the same.

Currently Xapian has some support for this (on a branch, and in the Xappy tarballs), which works simply by looking up the tags on each potential match and combining them together. It’s fairly efficient, but unfortunately, to get an accurate result it requires that most of the optimisations discussed so far be disabled.
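In outline, that amounts to something like the following; how the tags are stored on each document is an assumption here, not the branch’s actual representation.

```python
# Toy sketch of the current approach: walk the matching documents and
# count the tags stored on each one.
from collections import Counter

def count_tags(matching_docids, tags_of):
    """tags_of(docid) returns the set of tags stored on a document."""
    counts = Counter()
    for docid in matching_docids:
        counts.update(tags_of(docid))
    return counts
```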

What would be nicer would be to store the list of tags relevant to the documents containing a particular term, and simply combine these sets appropriately. Nice in theory, but I don’t immediately see a way to make this work with anything other than a straightforward OR query; if the terms are ANDed together, it’s not possible to know whether the tags occur in the same document or not. It gets even worse if you have a match decider, range search, or weight cutoff in play, too.

I’ve not really come up with any bright ideas for this one yet – give me a suggestion and I’ll explain why it doesn’t work!

Written by richardboulton

March 16, 2009 at 10:55 pm

Xapian performance comparison with Whoosh

There’s been a bit of buzz today about “Whoosh”, a search engine written in Python. I did some performance measurements and posted them to the Whoosh mailing list, but it looks like there’s wider interest, so I thought I’d give a brief summary here.

First, I took a corpus of (slightly over) 100,000 documents containing text from the English Wikipedia, and indexed them with Whoosh and with Xapian.

– Whoosh took 81 minutes, and produced a database of 977Mb.
– Xapian took 20 minutes, and produced a database of 1.2Gb. (Note: the unreleased “chert” backend of Xapian produced a 932Mb database, and compacting that with “xapian-compact” produced a 541Mb database, but it’s only fair to compare against released versions.)

Next, I performed a search speed test, running 10,000 one-word searches (randomly picked from /usr/share/dict/british-english) against the database, and measuring the number of searches performed per second. I did tests with the cache cleared (with “echo 3 > /proc/sys/vm/drop_caches”), and then with the cache full (by running the same test again):

– With an empty cache, whoosh achieved 19.8 searches per second.
– With a full cache, whoosh achieved 26.3 searches per second.
– With an empty cache, xapian achieved 83 searches per second.
– With a full cache, xapian achieved 5408 searches per second.
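For reference, the Xapian side of this test has roughly the following shape; this is a sketch assuming the Python bindings and a prebuilt database at ./testdb, not the exact script I ran, and clearing the cache is done outside the script, as root.

```python
# Minimal search-rate test sketch: 10,000 single-word queries drawn
# from the system dictionary against a prebuilt Xapian database.
# The ./testdb path is an assumption.
import random
import time

import xapian

words = [w.strip() for w in open("/usr/share/dict/british-english")
         if w.strip().isalpha()]

db = xapian.Database("./testdb")
enquire = xapian.Enquire(db)

n = 10000
start = time.time()
for _ in range(n):
    enquire.set_query(xapian.Query(random.choice(words).lower()))
    enquire.get_mset(0, 10)  # fetch the top 10 results
elapsed = time.time() - start
print("%.1f searches per second" % (n / elapsed))
```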

In summary, Whoosh is damn good for a pure-Python search engine, but Xapian is capable of much better performance.

Written by richardboulton

February 12, 2009 at 5:49 pm

Linux disk cache not being ideal

It’s unfair to expect Linux’s disk caching or IO to be perfect, but these traces look to me like it failing in a fairly easy situation (though there could be all sorts of other things going on; these traces were taken on a moderately busy machine):

Firstly, here’s a trace of a (5-term) Xapian search repeated 100 times. All accesses in this trace are reads; the horizontal axis is time, the vertical axis is the offset in the file, green is a fast read, and red is a read taking over 0.001 seconds. Most of the 20-odd disk reads performed by the search are very fast, but there’s a single read which takes about 0.01 seconds. Looking at the trace by hand, I can see that the exact same 8k stretch of disk was read almost immediately before this read (with one read intervening). So either the cache dropped that block just before it was needed, or something else locked up the IO system for a moment:

[Figure 100_searches_top_10: read trace of 100 repeated searches, time vs. file offset; green = fast read, red = read over 0.001s]

This isn’t an isolated event, either. The next trace I ran was the same search with checkatleast set to dbsize; for those unfamiliar with Xapian, this basically means that a bit more IO is done during the search. A similar, though less severe, event happened here: this time there were 8 intervening reads between the fast read of the block and the slow read (visible again as a red line).

[Figure 100_searches_check_all: the same search with checkatleast set to dbsize]

It may well be worth implementing a small disk cache inside xapian, if these events are widespread…

Written by richardboulton

February 10, 2009 at 4:22 pm

Pretty performance pictures

I’ve been analysing block read patterns.

Firstly, here are the block reads for reading about 6700 documents in a fairly random order (actually sorted by a key). Pretty random, not so good:

[Figure 53954-seek-read-calls-from-sortdatabase: block reads for ~6700 documents read in sorted-key order]

But here’s the read pattern for reading about 10,000 documents in order of docid:

[Figure 3455-seek-read-calls-from-strace: block reads for ~10,000 documents read in docid order]

Finally, here are the block reads for running xapian-compact:

[Figure 106400-seek-read-calls-from-xapian-compact: block reads while running xapian-compact]

Written by richardboulton

February 9, 2009 at 8:43 am
