SPARQL

I recently became aware that the implementation of arbitrary-length property paths I created for Sesame’s SPARQL engine suffered from a serious flaw: it didn’t scale. At all. The chosen strategy (see my earlier blog post on this topic) of evaluating paths of ever-increasing length using multi-joins slowed down to a crawl (through tar) on any serious dataset at about length 7 or 8. Queries involving such paths took 15 minutes or more to complete. Clearly, this would need a little improvement.

Looking at the implementation, it turned out there were two main performance bottlenecks:

  1. complete re-evaluation of all paths of length (n – 1) whenever we expand the path length
  2. a computationally heavy cycle-detection check

After fiddling about with some ineffectual half-measures for one or both of these problems, I decided to take a step back and completely rethink the algorithm. It occurred to me that with a proper breadth-first search implementation we could eliminate the computationally expensive multi-join altogether (and, even more importantly, the hugely inefficient repeat performances on known paths for every increase of the length). We could simply use a FIFO queue to push intermediate values onto, and keep doing simple lookups of patterns of length 1.

Let me clarify with an example. Suppose you have the following path-expression:

  ex:alice foaf:knows+ ?x 

In the naive join-based algorithm, this would be evaluated by first looking up all paths of length 1:

  BGP(ex:alice, foaf:knows, ?x)

If the result of this is not empty, we continue by looking up paths of length 2, adding a filter-condition to guard against cycles:

Filter(
 And(?a1 != ex:alice, ?a1 != ?x),
 Join(
  BGP(ex:alice, foaf:knows, ?a1),
  BGP(?a1, foaf:knows, ?x)))

I won’t bore you with the details for paths of length 3 and beyond, but I trust you appreciate that this quickly becomes quite big and complex. Also, notice that in the expression for length 2 we have BGP(ex:alice, foaf:knows, ?a1), which is essentially the same expression we already evaluated for paths of length 1. We already know the answers to this bit, yet we still create a new join that re-evaluates the damn thing, again and again, for every increase of the length.

Enter the new algorithm. I got rid of joins completely. Instead, I introduced a FIFO queue of values. We iterate over results for length 1, reporting all resulting solutions and putting all bindings found for ?x onto the queue. Then, when all results for length 1 are exhausted, we examine the queue. If it is not empty, we pop the first value and create a new BGP in which we replace the start with the popped value. We iterate over solutions to this BGP, and again we report all solutions and add all bindings of ?x thus found to the queue. We continue in this fashion until finally the last iteration is exhausted and the queue is empty.

This is really classic breadth-first search, so no new wheels have been invented here, but it is amazing (although in retrospect really quite understandable) how much more performant this graph traversal mechanism is than the earlier join-based approach. One experiment I did (using the DBPedia category thesaurus) was the following query:

SELECT ?sub
WHERE {
   ?sub skos:broader+ <http://dbpedia.org/resource/Category:Business>
}

This query (which effectively retrieves all subcategories of Category:Business) took over 15 minutes to complete in the original approach. After switching to the new BFS-based approach, it completes in roughly 120ms.

One thing I haven’t touched on yet in this new implementation is how to detect cycles. In the original approach, we retrieved solutions that encompassed the complete path (joins with intermediate variables), and we used filters to check that all of the intermediate vars were pairwise distinct. Clearly, we can’t do that in this approach, because we no longer have all these intermediate vars available at once.

The solution is actually rather simple, if you stop and think of what a cycle actually is: a cycle is a path where at the end you have gotten back where you started (in other news: water is quite wet). Therefore, you simply have to mark where you’ve been before (that is, which solutions you’ve previously reported), and you’re home free.

A list internally records all reported combinations of the start value (ex:alice) and the end value (each binding of ?x) of our path. Whenever we find a new solution during iteration, we check whether its start/end combination already exists in that list, and if so, we discard it.

Why not just record the end values? Well… think about it: do all path expressions start with a fixed start point? If the start of the path is a variable, the same end value can be a valid solution for several different start values, so recording end values alone would wrongly discard those solutions.
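
To make this concrete, here is a minimal sketch of the whole traversal, including the cycle check, in Java. This is not the actual Sesame code: lookupDirectSuccessors is a hypothetical stand-in for evaluating the length-1 BGP against the store, and I use a hash set of start/end pairs rather than a list, but the structure is the same.

import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.List;
import java.util.Queue;
import java.util.Set;

public class PathBFS {

    /** Hypothetical stand-in for evaluating BGP(subject, foaf:knows, ?x). */
    interface SuccessorLookup {
        List<String> lookupDirectSuccessors(String subject);
    }

    /** Reports every value reachable from 'start' in one or more steps. */
    static void evaluatePath(String start, SuccessorLookup bgp) {
        Queue<String> queue = new ArrayDeque<String>();
        // Records reported start/end combinations. For a fixed start the
        // start half is redundant, but with a variable start the same end
        // value can belong to several distinct solutions, hence the pair.
        Set<String> reported = new HashSet<String>();

        queue.add(start);
        while (!queue.isEmpty()) {
            String subject = queue.remove(); // pop the first value (FIFO)
            for (String end : bgp.lookupDirectSuccessors(subject)) {
                // Report and expand each start/end combination only once;
                // this is both the cycle check and the duplicate filter.
                if (reported.add(start + " " + end)) {
                    System.out.println(end); // report the solution
                    queue.add(end);          // expand from here in a later round
                }
            }
        }
    }
}

Because every end value enters the queue at most once per start value, the traversal terminates even on cyclic data.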

This approach to cycle detection has two consequences:

  • Cycle detection has become a cheap lookup in a list.
    This is quick, but puts a burden on memory space. In practice, however, the results of property paths stay small enough that keeping such a list around is not a problem.
  • We now automatically filter out duplicate results.

The second consequence is really useful in practice: the old approach reported back all possible paths of all possible lengths, so if, for example, Bob is a friend of Alice but also a friend of a friend of Alice, we would get back Bob twice – clearly redundant information. In the new algorithm, Bob is reported only once, saving bandwidth and computation time, and sparing the user from having to jump through hoops to get unique results.

The new and improved algorithm will be available in the next Sesame release (2.6.1).

It’s not yet up on the openrdf.org news page (mainly due to New Zealand being so far ahead of Europe – ha!), but Sesame 2.5.1 is now available for download.

This is a maintenance release that fixes a number of issues in the newly developed SPARQL 1.1 Query/Update support. Additionally, the Rio parser has improved support for validation of XML Schema datatypes and language tags.

For a full overview of changes, see the ChangeLog.

From OpenRDF.org: We are very pleased to announce the release of Sesame 2.5.0. This is a major new release with some very exciting features:

  • SPARQL 1.1 Query Language support
    Sesame 2.5 features near-complete support for the SPARQL 1.1 Query Language Last Call Working Draft, including all new builtin functions and operators, improved aggregation behavior, and more.
  • SPARQL 1.1 Update support
    Sesame 2.5 has full support for the new SPARQL 1.1 Update Working Draft. The Repository API has been extended to support creation of SPARQL Update operations (a minimal usage sketch follows this list), and the SAIL API has been extended to allow Update operations to be passed directly to the underlying backend implementation for optimized execution. The Sesame Workbench application has also been extended to allow easy execution of SPARQL Update operations on your repositories.
  • SPARQL 1.1 Protocol support
    Sesame 2.5 fully supports the SPARQL 1.1 Protocol for RDF Working Draft. The Sesame REST protocol has been extended to allow update operations via SPARQL on repositories. A Sesame server therefore now automatically publishes any repository as a fully compliant SPARQL endpoint.
  • Binary RDF support
    Sesame 2.5 includes a new binary RDF serialization format, derived from the existing binary tuple results format. Its main features are reduced parsing overhead and minimal memory requirements (for handling very long literals, among other things).
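
As an illustration of the extended Repository API mentioned above, here is a minimal sketch of preparing and executing a SPARQL Update operation (error handling trimmed; the repository is assumed to be initialized already, and the ex: data is made up):

import org.openrdf.query.QueryLanguage;
import org.openrdf.query.Update;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;

public class UpdateExample {

    public static void insertData(Repository repo) throws Exception {
        RepositoryConnection con = repo.getConnection();
        try {
            // Prepare a SPARQL 1.1 Update operation and execute it.
            Update update = con.prepareUpdate(QueryLanguage.SPARQL,
                    "PREFIX ex: <http://example.org/> " +
                    "INSERT DATA { ex:picasso ex:paints ex:guernica }");
            update.execute();
        } finally {
            con.close();
        }
    }
}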

Sesame 2.5 SPARQL improvements have been made possible by Ontotext, in cooperation with Aduna. We hope you enjoy the new features and look forward to receiving your feedback.

See the release notes for a full overview of all changes.

The Sesame framework comes with a pre-packaged web service (often referred to as the Sesame server). This web service enables deployment of Sesame as an online RDF database server, with multiple SPARQL query endpoints and full update capabilities.

In the default setup of the Sesame server, however, there is no security at all: everybody can access all available Sesame repositories, can query them, add data, remove data, and even change the server configuration (e.g. creating new databases or removing existing ones). Clearly this is not a desirable setup for a server which is publicly accessible.

Fortunately, it is possible to restrict access to a Sesame server, using standard Java web application technology: the Deployment Descriptor.

In this recipe, we will look at setting up some basic security constraints for a Sesame server, using the web application’s deployment descriptor, in combination with basic HTTP authentication. For the purpose of this explanation, we assume you have a default Sesame server running on Apache Tomcat 6.

Sesame HTTP REST protocol

The Sesame server implements a RESTful protocol for HTTP communication (a superset of the SPARQL protocol). What that comes down to is that the protocol defines a specific location, reachable via a specific URL and a specific HTTP method (GET, POST, etc.), for each repository and each operation on a repository.

This is good news for our security, as it means we can easily reuse HTTP-based security restrictions: since each operation on a repository maps to a specific URL and a specific method, we can have fairly fine-grained access control by simply restricting access to particular URL patterns and/or HTTP methods.
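
As a first impression of what such a restriction looks like (a minimal sketch, not the complete recipe; the role name and realm are placeholders), the following deployment descriptor fragment requires BASIC authentication for the HTTP methods that modify data, while leaving read access open:

<security-constraint>
  <web-resource-collection>
    <web-resource-name>Repository write operations</web-resource-name>
    <url-pattern>/repositories/*</url-pattern>
    <!-- restrict only the modifying methods; note that POST is also
         used for some query requests, so this sketch is deliberately coarse -->
    <http-method>POST</http-method>
    <http-method>PUT</http-method>
    <http-method>DELETE</http-method>
  </web-resource-collection>
  <auth-constraint>
    <role-name>sesame-writer</role-name>
  </auth-constraint>
</security-constraint>

<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>Sesame server</realm-name>
</login-config>

<security-role>
  <role-name>sesame-writer</role-name>
</security-role>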

The SPARQL query language is extensible by nature: it allows implementors to define their own custom operators if the standard set of operators is not sufficient for the needs of some application.

Sesame’s SPARQL engine has been designed with this extensibility in mind: it allows you to define your own custom functions and use them as part of your SPARQL queries, like any other function. In this new recipe, I’ll show how to create a simple custom function. Specifically, we are going to implement a boolean function that detects if some string literal is a palindrome.
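
To give an idea of what this involves, here is a minimal sketch of such a function (the namespace is made up for the example, and registering the class via the Java service provider mechanism is not shown):

import org.openrdf.model.Literal;
import org.openrdf.model.Value;
import org.openrdf.model.ValueFactory;
import org.openrdf.query.algebra.evaluation.ValueExprEvaluationException;
import org.openrdf.query.algebra.evaluation.function.Function;

public class PalindromeFunction implements Function {

    // Made-up namespace, for illustration only.
    public static final String NAMESPACE = "http://example.org/custom-function/";

    public String getURI() {
        return NAMESPACE + "palindrome";
    }

    public Value evaluate(ValueFactory valueFactory, Value... args)
            throws ValueExprEvaluationException {
        if (args.length != 1 || !(args[0] instanceof Literal)) {
            throw new ValueExprEvaluationException(
                    "palindrome function requires a single literal argument");
        }
        String label = ((Literal) args[0]).getLabel();
        String reversed = new StringBuilder(label).reverse().toString();
        // Return a boolean literal: true if the label reads the same
        // backwards as forwards.
        return valueFactory.createLiteral(label.equals(reversed));
    }
}

Once registered, the function can be used in a query just like any builtin, e.g. in a FILTER expression.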

I am very proud to announce the release of Sesame 2.4.0. This is a major new release featuring support for SPARQL 1.1 Query.

Sesame 2.4.0 implements all features of SPARQL 1.1 Query as outlined in the October 14 W3C Working Draft, with the exception of basic federated query. SPARQL 1.1 support for Sesame is developed by Ontotext in cooperation with Aduna.

The list of new SPARQL query features includes:

  • Use of expressions in the SELECT clause
  • Aggregates (COUNT, MIN, MAX, AVG, SUM, GROUP_CONCAT), HAVING and GROUP BY
  • Property paths
  • Subqueries
  • Negation: (NOT) EXISTS and MINUS
  • Set membership: (NOT) IN
  • Conditionals: IF
  • Various new builtin functions: COALESCE, BNODE, IRI, isNumeric, strLang, strDt
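
To illustrate a few of these features in combination (using a made-up ex: namespace), a single query can now use a property path, aggregation, and an expression in the SELECT clause:

PREFIX ex: <http://example.org/>

SELECT ?person (COUNT(DISTINCT ?friend) AS ?friendCount)
WHERE {
   ?person ex:knows+ ?friend .
}
GROUP BY ?person
HAVING (COUNT(DISTINCT ?friend) > 1)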

Apart from this impressive array of new query language features, Sesame 2.4.0 also implements a number of bug fixes and improvements, including scalability and performance improvement in the Native store. For a full overview, see the release notes.