You can use any context configuration supported by Tupelo by writing your own default.rdf context configuration file. The file is a serialized version of a completely configured and initialized Tupelo Context. You can create such a file by writing a Java program that uses the Tupelo API to completely configure and then serialize your Context.

The following Java code example will configure and write a Tupelo context that is backed by a Jena-managed persistent MySQL database for storing triples and a directory/folder for storing blobs.

import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.sql.SQLException;
 
import org.tupeloproject.jena.PersistentJenaContext;
import org.tupeloproject.kernel.Context;
import org.tupeloproject.kernel.PeerFacade;
import org.tupeloproject.kernel.UnionContext;
import org.tupeloproject.kernel.impl.HashFileContext;
import org.tupeloproject.kernel.impl.MemoryContext;
import org.tupeloproject.rdf.xml.RdfXml;
 
/**
 *  Specify a Tupelo Context and serialize it to a file.
 *  This program depends on the Tupelo core, Tupelo Jena, and MySQL JDBC libraries.
 *  Building by hand, these are tupelo-kernel.jar, tupelo-jena.jar, and
 *  an appropriate MySQL JDBC connector jar, e.g., mysql-connector-java-5.0.4.jar.
 *  Using Eclipse, you would depend on the org.tupeloproject.core, org.tupeloproject.jena,
 *  and com.mysql.jdbc projects.
 */
public class CreateTupeloServerConfigFile {
 
    /**
     * Serialize the context to a file.
     *
     * @param newContext the new context
     * @param filename the name of the file to write the context to.
     * @throws Exception
     */
    private static void writeContext ( Context newContext, String filename ) throws Exception {
        // create an in-memory context to hold the triples that define our context
        MemoryContext mc = new MemoryContext();
 
        // create a peer facade on that memory context
        PeerFacade pf = new PeerFacade();
        pf.setContext(mc);
 
        // save our context into the memory context using the facade
        pf.saveDefaultPeer(newContext);
 
        // write the memory context triples to the file and close the stream
        OutputStream os = new FileOutputStream( filename );
        RdfXml.write(mc, os);
        os.close();
    }
 
    /**
     * @param args - no arguments
     */
    public static void main(String[] args) {
 
        // create and configure a PersistentJenaContext backed by an existing MySQL database
        // for storing triples
        PersistentJenaContext jenaContext = new PersistentJenaContext();
 
        // set the parameters Jena requires to connect to the MySQL database
        jenaContext.setType(PersistentJenaContext.MYSQL_TYPE);    // back with MySQL
        jenaContext.setDriverClass("com.mysql.jdbc.Driver");    // the MySQL jdbc driver
        jenaContext.setUrl("jdbc:mysql://localhost/tupelo");
        jenaContext.setUser("tupelo");        // the MySQL username associated with the database
        jenaContext.setPassword("aPassword"); // the MySQL password for that user
        jenaContext.setModelName( "jena00" );    // the jena model name to use
 
        // initialize the Jena model/database
        try {
            jenaContext.initialize();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        } catch (SQLException e) {
            e.printStackTrace();
            System.exit(1);
        }
 
        // create a HashFileContext for storing blobs using an existing directory
        HashFileContext hashFileContext = new HashFileContext();
        hashFileContext.setDirectory( new File( "/usr/local/tupelo/blobs/" ));
 
        // create a UnionContext to tie them together
        UnionContext unionContext = new UnionContext();
 
        // Add the blob and triple contexts as children.
        // The order is important: operations are attempted in the order
        // the children are added. Since a HashFileContext only supports blob
        // operations, blobs are stored there, while triple operations pass
        // through to the Jena context.
        unionContext.addChild( hashFileContext );
        unionContext.addChild( jenaContext );
 
        // Serialize the context to a file.
        // This file would then be copied to a file named default.rdf in the tupelo
        // server WEB-INF directory.
        try {
            writeContext( unionContext, "jenaContext.rdf" );
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Manage Dependencies

You will need to include the proper dependencies when building and running your program. At minimum you need the Tupelo kernel jar file as well as all jar files needed by the Tupelo core and by the context implementation(s) your configuration uses. The default Tupelo Server build already includes the dependencies needed to support the Tupelo kernel as well as MySQL, PostgreSQL, Sesame, and Jena. Any additional dependencies used by your Context implementation must be copied to the WEB-INF/lib subdirectory of the server installation.

Note that the example program above uses a Jena context backed by MySQL, so the tupelo-jena.jar file and the MySQL JDBC connector jar mysql-connector-java-5.0.4.jar are dependencies; as mentioned above, they are already included in the server installation. If you wished to use, for example, an H2 context, you would need to install both the tupelo-h2 jar file and its dependency, the com.h2database jar file.
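
If the configuration program fails with a ClassNotFoundException, the usual cause is a missing JDBC driver jar. As a minimal sketch (using the same driver class name as the example above; the class name below is the only assumption), you can verify that the driver is on your runtime classpath before running the configuration program:

/**
 * Minimal sketch: check that the MySQL JDBC driver used by the example
 * above can be loaded from the current classpath.
 */
public class CheckMySqlDriver {

    public static void main(String[] args) {
        try {
            // same driver class the example configures on the PersistentJenaContext
            Class.forName("com.mysql.jdbc.Driver");
            System.out.println("MySQL JDBC driver found on the classpath.");
        } catch (ClassNotFoundException e) {
            System.err.println("MySQL JDBC driver not found; add the connector jar "
                    + "(e.g., mysql-connector-java-5.0.4.jar) to the classpath.");
            System.exit(1);
        }
    }
}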

Once you have created the file, copy or move it into the WEB-INF directory of the Tupelo webapp under the name default.rdf, and copy any necessary additional dependencies to the WEB-INF/lib directory.
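
As a minimal sketch (the destination path is only an assumption about where your servlet container deploys the Tupelo webapp; adjust it to your installation), the copy-and-rename step can also be done in a few lines of standard Java (Java 7 or later):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

/**
 * Minimal sketch: install the generated context file as default.rdf.
 * The destination path below is an assumption; adjust it to match your
 * Tupelo server installation.
 */
public class InstallDefaultRdf {

    public static void main(String[] args) throws Exception {
        // copy the generated file into WEB-INF, renaming it to default.rdf
        Files.copy(Paths.get("jenaContext.rdf"),
                   Paths.get("/usr/local/tomcat/webapps/tupelo/WEB-INF/default.rdf"),
                   StandardCopyOption.REPLACE_EXISTING);
    }
}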
