Frequently Asked Questions

General

  1. Which relational database systems are supported?
  2. How to build DbUnit?
  3. Can I use DbUnit in order to check and execute DDL statements?
  4. How to see SQL statements issued by DbUnit using P6Spy?
  5. How to learn more about known and fixed issues?

Using DbUnit

  1. How to extract a flat XML dataset from my database?
  2. How to generate a DTD representing my database schema?
  3. Can I work with multiple database schemas having tables with identical names?
  4. Can I use DbUnit with IDENTITY or auto-increment columns?
  5. How to specify NULL values with flat XML dataset?
  6. Can I use DbUnit with database views?
  7. How to enable batched statements?
  8. What are the date formats supported by DbUnit?
  9. How to replace the default data type factory?
  10. How to perform streamed import and export?
  11. How to enable flat XML dataset validation?
  12. How to improve the performance of my DbUnit tests?
  13. How to automatically order tables by their foreign keys?
  14. How to add the DOCTYPE declaration when writing a flat XML dataset?
  15. How to exclude some table columns at runtime?
  16. How to filter rows of an ITable by checking their column values?
  17. Is there an equivalent to JUnit's assertEquals(double expected, double actual, double delta) to define a tolerance level when comparing numeric values?
  18. Can I add more information to the result of an assertEquals(double expected, double actual) call so that I can easily identify the rows that mismatch?
  19. How to use InsertIdentityOperation with user defined types?
  20. How to customize primary keys detection?
  21. I'm getting the following message: Extra columns on line x. Those columns will be ignored. Please add the extra columns to line 1, or use a DTD to make sure the value of those columns are populated. Why is that?
  22. Is there any way I can specify the current system time for my testcase?
  23. Is there any way to preserve \r\n character sequences in an XmlDataSet if they are explicitly stored in the database?
  24. How can I keep my IDatabaseConnection open after setUp/tearDown when extending DBTestCase/DatabaseTestCase?
  25. Are Postgresql enum types supported by dbunit?
  26. Why am I getting a "The configured data type factory 'class org.dbunit.dataset.datatype.DefaultDataTypeFactory' might cause problems with the current database ..." warning?
  27. I am using UUIDs mapped to a binary column in the database, but encoding them in base64 for datasets is a bit awkward. Is there another way to use them in DbUnit datasets while being able to see the UUIDs clearly?

Errors

  1. Why am I getting an "AmbiguousTableNameException"?
  2. Why am I getting a "Foreign keys constraint violation" error?
  3. Why am I getting a "data type not recognized" warning?
  4. Why am I getting a "Can't start a cloned connection" exception when I use InsertIdentityOperation?
  5. Why am I getting an "UnsatisfiedLinkError" with the DB2 driver?
  6. Why am I getting the following error: "java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory"
  7. Why am I getting a "StackOverflowError" when using Jakarta Commons Logging (JCL)?
  8. Why am I getting a "NoSuchColumnException" but I am sure that the column actually exists?

General

Which relational database systems are supported?

See the publicly maintained list of compatible RDBMS at the DbUnit Wiki. Don't hesitate to contribute to this list, particularly for databases not listed yet.

[top]


How to build DbUnit?

See the Building DbUnit page for detailed instructions.

[top]


Can I use DbUnit in order to check and execute DDL statements?

No. DbUnit is not intended to track or test structural DB changes. For that purpose, check out Liquibase.

[top]


How to see SQL statements issued by DbUnit using P6Spy?

P6Spy is a transparent JDBC proxy driver that logs the statements performed against the actual driver you would normally use. Using P6Spy allows you to log the SQL statements issued by DbUnit.

To install P6Spy, complete the following steps:

  1. Put the p6spy.jar file in your classpath.
  2. Move spy.properties into a directory listed in your classpath. Unlike JAR files, you do not directly reference your property file in the classpath (e.g.: if you have the file c:\r\proj\conf\spy.properties make sure you put c:\r\proj\conf in the classpath).
  3. Modify your application to use the P6Spy database driver i.e. com.p6spy.engine.spy.P6SpyDriver.
  4. Modify the realdriver line in the spy.properties file to reflect the wrapped database driver. An example of a modified realdriver line follows: realdriver = oracle.jdbc.driver.OracleDriver

Installation is complete. When you run your application, a spy.log file is generated in the directory from which you run the application. The log file contains a list of all the database statements executed. You can change both the destination of spy.log and what it logs by editing the spy.properties file.

[top]


How to learn more about known and fixed issues?

Look at Changes Report and at Issue Tracking.

[top]

Using DbUnit

How to extract a flat XML dataset from my database?

The following sample demonstrates how you can export one or many tables from a database to a flat XML dataset file. Note that if you want to specify a schema you can do this in the constructor of DatabaseConnection.

public class DatabaseExportSample
{
    public static void main(String[] args) throws Exception
    {
        // database connection
        Class driverClass = Class.forName("org.hsqldb.jdbcDriver");
        Connection jdbcConnection = DriverManager.getConnection(
                "jdbc:hsqldb:sample", "sa", "");
        IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);

        // partial database export
        QueryDataSet partialDataSet = new QueryDataSet(connection);
        partialDataSet.addTable("FOO", "SELECT * FROM TABLE WHERE COL='VALUE'");
        partialDataSet.addTable("BAR");
        FlatXmlDataSet.write(partialDataSet, new FileOutputStream("partial.xml"));

        // full database export
        IDataSet fullDataSet = connection.createDataSet();
        FlatXmlDataSet.write(fullDataSet, new FileOutputStream("full.xml"));
        
        // dependent tables database export: export table X and all tables that
        // have a PK which is a FK on X, in the right order for insertion
        String[] depTableNames = 
          TablesDependencyHelper.getAllDependentTables( connection, "X" );
        IDataSet depDataset = connection.createDataSet( depTableNames );
        FlatXmlDataSet.write(depDataSet, new FileOutputStream("dependents.xml"));          
        
    }
}
     

[top]


How to generate a DTD representing my database schema?

The following sample demonstrates how you can generate a flat xml dataset DTD from a database.

public class DatabaseExportSample
{
    public static void main(String[] args) throws Exception
    {
        // database connection
        Class driverClass = Class.forName("org.hsqldb.jdbcDriver");
        Connection jdbcConnection = DriverManager.getConnection("jdbc:hsqldb:sample", "sa", "");
        IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);

        // write DTD file
        FlatDtdDataSet.write(connection.createDataSet(), new FileOutputStream("test.dtd"));
    }
}

Note that there is a more flexible way to write out a DTD, which allows you to vary the content model of the DTD:

        ...
        IDataSet dataSet = connection.createDataSet();
        Writer out = new OutputStreamWriter(new FileOutputStream("myFile.dtd"));
        FlatDtdWriter datasetWriter = new FlatDtdWriter(out);
        datasetWriter.setContentModel(FlatDtdWriter.CHOICE);
        // You could also use the sequence model which is the default
        // datasetWriter.setContentModel(FlatDtdWriter.SEQUENCE);
        datasetWriter.write(dataSet);
     

[top]


Can I work with multiple database schemas having tables with identical names?
Yes, see Why am I getting an "AmbiguousTableNameException"?.

[top]


Can I use DbUnit with IDENTITY or auto-increment columns?

Many RDBMSes allow IDENTITY and auto-increment columns to be implicitly overwritten with client values, and DbUnit can be used with them natively. Some databases, like MS SQL Server and Sybase, require you to explicitly activate the writing of client values; the way to activate this feature is vendor-specific.

DbUnit provides this functionality for MS SQL Server with the InsertIdentityOperation class.
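
For example, a minimal sketch of using it in place of the standard operation (the connection and dataSet variables are assumed to already exist):

// Behaves like CLEAN_INSERT but enables/disables IDENTITY_INSERT
// around the inserts on MS SQL Server.
DatabaseOperation operation = InsertIdentityOperation.CLEAN_INSERT;
operation.execute(connection, dataSet);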

[top]


How to specify NULL values with flat XML dataset?

See FlatXmlDataSet documentation

[top]


Can I use DbUnit with database views?

Yes. By default DbUnit is configured to recognize only normal tables. To work with other table types, modify the table type property; for example, use {"TABLE", "VIEW"} to include views, as shown in the sketch below.

Be aware that the REFRESH, DELETE and UPDATE operations are not compatible with tables lacking primary keys, so they cannot be used with views unless you override primary key detection. The CLEAN_INSERT, INSERT and DELETE_ALL operations are compatible with views.
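
A minimal sketch of setting this property (assuming an existing IDatabaseConnection named connection):

DatabaseConfig config = connection.getConfig();
// Recognize views in addition to normal tables
config.setProperty(DatabaseConfig.PROPERTY_TABLE_TYPE, new String[] {"TABLE", "VIEW"});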

[top]


How to enable batched statements?
See the batched statements feature.
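
A minimal sketch of turning the feature on (only do this if your JDBC driver supports batch updates):

DatabaseConfig config = connection.getConfig();
// Disabled by default because some JDBC drivers are incompatible with it
config.setFeature(DatabaseConfig.FEATURE_BATCHED_STATEMENTS, true);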

[top]


What are the date formats supported by DbUnit?

DbUnit uses the JDBC escape formats for the string representation of date and time values.

Type       Format
DATE       yyyy-mm-dd
TIME       hh:mm:ss
TIMESTAMP  yyyy-mm-dd hh:mm:ss.fffffffff

There is also a special syntax for relative dates, times and timestamps. See the data types documentation.
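
For example, a flat XML dataset row using these formats could look like this (the table and column names are placeholders):

<EVENT EVENT_DATE="2006-01-15" START_TIME="13:55:23" LAST_UPDATED="2006-01-15 13:55:23.0"/>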

[top]


How to replace the default data type factory?

You can replace the default DbUnit data type factory to get support for custom data types. DbUnit provides extended factories for some vendors, which are located in org.dbunit.ext subpackages.

Here is how to setup the Oracle factory:

IDatabaseConnection connection = new DatabaseConnection(jdbcConnection, schema);
DatabaseConfig config = connection.getConfig();
config.setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new OracleDataTypeFactory());

Don't hesitate to submit your own implementation if you encounter types not currently supported by DbUnit.

[top]


How to perform streamed import and export?

Historically, DbUnit has had memory consumption issues when working with very large dataset files. DbUnit 2.0 includes many improvements, like using SAX2 instead of the Electric XML parser and streamed XML dataset writing, to overcome the memory consumption problems.

For compatibility reasons, streamed export and import are not enabled by default. When working with large datasets, using these features can make a huge difference.

Database Export:

Configure your DbUnit connection to use ForwardOnlyResultSetTable when exporting very large datasets. ForwardOnlyResultSetTable is a very efficient database table implementation, useful when random data access is not required. By default, DbUnit uses CachedResultSetTable, which consumes more memory but provides random data access.

The following sample shows how to configure your DbUnit connection to use ForwardOnlyResultSetTable:

IDatabaseConnection connection = new DatabaseConnection(jdbcConnection, schema);
DatabaseConfig config = connection.getConfig();

config.setProperty(DatabaseConfig.PROPERTY_RESULTSET_TABLE_FACTORY, new ForwardOnlyResultSetTableFactory());

Database Import:

Use the very efficient StreamingDataSet to load your XML dataset when working with forward-only database operations like UPDATE, INSERT and REFRESH.
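
A minimal sketch of loading a flat XML dataset through the StreamingDataSet (the file name is a placeholder):

IDataSetProducer producer = new FlatXmlProducer(new InputSource("dataset.xml"));
IDataSet dataSet = new StreamingDataSet(producer);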

[top]


How to enable flat XML dataset validation?

Flat XML validation is disabled by default, even if you are using a DTD. The following sample demonstrates how to load a flat XML dataset with DTD validation enabled:

FlatXmlProducer producer = new FlatXmlProducer(new InputSource("dataset.xml"));
producer.setValidating(true);
IDataSet dataSet = new CachedDataSet(producer);

[top]


How to improve the performance of my DbUnit tests?

It is normal that testing with a real database is slower than testing with mock objects. Here are a few tricks that will help speed up your DbUnit tests.

1. Reuse the same connection throughout your test suite

Creating a new DbUnit connection every time has a cost. The overhead is much more than just creating a new JDBC connection. DbUnit needs to fetch the metadata of tables to retrieve the column data types. This information is cached in the DbUnit connection. So it is highly recommended to reuse the same DbUnit connection throughout your test suite; the more tables you have, the greater the benefits.

2. Specify the database schema name

If your database server supports multiple schemas, like Oracle, you should always specify the schema name you want to use when creating the DbUnit connection. DbUnit can potentially fetch the metadata of all tables it has access to, including tables from other schemas if your JDBC connection has broad access rights. In this situation, specifying a schema name can dramatically improve DbUnit performance, as in the sketch below.
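
A minimal sketch (the schema name "MYAPP" is just a placeholder):

IDatabaseConnection connection = new DatabaseConnection(jdbcConnection, "MYAPP");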

3. Test with little data

Unit testing requires relatively little data. So try to keep your setup datasets as small as possible. There is no necessity to reset the entire database content at the beginning of every test. Try to use only the data you need for a particular test case.

4. Set up read-only data once for the entire test suite

If most of your tests are using the same read-only data, you should consider initializing this data once for an entire test class or test suite.

5. Enable the batched statements feature

The batched statements feature is disabled by default because many JDBC drivers are incompatible with it. It is recommended to enable this feature if your driver supports it, although the performance gain may not be very significant when testing with small datasets.

[top]


How to automatically order tables by their foreign keys?
DbUnit inserts and updates rows in the order they appear in your dataset; deletes, on the other hand, are performed in reverse order. You must therefore order your tables and rows appropriately in your datasets to prevent foreign key constraint violations.

Since version 2.0, the DatabaseSequenceFilter can be used to automatically determine the table order using foreign key / exported key information.

The following sample demonstrates how to use this class to export a flat XML dataset:

IDatabaseConnection conn = new DatabaseConnection(jdbcConn);

ITableFilter filter = new DatabaseSequenceFilter(conn);
IDataSet dataset = new FilteredDataSet(filter, conn.createDataSet());
       
FlatXmlDataSet.write(dataset, new FileOutputStream(fileName));

[top]


How to add the DOCTYPE declaration when writing a flat XML dataset?

Use the setDocType() method of the FlatXmlWriter class like this:

FlatXmlWriter datasetWriter = new FlatXmlWriter(new FileOutputStream("dataset.xml")); 
datasetWriter.setDocType("dataset.dtd"); 
datasetWriter.write(connection.createDataSet());

This can also be done with the DbUnit Ant task.

[top]


How to exclude some table columns at runtime?

The FilteredTableMetaData class introduced in DbUnit 2.1 can be used in combination with the IColumnFilter interface to decide the inclusion or exclusion of table columns at runtime.

FilteredTableMetaData metaData = new FilteredTableMetaData(originalTable.getTableMetaData(), new MyColumnFilter());
ITable filteredTable = new CompositeTable(metaData, originalTable);

You can use your own IColumnFilter implementation or use the DefaultColumnFilter class provided by DbUnit. DefaultColumnFilter supports wildcards. This class also offers some convenience methods, includedColumnsTable() and excludedColumnsTable(), to ease the creation of a column-filtered table.

The following sample demonstrates the usage of DefaultColumnFilter to exclude all columns prefixed with "PK" or suffixed by "TIME".

DefaultColumnFilter columnFilter = new DefaultColumnFilter();
columnFilter.excludeColumn("PK*");
columnFilter.excludeColumn("*TIME");

FilteredTableMetaData metaData = new FilteredTableMetaData(originalTable.getTableMetaData(), columnFilter);

The same as above, but using the excludedColumnsTable() convenience method:

ITable filteredTable = DefaultColumnFilter.excludedColumnsTable(originalTable, new String[]{"PK*", "*TIME"});

See also Ignoring some columns in comparison.

[top]


How to filter rows of an ITable by checking their column values?

By implementing the IRowFilter interface, it is possible to inspect a row's column values and return either true if the row should be accepted or false if it should be filtered out.

IRowFilter rowFilter = new IRowFilter() {
    public boolean accept(IRowValueProvider rowValueProvider) {
        Object columnValue = rowValueProvider.getColumnValue("COLUMN1");
        if(((String)columnValue).equalsIgnoreCase("customerAbroad")) {
            return true;
        }
        return false;
    }
};
ITable filteredTable = new RowFilterTable(iTable, rowFilter);
           

[top]


Is there an equivalent to JUnit's assertEquals(double expected, double actual, double delta) to define a tolerance level when comparing numeric values?

The DefaultDataTypeFactory provides the method "addToleratedDelta()" to define an allowed deviation of the compared values for a specific database column.

DefaultDataTypeFactory datatypeFactory = new OracleDataTypeFactory();
datatypeFactory.addToleratedDelta(new ToleratedDelta("TEST_TABLE", "COLUMN0", 0.0001));
// Set the datatype factory
DatabaseConnection dbConnection = new DatabaseConnection(jdbcConnection);
dbConnection.getConfig().setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, datatypeFactory);
           

[top]


Can I add more information to the result of an assertEquals(double expected, double actual) call so that I can easily identify the rows that mismatch?

There is a method assertEquals(ITable expectedTable, ITable actualTable, Column[] additionalColumnInfo) that allows you to specify additional columns as a third parameter. If two rows are not equal, the values of the given columns are printed in the assertion message.

Assertion.assertEquals(iTableExpected, iTableActual, new Column[] {new Column("COLUMN0", DataType.VARCHAR)} );
           

[top]


How to use InsertIdentityOperation with user defined types?

The IColumnFilter interface is now used by InsertIdentityOperation to detect identity columns. The default implementation assumes that the type names of identity columns end with "identity". If you are using user-defined types that do not follow this assumption, you can provide your own implementation via the MS SQL identity column filter property.

IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);
connection.getConfig().setProperty(
    "http://www.dbunit.org/properties/mssql/identityColumnFilter",
    new MyIdentityFilter());

[top]


How to customize primary keys detection?

The IColumnFilter interface can also be used to determine which columns are primary keys instead of using DatabaseMetaData.getPrimaryKeys(). This can be useful if your primary keys are not explicitly defined in your database model.

IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);
connection.getConfig().setProperty(
    "http://www.dbunit.org/properties/primaryKeyFilter",
    new MyPrimaryKeyFilter());

[top]


I'm getting the following message: Extra columns on line x. Those columns will be ignored. Please add the extra columns to line 1, or use a DTD to make sure the value of those columns are populated. Why is that?

DbUnit uses the first tag for a table to define the columns to be populated. If subsequent records for this table contain extra columns, those columns will not be populated. To solve this, either define all of the table's columns in the first row (using NULL values for the empty columns), or use a DTD to define the metadata of the tables you use.

Since DbUnit 2.3.0 there is a feature called "column sensing", which reads the whole XML into a buffer and dynamically adds new columns as they appear. It can be used as demonstrated in the following example:

// Since release 2.4.7, use a builder:
FlatXmlDataSetBuilder builder = new FlatXmlDataSetBuilder();
builder.setColumnSensing(true);
IDataSet dataSet = builder.build(new File("src/xml/flatXmlTableTest.xml"));

// For releases older than 2.4.7:
boolean enableColumnSensing = true;
IDataSet dataSet = new FlatXmlDataSet(new File("src/xml/flatXmlTableTest.xml"), false, enableColumnSensing);

[top]


Is there any way I can specify the current system time for my testcase?

The simplest way is to use the ReplacementTable/ReplacementDataSet.

            ITable actualTable = ... // Load actual table from database or somewhere else
            ITable table = new XmlDataSet(new FileInputStream("myFile.xml")).getTable("TABLE");
            ReplacementTable replacementTable = new ReplacementTable(table);
            replacementTable.addReplacementObject("special-now", new java.util.Date(System.currentTimeMillis()));
            Assertion.assertEquals(replacementTable, actualTable);
         
Every "special-now" column value from your XML-dataset will be replaced by the current systime represented by a java.util.Date object.

[top]


Is there any way to preserve \r\n character sequences in an XmlDataSet if they are explicitly stored in the database?

You can easily extend the XmlDataSetWriter class and overwrite the methods that write the string values, as follows:

public class MyXmlWriter extends org.dbunit.dataset.xml.XmlDataSetWriter {
...
  protected void writeValueCData(String str) throws IOException {
    int k = 0;
    while ((k = str.indexOf('\r')) != -1) {
      getXmlWriter().writeCData(str.substring(0, k));
      getXmlWriter().writeText("\r", true);
      str = str.substring(k + 1);
    }
    getXmlWriter().writeCData(str);
  }

  protected void writeValue(String stringValue) throws IOException {
    // Write literally
    super.getXmlWriter().writeText(stringValue, true);
  }
}
         
Just for the sake of completeness: storing such a character sequence explicitly in the database is possible, for example, with "insert into test_table(description) values ('Description' || chr(13) || chr(10) || 'WithLineBreak');" on Oracle.

[top]


How can I keep my IDatabaseConnection open after setUp/tearDown when extending DBTestCase/DatabaseTestCase?

Since DbUnit 2.4.4 there is a new method, IDatabaseTester#setOperationListener, which allows you to set a user-defined listener that is invoked at different points during test execution. The DBTestCase/DatabaseTestCase classes make use of this functionality and by default close the connection after the setUp/tearDown operations have been executed. Here is an example of how to change this behavior so that the connection is no longer closed:

            public class MyTestCase extends DBTestCase{
                // Overwrite the method getOperationListener to provide our own operation listener
                protected IOperationListener getOperationListener() {
                    return new DefaultOperationListener(){
                        public void operationSetUpFinished(
                                IDatabaseConnection connection) 
                        {
                            // Do not invoke the "super" method to avoid that the connection is closed
                        }
                        public void operationTearDownFinished(
                                IDatabaseConnection connection) 
                        {
                            // Do not invoke the "super" method to avoid that the connection is closed
                        }
                    };
                }
            }
         
Note that since 2.4.5 DbUnit provides such a no-op implementation of IOperationListener:
            public class MyTestCase extends DBTestCase{
                // Overwrite the method getOperationListener to provide our own operation listener
                protected IOperationListener getOperationListener() {
                    return IOperationListener.NO_OP_OPERATION_LISTENER;
                }
            }
         
If you use the IDatabaseTester directly, you can simply invoke the setOperationListener method on it:
              ...
              IDatabaseTester dbTester = new DataSourceDatabaseTester(ds);
              dbTester.setOperationListener(IOperationListener.NO_OP_OPERATION_LISTENER);
              dbTester.onSetup();
              ...
         

[top]


Are Postgresql enum types supported by dbunit?

Since DbUnit 2.4.6 there is limited support for PostgreSQL enums: only reading and writing string values is supported. To use it, override the method "isEnumType" of the PostgresqlDataTypeFactory like this:

PostgresqlDataTypeFactory factory = new PostgresqlDataTypeFactory(){
  public boolean isEnumType(String sqlTypeName) {
    if(sqlTypeName.equalsIgnoreCase("abc_enum")){
      return true;
    }
    return false;
  }
}; 
         

[top]


Why am I getting an "The configured data type factory 'class org.dbunit.dataset.datatype.DefaultDataTypeFactory' might cause problems with the current database ..."?

This warning occurs when no data type factory has been configured and DbUnit defaults to its org.dbunit.dataset.datatype.DefaultDataTypeFactory which supports a limited set of RDBMS.

You can solve this problem in two different ways:

  1. Set the appropriate data type factory for your RDBMS through the data type factory feature; this is the preferred solution if your RDBMS is among those that already have an IDataTypeFactory implementation.
  2. Disable the warning notification through the data type warning property, but only if a data type factory is not available for your RDBMS and your tests don't use features specific to your RDBMS!

If you are not in any of the above cases, please help us write an IDataTypeFactory implementation for your RDBMS.

[top]


I am using UUIDs mapped to a binary column in the database, but encoding them in base64 for datasets is a bit awkward. Is there another way to use them in DbUnit datasets while being able to see the UUIDs clearly?

Use the special UUID syntax (uuid'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx', where all the x's are the hex-encoded string value of your UUID):

          
<my_table uuid="uuid'029afe42-d8e7-11e2-aca1-50e549c9b654'" another_column="another column value"/>

[top]

Errors

Why am I getting an "AmbiguousTableNameException"?

This error occurs when no schema is specified and DbUnit detects that it is getting column information from multiple tables that have the same name but are located in different schemas.

You can solve this problem in three different ways:

  1. Provide the schema name when creating the database connection. Note that for Oracle you must specify the schema name in uppercase.
  2. Ensure that the connection is restricted to access only one schema.
  3. Enable the qualified table names feature.
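
For example, options 1 and 3 could be set up like this (a minimal sketch; the schema name "MYAPP" is just a placeholder):

// Option 1: bind the connection to a single schema
IDatabaseConnection connection = new DatabaseConnection(jdbcConnection, "MYAPP");

// Option 3: use schema-qualified table names (e.g. "MYAPP.MY_TABLE") in your datasets
connection.getConfig().setFeature(DatabaseConfig.FEATURE_QUALIFIED_TABLE_NAMES, true);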

[top]


Why am I getting a "Foreign keys constraint violation" error?
See How to automatically order tables by their foreign keys?

[top]


Why am I getting a "data type not recognized" warning?

By default, DbUnit only supports standard JDBC data types. You are getting this warning message if you are using vendor-specific data types.

Read how to replace the default data type factory and how to disable this warning message.

[top]


Why am I getting a "Can't start a cloned connection" exception when I use InsertIdentityOperation?

If you are using the Microsoft driver (i.e. com.microsoft.sqlserver.jdbc.SQLServerDriver), you need to use the SelectMethod=cursor parameter in the JDBC connection string (as outlined in this JDBC thread). Your database URL would look something like:

jdbc:microsoft:sqlserver://localhost:1433;DatabaseName=mydb;SelectMethod=cursor

[top]


Why am I getting an "UnsatisfiedLinkError" with the DB2 driver?

DbUnit uses JDBC 2.0 features (batch updates). By default, DB2 installs the JDBC 1.0 driver. You have to install the JDBC 2.0 driver for DbUnit to work, or you will get an UnsatisfiedLinkError from the DB2 JDBC 1.0 driver.

The steps for installing the DB2 JDBC 2.0 driver are covered in the DB2 documentation.

[top]


Why am I getting the following error: "java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory"

Since version 2.2.1, DbUnit uses SLF4J for logging purposes (See dependencies). You must therefore download SLF4J and include slf4j-api-*.jar and the jar corresponding to your favourite logging library in your classpath (e.g. slf4j-log4j12-*.jar for Log4j or slf4j-jcl-*.jar for commons-logging). See the SLF4J homepage for more details.

[top]


Why am I getting a "StackOverflowError" when using Jakarta Commons Logging (JCL)?

Please make sure there is only one SLF4J implementation library in your classpath. In particular, jcl104-over-slf4j-*.jar and slf4j-jcl-*.jar must not be used together.

[top]


Why am I getting a "NoSuchColumnException" but I am sure that the column actually exists?

This exception can occur, especially with MySQL, if you forget to set the special MySqlMetadataHandler on the DatabaseConfig. See the properties page for more information.
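
A minimal sketch of configuring the MySQL handlers on the connection (assuming an existing IDatabaseConnection named connection):

DatabaseConfig config = connection.getConfig();
// Use the MySQL-specific metadata handler and data type factory
config.setProperty(DatabaseConfig.PROPERTY_METADATA_HANDLER, new MySqlMetadataHandler());
config.setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new MySqlDataTypeFactory());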

[top]