General
Using DbUnit
Errors
See the publicly maintained list of compatible RDBMS at the DbUnit Wiki. Don't hesitate to contribute to this list, particularly for databases not listed yet.
See Building DbUnit for detailed instructions.
No. DbUnit is not intended to track or test structural database changes. For this purpose, check out Liquibase.
P6Spy is a transparent JDBC proxy driver that logs the statements executed against the actual driver you would normally use. Using P6Spy lets you log the SQL statements issued by DbUnit.
To install P6Spy, complete the following steps:
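The exact steps depend on the P6Spy version; the following is only a rough sketch of the classic setup (the realdriver key and the P6SpyDriver class name come from the P6Spy documentation, and the HSQLDB driver is just an example):

// 1. Put p6spy.jar on the classpath and create a spy.properties file that names
//    the real driver, e.g. with a line like: realdriver=org.hsqldb.jdbcDriver
// 2. Load the P6Spy driver instead of the real one; the JDBC URL stays unchanged:
Class.forName("com.p6spy.engine.spy.P6SpyDriver");
Connection jdbcConnection = DriverManager.getConnection("jdbc:hsqldb:sample", "sa", "");
// 3. From here on, every statement DbUnit issues is logged by P6Spy.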
Look at the Changes Report and at Issue Tracking.
The following sample demonstrates how you can export one or more tables from a database to a flat XML dataset file. Note that if you want to specify a schema, you can do so in the DatabaseConnection constructor.
public class DatabaseExportSample { public static void main(String[] args) throws Exception { // database connection Class driverClass = Class.forName("org.hsqldb.jdbcDriver"); Connection jdbcConnection = DriverManager.getConnection( "jdbc:hsqldb:sample", "sa", ""); IDatabaseConnection connection = new DatabaseConnection(jdbcConnection); // partial database export QueryDataSet partialDataSet = new QueryDataSet(connection); partialDataSet.addTable("FOO", "SELECT * FROM TABLE WHERE COL='VALUE'"); partialDataSet.addTable("BAR"); FlatXmlDataSet.write(partialDataSet, new FileOutputStream("partial.xml")); // full database export IDataSet fullDataSet = connection.createDataSet(); FlatXmlDataSet.write(fullDataSet, new FileOutputStream("full.xml")); // dependent tables database export: export table X and all tables that // have a PK which is a FK on X, in the right order for insertion String[] depTableNames = TablesDependencyHelper.getAllDependentTables( connection, "X" ); IDataSet depDataset = connection.createDataSet( depTableNames ); FlatXmlDataSet.write(depDataSet, new FileOutputStream("dependents.xml")); } }
The following sample demonstrates how you can generate a flat XML dataset DTD from a database.
public class DatabaseExportSample { public static void main(String[] args) throws Exception { // database connection Class driverClass = Class.forName("org.hsqldb.jdbcDriver"); Connection jdbcConnection = DriverManager.getConnection("jdbc:hsqldb:sample", "sa", ""); IDatabaseConnection connection = new DatabaseConnection(jdbcConnection); // write DTD file FlatDtdDataSet.write(connection.createDataSet(), new FileOutputStream("test.dtd")); } }
Note that there is a more flexible way to write out a DTD, which allows you to vary the content model of the DTD:
...
IDataSet dataSet = connection.createDataSet();
Writer out = new OutputStreamWriter(new FileOutputStream("myFile.dtd"));
FlatDtdWriter datasetWriter = new FlatDtdWriter(out);
datasetWriter.setContentModel(FlatDtdWriter.CHOICE);
// You could also use the sequence model, which is the default:
// datasetWriter.setContentModel(FlatDtdWriter.SEQUENCE);
datasetWriter.write(dataSet);
Many RDBMSes allow IDENTITY and auto-increment columns to be implicitly overwritten with client values, and DbUnit can be used with these RDBMSes natively. Some databases, like MS SQL Server and Sybase, require the writing of client values to be explicitly activated; the way to activate this feature is vendor-specific. DbUnit provides this functionality for MS SQL Server with the InsertIdentityOperation class.
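For example (a sketch that assumes a connection and dataset created as in the samples above), the identity-aware variants are used in place of the standard operations:

// InsertIdentityOperation (org.dbunit.ext.mssql) wraps the standard operations so that
// IDENTITY columns on MS SQL Server can be written with client values.
DatabaseOperation operation = InsertIdentityOperation.CLEAN_INSERT;
operation.execute(connection, dataSet);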
See the FlatXmlDataSet documentation.
Yes. By default, DbUnit is configured to recognize only normal tables. Modify the table type property to work with other table types; for example, use {"TABLE", "VIEW"} for views.
Be aware that the REFRESH, DELETE and UPDATE operations are not compatible with tables that lack primary keys; they are not usable with views unless you override primary key detection. The CLEAN_INSERT, INSERT and DELETE_ALL operations are compatible with views.
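A minimal sketch of setting the property (assuming a DbUnit connection created as shown in the earlier samples):

// Make DbUnit recognize views in addition to plain tables
DatabaseConfig config = connection.getConfig();
config.setProperty(DatabaseConfig.PROPERTY_TABLE_TYPE, new String[] {"TABLE", "VIEW"});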
DbUnit uses the JDBC escape formats for the string representation of date and time values.
| Type      | Format                        |
|-----------|-------------------------------|
| DATE      | yyyy-mm-dd                    |
| TIME      | hh:mm:ss                      |
| TIMESTAMP | yyyy-mm-dd hh:mm:ss.fffffffff |
There is also a special syntax to set relative date, time and timestamp values. See the data types documentation.
You can replace the default DbUnit data type factory to get support for custom data types.
DbUnit provides extended factories for some vendors, which are located in org.dbunit.ext
subpackages.
Here is how to set up the Oracle factory:
IDatabaseConnection connection = new DatabaseConnection(jdbcConnection, schema);
DatabaseConfig config = connection.getConfig();
config.setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new OracleDataTypeFactory());
Don't hesitate to submit your own implementation if you encounter types not currently supported by DbUnit.
Historically, DbUnit has had memory consumption issues when working with very large dataset files. DbUnit 2.0 includes many improvements, like using SAX2 instead of the Electric XML parser and streamed XML dataset writing, to overcome the memory consumption problems.
For compatibility reasons, streamed export and import are not enabled by default. When working with large datasets, using this feature can make a huge difference.
Configure your DbUnit connection to use ForwardOnlyResultSetTable when exporting very large datasets. ForwardOnlyResultSetTable is a very efficient database table implementation, useful when random data access is not required. By default, DbUnit uses CachedResultSetTable, which consumes more memory but provides random data access.
The following sample shows how to configure your DbUnit connection to use ForwardOnlyResultSetTable:
IDatabaseConnection connection = new DatabaseConnection(jdbcConnection, schema);
DatabaseConfig config = connection.getConfig();
config.setProperty(DatabaseConfig.PROPERTY_RESULTSET_TABLE_FACTORY, new ForwardOnlyResultSetTableFactory());
Use the very efficient StreamingDataSet to load your XML datasets when working with forward-only database operations like UPDATE, INSERT and REFRESH.
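A minimal sketch, assuming a flat XML file named dataset.xml and a connection created as in the earlier samples:

// The producer parses the XML lazily; StreamingDataSet never keeps whole tables in memory
IDataSetProducer producer = new FlatXmlProducer(new InputSource("dataset.xml"));
IDataSet dataSet = new StreamingDataSet(producer);
DatabaseOperation.INSERT.execute(connection, dataSet);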
Flat XML validation is disabled by default, even if you are using a DTD. The following sample demonstrates how to load a flat XML dataset with DTD validation enabled:
FlatXmlProducer producer = new FlatXmlProducer(new InputSource("dataset.xml"));
producer.setValidating(true);
IDataSet dataSet = new CachedDataSet(producer);
It is normal that testing with a real database is slower than testing with MockObjects. Here are a few tricks that will help speed up your DbUnit tests.
Creating a new DbUnit connection every time has a cost. The overhead is much more than just creating a new JDBC connection. DbUnit needs to fetch the metadata of tables to retrieve the column data types. This information is cached in the DbUnit connection. So it is highly recommended to reuse the same DbUnit connection throughout your test suite; the more tables you have, the greater the benefits.
If your database server supports multiple schemas, like Oracle, you should always specify the schema name you want to use when creating the DbUnit connection. DbUnit can potentially fetch the metadata of all tables it has access to. This includes tables from other schemas if you are using a JDBC connection with broad access privileges. So in this situation, specifying a schema name can dramatically improve DbUnit performance.
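For example (a sketch; "MYSCHEMA" is a placeholder for your schema name):

// Binding the connection to one schema limits the metadata DbUnit has to fetch
IDatabaseConnection connection = new DatabaseConnection(jdbcConnection, "MYSCHEMA");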
Unit testing requires relatively little data. So try to keep your setup datasets as small as possible. There is no necessity to reset the entire database content at the beginning of every test. Try to use only the data you need for a particular test case.
If most of your tests are using the same read-only data, you should consider initializing this data once for an entire test class or test suite.
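A minimal JUnit 4 sketch of both ideas, reusing one connection and loading read-only data once per class (the HSQLDB URL and the setup.xml file name are placeholders):

import java.io.File;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class CustomerQueryTest
{
    // One DbUnit connection reused by every test in this class
    private static IDatabaseConnection connection;

    @BeforeClass
    public static void setUpDatabase() throws Exception
    {
        connection = new DatabaseConnection(
                DriverManager.getConnection("jdbc:hsqldb:sample", "sa", ""));
        // Read-only reference data is loaded once for the whole class
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("setup.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
    }

    @AfterClass
    public static void tearDownDatabase() throws Exception
    {
        connection.close();
    }

    // ... individual tests reuse "connection" instead of creating a new one ...
}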
The batched statements feature is disabled by default because there are many JDBC drivers incompatible with it. It is recommended to enable this feature if your driver supports it. The performance gain may not be very significant when testing with small datasets, though.
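A sketch of enabling the feature (assuming a connection created as shown above):

// Only enable this if your JDBC driver supports batch updates
DatabaseConfig config = connection.getConfig();
config.setProperty(DatabaseConfig.FEATURE_BATCHED_STATEMENTS, Boolean.TRUE);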
IDatabaseConnection conn = new DatabaseConnection(jdbcConn);
ITableFilter filter = new DatabaseSequenceFilter(conn);
IDataSet dataset = new FilteredDataSet(filter, conn.createDataSet());
FlatXmlDataSet.write(dataset, new FileOutputStream(fileName));
Use the setDocType() method of the FlatXmlWriter class like this:
FlatXmlWriter datasetWriter = new FlatXmlWriter(new FileOutputStream("dataset.xml"));
datasetWriter.setDocType("dataset.dtd");
datasetWriter.write(connection.createDataSet());
The FilteredTableMetaData class introduced in DbUnit 2.1 can be used in combination with the IColumnFilter interface to decide the inclusion or exclusion of table columns at runtime.
FilteredTableMetaData metaData = new FilteredTableMetaData(originalTable.getTableMetaData(), new MyColumnFilter());
ITable filteredTable = new CompositeTable(metaData, originalTable);
You can use your own IColumnFilter implementation or use the DefaultColumnFilter class provided by DbUnit. DefaultColumnFilter supports wildcards. This class also offers some convenience methods, includedColumnsTable() and excludedColumnsTable(), to ease the creation of a column-filtered table.
The following sample demonstrates the usage of DefaultColumnFilter to exclude all columns prefixed with "PK" or suffixed by "TIME".
DefaultColumnFilter columnFilter = new DefaultColumnFilter();
columnFilter.excludeColumn("PK*");
columnFilter.excludeColumn("*TIME");
FilteredTableMetaData metaData = new FilteredTableMetaData(originalTable.getTableMetaData(), columnFilter);
Same as above, but using the excludedColumnsTable() convenience method:
ITable filteredTable = DefaultColumnFilter.excludedColumnsTable(originalTable, new String[]{"PK*", "*TIME"});
See also Ignoring some columns in comparison.
By implementing the IRowFilter interface, it is possible to query column values and return true if the row should be accepted or false if it should be filtered out.
IRowFilter rowFilter = new IRowFilter()
{
    public boolean accept(IRowValueProvider rowValueProvider)
    {
        Object columnValue = rowValueProvider.getColumnValue("COLUMN1");
        if (((String) columnValue).equalsIgnoreCase("customerAbroad")) {
            return true;
        }
        return false;
    }
};
ITable filteredTable = new RowFilterTable(iTable, rowFilter);
The DefaultDataTypeFactory provides the method "addToleratedDelta()" to define an allowed deviation of the compared values for a specific database column.
DefaultDataTypeFactory datatypeFactory = new OracleDataTypeFactory();
datatypeFactory.addToleratedDelta(new ToleratedDelta("TEST_TABLE", "COLUMN0", 0.0001));
// Set the datatype factory on the connection
DatabaseConnection dbConnection = new DatabaseConnection(jdbcConnection);
dbConnection.getConfig().setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, datatypeFactory);
There is a method assertEquals(ITable expectedTable, ITable actualTable, Column[] additionalColumnInfo) that lets you specify additional columns as the third parameter. If two rows are not equal, the values of the given columns are printed in the assertion message.
Assertion.assertEquals(iTableExpected, iTableActual, new Column[] {new Column("COLUMN0", DataType.VARCHAR)} );
The IColumnFilter interface is now used by InsertIdentityOperation to detect identity columns. The default implementation assumes that the type name of identity columns ends with "identity". If you are using user-defined types that do not follow this assumption, you can provide your own implementation via the MS SQL identity column filter property.
IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);
connection.getConfig().setProperty(
        "http://www.dbunit.org/properties/mssql/identityColumnFilter",
        new MyIdentityFilter());
The IColumnFilter interface can also be used to determine which columns are primary keys instead of using DatabaseMetaData.getPrimaryKeys(). This can be useful if your primary keys are not explicitly defined in your database model.
IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);
connection.getConfig().setProperty(
        "http://www.dbunit.org/properties/primaryKeyFilter",
        new MyPrimaryKeyFilter());
DbUnit uses the first tag for a table to define the columns to be populated. If subsequent records for this table contain extra columns, these will therefore not be populated. To solve this, either define all the columns of the table in the first row (using NULL values for the empty columns), or use a DTD to define the metadata of the tables you use.
Since DbUnit 2.3.0 there is a feature called "column sensing", which basically reads the whole XML into a buffer and dynamically adds new columns as they appear. It can be used as demonstrated in the following example:
// Since release 2.4.7 we use a builder:
FlatXmlDataSetBuilder builder = new FlatXmlDataSetBuilder();
builder.setColumnSensing(true);
IDataSet dataSet = builder.build(new File("src/xml/flatXmlTableTest.xml"));
// This is for releases < 2.4.7:
boolean enableColumnSensing = true;
IDataSet dataSet = new FlatXmlDataSet(new File("src/xml/flatXmlTableTest.xml"), false, enableColumnSensing);
The simplest way is to use the ReplacementTable/ReplacementDataSet.
ITable actualTable = ... // Load the actual table from the database or somewhere else
ITable table = new XmlDataSet(new FileInputStream("myFile.xml")).getTable("TABLE");
ReplacementTable replacementTable = new ReplacementTable(table);
replacementTable.addReplacementObject("special-now", new java.util.Date(System.currentTimeMillis()));
Assertion.assertEquals(replacementTable, actualTable);
You can easily extend the XmlDataSetWriter class and override the methods that write the string values, as follows:
public class MyXmlWriter extends org.dbunit.dataset.xml.XmlDataSetWriter
{
    ...
    protected void writeValueCData(String stringValue) throws IOException
    {
        String str = stringValue;
        int k = 0;
        while ((k = str.indexOf('\r')) != -1) {
            getXmlWriter().writeCData(str.substring(0, k));
            getXmlWriter().writeText("\r", true);
            str = str.substring(k + 1);
        }
        getXmlWriter().writeCData(str);
    }

    protected void writeValue(String stringValue) throws IOException
    {
        // Write the value literally
        super.getXmlWriter().writeText(stringValue, true);
    }
}
Since DbUnit 2.4.4 there is a new method, IDatabaseTester#setOperationListener, which lets you set a user-defined listener that is invoked on different operations during test execution. The DBTestCase/DatabaseTestCase classes make use of this functionality and by default close the connection after the setUp/tearDown operations have been executed. Here is an example of how to change this behavior so that the connection is no longer closed:
public class MyTestCase extends DBTestCase
{
    // Override getOperationListener to provide our own operation listener
    protected IOperationListener getOperationListener()
    {
        return new DefaultOperationListener() {
            public void operationSetUpFinished(IDatabaseConnection connection)
            {
                // Do not invoke the "super" method so that the connection is not closed
            }
            public void operationTearDownFinished(IDatabaseConnection connection)
            {
                // Do not invoke the "super" method so that the connection is not closed
            }
        };
    }
}
Alternatively, the predefined no-op listener has the same effect:

public class MyTestCase extends DBTestCase
{
    // Override getOperationListener to provide our own operation listener
    protected IOperationListener getOperationListener()
    {
        return IOperationListener.NO_OP_OPERATION_LISTENER;
    }
}
The listener can also be set directly on an IDatabaseTester:

...
IDatabaseTester dbTester = new DataSourceDatabaseTester(ds);
dbTester.setOperationListener(IOperationListener.NO_OP_OPERATION_LISTENER);
dbTester.onSetup();
...
Since DbUnit 2.4.6 there is limited support for PostgreSQL enums: only reading and writing string values is supported. To do so, you have to override the method "isEnumType" in the PostgresqlDataTypeFactory like this:
PostgresqlDataTypeFactory factory = new PostgresqlDataTypeFactory() {
    public boolean isEnumType(String sqlTypeName)
    {
        if (sqlTypeName.equalsIgnoreCase("abc_enum")) {
            return true;
        }
        return false;
    }
};
This warning occurs when no data type factory has been configured and DbUnit defaults to its org.dbunit.dataset.datatype.DefaultDataTypeFactory
which supports a limited set of RDBMS.
You can solve this problem in two different ways:
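Typically this means registering a vendor-specific data type factory from the org.dbunit.ext packages, or providing your own implementation as described above. A minimal sketch of the first approach (using the HSQLDB factory as an example):

// Register the factory matching your RDBMS so vendor-specific types are recognized
DatabaseConfig config = connection.getConfig();
config.setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new HsqldbDataTypeFactory());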
Use the special UUID syntax (uuid'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx', where all the x's are the hex-encoded string value of your UUID):
<my_table uuid="uuid'029afe42-d8e7-11e2-aca1-50e549c9b654'" another_column="another column value"/>
This error occurs when no schema is specified and DbUnit detects that it is getting column information from multiple tables that have the same name but are located in different schemas.
You can solve this problem in three different ways:
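As a hedged sketch (the schema name "MYSCHEMA" is a placeholder), two common remedies are binding the connection to one schema or enabling qualified table names:

// Remedy 1: bind the DbUnit connection to a single schema so metadata lookups are unambiguous
IDatabaseConnection connection = new DatabaseConnection(jdbcConnection, "MYSCHEMA");

// Remedy 2: enable qualified table names so tables are addressed as SCHEMA.TABLE in datasets
connection.getConfig().setProperty(DatabaseConfig.FEATURE_QUALIFIED_TABLE_NAMES, Boolean.TRUE);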
By default, DbUnit only supports standard JDBC data types. You are getting this warning message if you are using vendor-specific data types.
Read how to replace the default data type factory and how to disable this warning message.
If you are using the Microsoft driver (i.e. com.microsoft.sqlserver.jdbc.SQLServerDriver), you need to use the SelectMethod=cursor parameter in the JDBC connection string (as outlined in this JDBC thread). Your database URL would look something like:
jdbc:microsoft:sqlserver://localhost:1433;DatabaseName=mydb;SelectMethod=cursor
DbUnit uses JDBC 2.0 features (batch updates). By default, DB2 installs the JDBC 1.0 driver. You have to install the JDBC 2.0 driver for DbUnit to work, or you will get an UnsatisfiedLinkError from the DB2 JDBC 1.0 driver.
The steps for installing the DB2 JDBC 2.0 driver are covered in the DB2 documentation.
Since version 2.2.1, DbUnit uses SLF4J for logging purposes (See dependencies). You must therefore download SLF4J and include slf4j-api-*.jar and the jar corresponding to your favourite logging library in your classpath (e.g. slf4j-log4j12-*.jar for Log4j or slf4j-jcl-*.jar for commons-logging). See the SLF4J homepage for more details.
Please make sure there is only one SLF4J implementation library in your classpath. In particular, jcl104-over-slf4j-*.jar and slf4j-jcl-*.jar must not be used together.
This exception can occur, especially with the MySQL RDBMS, if you forget to set the special MySqlMetadataHandler on the DatabaseConfig. See the properties page for more details.
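A minimal sketch of the relevant configuration (the MySQL handler and factory classes live in org.dbunit.ext.mysql; the connection is assumed to be created as in the earlier samples):

DatabaseConfig config = connection.getConfig();
config.setProperty(DatabaseConfig.PROPERTY_METADATA_HANDLER, new MySqlMetadataHandler());
config.setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new MySqlDataTypeFactory());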