package tap.db;

// note: imports reconstructed for readability; the java.sql/java.util ones are certain,
// the taplib/adql package names are the usual ones for these classes and may need
// checking against the actual sources.
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.sql.Statement;
import java.util.List;
import java.util.Properties;
/*
* This file is part of TAPLibrary.
*
* TAPLibrary is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* TAPLibrary is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
 * You should have received a copy of the GNU Lesser General Public License
 * along with TAPLibrary. If not, see <http://www.gnu.org/licenses/>.
 */

/**
 * <p>This {@link DBConnection} implementation is theoretically able to deal with any DBMS JDBC connection.</p>
 *
 * <p><i><b>Note:</b>
 * 	"Theoretically", because its design has been done using information about PostgreSQL, SQLite, Oracle, MySQL
 * 	and Java DB (Derby). It has then been really tested successfully only with PostgreSQL and SQLite.
 * </i></p>
 *
 * <p>
 * 	With a single instance of {@link JDBCConnection} it is possible to execute only one query (whatever its type:
 * 	SELECT, UPDATE, DELETE, ...) at a time. This is the simple way chosen in this implementation to allow the
 * 	cancellation of any query while managing only one {@link Statement}. Indeed, only a {@link Statement} has a
 * 	cancel function able to stop any query execution on the database. So all queries are executed with the same
 * 	{@link Statement}. Executing one query at a time thus ensures that a cancellation aborts only one query
 * 	rather than several at once.
 * </p>
 *
 * <p>
 * 	All the following functions are synchronized in order to prevent their parallel execution by several threads:
 * 	{@link #addUploadedTable(TAPTable, TableIterator)}, {@link #dropUploadedTable(TAPTable)},
 * 	{@link #executeQuery(ADQLQuery)}, {@link #getTAPSchema()} and {@link #setTAPSchema(TAPMetadata)}.
 * </p>
 *
 * <p>
 * 	To cancel a query execution, the function {@link #cancel(boolean)} must be called. No error is returned by
 * 	this function if no query is currently executing.
 * </p>
 *
 * <p>
 * 	Update queries take into account whether the following features are supported by the DBMS:
 * 	transactions ({@link #supportsTransaction}), data definition ({@link #supportsDataDefinition}),
 * 	batch updates ({@link #supportsBatchUpdates}) and the notion of schema ({@link #supportsSchema}).
 * </p>
 *
 * <p><i><b>Warning:</b>
 * 	All these features have no impact at all on ADQL query executions ({@link #executeQuery(ADQLQuery)}).
 * </i></p>
 *
 * <p>
 * 	All datatype conversions done while fetching a query result (via a {@link ResultSet}) are done exclusively
 * 	by the returned {@link TableIterator} (so, here, {@link ResultSetTableIterator}).
 * </p>
 *
 * <p>
 * 	However, datatype conversions done while uploading a table are done here by the function
 * 	{@link #convertTypeToDB(DBType)}. This function first uses the conversion function of the translator
 * 	({@link JDBCTranslator#convertTypeToDB(DBType)}), and then {@link #defaultTypeConversion(DBType)} if it fails.
 * </p>
 *
 * <p>
 * 	In this default conversion, all typical DBMS datatypes are taken into account, EXCEPT the geometrical types
 * 	(POINT and REGION). That's why it is recommended to use a translator in which the geometrical types are
 * 	supported and managed.
 * </p>
 *
 * <p>
 * 	The possibility to specify a "fetch size" to the JDBC driver (and more exactly to a {@link Statement}) may
 * 	prove very helpful when dealing with large datasets: rows can then be fetched in blocks of the given size.
 * 	This is also possible with this {@link DBConnection} thanks to the function {@link #setFetchSize(int)}.
 * </p>
 *
 * <p>
 * 	However, some JDBC drivers or DBMS may not support this feature. In such a case, it is automatically disabled
 * 	by {@link JDBCConnection} so that subsequent queries do not attempt to use it again. The flag
 * 	{@link #supportsFetchSize} is however reset to <code>true</code> when {@link #setFetchSize(int)} is called.
 * </p>
 *
 * <p><i><b>Note 1:</b>
 * 	The "fetch size" feature is used only for SELECT queries executed by {@link #executeQuery(ADQLQuery)}.
 * 	In all other functions, results of SELECT queries are fetched with the default parameters of the JDBC driver
 * 	and its {@link Statement} implementation.
 * </i></p>
 *
 * <p><i><b>Note 2:</b>
 * 	By default, this feature is disabled, so the default value of the JDBC driver is used. To enable it, a simple
 * 	call to {@link #setFetchSize(int)} is enough, whatever the given value.
 * </i></p>
 *
 * <p><i><b>Note 3:</b>
 * 	Generally, setting a fetch size starts a transaction in the database. So, once the result of the fetched query
 * 	is not needed any more, do not forget to call {@link #endQuery()} in order to end the implicitly opened
 * 	transaction. However, closing the returned {@link TableIterator} is generally fully enough (see the sources of
 * 	{@link ResultSetTableIterator#close()} for more details).
 * </i></p>
 *
 * @author Gr&eacute;gory Mantelet (CDS;ARI)
 * @version 2.1 (07/2016)
 * @since 2.0
 */
public class JDBCConnection implements DBConnection {

	/** DBMS name of PostgreSQL used in the database URL. */
	protected final static String DBMS_POSTGRES = "postgresql";

	/** DBMS name of SQLite used in the database URL. */
	protected final static String DBMS_SQLITE = "sqlite";

	/** DBMS name of MySQL used in the database URL. */
	protected final static String DBMS_MYSQL = "mysql";

	/** DBMS name of Oracle used in the database URL. */
	protected final static String DBMS_ORACLE = "oracle";

	/** Name of the database column giving the database name of a TAP column, table or schema. */
	protected final static String DB_NAME_COLUMN = "dbname";

	/** Connection ID (typically, the job ID). It lets identify the DB errors linked to the Job execution in the logs. */
	protected final String ID;

	/** JDBC connection (created and initialized at the creation of this {@link JDBCConnection} instance). */
	protected final Connection connection;

	/** <p>The only {@link Statement} instance that should be used in this {@link JDBCConnection}.
	 * Having the same {@link Statement} for all the interactions with the database lets cancel any of them
	 * when needed (e.g. when the execution is too long).</p>
	 * <p>This statement is by default NULL; it must be initialized by the function {@link #getStatement()}.</p>
	 * @since 2.1 */
	protected Statement stmt = null;

	/** <p>If <code>true</code>, this flag indicates that the function {@link #cancel(boolean)} has been called
	 * successfully: {@link #cancel(boolean)} sets this flag to <code>true</code>.</p>
	 *
	 * <p>All functions executing any kind of query on the database MUST set this flag to <code>false</code>
	 * before doing anything, by calling the function {@link #resetCancel()}.</p>
	 *
	 * <p>This flag is particularly useful for logging: when an exception is detected inside a function executing
	 * a query, this flag is used to know whether the exception should be ignored for logging
	 * (if <code>true</code>) or not.</p>
	 *
	 * <p>Any access (write AND read) to this flag MUST be synchronized on it using one of the following functions:
	 * {@link #cancel(boolean)}, {@link #resetCancel()} and {@link #isCancelled()}.</p>
	 *
	 * @since 2.1 */
	private Boolean cancelled = false;

	/** The translator this connection must use to translate ADQL into SQL. It is also used to get information
	 * about the case sensitivity of all types of identifier (schema, table, column). */
	protected final JDBCTranslator translator;

	/** Object to use if any message needs to be logged.
	 * <p><i>Note: this logger may be NULL. If NULL, messages will never be printed.</i></p> */
	protected final TAPLog logger;

	/* JDBC URL MANAGEMENT */

	/** JDBC prefix of any database URL (for instance: jdbc:postgresql://127.0.0.1/myDB or jdbc:postgresql:myDB). */
	public final static String JDBC_PREFIX = "jdbc";

	/** Name (in lower-case) of the DBMS with which the connection is linked. */
	protected final String dbms;

	/* DBMS SUPPORTED FEATURES */

	/** Indicate whether the DBMS supports transactions (start, commit, rollback and end).
	 * <p><i>Note: if no transaction is possible, none will be used, but then it will never be possible to cancel
	 * modifications in case of error.</i></p> */
	protected boolean supportsTransaction;

	/** Indicate whether the DBMS supports the definition of data (create, update, drop, insert into schemas and tables).
	 * <p><i>Note: if not supported, it will never be possible to create TAP_SCHEMA from given metadata
	 * (see {@link #setTAPSchema(TAPMetadata)}) nor to upload/drop tables
	 * (see {@link #addUploadedTable(TAPTable, TableIterator)} and {@link #dropUploadedTable(TAPTable)}).</i></p> */
	protected boolean supportsDataDefinition;

	/** Indicate whether the DBMS supports several updates in once (using {@link Statement#addBatch(String)} and
	 * {@link Statement#executeBatch()}).
	 * <p><i>Note: if not supported, every update will be done one by one. So it is not really a problem,
	 * but just a loss of optimization.</i></p> */
	protected boolean supportsBatchUpdates;

	/** Indicate whether the DBMS has the notion of SCHEMA. Most DBMS have it, but not SQLite for instance.
	 * <p><i>Note: if not supported, the DB table name will be prefixed by the DB schema name followed by the
	 * character "_". Nevertheless, if the DB schema name is NULL, the DB table name will never be prefixed.</i></p> */
	protected boolean supportsSchema;

	/** <p>Indicate whether a DBMS statement is able to cancel a query execution.</p>
	 * <p>Since this information is not provided by {@link DatabaseMetaData}, a first attempt is always performed.
	 * In case a {@link SQLFeatureNotSupportedException} is caught, this flag is set to <code>false</code>,
	 * preventing any further attempt of cancelling a query.</p>
	 * @since 2.1 */
	protected boolean supportsCancel = true;

	/* CASE SENSITIVITY SUPPORT */

	/** Indicate whether UNquoted identifiers will be considered as case INsensitive and stored in mixed case by the DBMS.
	 * <p><i>Note: if FALSE, unquoted identifiers will still be considered as case insensitive for searches, but
	 * will be stored in lower or upper case (depending on {@link #lowerCaseUnquoted} and {@link #upperCaseUnquoted}).
	 * If none of these two flags is TRUE, the storage case will be considered as mixed.</i></p> */
	protected boolean supportsMixedCaseUnquotedIdentifier;

	/** Indicate whether the unquoted identifiers are stored in lower case in the DBMS. */
	protected boolean lowerCaseUnquoted;

	/** Indicate whether the unquoted identifiers are stored in upper case in the DBMS. */
	protected boolean upperCaseUnquoted;

	/** Indicate whether quoted identifiers will be considered as case INsensitive and stored in mixed case by the DBMS.
	 * <p><i>Note: if FALSE, quoted identifiers will be considered as case sensitive and will be stored either in
	 * lower, upper or mixed case (depending on {@link #lowerCaseQuoted}, {@link #upperCaseQuoted} and
	 * {@link #mixedCaseQuoted}). If none of these three flags is TRUE, the storage case will be mixed case.</i></p> */
	protected boolean supportsMixedCaseQuotedIdentifier;

	/** Indicate whether the quoted identifiers are stored in lower case in the DBMS. */
	protected boolean lowerCaseQuoted;

	/** Indicate whether the quoted identifiers are stored in mixed case in the DBMS. */
	protected boolean mixedCaseQuoted;

	/** Indicate whether the quoted identifiers are stored in upper case in the DBMS. */
	protected boolean upperCaseQuoted;

	/* FETCH SIZE */

	/** Special fetch size meaning that the JDBC driver is free to set its own guess for this value. */
	public final static int IGNORE_FETCH_SIZE = 0;

	/** Default fetch size.
	 * <p><i>Note 1: this value may however be ignored if the JDBC driver does not support this feature.</i></p>
	 * <p><i>Note 2: by default set to {@link #IGNORE_FETCH_SIZE}.</i></p> */
	public final static int DEFAULT_FETCH_SIZE = IGNORE_FETCH_SIZE;

	/** <p>Indicate whether the last fetch size operation worked.</p>
	 * <p>By default, this attribute is set to <code>false</code>, meaning that the "fetch size" feature is
	 * disabled. To enable it, a simple call to {@link #setFetchSize(int)} is enough, whatever the given value.</p>
	 * <p>If this operation fails just once, the fetch size feature will always be considered as unsupported in
	 * this {@link JDBCConnection} until the next call of {@link #setFetchSize(int)}.</p> */
	protected boolean supportsFetchSize = false;

	/** <p>Fetch size to set in the {@link Statement} in charge of executing a SELECT query.</p>
	 * <p><i>Note 1: this value must always be positive. If negative or null, it will be ignored and the
	 * {@link Statement} will keep its default behavior.</i></p>
	 * <p><i>Note 2: if this feature is enabled (i.e. has a value > 0), the AutoCommit will be disabled.</i></p>
	 */
	protected int fetchSize = DEFAULT_FETCH_SIZE;

	/**
	 * <p>Creates a JDBC connection to the specified database with the specified JDBC driver.
	 * This connection is established using the given user name and password.</p>
	 *
	 * <p><i>Note: the JDBC driver is loaded using <code>Class.forName(driverPath)</code> and the connection is
	 * created with <code>DriverManager.getConnection(dbUrl, dbUser, dbPassword)</code>.</i></p>
	 *
	 * <p><i><b>Warning:</b>
	 * 	This constructor really creates a new SQL connection. Creating a SQL connection is time consuming!
	 * 	That's why it is recommended to use a pool of connections. When doing so, you should use the other
	 * 	constructor of this class ({@link #JDBCConnection(Connection, JDBCTranslator, String, TAPLog)}).
	 * </i></p>
	 *
	 * @param driverPath	Full class name of the JDBC driver.
	 * @param dbUrl			URL to the database. <i>Note: this URL may not be prefixed by "jdbc:". If not,
	 *             			the prefix will be automatically added.</i>
	 * @param dbUser		Name of the database user.
	 * @param dbPassword	Password of the given database user.
	 * @param translator	{@link JDBCTranslator} to use in order to get SQL from an ADQL query and to get
	 *                  	qualified DB table names.
	 * @param connID		ID of this connection. <i>Note: may be NULL; but in this case, logs concerning this
	 *              		connection will be more difficult to localize.</i>
	 * @param logger		Logger to use in case of need. <i>Note: may be NULL; in this case, errors will never
	 *              		be logged, but sometimes a DBException may be raised.</i>
	 *
	 * @throws DBException	If the driver can not be found or if the connection can not merely be created
	 *                    	(usually because the DB parameters are wrong).
	 */
	public JDBCConnection(final String driverPath, final String dbUrl, final String dbUser, final String dbPassword, final JDBCTranslator translator, final String connID, final TAPLog logger) throws DBException{
		this(createConnection(driverPath, dbUrl, dbUser, dbPassword), translator, connID, logger);
	}

	/**
	 * Create a JDBC connection by wrapping the given connection.
	 *
	 * @param conn			Connection to wrap.
	 * @param translator	{@link JDBCTranslator} to use in order to get SQL from an ADQL query and to get
	 *                  	qualified DB table names.
	 * @param connID		ID of this connection. <i>Note: may be NULL; but in this case, logs concerning this
	 *              		connection will be more difficult to localize.</i>
	 * @param logger		Logger to use in case of need. <i>Note: may be NULL; in this case, errors will never
	 *              		be logged, but sometimes a DBException may be raised.</i>
	 */
	public JDBCConnection(final Connection conn, final JDBCTranslator translator, final String connID, final TAPLog logger) throws DBException{
		if (conn == null)
			throw new NullPointerException("Missing SQL connection! => can not create a JDBCConnection object.");
		if (translator == null)
			throw new NullPointerException("Missing ADQL translator! => can not create a JDBCConnection object.");

		this.connection = conn;
		this.translator = translator;
		this.ID = connID;
		this.logger = logger;

		// Set the supported features' flags + the DBMS type:
		try{
			DatabaseMetaData dbMeta = connection.getMetaData();
			dbms = getDBMSName(dbMeta.getURL());
			supportsTransaction = dbMeta.supportsTransactions();
			supportsBatchUpdates = dbMeta.supportsBatchUpdates();
			supportsDataDefinition = dbMeta.supportsDataDefinitionAndDataManipulationTransactions();
			supportsSchema = dbMeta.supportsSchemasInTableDefinitions();
			lowerCaseUnquoted = dbMeta.storesLowerCaseIdentifiers();
			upperCaseUnquoted = dbMeta.storesUpperCaseIdentifiers();
			supportsMixedCaseUnquotedIdentifier = dbMeta.supportsMixedCaseIdentifiers();
			lowerCaseQuoted = dbMeta.storesLowerCaseQuotedIdentifiers();
			mixedCaseQuoted = dbMeta.storesMixedCaseQuotedIdentifiers();
			upperCaseQuoted = dbMeta.storesUpperCaseQuotedIdentifiers();
			supportsMixedCaseQuotedIdentifier = dbMeta.supportsMixedCaseQuotedIdentifiers();
		}catch(SQLException se){
			throw new DBException("Unable to access one or several DB metadata (url, supportsTransaction, supportsBatchUpdates, supportsDataDefinitionAndDataManipulationTransactions, supportsSchemasInTableDefinitions, storesLowerCaseIdentifiers, storesUpperCaseIdentifiers, supportsMixedCaseIdentifiers, storesLowerCaseQuotedIdentifiers, storesMixedCaseQuotedIdentifiers, storesUpperCaseQuotedIdentifiers and supportsMixedCaseQuotedIdentifiers) from the given Connection!");
		}
	}

	/**
	 * Extract the DBMS name from the given database URL.
	 *
	 * @param dbUrl	JDBC URL to access the database. This URL must start with "jdbc:"; otherwise an exception
	 *             	will be thrown.
	 *
	 * @return	The DBMS name as found in the given URL.
	 *
	 * @throws DBException	If NULL has been given, if the URL is not a JDBC one (starting with "jdbc:")
	 *                    	or if the DBMS name is missing.
	 */
	protected static final String getDBMSName(String dbUrl) throws DBException{
		if (dbUrl == null)
			throw new DBException("Missing database URL!");

		if (!dbUrl.startsWith(JDBC_PREFIX + ":"))
			throw new DBException("This DBConnection implementation is only able to deal with JDBC connections! (the DB URL must start with \"" + JDBC_PREFIX + ":\" ; given url: " + dbUrl + ")");

		dbUrl = dbUrl.substring(5);
		int indSep = dbUrl.indexOf(':');
		if (indSep <= 0)
			throw new DBException("Incorrect database URL: " + dbUrl);

		return dbUrl.substring(0, indSep).toLowerCase();
	}

	/**
	 * <p>Create a {@link Connection} instance using the given database parameters.
	 * The path of the JDBC driver will be used to load the adequate driver if none is found by default.</p>
	 *
	 * @param driverPath	Path to the JDBC driver.
	 * @param dbUrl			JDBC URL to connect to the database. <i>Note: this URL may not be prefixed by "jdbc:".
	 *             			If not, the prefix will be automatically added.</i>
	 * @param dbUser		Name of the user to use to connect to the database.
	 * @param dbPassword	Password of the user to use to connect to the database.
	 *
	 * @return	A new DB connection.
	 *
	 * @throws DBException	If the driver can not be found or if the connection can not merely be created
	 *                    	(usually because the DB parameters are wrong).
	 *
	 * @see DriverManager#getDriver(String)
	 * @see Driver#connect(String, Properties)
	 */
	private final static Connection createConnection(final String driverPath, final String dbUrl, final String dbUser, final String dbPassword) throws DBException{
		// Normalize the DB URL:
		String url = dbUrl.startsWith(JDBC_PREFIX) ? dbUrl : (JDBC_PREFIX + ":" + dbUrl);

		// Select the JDBC driver:
		Driver d;
		try{
			d = DriverManager.getDriver(url);
		}catch(SQLException e){
			try{
				// ...load it, if necessary:
				if (driverPath == null)
					throw new DBException("Missing JDBC driver path! Since the required JDBC driver is not yet loaded, this path is needed to load it.");
				Class.forName(driverPath);
				// ...and try again:
				d = DriverManager.getDriver(url);
			}catch(ClassNotFoundException cnfe){
				throw new DBException("Impossible to find the JDBC driver \"" + driverPath + "\" !", cnfe);
			}catch(SQLException se){
				throw new DBException("No suitable JDBC driver found for the database URL \"" + url + "\" and the driver path \"" + driverPath + "\"!", se);
			}
		}

		// Build a connection to the specified database:
		try{
			Properties p = new Properties();
			if (dbUser != null)
				p.setProperty("user", dbUser);
			if (dbPassword != null)
				p.setProperty("password", dbPassword);
			return d.connect(url, p);
		}catch(SQLException se){
			throw new DBException("Impossible to establish a connection to the database \"" + url + "\"!", se);
		}
	}

	@Override
	public final String getID(){
		return ID;
	}

	/**
	 * <p>Get the JDBC connection wrapped by this {@link JDBCConnection} object.</p>
	 *
	 * <p><i><b>Note:</b>
	 * 	This is the best way to get the JDBC connection in order to properly close it.
	 * </i></p>
	 *
	 * @return	The wrapped JDBC connection.
	 */
	public final Connection getInnerConnection(){
		return connection;
	}

	/**
	 * <p>Tell whether this {@link JDBCConnection} is already associated with a {@link Statement}.</p>
	 *
	 * @return	<code>true</code> if a {@link Statement} instance is already associated with this {@link JDBCConnection},
	 *        	<code>false</code> otherwise.
	 *
	 * @throws SQLException	In case the open/close status of the current {@link Statement} instance can not be checked.
	 *
	 * @since 2.1
	 */
protected boolean hasStatement() throws SQLException{
return (stmt != null && !stmt.isClosed());
}
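	The DBMS-name extraction performed by getDBMSName(String) above is plain string parsing, so it can be exercised standalone. The following sketch (hypothetical class and method names, simplified to return null where the real method throws a DBException) shows the same logic:

```java
// Standalone sketch of the URL parsing done by getDBMSName(String).
// Hypothetical helper, simplified: returns null instead of throwing DBException.
public class DbmsNameSketch {
	static String dbmsNameOf(String dbUrl){
		if (dbUrl == null || !dbUrl.startsWith("jdbc:"))
			return null;
		String rest = dbUrl.substring("jdbc:".length());   // e.g. "postgresql://127.0.0.1/myDB"
		int indSep = rest.indexOf(':');
		if (indSep <= 0)                                   // missing or empty DBMS name
			return null;
		return rest.substring(0, indSep).toLowerCase();    // e.g. "postgresql"
	}

	public static void main(String[] args){
		System.out.println(dbmsNameOf("jdbc:postgresql://127.0.0.1/myDB")); // postgresql
		System.out.println(dbmsNameOf("jdbc:sqlite:/tmp/my.db"));           // sqlite
		System.out.println(dbmsNameOf("postgresql://127.0.0.1/myDB"));      // null (no "jdbc:" prefix)
	}
}
```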
/**
* Get the only statement associated with this {@link JDBCConnection}.
	 *
	 * <p>
	 * 	If no {@link Statement} exists yet, one is created, stored in this {@link JDBCConnection}
	 * 	(for further use) and then returned.
	 * </p>
	 *
	 * @return	The {@link Statement} instance associated with this {@link JDBCConnection}. <i>Never NULL.</i>
	 *
	 * @throws SQLException	In case a {@link Statement} can not be created.
	 *
	 * @since 2.1
	 */
	protected Statement getStatement() throws SQLException{
		if (hasStatement())
			return stmt;
		else
			return (stmt = connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY));
	}

	/**
	 * Close the only statement associated with this {@link JDBCConnection}.
	 *
	 * @since 2.1
	 */
	protected void closeStatement(){
		close(stmt);
		stmt = null;
	}

	/**
	 * <p>Cancel (and rollback when possible) the currently running query of this {@link JDBCConnection} instance.</p>
	 *
	 * <p><i><b>Important note:</b>
	 * 	This function is effective only if the JDBC driver and the DBMS both support this operation.
	 * </i></p>
	 *
	 * <p>
	 * 	If a call of this function fails, the flag {@link #supportsCancel} is set to <code>false</code> so that
	 * 	any subsequent call of this function for this instance of {@link JDBCConnection} does not try any other
	 * 	cancellation attempt. HOWEVER, the rollback will still be performed if the parameter <code>rollback</code>
	 * 	is set to <code>true</code>.
	 * </p>
	 *
	 * <p><i><b>Note 1:</b>
	 * 	A failure of a rollback is not considered as a cancellation feature unsupported by the JDBC driver or the
	 * 	DBMS. So if the cancellation succeeds but a rollback fails, a next call of this function will still try
	 * 	cancelling the given statement. In case of a rollback failure, only a WARNING is written in the log file;
	 * 	no exception is thrown.
	 * </i></p>
	 *
	 * <p><i><b>Note 2:</b>
	 * 	In case of cancellation success, the flag {@link #cancelled} is set to <code>true</code>. Thus, the
	 * 	function executing a query can know that if any SQL exception is thrown, it is due to the cancellation and
	 * 	should not be considered as a real error (=> the exception is not logged but is anyway propagated in order
	 * 	to stop any processing).
	 * </i></p>
	 *
	 * <p><i><b>Note 3:</b>
	 * 	This function is synchronized on the {@link #cancelled} flag. Thus, it may block until another block
	 * 	synchronized on this same flag is finished.
	 * </i></p>
	 *
	 * @param rollback	<code>true</code> to cancel the statement AND rollback the current connection transaction,
	 *                	<code>false</code> to just cancel the statement.
	 *
	 * @see DBConnection#cancel(boolean)
	 * @see #cancel(Statement, boolean)
	 *
	 * @since 2.1
	 */
@Override
public final void cancel(final boolean rollback){
synchronized(cancelled){
cancelled = cancel(stmt, rollback);
// Log the success of the cancellation:
if (cancelled && logger != null)
logger.logDB(LogLevel.INFO, this, "CANCEL", "Query execution successfully stopped!", null);
}
}
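	The synchronization contract around the cancellation flag (see {@link #cancel(boolean)}, {@link #isCancelled()} and {@link #resetCancel()}) can be sketched standalone. This simplified stand-in (hypothetical class, not part of the library) uses a dedicated lock object, a common variant that avoids synchronizing on a field whose value is reassigned, while preserving the same read/write contract:

```java
// Simplified stand-in for the cancellation flag handled by cancel(boolean),
// resetCancel() and isCancelled(): every read and write goes through a block
// synchronized on one shared monitor, so threads always see a consistent value.
public class CancelFlagSketch {
	private final Object lock = new Object(); // dedicated monitor (never reassigned)
	private boolean cancelled = false;

	/** Called on cancellation success (mirrors the role of cancel(boolean)). */
	public void markCancelled(){
		synchronized(lock){ cancelled = true; }
	}

	/** Called before each new query execution (mirrors resetCancel()). */
	public void reset(){
		synchronized(lock){ cancelled = false; }
	}

	/** Used when deciding whether an SQLException should be logged (mirrors isCancelled()). */
	public boolean isCancelled(){
		synchronized(lock){ return cancelled; }
	}
}
```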
/**
* Cancel (and rollback when asked and if possible) the given statement.
	 *
	 * <p><i><b>Important note:</b>
	 * 	This function is effective only if the JDBC driver and the DBMS both support this operation.
	 * </i></p>
	 *
	 * <p>
	 * 	If a call of this function fails, the flag {@link #supportsCancel} is set to <code>false</code> so that
	 * 	any subsequent call of this function for this instance of {@link JDBCConnection} does not try any other
	 * 	cancellation attempt. HOWEVER, the rollback will still be performed if the parameter <code>rollback</code>
	 * 	is set to <code>true</code>.
	 * </p>
	 *
	 * <p><i><b>Note:</b>
	 * 	A failure of a rollback is not considered as a cancellation feature unsupported by the JDBC driver or the
	 * 	DBMS. So if the cancellation succeeds but a rollback fails, a next call of this function will still try
	 * 	cancelling the given statement. In case of a rollback failure, only a WARNING is written in the log file;
	 * 	no exception is thrown.
	 * </i></p>
	 *
	 * @param stmt		The statement to cancel. <i>Note: if closed or NULL, no exception will be thrown and only
	 *            		a rollback will be attempted if asked in parameter.</i>
	 * @param rollback	<code>true</code> to cancel the statement AND rollback the current connection transaction,
	 *                	<code>false</code> to just cancel the statement.
	 *
	 * @return	<code>true</code> if the cancellation succeeded (or if no query was running),
	 *        	<code>false</code> otherwise (and especially if the "cancel" operation is not supported).
	 *
	 * @since 2.1
	 */
protected boolean cancel(final Statement stmt, final boolean rollback){
try{
// If the statement is not already closed, cancel its current query execution:
if (supportsCancel && stmt != null && !stmt.isClosed()){
stmt.cancel();
return true;
}else
return false;
}catch(SQLFeatureNotSupportedException sfnse){
// prevent further cancel attempts:
supportsCancel = false;
// log a warning:
if (logger != null)
logger.logDB(LogLevel.WARNING, this, "CANCEL", "This JDBC driver does not support Statement.cancel(). No further cancel attempt will be performed with this JDBCConnection instance.", sfnse);
return false;
}catch(SQLException se){
if (logger != null)
logger.logDB(LogLevel.ERROR, this, "CANCEL", "Abortion of the current query apparently fails! The query may still run on the database server.", se);
return false;
}
// Whatever happens, rollback all executed operations (only if rollback=true and if in a transaction ; that's to say if AutoCommit = false):
finally{
if (rollback && supportsTransaction)
rollback();
}
}
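	The "disable on first unsupported failure" pattern used by cancel(Statement, boolean) above (a {@link SQLFeatureNotSupportedException} permanently clears {@link #supportsCancel}, while any other {@link SQLException} still allows later attempts) can be modeled without a real database. The sketch below uses hypothetical, simplified types (a one-method interface instead of java.sql.Statement, UnsupportedOperationException in the role of SQLFeatureNotSupportedException):

```java
// Simplified model (hypothetical types, not java.sql) of the cancel-support
// handling in cancel(Statement, boolean): an "unsupported" failure disables all
// further attempts, a transient failure does not.
public class CancelSupportSketch {
	/** Minimal stand-in for the only part of Statement used here. */
	interface Cancellable {
		void cancel() throws Exception;
	}

	private boolean supportsCancel = true;

	/** Try to cancel; on an "unsupported" failure, never try again. */
	public boolean tryCancel(Cancellable stmt){
		if (!supportsCancel || stmt == null)
			return false;
		try{
			stmt.cancel();
			return true;
		}catch(UnsupportedOperationException uoe){
			// same role as SQLFeatureNotSupportedException in the real code:
			supportsCancel = false;
			return false;
		}catch(Exception e){
			// transient failure: a later cancel attempt is still allowed
			return false;
		}
	}
}
```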
/**
* Tell whether the last query execution has been canceled.
	 *
	 * <p><i><b>Note:</b>
	 * 	This function is synchronized on the {@link #cancelled} flag. Thus, it may block until another block
	 * 	synchronized on this same flag is finished.
	 * </i></p>
	 *
	 * @return	<code>true</code> if the last query execution has been cancelled,
	 *        	<code>false</code> otherwise.
*
* @since 2.1
*/
protected final boolean isCancelled(){
synchronized(cancelled){
return cancelled;
}
}
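	The fetch-size policy applied by executeQuery(ADQLQuery): a fetch size is passed to the {@link Statement} only when the feature is currently believed supported AND the configured value is strictly positive, the first driver failure disables the feature, and any call to setFetchSize(int) re-enables it. A minimal sketch of that decision logic (hypothetical helper class, not the library's API):

```java
// Sketch (hypothetical helper, not part of the library) of the fetch-size policy
// used around executeQuery(ADQLQuery): apply only if supported and > 0; disable
// on the first driver failure; re-enable on any setFetchSize(int) call.
public class FetchSizePolicySketch {
	private boolean supportsFetchSize = false; // disabled by default, like JDBCConnection
	private int fetchSize = 0;                 // 0 = IGNORE_FETCH_SIZE

	/** Mirrors setFetchSize(int): any call re-enables the feature, whatever the value. */
	public void setFetchSize(int size){
		supportsFetchSize = true;
		fetchSize = size;
	}

	/** Mirrors the failure handling: disable the feature after the first driver error. */
	public void reportDriverFailure(){
		supportsFetchSize = false;
	}

	/** Tell whether Statement.setFetchSize(int) should be attempted for the next SELECT. */
	public boolean shouldApplyFetchSize(){
		return supportsFetchSize && fetchSize > 0;
	}
}
```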
/**
	 * <p>Reset the {@link #cancelled} flag to <code>false</code>.</p>
	 *
	 * <p><i><b>Note:</b>
	 * 	This function is synchronized on the {@link #cancelled} flag. Thus, it may block until another block
	 * 	synchronized on this same flag is finished.
	 * </i></p>
	 *
	 * @since 2.1
	 */
	protected final void resetCancel(){
		synchronized(cancelled){
			cancelled = false;
		}
	}

	@Override
	public void endQuery(){
		// Cancel the last query processing, if still running:
		cancel(stmt, false);	// note: called instead of cancel(false) in order to avoid a log message about the cancellation operation result.

		// Close the statement, if still opened:
		closeStatement();

		// Rollback the transaction, if one has been opened:
		rollback(false);

		// End the transaction (i.e. go back to autocommit=true), if one has been opened:
		endTransaction(false);
	}

	/* ********************* */
	/* INTERROGATION METHODS */
	/* ********************* */

	@Override
	public synchronized TableIterator executeQuery(final ADQLQuery adqlQuery) throws DBException{
		// Starting a new query execution => disable the cancel flag:
		resetCancel();

		String sql = null;
		ResultSet result = null;
		try{
			// 1. Translate the ADQL query into SQL:
			if (logger != null)
				logger.logDB(LogLevel.INFO, this, "TRANSLATE", "Translating ADQL: " + adqlQuery.toADQL().replaceAll("(\t|\r?\n)+", " "), null);
			sql = translator.translate(adqlQuery);

			// 2. Create the statement and, if needed, configure it for the given fetch size:
			if (supportsTransaction && supportsFetchSize && fetchSize > 0){
				try{
					connection.setAutoCommit(false);
				}catch(SQLException se){
					if (!isCancelled()){
						supportsFetchSize = false;
						if (logger != null)
							logger.logDB(LogLevel.WARNING, this, "RESULT", "Fetch size unsupported!", null);
					}
				}
			}
			getStatement();
			if (supportsFetchSize){
				try{
					stmt.setFetchSize(fetchSize);
				}catch(SQLException se){
					if (!isCancelled()){
						supportsFetchSize = false;
						if (logger != null)
							logger.logDB(LogLevel.WARNING, this, "RESULT", "Fetch size unsupported!", null);
					}
				}
			}

			// 3. Execute the SQL query:
			if (logger != null)
				logger.logDB(LogLevel.INFO, this, "EXECUTE", "SQL query: " + sql.replaceAll("(\t|\r?\n)+", " "), null);
			result = stmt.executeQuery(sql);

			// 4. Return the result through a TableIterator object:
			if (logger != null)
				logger.logDB(LogLevel.INFO, this, "RESULT", "Returning result (" + (supportsFetchSize ? "fetch size = " + fetchSize : "all in once") + ").", null);
			return createTableIterator(result, adqlQuery.getResultingColumns());
		}catch(Exception ex){
			// Close the ResultSet, if one was open:
			close(result);
			// End the query properly:
			endQuery();
			// Propagate the exception with an appropriate error message:
			if (ex instanceof SQLException)
				throw new DBException("Unexpected error while executing a SQL query: " + ex.getMessage(), ex);
			else if (ex instanceof TranslationException)
				throw new DBException("Unexpected error while translating ADQL into SQL: " + ex.getMessage(), ex);
			else if (ex instanceof DataReadException)
				throw new DBException("Impossible to read the query result, because: " + ex.getMessage(), ex);
			else
				throw new DBException("Unexpected error while executing an ADQL query: " + ex.getMessage(), ex);
		}
	}

	/**
	 * <p>Create a {@link TableIterator} instance which lets reading the given result table.</p>
	 *
	 * <p><i><b>Note:</b>
	 * 	The currently opened statement is not closed by this function. Actually, it is still associated with this
	 * 	{@link JDBCConnection}. However, this latter is provided to the {@link TableIterator} returned by this
	 * 	function. Thus, when {@link TableIterator#close()} is called, the function {@link #endQuery()} will be
	 * 	called. It will then close the {@link ResultSet}, the {@link Statement} and end any opened transaction
	 * 	(with a rollback). See {@link #endQuery()} for more details.
	 * </i></p>
	 *
	 * @param rs				Result of an SQL query.
	 * @param resultingColumns	Metadata corresponding to each column of the result.
	 *
	 * @return	A {@link TableIterator} instance.
	 *
	 * @throws DataReadException	If the metadata (column count and types) can not be fetched
	 *                          	or if any other error occurs.
	 *
	 * @see ResultSetTableIterator#ResultSetTableIterator(DBConnection, ResultSet, DBColumn[], JDBCTranslator, String)
	 */
	protected TableIterator createTableIterator(final ResultSet rs, final DBColumn[] resultingColumns) throws DataReadException{
		try{
			return new ResultSetTableIterator(this, rs, resultingColumns, translator, dbms);
		}catch(Throwable t){
			throw (t instanceof DataReadException) ? (DataReadException)t : new DataReadException(t);
		}
	}

	/* *********************** */
	/* TAP_SCHEMA MANIPULATION */
	/* *********************** */

	/**
	 * Tell when, compared to the other standard TAP tables, a given standard TAP table should be created.
	 *
	 * @param table	Standard TAP table.
	 *
	 * @return	An index between 0 and 4 (included) - 0 meaning the first table to create whereas 4 is the last
	 *        	one. -1 is returned if NULL is given in parameter or if the standard table is not taken into
	 *        	account here.
	 */
	protected int getCreationOrder(final STDTable table){
		if (table == null)
			return -1;

		switch(table){
			case SCHEMAS:
				return 0;
			case TABLES:
				return 1;
			case COLUMNS:
				return 2;
			case KEYS:
				return 3;
			case KEY_COLUMNS:
				return 4;
			default:
				return -1;
		}
	}

	/* ************************************ */
	/* GETTING TAP_SCHEMA FROM THE DATABASE */
	/* ************************************ */

	/**
	 * <p>In this implementation, this function first creates a virgin {@link TAPMetadata} object that will then
	 * be filled progressively by calling the following functions:</p>
	 *
	 * <p><i><b>Note:</b>
	 * 	If schemas are not supported by this DBMS connection, the DB name of all schemas will be set to NULL
	 * 	and the DB name of all tables will be prefixed by the ADQL name of their respective schema.
	 * </i></p>
	 *
	 * @see tap.db.DBConnection#getTAPSchema()
	 */
	@Override
	public synchronized TAPMetadata getTAPSchema() throws DBException{
		// Starting a new query execution => disable the cancel flag:
		resetCancel();

		// Build a virgin TAP metadata:
		TAPMetadata metadata = new TAPMetadata();

		// Get the definition of the standard TAP_SCHEMA tables:
		TAPSchema tap_schema = TAPMetadata.getStdSchema(supportsSchema);

		// LOAD ALL METADATA FROM THE STANDARD TAP TABLES:
		try{
			// create a common statement for all loading functions:
			getStatement();

			// load all schemas from TAP_SCHEMA.schemas:
			if (logger != null)
				logger.logDB(LogLevel.INFO, this, "LOAD_TAP_SCHEMA", "Loading TAP_SCHEMA.schemas.", null);
			loadSchemas(tap_schema.getTable(STDTable.SCHEMAS.label), metadata, stmt);

			// load all tables from TAP_SCHEMA.tables:
			if (logger != null)
				logger.logDB(LogLevel.INFO, this, "LOAD_TAP_SCHEMA", "Loading TAP_SCHEMA.tables.", null);
			List

	/**
	 * <p>Load into the given metadata all schemas listed in TAP_SCHEMA.schemas.</p>
* *Note 1: * If schemas are not supported by this DBMS connection, the DB name of the loaded schemas is set to NULL. *
* *Note 2: * Schema entries are retrieved ordered by ascending schema_name. *
* * @param tableDef Definition of the table TAP_SCHEMA.schemas. * @param metadata Metadata to fill with all found schemas. * @param stmt Statement to use in order to interact with the database. * * @throws DBException If any error occurs while interacting with the database. */ protected void loadSchemas(final TAPTable tableDef, final TAPMetadata metadata, final Statement stmt) throws DBException{ ResultSet rs = null; try{ // Determine whether the dbName column exists: /* note: if the schema notion is not supported by this DBMS, the column "dbname" is ignored. */ boolean hasDBName = supportsSchema && isColumnExisting(tableDef.getDBSchemaName(), tableDef.getDBName(), DB_NAME_COLUMN, connection.getMetaData()); // Build the SQL query: StringBuffer sqlBuf = new StringBuffer("SELECT "); sqlBuf.append(translator.getColumnName(tableDef.getColumn("schema_name"))); sqlBuf.append(", ").append(translator.getColumnName(tableDef.getColumn("description"))); sqlBuf.append(", ").append(translator.getColumnName(tableDef.getColumn("utype"))); if (hasDBName){ sqlBuf.append(", "); translator.appendIdentifier(sqlBuf, DB_NAME_COLUMN, IdentifierField.COLUMN); } sqlBuf.append(" FROM ").append(translator.getTableName(tableDef, supportsSchema)); sqlBuf.append(" ORDER BY 1"); // Execute the query: rs = stmt.executeQuery(sqlBuf.toString()); // Create all schemas: while(rs.next()){ String schemaName = rs.getString(1), description = rs.getString(2), utype = rs.getString(3), dbName = (hasDBName ? 
rs.getString(4) : null); // create the new schema: TAPSchema newSchema = new TAPSchema(schemaName, nullifyIfNeeded(description), nullifyIfNeeded(utype)); if (dbName != null && dbName.trim().length() > 0) newSchema.setDBName(dbName); // add the new schema inside the given metadata: metadata.addSchema(newSchema); } }catch(SQLException se){ if (!isCancelled() && logger != null) logger.logDB(LogLevel.ERROR, this, "LOAD_TAP_SCHEMA", "Impossible to load schemas from TAP_SCHEMA.schemas!", se); throw new DBException("Impossible to load schemas from TAP_SCHEMA.schemas!", se); }finally{ close(rs); } } /** *Load into the corresponding metadata all tables listed in TAP_SCHEMA.tables.
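/* Illustration (not library code): the SELECT built by loadSchemas(...) above, assuming
 * the default TAP_SCHEMA column names and a translator that leaves identifiers unquoted
 * (both are assumptions made only for this sketch): */

```java
// Hypothetical, self-contained sketch of the query assembled by loadSchemas(...):
// column and table names are hard-coded here, whereas the real method obtains
// them from the TAPTable definition and the JDBCTranslator.
public class LoadSchemasQuerySketch {
	static String buildQuery(boolean hasDBName) {
		StringBuffer sql = new StringBuffer("SELECT ");
		sql.append("schema_name");
		sql.append(", ").append("description");
		sql.append(", ").append("utype");
		if (hasDBName)
			sql.append(", ").append("dbname"); // extra column used to alias schema names
		sql.append(" FROM ").append("TAP_SCHEMA.schemas");
		sql.append(" ORDER BY 1"); // schema entries retrieved ordered by ascending schema_name
		return sql.toString();
	}

	public static void main(String[] args) {
		System.out.println(buildQuery(true));
	}
}
```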
*Note 1: * Schemas are searched in the given metadata by their ADQL name, case sensitively. * If a schema cannot be found, a {@link DBException} is thrown. *
* *Note 2: * If schemas are not supported by this DBMS connection, the DB name of the loaded * {@link TAPTable}s is prefixed by the ADQL name of their respective schema. *
* *Note 3: * If the column table_index exists, table entries are retrieved ordered by ascending schema_name, then table_index, and finally table_name. * If this column does not exist, table entries are retrieved ordered by ascending schema_name and then table_name. *
* * @param tableDef Definition of the table TAP_SCHEMA.tables. * @param metadata Metadata (containing already all schemas listed in TAP_SCHEMA.schemas). * @param stmt Statement to use in order to interact with the database. * * @return The complete list of all loaded tables. note: this list is required by {@link #loadColumns(TAPTable, List, Statement)}. * * @throws DBException If a schema can not be found, or if any other error occurs while interacting with the database. */ protected ListLoad into the corresponding tables all columns listed in TAP_SCHEMA.columns.
*Note 1: * Tables are searched in the given list by their ADQL name, case sensitively. * If a table cannot be found, a {@link DBException} is thrown. *
* *Note 2: * If the column column_index exists, column entries are retrieved ordered by ascending table_name, then column_index, and finally column_name. * If this column does not exist, column entries are retrieved ordered by ascending table_name and then column_name. *
* * @param tableDef Definition of the table TAP_SCHEMA.columns. * @param lstTables List of all published tables (= all tables listed in TAP_SCHEMA.tables). * @param stmt Statement to use in order to interact with the database. * * @throws DBException If a table can not be found, or if any other error occurs while interacting with the database. */ protected void loadColumns(final TAPTable tableDef, final ListLoad into the corresponding tables all keys listed in TAP_SCHEMA.keys and detailed in TAP_SCHEMA.key_columns.
*Note 1: * Tables and columns are searched in the given list by their ADQL name, case sensitively. * If they cannot be found, a {@link DBException} is thrown. *
* *Note 2: * Key entries are retrieved ordered by ascending key_id, then from_table and finally target_table. * Key_Column entries are retrieved ordered by ascending from_column and then target_column. *
* * @param keysDef Definition of the table TAP_SCHEMA.keys. * @param keyColumnsDef Definition of the table TAP_SCHEMA.key_columns. * @param lstTables List of all published tables (= all tables listed in TAP_SCHEMA.tables). * @param stmt Statement to use in order to interact with the database. * * @throws DBException If a table or a column can not be found, or if any other error occurs while interacting with the database. */ protected void loadKeys(final TAPTable keysDef, final TAPTable keyColumnsDef, final ListThis function is just calling the following functions:
*Important note: * If the connection does not support transactions, no transaction will be used at all. * Consequently, any failure (exception/error) will not undo the partial modifications already made by this function. *
* * @see tap.db.DBConnection#setTAPSchema(tap.metadata.TAPMetadata) */ @Override public synchronized void setTAPSchema(final TAPMetadata metadata) throws DBException{ // Starting of new query execution => disable the cancel flag: resetCancel(); try{ // A. GET THE DEFINITION OF ALL STANDARD TAP TABLES: TAPTable[] stdTables = mergeTAPSchemaDefs(metadata); startTransaction(); // B. RE-CREATE THE STANDARD TAP_SCHEMA TABLES: getStatement(); // 1. Ensure TAP_SCHEMA exists and drop all its standard TAP tables: if (logger != null) logger.logDB(LogLevel.INFO, this, "CLEAN_TAP_SCHEMA", "Cleaning TAP_SCHEMA.", null); resetTAPSchema(stmt, stdTables); // 2. Create all standard TAP tables: if (logger != null) logger.logDB(LogLevel.INFO, this, "CREATE_TAP_SCHEMA", "Creating TAP_SCHEMA tables.", null); for(TAPTable table : stdTables) createTAPSchemaTable(table, stmt); // C. FILL THE NEW TABLE USING THE GIVEN DATA ITERATOR: if (logger != null) logger.logDB(LogLevel.INFO, this, "CREATE_TAP_SCHEMA", "Filling TAP_SCHEMA tables.", null); fillTAPSchema(metadata); // D. CREATE THE INDEXES OF ALL STANDARD TAP TABLES: if (logger != null) logger.logDB(LogLevel.INFO, this, "CREATE_TAP_SCHEMA", "Creating TAP_SCHEMA tables' indexes.", null); for(TAPTable table : stdTables) createTAPTableIndexes(table, stmt); commit(); }catch(SQLException se){ if (!isCancelled() && logger != null) logger.logDB(LogLevel.ERROR, this, "CREATE_TAP_SCHEMA", "Impossible to SET TAP_SCHEMA in DB!", se); rollback(); throw new DBException("Impossible to SET TAP_SCHEMA in DB!", se); }finally{ closeStatement(); endTransaction(); } } /** *Merge the definition of TAP_SCHEMA tables given in parameter with the definition provided in the TAP standard.
* ** The goal is to return the list of all standard TAP_SCHEMA tables, while taking into account the customized * definition given in parameter, if any. If a part of TAP_SCHEMA is not provided, it is completed here with the * definition provided in the TAP standard. And so, if TAP_SCHEMA is not provided at all, the returned tables are those * of the IVOA standard. *
* *Important note: * If the TAP_SCHEMA definition is missing or incomplete in the given metadata, it will be added or completed automatically * by this function with the definition provided in the IVOA TAP standard. *
*Note: * Only the standard tables of TAP_SCHEMA are considered. The others are skipped (that is to say, never returned by this function; * however, they will stay in the given metadata). *
* *Note: * If schemas are not supported by this DBMS connection, the DB name of schemas is set to NULL and * the DB name of tables is prefixed by the schema name. *
* * @param metadata Metadata (with or without TAP_SCHEMA schema or some of its table). Must not be NULL * * @return The list of all standard TAP_SCHEMA tables, ordered by creation order (see {@link #getCreationOrder(tap.metadata.TAPMetadata.STDTable)}). * * @see TAPMetadata#resolveStdTable(String) * @see TAPMetadata#getStdSchema(boolean) * @see TAPMetadata#getStdTable(STDTable) */ protected TAPTable[] mergeTAPSchemaDefs(final TAPMetadata metadata){ // 1. Get the TAP_SCHEMA schema from the given metadata: TAPSchema tapSchema = null; IteratorEnsure the TAP_SCHEMA schema exists in the database AND it must especially drop all of its standard tables * (schemas, tables, columns, keys and key_columns), if they exist.
*Important note: * If TAP_SCHEMA already exists and contains tables other than the standard ones, those tables will not be dropped and will stay in place. *
* * @param stmt The statement to use in order to interact with the database. * @param stdTables List of all standard tables that must be (re-)created. * They will be used just to know the name of the standard tables that should be dropped here. * * @throws SQLException If any error occurs while querying or updating the database. */ protected void resetTAPSchema(final Statement stmt, final TAPTable[] stdTables) throws SQLException{ DatabaseMetaData dbMeta = connection.getMetaData(); // 1. Get the qualified DB schema name: String dbSchemaName = (supportsSchema ? stdTables[0].getDBSchemaName() : null); /* 2. Test whether the schema TAP_SCHEMA exists * and if it does not, create it: */ if (dbSchemaName != null){ // test whether the schema TAP_SCHEMA exists: boolean hasTAPSchema = isSchemaExisting(dbSchemaName, dbMeta); // create TAP_SCHEMA if it does not exist: if (!hasTAPSchema) stmt.executeUpdate("CREATE SCHEMA " + translator.getQualifiedSchemaName(stdTables[0]) + ";"); } // 2-bis. Drop all its standard tables: dropTAPSchemaTables(stdTables, stmt, dbMeta); } /** *Remove/Drop all standard TAP_SCHEMA tables given in parameter.
*Note: * To test the existence of the tables to drop, {@link DatabaseMetaData#getTables(String, String, String, String[])} is called. * Then the schema and table names are compared with the case sensitivity defined by the translator. * Only tables matching these comparisons will be dropped. *
* * @param stdTables Tables to drop. (they should be provided ordered by their creation order (see {@link #getCreationOrder(STDTable)})). * @param stmt Statement to use in order to interact with the database. * @param dbMeta Database metadata. Used to list all existing tables. * * @throws SQLException If any error occurs while querying or updating the database. * * @see JDBCTranslator#isCaseSensitive(IdentifierField) */ private void dropTAPSchemaTables(final TAPTable[] stdTables, final Statement stmt, final DatabaseMetaData dbMeta) throws SQLException{ String[] stdTablesToDrop = new String[]{null,null,null,null,null}; ResultSet rs = null; try{ // Retrieve only the schema name and determine whether the search should be case sensitive: String tapSchemaName = stdTables[0].getDBSchemaName(); boolean schemaCaseSensitive = translator.isCaseSensitive(IdentifierField.SCHEMA); boolean tableCaseSensitive = translator.isCaseSensitive(IdentifierField.TABLE); // Identify which standard TAP tables must be dropped: rs = dbMeta.getTables(null, null, null, null); while(rs.next()){ String rsSchema = nullifyIfNeeded(rs.getString(2)), rsTable = rs.getString(3); if (!supportsSchema || (tapSchemaName == null && rsSchema == null) || equals(rsSchema, tapSchemaName, schemaCaseSensitive)){ int indStdTable; indStdTable = getCreationOrder(isStdTable(rsTable, stdTables, tableCaseSensitive)); if (indStdTable > -1){ stdTablesToDrop[indStdTable] = (rsSchema != null ? "\"" + rsSchema + "\"." : "") + "\"" + rsTable + "\""; } } } }finally{ close(rs); } // Drop the existing tables (in the reverse order of creation): for(int i = stdTablesToDrop.length - 1; i >= 0; i--){ if (stdTablesToDrop[i] != null) stmt.executeUpdate("DROP TABLE " + stdTablesToDrop[i] + ";"); } } /** *Create the specified standard TAP_SCHEMA tables into the database.
*Important note: * Only the standard TAP_SCHEMA tables (schemas, tables, columns, keys and key_columns) can be created here. * If the given table is not part of the schema TAP_SCHEMA (comparison done on the ADQL name) * or is not a standard TAP_SCHEMA table (comparison done on the ADQL name), * this function will create nothing and will throw an exception. *
* *Note: * An extra column is added in TAP_SCHEMA.schemas, TAP_SCHEMA.tables and TAP_SCHEMA.columns: {@value #DB_NAME_COLUMN}. * This column is particularly used when getting the TAP metadata from the database to alias some schema, table and/or column names in ADQL. *
* * @param table Table to create. * @param stmt Statement to use in order to interact with the database. * * @throws DBException If the given table is not a standard TAP_SCHEMA table. * @throws SQLException If any error occurs while querying or updating the database. */ protected void createTAPSchemaTable(final TAPTable table, final Statement stmt) throws DBException, SQLException{ // 1. ENSURE THE GIVEN TABLE IS REALLY A TAP_SCHEMA TABLE (according to the ADQL names): if (!table.getADQLSchemaName().equalsIgnoreCase(STDSchema.TAPSCHEMA.label) || TAPMetadata.resolveStdTable(table.getADQLName()) == null) throw new DBException("Forbidden table creation: " + table + " is not a standard table of TAP_SCHEMA!"); // 2. BUILD THE SQL QUERY TO CREATE THE TABLE: StringBuffer sql = new StringBuffer("CREATE TABLE "); // a. Write the fully qualified table name: sql.append(translator.getTableName(table, supportsSchema)); // b. List all the columns: sql.append('('); IteratorGet the primary key corresponding to the specified table.
* *If the specified table is not a standard TAP_SCHEMA table, NULL will be returned.
* * @param tableName ADQL table name. * * @return The primary key definition (prefixed by a space) corresponding to the specified table (ex: " PRIMARY KEY(schema_name)"), * or NULL if the specified table is not a standard TAP_SCHEMA table. */ private String getPrimaryKeyDef(final String tableName){ STDTable stdTable = TAPMetadata.resolveStdTable(tableName); if (stdTable == null) return null; boolean caseSensitive = translator.isCaseSensitive(IdentifierField.COLUMN); switch(stdTable){ case SCHEMAS: return " PRIMARY KEY(" + (caseSensitive ? "\"schema_name\"" : "schema_name") + ")"; case TABLES: return " PRIMARY KEY(" + (caseSensitive ? "\"table_name\"" : "table_name") + ")"; case COLUMNS: return " PRIMARY KEY(" + (caseSensitive ? "\"table_name\"" : "table_name") + ", " + (caseSensitive ? "\"column_name\"" : "column_name") + ")"; case KEYS: case KEY_COLUMNS: return " PRIMARY KEY(" + (caseSensitive ? "\"key_id\"" : "key_id") + ")"; default: return null; } } /** *Create the DB indexes corresponding to the given TAP_SCHEMA table.
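/* Illustration (not library code): the table-name-to-primary-key mapping implemented by
 * getPrimaryKeyDef(...) above, reduced to its case-insensitive variant. Only the
 * standard table names are handled; anything else yields NULL: */

```java
// Minimal re-implementation of the primary-key mapping, keyed directly by the
// ADQL table name instead of the STDTable enumeration used in the real method.
public class PrimaryKeySketch {
	static String getPrimaryKeyDef(String tableName) {
		switch (tableName) {
			case "schemas":     return " PRIMARY KEY(schema_name)";
			case "tables":      return " PRIMARY KEY(table_name)";
			case "columns":     return " PRIMARY KEY(table_name, column_name)";
			case "keys":
			case "key_columns": return " PRIMARY KEY(key_id)";
			default:            return null; // not a standard TAP_SCHEMA table
		}
	}

	public static void main(String[] args) {
		System.out.println(getPrimaryKeyDef("columns"));
	}
}
```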
*Important note: * Only the indexes of the standard TAP_SCHEMA tables (schemas, tables, columns, keys and key_columns) can be created here. * If the given table is not part of the schema TAP_SCHEMA (comparison done on the ADQL name) * or is not a standard TAP_SCHEMA table (comparison done on the ADQL name), * this function will create nothing and will throw an exception. *
* * @param table Table whose indexes must be created here. * @param stmt Statement to use in order to interact with the database. * * @throws DBException If the given table is not a standard TAP_SCHEMA table. * @throws SQLException If any error occurs while querying or updating the database. */ protected void createTAPTableIndexes(final TAPTable table, final Statement stmt) throws DBException, SQLException{ // 1. Ensure the given table is really a TAP_SCHEMA table (according to the ADQL names): if (!table.getADQLSchemaName().equalsIgnoreCase(STDSchema.TAPSCHEMA.label) || TAPMetadata.resolveStdTable(table.getADQLName()) == null) throw new DBException("Forbidden index creation: " + table + " is not a standard table of TAP_SCHEMA!"); // Build the fully qualified DB name of the table: final String dbTableName = translator.getTableName(table, supportsSchema); // Build the name prefix of all the indexes to create: final String indexNamePrefix = "INDEX_" + ((table.getADQLSchemaName() != null) ? (table.getADQLSchemaName() + "_") : "") + table.getADQLName() + "_"; IteratorFill all the standard tables of TAP_SCHEMA (schemas, tables, columns, keys and key_columns).
*This function just calls the following functions:
*Fill the standard table TAP_SCHEMA.schemas with the list of all published schemas.
*Note: * Batch updates may be done here if supported by the DBMS connection. * In case of any failure while using this feature, it will be flagged as unsupported and one-by-one updates will be processed instead. *
* * @param metaTable Description of TAP_SCHEMA.schemas. * @param itSchemas Iterator over the list of schemas. * * @return Iterator over the full list of all tables (whatever is their schema). * * @throws DBException If rows can not be inserted because the SQL update query has failed. * @throws SQLException If any other SQL exception occurs. */ private IteratorFill the standard table TAP_SCHEMA.tables with the list of all published tables.
*Note: * Batch updates may be done here if supported by the DBMS connection. * In case of any failure while using this feature, it will be flagged as unsupported and one-by-one updates will be processed instead. *
* * @param metaTable Description of TAP_SCHEMA.tables. * @param itTables Iterator over the list of tables. * * @return Iterator over the full list of all columns (whatever is their table). * * @throws DBException If rows can not be inserted because the SQL update query has failed. * @throws SQLException If any other SQL exception occurs. */ private IteratorFill the standard table TAP_SCHEMA.columns with the list of all published columns.
*Note: * Batch updates may be done here if supported by the DBMS connection. * In case of any failure while using this feature, it will be flagged as unsupported and one-by-one updates will be processed instead. *
* * @param metaTable Description of TAP_SCHEMA.columns. * @param itColumns Iterator over the list of columns. * * @return Iterator over the full list of all foreign keys. * * @throws DBException If rows can not be inserted because the SQL update query has failed. * @throws SQLException If any other SQL exception occurs. */ private IteratorFill the standard tables TAP_SCHEMA.keys and TAP_SCHEMA.key_columns with the list of all published foreign keys.
*Note: * Batch updates may be done here if supported by the DBMS connection. * In case of any failure while using this feature, it will be flagged as unsupported and one-by-one updates will be processed instead. *
* * @param metaKeys Description of TAP_SCHEMA.keys. * @param metaKeyColumns Description of TAP_SCHEMA.key_columns. * @param itKeys Iterator over the list of foreign keys. * * @throws DBException If rows can not be inserted because the SQL update query has failed. * @throws SQLException If any other SQL exception occurs. */ private void fillKeys(final TAPTable metaKeys, final TAPTable metaKeyColumns, final IteratorImportant note: * Only tables uploaded by users can be created in the database. To ensure that, the schema name of this table MUST be {@link STDSchema#UPLOADSCHEMA} ("TAP_UPLOAD") in ADQL. * If it has another ADQL name, an exception will be thrown. Of course, the DB name of this schema MAY be different. *
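/* Illustration (not library code): the "batch updates with one-by-one fallback" strategy
 * mentioned in the notes above, sketched without a real database. The Db class is a
 * hypothetical stand-in for the JDBC layer whose batch feature may fail at any time: */

```java
import java.util.ArrayList;
import java.util.List;

// On the first batch failure, the feature is flagged as unsupported and the
// current row (and every following row) is inserted with a direct update.
public class BatchFallbackSketch {
	static class Db {
		boolean supportsBatchUpdates = true; // cleared on the first batch failure
		boolean batchBroken;                 // simulates a DBMS without batch support
		final List<String> rows = new ArrayList<>();

		void addBatch(String row) {
			if (batchBroken)
				throw new UnsupportedOperationException("no batch support");
			rows.add(row);
		}

		void executeUpdate(String row) { rows.add(row); } // one-by-one fallback
	}

	static void insert(Db db, String row) {
		if (db.supportsBatchUpdates) {
			try {
				db.addBatch(row); // queue the row in the current batch
				return;
			} catch (UnsupportedOperationException e) {
				db.supportsBatchUpdates = false; // flag the feature as unsupported...
			}
		}
		db.executeUpdate(row); // ...and process the row one by one
	}

	public static void main(String[] args) {
		Db db = new Db();
		db.batchBroken = true;
		insert(db, "row1");
		insert(db, "row2");
		System.out.println(db.rows.size() + " " + db.supportsBatchUpdates);
	}
}
```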
* *Important note: * This function may modify the given {@link TAPTable} object if schemas are not supported by this connection. * In this case, this function will prefix the table's DB name by the schema's DB name directly inside the given * {@link TAPTable} object. Then the DB name of the schema will be set to NULL. *
* *Note: * If the upload schema does not already exist in the database, it will be created. *
* * @see tap.db.DBConnection#addUploadedTable(tap.metadata.TAPTable, tap.data.TableIterator) * @see #checkUploadedTableDef(TAPTable) */ @Override public synchronized boolean addUploadedTable(TAPTable tableDef, TableIterator data) throws DBException, DataReadException{ // If no table to upload, consider it has been dropped and return TRUE: if (tableDef == null) return true; // Starting of new query execution => disable the cancel flag: resetCancel(); // Check the table is well defined (and particularly the schema is well set with an ADQL name = TAP_UPLOAD): checkUploadedTableDef(tableDef); try{ // Start a transaction: startTransaction(); // ...create a statement: getStatement(); DatabaseMetaData dbMeta = connection.getMetaData(); // 1. Create the upload schema, if it does not already exist: if (!isSchemaExisting(tableDef.getDBSchemaName(), dbMeta)){ stmt.executeUpdate("CREATE SCHEMA " + translator.getQualifiedSchemaName(tableDef) + ";"); if (logger != null) logger.logDB(LogLevel.INFO, this, "SCHEMA_CREATED", "Schema \"" + tableDef.getADQLSchemaName() + "\" (in DB: " + translator.getQualifiedSchemaName(tableDef) + ") created.", null); } // 1bis. Ensure the table does not already exist and if it is the case, throw an understandable exception: else if (isTableExisting(tableDef.getDBSchemaName(), tableDef.getDBName(), dbMeta)){ DBException de = new DBException("Impossible to create the user uploaded table in the database: " + translator.getTableName(tableDef, supportsSchema) + "! This table already exists."); if (logger != null) logger.logDB(LogLevel.ERROR, this, "ADD_UPLOAD_TABLE", de.getMessage(), de); throw de; } // 2. Create the table: // ...build the SQL query: StringBuffer sqlBuf = new StringBuffer("CREATE TABLE "); sqlBuf.append(translator.getTableName(tableDef, supportsSchema)).append(" ("); IteratorFill the table uploaded by the user with the given data.
*Note: * Batch updates may be done here if supported by the DBMS connection. * In case of any failure while using this feature, it will be flagged as unsupported and one-by-one updates will be processed instead. *
*Note: * This function formats TIMESTAMP and GEOMETRY (point, circle, box, polygon) values before insertion. *
* * @param metaTable Description of the updated table. * @param data Iterator over the rows to insert. * * @return Number of inserted rows. * * @throws DBException If rows can not be inserted because the SQL update query has failed. * @throws SQLException If any other SQL exception occurs. * @throws DataReadException If there is any error while reading the data from the given {@link TableIterator} (and particularly if a limit - in byte or row - has been reached). */ protected int fillUploadedTable(final TAPTable metaTable, final TableIterator data) throws SQLException, DBException, DataReadException{ // 1. Build the SQL update query: StringBuffer sql = new StringBuffer("INSERT INTO "); StringBuffer varParam = new StringBuffer(); // ...table name: sql.append(translator.getTableName(metaTable, supportsSchema)).append(" ("); // ...list of columns: TAPColumn[] cols = data.getMetadata(); for(int c = 0; c < cols.length; c++){ if (c > 0){ sql.append(", "); varParam.append(", "); } sql.append(translator.getColumnName(cols[c])); varParam.append('?'); } // ...values pattern: sql.append(") VALUES (").append(varParam).append(");"); // 2. Prepare the statement: PreparedStatement stmt = null; int nbRows = 0; try{ stmt = connection.prepareStatement(sql.toString()); // 3. Execute the query for each given row: while(data.nextRow()){ nbRows++; int c = 1; while(data.hasNextCol()){ Object val = data.nextCol(); if (val != null && cols[c - 1] != null){ /* TIMESTAMP FORMATTING */ if (cols[c - 1].getDatatype().type == DBDatatype.TIMESTAMP){ try{ val = new Timestamp(ISO8601Format.parse(val.toString())); }catch(ParseException pe){ if (logger != null) logger.logDB(LogLevel.ERROR, this, "UPLOAD", "[l. " + nbRows + ", c. " + c + "] Unexpected date format for the value: \"" + val + "\"! A date formatted in ISO8601 was expected.", pe); throw new DBException("[l. " + nbRows + ", c. " + c + "] Unexpected date format for the value: \"" + val + "\"! 
A date formatted in ISO8601 was expected.", pe); } } /* GEOMETRY FORMATTING */ else if (cols[c - 1].getDatatype().type == DBDatatype.POINT || cols[c - 1].getDatatype().type == DBDatatype.REGION){ Region region; // parse the region as an STC-S expression: try{ region = STCS.parseRegion(val.toString()); }catch(adql.parser.ParseException e){ if (logger != null) logger.logDB(LogLevel.ERROR, this, "UPLOAD", "[l. " + nbRows + ", c. " + c + "] Incorrect STC-S syntax for the geometrical value \"" + val + "\"! " + e.getMessage(), e); throw new DataReadException("[l. " + nbRows + ", c. " + c + "] Incorrect STC-S syntax for the geometrical value \"" + val + "\"! " + e.getMessage(), e); } // translate this STC region into the corresponding column value: try{ val = translator.translateGeometryToDB(region); }catch(adql.parser.ParseException e){ if (logger != null) logger.logDB(LogLevel.ERROR, this, "UPLOAD", "[l. " + nbRows + ", c. " + c + "] Impossible to import the ADQL geometry \"" + val + "\" into the database! " + e.getMessage(), e); throw new DataReadException("[l. " + nbRows + ", c. " + c + "] Impossible to import the ADQL geometry \"" + val + "\" into the database! " + e.getMessage(), e); } } /* BOOLEAN CASE (more generally, type incompatibility) */ else if (val != null && cols[c - 1].getDatatype().type == DBDatatype.SMALLINT && val instanceof Boolean) val = ((Boolean)val) ? (short)1 : (short)0; /* NULL CHARACTER CASE (JUST FOR POSTGRESQL) */ else if ((dbms == null || dbms.equalsIgnoreCase(DBMS_POSTGRES)) && val instanceof Character && (Character)val == 0x00) val = null; } stmt.setObject(c++, val); } executeUpdate(stmt, nbRows); } executeBatchUpdates(stmt, nbRows); return nbRows; }finally{ close(stmt); } } /** *Important note: * Only tables uploaded by users can be dropped from the database. To ensure that, the schema name of this table MUST be {@link STDSchema#UPLOADSCHEMA} ("TAP_UPLOAD") in ADQL. * If it has another ADQL name, an exception will be thrown. 
Of course, the DB name of this schema MAY be different. *
* *Important note: * This function may modify the given {@link TAPTable} object if schemas are not supported by this connection. * In this case, this function will prefix the table's DB name by the schema's DB name directly inside the given * {@link TAPTable} object. Then the DB name of the schema will be set to NULL. *
*Note: * This implementation is able to drop only one uploaded table at a time. So if this function finds more than one table matching the given one, * an exception will be thrown and no table will be dropped. *
* * @see tap.db.DBConnection#dropUploadedTable(tap.metadata.TAPTable) * @see #checkUploadedTableDef(TAPTable) */ @Override public synchronized boolean dropUploadedTable(final TAPTable tableDef) throws DBException{ // If no table to upload, consider it has been dropped and return TRUE: if (tableDef == null) return true; // Starting of new query execution => disable the cancel flag: resetCancel(); // Check the table is well defined (and particularly the schema is well set with an ADQL name = TAP_UPLOAD): checkUploadedTableDef(tableDef); try{ // Check the existence of the table to drop: if (!isTableExisting(tableDef.getDBSchemaName(), tableDef.getDBName(), connection.getMetaData())) return true; // Execute the update: int cnt = getStatement().executeUpdate("DROP TABLE " + translator.getTableName(tableDef, supportsSchema) + ";"); // Log the end: if (logger != null){ if (cnt >= 0) logger.logDB(LogLevel.INFO, this, "TABLE_DROPPED", "Table \"" + tableDef.getADQLName() + "\" (in DB: " + translator.getTableName(tableDef, supportsSchema) + ") dropped.", null); else logger.logDB(LogLevel.ERROR, this, "TABLE_DROPPED", "Table \"" + tableDef.getADQLName() + "\" (in DB: " + translator.getTableName(tableDef, supportsSchema) + ") NOT dropped.", null); } // Ensure the update is successful: return (cnt >= 0); }catch(SQLException se){ if (!isCancelled() && logger != null) logger.logDB(LogLevel.WARNING, this, "DROP_UPLOAD_TABLE", "Impossible to drop the uploaded table: " + translator.getTableName(tableDef, supportsSchema) + "!", se); throw new DBException("Impossible to drop the uploaded table: " + translator.getTableName(tableDef, supportsSchema) + "!", se); }finally{ cancel(true); closeStatement(); } } /** *Ensures that the given table MUST be inside the upload schema in ADQL.
* *Thus, the following cases are taken into account:
*Convert the given TAP type into the corresponding DBMS column type.
* ** This function tries first the type conversion using the translator ({@link JDBCTranslator#convertTypeToDB(DBType)}). * If it fails, a default conversion is done considering all the known types of the following DBMS: * PostgreSQL, SQLite, MySQL, Oracle and JavaDB/Derby. *
* * @param type TAP type to convert. * * @return The corresponding DBMS type. * * @see JDBCTranslator#convertTypeToDB(DBType) * @see #defaultTypeConversion(DBType) */ protected String convertTypeToDB(final DBType type){ String dbmsType = translator.convertTypeToDB(type); return (dbmsType == null) ? defaultTypeConversion(type) : dbmsType; } /** *Get the DBMS compatible datatype corresponding to the given column {@link DBType}.
*Note 1: * This function is able to generate a DB datatype compatible with the currently used DBMS. * In the current implementation, only PostgreSQL, Oracle, SQLite, MySQL and Java DB/Derby have been considered. * Most of the TAP types have been tested only with PostgreSQL and SQLite, without any problem. * If the DBMS you are using has not been considered, note that this function will return the TAP type expression by default. *
* *Note 2: * In case the given datatype is NULL or not managed here, the DBMS type corresponding to "VARCHAR" will be returned. *
* *Note 3: * The special TAP types POINT and REGION are converted into the DBMS type corresponding to "VARCHAR". *
* * @param datatype Column TAP type. * * @return The corresponding DB type, or VARCHAR if the given type is not managed or is NULL. */ protected String defaultTypeConversion(DBType datatype){ if (datatype == null) datatype = new DBType(DBDatatype.VARCHAR); switch(datatype.type){ case SMALLINT: return dbms.equals("sqlite") ? "INTEGER" : "SMALLINT"; case INTEGER: case REAL: return datatype.type.toString(); case BIGINT: if (dbms.equals("oracle")) return "NUMBER(19,0)"; else if (dbms.equals("sqlite")) return "INTEGER"; else return "BIGINT"; case DOUBLE: if (dbms.equals("postgresql") || dbms.equals("oracle")) return "DOUBLE PRECISION"; else if (dbms.equals("sqlite")) return "REAL"; else return "DOUBLE"; case BINARY: if (dbms.equals("postgresql")) return "bytea"; else if (dbms.equals("sqlite")) return "BLOB"; else if (dbms.equals("oracle")) return "RAW" + (datatype.length > 0 ? "(" + datatype.length + ")" : ""); else if (dbms.equals("derby")) return "CHAR" + (datatype.length > 0 ? "(" + datatype.length + ")" : "") + " FOR BIT DATA"; else return datatype.type.toString(); case VARBINARY: if (dbms.equals("postgresql")) return "bytea"; else if (dbms.equals("sqlite")) return "BLOB"; else if (dbms.equals("oracle")) return "LONG RAW" + (datatype.length > 0 ? "(" + datatype.length + ")" : ""); else if (dbms.equals("derby")) return "VARCHAR" + (datatype.length > 0 ? "(" + datatype.length + ")" : "") + " FOR BIT DATA"; else return datatype.type.toString(); case CHAR: if (dbms.equals("sqlite")) return "TEXT"; else return "CHAR"; case BLOB: if (dbms.equals("postgresql")) return "bytea"; else return "BLOB"; case CLOB: if (dbms.equals("postgresql") || dbms.equals("mysql") || dbms.equals("sqlite")) return "TEXT"; else return "CLOB"; case TIMESTAMP: if (dbms.equals("sqlite")) return "TEXT"; else return "TIMESTAMP"; case POINT: case REGION: case VARCHAR: default: if (dbms.equals("sqlite")) return "TEXT"; else return "VARCHAR"; } } /** *Start a transaction.
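/* Illustration (not library code): two branches of the default type conversion above
 * (DOUBLE and BIGINT), keyed by the same lower-case DBMS identifiers ("postgresql",
 * "oracle", "sqlite", ...) that the switch relies on: */

```java
// Condensed sketch of defaultTypeConversion(...) for two common TAP types,
// mirroring the per-DBMS branches of the real switch statement.
public class TypeConversionSketch {
	static String convertDouble(String dbms) {
		if (dbms.equals("postgresql") || dbms.equals("oracle"))
			return "DOUBLE PRECISION";
		else if (dbms.equals("sqlite"))
			return "REAL"; // SQLite stores floating-point values with REAL affinity
		else
			return "DOUBLE";
	}

	static String convertBigint(String dbms) {
		if (dbms.equals("oracle"))
			return "NUMBER(19,0)"; // Oracle has no BIGINT keyword
		else if (dbms.equals("sqlite"))
			return "INTEGER";
		else
			return "BIGINT";
	}

	public static void main(String[] args) {
		System.out.println(convertDouble("sqlite") + ", " + convertBigint("oracle"));
	}
}
```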
* <p>Basically, if transactions are supported by this connection, the AutoCommit flag is just turned off.
* It will be turned on again when {@link #endTransaction()} is called.</p>
*
* <p>If transactions are not supported by this connection, nothing is done.</p>
*
* <p><i><b>Important note:</b>
* If any error interrupts the START TRANSACTION operation, transactions will afterwards be considered as not supported by this connection.
* So, any subsequent call to this function (and to any other transaction related function) will do nothing.</i></p>
* * @throws DBException If it is impossible to start a transaction though transactions are supported by this connection. * If these are not supported, this error can never be thrown. */ protected void startTransaction() throws DBException{ try{ if (supportsTransaction){ connection.setAutoCommit(false); if (logger != null) logger.logDB(LogLevel.INFO, this, "START_TRANSACTION", "Transaction STARTED.", null); } }catch(SQLException se){ supportsTransaction = false; if (logger != null) logger.logDB(LogLevel.ERROR, this, "START_TRANSACTION", "Transaction STARTing impossible!", se); throw new DBException("Transaction STARTing impossible!", se); } } /** *Commit the current transaction.
* <p>{@link #startTransaction()} must have been called before. If not, the connection
* may throw an {@link SQLException}, which will be wrapped into a {@link DBException} here.</p>
*
* <p>If transactions are not supported by this connection, nothing is done.</p>
*
* <p><i><b>Important note:</b>
* If any error interrupts the COMMIT operation, transactions will afterwards be considered as not supported by this connection.
* So, any subsequent call to this function (and to any other transaction related function) will do nothing.</i></p>
*
* @throws DBException	If it is impossible to commit a transaction though transactions are supported by this connection.
*                   	If they are not supported, this error can never be thrown.
*/
protected void commit() throws DBException{
	try{
		if (supportsTransaction){
			connection.commit();
			if (logger != null)
				logger.logDB(LogLevel.INFO, this, "COMMIT", "Transaction COMMITTED.", null);
		}
	}catch(SQLException se){
		supportsTransaction = false;
		if (logger != null)
			logger.logDB(LogLevel.ERROR, this, "COMMIT", "Transaction COMMIT impossible!", se);
		throw new DBException("Transaction COMMIT impossible!", se);
	}
}

/**
 * <p>Rollback the current transaction.
 * The success or the failure of the rollback operation is always logged (except if no logger is available).</p>
* <p>{@link #startTransaction()} must have been called before. If not, the connection
* may throw an {@link SQLException}, which will be wrapped into a {@link DBException} here.</p>
*
* <p>If transactions are not supported by this connection, nothing is done.</p>
*
* <p><i><b>Important note:</b>
* If any error interrupts the ROLLBACK operation, transactions will afterwards be considered as not supported by this connection.
* So, any subsequent call to this function (and to any other transaction related function) will do nothing.</i></p>
*
* @throws DBException	If it is impossible to rollback a transaction though transactions are supported by this connection.
*                   	If they are not supported, this error can never be thrown.
*
* @see #rollback(boolean)
*/
protected final void rollback(){
	rollback(true);
}

/**
 * <p>Rollback the current transaction.</p>
* <p>{@link #startTransaction()} must have been called before. If not, the connection
* may throw an {@link SQLException}, which will be wrapped into a {@link DBException} here.</p>
*
* <p>If transactions are not supported by this connection, nothing is done.</p>
*
* <p><i><b>Important note:</b>
* If any error interrupts the ROLLBACK operation, transactions will afterwards be considered as not supported by this connection.
* So, any subsequent call to this function (and to any other transaction related function) will do nothing.</i></p>
*
* @param log	<i>true</i> to log the success/failure of the rollback operation,
*           	<i>false</i> to be quiet whatever happens.
*
* @throws DBException	If it is impossible to rollback a transaction though transactions are supported by this connection.
*                   	If they are not supported, this error can never be thrown.
*
* @since 2.1
*/
protected void rollback(final boolean log){
try{
if (supportsTransaction && !connection.getAutoCommit()){
connection.rollback();
if (log && logger != null)
logger.logDB(LogLevel.INFO, this, "ROLLBACK", "Transaction ROLLED BACK.", null);
}
}catch(SQLException se){
supportsTransaction = false;
if (log && logger != null)
logger.logDB(LogLevel.ERROR, this, "ROLLBACK", "Transaction ROLLBACK impossible!", se);
}
}
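The "Important note" blocks above all describe the same degrade-on-failure policy: the first `SQLException` raised by a transaction operation permanently flips the transaction-support flag off, and every later transaction call becomes a silent no-op. A minimal standalone sketch of that policy follows; `TxConnection` and `TxManager` are hypothetical stand-ins written for illustration only, NOT part of TAPLibrary or of `java.sql`:

```java
import java.sql.SQLException;

// Hypothetical stand-in for the single JDBC operation we need here;
// NOT java.sql.Connection.
interface TxConnection {
	void setAutoCommit(boolean on) throws SQLException;
}

class TxManager {
	private final TxConnection connection;
	// Optimistically assume transactions work until proven otherwise:
	private boolean supportsTransaction = true;

	TxManager(final TxConnection c){
		connection = c;
	}

	boolean transactionsSupported(){
		return supportsTransaction;
	}

	void startTransaction(){
		if (!supportsTransaction)
			return; // degraded: silently do nothing, like the real helpers
		try{
			connection.setAutoCommit(false);
		}catch(SQLException se){
			// Degrade permanently: all later transaction calls become no-ops.
			supportsTransaction = false;
		}
	}
}
```

The point of the sketch is the one-way flag: a single failure disables the whole transaction machinery for the lifetime of the connection wrapper, which is exactly what the notes above warn about.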
/**
* <p>End the current transaction.
* The success or the failure of the transaction ending operation is always logged (except if no logger is available).</p>
*
* <p>Basically, if transactions are supported by this connection, the AutoCommit flag is just turned on.</p>
*
* <p>If transactions are not supported by this connection, nothing is done.</p>
*
* <p><i><b>Important note:</b>
* If any error interrupts the END TRANSACTION operation, transactions will afterwards be considered as not supported by this connection.
* So, any subsequent call to this function (and to any other transaction related function) will do nothing.</i></p>
* * @throws DBException If it is impossible to end a transaction though transactions are supported by this connection. * If these are not supported, this error can never be thrown. * * @see #endTransaction(boolean) */ protected final void endTransaction(){ endTransaction(true); } /** *End the current transaction.
* <p>Basically, if transactions are supported by this connection, the AutoCommit flag is just turned on.</p>
*
* <p>If transactions are not supported by this connection, nothing is done.</p>
*
* <p><i><b>Important note:</b>
* If any error interrupts the END TRANSACTION operation, transactions will afterwards be considered as not supported by this connection.
* So, any subsequent call to this function (and to any other transaction related function) will do nothing.</i></p>
*
* @param log	<i>true</i> to log the success/failure of the transaction ending operation,
*           	<i>false</i> to be quiet whatever happens.
*
* @throws DBException If it is impossible to end a transaction though transactions are supported by this connection.
* If these are not supported, this error can never be thrown.
*
* @since 2.1
*/
protected void endTransaction(final boolean log){
try{
if (supportsTransaction){
connection.setAutoCommit(true);
if (log && logger != null)
logger.logDB(LogLevel.INFO, this, "END_TRANSACTION", "Transaction ENDED.", null);
}
}catch(SQLException se){
supportsTransaction = false;
if (log && logger != null)
logger.logDB(LogLevel.ERROR, this, "END_TRANSACTION", "Transaction ENDing impossible!", se);
}
}
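Taken together, {@link #startTransaction()}, {@link #commit()}, {@link #rollback()} and {@link #endTransaction()} are meant to be paired in the classic unit-of-work shape: start, do the work, commit on success, rollback on failure, and ALWAYS end (i.e. restore auto-commit) in a finally block. A minimal sketch of that pairing, using a hypothetical `Db` interface as a stand-in for this class (none of these names exist in TAPLibrary):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the four transaction helpers of JDBCConnection:
interface Db {
	void startTransaction() throws Exception;
	void commit() throws Exception;
	void rollback();
	void endTransaction();
}

class UnitOfWork {
	/**
	 * Run the given work inside a transaction and return a trace of the
	 * transaction calls, in order, for illustration.
	 */
	static List<String> run(final Db db, final Runnable work){
		List<String> trace = new ArrayList<>();
		try{
			db.startTransaction();
			trace.add("start");
			work.run();
			db.commit();
			trace.add("commit");
		}catch(Exception e){
			// Any failure (start, work or commit) triggers a rollback:
			db.rollback();
			trace.add("rollback");
		}finally{
			// Auto-commit is restored whatever happened:
			db.endTransaction();
			trace.add("end");
		}
		return trace;
	}
}
```

The finally block is the important part: because `endTransaction()` only flips auto-commit back on, it is safe to call it after either a commit or a rollback.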
/**
* <p>Close silently the given {@link ResultSet}.</p>
*
* <p>If the given {@link ResultSet} is NULL, nothing (not even an exception/error) happens.</p>
*
* <p>If any {@link SQLException} occurs during this operation, it is caught and just logged
* (see {@link TAPLog#logDB(uws.service.log.UWSLog.LogLevel, DBConnection, String, String, Throwable)}).
* No error is thrown and nothing else is done.</p>
*
* @param rs	{@link ResultSet} to close.
*/
protected final void close(final ResultSet rs){
	try{
		if (rs != null)
			rs.close();
	}catch(SQLException se){
		if (logger != null)
			logger.logDB(LogLevel.WARNING, this, "CLOSE", "Cannot close a ResultSet!", se);
	}
}

/**
 * <p>Close silently the given {@link Statement}.</p>
* <p>If the given {@link Statement} is NULL, nothing (not even an exception/error) happens.</p>
*
* <p>The given statement is explicitly canceled by this function before being closed.
* Thus the corresponding DBMS process is ensured to be stopped. Of course, this
* cancellation is effective only if this operation is supported by the JDBC driver
* and the DBMS.</p>
*
* <p><i><b>Important note:</b>
* In case of cancellation, NO rollback is performed.</i></p>
*
* <p>If any {@link SQLException} occurs during this operation, it is caught and just logged
* (see {@link TAPLog#logDB(uws.service.log.UWSLog.LogLevel, DBConnection, String, String, Throwable)}).
* No error is thrown and nothing else is done.</p>
*
* @param stmt	{@link Statement} to close.
*
* @see #cancel(Statement, boolean)
*/
protected final void close(final Statement stmt){
	try{
		if (stmt != null){
			cancel(stmt, false);
			stmt.close();
		}
	}catch(SQLException se){
		if (logger != null)
			logger.logDB(LogLevel.WARNING, this, "CLOSE", "Cannot close a Statement!", se);
	}
}

/**
 * <p>Transform the given column value into a boolean value.</p>
 *
 * <p>The following cases are taken into account, depending on the given value's type:</p>
* <p>Tell whether the specified schema exists in the database.
* To do so, it uses the given {@link DatabaseMetaData} object to query the database and list all existing schemas.</p>
*
* <p><i><b>Note:</b>
* This function is completely useless if the connection does not support schemas.</i></p>
*
* <p><i><b>Note:</b>
* The test on the schema name is done considering the case sensitivity indicated by the translator
* (see {@link JDBCTranslator#isCaseSensitive(IdentifierField)}).</i></p>
*
* <p><i><b>Note:</b>
* This function is used by {@link #addUploadedTable(TAPTable, TableIterator)} and {@link #resetTAPSchema(Statement, TAPTable[])}.</i></p>
*
* @param schemaName	DB name of the schema whose existence must be checked.
* @param dbMeta	Metadata about the database, and mainly the list of all existing schemas.
*
* @return	<i>true</i> if the specified schema exists, <i>false</i> otherwise.
*
* @throws SQLException	If any error occurs while interrogating the database about existing schemas.
*/
protected boolean isSchemaExisting(String schemaName, final DatabaseMetaData dbMeta) throws SQLException{
	if (!supportsSchema || schemaName == null || schemaName.length() == 0)
		return true;

	// Determine the case sensitivity to use for the equality test:
	boolean caseSensitive = translator.isCaseSensitive(IdentifierField.SCHEMA);

	ResultSet rs = null;
	try{
		// List all available schemas and stop as soon as a schema name matches:
		rs = dbMeta.getSchemas();
		boolean hasSchema = false;
		while(!hasSchema && rs.next())
			hasSchema = equals(rs.getString(1), schemaName, caseSensitive);
		return hasSchema;
	}finally{
		close(rs);
	}
}

/**
 * <p>Tell whether the specified table exists in the database.
 * To do so, it uses the given {@link DatabaseMetaData} object to query the database and list all existing tables.</p>
* <p><i><b>Important note:</b>
* If schemas are not supported by this connection but a schema name is nevertheless provided in parameter,
* the table name will be prefixed by the schema name.
* The search will then be done with NULL as schema name and this prefixed table name.</i></p>
*
* <p><i><b>Note:</b>
* The test on the schema name is done considering the case sensitivity indicated by the translator
* (see {@link JDBCTranslator#isCaseSensitive(IdentifierField)}).</i></p>
*
* <p><i><b>Note:</b>
* This function is used by {@link #addUploadedTable(TAPTable, TableIterator)} and {@link #dropUploadedTable(TAPTable)}.</i></p>
*
* @param schemaName	DB name of the schema in which the table to search is. If NULL, the table is expected in any schema but ONLY one MUST exist.
* @param tableName	DB name of the table to search.
* @param dbMeta	Metadata about the database, and mainly the list of all existing tables.
*
* @return	<i>true</i> if the specified table exists, <i>false</i> otherwise.
*
* @throws SQLException	If any error occurs while interrogating the database about existing tables.
*/
protected boolean isTableExisting(String schemaName, String tableName, final DatabaseMetaData dbMeta) throws DBException, SQLException{
	if (tableName == null || tableName.length() == 0)
		return true;

	// Determine the case sensitivity to use for the equality test:
	boolean schemaCaseSensitive = translator.isCaseSensitive(IdentifierField.SCHEMA);
	boolean tableCaseSensitive = translator.isCaseSensitive(IdentifierField.TABLE);

	ResultSet rs = null;
	try{
		// List all matching tables:
		if (supportsSchema){
			String schemaPattern = schemaCaseSensitive ? schemaName : null;
			String tablePattern = tableCaseSensitive ? tableName : null;
			rs = dbMeta.getTables(null, schemaPattern, tablePattern, null);
		}else{
			String tablePattern = tableCaseSensitive ? tableName : null;
			rs = dbMeta.getTables(null, null, tablePattern, null);
		}

		// Stop on the first table which matches completely (schema name + table name, depending on their respective case sensitivity):
		int cnt = 0;
		while(rs.next()){
			String rsSchema = nullifyIfNeeded(rs.getString(2));
			String rsTable = rs.getString(3);
			if (!supportsSchema || schemaName == null || equals(rsSchema, schemaName, schemaCaseSensitive)){
				if (equals(rsTable, tableName, tableCaseSensitive))
					cnt++;
			}
		}

		if (cnt > 1){
			if (logger != null)
				logger.logDB(LogLevel.ERROR, this, "TABLE_EXIST", "More than one table matches these criteria (schema=" + schemaName + " (case sensitive?" + schemaCaseSensitive + ") && table=" + tableName + " (case sensitive?" + tableCaseSensitive + "))!", null);
			throw new DBException("More than one table matches these criteria (schema=" + schemaName + " (case sensitive?" + schemaCaseSensitive + ") && table=" + tableName + " (case sensitive?" + tableCaseSensitive + "))!");
		}

		return cnt == 1;
	}finally{
		close(rs);
	}
}

/**
 * <p>Tell whether the specified column exists in the specified table of the database.
 * To do so, it uses the given {@link DatabaseMetaData} object to query the database and list all existing columns.</p>
* <p><i><b>Important note:</b>
* If schemas are not supported by this connection but a schema name is nevertheless provided in parameter,
* the table name will be prefixed by the schema name.
* The search will then be done with NULL as schema name and this prefixed table name.</i></p>
*
* <p><i><b>Note:</b>
* The test on the schema name is done considering the case sensitivity indicated by the translator
* (see {@link JDBCTranslator#isCaseSensitive(IdentifierField)}).</i></p>
*
* <p><i><b>Note:</b>
* This function is used by {@link #loadSchemas(TAPTable, TAPMetadata, Statement)}, {@link #loadTables(TAPTable, TAPMetadata, Statement)}
* and {@link #loadColumns(TAPTable, List, Statement)}.</i></p>
* * @param schemaName DB name of the table schema. MAY BE NULL * @param tableName DB name of the table containing the column to search. MAY BE NULL * @param columnName DB name of the column to search. * @param dbMeta Metadata about the database, and mainly the list of all existing tables. * * @return true if the specified column exists, false otherwise. * * @throws SQLException If any error occurs while interrogating the database about existing columns. */ protected boolean isColumnExisting(String schemaName, String tableName, String columnName, final DatabaseMetaData dbMeta) throws DBException, SQLException{ if (columnName == null || columnName.length() == 0) return true; // Determine the case sensitivity to use for the equality test: boolean schemaCaseSensitive = translator.isCaseSensitive(IdentifierField.SCHEMA); boolean tableCaseSensitive = translator.isCaseSensitive(IdentifierField.TABLE); boolean columnCaseSensitive = translator.isCaseSensitive(IdentifierField.COLUMN); ResultSet rsT = null, rsC = null; try{ /* Note: * * The DatabaseMetaData.getColumns(....) function does not work properly * with the SQLite driver: when all parameters are set to null, meaning all columns of the database * must be returned, absolutely no rows are selected. * * The solution proposed here, is to first search all (matching) tables, and then for each table get * all its columns and find the matching one(s). */ // List all matching tables: if (supportsSchema){ String schemaPattern = schemaCaseSensitive ? schemaName : null; String tablePattern = tableCaseSensitive ? tableName : null; rsT = dbMeta.getTables(null, schemaPattern, tablePattern, null); }else{ String tablePattern = tableCaseSensitive ? tableName : null; rsT = dbMeta.getTables(null, null, tablePattern, null); } // For each matching table: int cnt = 0; String columnPattern = columnCaseSensitive ? 
columnName : null;
		while(rsT.next()){
			String rsSchema = nullifyIfNeeded(rsT.getString(2));
			String rsTable = rsT.getString(3);
			// test the schema name:
			if (!supportsSchema || schemaName == null || equals(rsSchema, schemaName, schemaCaseSensitive)){
				// test the table name:
				if ((tableName == null || equals(rsTable, tableName, tableCaseSensitive))){
					// list its columns:
					rsC = dbMeta.getColumns(null, rsSchema, rsTable, columnPattern);
					// count all matching columns:
					while(rsC.next()){
						String rsColumn = rsC.getString(4);
						if (equals(rsColumn, columnName, columnCaseSensitive))
							cnt++;
					}
					close(rsC);
				}
			}
		}

		if (cnt > 1){
			if (logger != null)
				logger.logDB(LogLevel.ERROR, this, "COLUMN_EXIST", "More than one column matches these criteria (schema=" + schemaName + " (case sensitive?" + schemaCaseSensitive + ") && table=" + tableName + " (case sensitive?" + tableCaseSensitive + ") && column=" + columnName + " (case sensitive?" + columnCaseSensitive + "))!", null);
			throw new DBException("More than one column matches these criteria (schema=" + schemaName + " (case sensitive?" + schemaCaseSensitive + ") && table=" + tableName + " (case sensitive?" + tableCaseSensitive + ") && column=" + columnName + " (case sensitive?" + columnCaseSensitive + "))!");
		}

		return cnt == 1;
	}finally{
		close(rsT);
		close(rsC);
	}
}

/* <p>Build a table prefix with the given schema name.</p>
* <p>By default, this function returns: schemaName + "_".</p>
*
* <p><b>CAUTION:</b>
* This function is used only when schemas are not supported by the DBMS connection.
* It aims to propose an alternative to the schema notion by prefixing the table name with the schema name.</p>
*
* <p><i><b>Note:</b>
* If the given schema is NULL or an empty string, an empty string will be returned.
* Thus, no prefix will be set... which is very useful when the table name has already been prefixed
* (in such a case, the DB name of its schema has theoretically been set to NULL).</i></p>
* * @param schemaName (DB) Schema name. * * @return The corresponding table prefix, or "" if the given schema name is an empty string or NULL. * protected String getTablePrefix(final String schemaName){ if (schemaName != null && schemaName.trim().length() > 0) return schemaName + "_"; else return ""; }*/ /** * Tell whether the specified table (using its DB name only) is a standard one or not. * * @param dbTableName DB (unqualified) table name. * @param stdTables List of all tables to consider as the standard ones. * @param caseSensitive Indicate whether the equality test must be done case sensitively or not. * * @return The corresponding {@link STDTable} if the specified table is a standard one, * NULL otherwise. * * @see TAPMetadata#resolveStdTable(String) */ protected final STDTable isStdTable(final String dbTableName, final TAPTable[] stdTables, final boolean caseSensitive){ if (dbTableName != null){ for(TAPTable t : stdTables){ if (equals(dbTableName, t.getDBName(), caseSensitive)) return TAPMetadata.resolveStdTable(t.getADQLName()); } } return null; } /** *"Execute" the query update. This update must concern ONLY ONE ROW.
* <p>Note that the "execute" action will be different depending on whether batch update queries are supported or not by this connection:</p>
*
* <p>Before returning, and only if batch update queries are not supported, this function ensures that exactly one row has been updated.
* If it is not the case, a {@link DBException} is thrown.</p>
*
* <p><i><b>Important note:</b>
* If the function {@link PreparedStatement#addBatch()} fails by throwing an {@link SQLException}, batch updates
* will afterwards be considered as not supported by this connection. Besides, if this row is the first one in a batch update (parameter indRow=1),
* the error will just be logged and a {@link PreparedStatement#executeUpdate()} will be tried instead. However, if the row is not the first one,
* the error will be logged but also thrown as a {@link DBException}. In both cases, a subsequent call to
* {@link #executeBatchUpdates(PreparedStatement, int)} will obviously have no effect.</i></p>
* * @param stmt {@link PreparedStatement} in which the update query has been prepared. * @param indRow Index of the row in the whole update process. It is used only for error management purpose. * * @throws SQLException If {@link PreparedStatement#executeUpdate()} fails. * @throws DBException If {@link PreparedStatement#addBatch()} fails and this update does not concern the first row, or if the number of updated rows is different from 1. */ protected final void executeUpdate(final PreparedStatement stmt, int indRow) throws SQLException, DBException{ // BATCH INSERTION: (the query is queued and will be executed later) if (supportsBatchUpdates){ // Add the prepared query in the batch queue of the statement: try{ stmt.addBatch(); }catch(SQLException se){ if (!isCancelled()) supportsBatchUpdates = false; /* * If the error happens for the first row, it is still possible to insert all rows * with the non-batch function - executeUpdate(). * * Otherwise, it is impossible to insert the previous batched rows ; an exception must be thrown * and must stop the whole TAP_SCHEMA initialization. */ if (indRow == 1){ if (!isCancelled() && logger != null) logger.logDB(LogLevel.WARNING, this, "EXEC_UPDATE", "BATCH query impossible => TRYING AGAIN IN A NORMAL EXECUTION (executeUpdate())!", se); }else{ if (!isCancelled() && logger != null) logger.logDB(LogLevel.ERROR, this, "EXEC_UPDATE", "BATCH query impossible!", se); throw new DBException("BATCH query impossible!", se); } } } // NORMAL INSERTION: (immediate insertion) if (!supportsBatchUpdates){ // Insert the row prepared in the given statement: int nbRowsWritten = stmt.executeUpdate(); // Check the row has been inserted with success: if (nbRowsWritten != 1){ if (logger != null) logger.logDB(LogLevel.ERROR, this, "EXEC_UPDATE", "ROW " + indRow + " not inserted!", null); throw new DBException("ROW " + indRow + " not inserted!"); } } } /** *Execute all batched queries.
* <p>To do so, {@link PreparedStatement#executeBatch()} is called and then, if it was successful, {@link PreparedStatement#clearBatch()}.</p>
*
* <p>Before returning, this function ensures that exactly the given number of rows has been updated.
* If it is not the case, a {@link DBException} is thrown.</p>
*
* <p><i><b>Note:</b>
* This function has no effect if batch queries are not supported.</i></p>
*
* <p><i><b>Important note:</b>
* In case {@link PreparedStatement#executeBatch()} fails by throwing an {@link SQLException},
* batch update queries will afterwards be considered as not supported by this connection.</i></p>
*
* @param stmt	{@link PreparedStatement} in which the update query has been prepared.
* @param nbRows	Number of rows that should be updated.
*
* @throws DBException	If {@link PreparedStatement#executeBatch()} fails, or if the number of updated rows is different from the given one.
*/
protected final void executeBatchUpdates(final PreparedStatement stmt, int nbRows) throws DBException{
	if (supportsBatchUpdates){
		// Execute all the batch queries:
		int[] rows;
		try{
			rows = stmt.executeBatch();
		}catch(SQLException se){
			if (!isCancelled()){
				supportsBatchUpdates = false;
				if (logger != null)
					logger.logDB(LogLevel.ERROR, this, "EXEC_UPDATE", "BATCH execution impossible!", se);
			}
			throw new DBException("BATCH execution impossible!", se);
		}

		// Remove executed queries from the statement:
		try{
			stmt.clearBatch();
		}catch(SQLException se){
			if (!isCancelled() && logger != null)
				logger.logDB(LogLevel.WARNING, this, "EXEC_UPDATE", "CLEAR BATCH impossible!", se);
		}

		// Count the updated rows:
		int nbRowsUpdated = 0;
		for(int i = 0; i < rows.length; i++)
			nbRowsUpdated += rows[i];

		// Check all given rows have been inserted with success:
		if (nbRowsUpdated != nbRows){
			if (logger != null)
				logger.logDB(LogLevel.ERROR, this, "EXEC_UPDATE", "ROWS not all updated (" + nbRows + " to update ; " + nbRowsUpdated + " updated)!", null);
			throw new DBException("ROWS not all updated (" + nbRows + " to update ; " + nbRowsUpdated + " updated)!");
		}
	}
}

/**
 * Append all items of the iterator inside the given list.
 *
 * @param lst	List to update.
 * @param it	All items to append inside the list.
 */
private < T > void appendAllInto(final List<T> lst, final Iterator<T> it){
	while(it != null && it.hasNext())
		lst.add(it.next());
}

/**
 * <p>Tell whether the given DB name is equal (case sensitively or not, depending on the given parameter)
 * to the given name coming from a {@link TAPMetadata} object.</p>
* <p>If at least one of the given names is NULL, <i>false</i> is returned.</p>
*
* <p><i><b>Note:</b>
* The comparison is done according to the specified case sensitivity BUT ALSO according to the case supported and stored by the DBMS.
* For instance, if case insensitivity has been specified but mixed case is not supported for unquoted identifiers,
* the comparison must, surprisingly, take the case into account, depending on whether unquoted identifiers are stored in lower or upper case.
* Thus, this special way of evaluating equality should be as close as possible to the identifier storage and lookup policies of the used DBMS.</i></p>
* * @param dbName Name provided by the database. * @param metaName Name provided by a {@link TAPMetadata} object. * @param caseSensitive true if the equality test must be done case sensitively, false otherwise. * * @return true if both names are equal, false otherwise. */ protected final boolean equals(final String dbName, final String metaName, final boolean caseSensitive){ if (dbName == null || metaName == null) return false; if (caseSensitive){ if (supportsMixedCaseQuotedIdentifier || mixedCaseQuoted) return dbName.equals(metaName); else if (lowerCaseQuoted) return dbName.equals(metaName.toLowerCase()); else if (upperCaseQuoted) return dbName.equals(metaName.toUpperCase()); else return dbName.equalsIgnoreCase(metaName); }else{ if (supportsMixedCaseUnquotedIdentifier) return dbName.equalsIgnoreCase(metaName); else if (lowerCaseUnquoted) return dbName.equals(metaName.toLowerCase()); else if (upperCaseUnquoted) return dbName.equals(metaName.toUpperCase()); else return dbName.equalsIgnoreCase(metaName); } } @Override public void setFetchSize(final int size){ supportsFetchSize = true; fetchSize = (size > 0) ? size : IGNORE_FETCH_SIZE; } }
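The unquoted branch of the identifier comparison above can be isolated into a small, testable sketch. `IdentifierEquality` and its three flag parameters are hypothetical stand-ins, written for illustration, for the `DatabaseMetaData` capabilities (`supportsMixedCaseIdentifiers()`, `storesLowerCaseIdentifiers()`, `storesUpperCaseIdentifiers()`) that `JDBCConnection` caches at construction time:

```java
// Sketch of the case-sensitivity-aware identifier comparison:
// when the DBMS folds unquoted identifiers to one case, a
// "case insensitive" comparison must actually fold the metadata
// name to that case before comparing.
class IdentifierEquality {
	static boolean unquotedEquals(final String dbName, final String metaName,
			final boolean supportsMixedCase,
			final boolean storesLowerCase,
			final boolean storesUpperCase){
		if (dbName == null || metaName == null)
			return false;
		if (supportsMixedCase)
			// Mixed case kept as-is => a plain case-insensitive test is enough:
			return dbName.equalsIgnoreCase(metaName);
		else if (storesLowerCase)
			// DBMS folds unquoted identifiers to lower case:
			return dbName.equals(metaName.toLowerCase());
		else if (storesUpperCase)
			// DBMS folds unquoted identifiers to upper case:
			return dbName.equals(metaName.toUpperCase());
		else
			return dbName.equalsIgnoreCase(metaName);
	}
}
```

For example, against a lower-case-folding DBMS, the metadata name "MyTable" matches the stored "mytable" but not a stored "MyTable", which is exactly the "surprising" behavior the note above describes.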