/**
 * Prepare prefetch store. {@inheritDoc}
 *
 * @see com.continuent.tungsten.replicator.plugin.ReplicatorPlugin#prepare(com.continuent.tungsten.replicator.plugin.PluginContext)
 */
public void prepare(PluginContext context) throws ReplicatorException {
    // Perform super-class prepare.
    super.prepare(context);

    logger.info("Preparing PrefetchStore for slave catalog schema: "
            + slaveCatalogSchema);

    // Load defaults for connection.
    if (url == null)
        url = context.getJdbcUrl("tungsten_" + context.getServiceName());
    if (user == null)
        user = context.getJdbcUser();
    if (password == null)
        password = context.getJdbcPassword();

    // Connect.
    try {
        conn = DatabaseFactory.createDatabase(url, user, password);
        conn.connect(true);
        seqnoStatement = conn.prepareStatement(
                "select seqno, fragno, last_Frag, source_id, epoch_number, eventid, applied_latency from "
                        + slaveCatalogSchema + "." + CommitSeqnoTable.TABLE_NAME);
    } catch (SQLException e) {
        throw new ReplicatorException(e);
    }

    // Show that we have started.
    startTimeMillis = System.currentTimeMillis();
    prefetchState = PrefetchState.active;
}
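/*
 * Illustrative only: a minimal sketch of how the seqnoStatement prepared
 * above could be polled to read the slave's current commit position. The
 * CurrentPosition holder class and the pollCommitSeqno() helper are
 * hypothetical names, not part of the store shown here; the column names
 * come from the SELECT issued in prepare().
 */
private static class CurrentPosition {
    long seqno;
    String eventId;
    long appliedLatency;
}

private CurrentPosition pollCommitSeqno() throws SQLException {
    ResultSet rs = null;
    try {
        rs = seqnoStatement.executeQuery();
        if (rs.next()) {
            CurrentPosition position = new CurrentPosition();
            position.seqno = rs.getLong("seqno");
            position.eventId = rs.getString("eventid");
            position.appliedLatency = rs.getLong("applied_latency");
            return position;
        }
        // No row yet: the commit seqno table has not been initialized.
        return null;
    } finally {
        if (rs != null)
            rs.close();
    }
}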
/** Wrapper for startHeartbeat() call. */
public void startHeartbeat(String url, String user, String password,
        String name, String initScript) throws SQLException {
    Database db = null;
    try {
        db = DatabaseFactory.createDatabase(url, user, password);
        if (initScript != null)
            db.setInitScript(initScript);
        db.connect();
        startHeartbeat(db, name);
    } finally {
        // Guard against a failed createDatabase() call leaving db null.
        if (db != null)
            db.close();
    }
}
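/*
 * Illustrative only: one way the wrapper above might be invoked from the
 * same class. Every parameter value is an assumed placeholder, not a
 * default taken from the replicator configuration.
 */
void heartbeatExample() throws SQLException {
    startHeartbeat("jdbc:mysql://localhost:3306/tungsten_myservice", // assumed URL
            "tungsten",       // assumed user
            "secret",         // assumed password
            "demo-heartbeat", // assumed heartbeat name
            null);            // no connection init script
}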
/**
 * {@inheritDoc}
 *
 * @see com.continuent.tungsten.replicator.plugin.ReplicatorPlugin#prepare(com.continuent.tungsten.replicator.plugin.PluginContext)
 */
@Override
public void prepare(PluginContext context) throws ReplicatorException,
        InterruptedException {
    try {
        // Oracle JDBC URL, for example:
        // jdbc:oracle:thin:@192.168.0.60:1521:ORCL
        connection = DatabaseFactory.createDatabase(url, user, password);
    } catch (SQLException e) {
        // Do not swallow the failure: a null connection would otherwise
        // cause a NullPointerException below.
        throw new ReplicatorException("Unable to create connection to Oracle", e);
    }

    try {
        connection.connect();
    } catch (SQLException e) {
        throw new ReplicatorException("Unable to connect to Oracle", e);
    }

    Statement stmt = null;
    try {
        stmt = connection.createStatement();
    } catch (SQLException e) {
        throw new ReplicatorException("Unable to create a statement object", e);
    }

    // Step 1: Find the source tables for which the subscriber has access
    // privileges.
    ResultSet rs = null;
    try {
        rs = stmt.executeQuery("SELECT * FROM ALL_SOURCE_TABLES");
        sources = new ArrayList<OracleCDCSource>();
        while (rs.next()) {
            String srcSchema = rs.getString("SOURCE_SCHEMA_NAME");
            String srcTable = rs.getString("SOURCE_TABLE_NAME");
            if (logger.isDebugEnabled())
                logger.debug("Subscribing to " + srcSchema + "." + srcTable);
            sources.add(new OracleCDCSource(srcSchema, srcTable));
        }
    } catch (SQLException e) {
        throw new ReplicatorException("Unable to query source tables", e);
    } finally {
        if (rs != null) {
            try {
                rs.close();
            } catch (SQLException ignore) {
                if (logger.isDebugEnabled())
                    logger.debug("Failed to close result set", ignore);
            }
            rs = null;
        }
    }

    Set<String> changeSets = new LinkedHashSet<String>();

    // Step 2: Find the change set names and columns for which the
    // subscriber has access privileges.
    for (Iterator<OracleCDCSource> iterator = sources.iterator(); iterator.hasNext();) {
        OracleCDCSource src = iterator.next();
        // Build the query once so the logged statement matches what is
        // actually executed.
        String changeSetQuery = "SELECT UNIQUE CHANGE_SET_NAME, PUB.COLUMN_NAME,"
                + " PUB_ID, COL.COLUMN_ID "
                + " FROM ALL_PUBLISHED_COLUMNS PUB, ALL_TAB_COLUMNS COL "
                + " WHERE SOURCE_SCHEMA_NAME = '" + src.getSchema() + "'"
                + " AND SOURCE_TABLE_NAME = '" + src.getTable() + "'"
                + " AND SOURCE_SCHEMA_NAME = COL.OWNER "
                + " AND SOURCE_TABLE_NAME = COL.TABLE_NAME"
                + " AND PUB.COLUMN_NAME = COL.COLUMN_NAME"
                + " ORDER BY COL.COLUMN_ID";
        try {
            if (logger.isDebugEnabled())
                logger.debug("Executing " + changeSetQuery);
            rs = stmt.executeQuery(changeSetQuery);
            while (rs.next()) {
                String changeSetName = rs.getString("CHANGE_SET_NAME");
                String columnName = rs.getString("COLUMN_NAME");
                long pubId = rs.getLong("PUB_ID");
                src.addPublication(changeSetName, columnName, pubId);
                changeSets.add(changeSetName);
                if (logger.isDebugEnabled())
                    logger.debug("Found column " + changeSetName + "\t"
                            + columnName + "\t" + pubId);
            }
        } catch (SQLException e) {
            throw new ReplicatorException("Unable to fetch change set definition", e);
        } finally {
            if (rs != null) {
                try {
                    rs.close();
                } catch (SQLException ignore) {
                    if (logger.isDebugEnabled())
                        logger.debug("Failed to close result set", ignore);
                }
            }
        }
    }

    if (stmt != null) {
        try {
            stmt.close();
        } catch (SQLException ignore) {
            if (logger.isDebugEnabled())
                logger.debug("Failed to close statement object", ignore);
        }
    }

    // Step 3: Create subscriptions.
    // For each publication, create the subscription to the publication if
    // not already done, then subscribe.
    int i = 1;
    subscriberViews = new HashMap<String, OracleCDCSource>();
    for (Iterator<OracleCDCSource> iterator = sources.iterator(); iterator.hasNext();) {
        OracleCDCSource src = iterator.next();
        Map<Long, OracleCDCPublication> publications = src.getPublications();

        StringBuffer subscribeStmt = new StringBuffer();
        for (OracleCDCPublication pub : publications.values()) {
            if (changeSets.remove(pub.getPublicationName())) {
                if (logger.isDebugEnabled())
                    logger.debug("Creating subscription to "
                            + pub.getPublicationName());

                // Drop the subscription if it already exists: this can
                // happen if the release code was not called.
                executeQuery("BEGIN DBMS_CDC_SUBSCRIBE.DROP_SUBSCRIPTION("
                        + "subscription_name => 'TUNGSTEN_PUB');END;", true);

                executeQuery("BEGIN DBMS_CDC_SUBSCRIBE.CREATE_SUBSCRIPTION("
                        + "change_set_name => '" + pub.getPublicationName()
                        + "', description => 'Change data used by Tungsten', "
                        + "subscription_name => 'TUNGSTEN_PUB');end;", false);
            }

            // Step 4: Subscribe to a source table and the columns in the
            // source table.
            String viewName = "VW_TUNGSTEN_CDC" + i;
            subscribeStmt.append("DBMS_CDC_SUBSCRIBE.SUBSCRIBE("
                    + "subscription_name => 'TUNGSTEN_PUB', "
                    + "publication_id => " + pub.getPublicationId() + ","
                    + "column_list => '" + pub.getColumnList() + "',"
                    + "subscriber_view => '" + viewName + "');");
            subscriberViews.put(viewName, src);
            src.setSubscriptionView(viewName, pub.getPublicationId());

            if (logger.isDebugEnabled())
                logger.debug("Creating change view " + viewName
                        + " - Now handling " + subscriberViews.keySet().size()
                        + " views");
            i++;
        }
        executeQuery("BEGIN " + subscribeStmt.toString() + " END;", false);
    }

    // Step 5: Activate the subscription.
    executeQuery("BEGIN DBMS_CDC_SUBSCRIBE.ACTIVATE_SUBSCRIPTION("
            + "subscription_name => 'TUNGSTEN_PUB');END;", false);
}
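/*
 * Illustrative only: once prepare() has activated the TUNGSTEN_PUB
 * subscription, a consumer typically advances the subscription window,
 * reads each subscriber view created above, and then purges the window.
 * The sketch below uses the standard Oracle DBMS_CDC_SUBSCRIBE
 * EXTEND_WINDOW and PURGE_WINDOW procedures; the readChangesFromView()
 * helper is a hypothetical placeholder for the extractor's own row
 * handling, not an existing method of this class.
 */
private void consumeChangeWindow() throws SQLException {
    Statement stmt = connection.createStatement();
    try {
        // Make newly committed change data visible in the subscriber views.
        stmt.execute("BEGIN DBMS_CDC_SUBSCRIBE.EXTEND_WINDOW("
                + "subscription_name => 'TUNGSTEN_PUB');END;");

        // Read every subscriber view registered in prepare().
        for (String viewName : subscriberViews.keySet()) {
            ResultSet rs = stmt.executeQuery("SELECT * FROM " + viewName);
            try {
                readChangesFromView(viewName, rs);
            } finally {
                rs.close();
            }
        }

        // Release the rows that have just been processed.
        stmt.execute("BEGIN DBMS_CDC_SUBSCRIBE.PURGE_WINDOW("
                + "subscription_name => 'TUNGSTEN_PUB');END;");
    } finally {
        stmt.close();
    }
}

private void readChangesFromView(String viewName, ResultSet rs)
        throws SQLException {
    while (rs.next()) {
        // Real handling of each change row belongs to the extractor.
    }
}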
/**
 * Generates the chunks to extract, either from an explicit chunk
 * definition file or by scanning every non-system schema, then posts one
 * completion marker per extract channel so the worker threads can stop.
 */
private void runTask() {
    connection = null;
    try {
        connection = DatabaseFactory.createDatabase(url, user, password);
        connection.connect();
    } catch (SQLException e) {
        // Without a connection there is nothing useful to do.
        logger.error("Unable to connect to the database", e);
        return;
    }

    if (ignoreTablesFile != null) {
        ignoreTablesDefinition = new ChunkDefinitions(ignoreTablesFile);
        try {
            ignoreTablesDefinition.parseFile();
        } catch (Exception e) {
            logger.error("Unable to parse ignore tables file " + ignoreTablesFile, e);
        }
    }

    // Check whether we have to use a chunk definition file.
    if (chunkDefFile != null) {
        logger.info("Using definition from file " + chunkDefFile);
        chunkDefinition = new ChunkDefinitions(chunkDefFile);
        try {
            chunkDefinition.parseFile();
        } catch (IOException e) {
            logger.error("Unable to read chunk definition file " + chunkDefFile, e);
        } catch (ReplicatorException e) {
            logger.error("Unable to parse chunk definition file " + chunkDefFile, e);
        }

        LinkedList<ChunkRequest> chunksDefinitions = chunkDefinition.getChunksDefinitions();
        for (ChunkRequest chunkRequest : chunksDefinitions) {
            if (chunkRequest.getTable() != null) {
                try {
                    Table table = connection.findTable(chunkRequest.getSchema(),
                            chunkRequest.getTable(), true);
                    if (table != null)
                        generateChunksForTable(table, chunkRequest.getChunkSize(),
                                chunkRequest.getColumns());
                    else
                        logger.warn("Failed while processing table "
                                + chunkRequest.getSchema() + "."
                                + chunkRequest.getTable() + " : table not found.");
                } catch (SQLException e) {
                    logger.error("Failed while processing table "
                            + chunkRequest.getSchema() + "."
                            + chunkRequest.getTable(), e);
                } catch (ReplicatorException e) {
                    logger.error("Failed while processing table "
                            + chunkRequest.getSchema() + "."
                            + chunkRequest.getTable(), e);
                } catch (InterruptedException e) {
                    logger.warn("Interrupted while processing table "
                            + chunkRequest.getSchema() + "."
                            + chunkRequest.getTable(), e);
                }
            } else if (chunkRequest.getSchema() != null) {
                generateChunksForSchema(chunkRequest.getSchema());
            }
        }
    } else {
        try {
            DatabaseMetaData databaseMetaData = connection.getDatabaseMetaData();
            ResultSet schemasRs = databaseMetaData.getSchemas();
            while (schemasRs.next()) {
                String schemaName = schemasRs.getString("TABLE_SCHEM");
                // TODO: System schemas could be needed -> this needs a
                // setting.
                if (!connection.isSystemSchema(schemaName)) {
                    generateChunksForSchema(schemaName);
                }
            }
            schemasRs.close();
        } catch (SQLException e) {
            logger.error(e);
        } catch (Exception e) {
            logger.error(e);
        }
    }

    // Stop threads: post one empty chunk per extract channel as a
    // completion marker.
    for (int i = 0; i < extractChannels; i++) {
        logger.info("Posting job complete request " + i);
        try {
            chunks.put(new NumericChunk());
        } catch (InterruptedException e) {
            logger.warn("Interrupted while posting job complete request " + i, e);
        }
    }

    if (logger.isDebugEnabled())
        logger.debug(this.getName() + " done.");
}