Migrating the Data

After you have generated and run DDL statements to create the Oracle schema objects for the migrated database, you can migrate (move) any existing data from the source database to the Oracle database. You have two options for data migration: online or offline.

Related Topics

Migrating Third-Party Databases

Transferring the Data Offline

Transferring the Data Offline

To transfer the data offline, you generate and use scripts to copy data from the source database to the destination database. During this process, you must first create data files from the source database, and then use those data files to populate the destination Oracle database.

Creating Data Files From Microsoft SQL Server or Sybase Adaptive Server

To create data files from a Microsoft SQL Server or Sybase Adaptive Server database:

  1. Copy the contents of the directory where SQL Developer generated the data unload scripts onto the computer where the source database is installed.

  2. Edit the BCP extract script to include the name of the source database server.

    • On Windows, edit the unload_script.bat script to modify each bcp line so that it supplies the user name (-U), password (-P), and server name (-S) for the source database.

    The following shows a line from a sample unload_script.bat script:

    bcp "AdventureWorks.dbo.AWBuildVersion" out "[AdventureWorks].[dbo].[AWBuildVersion].dat" -q -c -t "<EOFD>" -r "<EORD>" -U<Username> -P<Password> -S<ServerName>
    
  3. Run the BCP extract script.

    • On Windows, enter:

      prompt> unload_script.bat
      

    This script creates the data files in the current directory.

  4. Copy the data files and scripts, if necessary, to the target Oracle database system, or to a system that has access to the target Oracle database and has SQL*Loader (Oracle Client) installed.

Creating Data Files From Microsoft Access

To create data files from a Microsoft Access database, use the Exporter for Microsoft Access tool.


Note:

For information about how to create data files from a Microsoft Access database, see online help for the exporter tool.

Creating Data Files From MySQL

To create data files from a MySQL database:

  1. If necessary, copy the contents of the directory where SQL Developer generated the data unload scripts onto the system where the source database is installed, or onto a system that has access to the source database and has the mysqldump tool installed.

  2. Edit the unload_script script to include the correct host, user name, password, and destination directory for the data files.

    • On Windows, edit the unload_script.bat script.

    • On Linux or UNIX, edit the unload_script.sh script.

    The following shows a line from a sample unload_script.bat script:

    mysqldump -h localhost -u <USERNAME> -p<PASSWORD>  -T <DESTINATION_PATH> 
    --fields-terminated-by="<EOFD>" --fields-escaped-by="" 
    --lines-terminated-by="<EORD>" "CarrierDb" "CarrierPlanTb"
    

    Edit this line to include the correct values for USERNAME, PASSWORD, and DESTINATION_PATH. Do not include the angle brackets in the edited version of this file.

    In this command line, localhost indicates a loopback connection, which is required by the -T option. (See the mysqldump documentation for more information.)
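
    For illustration only, an edited version of the sample line might look like this, where the user name (carrier_admin), password (MyPassword), and destination path (C:\migration\data) are hypothetical values:

    mysqldump -h localhost -u carrier_admin -pMyPassword -T C:\migration\data 
    --fields-terminated-by="<EOFD>" --fields-escaped-by="" 
    --lines-terminated-by="<EORD>" "CarrierDb" "CarrierPlanTb"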

  3. Run the script.

    • On Windows, enter:

      prompt> unload_script.bat
      
    • On Linux or UNIX, enter:

      prompt> chmod 755 unload_script.sh
      prompt> sh ./unload_script.sh
      

    This script creates the data files in the current directory.

  4. Copy the data files and scripts, if necessary, to the target Oracle database system, or to a system that has access to the target Oracle database and has SQL*Loader (Oracle Client) installed.

Populating the Destination Database Using the Data Files

To populate the destination database using the data files, you run the data load scripts using SQL*Loader:

  1. Navigate to the directory where you created the data unload scripts.

  2. Edit the oracle_ctl.bat (Windows systems) or oracle_ctl.sh (Linux or UNIX systems) file to provide the appropriate user name and password strings. (A sample sqlldr invocation is shown after these steps.)

  3. Run the SQL*Loader script.

    • On Windows, enter:

      prompt> oracle_ctl.bat
      
    • On Linux or UNIX, enter:

      prompt> ./oracle_ctl.sh
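
    Each line of the generated script invokes SQL*Loader for one table. For illustration only, an invocation might look like the following, where the user name, password, and file names are hypothetical:

      sqlldr USERID=migration_user/password CONTROL=CarrierPlanTb.ctl LOG=CarrierPlanTb.log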
      

For Microsoft SQL Server and Sybase migrations, if you are inserting into BLOB fields with SQL*Loader, you will receive the following error:

SQL*Loader-309: No SQL string allowed as part of LARGEOBJECT field specification

To handle situations indicated by this error, use the following workaround.

Workaround

The workaround is to load the data (which is in hex format) into an additional CLOB column and then convert the CLOB to a BLOB through a PL/SQL procedure.

The only way to export binary data correctly through the Microsoft SQL Server or Sybase Adaptive Server BCP is to export it in hexadecimal (hex) format. To get the hex values into Oracle, you save them in a CLOB column (which holds text), and then convert the hex values to binary values and insert them into the BLOB column. However, the HEXTORAW function in Oracle converts a maximum of 2000 hex pairs, so you must write your own procedure that converts the hex data to binary piece by piece. In the following steps and examples, modify start.sql and finish.sql to reflect your environment.
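
For example, HEXTORAW converts a string of hex pairs into the equivalent binary (RAW) value; the following minimal query illustrates the conversion that the procedure below performs piece by piece:

SELECT HEXTORAW('48656C6C6F') FROM DUAL;  -- returns the 5 bytes encoding the text 'Hello'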

The following shows the code for two scripts, start.sql and finish.sql, that implement this workaround. Read the comments in the code, and modify any SQL statements as needed to reflect your environment and your needs.


Note:

Run BCP after you run start.sql and before you run finish.sql. Before you run BCP, change the relevant line in the .ctl file from:
<blob_column> CHAR(2000000) "HEXTORAW (:<blob_column>)"

to:

<blob_column>_CLOB CHAR(2000000)

-- START.SQL
-- Modify this for your environment.
 
-- This should be executed in the user schema in Oracle that contains the table.
-- DESCRIPTION:
-- ALTERS THE OFFENDING TABLE SO THAT THE DATA MOVE CAN BE EXECUTED
-- DISABLES TRIGGERS, INDEXES AND SEQUENCES ON THE OFFENDING TABLE
 
-- 1) Add an extra column to hold the hex string
alter table <tablename> add (<blob_column>_CLOB CLOB);
 
-- 2) Allow the BLOB column to accept NULLs
alter table <tablename> MODIFY <blob_column> NULL;
 
-- 3) Disable triggers, and drop the primary key and indexes on <tablename>
alter trigger <triggername> disable;
 
alter table <tablename> drop primary key cascade;
 
drop index <indexname>;
 
-- 4) Move the LOB storage for both columns into a LOB tablespace
alter table <tablename> move lob (<blob_column>) store as (tablespace lob_tablespace);
 
alter table <tablename> move lob (<blob_column>_clob) store as (tablespace lob_tablespace);
 
COMMIT;
 
-- END OF FILE
 
 
-- FINISH.SQL
-- Modify this for your environment.
 
-- This should be executed in the table schema in Oracle.
-- DESCRIPTION:
-- MOVES THE DATA FROM CLOB TO BLOB
-- MODIFIES THE TABLE BACK TO ITS ORIGINAL SPEC (without a clob)
-- THEN ENABLES THE SEQUENCES, TRIGGERS AND INDEXES AGAIN
 
-- Currently we have the hex values saved as 
-- text in the <blob_column>_CLOB column
-- And we have NULL in all rows for the <blob_column> column.
-- We have to get BLOB locators for each row in the BLOB column
 
-- put empty blobs in the blob column
UPDATE <tablename> SET <blob_column>=EMPTY_BLOB();
 
COMMIT;
 
-- create the following procedure in your table schema
CREATE OR REPLACE PROCEDURE CLOBTOBLOB
AS
inputLength NUMBER; -- size of the input CLOB
offSet NUMBER := 1;
pieceMaxSize NUMBER := 2000; -- the maximum size of each piece
piece VARCHAR2(2000); -- these pieces will make up the entire CLOB
currentPlace NUMBER := 1; -- current read position in the CLOB
blobLoc BLOB; -- BLOB locator in the table
clobLoc CLOB; -- CLOB locator; this holds the hex value loaded from the .dat file
 
-- THIS HAS TO BE CHANGED FOR SPECIFIC CUSTOMER TABLE 
-- AND COLUMN NAMES
CURSOR cur IS SELECT <blob_column>_clob clob_column , <blob_column> blob_column FROM /*table*/<tablename> FOR UPDATE;
 
cur_rec cur%ROWTYPE;
 
BEGIN
 
OPEN cur;
FETCH cur INTO cur_rec;
 
WHILE cur%FOUND
LOOP
-- retrieve the clobLoc and blobLoc
clobLoc := cur_rec.clob_column;
blobLoc := cur_rec.blob_column;
 
currentPlace := 1; -- reset for each row
-- find the length of the CLOB
inputLength := DBMS_LOB.getLength(clobLoc);
 
-- loop through each piece
LOOP
-- get the next piece of the CLOB
piece := DBMS_LOB.subStr(clobLoc,pieceMaxSize,currentPlace);
 
-- convert the hex piece to binary and append it to the BLOB
-- (each pair of hex characters becomes one byte)
DBMS_LOB.WRITEAPPEND(blobLoc, LENGTH(piece)/2, HEXTORAW(piece));
 
currentPlace := currentPlace + pieceMaxSize ;
 
EXIT WHEN inputLength < currentPlace;
END LOOP;
 
FETCH cur INTO cur_rec;
END LOOP;
 
END CLOBtoBLOB;
/
 
-- now run the procedure
-- It will update the blob column with the correct binary representation
-- of the clob column
EXEC CLOBtoBLOB;
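 
-- (Optional sanity check; this query is not part of the generated scripts.
-- Each pair of hex characters becomes one byte, so every converted BLOB
-- should be exactly half the length of its hex CLOB; this should return 0.)
SELECT COUNT(*) FROM <tablename>
WHERE DBMS_LOB.getLength(<blob_column>) * 2 <> DBMS_LOB.getLength(<blob_column>_clob);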
 
-- drop the extra clob column
alter table <tablename> drop column <blob_column>_clob;
 
-- reapply the NOT NULL constraint that was removed during the data load
alter table <tablename> MODIFY <blob_column> NOT NULL;
 
-- Now re enable the triggers, indexes and primary keys
alter trigger <triggername> enable;
 
ALTER TABLE <tablename> ADD ( CONSTRAINT <pkname> PRIMARY KEY ( <column>) ) ;
 
CREATE INDEX <index_name> ON <tablename>( <column> );
 
COMMIT;
 
-- END OF FILE