TechTip: MERGE DB2 Data to SQL Server 2008

Data replication chores are a snap with the new MERGE T-SQL statement.

 

Released in August 2008, SQL Server 2008 is the latest incarnation of Microsoft's flagship database platform. While it offers plenty of cool new features for developers, one deserves special mention for DB2 developers: the new Transact-SQL (T-SQL) MERGE statement. MERGE will be useful to any IBM i (aka AS/400, iSeries, System i) shop that needs to replicate DB2 for i (formerly known as DB2 for i5/OS and DB2/400) data to SQL Server. This includes shops that use SQL Server for reporting, data warehousing, and application data sharing.

What Is MERGE?

The MERGE statement is a single SQL statement that synchronizes data between a "source" data source and a "target" data source. In this case, we'll consider a DB2 for i database table as the source and a SQL Server database table as the target.

Traditionally, when writing a SQL Server routine to synchronize data between these two sources, a developer has a few possible approaches:

  1. Clear the target table and reload it entirely from the source table.
  2. For source tables that are "append only," simply copy the latest rows to the destination table.
  3. Write a combination of INSERT, UPDATE, and DELETE statements that keep the data between the two tables up to date.

Option 1 works well for small tables, such as code tables, but not so well for large ones. Option 2 works well for tables in which rows are only ever inserted, never updated (such as an inventory transaction history table). But what about larger tables that are not append only? Reloading an entire large table is often time-consuming for both servers, so option 3 is typically used to synchronize a large amount of data between disparate data sources.

 

Let's look at an example of this last scenario that uses SQL Server's linked server facility to communicate with DB2. If you're not up to speed on linked servers and how to define them, see the references at the end of this tip. For this demonstration, I defined a linked server using the ODBC driver that comes with System i Access V6R1 (formerly known as iSeries Access) at service pack level SI31390, connecting to a V6R1 host. For those who prefer the IBM OLE DB providers over ODBC, I attempted to use the IBMDASQL provider but was not successful in getting it to work at this release level.
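For reference, a linked server like the DB2_ODBC one used below can be created with the sp_addlinkedserver and sp_addlinkedsrvlogin system procedures. The following is only a minimal sketch; the DSN name and credentials (DB2ODBCDSN, MYUSER, MYPASS) are hypothetical placeholders for your own environment:

/* Minimal sketch: linked server over the System i Access ODBC driver */
EXEC master.dbo.sp_addlinkedserver
     @server     = N'DB2_ODBC',    -- name used in OpenQuery() below
     @srvproduct = N'DB2 for i',   -- descriptive only
     @provider   = N'MSDASQL',     -- OLE DB provider for ODBC
     @datasrc    = N'DB2ODBCDSN'   -- hypothetical ODBC system DSN

/* Map local logins to an IBM i user profile (placeholder credentials) */
EXEC master.dbo.sp_addlinkedsrvlogin
     @rmtsrvname  = N'DB2_ODBC',
     @useself     = 'false',
     @rmtuser     = N'MYUSER',
     @rmtpassword = N'MYPASS'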

 

For simplicity, say we have a table, QCUSTCDT, with identical columns on both DB2 for i and SQL Server. The primary key on both tables is CUSNUM. Further, we simply need to replicate all changes from DB2 to SQL Server and not the other way around. Using a linked server called DB2_ODBC (and utilizing the SQL naming convention in the ODBC setup), our T-SQL code would look like the following:

 

/* Update existing rows from DB2 */
Update Target
   Set LSTNAM = Source.LSTNAM,
       INIT   = Source.INIT,
       STREET = Source.STREET,
       CITY   = Source.CITY,
       STATE  = Source.STATE,
       ZIPCOD = Source.ZIPCOD,
       CDTLMT = Source.CDTLMT,
       CHGCOD = Source.CHGCOD,
       BALDUE = Source.BALDUE,
       CDTDUE = Source.CDTDUE
  From QCUSTCDT Target
  Join OpenQuery(DB2_ODBC,'Select * From QIWS.QCUSTCDT') Source
    On Target.CusNum=Source.CusNum

/* Insert new rows from DB2 */
Insert Into QCUSTCDT
Select *
  From OpenQuery(DB2_ODBC,'Select * From QIWS.QCUSTCDT') Source
 Where Not Exists
     (Select *
        From QCUSTCDT Target
       Where Target.CUSNUM=Source.CusNum)

/* Delete rows that are not in DB2 */
Delete From QCUSTCDT
 Where Not Exists
     (Select *
        From OpenQuery(DB2_ODBC,'Select * From QIWS.QCUSTCDT') Source
       Where QCUSTCDT.CUSNUM=Source.CusNum)

 

If this were a large table with additional columns recording when each row was added and last updated, we'd probably want to add criteria to the INSERT and UPDATE statements based on those values to limit the number of rows being processed.
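For example, assuming a hypothetical UPDTS "last updated" timestamp column exists on both tables, the UPDATE could be restricted to recently changed rows by filtering inside the pass-through query (the same predicate would work for the INSERT):

/* Sketch: limit the update to rows changed in the last day.
   UPDTS is a hypothetical timestamp column. */
Update Target
   Set LSTNAM = Source.LSTNAM  /* ...remaining columns as before... */
  From QCUSTCDT Target
  Join OpenQuery(DB2_ODBC,
       'Select * From QIWS.QCUSTCDT
         Where UPDTS > Current Timestamp - 1 Day') Source
    On Target.CusNum=Source.CusNum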

 

MERGE is essentially a statement that allows a developer to specify a source (which can be a query, table, or view) and a target (which can be a table or updateable view) and define any combination of INSERT, UPDATE, and DELETE operations, all rolled into one statement. Below is an example MERGE statement that replaces the three statements shown above:

 

MERGE dbo.QCUSTCDT AS Customer_Target
USING OpenQuery(DB2_ODBC,
      'Select * From QIWS.QCUSTCDT') AS Customer_Source
ON (Customer_Target.CusNum = Customer_Source.CusNum)
WHEN MATCHED THEN
UPDATE
   SET Customer_Target.LstNam=Customer_Source.LstNam,
       Customer_Target.Init=Customer_Source.Init,
       Customer_Target.Street=Customer_Source.Street,
       Customer_Target.City=Customer_Source.City,
       Customer_Target.State=Customer_Source.State,
       Customer_Target.ZipCod=Customer_Source.ZipCod,
       Customer_Target.CdtLmt=Customer_Source.CdtLmt,
       Customer_Target.ChgCod=Customer_Source.ChgCod,
       Customer_Target.BalDue=Customer_Source.BalDue,
       Customer_Target.CdtDue=Customer_Source.CdtDue
WHEN NOT MATCHED BY TARGET THEN
INSERT (CUSNUM,LSTNAM,INIT,STREET,CITY,STATE,ZIPCOD,
        CDTLMT,CHGCOD,BALDUE,CDTDUE)
VALUES (Customer_Source.CUSNUM,Customer_Source.LSTNAM,
        Customer_Source.INIT,
        Customer_Source.STREET,Customer_Source.CITY,
        Customer_Source.STATE,Customer_Source.ZIPCOD,
        Customer_Source.CDTLMT,Customer_Source.CHGCOD,
        Customer_Source.BALDUE,Customer_Source.CDTDUE)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE
OUTPUT $action,   /* Reserved keyword */
       inserted.CusNum AS SourceCusNum,
       deleted.CusNum AS TargetCusNum
;      /* MERGE needs a semicolon terminator! */

Let's go over the MERGE statement's clauses:

  • MERGE specifies the name of a table or an updateable view as the target of the merge operation. Unfortunately, remote data sources are not allowed here.

  • USING specifies the data source for the merge: a table, a view, or a valid T-SQL query. In this case, the OpenQuery table function is used to retrieve data from DB2 for i.

  • ON is the search condition that correlates the data in the source and target tables. Generally, this should specify the column name(s) that uniquely identify rows in both data sources. The documentation warns against adding additional filtering criteria to this clause; save that task for the WHEN clauses!

  • WHEN MATCHED THEN is an optional clause that tells MERGE how to handle a row in the target table that matches the current row in the source table. The options here are to issue an abbreviated UPDATE or DELETE against the target.

  • WHEN NOT MATCHED BY TARGET THEN is an optional clause that tells MERGE how to handle a row in the source that the target does not have. Generally, this is when an INSERT is done against the target. Note the abbreviated SQL syntax used for the INSERT in the example.

  • WHEN NOT MATCHED BY SOURCE THEN is an optional clause that tells MERGE how to handle a row in the target that is not in the source. Usually, if anything, a DELETE against the target is issued at this point. Note that a search condition may be added to any of the WHEN clauses. Also, the abbreviated INSERT, UPDATE, and DELETE statements omit the table or view name (already specified in the MERGE clause) and omit criteria (handled in the ON and WHEN clauses).

  • OUTPUT is an optional clause that is useful for debugging or auditing. When OUTPUT is specified, each row modified by MERGE is added to a result set returned by the statement. As with T-SQL triggers, the "inserted" and "deleted" tables are special temporary tables that track data from INSERT and UPDATE operations and from UPDATE and DELETE operations, respectively. "$action" is a system function that returns INSERT, UPDATE, or DELETE, depending on the action taken for the row. Further, OUTPUT can direct merge results to a table for auditing purposes, as sketched below.
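For instance, OUTPUT's INTO form can log every action MERGE takes. Here is a minimal sketch; the audit table dbo.QCUSTCDT_Audit is hypothetical, and its CusNum type should match CUSNUM on the real table:

/* Hypothetical audit table for MERGE actions */
CREATE TABLE dbo.QCUSTCDT_Audit
(MergeAction NVARCHAR(10),
 CusNum      NUMERIC(6,0),
 ChangedAt   DATETIME DEFAULT GETDATE());

/* Replace the plain OUTPUT clause in the MERGE above with: */
OUTPUT $action,
       COALESCE(inserted.CusNum, deleted.CusNum)
  INTO dbo.QCUSTCDT_Audit (MergeAction, CusNum);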

 

MERGE is extremely versatile in that the source columns on the INSERT and UPDATE statements can contain expressions. Also, additional criteria can be specified before doing any of the data modification operations. For example, if a timestamp column is available, we can require that a target row is over one year old before deleting it. Or, before updating a row, SQL Server can be made to check for a change in the data before using resources to perform an update:

 

WHEN MATCHED AND
    (Customer_Target.LstNam<>Customer_Source.LstNam
  Or Customer_Target.Street<>Customer_Source.Street
  Or Customer_Target.ZipCod<>Customer_Source.ZipCod) THEN

 

In this example, a matched row is updated only if the last name, street, or ZIP code has changed.
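One caveat worth noting: if any of the compared columns allow NULLs, a plain <> comparison evaluates to UNKNOWN when either side is NULL, and the change is silently skipped. The QIWS sample columns don't allow NULLs, but for nullable columns a sketch like this catches transitions to and from NULL:

WHEN MATCHED AND
    (Customer_Target.LstNam<>Customer_Source.LstNam
  Or (Customer_Target.LstNam Is Null And Customer_Source.LstNam Is Not Null)
  Or (Customer_Target.LstNam Is Not Null And Customer_Source.LstNam Is Null)
     /* ...repeat the pattern for the other columns... */
    ) THEN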

 

Unfortunately, when working with linked servers, MERGE is unidirectional. That is, it can merge data only into a local SQL Server table; merging from a SQL Server source into a linked server target is not allowed. Therefore, MERGE can move data in only one direction: from DB2 to SQL Server.
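If changes do need to flow back to DB2, you're back to individual statements. One possible approach, shown here as a sketch rather than as part of the tip's tested code, is to insert through an updateable pass-through query (this assumes the pass-through query is updateable and the linked server login has sufficient authority on the file):

/* Sketch: push new SQL Server rows back to DB2 */
Insert Into OpenQuery(DB2_ODBC,
       'Select CUSNUM,LSTNAM,INIT,STREET,CITY,STATE,ZIPCOD,
               CDTLMT,CHGCOD,BALDUE,CDTDUE
          From QIWS.QCUSTCDT')
Select CUSNUM,LSTNAM,INIT,STREET,CITY,STATE,ZIPCOD,
       CDTLMT,CHGCOD,BALDUE,CDTDUE
  From dbo.QCUSTCDT SqlRows
 Where Not Exists
     (Select * From OpenQuery(DB2_ODBC,
         'Select CUSNUM From QIWS.QCUSTCDT') Db2Rows
       Where Db2Rows.CUSNUM=SqlRows.CUSNUM)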

Performance

To get an idea of how MERGE performs against individually written statements, I decided to export data from the SQL Server AdventureWorks sample database to our trusty AS/400. I picked a single table called SalesOrderDetail that contains 121,317 rows. The table has an 8-byte composite primary key and contains about 120 bytes per row. To give the MERGE statement some work to do, I did the following:

  • added about 30K bogus rows that it would have to delete
  • changed data in about 30K rows that it would have to update
  • deleted about 30K rows that it would have to insert
  • left about 30K rows alone that it should ignore

 

All in all, MERGE had about 90K rows to process for this test. Over three trials, MERGE averaged about 39 seconds (although neither the test workstation nor the AS/400 has impressive specs!). For comparison, I ran three trials using individual INSERT, UPDATE, and DELETE statements, which averaged 49 seconds. So in this case, MERGE offered about a 20 percent performance improvement when accessing a remote data source.

 

The T-SQL statements used in the trial are shown at the end of this tip.

MERGE Ahead!


MERGE statements should prove easier to maintain and debug. If a column is added to the primary key of your merge tables, for instance, you only need to change the correlation once, in the ON clause, instead of changing it in multiple statements. Also, since MERGE is T-SQL-based, you don't have to take on the complexity of adding an Integration Services package just to move the data. Finally, MERGE offers better performance than individual distributed query SQL statements. When moving data from DB2 to SQL Server, MERGE is an efficient, easy way to go.

References

Trial T-SQL Statements

-- T-SQL script comparing MERGE vs. distributed query statement performance
--
-- Step 1 - Mangle the data to give MERGE/statements
--          something to do.
--          (assumes SQL Server table is sync'd with
--          AS/400 table to begin with)
--
-- Delete every fourth row to give MERGE
-- something to insert
--
Delete
  From Sales.SalesOrderDetail
 Where (SalesOrderDetailID % 4)=0
GO
--
-- Update a few columns on every 3rd row
-- to force MERGE to update existing data
--
Update Sales.SalesOrderDetail
   Set OrderQty=OrderQty+5,
       CarrierTrackingNumber=CarrierTrackingNumber+''
 Where (SalesOrderDetailID % 3)=0
GO
--
-- Insert some bad rows that will have to be
-- deleted by MERGE
--
SET IDENTITY_INSERT Sales.SalesOrderDetail ON
GO
INSERT INTO Sales.SalesOrderDetail
(SalesOrderID,SalesOrderDetailId,CarrierTrackingNumber,
 OrderQty,ProductID,SpecialOfferID,
 UnitPrice,UnitPriceDiscount,rowguid,
 ModifiedDate)
Select -SalesOrderID,SalesOrderDetailId,CarrierTrackingNumber,
       OrderQty,ProductID,SpecialOfferID,
       UnitPrice,UnitPriceDiscount,rowguid,
       ModifiedDate
  From Sales.SalesOrderDetail
 Where SalesOrderDetailID % 2=0
GO
SET IDENTITY_INSERT Sales.SalesOrderDetail OFF
GO

--
-- Step 2 - OK MERGE, do your stuff!
--
-- Change the updated rows, delete the new bad rows,
-- and re-insert the deleted rows
--
SET IDENTITY_INSERT Sales.SalesOrderDetail ON
GO
MERGE Sales.SalesOrderDetail AS Target
USING OpenQuery(DB2_ODBC,
      'Select * From AdvWorks.SalesOrderDetail') AS Source
ON (Target.SalesOrderId=Source.SalesOrderId
And Target.SalesOrderDetailId=Source.SalesOrderDetailId)
WHEN MATCHED AND
      (Target.CarrierTrackingNumber<>Source.CarrierTrackingNumber
    Or Target.OrderQty<>Source.OrderQty
    Or Target.ProductID<>Source.ProductID
    Or Target.SpecialOfferID<>Source.SpecialOfferID
    Or Target.UnitPrice<>Source.UnitPrice
    Or Target.UnitPriceDiscount<>Source.UnitPriceDiscount) THEN
UPDATE
   SET CarrierTrackingNumber = Source.CarrierTrackingNumber,
       OrderQty = Source.OrderQty,
       ProductID = Source.ProductID,
       SpecialOfferID = Source.SpecialOfferID,
       UnitPrice = Source.UnitPrice,
       UnitPriceDiscount = Source.UnitPriceDiscount,
       rowguid = Source.rowguid,
       ModifiedDate = Source.ModifiedDate
WHEN NOT MATCHED BY TARGET THEN
INSERT
(SalesOrderID,SalesOrderDetailId,CarrierTrackingNumber,
 OrderQty,ProductID,SpecialOfferID,
 UnitPrice,UnitPriceDiscount,rowguid,
 ModifiedDate)
VALUES
(Source.SalesOrderID,Source.SalesOrderDetailId,
 Source.CarrierTrackingNumber,Source.OrderQty,
 Source.ProductID,Source.SpecialOfferID,
 Source.UnitPrice,Source.UnitPriceDiscount,
 Source.rowguid,Source.ModifiedDate)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE
;
SET IDENTITY_INSERT Sales.SalesOrderDetail OFF
GO
-- Trial 1 - 41 seconds
-- Trial 2 - 36 seconds
-- Trial 3 - 39 seconds
-- Avg ~ 39 seconds

--
-- Step 3 - Use individual statements to
--          synchronize data
--
-- (The data in the SQL Server table needs to be
--  reset again - run Step 1)
--
-- Using individual statements, change the updated rows,
-- delete the new bad rows, and insert the deleted rows
--
UPDATE Target
   SET CarrierTrackingNumber = Source.CarrierTrackingNumber,
       OrderQty = Source.OrderQty,
       ProductID = Source.ProductID,
       SpecialOfferID = Source.SpecialOfferID,
       UnitPrice = Source.UnitPrice,
       UnitPriceDiscount = Source.UnitPriceDiscount,
       rowguid = Source.rowguid,
       ModifiedDate = Source.ModifiedDate
  From Sales.SalesOrderDetail Target
  Join OpenQuery(DB2_ODBC,'Select * From ADVWORKS.SalesOrderDetail') Source
    On Target.SalesOrderDetailID=Source.SalesOrderDetailID
   And Target.SalesOrderID=Source.SalesOrderID
 Where (Target.CarrierTrackingNumber<>Source.CarrierTrackingNumber
    Or Target.OrderQty<>Source.OrderQty
    Or Target.ProductID<>Source.ProductID
    Or Target.SpecialOfferID<>Source.SpecialOfferID
    Or Target.UnitPrice<>Source.UnitPrice
    Or Target.UnitPriceDiscount<>Source.UnitPriceDiscount)
SET IDENTITY_INSERT Sales.SalesOrderDetail ON
GO
INSERT INTO Sales.SalesOrderDetail
(SalesOrderID,SalesOrderDetailId,CarrierTrackingNumber,
 OrderQty,ProductID,SpecialOfferID,
 UnitPrice,UnitPriceDiscount,rowguid,
 ModifiedDate)
Select SalesOrderID,SalesOrderDetailId,CarrierTrackingNumber,
       OrderQty,ProductID,SpecialOfferID,
       UnitPrice,UnitPriceDiscount,rowguid,
       ModifiedDate
  From OpenQuery(DB2_ODBC,'Select * From ADVWORKS.SalesOrderDetail ORDER BY SalesOrderDetailId') Source
 Where Not Exists
     (Select *
        From Sales.SalesOrderDetail Target
       Where Target.SalesOrderDetailID=Source.SalesOrderDetailID
         And Target.SalesOrderID=Source.SalesOrderID)
  Order By SalesOrderDetailId
GO
SET IDENTITY_INSERT Sales.SalesOrderDetail OFF
GO
Delete From Sales.SalesOrderDetail
 Where Not Exists
     (Select *
        From OpenQuery(DB2_ODBC,'Select SalesOrderId,SalesOrderDetailId
                                 From ADVWORKS.SalesOrderDetail') Source
       Where Sales.SalesOrderDetail.SalesOrderDetailID=Source.SalesOrderDetailID
         And Sales.SalesOrderDetail.SalesOrderID=Source.SalesOrderID)
-- Trial 1 - 47 seconds
-- Trial 2 - 51 seconds
-- Trial 3 - 49 seconds
-- Avg - 49 seconds

 

 
