[ajug-members] Deleting Duplicate Records using JAVA and JDBC!
Sfinch at manh.com
Wed Dec 13 09:35:26 EST 2006
I am not sure rowid is guaranteed to be monotonically increasing. If
you are using a sequence to fill in your PK, it would probably be better
(and more portable) to use it as the discriminator. Assuming
RESPONSE_ID is monotonically increasing, the first instance of each
RECORD_ID will always be kept.
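To illustrate the idea, here is a minimal sketch using Python's sqlite3 as a stand-in for Oracle/JDBC. The table and column names (RESPONSE_DATA, RESPONSE_ID, RECORD_ID) come from the thread; the sample data is made up. The delete keeps only the row with the lowest sequence-filled RESPONSE_ID for each RECORD_ID:

```python
import sqlite3

# Hypothetical schema based on the thread: RESPONSE_DATA has a
# sequence-filled PK (RESPONSE_ID) and a RECORD_ID that may be duplicated.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE RESPONSE_DATA (RESPONSE_ID INTEGER PRIMARY KEY, RECORD_ID INTEGER)"
)
conn.executemany(
    "INSERT INTO RESPONSE_DATA (RESPONSE_ID, RECORD_ID) VALUES (?, ?)",
    [(1, 100), (2, 100), (3, 200), (4, 200), (5, 300)],
)

# Use the sequence value, not rowid, as the discriminator: delete every
# row whose RESPONSE_ID is not the minimum for its RECORD_ID.
conn.execute("""
    DELETE FROM RESPONSE_DATA
     WHERE RESPONSE_ID > (SELECT MIN(m2.RESPONSE_ID)
                            FROM RESPONSE_DATA m2
                           WHERE m2.RECORD_ID = RESPONSE_DATA.RECORD_ID)
""")
conn.commit()

survivors = [r[0] for r in conn.execute(
    "SELECT RESPONSE_ID FROM RESPONSE_DATA ORDER BY RESPONSE_ID")]
print(survivors)  # the earliest RESPONSE_ID for each RECORD_ID survives
```

Because the subquery correlates on RECORD_ID, the first-inserted row for each RECORD_ID is the one that remains, which is the portability win over rowid.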
From: ajug-members-bounces at ajug.org
[mailto:ajug-members-bounces at ajug.org] On Behalf Of Sid Womac
Sent: Tuesday, December 12, 2006 8:39 PM
To: General AJUG membership forum (100-200 messages/month)
Subject: Re: [ajug-members] Deleting Duplicate Records using JAVA and JDBC!
hsaid at checkfree.com wrote:
> 3- Option is to use something like this query to delete multiple
> duplicate records:
> delete from RESPONSE_DATA m1
> where m1.rowid > (select min(m2.rowid) from RESPONSE_DATA m2
>                   where m1.RECORD_ID = m2.RECORD_ID)
Doing it on the database server side would be the most efficient.
(Given the mention of 'rowid' above, I assume that Oracle is the db
implementation). Databases are optimized to perform operations such as
these (subqueries, mass deletions, etc).
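As a rough sketch of that server-side approach: SQLite, like Oracle, exposes a per-row rowid, so the quoted delete runs almost verbatim there. The names follow the thread and the data is invented; sqlite3 simply stands in for the real database:

```python
import sqlite3

# Illustrative only: SQLite's implicit rowid plays the role of Oracle's
# rowid. Table/column names are from the thread; the rows are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE RESPONSE_DATA (RECORD_ID INTEGER, PAYLOAD TEXT)")
conn.executemany(
    "INSERT INTO RESPONSE_DATA VALUES (?, ?)",
    [(100, "a"), (100, "a-dup"), (200, "b"), (200, "b-dup")],
)

# One set-based statement executed where the data lives; no rows ever
# cross the network to a client.
conn.execute("""
    DELETE FROM RESPONSE_DATA
     WHERE rowid > (SELECT MIN(m2.rowid)
                      FROM RESPONSE_DATA m2
                     WHERE m2.RECORD_ID = RESPONSE_DATA.RECORD_ID)
""")
remaining = conn.execute("SELECT COUNT(*) FROM RESPONSE_DATA").fetchone()[0]
print(remaining)  # one row left per RECORD_ID
```

The whole operation is a single statement for the engine to optimize, which is exactly why it beats shipping 100+ million rows to a JDBC client and back.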
The downside with either Option 1 or 2, the programmatic approaches, is
that they must pull all of that data, all 100+ million records, across
the network. There is also, as you mention, the cost of all the db
transactions and the client-side memory usage. So the db server must
read the data into memory from disk and send it across the network; the
client loads it into memory, operates on it, and sends a subset back
across the network to the db, which does more work before finally
committing it back to disk (or the transaction logs, at least).
All of the above approaches would work, but I think the database-side
approach puts the least strain on resources.
You might consider adding a UNIQUE constraint on RECORD_ID in the
future, but I realize that every business case is unique, and there may
be a reason this is not feasible.
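For completeness, a small sketch of what that constraint buys you, again using sqlite3 as a stand-in and the thread's hypothetical names: duplicates are rejected at insert time, so no cleanup pass is ever needed.

```python
import sqlite3

# Hypothetical schema: RECORD_ID carries a UNIQUE constraint, so the
# database itself refuses a second row with the same RECORD_ID.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE RESPONSE_DATA ("
    "RESPONSE_ID INTEGER PRIMARY KEY, RECORD_ID INTEGER UNIQUE)"
)
conn.execute("INSERT INTO RESPONSE_DATA (RECORD_ID) VALUES (100)")

try:
    # Second insert with the same RECORD_ID violates the constraint.
    conn.execute("INSERT INTO RESPONSE_DATA (RECORD_ID) VALUES (100)")
    rejected = False
except sqlite3.IntegrityError as e:
    rejected = True
    print("duplicate rejected:", e)
```

In JDBC terms the equivalent would surface as a SQLException on the offending insert, which the application can catch or avoid up front.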
ajug-members mailing list
ajug-members at ajug.org
More information about the ajug-members mailing list