-
Mar 17th, 2015, 06:18 AM
#1
Thread Starter
PowerPoster
speed up to delete big numbers of records...
Code:
Dim strConnectionString As String
Dim objConn As ADODB.Connection
Dim objCommand As ADODB.Command
strConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;" & _
                      "Data Source=e:\My Documents\db1.mdb;" & _
                      "Jet OLEDB:Engine Type=4"
Set objConn = New ADODB.Connection
objConn.Open strConnectionString
Set objCommand = New ADODB.Command
objCommand.ActiveConnection = objConn
'here begintrans<<<<<<<<<<<<<<<
' Assign to ADO Command object
objCommand.CommandText = "deleteTable10"
objCommand.CommandType = adCmdStoredProc
objCommand.Execute
'here commitrans<<<<<<<<<<<<<
objConn.Close
Set objCommand = Nothing
Set objConn = Nothing
The query "deleteTable10" is a delete query.
The number of records to delete is approx 56.xxx.
If I put transaction commands before and after the Execute call, will that speed up deleting this large number of records?
-
Mar 17th, 2015, 06:44 AM
#2
Re: speed up to delete big numbers of records...
The delete is going to take as long as it needs to... there's not much you can do to speed it up. There are FKeys to deal with and indexes to update. I don't think a transaction is going to help; in fact it may take longer, because first it would need to set the transaction point, mark the records as deleted, follow along any cascading deletes (also marking those as deleted), and then, when you commit, actually do the delete. Meanwhile you're locking those tables from other users. W/o knowing what your delete sproc does specifically, it's hard to say much more.
-tg
-
Mar 17th, 2015, 06:52 AM
#3
Re: speed up to delete big numbers of records...
I agree with TG. There's no good reason why a transaction would speed things up. They actually slow things down but with good reason.
The purpose of a transaction is not performance, it's consistency. It's about making sure that all the operations it contains are completed or rolled back as one unit. It prevents a job from being half-done. If you need that assurance of consistency then you should use a transaction. If you don't then you shouldn't.
The best argument against democracy is a five minute conversation with the average voter - Winston Churchill
Hadoop actually sounds more like the way they greet each other in Yorkshire - Inferrd
-
Mar 17th, 2015, 06:57 AM
#4
Thread Starter
PowerPoster
Re: speed up to delete big numbers of records...
Originally Posted by FunkyDexter
I agree with TG. There's no good reason why a transaction would speed things up. They actually slow things down but with good reason.
The purpose of a transaction is not performance, it's consistency. It's about making sure that all the operations it contains are completed or rolled back as one unit. It prevents a job from being half-done. If you need that assurance of consistency then you should use a transaction. If you don't then you shouldn't.
..."should use a transaction" — can you give an example for Access and ADO?
Tks.
-
Mar 17th, 2015, 07:04 AM
#5
Re: speed up to delete big numbers of records...
"should use a transaction", example for Access and ADO?
Tks.
If you need that assurance of consistency then you should use a transaction. If you don't then you shouldn't.
It's amazing how people can just choose the bit they want to hear!!
Transactions WON'T speed your code up!!!!
That is what people are trying to tell you.
Please Mark your Thread "Resolved", if the query is solved & Rate those who have helped you
-
Mar 17th, 2015, 07:07 AM
#6
Thread Starter
PowerPoster
Re: speed up to delete big numbers of records...
Originally Posted by NeedSomeAnswers
It's amazing how people can just choose the bit they want to hear!!
Transactions WON'T speed your code up!!!!
That is what people are trying to tell you.
ahhhhhhhhhhhh...
ok, tks!
-
Mar 17th, 2015, 08:06 AM
#7
Re: speed up to delete big numbers of records...
Transactions aren't for speeding up code. They actually cause quite the opposite. Transactions should only be used when there are a number of operations that must take place and it's an all-or-nothing situation.
Consider a bank transaction. There are two operations that take place: 1) debit the source account; 2) credit the target account. That's something that should be done in a transaction. If I can't credit the target account, I'd better not be debiting the source account; otherwise someone could lose money. It's an all-or-nothing situation. Both operations must take place, or none of them does.
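As a rough sketch of what that looks like with ADO from VB6 (the Accounts table, column names, and amounts here are hypothetical; objConn is assumed to be an open ADODB.Connection):

```vb
' Sketch only: an ADO transaction making two updates all-or-nothing.
On Error GoTo TransferFailed
objConn.BeginTrans
objConn.Execute "UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1"
objConn.Execute "UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2"
objConn.CommitTrans    ' both the debit and the credit are applied together...
Exit Sub

TransferFailed:
objConn.RollbackTrans  ' ...or neither is applied
```

Note that this buys consistency, not speed: the same two updates run without the transaction would be no slower.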
-tg
-
Mar 17th, 2015, 11:41 AM
#8
Re: speed up to delete big numbers of records...
Does this database normally have multiple connections open on it?
If it doesn't, or even if you can do "large-scale delete" maintenance on a controlled or scheduled basis... Opening exclusively may make a big difference in performance.
Add "Mode=Share Exclusive" as a phrase in your connection string.
Ideally you would compact afterward, either using JRO or the JetComp.exe utility.
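As an untested sketch, the two suggestions above might look like this in VB6 (the file paths are hypothetical):

```vb
' Open the MDB exclusively for bulk maintenance.
Dim cn As ADODB.Connection
Set cn = New ADODB.Connection
cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
        "Data Source=e:\My Documents\db1.mdb;" & _
        "Mode=Share Exclusive"
' ... run the big delete here ...
cn.Close

' Compact afterward with JRO (late-bound to avoid adding a reference).
Dim je As Object
Set je = CreateObject("JRO.JetEngine")
je.CompactDatabase _
    "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=e:\My Documents\db1.mdb", _
    "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=e:\My Documents\db1_compact.mdb"
```

CompactDatabase writes to a new file, so afterward you'd delete the original and rename the compacted copy into its place.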
-
Mar 18th, 2015, 02:45 AM
#9
Thread Starter
PowerPoster
Re: speed up to delete big numbers of records...
Originally Posted by dilettante
Does this database normally have multiple connections open on it?
If it doesn't, or even if you can do "large-scale delete" maintenance on a controlled or scheduled basis... Opening exclusively may make a big difference in performance.
Add "Mode=Share Exclusive" as a phrase in your connection string.
Ideally you would compact afterward, either using JRO or the JetComp.exe utility.
Tks for suggestion.
Thanks for the suggestion. In my case I actually use Access with an ADO connection, but can Mode=Share Exclusive also be used in an ADO connection to SQL Server, or not?
-
Mar 18th, 2015, 05:37 AM
#10
Re: speed up to delete big numbers of records...
Originally Posted by luca90
Code:
Dim strConnectionString As String
Dim objConn As ADODB.Connection
Dim objCommand As ADODB.Command
strConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;" & _
                      "Data Source=e:\My Documents\db1.mdb;" & _
                      "Jet OLEDB:Engine Type=4"
Set objConn = New ADODB.Connection
objConn.Open strConnectionString
Set objCommand = New ADODB.Command
objCommand.ActiveConnection = objConn
'here begintrans<<<<<<<<<<<<<<<
' Assign to ADO Command object
objCommand.CommandText = "deleteTable10"
objCommand.CommandType = adCmdStoredProc
objCommand.Execute
'here commitrans<<<<<<<<<<<<<
objConn.Close
Set objCommand = Nothing
Set objConn = Nothing
The query "deleteTable10" is a delete query.
The number of records to delete is approx 56.xxx.
If I put transaction commands before and after the Execute call, will that speed up deleting this large number of records?
I have to admit I can't tell from your post what kind of database you are using or what "deleteTable10" is doing, but in MS SQL, TRUNCATE is much faster than a delete. The logging isn't as extensive.
https://msdn.microsoft.com/en-us/library/ms177570.aspx
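To illustrate the difference being suggested (the table name is hypothetical):

```sql
-- DELETE removes rows one by one, logs each row, and fires DELETE triggers.
DELETE FROM Table10;

-- TRUNCATE deallocates the table's data pages; only the page
-- deallocations are logged, and no triggers fire.
TRUNCATE TABLE Table10;
```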
Please remember next time...elections matter!
-
Mar 18th, 2015, 06:33 AM
#11
Re: speed up to delete big numbers of records...
As a thought, what does the actual sql delete statement look like? Does it have a where clause in it? If so perhaps you could look at using indexes to speed this up.
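For instance, assuming (hypothetically) that the sproc deletes on a Status column, an index supporting that WHERE clause might look like:

```sql
-- A supporting index lets the engine seek to the doomed rows
-- instead of scanning the whole table to find them.
CREATE INDEX IX_Table10_Status ON Table10 (Status);

DELETE FROM Table10 WHERE Status = 'Archived';
```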
edit> I'd be a little leery of issuing a TRUNCATE from code. Due to the lack of logging it won't fire any triggers. Even if there are no triggers on the table now, they could be introduced in the future, and the truncate will bypass them and lead to some unexpected results. I always felt that TRUNCATE was a DBA's tool rather than a developer's one.
Last edited by FunkyDexter; Mar 18th, 2015 at 06:38 AM.
-
Mar 18th, 2015, 07:29 AM
#12
Re: speed up to delete big numbers of records...
Originally Posted by FunkyDexter
As a thought, what does the actual sql delete statement look like? Does it have a where clause in it? If so perhaps you could look at using indexes to speed this up.
edit> I'd be a little leery of issuing a TRUNCATE from code. Due to the lack of logging it won't fire any triggers. Even if there are no triggers on the table now, they could be introduced in the future, and the truncate will bypass them and lead to some unexpected results. I always felt that TRUNCATE was a DBA's tool rather than a developer's one.
It is not correct to say there is no logging using truncate. You'll notice in my post I said "The logging isn't as extensive". But I agree about the trigger. However, I stand by my suggestion, with the caveat to consider whether a trigger exists (and I guess worrying that someday someone might want to add a trigger could be a consideration). TRUNCATE is lightning fast compared to a delete, and logging is at a minimum (if you wrapped it in a transaction you would have logging). As with all things in programming, it depends on what you are trying to do.
-
Mar 18th, 2015, 08:05 AM
#13
Thread Starter
PowerPoster
Re: speed up to delete big numbers of records...
Originally Posted by TysonLPrice
It is not correct to say there is no logging using truncate. You'll notice in my post I said "The logging isn't as extensive". But I agree about the trigger. However, I stand by my suggestion, with the caveat to consider whether a trigger exists (and I guess worrying that someday someone might want to add a trigger could be a consideration). TRUNCATE is lightning fast compared to a delete, and logging is at a minimum (if you wrapped it in a transaction you would have logging). As with all things in programming, it depends on what you are trying to do.
OK, I know the TRUNCATE statement.
I don't use it because I don't need to delete all the records in the table!
I need to delete a lot of records with a WHERE clause.
-
Mar 18th, 2015, 08:09 AM
#14
Re: speed up to delete big numbers of records...
Originally Posted by luca90
OK, I know the TRUNCATE statement.
I don't use it because I don't need to delete all the records in the table!
I need to delete a lot of records with a WHERE clause.
Copy off what you want to save, truncate the table, and reload. Even that might be faster than what you are doing now. Otherwise, it was just a suggestion.
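A rough T-SQL sketch of that approach (names are hypothetical; identity columns and constraints would need extra care, e.g. SET IDENTITY_INSERT on the reload):

```sql
-- 1) Copy the rows you want to keep into a temp table.
SELECT * INTO #Keep FROM Table10 WHERE Status <> 'Archived';

-- 2) Empty the table quickly.
TRUNCATE TABLE Table10;

-- 3) Reload the survivors and clean up.
INSERT INTO Table10 SELECT * FROM #Keep;
DROP TABLE #Keep;
```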
-
Mar 18th, 2015, 08:57 AM
#15
Re: speed up to delete big numbers of records...
Originally Posted by TysonLPrice
Copy off what you want to save, truncate the table, and reload. Even that might be faster than what you are doing now. Otherwise, it was just a suggestion.
But we'll never know, because we don't know what the delete "Sproc" does... If he's only deleting a few records, it may not be worth it... plus who knows what kind of re-indexing is going to happen on the re-insert.
-tg
-
Mar 18th, 2015, 10:16 AM
#16
Re: speed up to delete big numbers of records...
we don't know what the delete "Sproc" does
Indeed.
I need to delete a lot of records with a "Where" clausole
That's quite likely your problem. Try selecting, rather than deleting, those records. If the select also takes a long time to run it's likely that the bottleneck isn't the deletion of the records, it's the implicit selection of them. If that's the case then probably the best thing you could do would be to add a non-clustered index to support that selection. This will have a small negative impact on the speed of the actual deletions (due to the overhead of maintaining the index) but should have a much larger positive impact on the performance of the implicit selection.
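As a sketch (the column and index names here are hypothetical):

```sql
-- First, time the implicit selection on its own.
SELECT COUNT(*) FROM Table10 WHERE CreatedDate < '20140101';

-- If that's the slow part, add a non-clustered index to support it.
CREATE NONCLUSTERED INDEX IX_Table10_CreatedDate
    ON Table10 (CreatedDate);

DELETE FROM Table10 WHERE CreatedDate < '20140101';
```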
Another potential (but less likely) problem if you're deleting with a Where clause could be that you have a lot of indexes on there, particularly if one of them is clustered. If you're deleting elements out of the middle of an index that can have a significant overhead. I think you'd probably need to be maintaining quite a few indexes before you really started to suffer from that though.
By the way, this thread provides an object lesson in why you should give us all the information we might need right up front. Because you haven't shown us the content of the Sproc we all assumed up front that you were deleting the entire table so a lot of the early answers just aren't relevant.
-
Mar 18th, 2015, 10:34 AM
#17
Re: speed up to delete big numbers of records...
Just for Grins
Here is proof truncate logs if wrapped in a transaction. I found it on Google somewhere a long time ago:
CREATE TABLE ##Temp(Id int Identity(1,1),Name Varchar(100))
INSERT INTO ##Temp(Name) VALUES('Testing')
INSERT INTO ##Temp(Name) VALUES('Testing')
INSERT INTO ##Temp(Name) VALUES('Testing')
INSERT INTO ##Temp(Name) VALUES('Testing')
INSERT INTO ##Temp(Name) VALUES('Testing')
BEGIN TRAN
TRUNCATE TABLE ##Temp
SELECT * FROM ##Temp
ROLLBACK
SELECT * FROM ##Temp
drop table ##Temp
-
Mar 18th, 2015, 11:07 AM
#18
Re: speed up to delete big numbers of records...
I didn't say it's not logged at all, by "lack of logging" I meant that the level of logging is reduced and that it doesn't fire triggers. For those reasons I wouldn't recommend putting it into production code that will be called regularly and unpredictably. It's suitable for one off manual operations but nothing automated IMO.
It is just an opinion, though. If you feel the extra risk (and it's not a particularly big risk) engendered by a TRUNCATE is worth it for the performance payoff, then go ahead. As you say, it depends. I'd argue, though, that if you really need the extra performance of a TRUNCATE in production code then there's probably something wrong with the design and you're fixing the wrong problem.
Last edited by FunkyDexter; Mar 18th, 2015 at 11:33 AM.
-
Mar 19th, 2015, 02:00 PM
#19
Re: speed up to delete big numbers of records...
I'd say: if the concept of your code requires regular truncates on tables, then the concept is wrong. Yes, truncates are much quicker than DELETE *.
Here is proof truncate logs if wrapped in a transaction.
The truncate should still be in the log even without a transaction. I haven't checked, but in the full recovery model the event should be logged with all the data needed to roll it back. The truncate is handled atomically and will be rolled back automatically if an error occurs during execution, even if all but one of the records had already been deleted, so it needs to be logged.
Another potential (but less likely) problem if you're deleting with a Where clause could be that you have a lot of indexes on there, particularly if one of them is clustered. If you're deleting elements out of the middle of an index that can have a significant overhead.
I don't think that deleting records from a table with a clustered index has much more overhead than from one with a non-clustered index. If you mean that the leaf pages get rewritten (defragmented, that is), I don't think that happens on a delete, only when you rebuild the index. The leaf will just be marked empty, no?
The OP won't tell us his SP code.
-
Mar 19th, 2015, 02:06 PM
#20
Re: speed up to delete big numbers of records...
"the truncate should still be in the logs even without a transaction"
That makes sense based on :
TRUNCATE TABLE removes the data by deallocating the data pages used to store the table data and records only the page deallocations in the transaction log
I got that from here:
http://stackoverflow.com/questions/3...vs-delete-from