I have a large table (34 million records) and I need to update a field.
I am doing this in batches of 1 million records (by joining a temp
table to the primary key).
I am trying to monitor the update using:
SELECT COUNT(*)
FROM myTable WITH (NOLOCK)
WHERE newKey IS NULL
What is weird is that the count goes down by a few million records (it
is a one-to-many relationship), then all of a sudden the number jumps
back up to 34 million and the update statement (same transaction)
seems to start over.
Any ideas on what is going on?
Is my WITH (NOLOCK) method an invalid way to monitor the update?
Thanks for the advice!
|||
If you are updating in batches within a loop, then you should have a count
of the current iteration. Perhaps have the SP display some status
information while executing in Query Analyzer. If PRINT @loop doesn't
display until the end of the transaction, then try SELECT @loop. You can
also monitor using SQL Profiler.
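As a rough sketch of that kind of batched loop with per-iteration status
(the temp table #newKeys, its columns, and the variable names are my
assumptions, not the poster's actual schema; RAISERROR ... WITH NOWAIT is
used because it flushes messages immediately, while PRINT output is often
buffered until the batch ends):
declare @loop int, @rows int
select @loop = 0, @rows = 1
set rowcount 1000000   -- batch size: 1 million rows per update
while @rows > 0
begin
    update m
    set m.newKey = n.newKey
    from myTable m
    join #newKeys n on n.pk = m.pk   -- temp table joined on the primary key
    where m.newKey is null
    select @rows = @@rowcount, @loop = @loop + 1
    -- with nowait pushes the message to the client right away
    raiserror('Batch %d done, %d rows updated', 0, 1, @loop, @rows) with nowait
end
set rowcount 0   -- reset so later statements are unaffected
Since each update here commits on its own, a failure loses only the
current batch; if the whole loop ran inside one explicit transaction, a
rollback would set every row back to null, which would also explain a
monitored count jumping back up to 34 million.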
To improve the performance of this monster update, try the following:
#1 Insert the records with the columns already populated with non-null
values. This way your update does not need to extend the rows when updating.
#2 Consider setting the recovery model to simple.
#3 Drop all indexes from the table before the update and re-create them
afterward (a sketch of #2 and #3 follows this list).
#4 Do not include updatable columns in a clustered index.
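A sketch of #2 and #3 wrapped around the update (the database name and
index name here are made up for illustration):
alter database myDatabase set recovery simple       -- #2: keep the log small
drop index myTable.IX_myTable_newKey                -- #3: drop indexes first
-- ... run the batched update here ...
create index IX_myTable_newKey on myTable(newKey)   -- re-create afterward
alter database myDatabase set recovery full         -- restore the old model
Remember that switching out of full recovery breaks the log backup chain,
so take a fresh full backup afterward.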
|||
Hi
You may want to create an indexed view. Here is an example from Steve
Kass:
create table T (
    i int,
    filler char(1000) default 'abc'
)
go
create view T_count with schemabinding as
select
    cast(i as bit) as val,
    count_big(*) as T_count
from dbo.T
group by cast(i as bit)
go
create unique clustered index T_count_uci on T_count(val)
go
insert into T(i)
select OrderID
from Northwind..[Order Details]
go
set statistics io on
select count(*) from T   -- scans the base table
go
-- uses an efficient query plan on the materialized view
select sum(T_count) from T_count with (noexpand)
go
set statistics io off
go
drop view T_count
drop table T
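Adapted to the original problem, the same pattern could keep a cheap,
always-current count of rows where newKey is still null. This is only a
sketch: the view and index names are invented, and it assumes newKey is a
numeric type so that cast(newKey as bit) is legal.
create view myTable_progress with schemabinding as
select
    cast(newKey as bit) as val,   -- null for rows not yet updated
    count_big(*) as cnt
from dbo.myTable
group by cast(newKey as bit)
go
create unique clustered index myTable_progress_uci on myTable_progress(val)
go
-- cheap progress check while the update runs:
select cnt from myTable_progress with (noexpand) where val is null
Keep in mind that the view's index is maintained synchronously, inside the
same transaction as each update batch, so it adds overhead to the update
itself.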
|||
Could you explain that a bit? Will this allow me to monitor a large
update that might take 2 or 3 hours?
Does the clustered index on the view get updated while the update on
the base table is going on?