Showing posts with label queries. Show all posts
Showing posts with label queries. Show all posts

Friday, March 30, 2012

More queries more problems

I'm trying to write a query that returns a list of all employess from one table, and their hours worked from another table. If there are no entries in the hours table, I want to still show them with value of zero hours worked. I almost have this working, but I'm having some issues. Here's my query:

SELECT th.ID, tp.FirstName + ' ' + tp.LastName as Name, IsNull(Total_Hours, '0.0') as Hours1
FROM tblProfiles tp LEFT JOIN tblHours th on tp.ID = th.userID
WHERE (th.Start_Date = '" + myStartDate + "' OR th.Start_Date IS NULL)
ORDER BY tp.FirstName

For testing, in my profiles table, I have 5 users. In my hours table, I have entries for 3 of those users. Of those three users, 2 have entries for both 1/1/03 and 1/16/03, and the other has just 1 entry for 1/1/03.

When I run the query above for myStartDate = '1/1/03', I get the correct output, which shows the hours for 3 users, and shows 0 hours for the other 2. However, if I run the query on myStartDate = '1/16/03', my result set has the hours for the 2 users that have entries for that date, and also has 0 hours for the 2 users that have no entries. The user that has only 1 entry (for 1/1/03)should show up with 0 hours, but instead, it doesn't show up at all.

I'm guessing that the problem is with the way the table is joined, and that because that user has 1 valid Start_Date, it is getting excluded from the WHERE conditions, but I can't seem to make it work.

help!can you show the structure of the 2 tables ...|||tblProfiles has the following columns:


ID FirstName LastName ... (other unimportant columns)

tblHours has the following columns:


ID userID Start_Date End_Date Total_Hours
|||have you tried running this query in Query Analyzer as with the straight sql to make sure that you are getting the same results?|||yeah, I did... in Enterprise Manager, if I run the same query, I get the same results... the user with 1 entry for 1/1/03 does not show up at all if I run the query with:


WHERE Start_Date = '1/16/03' OR Start_Date IS NULL

I am assuming that "Start_Date IS NULL" is not evaluating to true because, with the table join, it is picking up the other record with the 1/1/03 Start_Date ...|||In my WHERE clause:


WHERE Start_Date = '" + myStartDate + "' OR Start_Date IS NULL

I think theStart_Date IS NULL part needs to specifically refer only the record for the given "myStartDate", but I can't make it jive...|||Yeah this is ringing a bell, I had something similiar. Maybe try wrapping everything in parens, like


WHERE ((Start_Date = '" + myStartDate + "') OR (Start_Date IS NULL))

Not sure but this might force SQL to evaluate the where clause correctly.|||I tried your suggestion, but it didn't change the results at all...|||have you thought abt using a cursor...
loop through for each record in tblprofiles...and for each record join with tblhours on id and start_date and get the hours..
this way you can get those ids tht dont have any entries...startdates/hours...etc and still show them.

HTH|||I'm not familiar with using cursors, but I'm willing to try anything... can you give me an example of how I would use it?|||heres a sample cursor tht i am using


DECLARE rs CURSOR
LOCAL
FORWARD_ONLY
OPTIMISTIC
TYPE_WARNING
FOR SELECT ID,FirstName,LastName, ... (other columns)
OPEN rs

fetch next from rs into @.id,@.fname,@.lname...( other columns)
WHILE ( @.@.FETCH_STATUS = 0 )

begin

/* here do your stuff - join with the other table and get the stuff */
use the @.id to join with tblhours..

FETCH NEXT FROM rs INTO @.id,@.fname,@.lname...( other columns)
END
close rs
deallocate rs

HTH|||Ok, I'm getting lost in the syntax... here's what I have, but it's not working:


DECLARE @.id integer, @.fname varchar(50), @.lname varchar(50)
DECLARE rs CURSOR
LOCAL
FORWARD_ONLY
OPTIMISTIC
TYPE_WARNING
FOR SELECT tp.ID, tp.FirstName, tp.LastName FROM tblProfiles tp
OPEN rs
fetch next from rs into @.id,@.fname,@.lname
WHILE ( @.@.FETCH_STATUS = 0 )
begin
SELECT @.id, @.fname, @.lname
SELECT th.Total_Hours FROM tblHours th WHERE th.userID = @.id AND th.Start_Date='1/16/03'
fetch next from rs into @.id,@.fname,@.lname
END
close rs
deallocate rs

what am I doing wrong?|||I finally got it to work, and didn't have to use the cursor! Here's the working query:


SELECT IsNull(th.ID, '0') as ID, tp.FirstName + ' ' + tp.LastName as Name, IsNull(Total_Hours, '0.0') as Hours1
FROM tblProfiles tp LEFT JOIN tblHours th on tp.ID = th.userID AND th.Start_Date = '" + myStartDate + "'
WHERE ((th.Start_Date = '" + myStartDate + "') OR (th.Start_Date IS NULL)) ORDER BY th.Total_Hours DESC, tp.FirstName

The solution was to addStart_Date = '" + myStartDate + "' to the LEFT JOIN criteria.

Thanks for all the help along the way guys...
<sigh of relief>|||

CREATE PROCEDURE get_hrs AS
begin

DECLARE @.id as int, @.fname as varchar(50), @.lname as varchar(50)
declare @.mystartdate as datetime
set @.mystartdate='1/1/2003'
declare @.hrs as int
DECLARE rs CURSOR
LOCAL
FORWARD_ONLY
OPTIMISTIC
TYPE_WARNING
FOR SELECT [ID], FName, LName FROM tblpro
OPEN rs
fetch next from rs into @.id,@.fname,@.lname
WHILE ( @.@.FETCH_STATUS = 0 )
begin

select @.hrs=hours from tblhours where ([ID] =@.id and startdate=@.mystartdate)

print convert(varchar(5),@.id) + ' ' + @.fname + '-' + @.lname + ' ' + convert(nvarchar(10),@.hrs)
set @.hrs=0

fetch next from rs into @.id,@.fname,@.lname
END

close rs
deallocate rs
end
GO

you might need to change the column names, table names...

HTH

More Partition Function and Partition Scheme questions.

Howdy again. Many of my queries are grouped by months. I want to potentially
make use of Partition Functions (PF) and Partition Schemes (PS) but I don't
want to have an out of control amount of of Files and File Groups to create
my PS's on. If I simply partition by year, will my month queries still be
sped up? Or is the only way to accomplish a speed advantage to partition by
month? Im a few weeks out from being able to do a real test, and the
suspence will kill me by then.
TIA, ChrisRPartitions help where the majority of the queries align themselves with your
partitions. In your case you will probably get some, but not the same amount
if you partition by month and year.
Note that you can park your partitions on different disks to spread the IO,
so you might not have 24 disks to spread your io, but you might have data
distribution patterns so that the volatile data could be on separate disks
and partition which are not queries as frequently could be on a single disk.
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"ChrisR" <noFudgingWay@.NoEmail.com> wrote in message
news:%234a9SbNBHHA.3604@.TK2MSFTNGP04.phx.gbl...
> Howdy again. Many of my queries are grouped by months. I want to
> potentially
> make use of Partition Functions (PF) and Partition Schemes (PS) but I
> don't
> want to have an out of control amount of of Files and File Groups to
> create
> my PS's on. If I simply partition by year, will my month queries still be
> sped up? Or is the only way to accomplish a speed advantage to partition
> by
> month? Im a few weeks out from being able to do a real test, and the
> suspence will kill me by then.
> TIA, ChrisR
>|||> I want to potentially
> make use of Partition Functions (PF) and Partition Schemes (PS) but I don'
t
> want to have an out of control amount of of Files and File Groups to creat
e
> my PS's on.
You don't have to direct each partition to an unique filegroup. All can go t
o the same, or you can
distribute the partitions over the filegroups any way you want.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"ChrisR" <noFudgingWay@.NoEmail.com> wrote in message news:%234a9SbNBHHA.3604@.TK2MSFTNGP04.ph
x.gbl...
> Howdy again. Many of my queries are grouped by months. I want to potential
ly
> make use of Partition Functions (PF) and Partition Schemes (PS) but I don'
t
> want to have an out of control amount of of Files and File Groups to creat
e
> my PS's on. If I simply partition by year, will my month queries still be
> sped up? Or is the only way to accomplish a speed advantage to partition b
y
> month? Im a few weeks out from being able to do a real test, and the
> suspence will kill me by then.
> TIA, ChrisR
>|||In addition to the fact that you can place multiple partitions on the same
filegroup, you can also take advantage of the ability to merge partitions. I
t
is a common practice to have many finer-grained partitions for more recent
and often more active data and few larger partitions for aged and often less
active data.
Linchi
"ChrisR" wrote:

> Howdy again. Many of my queries are grouped by months. I want to potential
ly
> make use of Partition Functions (PF) and Partition Schemes (PS) but I don'
t
> want to have an out of control amount of of Files and File Groups to creat
e
> my PS's on. If I simply partition by year, will my month queries still be
> sped up? Or is the only way to accomplish a speed advantage to partition b
y
> month? Im a few weeks out from being able to do a real test, and the
> suspence will kill me by then.
> TIA, ChrisR
>
>

More Partition Function and Partition Scheme questions.

Howdy again. Many of my queries are grouped by months. I want to potentially
make use of Partition Functions (PF) and Partition Schemes (PS) but I don't
want to have an out of control amount of of Files and File Groups to create
my PS's on. If I simply partition by year, will my month queries still be
sped up? Or is the only way to accomplish a speed advantage to partition by
month? Im a few weeks out from being able to do a real test, and the
suspence will kill me by then.
TIA, ChrisR
Partitions help where the majority of the queries align themselves with your
partitions. In your case you will probably get some, but not the same amount
if you partition by month and year.
Note that you can park your partitions on different disks to spread the IO,
so you might not have 24 disks to spread your io, but you might have data
distribution patterns so that the volatile data could be on separate disks
and partition which are not queries as frequently could be on a single disk.
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"ChrisR" <noFudgingWay@.NoEmail.com> wrote in message
news:%234a9SbNBHHA.3604@.TK2MSFTNGP04.phx.gbl...
> Howdy again. Many of my queries are grouped by months. I want to
> potentially
> make use of Partition Functions (PF) and Partition Schemes (PS) but I
> don't
> want to have an out of control amount of of Files and File Groups to
> create
> my PS's on. If I simply partition by year, will my month queries still be
> sped up? Or is the only way to accomplish a speed advantage to partition
> by
> month? Im a few weeks out from being able to do a real test, and the
> suspence will kill me by then.
> TIA, ChrisR
>
|||In addition to the fact that you can place multiple partitions on the same
filegroup, you can also take advantage of the ability to merge partitions. It
is a common practice to have many finer-grained partitions for more recent
and often more active data and few larger partitions for aged and often less
active data.
Linchi
"ChrisR" wrote:

> Howdy again. Many of my queries are grouped by months. I want to potentially
> make use of Partition Functions (PF) and Partition Schemes (PS) but I don't
> want to have an out of control amount of of Files and File Groups to create
> my PS's on. If I simply partition by year, will my month queries still be
> sped up? Or is the only way to accomplish a speed advantage to partition by
> month? Im a few weeks out from being able to do a real test, and the
> suspence will kill me by then.
> TIA, ChrisR
>
>

More Partition Function and Partition Scheme questions.

Howdy again. Many of my queries are grouped by months. I want to potentially
make use of Partition Functions (PF) and Partition Schemes (PS) but I don't
want to have an out of control amount of of Files and File Groups to create
my PS's on. If I simply partition by year, will my month queries still be
sped up? Or is the only way to accomplish a speed advantage to partition by
month? Im a few weeks out from being able to do a real test, and the
suspence will kill me by then.
TIA, ChrisRPartitions help where the majority of the queries align themselves with your
partitions. In your case you will probably get some, but not the same amount
if you partition by month and year.
Note that you can park your partitions on different disks to spread the IO,
so you might not have 24 disks to spread your io, but you might have data
distribution patterns so that the volatile data could be on separate disks
and partition which are not queries as frequently could be on a single disk.
--
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"ChrisR" <noFudgingWay@.NoEmail.com> wrote in message
news:%234a9SbNBHHA.3604@.TK2MSFTNGP04.phx.gbl...
> Howdy again. Many of my queries are grouped by months. I want to
> potentially
> make use of Partition Functions (PF) and Partition Schemes (PS) but I
> don't
> want to have an out of control amount of of Files and File Groups to
> create
> my PS's on. If I simply partition by year, will my month queries still be
> sped up? Or is the only way to accomplish a speed advantage to partition
> by
> month? Im a few weeks out from being able to do a real test, and the
> suspence will kill me by then.
> TIA, ChrisR
>|||> I want to potentially
> make use of Partition Functions (PF) and Partition Schemes (PS) but I don't
> want to have an out of control amount of of Files and File Groups to create
> my PS's on.
You don't have to direct each partition to an unique filegroup. All can go to the same, or you can
distribute the partitions over the filegroups any way you want.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"ChrisR" <noFudgingWay@.NoEmail.com> wrote in message news:%234a9SbNBHHA.3604@.TK2MSFTNGP04.phx.gbl...
> Howdy again. Many of my queries are grouped by months. I want to potentially
> make use of Partition Functions (PF) and Partition Schemes (PS) but I don't
> want to have an out of control amount of of Files and File Groups to create
> my PS's on. If I simply partition by year, will my month queries still be
> sped up? Or is the only way to accomplish a speed advantage to partition by
> month? Im a few weeks out from being able to do a real test, and the
> suspence will kill me by then.
> TIA, ChrisR
>|||In addition to the fact that you can place multiple partitions on the same
filegroup, you can also take advantage of the ability to merge partitions. It
is a common practice to have many finer-grained partitions for more recent
and often more active data and few larger partitions for aged and often less
active data.
Linchi
"ChrisR" wrote:
> Howdy again. Many of my queries are grouped by months. I want to potentially
> make use of Partition Functions (PF) and Partition Schemes (PS) but I don't
> want to have an out of control amount of of Files and File Groups to create
> my PS's on. If I simply partition by year, will my month queries still be
> sped up? Or is the only way to accomplish a speed advantage to partition by
> month? Im a few weeks out from being able to do a real test, and the
> suspence will kill me by then.
> TIA, ChrisR
>
>

Wednesday, March 28, 2012

More index questions

I'm learning about indexes and performance, looking at some of my queries and
comparing them with the Index Tuning Wizard (ITW) to figure things out.
BTW, is there a way to use the Profiler to "get the distinct queries [on
table x]"? I would really like to see every query on a table when looking at
the indexes on them, but I can't figure out how to do this.
Ok, so right now I'm looking at tblPrices, ~500,000 rows with a pkey, an int
fkey (security), two floats (one unused), a type, a date and a varchar notes
(typically empty). Data is unique on fkey, type and date. We insert about 500
rows a day in one big batch, so insert performance isn't _too_ much of an
issue, but because it's a batch you tend to notice it.
Indexes are currently on the pkey (clustered unique), fkey, type and date.
There is also a constraint on fkey/type/date.
My GUESS is that the common types of queries are...
1) price for this security and this date
2) prices for this date
3) prices for this security
I don't really know which ones are common or not, which is why I was asking
how I could find this out.
dbcc SHOW_STATISTICS (tblprices, type) said "0.0". According to the docs
this makes it an excellent target for an index. In reality there are only 5
values in total, and the VAST majority of the records are a single one. Can
anyone explain this output? Also, why didn't the ITW suggest removing it,
when it obviously will never be used?
ITW also suggested removing the index on date. This seems suspect, but I
can't prove it. The reason it's suspect is that we would have roughtly 500
prices per date, and about the same number per security. This would seem to
imply that either index should be useful depending on the query. We tend to
GROUP BY on the security, in order to get the newest price per security,
which would also seem to imply that the date index would be useful -- is this
possible? Can it use the fkey to extract the unique securities, the date to
get the newest for all securities, and then hash match?
Basically ITW simply doesn't have enough information for me to make up my
mind. If I could see the queries, how often they occured, and what indexes
they used, then it would be useful. Now, however, I don't trust the numbers.
One last question about the compound constraint. Since the first column in
it is the fkey, should I instead make it an index and drop the fkey index?
What is the difference between a constraint and index in this context anyway?
I assume that the constraint doesn't do a table search for every insert, so
there's some sort of index that it's using anyway?
Maury
"Maury Markowitz" <MauryMarkowitz@.discussions.microsoft.com> wrote in
message news:7058F853-8697-4DBA-9225-3B544D2E59A4@.microsoft.com...
> I'm learning about indexes and performance, looking at some of my queries
> and
> comparing them with the Index Tuning Wizard (ITW) to figure things out.
> BTW, is there a way to use the Profiler to "get the distinct queries [on
> table x]"? I would really like to see every query on a table when looking
> at
> the indexes on them, but I can't figure out how to do this.
> Ok, so right now I'm looking at tblPrices, ~500,000 rows with a pkey, an
> int
> fkey (security), two floats (one unused), a type, a date and a varchar
> notes
> (typically empty). Data is unique on fkey, type and date. We insert about
> 500
> rows a day in one big batch, so insert performance isn't _too_ much of an
> issue, but because it's a batch you tend to notice it.
> Indexes are currently on the pkey (clustered unique), fkey, type and date.
> There is also a constraint on fkey/type/date.
> My GUESS is that the common types of queries are...
> 1) price for this security and this date
> 2) prices for this date
> 3) prices for this security
> I don't really know which ones are common or not, which is why I was
> asking
> how I could find this out.
> dbcc SHOW_STATISTICS (tblprices, type) said "0.0". According to the docs
> this makes it an excellent target for an index. In reality there are only
> 5
> values in total, and the VAST majority of the records are a single one.
> Can
> anyone explain this output? Also, why didn't the ITW suggest removing it,
> when it obviously will never be used?
> ITW also suggested removing the index on date. This seems suspect, but I
> can't prove it. The reason it's suspect is that we would have roughtly 500
> prices per date, and about the same number per security. This would seem
> to
> imply that either index should be useful depending on the query. We tend
> to
> GROUP BY on the security, in order to get the newest price per security,
> which would also seem to imply that the date index would be useful -- is
> this
> possible? Can it use the fkey to extract the unique securities, the date
> to
> get the newest for all securities, and then hash match?
> Basically ITW simply doesn't have enough information for me to make up my
> mind. If I could see the queries, how often they occured, and what indexes
> they used, then it would be useful. Now, however, I don't trust the
> numbers.
> One last question about the compound constraint. Since the first column in
> it is the fkey, should I instead make it an index and drop the fkey index?
> What is the difference between a constraint and index in this context
> anyway?
A foreign key constraint will generate a check on insert and update to
guarantee that a related row exists in the primary key table. The
constraint does not store any data, and is no help at all in querying. In
generay you should support each foreign key constraint with an index on the
foreign key table on the foreign key columns.
Now. Your bigger question. What indexing scheme to use. First off since
you only insert 500 rows/day, and you do it in one big batch, you should
probably change your clustered index. You will need to run Profiler to
capture a typical query workload, and this will drive your choice of index.
I suspect that the rignt clustered index is a unique clustered index on
either (security,date,type) or (date,security,type). Which one will depend
on the query workload. Then you will have an index for your primary key,
and clustered index for your alternate key. Once you have that, rerun ITW
and see where you are.

> I assume that the constraint doesn't do a table search for every insert,
> so
> there's some sort of index that it's using anyway?
Yes it uses the primary key index on the PK table, but that doesn't help
your query.
David
|||I did respond, quite a bit, to another post you had, but what you are asking
is a pretty generic question, since we do not have specific enough
information here to give detailed repsones; so, about the best you are going
to get is a generic response...but that can be useful too.
1. Relational theory dictates that before an entity is evein 1NF (first
normal form) you MUST specify a Key. Any entity may have one or multiple
candidate keys. There may be a natural key. And, there may even be
complex, compound (composite) keys. One of these has to be the Primary Key.
The rest are alternate keys and should have Unique Key Constraints defined
on them. Oftentimes, the Primary Business Key is a lousy physical key; so,
a surrogate, Auto-Generated, IDENTITY is constructed then chosen as the
Primary Key, physically. This is fine, but keep in mind that that physical
database is just an implementation of a Logical Database Model that supports
a very specific Business Model and that NO surrogate key defines a business
key; thus, the term surrogate. The physical alternate keys, that model the
Business Keys, should always be constrained in order to enforce the logical
business rules.
2. As you go through the normalization process, you wll, hopefully, end up
at 3NF (third normal form), or higher, and a physical consequence of which
is that you will end up with dependent tables that must be constrained with
Foreign Key definitions. In the dependents, these FKCs should be indexed.
In the parent, they must be Uniquely constrained by a PKC or UCs. In the
child, they normally will be reasonably selectible, if not highly
selectible.
3. EVERY TABLE should have a clustered index defined. This is a purely
physical argument and is highly dependent on the SQL Server DBMS. Heaps,
which is what you get when a clustered index is NOT defined, perform
horibly, in terms of storage management and database maitenance. But many
have trouble choosing a proper definition. Here are some generic tips. The
attributes should not be modified but minimally at most. Most often you
would be better served with a single attribute as opposed to a composite
definition. Choose an attribute that is queried often, but by a range
mechanism as opposed to singleton selects (medium cardinality). Then,
finally, the attribute should be as narrow as possible, which is helped by
having single column definitions, but even those, numerics are better than
characters, and narrow characters are better than wider ones. You should
examine the alternate key definitions as possible clustered index candidates
as ALL clustered indexes use UNIQUENESS, either because they are defined as
unique, or because SQL Server appends a GUID to the attribute definition to
make them unique. So, choosing an already uniquely defined key does keep
the overhead down.
4. Now, go through your tables and determine which columns are highly
selectible (high cardinality), that is, if you were to query the table, with
a specific value, what fraction of the table rows would you expect to
return? The fewer, the better candidate for a non-clustered index that
attribute would be. At this stage, I would concentrate solely on single
attribute column indexes. The reason is that statistical histograms are
only stored for the most significant attribute; thus, it is very important
for that attribute to be highly selectible.
The only reason having a composite index would be of value would be becuase
what is known as a "covering" index. A clustered index is a covering index
by definition. This is because a covering index has all the data defining
the index stored on the index page. So, if first name is apart of a name
index, where the last name is the first defining attribute, after the
page(s) is located by a search on last name, SQL Server can also pull the
first name attribute from the index page itself instead of have to run a
bookmark lookup to the referenced data page. Not too many indexes can be
defined this way. A clustered index can because the entire data row is
stored on the clustered index page because, for a clustered index, the index
page IS the data page. Another example would be a non-clustered index but
where every table attribute is used to define it, but that would be highly
redundant. So, besides the clustered index, there are not to many
candidates that could be defined for a covering index.
Moreover, having many singleton indexes provides SQL Server an oportunity to
pull several indexes to perform a set intersection routine, which is far
more efficient than having any one index with composite values.
5. Finally, run sp_createstats against your database. This will create
single-column statistics against every table and every column that does not
already have a supporting index. This can help you determine missed columns
that have high cardinality that could be assisted by creating a
non-clustered index. These statistics also assist the Optimizer to quickly
reject low cardinality columns.
Well, I know this covered quite a bit of gound, some of it not too well, but
there is a ton of literature out there that goes into a lot of detail on
this subject. You need to dig into it. But, a word of caution, many of
these publications are just flat-out wrong. Take everything you read,
your's truely included, with some scepticism. Spend time in the Query
Analyzer. Examine the execution plans. Use hints to override the
Optimizer, not as a production practice, but to see where and how the
Optimizer behaves and why it is far more likely to pick an execution plan
than you are.
Lastly, always make sure your statistics are up to date before you begin any
analysis work. The Optimizer is only as good as these statistics are.
Sincerely,
Anthony Thomas
"Maury Markowitz" <MauryMarkowitz@.discussions.microsoft.com> wrote in
message news:7058F853-8697-4DBA-9225-3B544D2E59A4@.microsoft.com...
I'm learning about indexes and performance, looking at some of my queries
and
comparing them with the Index Tuning Wizard (ITW) to figure things out.
BTW, is there a way to use the Profiler to "get the distinct queries [on
table x]"? I would really like to see every query on a table when looking at
the indexes on them, but I can't figure out how to do this.
Ok, so right now I'm looking at tblPrices, ~500,000 rows with a pkey, an int
fkey (security), two floats (one unused), a type, a date and a varchar notes
(typically empty). Data is unique on fkey, type and date. We insert about
500
rows a day in one big batch, so insert performance isn't _too_ much of an
issue, but because it's a batch you tend to notice it.
Indexes are currently on the pkey (clustered unique), fkey, type and date.
There is also a constraint on fkey/type/date.
My GUESS is that the common types of queries are...
1) price for this security and this date
2) prices for this date
3) prices for this security
I don't really know which ones are common or not, which is why I was asking
how I could find this out.
dbcc SHOW_STATISTICS (tblprices, type) said "0.0". According to the docs
this makes it an excellent target for an index. In reality there are only 5
values in total, and the VAST majority of the records are a single one. Can
anyone explain this output? Also, why didn't the ITW suggest removing it,
when it obviously will never be used?
ITW also suggested removing the index on date. This seems suspect, but I
can't prove it. The reason it's suspect is that we would have roughtly 500
prices per date, and about the same number per security. This would seem to
imply that either index should be useful depending on the query. We tend to
GROUP BY on the security, in order to get the newest price per security,
which would also seem to imply that the date index would be useful -- is
this
possible? Can it use the fkey to extract the unique securities, the date to
get the newest for all securities, and then hash match?
Basically ITW simply doesn't have enough information for me to make up my
mind. If I could see the queries, how often they occured, and what indexes
they used, then it would be useful. Now, however, I don't trust the numbers.
One last question about the compound constraint. Since the first column in
it is the fkey, should I instead make it an index and drop the fkey index?
What is the difference between a constraint and index in this context
anyway?
I assume that the constraint doesn't do a table search for every insert, so
there's some sort of index that it's using anyway?
Maury

More index questions

I'm learning about indexes and performance, looking at some of my queries an
d
comparing them with the Index Tuning Wizard (ITW) to figure things out.
BTW, is there a way to use the Profiler to "get the distinct queries [on
table x]"? I would really like to see every query on a table when looking at
the indexes on them, but I can't figure out how to do this.
Ok, so right now I'm looking at tblPrices, ~500,000 rows with a pkey, an int
fkey (security), two floats (one unused), a type, a date and a varchar notes
(typically empty). Data is unique on fkey, type and date. We insert about 50
0
rows a day in one big batch, so insert performance isn't _too_ much of an
issue, but because it's a batch you tend to notice it.
Indexes are currently on the pkey (clustered unique), fkey, type and date.
There is also a constraint on fkey/type/date.
My GUESS is that the common types of queries are...
1) price for this security and this date
2) prices for this date
3) prices for this security
I don't really know which ones are common or not, which is why I was asking
how I could find this out.
dbcc SHOW_STATISTICS (tblprices, type) said "0.0". According to the docs
this makes it an excellent target for an index. In reality there are only 5
values in total, and the VAST majority of the records are a single one. Can
anyone explain this output? Also, why didn't the ITW suggest removing it,
when it obviously will never be used?
ITW also suggested removing the index on date. This seems suspect, but I
can't prove it. The reason it's suspect is that we would have roughtly 500
prices per date, and about the same number per security. This would seem to
imply that either index should be useful depending on the query. We tend to
GROUP BY on the security, in order to get the newest price per security,
which would also seem to imply that the date index would be useful -- is thi
s
possible? Can it use the fkey to extract the unique securities, the date to
get the newest for all securities, and then hash match?
Basically ITW simply doesn't have enough information for me to make up my
mind. If I could see the queries, how often they occured, and what indexes
they used, then it would be useful. Now, however, I don't trust the numbers.
One last question about the compound constraint. Since the first column in
it is the fkey, should I instead make it an index and drop the fkey index?
What is the difference between a constraint and index in this context anyway
?
I assume that the constraint doesn't do a table search for every insert, so
there's some sort of index that it's using anyway?
Maury"Maury Markowitz" <MauryMarkowitz@.discussions.microsoft.com> wrote in
message news:7058F853-8697-4DBA-9225-3B544D2E59A4@.microsoft.com...
> I'm learning about indexes and performance, looking at some of my queries
> and
> comparing them with the Index Tuning Wizard (ITW) to figure things out.
> BTW, is there a way to use the Profiler to "get the distinct queries [
on
> table x]"? I would really like to see every query on a table when looking
> at
> the indexes on them, but I can't figure out how to do this.
> Ok, so right now I'm looking at tblPrices, ~500,000 rows with a pkey, an
> int
> fkey (security), two floats (one unused), a type, a date and a varchar
> notes
> (typically empty). Data is unique on fkey, type and date. We insert about
> 500
> rows a day in one big batch, so insert performance isn't _too_ much of an
> issue, but because it's a batch you tend to notice it.
> Indexes are currently on the pkey (clustered unique), fkey, type and date.
> There is also a constraint on fkey/type/date.
> My GUESS is that the common types of queries are...
> 1) price for this security and this date
> 2) prices for this date
> 3) prices for this security
> I don't really know which ones are common or not, which is why I was
> asking
> how I could find this out.
> dbcc SHOW_STATISTICS (tblprices, type) said "0.0". According to the docs
> this makes it an excellent target for an index. In reality there are only
> 5
> values in total, and the VAST majority of the records are a single one.
> Can
> anyone explain this output? Also, why didn't the ITW suggest removing it,
> when it obviously will never be used?
> ITW also suggested removing the index on date. This seems suspect, but I
> can't prove it. The reason it's suspect is that we would have roughtly 500
> prices per date, and about the same number per security. This would seem
> to
> imply that either index should be useful depending on the query. We tend
> to
> GROUP BY on the security, in order to get the newest price per security,
> which would also seem to imply that the date index would be useful -- is
> this
> possible? Can it use the fkey to extract the unique securities, the date
> to
> get the newest for all securities, and then hash match?
> Basically ITW simply doesn't have enough information for me to make up my
> mind. If I could see the queries, how often they occured, and what indexes
> they used, then it would be useful. Now, however, I don't trust the
> numbers.
> One last question about the compound constraint. Since the first column in
> it is the fkey, should I instead make it an index and drop the fkey index?
> What is the difference between a constraint and index in this context
> anyway?
A foreign key constraint will generate a check on insert and update to
guarantee that a related row exists in the primary key table. The
constraint does not store any data, and is no help at all in querying. In
generay you should support each foreign key constraint with an index on the
foreign key table on the foreign key columns.
Now. Your bigger question. What indexing scheme to use. First off since
you only insert 500 rows/day, and you do it in one big batch, you should
probably change your clustered index. You will need to run Profiler to
capture a typical query workload, and this will drive your choice of index.
I suspect that the rignt clustered index is a unique clustered index on
either (security,date,type) or (date,security,type). Which one will depend
on the query workload. Then you will have an index for your primary key,
and clustered index for your alternate key. Once you have that, rerun ITW
and see where you are.

> I assume that the constraint doesn't do a table search for every insert,
> so
> there's some sort of index that it's using anyway?
Yes it uses the primary key index on the PK table, but that doesn't help
your query.
David|||I did respond, quite a bit, to another post you had, but what you are asking
is a pretty generic question, since we do not have specific enough
information here to give detailed repsones; so, about the best you are going
to get is a generic response...but that can be useful too.
1. Relational theory dictates that before an entity is evein 1NF (first
normal form) you MUST specify a Key. Any entity may have one or multiple
candidate keys. There may be a natural key. And, there may even be
complex, compound (composite) keys. One of these has to be the Primary Key.
The rest are alternate keys and should have Unique Key Constraints defined
on them. Oftentimes, the Primary Business Key is a lousy physical key; so,
a surrogate, Auto-Generated, IDENTITY is constructed then chosen as the
Primary Key, physically. This is fine, but keep in mind that that physical
database is just an implementation of a Logical Database Model that supports
a very specific Business Model and that NO surrogate key defines a business
key; thus, the term surrogate. The physical alternate keys, that model the
Business Keys, should always be constrained in order to enforce the logical
business rules.
2. As you go through the normalization process, you wll, hopefully, end up
at 3NF (third normal form), or higher, and a physical consequence of which
is that you will end up with dependent tables that must be constrained with
Foreign Key definitions. In the dependents, these FKCs should be indexed.
In the parent, they must be Uniquely constrained by a PKC or UCs. In the
child, they normally will be reasonably selectible, if not highly
selectible.
3. EVERY TABLE should have a clustered index defined. This is a purely
physical argument and is highly dependent on the SQL Server DBMS. Heaps,
which is what you get when a clustered index is NOT defined, perform
horibly, in terms of storage management and database maitenance. But many
have trouble choosing a proper definition. Here are some generic tips. The
attributes should not be modified but minimally at most. Most often you
would be better served with a single attribute as opposed to a composite
definition. Choose an attribute that is queried often, but by a range
mechanism as opposed to singleton selects (medium cardinality). Then,
finally, the attribute should be as narrow as possible, which is helped by
having single column definitions, but even those, numerics are better than
characters, and narrow characters are better than wider ones. You should
examine the alternate key definitions as possible clustered index candidates
as ALL clustered indexes use UNIQUENESS, either because they are defined as
unique, or because SQL Server appends a GUID to the attribute definition to
make them unique. So, choosing an already uniquely defined key does keep
the overhead down.
4. Now, go through your tables and determine which columns are highly
selectible (high cardinality), that is, if you were to query the table, with
a specific value, what fraction of the table rows would you expect to
return? The fewer, the better candidate for a non-clustered index that
attribute would be. At this stage, I would concentrate solely on single
attribute column indexes. The reason is that statistical histograms are
only stored for the most significant attribute; thus, it is very important
for that attribute to be highly selectible.
The only reason having a composite index would be of value would be becuase
what is known as a "covering" index. A clustered index is a covering index
by definition. This is because a covering index has all the data defining
the index stored on the index page. So, if first name is apart of a name
index, where the last name is the first defining attribute, after the
page(s) is located by a search on last name, SQL Server can also pull the
first name attribute from the index page itself instead of have to run a
bookmark lookup to the referenced data page. Not too many indexes can be
defined this way. A clustered index can because the entire data row is
stored on the clustered index page because, for a clustered index, the index
page IS the data page. Another example would be a non-clustered index but
where every table attribute is used to define it, but that would be highly
redundant. So, besides the clustered index, there are not to many
candidates that could be defined for a covering index.
Moreover, having many singleton indexes provides SQL Server an oportunity to
pull several indexes to perform a set intersection routine, which is far
more efficient than having any one index with composite values.
5. Finally, run sp_createstats against your database. This will create
single-column statistics against every table and every column that does not
already have a supporting index. This can help you determine missed columns
that have high cardinality that could be assisted by creating a
non-clustered index. These statistics also assist the Optimizer to quickly
reject low cardinality columns.
Well, I know this covered quite a bit of gound, some of it not too well, but
there is a ton of literature out there that goes into a lot of detail on
this subject. You need to dig into it. But, a word of caution, many of
these publications are just flat-out wrong. Take everything you read,
your's truely included, with some scepticism. Spend time in the Query
Analyzer. Examine the execution plans. Use hints to override the
Optimizer, not as a production practice, but to see where and how the
Optimizer behaves and why it is far more likely to pick an execution plan
than you are.
Lastly, always make sure your statistics are up to date before you begin any
analysis work. The Optimizer is only as good as these statistics are.
Sincerely,
Anthony Thomas
"Maury Markowitz" <MauryMarkowitz@.discussions.microsoft.com> wrote in
message news:7058F853-8697-4DBA-9225-3B544D2E59A4@.microsoft.com...
I'm learning about indexes and performance, looking at some of my queries
and
comparing them with the Index Tuning Wizard (ITW) to figure things out.
BTW, is there a way to use the Profiler to "get the distinct queries [on
table x]"? I would really like to see every query on a table when looking at
the indexes on them, but I can't figure out how to do this.
Ok, so right now I'm looking at tblPrices, ~500,000 rows with a pkey, an int
fkey (security), two floats (one unused), a type, a date and a varchar notes
(typically empty). Data is unique on fkey, type and date. We insert about
500
rows a day in one big batch, so insert performance isn't _too_ much of an
issue, but because it's a batch you tend to notice it.
Indexes are currently on the pkey (clustered unique), fkey, type and date.
There is also a constraint on fkey/type/date.
My GUESS is that the common types of queries are...
1) price for this security and this date
2) prices for this date
3) prices for this security
I don't really know which ones are common or not, which is why I was asking
how I could find this out.
dbcc SHOW_STATISTICS (tblprices, type) said "0.0". According to the docs
this makes it an excellent target for an index. In reality there are only 5
values in total, and the VAST majority of the records are a single one. Can
anyone explain this output? Also, why didn't the ITW suggest removing it,
when it obviously will never be used?
ITW also suggested removing the index on date. This seems suspect, but I
can't prove it. The reason it's suspect is that we would have roughtly 500
prices per date, and about the same number per security. This would seem to
imply that either index should be useful depending on the query. We tend to
GROUP BY on the security, in order to get the newest price per security,
which would also seem to imply that the date index would be useful -- is
this
possible? Can it use the fkey to extract the unique securities, the date to
get the newest for all securities, and then hash match?
Basically ITW simply doesn't have enough information for me to make up my
mind. If I could see the queries, how often they occured, and what indexes
they used, then it would be useful. Now, however, I don't trust the numbers.
One last question about the compound constraint. Since the first column in
it is the fkey, should I instead make it an index and drop the fkey index?
What is the difference between a constraint and index in this context
anyway?
I assume that the constraint doesn't do a table search for every insert, so
there's some sort of index that it's using anyway?
Maury

More index questions

I'm learning about indexes and performance, looking at some of my queries and
comparing them with the Index Tuning Wizard (ITW) to figure things out.
BTW, is there a way to use the Profiler to "get the distinct queries [on
table x]"? I would really like to see every query on a table when looking at
the indexes on them, but I can't figure out how to do this.
Ok, so right now I'm looking at tblPrices, ~500,000 rows with a pkey, an int
fkey (security), two floats (one unused), a type, a date and a varchar notes
(typically empty). Data is unique on fkey, type and date. We insert about 500
rows a day in one big batch, so insert performance isn't _too_ much of an
issue, but because it's a batch you tend to notice it.
Indexes are currently on the pkey (clustered unique), fkey, type and date.
There is also a constraint on fkey/type/date.
My GUESS is that the common types of queries are...
1) price for this security and this date
2) prices for this date
3) prices for this security
I don't really know which ones are common or not, which is why I was asking
how I could find this out.
dbcc SHOW_STATISTICS (tblprices, type) said "0.0". According to the docs
this makes it an excellent target for an index. In reality there are only 5
values in total, and the VAST majority of the records are a single one. Can
anyone explain this output? Also, why didn't the ITW suggest removing it,
when it obviously will never be used?
ITW also suggested removing the index on date. This seems suspect, but I
can't prove it. The reason it's suspect is that we would have roughtly 500
prices per date, and about the same number per security. This would seem to
imply that either index should be useful depending on the query. We tend to
GROUP BY on the security, in order to get the newest price per security,
which would also seem to imply that the date index would be useful -- is this
possible? Can it use the fkey to extract the unique securities, the date to
get the newest for all securities, and then hash match?
Basically ITW simply doesn't have enough information for me to make up my
mind. If I could see the queries, how often they occured, and what indexes
they used, then it would be useful. Now, however, I don't trust the numbers.
One last question about the compound constraint. Since the first column in
it is the fkey, should I instead make it an index and drop the fkey index?
What is the difference between a constraint and index in this context anyway?
I assume that the constraint doesn't do a table search for every insert, so
there's some sort of index that it's using anyway?
Maury"Maury Markowitz" <MauryMarkowitz@.discussions.microsoft.com> wrote in
message news:7058F853-8697-4DBA-9225-3B544D2E59A4@.microsoft.com...
> I'm learning about indexes and performance, looking at some of my queries
> and
> comparing them with the Index Tuning Wizard (ITW) to figure things out.
> BTW, is there a way to use the Profiler to "get the distinct queries [on
> table x]"? I would really like to see every query on a table when looking
> at
> the indexes on them, but I can't figure out how to do this.
> Ok, so right now I'm looking at tblPrices, ~500,000 rows with a pkey, an
> int
> fkey (security), two floats (one unused), a type, a date and a varchar
> notes
> (typically empty). Data is unique on fkey, type and date. We insert about
> 500
> rows a day in one big batch, so insert performance isn't _too_ much of an
> issue, but because it's a batch you tend to notice it.
> Indexes are currently on the pkey (clustered unique), fkey, type and date.
> There is also a constraint on fkey/type/date.
> My GUESS is that the common types of queries are...
> 1) price for this security and this date
> 2) prices for this date
> 3) prices for this security
> I don't really know which ones are common or not, which is why I was
> asking
> how I could find this out.
> dbcc SHOW_STATISTICS (tblprices, type) said "0.0". According to the docs
> this makes it an excellent target for an index. In reality there are only
> 5
> values in total, and the VAST majority of the records are a single one.
> Can
> anyone explain this output? Also, why didn't the ITW suggest removing it,
> when it obviously will never be used?
> ITW also suggested removing the index on date. This seems suspect, but I
> can't prove it. The reason it's suspect is that we would have roughtly 500
> prices per date, and about the same number per security. This would seem
> to
> imply that either index should be useful depending on the query. We tend
> to
> GROUP BY on the security, in order to get the newest price per security,
> which would also seem to imply that the date index would be useful -- is
> this
> possible? Can it use the fkey to extract the unique securities, the date
> to
> get the newest for all securities, and then hash match?
> Basically ITW simply doesn't have enough information for me to make up my
> mind. If I could see the queries, how often they occured, and what indexes
> they used, then it would be useful. Now, however, I don't trust the
> numbers.
> One last question about the compound constraint. Since the first column in
> it is the fkey, should I instead make it an index and drop the fkey index?
> What is the difference between a constraint and index in this context
> anyway?
A foreign key constraint will generate a check on insert and update to
guarantee that a related row exists in the primary key table. The
constraint does not store any data, and is no help at all in querying. In
generay you should support each foreign key constraint with an index on the
foreign key table on the foreign key columns.
Now. Your bigger question. What indexing scheme to use. First off since
you only insert 500 rows/day, and you do it in one big batch, you should
probably change your clustered index. You will need to run Profiler to
capture a typical query workload, and this will drive your choice of index.
I suspect that the rignt clustered index is a unique clustered index on
either (security,date,type) or (date,security,type). Which one will depend
on the query workload. Then you will have an index for your primary key,
and clustered index for your alternate key. Once you have that, rerun ITW
and see where you are.
> I assume that the constraint doesn't do a table search for every insert,
> so
> there's some sort of index that it's using anyway?
Yes it uses the primary key index on the PK table, but that doesn't help
your query.
David|||I did respond, quite a bit, to another post you had, but what you are asking
is a pretty generic question, since we do not have specific enough
information here to give detailed repsones; so, about the best you are going
to get is a generic response...but that can be useful too.
1. Relational theory dictates that before an entity is evein 1NF (first
normal form) you MUST specify a Key. Any entity may have one or multiple
candidate keys. There may be a natural key. And, there may even be
complex, compound (composite) keys. One of these has to be the Primary Key.
The rest are alternate keys and should have Unique Key Constraints defined
on them. Oftentimes, the Primary Business Key is a lousy physical key; so,
a surrogate, Auto-Generated, IDENTITY is constructed then chosen as the
Primary Key, physically. This is fine, but keep in mind that that physical
database is just an implementation of a Logical Database Model that supports
a very specific Business Model and that NO surrogate key defines a business
key; thus, the term surrogate. The physical alternate keys, that model the
Business Keys, should always be constrained in order to enforce the logical
business rules.
2. As you go through the normalization process, you wll, hopefully, end up
at 3NF (third normal form), or higher, and a physical consequence of which
is that you will end up with dependent tables that must be constrained with
Foreign Key definitions. In the dependents, these FKCs should be indexed.
In the parent, they must be Uniquely constrained by a PKC or UCs. In the
child, they normally will be reasonably selectible, if not highly
selectible.
3. EVERY TABLE should have a clustered index defined. This is a purely
physical argument and is highly dependent on the SQL Server DBMS. Heaps,
which is what you get when a clustered index is NOT defined, perform
horibly, in terms of storage management and database maitenance. But many
have trouble choosing a proper definition. Here are some generic tips. The
attributes should not be modified but minimally at most. Most often you
would be better served with a single attribute as opposed to a composite
definition. Choose an attribute that is queried often, but by a range
mechanism as opposed to singleton selects (medium cardinality). Then,
finally, the attribute should be as narrow as possible, which is helped by
having single column definitions, but even those, numerics are better than
characters, and narrow characters are better than wider ones. You should
examine the alternate key definitions as possible clustered index candidates
as ALL clustered indexes use UNIQUENESS, either because they are defined as
unique, or because SQL Server appends a GUID to the attribute definition to
make them unique. So, choosing an already uniquely defined key does keep
the overhead down.
4. Now, go through your tables and determine which columns are highly
selectible (high cardinality), that is, if you were to query the table, with
a specific value, what fraction of the table rows would you expect to
return? The fewer, the better candidate for a non-clustered index that
attribute would be. At this stage, I would concentrate solely on single
attribute column indexes. The reason is that statistical histograms are
only stored for the most significant attribute; thus, it is very important
for that attribute to be highly selectible.
The only reason having a composite index would be of value would be becuase
what is known as a "covering" index. A clustered index is a covering index
by definition. This is because a covering index has all the data defining
the index stored on the index page. So, if first name is apart of a name
index, where the last name is the first defining attribute, after the
page(s) is located by a search on last name, SQL Server can also pull the
first name attribute from the index page itself instead of have to run a
bookmark lookup to the referenced data page. Not too many indexes can be
defined this way. A clustered index can because the entire data row is
stored on the clustered index page because, for a clustered index, the index
page IS the data page. Another example would be a non-clustered index but
where every table attribute is used to define it, but that would be highly
redundant. So, besides the clustered index, there are not to many
candidates that could be defined for a covering index.
Moreover, having many singleton indexes provides SQL Server an oportunity to
pull several indexes to perform a set intersection routine, which is far
more efficient than having any one index with composite values.
5. Finally, run sp_createstats against your database. This will create
single-column statistics against every table and every column that does not
already have a supporting index. This can help you determine missed columns
that have high cardinality that could be assisted by creating a
non-clustered index. These statistics also assist the Optimizer to quickly
reject low cardinality columns.
Well, I know this covered quite a bit of gound, some of it not too well, but
there is a ton of literature out there that goes into a lot of detail on
this subject. You need to dig into it. But, a word of caution, many of
these publications are just flat-out wrong. Take everything you read,
your's truely included, with some scepticism. Spend time in the Query
Analyzer. Examine the execution plans. Use hints to override the
Optimizer, not as a production practice, but to see where and how the
Optimizer behaves and why it is far more likely to pick an execution plan
than you are.
Lastly, always make sure your statistics are up to date before you begin any
analysis work. The Optimizer is only as good as these statistics are.
Sincerely,
Anthony Thomas
--
"Maury Markowitz" <MauryMarkowitz@.discussions.microsoft.com> wrote in
message news:7058F853-8697-4DBA-9225-3B544D2E59A4@.microsoft.com...
I'm learning about indexes and performance, looking at some of my queries
and
comparing them with the Index Tuning Wizard (ITW) to figure things out.
BTW, is there a way to use the Profiler to "get the distinct queries [on
table x]"? I would really like to see every query on a table when looking at
the indexes on them, but I can't figure out how to do this.
Ok, so right now I'm looking at tblPrices, ~500,000 rows with a pkey, an int
fkey (security), two floats (one unused), a type, a date and a varchar notes
(typically empty). Data is unique on fkey, type and date. We insert about
500
rows a day in one big batch, so insert performance isn't _too_ much of an
issue, but because it's a batch you tend to notice it.
Indexes are currently on the pkey (clustered unique), fkey, type and date.
There is also a constraint on fkey/type/date.
My GUESS is that the common types of queries are...
1) price for this security and this date
2) prices for this date
3) prices for this security
I don't really know which ones are common or not, which is why I was asking
how I could find this out.
dbcc SHOW_STATISTICS (tblprices, type) said "0.0". According to the docs
this makes it an excellent target for an index. In reality there are only 5
values in total, and the VAST majority of the records are a single one. Can
anyone explain this output? Also, why didn't the ITW suggest removing it,
when it obviously will never be used?
ITW also suggested removing the index on date. This seems suspect, but I
can't prove it. The reason it's suspect is that we would have roughtly 500
prices per date, and about the same number per security. This would seem to
imply that either index should be useful depending on the query. We tend to
GROUP BY on the security, in order to get the newest price per security,
which would also seem to imply that the date index would be useful -- is
this
possible? Can it use the fkey to extract the unique securities, the date to
get the newest for all securities, and then hash match?
Basically ITW simply doesn't have enough information for me to make up my
mind. If I could see the queries, how often they occured, and what indexes
they used, then it would be useful. Now, however, I don't trust the numbers.
One last question about the compound constraint. Since the first column in
it is the fkey, should I instead make it an index and drop the fkey index?
What is the difference between a constraint and index in this context
anyway?
I assume that the constraint doesn't do a table search for every insert, so
there's some sort of index that it's using anyway?
Maury

Monday, March 19, 2012

Monitoring sql statement ran against a table

I need to know if there is a way to either record all queries ran again
a particular table or to fire off a trigger if a particualr row is
selected from that table in sql server 2005. The main issue I am
having is "select" statements. The reason I am attempting to do this is
its full fill part of some requirements for a project. I was hoping to
do this with sql server and not a third party solution if possible.
Thanks for any help,
Marty
--[vbcol=seagreen]
A free community forum for resolving all of your high tech issues.maznblue@.gmail.com wrote:

> I need to know if there is a way to either record all queries ran again
> a particular table or to fire off a trigger if a particualr row is
> selected from that table in sql server 2005. The main issue I am
> having is "select" statements. The reason I am attempting to do this is
> its full fill part of some requirements for a project. I was hoping to
> do this with sql server and not a third party solution if possible.
> Thanks for any help,
> Marty
> --
> A free community forum for resolving all of your high tech issues.
Use profiler
You can filter data as per your requirement.
use sp:stmtcompleted , sp:completed events and objecttype , objectname
column, which will show you details about triggers when event will be
triggered
Regards
Amish Shah|||Marty,
Try using SQL Profiler and put a filter on TextData column like
'SELECT%XXXX%' where XXXXis the table which you are looking for..
Jayesh
<maznblue@.gmail.com> wrote in message
news:1149817748.504429.74260@.i40g2000cwc.googlegroups.com...
>I need to know if there is a way to either record all queries ran again
> a particular table or to fire off a trigger if a particualr row is
> selected from that table in sql server 2005. The main issue I am
> having is "select" statements. The reason I am attempting to do this is
> its full fill part of some requirements for a project. I was hoping to
> do this with sql server and not a third party solution if possible.
> Thanks for any help,
> Marty
> --
> A free community forum for resolving all of your high tech issues.
>|||Jayesh -
Thanks for the info, it worked like a champ.
Marty
Jayesh Antony Jose wrote:[vbcol=seagreen]
> Marty,
> Try using SQL Profiler and put a filter on TextData column like
> 'SELECT%XXXX%' where XXXXis the table which you are looking for..
> Jayesh
>
> <maznblue@.gmail.com> wrote in message
> news:1149817748.504429.74260@.i40g2000cwc.googlegroups.com...

Monitoring sql statement ran against a table

I need to know if there is a way to either record all queries ran again
a particular table or to fire off a trigger if a particualr row is
selected from that table in sql server 2005. The main issue I am
having is "select" statements. The reason I am attempting to do this is
its full fill part of some requirements for a project. I was hoping to
do this with sql server and not a third party solution if possible.
Thanks for any help,
Marty
--
>> www.techmentors.net <<<<<
A free community forum for resolving all of your high tech issues.maznblue@.gmail.com wrote:
> I need to know if there is a way to either record all queries ran again
> a particular table or to fire off a trigger if a particualr row is
> selected from that table in sql server 2005. The main issue I am
> having is "select" statements. The reason I am attempting to do this is
> its full fill part of some requirements for a project. I was hoping to
> do this with sql server and not a third party solution if possible.
> Thanks for any help,
> Marty
> --
> >> www.techmentors.net <<<<<
> A free community forum for resolving all of your high tech issues.
Use profiler
You can filter data as per your requirement.
use sp:stmtcompleted , sp:completed events and objecttype , objectname
column, which will show you details about triggers when event will be
triggered
Regards
Amish Shah|||Marty,
Try using SQL Profiler and put a filter on TextData column like
'SELECT%XXXX%' where XXXXis the table which you are looking for..
Jayesh
<maznblue@.gmail.com> wrote in message
news:1149817748.504429.74260@.i40g2000cwc.googlegroups.com...
>I need to know if there is a way to either record all queries ran again
> a particular table or to fire off a trigger if a particualr row is
> selected from that table in sql server 2005. The main issue I am
> having is "select" statements. The reason I am attempting to do this is
> its full fill part of some requirements for a project. I was hoping to
> do this with sql server and not a third party solution if possible.
> Thanks for any help,
> Marty
> --
>> www.techmentors.net <<<<<
> A free community forum for resolving all of your high tech issues.
>|||Jayesh -
Thanks for the info, it worked like a champ.
Marty
Jayesh Antony Jose wrote:
> Marty,
> Try using SQL Profiler and put a filter on TextData column like
> 'SELECT%XXXX%' where XXXXis the table which you are looking for..
> Jayesh
>
> <maznblue@.gmail.com> wrote in message
> news:1149817748.504429.74260@.i40g2000cwc.googlegroups.com...
> >I need to know if there is a way to either record all queries ran again
> > a particular table or to fire off a trigger if a particualr row is
> > selected from that table in sql server 2005. The main issue I am
> > having is "select" statements. The reason I am attempting to do this is
> > its full fill part of some requirements for a project. I was hoping to
> > do this with sql server and not a third party solution if possible.
> >
> > Thanks for any help,
> >
> > Marty
> >
> > --
> >> www.techmentors.net <<<<<
> > A free community forum for resolving all of your high tech issues.
> >

Monitoring Query Performance

Is there any table in SQL Server 2000 which lists the queries currently
cached in memory and statistics on those queries e.g. execution time, number
of executions cost, etc. I would like to be able to determine the queries
that have the highest cost to target for performance tuning first.
Not really. You have some information in master..syscacheobjects, but that are only the cached plans
(those that are cached in the first place, of course). You can have a profiler trace running to pick
up the measures you are interested in and to your tuning based on that profiler trace.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Simon" <Simon@.discussions.microsoft.com> wrote in message
news:C5531A9C-7344-4734-B041-7A83FA31F5F8@.microsoft.com...
> Is there any table in SQL Server 2000 which lists the queries currently
> cached in memory and statistics on those queries e.g. execution time, number
> of executions cost, etc. I would like to be able to determine the queries
> that have the highest cost to target for performance tuning first.

Monday, March 12, 2012

Monitoring Query Performance

Is there any table in SQL Server 2000 which lists the queries currently
cached in memory and statistics on those queries e.g. execution time, number
of executions cost, etc. I would like to be able to determine the queries
that have the highest cost to target for performance tuning first.Not really. You have some information in master..syscacheobjects, but that are only the cached plans
(those that are cached in the first place, of course). You can have a profiler trace running to pick
up the measures you are interested in and to your tuning based on that profiler trace.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Simon" <Simon@.discussions.microsoft.com> wrote in message
news:C5531A9C-7344-4734-B041-7A83FA31F5F8@.microsoft.com...
> Is there any table in SQL Server 2000 which lists the queries currently
> cached in memory and statistics on those queries e.g. execution time, number
> of executions cost, etc. I would like to be able to determine the queries
> that have the highest cost to target for performance tuning first.

Monitoring Query Performance

Is there any table in SQL Server 2000 which lists the queries currently
cached in memory and statistics on those queries e.g. execution time, number
of executions cost, etc. I would like to be able to determine the queries
that have the highest cost to target for performance tuning first.Not really. You have some information in master..syscacheobjects, but that a
re only the cached plans
(those that are cached in the first place, of course). You can have a profil
er trace running to pick
up the measures you are interested in and to your tuning based on that profi
ler trace.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Simon" <Simon@.discussions.microsoft.com> wrote in message
news:C5531A9C-7344-4734-B041-7A83FA31F5F8@.microsoft.com...
> Is there any table in SQL Server 2000 which lists the queries currently
> cached in memory and statistics on those queries e.g. execution time, numb
er
> of executions cost, etc. I would like to be able to determine the queries
> that have the highest cost to target for performance tuning first.

Friday, March 9, 2012

monitoring applications querys

Hi there!

I'm programming in delphi and new to querieng MS SQL Server. Is there a way
to monitor the queries sent to the server and something of the returns. At
least execution time would be interesting ...

Thx in advance,

FritzOn 01.02.2007 16:19, Fritz Franz wrote:

Quote:

Originally Posted by

I'm programming in delphi and new to querieng MS SQL Server. Is there a way
to monitor the queries sent to the server and something of the returns. At
least execution time would be interesting ...


You can do this with the Profiler that comes with MSSQL.

robert|||"Robert Klemme" <shortcutter@.googlemail.comha scritto

Quote:

Originally Posted by

You can do this with the Profiler that comes with MSSQL.


Thanks a lot! This is a great help to me!

Regards, Fritz

Wednesday, March 7, 2012

Monitor Query's that come through SQL 2000

I need to know if there is a way to log all the queries that get sent to my
server. I have some queries that may be dogging my system but I don't have
access to the exact query. I'm new to SQL 2000 so if it is simple, that is
why. Thanks
ChrisCris, SQL Server Profiler does what you need.
Juraj
"Chris, Master of All Things Insignificant" wrote:
>I need to know if there is a way to log all the queries that get sent to my
>server. I have some queries that may be dogging my system but I don't have
>access to the exact query. I'm new to SQL 2000 so if it is simple, that is
>why. Thanks
> Chris
>|||Thanks...
You see, you answer the question and all the sudden, it becomes completely
obvious...
Chris
"Juraj Rojko" <juraj@.rojko.cz> wrote in message
news:%23cqij8C%23EHA.3120@.TK2MSFTNGP12.phx.gbl...
> Cris, SQL Server Profiler does what you need.
> Juraj
> "Chris, Master of All Things Insignificant" wrote:
>

Monitor Query's that come through SQL 2000

I need to know if there is a way to log all the queries that get sent to my
server. I have some queries that may be dogging my system but I don't have
access to the exact query. I'm new to SQL 2000 so if it is simple, that is
why. Thanks
ChrisCris, SQL Server Profiler does what you need.
Juraj
"Chris, Master of All Things Insignificant" wrote:
>I need to know if there is a way to log all the queries that get sent to my
>server. I have some queries that may be dogging my system but I don't have
>access to the exact query. I'm new to SQL 2000 so if it is simple, that is
>why. Thanks
> Chris
>|||Thanks...
You see, you answer the question and all the sudden, it becomes completely
obvious...
Chris
"Juraj Rojko" <juraj@.rojko.cz> wrote in message
news:%23cqij8C%23EHA.3120@.TK2MSFTNGP12.phx.gbl...
> Cris, SQL Server Profiler does what you need.
> Juraj
> "Chris, Master of All Things Insignificant" wrote:
>>I need to know if there is a way to log all the queries that get sent to
>>my server. I have some queries that may be dogging my system but I don't
>>have access to the exact query. I'm new to SQL 2000 so if it is simple,
>>that is why. Thanks
>> Chris
>

Monitor Query's that come through SQL 2000

I need to know if there is a way to log all the queries that get sent to my
server. I have some queries that may be dogging my system but I don't have
access to the exact query. I'm new to SQL 2000 so if it is simple, that is
why. Thanks
Chris
Cris, SQL Server Profiler does what you need.
Juraj
"Chris, Master of All Things Insignificant" wrote:
>I need to know if there is a way to log all the queries that get sent to my
>server. I have some queries that may be dogging my system but I don't have
>access to the exact query. I'm new to SQL 2000 so if it is simple, that is
>why. Thanks
> Chris
>
|||Thanks...
You see, you answer the question and all the sudden, it becomes completely
obvious...
Chris
"Juraj Rojko" <juraj@.rojko.cz> wrote in message
news:%23cqij8C%23EHA.3120@.TK2MSFTNGP12.phx.gbl...
> Cris, SQL Server Profiler does what you need.
> Juraj
> "Chris, Master of All Things Insignificant" wrote:
>

Saturday, February 25, 2012

Monitor CPU Usage by Query?

We are currently running MS SQL 8.0 on Windows 2003. I've been tasked
setting up a monitor that can identify queries that are causing CPU
spikes.
Does anyone have any suggestions about going about this? I'm at a
complete loss.The Profiler can do this. Create a new trace using the SQLProfilerStandard
trace template and add the CPU column. This will show both the query and the
CPU time, plus some other stuff. If this is a super busy production server
this can degrade performance so you might want to play with it on a test db
instance first. I typically run it from my desktop or from a remote desktop
and save the results to a local file cause if you write the result to the db
you are monitoring it too can impact your findings.
Netmon
"quixster@.gmail.com" wrote:
> We are currently running MS SQL 8.0 on Windows 2003. I've been tasked
> setting up a monitor that can identify queries that are causing CPU
> spikes.
> Does anyone have any suggestions about going about this? I'm at a
> complete loss.
>|||Thanks for the tip! Do you know of any way to use this or some other
utility to create an alert for administrators to receive when a query
has risen above a certain threshold? I can see how I can filter the
trace for CPU usage of a certain level, but I don't see how I could use
that to create an alert. Any thoughts?
On Oct 31, 2:21 pm, Netmon <Net...@.discussions.microsoft.com> wrote:
> The Profiler can do this. Create a new trace using the SQLProfilerStandard
> trace template and add the CPU column. This will show both the query and the
> CPU time, plus some other stuff. If this is a super busy production server
> this can degrade performance so you might want to play with it on a test db
> instance first. I typically run it from my desktop or from a remote desktop
> and save the results to a local file cause if you write the result to the db
> you are monitoring it too can impact your findings.
> Netmon
>
> "quixs...@.gmail.com" wrote:
> > We are currently running MS SQL 8.0 on Windows 2003. I've been tasked
> > setting up a monitor that can identify queries that are causing CPU
> > spikes.
> > Does anyone have any suggestions about going about this? I'm at a
> > complete loss.- Hide quoted text -- Show quoted text -|||You can also use this query to see what's using the CPU - highest to
lowest usage:
SELECT * FROM master..sysprocesses ORDER BY cpu DESC
Then, run this to see the actual query:
DBCC INPUTBUFFER (spid) -- spid being the # in the spid column from the
above query
hth
quixster@.gmail.com wrote:
> We are currently running MS SQL 8.0 on Windows 2003. I've been tasked
> setting up a monitor that can identify queries that are causing CPU
> spikes.
> Does anyone have any suggestions about going about this? I'm at a
> complete loss.|||You can also use the windows tool PerfMon on the server and set up your
own custom traces which can viewed realtime or logged to files and
replayed.
quixster@.gmail.com wrote:
> We are currently running MS SQL 8.0 on Windows 2003. I've been tasked
> setting up a monitor that can identify queries that are causing CPU
> spikes.
> Does anyone have any suggestions about going about this? I'm at a
> complete loss.|||No the trace is not going to do that. I'v heard that NetIQ has a product
that monitors for query execution but I'm not sure of the details on how it
works.
Like the other person (tootsuite) posted you could use perfmon however it
does not take sustained thresholds, that is if CPU > 90% over 60 seconds then
alert. So anytime the CPU is above your threshold it will alert even if it
is a short lived spike you get alerted.
I'm not sure the context within which you want to filter the trace to
generate an alert. If you have any enterprise server monitoring tools like
ProIT, MOM or Nimbus available you could just monitor the CPU via those and
have it email your admin.
Are you hoping to find the rogue/cpu intense queries in action? Or are you
desiring to know when the CPU gets hammered so an admin can log in and kill
the offending query or find out the user running it?
"quixster@.gmail.com" wrote:
> Thanks for the tip! Do you know of any way to use this or some other
> utility to create an alert for administrators to receive when a query
> has risen above a certain threshold? I can see how I can filter the
> trace for CPU usage of a certain level, but I don't see how I could use
> that to create an alert. Any thoughts?
>
> On Oct 31, 2:21 pm, Netmon <Net...@.discussions.microsoft.com> wrote:
> > The Profiler can do this. Create a new trace using the SQLProfilerStandard
> > trace template and add the CPU column. This will show both the query and the
> > CPU time, plus some other stuff. If this is a super busy production server
> > this can degrade performance so you might want to play with it on a test db
> > instance first. I typically run it from my desktop or from a remote desktop
> > and save the results to a local file cause if you write the result to the db
> > you are monitoring it too can impact your findings.
> >
> > Netmon
> >
> >
> >
> > "quixs...@.gmail.com" wrote:
> > > We are currently running MS SQL 8.0 on Windows 2003. I've been tasked
> > > setting up a monitor that can identify queries that are causing CPU
> > > spikes.
> >
> > > Does anyone have any suggestions about going about this? I'm at a
> > > complete loss.- Hide quoted text -- Show quoted text -
>|||Perfmon is just a general monitoring tool... you cannot see individual
queries.
Usually it works like this:
1. end users complain
2. look at overall picture using tool like perfmon or Quest Performance
Analysis, or even profiler (I find profiler to be flaky/unpredictable
at times, at least in 2000 it was terrible)
3. isolate queries using Quest based on various factors
4. most of our performance problems in the past have stemmed from
inadequate memory, which then in turns affects cpu, disk i/o, which
leads to blocking locks, etc - fortunately this has been resolved with
the purchase of higher performance servers + memory
so it's not enough to just isolate cpu performance, as many factors are
usually involved
so your cpu might be hammered, but the real underlying problem could be
lack of memory, etc but you wouldn't know that unless you can look at
all the various possible factors
Netmon wrote:
> No the trace is not going to do that. I'v heard that NetIQ has a product
> that monitors for query execution but I'm not sure of the details on how it
> works.
> Like the other person (tootsuite) posted you could use perfmon however it
> does not take sustained thresholds, that is if CPU > 90% over 60 seconds then
> alert. So anytime the CPU is above your threshold it will alert even if it
> is a short lived spike you get alerted.
> I'm not sure the context within which you want to filter the trace to
> generate an alert. If you have any enterprise server monitoring tools like
> ProIT, MOM or Nimbus available you could just monitor the CPU via those and
> have it email your admin.
> Are you hoping to find the rogue/cpu intense queries in action? Or are you
> desiring to know when the CPU gets hammered so an admin can log in and kill
> the offending query or find out the user running it?
> "quixster@.gmail.com" wrote:
> >
> > Thanks for the tip! Do you know of any way to use this or some other
> > utility to create an alert for administrators to receive when a query
> > has risen above a certain threshold? I can see how I can filter the
> > trace for CPU usage of a certain level, but I don't see how I could use
> > that to create an alert. Any thoughts?
> >
> >
> > On Oct 31, 2:21 pm, Netmon <Net...@.discussions.microsoft.com> wrote:
> > > The Profiler can do this. Create a new trace using the SQLProfilerStandard
> > > trace template and add the CPU column. This will show both the query and the
> > > CPU time, plus some other stuff. If this is a super busy production server
> > > this can degrade performance so you might want to play with it on a test db
> > > instance first. I typically run it from my desktop or from a remote desktop
> > > and save the results to a local file cause if you write the result to the db
> > > you are monitoring it too can impact your findings.
> > >
> > > Netmon
> > >
> > >
> > >
> > > "quixs...@.gmail.com" wrote:
> > > > We are currently running MS SQL 8.0 on Windows 2003. I've been tasked
> > > > setting up a monitor that can identify queries that are causing CPU
> > > > spikes.
> > >
> > > > Does anyone have any suggestions about going about this? I'm at a
> > > > complete loss.- Hide quoted text -- Show quoted text -
> >
> >|||The first; trying to find rogue/cpu intense queries in action so we can
identify not only the query causing the problem, but perhaps what else
is occuring simultaneously that could be contributing to the high CPU
situation. We have MOM, and it can alert us to CPU incidents, but I
was hoping to find something that could single out the individual
queries that are running to create the incident.
On Oct 31, 4:30 pm, Netmon <Net...@.discussions.microsoft.com> wrote:
> No the trace is not going to do that. I'v heard that NetIQ has a product
> that monitors for query execution but I'm not sure of the details on how it
> works.
> Like the other person (tootsuite) posted you could use perfmon however it
> does not take sustained thresholds, that is if CPU > 90% over 60 seconds then
> alert. So anytime the CPU is above your threshold it will alert even if it
> is a short lived spike you get alerted.
> I'm not sure the context within which you want to filter the trace to
> generate an alert. If you have any enterprise server monitoring tools like
> ProIT, MOM or Nimbus available you could just monitor the CPU via those and
> have it email your admin.
> Are you hoping to find the rogue/cpu intense queries in action? Or are you
> desiring to know when the CPU gets hammered so an admin can log in and kill
> the offending query or find out the user running it?
>
> "quixs...@.gmail.com" wrote:
> > Thanks for the tip! Do you know of any way to use this or some other
> > utility to create an alert for administrators to receive when a query
> > has risen above a certain threshold? I can see how I can filter the
> > trace for CPU usage of a certain level, but I don't see how I could use
> > that to create an alert. Any thoughts?
> > On Oct 31, 2:21 pm, Netmon <Net...@.discussions.microsoft.com> wrote:
> > > The Profiler can do this. Create a new trace using the SQLProfilerStandard
> > > trace template and add the CPU column. This will show both the query and the
> > > CPU time, plus some other stuff. If this is a super busy production server
> > > this can degrade performance so you might want to play with it on a test db
> > > instance first. I typically run it from my desktop or from a remote desktop
> > > and save the results to a local file cause if you write the result to the db
> > > you are monitoring it too can impact your findings.
> > > Netmon
> > > "quixs...@.gmail.com" wrote:
> > > > We are currently running MS SQL 8.0 on Windows 2003. I've been tasked
> > > > setting up a monitor that can identify queries that are causing CPU
> > > > spikes.
> > > > Does anyone have any suggestions about going about this? I'm at a
> > > > complete loss.- Hide quoted text -- Show quoted text -- Hide quoted text -- Show quoted text -|||Yes, you can do this easily using the query I provided in an earlier
post.
SELECT * FROM master..sysprocesses ORDER BY cpu DESC
DBCC INPUTBUFFER (spid) -- shows query
it can't get much clearer than this - it will list processes by CPU
usage, highest to lowest, at the given point in time you run the stmt
Or you can invest in some type of monitoring tool, like Quest
(performance analysis) or some other tool that you like if you need to
examine various performance indicators - this is what I would recommend
for a long term solution
hth
quixster@.gmail.com wrote:
> The first; trying to find rogue/cpu intense queries in action so we can
> identify not only the query causing the problem, but perhaps what else
> is occuring simultaneously that could be contributing to the high CPU
> situation. We have MOM, and it can alert us to CPU incidents, but I
> was hoping to find something that could single out the individual
> queries that are running to create the incident.
>
> On Oct 31, 4:30 pm, Netmon <Net...@.discussions.microsoft.com> wrote:
> > No the trace is not going to do that. I'v heard that NetIQ has a product
> > that monitors for query execution but I'm not sure of the details on how it
> > works.
> >
> > Like the other person (tootsuite) posted you could use perfmon however it
> > does not take sustained thresholds, that is if CPU > 90% over 60 seconds then
> > alert. So anytime the CPU is above your threshold it will alert even if it
> > is a short lived spike you get alerted.
> >
> > I'm not sure the context within which you want to filter the trace to
> > generate an alert. If you have any enterprise server monitoring tools like
> > ProIT, MOM or Nimbus available you could just monitor the CPU via those and
> > have it email your admin.
> >
> > Are you hoping to find the rogue/cpu intense queries in action? Or are you
> > desiring to know when the CPU gets hammered so an admin can log in and kill
> > the offending query or find out the user running it?
> >
> >
> >
> > "quixs...@.gmail.com" wrote:
> >
> > > Thanks for the tip! Do you know of any way to use this or some other
> > > utility to create an alert for administrators to receive when a query
> > > has risen above a certain threshold? I can see how I can filter the
> > > trace for CPU usage of a certain level, but I don't see how I could use
> > > that to create an alert. Any thoughts?
> >
> > > On Oct 31, 2:21 pm, Netmon <Net...@.discussions.microsoft.com> wrote:
> > > > The Profiler can do this. Create a new trace using the SQLProfilerStandard
> > > > trace template and add the CPU column. This will show both the query and the
> > > > CPU time, plus some other stuff. If this is a super busy production server
> > > > this can degrade performance so you might want to play with it on a test db
> > > > instance first. I typically run it from my desktop or from a remote desktop
> > > > and save the results to a local file cause if you write the result to the db
> > > > you are monitoring it too can impact your findings.
> >
> > > > Netmon
> >
> > > > "quixs...@.gmail.com" wrote:
> > > > > We are currently running MS SQL 8.0 on Windows 2003. I've been tasked
> > > > > setting up a monitor that can identify queries that are causing CPU
> > > > > spikes.
> >
> > > > > Does anyone have any suggestions about going about this? I'm at a
> > > > > complete loss.- Hide quoted text -- Show quoted text -- Hide quoted text -- Show quoted text -|||Thanks for all the help! I'm taking the tips mentioned on this post
and also talking to a Quest rep about "Spotlight on SQL Server
Enterprise" http://www.quest.com/spotlight_on_sql_server_enterprise/
This has been immensely helpful!
On Nov 1, 11:20 am, tootsu...@.gmail.com wrote:
> Yes, you can do this easily using the query I provided in an earlier
> post.
> SELECT * FROM master..sysprocesses ORDER BY cpu DESC
> DBCC INPUTBUFFER (spid) -- shows query
> it can't get much clearer than this - it will list processes by CPU
> usage, highest to lowest, at the given point in time you run the stmt
> Or you can invest in some type of monitoring tool, like Quest
> (performance analysis) or some other tool that you like if you need to
> examine various performance indicators - this is what I would recommend
> for a long term solution
> hth
>
> quixs...@.gmail.com wrote:
> > The first; trying to find rogue/cpu intense queries in action so we can
> > identify not only the query causing the problem, but perhaps what else
> > is occuring simultaneously that could be contributing to the high CPU
> > situation. We have MOM, and it can alert us to CPU incidents, but I
> > was hoping to find something that could single out the individual
> > queries that are running to create the incident.
> > On Oct 31, 4:30 pm, Netmon <Net...@.discussions.microsoft.com> wrote:
> > > No the trace is not going to do that. I'v heard that NetIQ has a product
> > > that monitors for query execution but I'm not sure of the details on how it
> > > works.
> > > Like the other person (tootsuite) posted you could use perfmon however it
> > > does not take sustained thresholds, that is if CPU > 90% over 60 seconds then
> > > alert. So anytime the CPU is above your threshold it will alert even if it
> > > is a short lived spike you get alerted.
> > > I'm not sure the context within which you want to filter the trace to
> > > generate an alert. If you have any enterprise server monitoring tools like
> > > ProIT, MOM or Nimbus available you could just monitor the CPU via those and
> > > have it email your admin.
> > > Are you hoping to find the rogue/cpu intense queries in action? Or are you
> > > desiring to know when the CPU gets hammered so an admin can log in and kill
> > > the offending query or find out the user running it?
> > > "quixs...@.gmail.com" wrote:
> > > > Thanks for the tip! Do you know of any way to use this or some other
> > > > utility to create an alert for administrators to receive when a query
> > > > has risen above a certain threshold? I can see how I can filter the
> > > > trace for CPU usage of a certain level, but I don't see how I could use
> > > > that to create an alert. Any thoughts?
> > > > On Oct 31, 2:21 pm, Netmon <Net...@.discussions.microsoft.com> wrote:
> > > > > The Profiler can do this. Create a new trace using the SQLProfilerStandard
> > > > > trace template and add the CPU column. This will show both the query and the
> > > > > CPU time, plus some other stuff. If this is a super busy production server
> > > > > this can degrade performance so you might want to play with it on a test db
> > > > > instance first. I typically run it from my desktop or from a remote desktop
> > > > > and save the results to a local file cause if you write the result to the db
> > > > > you are monitoring it too can impact your findings.
> > > > > Netmon
> > > > > "quixs...@.gmail.com" wrote:
> > > > > > We are currently running MS SQL 8.0 on Windows 2003. I've been tasked
> > > > > > setting up a monitor that can identify queries that are causing CPU
> > > > > > spikes.
> > > > > > Does anyone have any suggestions about going about this? I'm at a
> > > > > > complete loss.- Hide quoted text -- Show quoted text -- Hide quoted text -- Show quoted text -- Hide quoted text -- Show quoted text -|||Another product that you might want to consider is Idera Diagnostic Manager
http://www.idera.com/Products/SQLdm/Default.aspx
Lucas
"quixster@.gmail.com" wrote:
> Thanks for all the help! I'm taking the tips mentioned on this post
> and also talking to a Quest rep about "Spotlight on SQL Server
> Enterprise" http://www.quest.com/spotlight_on_sql_server_enterprise/
> This has been immensely helpful!
> On Nov 1, 11:20 am, tootsu...@.gmail.com wrote:
> > Yes, you can do this easily using the query I provided in an earlier
> > post.
> >
> > SELECT * FROM master..sysprocesses ORDER BY cpu DESC
> > DBCC INPUTBUFFER (spid) -- shows query
> >
> > it can't get much clearer than this - it will list processes by CPU
> > usage, highest to lowest, at the given point in time you run the stmt
> >
> > Or you can invest in some type of monitoring tool, like Quest
> > (performance analysis) or some other tool that you like if you need to
> > examine various performance indicators - this is what I would recommend
> > for a long term solution
> >
> > hth
> >
> >
> >
> > quixs...@.gmail.com wrote:
> > > The first; trying to find rogue/cpu intense queries in action so we can
> > > identify not only the query causing the problem, but perhaps what else
> > > is occuring simultaneously that could be contributing to the high CPU
> > > situation. We have MOM, and it can alert us to CPU incidents, but I
> > > was hoping to find something that could single out the individual
> > > queries that are running to create the incident.
> >
> > > On Oct 31, 4:30 pm, Netmon <Net...@.discussions.microsoft.com> wrote:
> > > > No the trace is not going to do that. I'v heard that NetIQ has a product
> > > > that monitors for query execution but I'm not sure of the details on how it
> > > > works.
> >
> > > > Like the other person (tootsuite) posted you could use perfmon however it
> > > > does not take sustained thresholds, that is if CPU > 90% over 60 seconds then
> > > > alert. So anytime the CPU is above your threshold it will alert even if it
> > > > is a short lived spike you get alerted.
> >
> > > > I'm not sure the context within which you want to filter the trace to
> > > > generate an alert. If you have any enterprise server monitoring tools like
> > > > ProIT, MOM or Nimbus available you could just monitor the CPU via those and
> > > > have it email your admin.
> >
> > > > Are you hoping to find the rogue/cpu intense queries in action? Or are you
> > > > desiring to know when the CPU gets hammered so an admin can log in and kill
> > > > the offending query or find out the user running it?
> >
> > > > "quixs...@.gmail.com" wrote:
> >
> > > > > Thanks for the tip! Do you know of any way to use this or some other
> > > > > utility to create an alert for administrators to receive when a query
> > > > > has risen above a certain threshold? I can see how I can filter the
> > > > > trace for CPU usage of a certain level, but I don't see how I could use
> > > > > that to create an alert. Any thoughts?
> >
> > > > > On Oct 31, 2:21 pm, Netmon <Net...@.discussions.microsoft.com> wrote:
> > > > > > The Profiler can do this. Create a new trace using the SQLProfilerStandard
> > > > > > trace template and add the CPU column. This will show both the query and the
> > > > > > CPU time, plus some other stuff. If this is a super busy production server
> > > > > > this can degrade performance so you might want to play with it on a test db
> > > > > > instance first. I typically run it from my desktop or from a remote desktop
> > > > > > and save the results to a local file cause if you write the result to the db
> > > > > > you are monitoring it too can impact your findings.
> >
> > > > > > Netmon
> >
> > > > > > "quixs...@.gmail.com" wrote:
> > > > > > > We are currently running MS SQL 8.0 on Windows 2003. I've been tasked
> > > > > > > setting up a monitor that can identify queries that are causing CPU
> > > > > > > spikes.
> >
> > > > > > > Does anyone have any suggestions about going about this? I'm at a
> > > > > > > complete loss.- Hide quoted text -- Show quoted text -- Hide quoted text -- Show quoted text -- Hide quoted text -- Show quoted text -
>