Showing posts with label comparing. Show all posts

Wednesday, March 28, 2012

More index questions

I'm learning about indexes and performance, looking at some of my queries and
comparing them with the Index Tuning Wizard (ITW) to figure things out.
BTW, is there a way to use the Profiler to "get the distinct queries [on
table x]"? I would really like to see every query on a table when looking at
the indexes on them, but I can't figure out how to do this.
Ok, so right now I'm looking at tblPrices, ~500,000 rows with a pkey, an int
fkey (security), two floats (one unused), a type, a date and a varchar notes
(typically empty). Data is unique on fkey, type and date. We insert about 500
rows a day in one big batch, so insert performance isn't _too_ much of an
issue, but because it's a batch you tend to notice it.
Indexes are currently on the pkey (clustered unique), fkey, type and date.
There is also a constraint on fkey/type/date.
My GUESS is that the common types of queries are...
1) price for this security and this date
2) prices for this date
3) prices for this security
I don't really know which ones are common or not, which is why I was asking
how I could find this out.
dbcc SHOW_STATISTICS (tblprices, type) said "0.0". According to the docs
this makes it an excellent target for an index. In reality there are only 5
values in total, and the VAST majority of the records are a single one. Can
anyone explain this output? Also, why didn't the ITW suggest removing it,
when it obviously will never be used?
ITW also suggested removing the index on date. This seems suspect, but I
can't prove it. The reason it's suspect is that we would have roughly 500
prices per date, and about the same number per security. This would seem to
imply that either index should be useful depending on the query. We tend to
GROUP BY on the security, in order to get the newest price per security,
which would also seem to imply that the date index would be useful -- is this
possible? Can it use the fkey to extract the unique securities, the date to
get the newest for all securities, and then hash match?
Basically ITW simply doesn't have enough information for me to make up my
mind. If I could see the queries, how often they occurred, and what indexes
they used, then it would be useful. Now, however, I don't trust the numbers.
One last question about the compound constraint. Since the first column in
it is the fkey, should I instead make it an index and drop the fkey index?
What is the difference between a constraint and index in this context anyway?
I assume that the constraint doesn't do a table search for every insert, so
there's some sort of index that it's using anyway?
Maury

"Maury Markowitz" <MauryMarkowitz@discussions.microsoft.com> wrote in
message news:7058F853-8697-4DBA-9225-3B544D2E59A4@microsoft.com...
A foreign key constraint will generate a check on insert and update to
guarantee that a related row exists in the primary key table. The
constraint does not store any data, and is no help at all in querying. In
general, you should support each foreign key constraint with an index on the
foreign key table on the foreign key columns.
Now, your bigger question: what indexing scheme to use. First off, since
you only insert 500 rows/day, and you do it in one big batch, you should
probably change your clustered index. You will need to run Profiler to
capture a typical query workload, and this will drive your choice of index.
I suspect that the right clustered index is a unique clustered index on
either (security, date, type) or (date, security, type). Which one will depend
on the query workload. Then you will have an index for your primary key,
and a clustered index for your alternate key. Once you have that, rerun ITW
and see where you are.
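That suggested change might look something like this sketch (the index name is made up and the columns are taken from the post above; the existing clustered primary key would first have to be recreated as nonclustered before this can run):

```sql
-- Cluster on the alternate key instead of the surrogate pkey
CREATE UNIQUE CLUSTERED INDEX IX_tblPrices_Security_Date_Type
    ON tblPrices (security, date, type)
```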
> I assume that the constraint doesn't do a table search for every insert,
> so
> there's some sort of index that it's using anyway?
Yes it uses the primary key index on the PK table, but that doesn't help
your query.
David|||I did respond, quite a bit, to another post you had, but what you are asking
is a pretty generic question, since we do not have specific enough
information here to give detailed responses; so, about the best you are going
to get is a generic response... but that can be useful too.
1. Relational theory dictates that before an entity is even in 1NF (first
normal form) you MUST specify a key. Any entity may have one or multiple
candidate keys. There may be a natural key. And there may even be
complex, compound (composite) keys. One of these has to be the Primary Key.
The rest are alternate keys and should have Unique Key Constraints defined
on them. Oftentimes, the primary business key is a lousy physical key; so
a surrogate, auto-generated IDENTITY is constructed and then chosen as the
Primary Key, physically. This is fine, but keep in mind that the physical
database is just an implementation of a Logical Database Model that supports
a very specific Business Model, and that NO surrogate key defines a business
key; thus the term surrogate. The physical alternate keys, which model the
Business Keys, should always be constrained in order to enforce the logical
business rules.
2. As you go through the normalization process, you will, hopefully, end up
at 3NF (third normal form), or higher, and a physical consequence is
that you will end up with dependent tables that must be constrained with
Foreign Key definitions. In the dependents, these FKCs should be indexed.
In the parent, they must be uniquely constrained by a PKC or UCs. In the
child, they will normally be reasonably selective, if not highly
selective.
3. EVERY TABLE should have a clustered index defined. This is a purely
physical argument and is highly dependent on the SQL Server DBMS. Heaps,
which are what you get when a clustered index is NOT defined, perform
horribly in terms of storage management and database maintenance. But many
have trouble choosing a proper definition. Here are some generic tips. The
attributes should be modified minimally, at most. Most often you
would be better served with a single attribute as opposed to a composite
definition. Choose an attribute that is queried often, but by a range
mechanism as opposed to singleton selects (medium cardinality). Then,
finally, the attribute should be as narrow as possible, which is helped by
having single-column definitions; but even there, numerics are better than
characters, and narrow characters are better than wider ones. You should
examine the alternate key definitions as possible clustered index candidates,
as ALL clustered indexes require UNIQUENESS, either because they are defined as
unique, or because SQL Server appends a uniquifier to the attribute definition to
make them unique. So choosing an already uniquely defined key does keep
the overhead down.
4. Now, go through your tables and determine which columns are highly
selective (high cardinality); that is, if you were to query the table with
a specific value, what fraction of the table rows would you expect to
return? The fewer, the better a candidate for a non-clustered index that
attribute would be. At this stage, I would concentrate solely on single-attribute
indexes. The reason is that statistical histograms are
only stored for the most significant attribute; thus, it is very important
for that attribute to be highly selective.
The main reason a composite index would be of value is as what is known
as a "covering" index. A clustered index is a covering index
by definition. This is because a covering index has all the data defining
the index stored on the index page. So, if first name is a part of a name
index, where the last name is the first defining attribute, after the
page(s) is located by a search on last name, SQL Server can also pull the
first name attribute from the index page itself instead of having to run a
bookmark lookup to the referenced data page. Not too many indexes can be
defined this way. A clustered index can, because the entire data row is
stored on the clustered index page; for a clustered index, the index
page IS the data page. Another example would be a non-clustered index
where every table attribute is used to define it, but that would be highly
redundant. So, besides the clustered index, there are not too many
candidates that could be defined as covering indexes.
Moreover, having many singleton indexes gives SQL Server an opportunity to
pull several indexes to perform a set intersection routine, which is far
more efficient than having any one index with composite values.
5. Finally, run sp_createstats against your database. This will create
single-column statistics for every column of every table that does not
already have a supporting index. This can help you find missed columns
with high cardinality that could be assisted by creating a
non-clustered index. These statistics also help the Optimizer quickly
reject low-cardinality columns.
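A minimal sketch of that last step (the table and column names are the ones from the original post, used purely for illustration):

```sql
-- Create single-column statistics on all eligible columns in the database
EXEC sp_createstats

-- Then inspect the density/selectivity of a given column's statistics
DBCC SHOW_STATISTICS ('tblPrices', 'type')
```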
Well, I know this covered quite a bit of ground, some of it not too well, but
there is a ton of literature out there that goes into a lot of detail on
this subject. You need to dig into it. But a word of caution: many of
these publications are just flat-out wrong. Take everything you read,
yours truly included, with some skepticism. Spend time in the Query
Analyzer. Examine the execution plans. Use hints to override the
Optimizer, not as a production practice, but to see where and how the
Optimizer behaves and why it is far more likely to pick a good execution plan
than you are.
Lastly, always make sure your statistics are up to date before you begin any
analysis work. The Optimizer is only as good as these statistics are.
Sincerely,
Anthony Thomas
--
"Maury Markowitz" <MauryMarkowitz@discussions.microsoft.com> wrote in
message news:7058F853-8697-4DBA-9225-3B544D2E59A4@microsoft.com...

More help with Contains

I had posted an earlier question about the CONTAINS predicate that uses the
full text search.
The entire process of comparing a column with a keyword variable using the
contains predicate was not only taking too long (cursors) but also breaking
the stored procedure when noise words were encountered.
I have come up with an alternative that involves copying all the data I want
to compare into a table say Table A. This table will contain three columns
ColA - The description Column
ColB - The column containing the keywords
ColC - A Bit/Flag column.
Can I do this?
Update TableA
SET ColC = 1
WHERE CONTAINS(COLA, COLB)
Can I use two columns in the CONTAINS predicate?
I know I need to refresh the full text search index after I fill Table A
with data and this user needs DBO permission.
Can a user - datareader/datawriter - be assigned privileges to update the full
text?
Thanks in advance
Sanjay

Sanjay,
What is the version of SQL Server, and on what OS platform do you have it
installed? Could you post the full output of -- SELECT @@version -- as this
is most helpful info in troubleshooting SQL FTS issues.
Yes, you can use CONTAINS in an UPDATE statement's WHERE clause. In fact, I've
been planning on writing an article entitled "Putting Full Text Search to
work" that would demonstrate how to use FTS queries with INSERT, DELETE and
UPDATE to do real database table data manipulation; the article is still
on the drawing board & perhaps will be posted to my blog soon...
Q. Can I use two columns in the contains predicate?
A. No. However, you can use multiple CONTAINS clauses, i.e., CONTAINS(COLA,
'search1') AND/OR CONTAINS(COLA, 'search2') in SQL Server 2000. Furthermore,
SQL Server 2005 (Yukon) will support multiple columns in the CONTAINS
syntax.
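For example, a sketch of that SQL Server 2000 workaround (the column and search words here are made up):

```sql
SELECT *
FROM TableA
WHERE CONTAINS(ColA, '"widget"')
   OR CONTAINS(ColA, '"gadget"')
```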
Q. Can a user - datareader/datawriter - be assigned privileges to update the
full text?
A. No, as sysadmin privileges are required to update the Full Text
Index/Catalog via the system stored procs: sp_fulltext*. See the SQL Server 2000
BOL title "sp_fulltext_catalog" - "Only members of the sysadmin fixed server
role and the db_owner (or higher) fixed database roles can execute
sp_fulltext_catalog."
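For reference, the kind of call that requires those privileges looks like this (the catalog name is hypothetical):

```sql
-- Repopulate a full-text catalog; only sysadmin/db_owner can run this
EXEC sp_fulltext_catalog 'MyFTCatalog', 'start_full'
```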
Regards,
John
--
SQL Full Text Search Blog
http://spaces.msn.com/members/jtkane/
"Sanjay Pais" <spaisatnospammarketlinksolutions.com> wrote in message
news:u2Rtzp#AFHA.2112@TK2MSFTNGP14.phx.gbl...
|||Microsoft SQL Server 7.00 - 7.00.1094 (Intel X86)
May 29 2003 15:21:25
Copyright (c) 1988-2002 Microsoft Corporation
Enterprise Edition on Windows NT 4.0 (Build 1381: Service Pack 6)
Thanks
|||Sql Server 7
|||I set up full text searches on a table to test my solution and it DID NOT
WORK!!!
So it seems like you can't do this
Select * from TableA WHERE CONTAINS(COLA, COLB)
Where COLA & COLB are both columns in TableA
COLA has a full text search index defined on it.
Thanks for the replies though.
The details of the SQL Server are...
***************************************************
Microsoft SQL Server 7.00 - 7.00.1094 (Intel X86)
May 29 2003 15:21:25
Copyright (c) 1988-2002 Microsoft Corporation
Enterprise Edition on Windows NT 4.0 (Build 1381: Service Pack 6)
***************************************************
|||Sanjay,
I never said that you could use multiple columns in ONE contains statement
with SQL Server 2000.
What I said is that you could use MULTIPLE contains clauses in your WHERE
clause with SQL Server 2000, specifically:
CONTAINS(COLA, 'search_word_here') AND CONTAINS(COLA, 'search_word_here')
Note that this is not CONTAINS(COLA, COLB), but CONTAINS(COLA,
'search_word_here').
More than one column per contains statement is ONLY supported in SQL Server
2005, which is still in limited beta release.
Hope that helps,
John
--
SQL Full Text Search Blog
http://spaces.msn.com/members/jtkane/
|||I'm sorry if I gave you that impression. My original question was just that
and I finally got down to setting up the environment to see if I could
compare two columns, and subsequently discovered I could not. I however have
another brainwave that involves the use of a UDF that takes the keyword as a
parameter and looks up ColA using a unique id to compare the value of the
row.
So essentially I plan to do this:

Select IdentityInd, ColA, ColB
From TableA
Where dbo.UDFContains(IdentityInd, ColB) = 1

And UDFContains will look like:

CREATE FUNCTION dbo.UDFContains (@IdentityInd INT, @ColB VARCHAR(100))
RETURNS BIT
AS
BEGIN
    IF EXISTS (SELECT IdentityInd FROM TableA
               WHERE IdentityInd = @IdentityInd
                 AND CONTAINS(ColA, @ColB))
        RETURN 1
    RETURN 0
END

I am not sure if this will work but I plan to try and see for myself.
Thanks for the reply though
Sanjay
|||Oh crap!!! SQL Server 7 does not allow UDFs!!!!!
Do I have any other alternatives?
|||You're welcome, Sanjay,
As they say in the movies..., let's try again (or something like that ;-). If
not a UDF, how about a cursor operation? I'm not sure if this will meet your
needs, but you may be able to fit it to your requirements. The table
jtkane_Tables contains a list of keywords that is the source of the @keyword
variable for FT searching of the jtkane_SysComments table.

SET NOCOUNT ON
DECLARE keyword_cursor CURSOR FAST_FORWARD
FOR
    SELECT TableName FROM jtkane_Tables ORDER BY TableName
CREATE TABLE #table_name (tblname varchar(100))
OPEN keyword_cursor
DECLARE @keyword varchar(100)
-- Fetch the first row in the cursor.
FETCH NEXT FROM keyword_cursor INTO @keyword
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT DISTINCT SysNames FROM jtkane_SysComments
    WHERE CONTAINS(*, @keyword)
    INSERT INTO #table_name (tblname) VALUES (@keyword)
    FETCH NEXT FROM keyword_cursor INTO @keyword
END
CLOSE keyword_cursor
DEALLOCATE keyword_cursor
SET NOCOUNT OFF
SELECT * FROM #table_name ORDER BY tblname
DROP TABLE #table_name

If the above doesn't fit your needs, check out the use of
"sp_help_fulltext_catalogs_cursor" in the code example at:
http://spaces.msn.com/members/jtkane/Blog/cns!1pWDBCiDX1uvH5ATJmNCVLPQ!308.entry
Enjoy!
John
--
SQL Full Text Search Blog
http://spaces.msn.com/members/jtkane/
|||The problem with cursors is that they take too long, are inefficient, and
create all sorts of problems with locks (though I have yet to experience
problems with locking).
We currently have an inefficient procedure that uses cursors, which I am
trying to replace; hence the discourse :)
Thanks once again for the reply though!

Friday, March 23, 2012

Month & Year comparison

Hi,

The problem here is comparing month & year ranges rather than the entire date. I am passing a start month and start year along with an end month and end year, but the query that I have written does not seem to be appropriate.

My Query:

MONTH(Date_Added) BETWEEN @StartMonth AND @EndMonth
AND YEAR(Date_Added) BETWEEN @StartYear AND @EndYear

For some reason this works fine if I specify the start month as 1 and the end month as 12, whereas if I specify 1 and 1 it does not work. Can anyone help me refine my query so that I can search for a particular date between mm/yyyy and mm/yyyy?

Thanks in advance.

Santosh

So if you put in between January and March and between 2004 and 2006, and then select for April 2005, it wouldn't qualify, right?

Do you really mean to select that way, or do you want to select between January 2004 and March 2006?

If you want to select continuous dates, set your first date to the first day of the month and your last date to the last day of the month and then search between those two values.

Here's a function to set a date to the first day of the month. You can use the same logic to set a date to the last day of the month.

http://www.sql-server-helper.com/functions/get-first-day-of-month.aspx
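As a sketch, the first-day-of-month logic usually looks like this on SQL Server 2000 or later (UDFs are not available on SQL Server 7; the function name is made up, not taken from the linked page):

```sql
CREATE FUNCTION dbo.FirstDayOfMonth (@d DATETIME)
RETURNS DATETIME
AS
BEGIN
    -- Count whole months since day 0, then add them back to day 0
    RETURN DATEADD(month, DATEDIFF(month, 0, @d), 0)
END
```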

|||I would like to select between January 2004 and March 2006.|||

That's what I figured you wanted. You can't treat the month and year logic separately. Basically you have to combine your year/month pairs into single date values and then do a BETWEEN for those dates. Set the earlier date to the first of the month and your later date to the end of the month.

|||

WHERE Date_Added >= CAST(CAST(@StartYear AS VARCHAR(4)) + '/' + CAST(@StartMonth AS VARCHAR(2)) + '/01' AS DATETIME)
  AND Date_Added < DATEADD(month, 1, CAST(CAST(@EndYear AS VARCHAR(4)) + '/' + CAST(@EndMonth AS VARCHAR(2)) + '/01' AS DATETIME))

|||

Yeah, that's the idea.
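Putting the thread's advice together, a sketch of the half-open range approach might look like the following (the table name is hypothetical; `Date_Added` and the four parameters come from the thread):

```sql
-- Sketch: build first-of-month boundary dates from the four month/year
-- parameters, then filter with a half-open range so any time-of-day
-- component of Date_Added is still included.
DECLARE @StartMonth INT, @StartYear INT, @EndMonth INT, @EndYear INT
SET @StartMonth = 1  SET @StartYear = 2004
SET @EndMonth   = 3  SET @EndYear   = 2006

DECLARE @RangeStart DATETIME, @RangeEnd DATETIME
SET @RangeStart = CAST(CAST(@StartYear AS VARCHAR(4)) + '/' + CAST(@StartMonth AS VARCHAR(2)) + '/01' AS DATETIME)
-- first day of the month AFTER the end month:
SET @RangeEnd   = DATEADD(month, 1, CAST(CAST(@EndYear AS VARCHAR(4)) + '/' + CAST(@EndMonth AS VARCHAR(2)) + '/01' AS DATETIME))

SELECT *
FROM   SomeTable                -- hypothetical table name
WHERE  Date_Added >= @RangeStart
  AND  Date_Added <  @RangeEnd  -- half-open: includes all of March 2006
```

A side benefit of this form over the `MONTH()`/`YEAR()` version: because `Date_Added` is not wrapped in a function, an index on that column can be used for the range seek.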

Monday, March 12, 2012

Monitoring Inserted data and comparing against selected data

I made a huge load script in SQL Server 2000 that loads data from many
tables (selecting some columns from each) into one single table, and I
want to test the loaded data against the selected data to make sure
the loaded data matches the source.
Is there code or a tool to perform this test or monitoring? Please
help, it's urgent.tamatem wrote:

Quote:

Originally Posted by

I made a huge load script in SQL Server 2000 that loads data from many
tables (selecting some columns from each) into one single table, and I
want to test the loaded data against the selected data to make sure
the loaded data matches the source.
Is there code or a tool to perform this test or monitoring? Please
help, it's urgent.


So you did a select ... into query? I'd be curious to know why exactly
you want to compare the two tables. Do you not trust the server to do
it correctly? It isn't really like a file copy or network transfer
where it's subject to errors.|||I agree.

You can trust SQL Server to do the job correctly or raise an error. Just check
@@ERROR to make sure that no error happened. I don't think you would
actually need to check whether the data got inserted properly.

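That said, if you still want a sanity check after the load, a row-count comparison plus a set difference is a cheap way to do it. A sketch (all table and column names below are hypothetical):

```sql
-- Sketch: compare the loaded table against one of its source tables.
-- 1) Row counts should match what you expect from the load.
SELECT (SELECT COUNT(*) FROM SourceTableA) AS source_rows,
       (SELECT COUNT(*) FROM LoadedTable)  AS loaded_rows

-- 2) Rows present in the source but missing from the target show up
--    here; an empty result means the selected columns match.
-- NOTE: EXCEPT requires SQL Server 2005+; on SQL Server 2000 a
-- LEFT JOIN ... WHERE target.key IS NULL achieves the same check.
SELECT col1, col2 FROM SourceTableA
EXCEPT
SELECT col1, col2 FROM LoadedTable
```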

Monitoring Disk Consumption/DB Growth

Hello, I have a 125GB SQL DB whose growth I need to monitor. I've been
manually checking it each day and comparing the growth of the .bak file from my
DB Maint. Plan. Is there a better way to measure daily growth of the DB?
Also, I have 4 other drives with non-DB image files that are added daily. Is
there an automated way or administrative tool to monitor the decline of free
disk space on 4 logical drives? Thanks, Pancho
You can check out DataStudio from AgileInfoSoftware at
http://www.agileinfollc.com; under the Performance tab it has a database
performance monitoring facility.
"Pancho" <Pancho@.discussions.microsoft.com> wrote in message
news:A3A8C7B3-382D-467E-A8F0-DE80AF1FFF79@.microsoft.com...
> Hello, I have a 125GB SQL DB whose growth I need to monitor. I've been
> manually checking it each day and comparing growth of the .bak file from
> my
> DB Maint. Plan. Is there a better way to measure daily growth of the DB.
> Also, I have 4 other drives with non-DB image files that are added daily.
> Is
> there an automated way or Administrative Tool to monitor the decline of
> free
> disk space on 4 logical drives? Thanks, Pancho
|||Hi
There is a script on
http://www.sqlservercentral.com/Scri...0&categoryid=1
"SQL Server Database File Sizes" which may do what you require.
John
"Pancho" wrote:

> Hello, I have a 125GB SQL DB whose growth I need to monitor. I've been
> manually checking it each day and comparing growth of the .bak file from my
> DB Maint. Plan. Is there a better way to measure daily growth of the DB.
> Also, I have 4 other drives with non-DB image files that are added daily. Is
> there an automated way or Administrative Tool to monitor the decline of free
> disk space on 4 logical drives? Thanks, Pancho
|||Check out AgileInfoSoftware DataStudio for space monitoring.
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:25900858-65EE-41B6-9308-761E29428555@.microsoft.com...[vbcol=seagreen]
> Hi
> There is a script on
> http://www.sqlservercentral.com/Scri...0&categoryid=1
> "SQL Server Database File Sizes" which may do what you require.
> John
> "Pancho" wrote:
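For an automated check along the lines of the script suggested above, a small batch run daily from a SQL Agent job can log both numbers. A sketch (the logging table is a made-up name; `xp_fixeddrives` is undocumented but has long been available for per-drive free space):

```sql
-- Sketch: capture database size once a day so growth can be trended.
CREATE TABLE dbo.SizeHistory (
    sampled_at DATETIME      NOT NULL DEFAULT GETDATE(),
    db_name    SYSNAME       NOT NULL,
    size_mb    DECIMAL(18,2) NOT NULL
)

-- Database file sizes: sysfiles reports size in 8 KB pages.
INSERT INTO dbo.SizeHistory (db_name, size_mb)
SELECT DB_NAME(), SUM(size) * 8.0 / 1024
FROM   sysfiles              -- sys.master_files on SQL Server 2005+

-- Free space per logical drive, in MB (undocumented but widely used).
EXEC master..xp_fixeddrives
```

Comparing successive rows in `SizeHistory` gives daily growth directly, rather than inferring it from the .bak file size.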
