
Wednesday, March 28, 2012

More info on SQL Server 2005 Express problem install

When I run the setup there is a System Configuration Check page. I get 13
successes and 2 warnings. The 2 warnings are:
Minimum hardware requirement - there is about 37GB free space, 512MB memory,
an Intel Pentium 4 2.4GHz CPU.
IIS Feature requirement - says not installed. Is IIS REQUIRED? Help says
some features may be disabled - that's all.
Then after going through wizard it complained that the SQL Server native
client wasn't available - so I will try to download and install that
separately.
Then I get error SQL Server database Services - setup failed. Refer to
setup log - where is that?
There are lots of log files in c:\Program Files\Microsoft SQL Server
subdirectories.
Help please. This is proving a lot harder than I imagined.
Angus|||I was installing SQL Server 2005 Express Advanced and gave up on that and
installed just standard SQL Server 2005 Express - that worked without a
hitch.
"Angus" <nospam@gmail.com> wrote in message
news:#mrYry7kHHA.680@TK2MSFTNGP06.phx.gbl...
> When I run the setup there is a System Configuration Check page. I get 13
> successes and 2 warnings. The 2 warnings are:
> Minimum hardware requirement - there is about 37GB free space, 512MB
memory,
> an Intel Pentium 4 2.4GHz CPU.
> IIS Feature requirement - says not installed. Is IIS REQUIRED? Help says
> some features may be disabled - that's all.
> Then after going through wizard it complained that the SQL Server native
> client wasn't available - so I will try to download and install that
> separately.
> Then I get error SQL Server database Services - setup failed. Refer to
> setup log - where is that?
> There are lots of log files in c:\Program Files\Microsoft SQL Server
> subdirectories.
> Help please. This is proving a lot harder than I imagined.
> Angus
>
>|||Hi Angus
"Angus" wrote:
> I was installing SQL Server 2005 Express Advanced and gave up on that and
> installed just standard SQL Server 2005 Express - that worked without a
> hitch.
>
The differences are shown at
http://msdn.microsoft.com/vstudio/express/sql/compare/default.aspx. If you
wanted Reporting Services then you would need the Advanced edition and IIS, but
it should not stop the install of the other components. I am wondering if you
installed the toolkit by mistake?
John


More fun with DTS Packages

Hi guys,

Well, another day, another adventure in the land of editing DTS packages. Anyone out there run into a package that will run for one sysadmin but not for another? I have a package that can't initialize its connections (SQL Server doesn't exist or login denied) for me but runs fine for my boss from his workstation. Note, permissions have been thoroughly investigated both on the database and the server, plus I can run other packages on that server.

Bad or missing files somewhere perhaps? I logged onto the server (where the packages and the database live) and ran some packages. While all my other packages run OK from the server, the package that is giving me fits from my box gets an error message about

'cannot find msdb..sp_log_dtspackage_begin'.

Has this file given anyone else issues? My other packages run from the server without this error.

Any other advice besides 'don't use those evil DTS packages'?

Thanks for your help!|||DTS is not evil, silly girl. Microsoft is evil!

who does your boss authenticate to when he/she/it logs in?
who do you authenticate to when you log in?

sp_log_dtspackage_begin should be owned by dbo, is that the case on your server?
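One quick way to check both questions at once (a sketch; sysobjects and USER_NAME are the SQL 2000-era catalog, so adjust if your server differs):

```sql
-- Does the proc exist in msdb, and who owns it?
USE msdb
GO
SELECT name, USER_NAME(uid) AS owner
FROM sysobjects
WHERE name = 'sp_log_dtspackage_begin'
  AND type = 'P'
GO
```

If that returns no rows, the proc is missing; if the owner isn't dbo, that could explain the difference between users.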

What happens if you recompile the sp? Make sure you are logged in as SA!|||Hah! Just another example of absolute power corrupting absolutely?

Well, we all get authenticated (on the network, I'm assuming) through the same box--there are only 15 of us so we have one domain controller. Supposedly we are all processed similarly.

As for the sp_, I cannot find this puppy. I'm assuming that if it exists (let's not rule out a bogus error message), it's not something MS is displaying. Searching for it doesn't turn up anything. It certainly doesn't seem to be anything anyone here wrote.

Ahhhh, to have a local DBA. Currently I'm recreating the whole @#^$#!$ thing (it's rather robust, to say the least--30 tasks, most with individual field scripting in data transformation objects). I'm on object 4 and so far it runs fine.

Have you ever had one of these become corrupted?

Should I get a bigger stick?|||Bigger stick? Only if it's long enough to reach the prior developer!

When I was asking about authentication I meant on the SQL box. Are you authenticated as dbo whereas your boss is authenticated as sa?

This sp was shipped by Microsoft. It should be:

CREATE PROCEDURE sp_log_dtspackage_begin
    @name sysname,
    @description NVARCHAR(1000),
    @id UNIQUEIDENTIFIER,
    @versionid UNIQUEIDENTIFIER,
    @lineagefull UNIQUEIDENTIFIER,
    @lineageshort INT,
    @starttime DATETIME,
    @computer sysname,
    @operator sysname
AS
SET NOCOUNT ON

INSERT sysdtspackagelog (
    name,
    description,
    id,
    versionid,
    lineagefull,
    lineageshort,
    starttime,
    computer,
    operator
) VALUES (
    @name,
    @description,
    @id,
    @versionid,
    @lineagefull,
    @lineageshort,
    @starttime,
    @computer,
    @operator
)
RETURN 0 -- SUCCESS

GO

If you cannot find it in the list of stored procedures in the msdb database I would say you have bigger problems. You should be able to copy this from another server.

SQL Server has been a stable product since 7.0. Stable in the sense that things don't mysteriously stop working. Data can and does get corrupt, but this sounds like a stored procedure was deleted! Maybe time to check security and change the sa password?

as for corruption, I like to think of Microsoft as a drug dealer, they give you a free taste of their drugs till you get hooked and then WHOMP they have you!|||Thanks for the sp. Adding it to the database's sp's didn't make a difference. However, it is either not displaying or simply not there on any of the databases on this server. Hard to tell what's going on. The packages that run, run with or without. The one that doesn't, similarly, won't run either way.

Sorry if I'm being obtuse, after all I'm just a stray VB/VBA programmer who took a job without realizing that there was no DBA for the back end here. (A situation you do not find when writing to Oracle. Probably MS is making things appear to be way too easy. I bet it's a marketing tactic, 'Use our software and you can fire those expensive DBA's. Any schmoe can ride herd on SQL Server--even applications programmers.') However, under the Server's Security, Server Login Properties I am a system administrator. I'm assuming it doesn't really get any better than that.

Note, the box I'm currently using is considered to be seriously corrupted--at least by the previous user--and when I installed the sp_ my icons for Enterprise Manager blanked out on the desktop and icon bars. Is it a sign? Should I start looking for Holy Water and a disk with SQL Server on it?|||The sp must be added to the msdb database, if it isn't already there. Please verify that the sp exists in the msdb db.

sa = god in SQL Server speak.

<soapbox>I hate companies that think they don't need a dba and that a developer should be able to handle it. IMNSHO this is one of Microsoft's weaknesses. You will rarely see Oracle installed by someone that doesn't know what's going on. This is the #1 reason why Oracle is perceived to out perform SQL Server.</soapbox>

Before you lose your sanity I would verify whether you are working on a stable box or not. I am wondering if your DTS problems started suddenly or were never working right to begin with? If they just started to fail around the time your predecessor left, you may be dealing with another issue.|||Oh dear, I am being a bonehead. Yes, it is there in the msdb.db. Forgot about that whole thing. For this level of hand-holding I may have to fly to Texas and deliver that ride.

About the box, it was my boss's until he managed to get an upgrade so I don't think he's been sabotaging (sp?) it, however, there have been many non-kosher installs. I'll be getting a recycled box whenever someone has time to rebuild a spare.

The only remnant of the previous developer here is the big crack in my monitor case--I hear the keyboard was destroyed. Apparently this occurred immediately before his giving 2 minutes notice... Guess he was short on a sense of humor.

In any case, I'm slowly hacking my way through rebuilding this thing (still on object 4 though). It's a good learning experience and I obviously have much to learn since I just met up with DTS two weeks ago. The VBScript part feels very comfortable and the two packages I have built so far seem to do a nice job. It's a great idea to have this nifty visual interface. Now if they could only add some decent debugging and flesh out the reference material to actually cover the entire product it would be grand.

I'm sure I'll beat this package into submission eventually.

Thanks for your help!|||Don't worry about the ride, I needed a good laugh today!

DTS is a VERY cool tool, mainly because it was written to work with almost any provider, not just SQL Server. When I worked at Exxon we used it heavily for Oracle and DB2.

Who is the owner of the DTS package? After re-reading your original post I have to believe this is an ownership/permissions issue.|||Well, I hope I made you laugh with me, not at me--but I guess either way is OK.

DTS is a cool tool and I think I'll be pretty good at it in a few months. Looks like I'm going to get lots of practice until they drag me away to do customizations on the front end.

Currently I am shown as the owner. However, it didn't start out that way. We have a weird (read dysfunctional) setup here in that this package is used on a server in NY. However, development occurs in Boulder. We do not have access to the same databases or servers. They emailed me a copy of the package (which was working) for modifications/bug fixes. Then the fun began because, among other things, our databases have different names. 30 objects all pointing at connections to nowhere...

I do believe that I have changed all connection strings to point where I can get a connection to a local copy of the NY database. I've also tried adding a new connection and pointing them there. Initially I suspected that the connections changes were the problem, but it is not intuitively obvious why it will run on my boss's box if that is the case.

None of this, of course, bodes well for the idea of sending the package back to NY and having it run in their environment. Right now I have tentative plans (assuming I can manage to test my changes here) of walking someone through just adding the changes on their system so that we change the one that is running rather than replacing it. Far from an ideal scenario either way.|||What happens if your boss logs on your box and runs the package?
Do you run the package from the GUI or DTSRUN.EXE?

Given your current suspect box and different connection strings, I think walking a warm body through the changes would be a safe thing to do, for now.

BTW The only person I ever laugh at is myself.|||Hey, I only started programming after they gave us a GUI. I was being an opera singer when that whole DOS prompt thing was in vogue. Then almost ten years ago I discovered how easy it is to make money by working on computers... Well, it is easier than trying to get people to pay you decent money (or give you health benefits) to sing.

To answer your question, I click on that there big green arrow thingy or go to Package>Execute.

I'll see if I can get my boss to log onto my box. Might be a while before I can corral him into that.|||Well, I guess they don't pay the boss the big $$$ for nothing.

It wouldn't run for him either, but having received packages for editing from NY before, he knew of one more place to look. He figured out that the path for the package log was set to 'local'. I do have a local copy of SQL Server but it isn't running. Apparently his local copy was running this morning when he tried it on his machine.

Whew! It always seems so easy when it's fixed.

Thanks for all your help--not to mention the entertainment value! File this one in the 'strange but true' category.|||Glad you found a use for your boss. Sometimes they are useful to have around.

School of hard knocks! Always the best teacher.

Not sure I would have come up with that one but in time all magic is revealed.

Monday, March 26, 2012

More dumb questions

Still learning so bear with me
1) Say I have a complicated select query (many joins) that I'd like users
to run. Which typically
performs better and why: creating a view with the select query or creating
a stored procedure with
the select query. Assume users don't care which mechanism is used.
2) I read where the rows column in sysindexes shows the number of rows in a
table and is a good
alternative to select count(*). The same article went on to state that this
figure can become inaccurate.
What can cause this figure to not match what a select count(*) would bring
back?
3) with select statements with many left outer joins used, do the order of
the joins matter? Do the joins
filter as they go along or does sql server figure it all out?
4) what is an IX lock? I can't figure it out from my reading. How is it
different from the X lock? Is there
a relationship?|||1. No telling which will be more efficient. You need to compare both
execution plans to determine that. They may be the same, and maybe not.
2. The values in sysindexes can get 'de-synchronized' from the actual
table row count as the result of bulk insert operations, deleting large
blocks of rows, etc. Re-synchronize by updating statistics.
3. Order of JOINs 'can' matter. SQL will do its best to 'figure it out',
but perhaps not the way you intended.
4. IX Lock = Intent Exclusive. Next in line to have an Exclusive (X) lock. IX
lock is marking one's position in the queue to place an exclusive lock (X)
as soon as any preceding shared locks are removed.
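For question 2, the two counts can be compared directly (a sketch; 'MyTable' is a placeholder name, and sysindexes is the SQL 2000-era catalog):

```sql
-- Cached row count (indid 0 = heap, indid 1 = clustered index)
SELECT rows
FROM sysindexes
WHERE id = OBJECT_ID('MyTable')
  AND indid IN (0, 1)

-- Actual row count; the two can drift apart after bulk operations
SELECT COUNT(*) FROM MyTable
```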
Arnie Rowland, Ph.D.
Westwood Consulting, Inc
Most good judgment comes from experience.
Most experience comes from bad judgment.
- Anonymous
"Dodo Lurker" <none@noemailplease> wrote in message
news:bPadneyD79wGI3LZnZ2dnUVZ_qGdnZ2d@comcast.com...
> Still learning so bear with me
> 1) Say I have a complicated select query (many joins) that I'd like users
> to run. Which typically
> performs better and why: creating a view with the select query or
> creating
> a stored procedure with
> the select query. Assume users don't care which mechanism is used.
> 2) I read where the rows column in sysindexes shows the number of rows in
> a
> table and is a good
> alternative to select count(*). The same article went on to state that
> this
> figure can become inaccurate.
> What can cause this figure to not match what a select count(*) would
> bring
> back?
> 3) with select statements with many left outer joins used, do the order
> of
> the joins matter? Do the joins
> filter as they go along or does sql server figure it all out?
> 4) what is an IX lock? I can't figure it out from my reading. How is it
> different from the X lock? Is there
> a relationship?
>|||The decision between a view and a stored procedure is usually based on how
the data will be used. A view can be used like a table, but a stored
procedure cannot. A stored procedure can take parameters, and include other
statements, and error checking. There is lots of info available about
optimizing stored procedures so search the Knowledgebase at
http://support.microsoft.com/search/?adv=1 It's impossible to say which
will perform better. It completely depends on what you're doing, what your
parameters are, how the query or proc is called, etc, etc.
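To make the distinction concrete, here is the same query exposed both ways (a sketch; the table and object names are hypothetical):

```sql
-- A view: usable like a table, but takes no parameters
CREATE VIEW dbo.vOrderCounts AS
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID
GO

-- It can then be filtered and joined like any table:
-- SELECT * FROM dbo.vOrderCounts WHERE OrderCount > 5

-- A stored procedure: takes parameters and can hold extra logic
CREATE PROCEDURE dbo.GetOrderCounts @MinOrders INT AS
SET NOCOUNT ON
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID
HAVING COUNT(*) >= @MinOrders
GO
```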
The values in sysindexes can get out of date when you do certain bulk
operations. SQL Server doesn't always update the counts for all bulk inserts
and truncates. You can run DBCC UPDATEUSAGE to correct the values. UPDATE
STATISTICS will not do it.
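The correction can be run directly (a sketch; the database and table names are placeholders):

```sql
-- Recalculate the row and page counts stored in sysindexes
DBCC UPDATEUSAGE ('MyDatabase', 'MyTable')
```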
The order of joins should not matter. The query is optimized as a whole and
may be completely rewritten internally.
An IX lock means INTENT-EXCLUSIVE, and is mainly a protection mechanism. It
does not mean a process is waiting for an X lock; if it were waiting, it
would be blocked, it doesn't get a special kind of lock to indicate waiting.
An intent lock is acquired at higher granularities when a regular lock is
acquired on a lower granularity unit. For example, if you have an exclusive
lock on a row, you will get an IX lock on the page and another IX lock on
the table, which will keep other processes from getting a lock on the whole
page or table. So yes, there is a relationship between X and IX.
Two IX locks are compatible with each other (if two processes each have X
locks on separate rows in the same table, they will each have IX locks on
the table itself), but X and IX are not compatible. You can read about "Lock
Compatibility" in the Books Online.
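The row/page/table hierarchy is easy to observe (a sketch; 'MyTable' and its columns are placeholders, and sp_lock is the SQL 2000-era way to list locks):

```sql
BEGIN TRAN
    -- Take an exclusive lock on a single row
    UPDATE MyTable SET SomeCol = SomeCol WHERE KeyCol = 1
    -- Expect an X lock on the key/row, plus IX locks on its page and table
    EXEC sp_lock @@SPID
ROLLBACK TRAN
```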
I suggest if you need to follow up on any of these questions, you start a
separate thread. It's a bit confusing to have so many separate topics in a
single message. They are not dumb questions, and each topic is certainly
worth of its own discussion. There is more info you can read about each of
these topics. Once you've done that, feel free to post follow-up questions.
HTH
Kalen Delaney, SQL Server MVP
"Dodo Lurker" <none@.noemailplease> wrote in message
news:bPadneyD79wGI3LZnZ2dnUVZ_qGdnZ2d@comcast.com...
> Still learning so bear with me
> 1) Say I have a complicated select query (many joins) that I'd like users
> to run. Which typically
> performs better and why: creating a view with the select query or
> creating
> a stored procedure with
> the select query. Assume users don't care which mechanism is used.
> 2) I read where the rows column in sysindexes shows the number of rows in
> a
> table and is a good
> alternative to select count(*). The same article went on to state that
> this
> figure can become inaccurate.
> What can cause this figure to not match what a select count(*) would
> bring
> back?
> 3) with select statements with many left outer joins used, do the order
> of
> the joins matter? Do the joins
> filter as they go along or does sql server figure it all out?
> 4) what is an IX lock? I can't figure it out from my reading. How is it
> different from the X lock? Is there
> a relationship?
>|||Thank you. From now on, I'll start separate threads. Sorry for these
questions. I am a VB developer
who's been tabbed to be the "database guy" since the company does not want
to hire a real
dba *shrug*.
"Kalen Delaney" <replies@public_newsgroups.com> wrote in message
news:OBeH6WNyGHA.2572@TK2MSFTNGP06.phx.gbl...
> The decision between a view and a stored procedure is usually based on how
> the data will be used. A view can be used like a table, but a stored
> procedure cannot. A stored procedure can take parameters, and include
other
> statements, and error checking. There is lots of info available about
> optimizing stored procedures so search the Knowledgebase at
> http://support.microsoft.com/search/?adv=1 It's impossible to say which
> will perform better. It completely depends on what you're doing, what your
> parameters are, how the query or proc is called, etc, etc.
> The values in sysindexes can get out of date when you do certain bulk
> operations. SQL Server doesn't always update the counts for all bulk
inserts
> and truncates. You can run DBCC UPDATEUSAGE to correct the values. UPDATE
> STATISTICS will not do it.
> The order of joins should not matter. The query is optimized as a whole
and
> may be completely rewritten internally.
> An IX lock means INTENT-EXCLUSIVE, and is mainly a protection mechanism.
It
> does not mean a process is waiting for an X lock; if it were waiting, it
> would be blocked, it doesn't get a special kind of lock to indicate
waiting.
> An intent lock is acquired a higher granularities when a regular lock is
> acquired on a lower granularity unit. For example, if you have an
exclusive
> lock on a row, you will get an IX lock on the page and another IX lock on
> the table, which will keep other processes from getting a lock on the
whole
> page or table. So yes, there is a relationship between X and IX.
> Two IX locks are compatible with each other (if two processes each have X
> locks on separate rows in the same table, they will each have IX locks on
> the table itself), but X and IX are not compatible. You can read about
"Lock
> Compatibility" in the Books Online.
> I suggest if you need to follow up on any of these questions, you start a
> separate thread. It's a bit confusing to have so many separate topics in a
> single message. They are not dumb questions, and each topic is certainly
> worth of its own discussion. There is more info you can read about each of
> these topics. Once you've done that, feel free to post follow-up
questions.
> --
> HTH
> Kalen Delaney, SQL Server MVP
>
> "Dodo Lurker" <none@noemailplease> wrote in message
> news:bPadneyD79wGI3LZnZ2dnUVZ_qGdnZ2d@comcast.com...
>|||Don't worry, they're all valid questions.
Do you have access to the documentation (Books Online) that comes with SQL
Server? If you can't find it, that should be your first question.
HTH
Kalen Delaney, SQL Server MVP
"Dodo Lurker" <none@noemailplease> wrote in message
news:J_6dnTrx3rlQ2W3ZnZ2dnUVZ_qGdnZ2d@comcast.com...
> Thank you. From now on, I'll start separate threads. Sorry for these
> questions. I am VB developer
> who's been tabbed to be the "database guy" since the company does not want
> to hire a real
> dba *shrug*.
> "Kalen Delaney" <replies@public_newsgroups.com> wrote in message
> news:OBeH6WNyGHA.2572@TK2MSFTNGP06.phx.gbl...
>|||I do and I did read them. I just can't seem to get my arms around some of
the topics so
I look for a layman's explanation.
Thank you for your help!
"Kalen Delaney" <replies@public_newsgroups.com> wrote in message
news:#MVNBRSyGHA.5048@TK2MSFTNGP05.phx.gbl...
> Don't worry, they're all valid questions.
> Do you have access to the documentation (Books Online) that comes with SQL
> Server? If you can't find it, that should be your first question.
> --
> HTH
> Kalen Delaney, SQL Server MVP
>
> "Dodo Lurker" <none@noemailplease> wrote in message
> news:J_6dnTrx3rlQ2W3ZnZ2dnUVZ_qGdnZ2d@comcast.com...
>

> > creating
> > a stored procedure with
> > the select query. Assume users don't care which mechanism is used.
> >
> > 2) I read where the rows column in sysindexes shows the number of rows
in
> > a
> > table and is a good
> > alternative to select count(*). The same article went on to state that
> > this
> > figure can become inaccurate.
> > What can cause this figure to not match what a select count(*) would
> > bring
> > back?
> >
> > 3) with select statements with many left outer joins used, do the order
> > of
> > the joins matter? Do the joins
> > filter as they go along or does sql server figure it all out?
> >
> > 4) what is an IX lock? I can't figure it out from my reading. How is
it
> > different from the X lock? Is there
> > a relationship?
> >
> >
>|||Don't worry, they're all valid questions.
Do you have access to the documentation (Books Online) that comes with SQL
Server? If you can't find it, that should be your first question.
--
HTH
Kalen Delaney, SQL Server MVP
"Dodo Lurker" <none@.noemailplease> wrote in message
news:J_6dnTrx3rlQ2W3ZnZ2dnUVZ_qGdnZ2d@.comcast.com...
> Thank you. From now on, I'll start separate threads. Sorry for these
> questions. I am VB developer
> who's been tabbed to be the "database guy" since the company does not want
> to hire a real
> dba *shrug*.
> "Kalen Delaney" <replies@.public_newsgroups.com> wrote in message
> news:OBeH6WNyGHA.2572@.TK2MSFTNGP06.phx.gbl...
>> The decision between a view and a stored procedure is usually based on
>> how
>> the data will be used. A view can be used like a table, but a stored
>> procedure cannot. A stored procedure can take parameters, and include
> other
>> statements, and error checking. There is lots of info available about
>> optimizing stored procedures so search the Knowledgebase at
>> http://support.microsoft.com/search/?adv=1 It's impossible to say which
>> will perform better. It completely depends on what you're doing, what
>> your
>> parameters are, how the query or proc is called, etc, etc.
>> The values in sysindexes can get out of date when you do certain bulk
>> operations. SQL Server doesn't always update the counts for all bulk
> inserts
>> and truncates. You can run DBCC UPDATEUSAGE to correct the values. UPDATE
>> STATISTICS will not do it.
>> The order of joins should not matter. The query is optimized as a whole
> and
>> may be completely rewritten internally.
>> An IX lock means INTENT-EXCLUSIVE, and is mainly a protection mechanism.
> It
>> does not mean a process is waiting for an X lock; if it were waiting, it
>> would be blocked, it doesn't get a special kind of lock to indicate
> waiting.
>> An intent lock is acquired a higher granularities when a regular lock is
>> acquired on a lower granularity unit. For example, if you have an
> exclusive
>> lock on a row, you will get an IX lock on the page and another IX lock on
>> the table, which will keep other processes from getting a lock on the
> whole
>> page or table. So yes, there is a relationship between X and IX.
>> Two IX locks are compatible with each other (if two processes each have X
>> locks on separate rows in the same table, they will each have IX locks on
>> the table itself), but X and IX are not compatible. You can read about
> "Lock
>> Compatibility" in the Books Online.
>> I suggest if you need to follow up on any of these questions, you start a
>> separate thread. It's a bit confusing to have so many separate topics in
>> a
>> single message. They are not dumb questions, and each topic is certainly
>> worth of its own discussion. There is more info you can read about each
>> of
>> these topics. Once you've done that, feel free to post follow-up
> questions.
>> --
>> HTH
>> Kalen Delaney, SQL Server MVP
>>
>> "Dodo Lurker" <none@.noemailplease> wrote in message
>> news:bPadneyD79wGI3LZnZ2dnUVZ_qGdnZ2d@.comcast.com...
>> > Still learning so bear with me
>> >
>> > 1) Say I have a complicated select query (many joins) that I'd like
> users
>> > to run. Which typically
>> > performs better and why: creating a view with the select query or
>> > creating
>> > a stored procedure with
>> > the select query. Assume users don't care which mechanism is used.
>> >
>> > 2) I read where the rows column in sysindexes shows the number of rows
> in
>> > a
>> > table and is a good
>> > alternative to select count(*). The same article went on to state that
>> > this
>> > figure can become inaccurate.
>> > What can cause this figure to not match what a select count(*) would
>> > bring
>> > back?
>> >
>> > 3) with select statements with many left outer joins used, do the
>> > order
>> > of
>> > the joins matter? Do the joins
>> > filter as they go along or does sql server figure it all out?
>> >
>> > 4) what is an IX lock? I can't figure it out from my reading. How is
> it
>> > different from the X lock? Is there
>> > a relationship?
>> >
>> >
>>
>|||I do and I did read them. I just can't seem to get my arms around some of
the topics so
I look for a layman's explanation.
Thank you for your help!
"Kalen Delaney" <replies@.public_newsgroups.com> wrote in message
news:#MVNBRSyGHA.5048@.TK2MSFTNGP05.phx.gbl...
> Don't worry, they're all valid questions.
> Do you have access to the documentation (Books Online) that comes with SQL
> Server? If you can't find it, that should be your first question.
> --
> HTH
> Kalen Delaney, SQL Server MVP
>
> "Dodo Lurker" <none@.noemailplease> wrote in message
> news:J_6dnTrx3rlQ2W3ZnZ2dnUVZ_qGdnZ2d@.comcast.com...
> > Thank you. From now on, I'll start separate threads. Sorry for these
> > questions. I am VB developer
> > who's been tabbed to be the "database guy" since the company does not
want
> > to hire a real
> > dba *shrug*.
> >
> > "Kalen Delaney" <replies@.public_newsgroups.com> wrote in message
> > news:OBeH6WNyGHA.2572@.TK2MSFTNGP06.phx.gbl...
> >> The decision between a view and a stored procedure is usually based on
> >> how
> >> the data will be used. A view can be used like a table, but a stored
> >> procedure cannot. A stored procedure can take parameters, and include
> > other
> >> statements, and error checking. There is lots of info available about
> >> optimizing stored procedures so search the Knowledgebase at
> >> http://support.microsoft.com/search/?adv=1 It's impossible to say
which
> >> will perform better. It completely depends on what you're doing, what
> >> your
> >> parameters are, how the query or proc is called, etc, etc.
> >>
> >> The values in sysindexes can get out of date when you do certain bulk
> >> operations. SQL Server doesn't always update the counts for all bulk
> > inserts
> >> and truncates. You can run DBCC UPDATEUSAGE to correct the values.
UPDATE
> >> STATISTICS will not do it.
> >>
> >> The order of joins should not matter. The query is optimized as a whole
> > and
> >> may be completely rewritten internally.
> >>
> >> An IX lock means INTENT-EXCLUSIVE, and is mainly a protection
mechanism.
> > It
> >> does not mean a process is waiting for an X lock; if it were waiting,
it
> >> would be blocked, it doesn't get a special kind of lock to indicate
> > waiting.
> >> An intent lock is acquired a higher granularities when a regular lock
is
> >> acquired on a lower granularity unit. For example, if you have an
> > exclusive
> >> lock on a row, you will get an IX lock on the page and another IX lock
on
> >> the table, which will keep other processes from getting a lock on the
> > whole
> >> page or table. So yes, there is a relationship between X and IX.
> >>
> >> Two IX locks are compatible with each other (if two processes each have
X
> >> locks on separate rows in the same table, they will each have IX locks
on
> >> the table itself), but X and IX are not compatible. You can read about
> > "Lock
> >> Compatibility" in the Books Online.
> >>
> >> I suggest if you need to follow up on any of these questions, you start
a
> >> separate thread. It's a bit confusing to have so many separate topics
in
> >> a
> >> single message. They are not dumb questions, and each topic is
certainly
> >> worth of its own discussion. There is more info you can read about each
> >> of
> >> these topics. Once you've done that, feel free to post follow-up
> > questions.
> >>
> >> --
> >> HTH
> >> Kalen Delaney, SQL Server MVP
> >>
> >>
> >> "Dodo Lurker" <none@.noemailplease> wrote in message
> >> news:bPadneyD79wGI3LZnZ2dnUVZ_qGdnZ2d@.comcast.com...
> >> > Still learning so bear with me
> >> >
> >> > 1) Say I have a complicated select query (many joins) that I'd like
> > users
> >> > to run. Which typically
> >> > performs better and why: creating a view with the select query or
> >> > creating
> >> > a stored procedure with
> >> > the select query. Assume users don't care which mechanism is used.
> >> >
> >> > 2) I read where the rows column in sysindexes shows the number of
rows
> > in
> >> > a
> >> > table and is a good
> >> > alternative to select count(*). The same article went on to state
that
> >> > this
> >> > figure can become inaccurate.
> >> > What can cause this figure to not match what a select count(*) would
> >> > bring
> >> > back?
> >> >
> >> > 3) with select statements with many left outer joins used, do the
> >> > order
> >> > of
> >> > the joins matter? Do the joins
> >> > filter as they go along or does sql server figure it all out?
> >> >
> >> > 4) what is an IX lock? I can't figure it out from my reading. How
is
> > it
> >> > different from the X lock? Is there
> >> > a relationship?
> >> >
> >> >
> >>
> >>
> >
> >
>

More confusion - tran and snapshot repl

When I look at my publisher and the publications in EM, they say they are
transaction replication.
HOWEVER, when I run sp_helppullsubscription on both of the subscriber DBs,
they return a publication_type of 1 (well except for one of them which
returns a 0 and is the one causing me high latency issues.) BOL says 1 =
snapshot and 0 = transactional replication.
Can someone please explain what in the heck is going on? My brain is about
to explode.
Thx,
Kristy
That's strange. I just deployed a transactional and snapshot publication and
verified that BOL is accurate here.
can you review the descriptions to make sure that they are referring to the
current publications? Sometimes data from old publications remains.
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Kristy" <pleasereplyby@.posting.com> wrote in message
news:u8Cc2lEVGHA.4248@.TK2MSFTNGP10.phx.gbl...
> When I look at my publisher and the publications in EM, they say they are
> transaction replication.
> HOWEVER, when I run sp_helppullsubscription on both of the subscriber DBs,
> they return a publication_type of 1 (well except for one of them which
> returns a 0 and is the one causing me high latency issues.) BOL says 1 =
> snapshot and 0 = transactional replication.
> Can someone please explain what in the heck is going on? My brain is about
> to explode.
> Thx,
> Kristy
>
|||Not exactly sure what you mean by "review the descriptions". There is another thread on this that gives a little more info. It's called "Confused about snapshot repl". It has the scripted versions of the subscriptions. Let me know if you need anything else.
Thanks,
--Kristy
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message news:uvuc5PXWGHA.2376@.TK2MSFTNGP03.phx.gbl...
> That's strange. I just deployed a transactional and snapshot publication and
> verified that BOL is accurate here.
> can you review the descriptions to make sure that they are referring to
> current productions, sometimes data from old publications remain.
> --
> Hilary Cotter
> Director of Text Mining and Database Strategy
> RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
> This posting is my own and doesn't necessarily represent RelevantNoise's
> positions, strategies or opinions.
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
>
> "Kristy" <pleasereplyby@.posting.com> wrote in message
> news:u8Cc2lEVGHA.4248@.TK2MSFTNGP10.phx.gbl...
>

More browser support for Safari and Firefox?

When I try to run a simple table report using Safari or Firefox the experience is less than stellar.

Using Safari the report never returns; basically the page body is blank.

Using Firefox the table layout is very compressed.

Are any changes in the works to allow the report server to be used by non-IE browsers?

We run in a multi-platform environment and need the multiple browser support.

Thanks.

--sean

SQL Server 2005 Reporting Services

Hi sean

try adding this into ReportingServices.css file ( in C:\Program Files\Microsoft SQL Server\MSSQL.3\Reporting Services\ReportManager\Styles)

/* Fix report IFRAME height for Firefox */
.DocMapAndReportFrame
{
min-height: 860px;
}

After adding this, some changes have to be made in the report as well:

Put the table inside a Rectangle.

Deploy the report.

Hope this solves the problem.

For Safari I am not sure; even I am facing lots of problems with that browser.

If you find any solution, please let me know.

Friday, March 23, 2012

Monthly Subscriptions

Looking for assistance out of a difficult spot. I have several subscriptions
that need to run on the last day of each month. However, Reporting Services
has a problem if I set the schedule to run on the 30th or 31st of each month.
Has anyone figured out how to do this? Currently, I must manually reset the
schedule on each subscription at the beginning of each month to run on the
last day. I must be overlooking something obvious as I can't imagine
Reporting Services would have such an oversight. Can anyone help?
Thanks very much|||This is a hack, but for every subscription you set up, RS creates a SQL Server Agent job with (amazingly!) the same schedule. You could, and "could" is the operative word, have another SQL Agent job that runs once a day at a specific time and, if it's the end of the month, then push the same job step as is in the subscription.
HTH
--
TIM ELLISON
"Andy" <Andy@.discussions.microsoft.com> wrote in message news:A21B735C-ACE2-4402-87EA-5CA15CB71DE5@.microsoft.com...
> Looking for assistance out of a difficult spot. I have several subscriptions
> that need to run on the last day of each month. However, Reporting Services
> has a problem if I set the schedule to run on the 30th or 31st of each month.
> Has anyone figured out how to do this? Currently, I must manually reset the
> schedule on each subscription at the beginning of each month to run on the
> last day. I must be overlooking something obvious as I can't imagine
> Reporting Services would have such an oversight. Can anyone help?
> Thanks very much|||Thanks Tim for the suggestion. Unfortunately, I don't have access to the SQL
Server to be able to access the agent.
Any other thoughts?
Anyone?
"Tim Ellison" wrote:
> This is a hack, but for every subscription you set up, RS creates a SQL Server Agent job with (amazingly!) the same schedule. You could and "could" is the operative word, have another sql agent job that runs once a day at a specific time and if it's the end of the month, then push the same job step as is in the subscription.
> HTH
> --
> TIM ELLISON
> "Andy" <Andy@.discussions.microsoft.com> wrote in message news:A21B735C-ACE2-4402-87EA-5CA15CB71DE5@.microsoft.com...
> Looking for assistance out of a difficult spot. I have several subscriptions
> that need to run on the last day of each month. However, Reporting Services
> has a problem if I set the schedule to run on the 30th or 31st of each month.
> Has anyone figured out how to do this? Currently, I must manually reset the
> schedule on each subscription at the beginning of each month to run on the
> last day. I must be overlooking somthing obvious as I can't imagine
> Reporting Services would have such an oversight. Can anyone help?
> Thanks very much|||I would use a data-driven subscription.
A data-driven subscription runs for each record returned in a specified
query (set when you set it up).
Set the schedule to run every day and set the query to be:
SELECT 1
WHERE CASE DATEPART(month, @.DATE)
WHEN DATEPART(month, DATEADD(day, 1, @.DATE)) THEN 0
ELSE 1
END = 1
This will return a record anytime that it is the last day of the month.
Hope it helps.
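The CASE expression above simply asks whether adding one day changes the month, which happens exactly on the last day of a month. The same test sketched in Python, for illustration (the function name is mine):

```python
from datetime import date, timedelta

def is_last_day_of_month(d: date) -> bool:
    # Mirrors the T-SQL CASE: tomorrow falls in a different month
    # precisely when d is the last day of its month.
    return (d + timedelta(days=1)).month != d.month
```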
"Andy" wrote:
> Thanks Tim for the suggestion. Unfortunately, I don't have access to the SQL
> Server to be able to access the agent.
> Any other thoughts?
> Anyone?
> "Tim Ellison" wrote:
> > This is a hack, but for every subscription you set up, RS creates a SQL Server Agent job with (amazingly!) the same schedule. You could and "could" is the operative word, have another sql agent job that runs once a day at a specific time and if it's the end of the month, then push the same job step as is in the subscription.
> >
> > HTH
> >
> > --
> > TIM ELLISON
> > "Andy" <Andy@.discussions.microsoft.com> wrote in message news:A21B735C-ACE2-4402-87EA-5CA15CB71DE5@.microsoft.com...
> > Looking for assistance out of a difficult spot. I have several subscriptions
> > that need to run on the last day of each month. However, Reporting Services
> > has a problem if I set the schedule to run on the 30th or 31st of each month.
> > Has anyone figured out how to do this? Currently, I must manually reset the
> > schedule on each subscription at the beginning of each month to run on the
> > last day. I must be overlooking somthing obvious as I can't imagine
> > Reporting Services would have such an oversight. Can anyone help?
> > Thanks very much|||Not sure but: Change the @.DATE in my previous post to GETDATE(). I think I
posted something I was testing.
"Andy" wrote:
> Thanks Tim for the suggestion. Unfortunately, I don't have access to the SQL
> Server to be able to access the agent.
> Any other thoughts?
> Anyone?
> "Tim Ellison" wrote:
> > This is a hack, but for every subscription you set up, RS creates a SQL Server Agent job with (amazingly!) the same schedule. You could and "could" is the operative word, have another sql agent job that runs once a day at a specific time and if it's the end of the month, then push the same job step as is in the subscription.
> >
> > HTH
> >
> > --
> > TIM ELLISON
> > "Andy" <Andy@.discussions.microsoft.com> wrote in message news:A21B735C-ACE2-4402-87EA-5CA15CB71DE5@.microsoft.com...
> > Looking for assistance out of a difficult spot. I have several subscriptions
> > that need to run on the last day of each month. However, Reporting Services
> > has a problem if I set the schedule to run on the 30th or 31st of each month.
> > Has anyone figured out how to do this? Currently, I must manually reset the
> > schedule on each subscription at the beginning of each month to run on the
> > last day. I must be overlooking somthing obvious as I can't imagine
> > Reporting Services would have such an oversight. Can anyone help?
> > Thanks very much

Monthly parameter expressions [Formerly:Queried parameters]

Hello,

I need to be able to set the date parameters of a report dynamically when it is run based on system time. The problem I am having is being able to compare the dates (StartDate & EndDate) against [Service Date 1]. Essentially this report will only pull the current month's data.

The date fields being created with the GETDATE, DATEADD & DATEDIFF functions are working correctly. Do I need to create a separate dataset to be able to run the parameters automatically in the actual report?

Any help would be greatly appreciated!

SELECT TodaysDate =GetDate()-2,dbo.[Billing Detail].[Service Date 1], DATEADD(mm, DATEDIFF(mm, 0, DATEADD(yy, 0, GETDATE())), 0) AS StartDate, DATEADD(dd, - 1, DATEADD(mm, DATEDIFF(mm, -1, GETDATE()), 0)) AS EndDate, dbo.[Billing Detail].Billing, dbo.[Billing Detail].Chart, dbo.[Billing Detail].Item,
dbo.[Billing Detail].[Sub Item], dbo.Patient.[Patient Code], dbo.Patient.[Patient Type], dbo.[Billing Header].Charges, dbo.Practice.Name
FROM dbo.[Billing Detail] INNER JOIN
dbo.Patient ON dbo.[Billing Detail].Chart = dbo.Patient.[Chart Number] INNER JOIN
dbo.[Billing Header] ON dbo.[Billing Detail].Billing = dbo.[Billing Header].Billing CROSS JOIN
dbo.Practice
WHERE (dbo.[Billing Detail].Item = 0) AND (dbo.[Billing Detail].[Sub Item] = 0) AND (dbo.[Billing Detail].[Service Date 1] BETWEEN StartDate AND EndDate)

Phorest,

You should be able to add the parameters to your query. If you are going against SQL Server, you can replace your parameters with @.StartDate AND @.EndDate. Then in the properies of the dataset, you can assign those parameters to Parameters!StartDate.Value and Parameters!EndDate.Value, respectively.

Jessica

|||

Thanks for your reply!

OK,

I think what I need to do is write the expression as a non-queried default value. However when I paste in what I know works in SQL Management Studio it returns an error "Name 'mm' is not declared"

<@.StartDate> =DATEADD(mm, DATEDIFF(mm, 0, DATEADD(yy, 0, GETDATE())), 0)

<@.EndDate> =DATEADD(dd, - 1, DATEADD(mm, DATEDIFF(mm, -1, GETDATE()), 0))

I tried putting an integer after DATEADD(mm, X , 102 DATEDIFF... but i can't get beyond intellisense. How can I fix my expression to work with Reporting Services?

What I need is to have expressions to choose the first day of the month to the last day of the same month compared to NOW()

|||

Apparently that is the trick to use non-queried default values as an expression. However, what I posted yesterday will not work as an expression due to the expression limitations in SSRS:

<@.StartDate> =DATEADD(mm, DATEDIFF(mm, 0, DATEADD(yy, 0, GETDATE())), 0)

<@.EndDate> =DATEADD(dd, - 1, DATEADD(mm, DATEDIFF(mm, -1, GETDATE()), 0))

Now I am using:

<@.StartDate> =DATEADD("D", -30, NOW())

<@.EndDate> =DATEADD("D", 1, NOW())

After much searching and experimentation I can get this to work well, but it isn't exactly what I want. Does anyone have any tips on writing the expression to select from the first day of the current month to the last day of the same month?

It seems to be just beyond my grasp at this time...

Thanks!

|||

Phorest,

I'm afraid I misunderstood what you're trying to do. If you want a query that returns rows where the [Billing Detail].[Service Date 1] is between the start and the end of the current month, you can do that all in SQL.

It would look something similar to:

WHERE dbo.[Billing Detail].[Service Date 1]

BETWEEN dateadd(mm, datediff(mm,0,getdate()), 0)

AND dateadd(ms,-3,dateadd(mm, datediff(m,0,getdate() ) + 1, 0))

Does that work for you?

Jessica

|||

I'll have to try that in the SQL, though I was really after an expression of datetime type, so the report picks up only the dates in the current month and the user can then adjust the parameter manually after the initial run if they so choose.

Thanks!

|||

I found what I was looking for here: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1581230&SiteID=1

In the Report Parameters properties I set the DataType to DateTime and using the Default Values, Non-Queried radio button set the expressions like the following:

@.StartDate =DateSerial(Year(NOW()), Month(NOW()) + 0, 1) gives me the first date of the current month.

@.EndDate =DateSerial(Year(NOW()), Month(NOW()) + 1, 0) gives me the last date of the current month.

All is well now!

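The DateSerial trick works because day 0 of month N rolls back to the last day of month N-1. The same month-boundary logic sketched in Python, for illustration (function names are mine):

```python
from datetime import date, timedelta

def first_day_of_month(d: date) -> date:
    # DateSerial(Year(d), Month(d), 1)
    return d.replace(day=1)

def last_day_of_month(d: date) -> date:
    # DateSerial(Year(d), Month(d) + 1, 0): day 0 of next month
    # is the last day of the current month.
    into_next_month = d.replace(day=28) + timedelta(days=4)  # always next month
    return into_next_month.replace(day=1) - timedelta(days=1)
```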

Monthly date range substitution

I would like to run a report for each month over two years. I am currently
using a date range like this. Then manually substitute the error_time
bounds for each month and rerun the query. How can I script this so I can
programmatically perform the substitution in a loop. Thanx in advance.

select count(*) from application_errors
where error_message like 'Time%'
and error_time >= '1Apr2004' and error_time < '1May2004'|||Robert (robert.j.sipe@.boeing.com) writes:
> Maybe this is a lot easier to do than I first thought:
> select count(*) from application_errors
> where error_message like 'Time%'
> and error_time >= '1Jan2004' and error_time < '1Jan2005'
> group by month (error_time)
> This saves me a lot of work. Now, if I could figure out how to span years
> and still group by months...

select convert(char(6), error_time, 112), count(*) from application_errors
where error_message like 'Time%'
and error_time >= '1Jan2004'
group by convert(char(6), error_time, 112)
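Erland's convert(char(6), error_time, 112) collapses each timestamp to a YYYYMM key, so January 2004 and January 2005 land in different groups. The same grouping sketched in Python, for illustration:

```python
from collections import Counter
from datetime import datetime

def count_by_month(times):
    # Equivalent of GROUP BY convert(char(6), error_time, 112):
    # bucket rows on their 'YYYYMM' key so months in different
    # years stay distinct.
    return Counter(t.strftime("%Y%m") for t in times)
```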

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Thanx Erland, I am not worthy!!

"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns975E38A7771EYazorman@.127.0.0.1...
> Robert (robert.j.sipe@.boeing.com) writes:
>> Maybe this is a lot easier to do than I first thought:
>>
>> select count(*) from application_errors
>> where error_message like 'Time%'
>> and error_time >= '1Jan2004' and error_time < '1Jan2005'
>> group by month (error_time)
>>
>> This saves me a lot of work. Now, if I could figure out how to span
>> years
>> and still group by months...
>
> select convert(char(6), error_time, 112), count(*) from
> application_errors
> where error_message like 'Time%'
> and error_time >= '1Jan2004'
> group by convert(char(6), error_time, 112)
>
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server 2005 at
> http://www.microsoft.com/technet/pr...oads/books.mspx
> Books Online for SQL Server 2000 at
> http://www.microsoft.com/sql/prodin...ions/books.mspx|||Robert wrote:
> Thanx Erland, I am not worthy!!

You can as well use DATEPART to extract year and month from the timestamp
column.

robert

> "Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
> news:Xns975E38A7771EYazorman@.127.0.0.1...
>> Robert (robert.j.sipe@.boeing.com) writes:
>>> Maybe this is a lot easier to do than I first thought:
>>>
>>> select count(*) from application_errors
>>> where error_message like 'Time%'
>>> and error_time >= '1Jan2004' and error_time < '1Jan2005'
>>> group by month (error_time)
>>>
>>> This saves me a lot of work. Now, if I could figure out how to span
>>> years
>>> and still group by months...
>>
>>
>> select convert(char(6), error_time, 112), count(*) from
>> application_errors
>> where error_message like 'Time%'
>> and error_time >= '1Jan2004'
>> group by convert(char(6), error_time, 112)
>>
>>
>> --
>> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
>>
>> Books Online for SQL Server 2005 at
>>
http://www.microsoft.com/technet/pr...oads/books.mspx
>> Books Online for SQL Server 2000 at
>> http://www.microsoft.com/sql/prodin...ions/books.mspx|||Create a report range table:

CREATE TABLE ReportRanges
(range_name CHAR(15),
start_date DATETIME NOT NULL,
end_date DATETIME NOT NULL,
CHECK (start_date < end_date),
..);

INSERT INTO ReportRanges
VALUES ('2005 Jan', '2005-01-01', '2005-01-31 23:59:59.999');

INSERT INTO ReportRanges
VALUES ('2005 Feb', '2005-02-01', '2005-02-28 23:59:59.999');

etc.

INSERT INTO ReportRanges
VALUES ('2005 Total', '2005-01-01', '2005-12-31 23:59:59.999');

Now use it to drive all of your reports, so they will be consistent.

SELECT R.range_name, COUNT(*)
FROM AppErrors AS A, ReportRanges AS R
WHERE A.error_time BETWEEN R.start_date AND R.end_date
GROUP BY R.range_name;

You are making a classic newbie design flaw. You still think of
programming with procedural code and functions, but not with relational
operators.|||> CREATE TABLE ReportRanges
> (range_name CHAR(15)
> start_date DATETIME NOT NULL,
> end_date DATETIME NOT NULL,
> CHECK (start_date < end_date),
> ..);

This non-table is unusable. It has no key and cannot have a key because
range_name is NULLable.

> INSERT INTO ReportRanges
> VALUES ('2005 Jan', '2005-01-01', '2005-01-31 23:59:59.999');

Do you really think it a good idea to use the month name in the data like
this? What about other languages - French, Italian etc...

> SELECT R.range_name, COUNT(*)
> FROM AppErrors AS A, ReportRanges AS R
> WHERE A.error_time BETWEEN R.start_date AND R.end_date
> GROUP BY R.range_name;

You are still using the old SQL-89 join syntax and should be using the more
recent SQL-92 syntax.

SELECT R.range_name, COUNT(*)
FROM AppErrors AS A
CROSS JOIN ReportRanges AS R
WHERE A.error_time BETWEEN R.start_date AND R.end_date
GROUP BY R.range_name;

--
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials

"--CELKO--" <jcelko212@.earthlink.net> wrote in message
news:1138908826.954654.5580@.g47g2000cwa.googlegrou ps.com...
> Create a report range table:
> CREATE TABLE ReportRanges
> (range_name CHAR(15)
> start_date DATETIME NOT NULL,
> end_date DATETIME NOT NULL,
> CHECK (start_date < end_date),
> ..);
> INSERT INTO ReportRanges
> VALUES ('2005 Jan', '2005-01-01', '2005-01-31 23:59:59.999');
> INSERT INTO ReportRanges
> VALUES ('2005 Feb', '2005-02-01', '2005-02-28 23:59:59.999');
> etc.
> INSERT INTO ReportRanges
> VALUES ('2005 Total', '2005-01-01', '2005-12-31 23:59:59.999');
> Now use it to drive all of your reports, so they will be consistent.
> SELECT R.range_name, COUNT(*)
> FROM AppErrors AS A, ReportRanges AS R
> WHERE A.error_time BETWEEN R.start_date AND R.end_date
> GROUP BY R.range_name;
> You are making a classic newbie design flaw. You still think of
> programming with procedural code and functions, but not with relational
> operators.|||On 2 Feb 2006 11:33:47 -0800, --CELKO-- wrote:

>Create a report range table:
>CREATE TABLE ReportRanges
>(range_name CHAR(15)
> start_date DATETIME NOT NULL,
> end_date DATETIME NOT NULL,
> CHECK (start_date < end_date),
> ..);
>INSERT INTO ReportRanges
>VALUES ('2005 Jan', '2005-01-01', '2005-01-31 23:59:59.999');
>INSERT INTO ReportRanges
>VALUES ('2005 Feb', '2005-02-01', '2005-02-28 23:59:59.999');
>etc.
>INSERT INTO ReportRanges
>VALUES ('2005 Total', '2005-01-01', '2005-12-31 23:59:59.999');

Hi Joe,

1. Never omit the column list of an INSERT. This, like SELECT *, is
extremely bad practice.

2. Please use unambiguous date formats:

* yyyymmdd for date only
* yyyy-mm-ddThh:mm:ss or yyyy-mm-ddThh:mm:ss.ttt for date plus time
(with or without milliseconds).

3. Because SQL Server has datetime precision of 1/300 second, the
values for end_date will be rounded UP to 2005-02-01T00:00:00.000,
2005-03-01T00:00:00.000, and 2006-01-01T00:00:00.000. Not the values you
want with the query you propose...
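The rounding is easy to demonstrate (datetime behavior on SQL Server 2000/2005, where the last millisecond digit can only be 0, 3, or 7):

```sql
-- .999 is not representable on datetime's 1/300-second grid,
-- so the value rounds up into the next day:
SELECT CAST('2005-01-31 23:59:59.999' AS DATETIME)
-- 2005-02-01 00:00:00.000
```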

>Now use it to drive all of your reports, so they will be consistent.
>SELECT R.range_name, COUNT(*)
> FROM AppErrors AS A, ReportRanges AS R
>WHERE A.error_time BETWEEN R.start_date AND R.end_date
> GROUP BY R.range_name;

.... however, this query is no good either. Never use BETWEEN for date
comparisons.

You should populate the ReportRanges table as follows:

INSERT INTO ReportRanges (range_name, start_date, end_date)
VALUES ('2005 Jan', '20050101', '20050201');
INSERT INTO ReportRanges (range_name, start_date, end_date)
VALUES ('2005 Feb', '20050201', '20050301');
(...)
INSERT INTO ReportRanges (range_name, start_date, end_date)
VALUES ('2005 Total', '20050101', '20060101');

And change the query to

SELECT R.range_name, COUNT(*)
FROM AppErrors AS A
INNER JOIN ReportRanges AS R
ON A.error_time >= R.start_date
AND A.error_time < R.end_date
GROUP BY R.range_name;

(Note the use of greater _OR EQUAL_ for start_date, but lesser (and not
equal) for end_date).
This will always work - both for datetime and smalldatetime, and it will
continue to work if Microsoft ever decides to change the precision of
their datetime datatypes.

--
Hugo Kornelis, SQL Server MVP|||>> Note the use of greater _OR EQUAL_ for start_date, but lesser (and not equal) for end_date). This will always work - both for datetime and smalldatetime, and it will
continue to work if Microsoft ever decides to change the precision of
their datetime datatypes. <<

Good point. I keep forgetting that SQL Server does not follow the
FIPS-127 rules about keeping at least five decimal places of seconds
like other products. Generally going to 1/100 of a second has worked
for me in the real world -- CAST ('2006-01-01 23:59:59.99' AS
DATETIME).

If we had the OVERLAPS predicate, we could use that, but I prefer the
BETWEEN with adjusted times in the non-conforming SQLs I use. I can
move the code with a text change.|||On 3 Feb 2006 17:38:40 -0800, --CELKO-- wrote:

>>> Note the use of greater _OR EQUAL_ for start_date, but lesser (and not equal) for end_date). This will always work - both for datetime and smalldatetime, and it will
>continue to work if Microsoft ever decides to change the precision of
>their datetime datatypes. <<
>Good point. I keep forgetting that SQL Server does not follow the
>FIPS-127 rules about keeping at least five decimal places of seconds
>like other products. Generally going to 1/100 of a second has worked
>for me in the real world -- CAST ('2006-01-01 23:59:59.99' AS
>DATETIME).

Hi Joe,

This will still bite you if smalldatetime is used. Or if ever an entry
makes it into the database with a 23:59:59.993 timestamp.

What is your objection to
SomeDate >= StartOfInterval
AND SomeDate < EndOfInterval
(with EndOfInterval actually being equal to the first fraction of a
second after the end of the interval, or the start of the next interval
if there are consecutive intervals)

AFAICT, this will work on ALL products, regardless of the precision of
the date and time datatypes used in the product. Am I wrong?

--
Hugo Kornelis, SQL Server MVP|||>> This will still bite you if smalldatetime is used. Or if ever an entry makes it into the database with a 23:59:59.993 timestamp. <<

I never use SMALLDATETIME because it is soooo proprietary and does not
match the FIPS-127 requirements.

>> What is your objection to
SomeDate >= StartOfInterval
AND SomeDate < EndOfInterval <<

Mostly style and portable code. The BETWEEN predicate reads so much
better to a human. I would prefer OVERLAPS and some of Rick
Snodgrass's operators if I could get them.

>> AFAICT, this will work on ALL products, regardless of the precision of the date and time datatypes used in the product. Am I wrong? <<

Yeah, yeah!! But I hate to split a single concept (between-ness) into
multiple predicates. I also hate a chain of ORs when I can use IN(),
etc.

--|||On 4 Feb 2006 14:22:07 -0800, --CELKO-- wrote:

>>> This will still bite you if smalldatetime is used. Or if ever an entry makes it into the database with a 23:59:59.993 timestamp. <<
>I never use SMALLDATETIME because it is soooo proprietary and does not
>match the FIPS-127 requirements.

Hi Joe,

So instead, you use DATETIME, which also is proprietary, which also
doesn't match FIPS-127, and which takes twice the space. Good job. For a
table with mostly date columns, your performance will now be about twice
as slow.

>>> What is your objection to
> SomeDate >= StartOfInterval
> AND SomeDate < EndOfInterval <<
>Mostly style and portable code.

Style, like beauty, is in the eye of the beholder. So I won't comment on
that.

But "portable code"? <Cough!> Please tell me: what part of the code
above is not portable, and why?

> The BETWEEN predicate reads so much
>better to a human.

Maybe. But does '2006-02-28T23:59:59.997' also read better to a human
than '2006-03-01'?

SomeDate >= '2006-02-01'
AND SomeDate < '2006-03-01'

or

SomeDate BETWEEN '2006-02-01' AND '2006-02-28T23:59:59.997'

Are you really going to tell me that the latter reads better to a human?

--
Hugo Kornelis, SQL Server MVP

Wednesday, March 21, 2012

Monitoring the db size

Hi All,

I'm wondering if someone has a script which can run on each server to get all db sizes and free space on that server? Currently I am using Enterprise Manager to check db space usage manually, but this is very frustrating because a server has many dbs located on it

Thanks,Hi All,

I'm wondering if someone has a script which can run on each server to get all db sizes and free space on that server? Currently I am using Enterprise Manager to check db space usage manually, but this is very frustrating because a server has many dbs located on it

Thanks,

This (http://www.dbforums.com/t1006334.html) might be what you are looking for...

[Edit: some caveats]
1. Consider also using DBCC SHOWCONTIG WITH TABLERESULTS. You could embed this in the sp_MSForEachDB.
2. Note that the data may not match reality; you may need to run dbcc updateusage before you run the sizing extract.
3. YMMV
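As a minimal sketch of the sp_MSForEachDB approach mentioned in caveat 1 (note that sp_MSForEachDB is undocumented, so treat it as a convenience rather than a supported API):

```sql
-- Run sp_spaceused in every database on the server;
-- sp_MSForEachDB substitutes ? with each database name in turn.
EXEC sp_MSForEachDB 'USE [?]; PRINT ''?''; EXEC sp_spaceused;'
```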

Regards,

hmscott|||Thanks a lot hmscott, I'll try it. I found a website which has a few scripts; they are very good for reference if you need to auto-manage db size, drive size and free space. It is:

http://www.sqlservercentral.com/columnists/mnash/monitoringdriveanddatabasefreespace.asp

Monitoring table size

We have a table whose size we wish to keep below 200 records. My thought is
to run a query every minute via SQL agent to get the record count. What I am
wondering is if it is possible to log the result of the query as a perfmon
counter so that we can both graph it realtime in perfmon and also alert off
it in Microsoft Operations Manager.
Thanks,
Mark
"Mark Murphy" <viosk@.newsgroup.nospam> schrieb im Newsbeitrag
news:7AE5DEC0-1F6C-4D60-AC99-FB50A13EFAD9@.microsoft.com...
> We have a table whose size we wish to keep below 200 records. My
thought is
> to run a query every minute via SQL agent to get the record count. What
I am
> wondering is if it is possible to log the result of the query as a
perfmon
> counter so that we can both graph it realtime in perfmon and also alert
off
> it in Microsoft Operations Manager.
And how will you react to this? If there's some action that lends itself
to automation, then a trigger is probably the most appropriate means.
That way you reduce load on the db and do the checks only when they are
necessary (i.e. on insertion).
Kind regards
robert
|||Hi Mark
In SQL Server 2000, there are user defined counters that can log any value
you send as a parameter. For example, if you had this statement: EXEC
sp_user_counter1 10, you could watch SQL Server User Counter 1 in a perfmon
graph, and see the value 10.
But you have to explicitly call the procedure to set the value, and you
could do that in a trigger every time the data in the table changes. The
trigger would do a select count(*), save the result into a variable, and
pass the variable to the sp_user_counter procedure. The trigger could also
check for the value being over your limit, and take the alerting action,
rather than using SQL Server's alert mechanism.
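A minimal sketch of such a trigger (dbo.WatchedTable is a placeholder name; the thread never names the actual table):

```sql
-- Sketch only: dbo.WatchedTable is an assumed table name.
CREATE TRIGGER trg_WatchedTable_Count
ON dbo.WatchedTable
AFTER INSERT, DELETE
AS
BEGIN
    DECLARE @rows int
    SELECT @rows = COUNT(*) FROM dbo.WatchedTable
    -- Publish the count as SQL Server User Counter 1 for perfmon/MOM
    EXEC sp_user_counter1 @rows
    IF @rows > 200
        -- Severity 10 is informational; WITH LOG writes to the application event log
        RAISERROR ('WatchedTable has exceeded 200 rows', 10, 1) WITH LOG
END
```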
HTH
Kalen Delaney
SQL Server MVP
www.SolidQualityLearning.com
"Mark Murphy" <viosk@.newsgroup.nospam> wrote in message
news:7AE5DEC0-1F6C-4D60-AC99-FB50A13EFAD9@.microsoft.com...
> We have a table whose size we wish to keep below 200 records. My thought
> is
> to run a query every minute via SQL agent to get the record count. What I
> am
> wondering is if it is possible to log the result of the query as a perfmon
> counter so that we can both graph it realtime in perfmon and also alert
> off
> it in Microsoft Operations Manager.
> Thanks,
> Mark
|||Thanks Kalen and Robert,
Exactly the advice I was looking for. We have a sproc that needs to poll
the table for new records periodically (We know it's less efficient, but it's
beyond our control). I'll set the counter there.
-Mark

Monday, March 19, 2012

monitoring sql server box

I would like to find out if there are any free tools that could be used to
monitor a sql instance and list the
a. most frequently run stored procedures and how long each takes
b. most frequently run views and how long each takes
netiq used to have app manager - is that still available ?
also, which free tool can I use to monitor CPU, Memory and Network
Utilization for a windows box running SQL Server 2000 ?
Thanks"John Smith" <bukusu@.gmail.com> wrote in message
news:epVak7yuGHA.4756@.TK2MSFTNGP04.phx.gbl...
> I would like to find out if there are any free tools that could be used to
> monitor a sql instance and list the
> a. most frequently run stored procedures and how long each takes
> b. most frequently run views and how long each takes
> netiq used to have app manager - is that still available ?
> also, which free tool can I use to monitor CPU, Memory and Netwok
> Utilization for a windows box running SQL Server 2000 ?
> Thanks
>
Not sure about a. & b. but Windows Performance monitor (under Administrative
Tools) can monitor the rest, plus a lot of SQL specific items, too. You can
even monitor remotely.|||Hi John
You could use SQL Profiler for (a), but I would not continually sample. You
cannot "run" a view, so (b) is not possible. You may want to look at
http://www.sqldbatips.com/showcode.asp?ID=7.
John
"John Smith" wrote:
> I would like to find out if there are any free tools that could be used to
> monitor a sql instance and list the
> a. most frequently run stored procedures and how long each takes
> b. most frequently run views and how long each takes
> netiq used to have app manager - is that still available ?
> also, which free tool can I use to monitor CPU, Memory and Netwok
> Utilization for a windows box running SQL Server 2000 ?
> Thanks
>
>|||John Smith wrote:
> I would like to find out if there are any free tools that could be used to
> monitor a sql instance and list the
> a. most frequently run stored procedures and how long each takes
> b. most frequently run views and how long each takes
> netiq used to have app manager - is that still available ?
> also, which free tool can I use to monitor CPU, Memory and Netwok
> Utilization for a windows box running SQL Server 2000 ?
> Thanks
>
You can capture stored procedure activity using Profiler, and then
analyze the resulting logs. There is a tool available for download here
(http://www.cleardata.biz/cleartrace/instructions.aspx) that will neatly
summarize the log activity, or you can do the same thing yourself, as we do.
Perfmon is your best bet for monitoring CPU, memory, and network
utilization.
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||NetIQ sold the App Manager for SQL off to Idera and it is now their SQLdm
product. I should say that the Idera SQLdm product started as the NetIQ App
Manager for SQL. They have spent some time and money upgrading and
updating it since purchasing the software.
www.idera.com if you want to check it out.
--
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
"John Smith" <bukusu@.gmail.com> wrote in message
news:epVak7yuGHA.4756@.TK2MSFTNGP04.phx.gbl...
>I would like to find out if there are any free tools that could be used to
>monitor a sql instance and list the
> a. most frequently run stored procedures and how long each takes
> b. most frequently run views and how long each takes
> netiq used to have app manager - is that still available ?
> also, which free tool can I use to monitor CPU, Memory and Netwok
> Utilization for a windows box running SQL Server 2000 ?
> Thanks
>


Monday, March 12, 2012

Monitoring More Than 1 DB's Growth Rate

Hello, Is there a Best Practice or a script I can run daily to monitor the
rate of growth of about 12 databases? Thanks, PanchoHi
Vyas' script may help!
http://vyaskn.tripod.com/track_sql_...file_growth.htm
John
"Pancho" wrote:

> Hello, Is there a Best Practice or a script I can run daily to monitor the
> rate of growth of about 12 databases? Thanks, Pancho|||Thanks, John. I'll give this a try!
"John Bell" wrote:
> Hi
> Vyas' script may help!
> http://vyaskn.tripod.com/track_sql_...file_growth.htm
> John
> "Pancho" wrote:
>

Monitoring database space usage

I'm interested in monitoring database space usage in SQL Server 2005.
Ideally, once a day I'd like a job to run that looks for databases that are
within 20% of capacity. If one or more exists, I get an email. I don't
need/want it to monitor constantly as our business processes do not require
that. I do not want to have to manually monitor. For other business
reasons, our databases will be set to fixed size with autogrowth disabled,
hence our interest in monitoring.
What do you recommend?
Thanks,
Mark
Mark,
Take a look at the view sys.database_files, it should be fairly easy
to create a SP off of this data and then as long as you have your
database mail setup, you can e-mail yourself.
Here is an example to run:
use DBNAME
go
select
physical_name,
size * 8 AS [Current Size in KB],
max_size * 8 [Maximum Size in KB]
from sys.database_files
Note that size and max_size are the number of PAGES the files have,
and pages in SQL Server are 8K.
Hope this helps,
-Sean
On Mar 28, 10:58 am, "Mark" <m...@.idonotlikespam.com> wrote:
> I'm interested in monitoring database space usage in SQL Server 2005.
> Ideally, once a day I'd like a job to run that looks for databases that are
> within 20% of capacity. If one or more exists, I get an email. I don't
> need/want it to monitor constantly as our business processes do not require
> that. I do not want to have to manually monitor. For other business
> reasons, our databases will be set to fixed size with autogrowth disabled,
> hence our interest in monitoring.
> What do you recommend?
> Thanks,
> Mark
|||On Mar 28, 8:31 am, Sean <ColdFusion...@.gmail.com> wrote:
> Mark,
> Take a look at the view sys.database_files, it should be fairly easy
> to create a SP off of this data and then as long as you have your
> database mail setup, you can e-mail yourself.
> Here is an example to run:
> use DBNAME
> go
> select
> physical_name,
> size * 8 AS [Current Size in KB],
> max_size * 8 [Maximum Size in KB]
> from sys.database_files
> Note that size and max_size are the number of PAGES the files have,
> and pages in SQL Server are 8K.
> Hope this helps,
> -Sean
> On Mar 28, 10:58 am, "Mark" <m...@.idonotlikespam.com> wrote:
Mark, I don't think this is going to work. This will only tell you
what the size of the database file is - not how much space is actually
being used within that file.
There are a couple of options:
1) Look up FILEPROPERTY - this function has a property for returning
SpaceUsed.
2) Review the stored procedure sp_spaceused - create your own version
using the same logic
3) Look up DataSpaceUsage, IndexSpaceUsage and SpaceAvailable in
SMO. You can either create a program, or use Powershell to create a
script to capture the data.
I prefer the script method myself and use Powershell to pull this data
from all of our SQL Servers.
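For reference, option 1 can be sketched like this (run it in the database of interest; both size and FILEPROPERTY report 8 KB pages):

```sql
-- Space used vs. allocated per file in the current database.
SELECT name,
       size * 8 AS allocated_kb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 AS used_kb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 AS free_kb
FROM sys.database_files
```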
HTH,
Jeff
