Wednesday, March 28, 2012
More logical drives VS more striped drives
A user called ZoomZoom made the posting below on Tech Republic. As it did
not get a reply there, I am interested to know what the community's opinion
is.
Some of my Raid 10's are made up of 8 drives. Would it have been better to
have created 2 Raid 10's of 4 drives and then have two data files in the
file group?
I would expect an 8 drive Raid 10 to give a higher sequential data rate but
what about random IO when one has many users?
Regards,
Paul Cahill
...
Posted by ZoomZoom
I have seen many articles that say to place log files separate from db
files, and use raid 1+0 for the db... but I haven't found anything to answer
this question for me though (unless it should be so obvious that I'm just
not seeing it).
Is it faster for the database to have the tables split into separate files
and put each file in smaller raid 1+0 configurations, or is it faster to use
the same number of drives in a single raid 1+0? For example: 8 drives could
be split into 2 raid 1+0 arrays (4 each) or they could be configured in one
large raid 1+0 array (8 drives). The end result would be striping between 2
sets of 2 drives (in the 4 disk array due to 2 being only mirrors) or
striping between 4 drives (in the 8 disk array).
I guess another way of looking at the question is how much performance
each additional pair of disks in a raid 1+0 array really adds. Can an 8 disk
array handle twice as many IOs as a 4 disk array? Or does it only add
maybe 30% more IO ability? If the latter, would it make sense to add blocks
of 4 disk arrays and split the database tables into multiple files instead?
Would this theory work like putting the logs on separate drives from the
database files?
I realize it's an expensive solution... but with the price of database
per-processor licenses being what they are... it makes sense to try to
squeeze as much performance out of your database server as possible.

I have thought about it many times.
When using more disks in one volume, random IO rises, but not by the same
factor as the number of disks.
If the load on the two files is the same, then two volumes are better, but if
it is not, one volume is better.
Are you sure that the load is always uniformly distributed between the files?
Then use separate volumes.
In the real world the load always fluctuates, so it is not possible to draw a
general conclusion.
I pretty much agree with that explanation. If you know exactly how your
files will be accessed and they compete with each other at a fairly
intensive level you might get better performance from splitting them. This
is true of separating tempdb from data files as well. But if you don't know
or the load is not too high you are most likely better off having a larger
array with all the data files on it. Now placing the log files on another
array is always a good idea.
--
Andrew J. Kelly SQL MVP
"Jirí Lejsek" <jlejsek@.na_volnym_v_cesku> wrote in message
news:urgxfhauHHA.1184@.TK2MSFTNGP04.phx.gbl...
>I have thought about it many times.
> When using more disks in one volume, random IO rises, but not by the same
> factor as number of disks.
> If load of these two files is the same, then two volumes are better, but
> if it is not, one volume is better.
> Are you sure that load is always uniformly distributed between files? Than
> use separate volumes.
> In real world the load always fluctuates so it is not possible to make a
> common conclusion.
On Thu, 28 Jun 2007 17:02:37 -0400, "Andrew J. Kelly"
<sqlmvpnooospam@.shadhawk.com> wrote:
>I pretty much agree with that explanation. If you know exactly how your
>files will be accessed and they compete with each other at a fairly
>intensive level you might get better performance from splitting them. This
>is true of separating tempdb from data files as well. But if you don't know
>or the load is not too high you are most likely better off having a larger
>array with all the data files on it. Now placing the log files on another
>array is always a good idea.
The problem I have with targeting files at particular drives is that
it tunes for one particular load only. Tune it for the daytime OLTP
and it could be sub-optimal during heavy duty batch processing at
night. One big striped set spreads the load pretty evenly across all
spindles, regardless of load, and regardless of how the load changes
over time. It might not be as fast in some special instances, but I
think on average one big set of drives is going to perform as well or
better. Targeting files also means one more thing for the DBA to
keep an eye on and worry about changing later when the load changes or
space grows differently than expected.
Logs, of course, are another story as everyone knows.
Roy Harvey
Beacon Falls, CT

Right, that is why I qualified it with knowing "exactly" how they are used,
which is hard to do.
--
Andrew J. Kelly SQL MVP
Monday, March 26, 2012
More complex Sum and Group by a week period
First thanks to Mike and Baie for helping me before.
I have a set of data similar to the one below.
issueID IssueDate
1 1/17/2005
2 1/18/2005
3 1/19/2005
4 1/24/2005
5 1/27/2005
6 2/7/2005
I need to query the set between 2 dates for example,
1/3/2005 and 6/5/2005 and what I need to return is the
sum of issues per 7 day groups starting from 1/3/2005.
Even if there are no issues opened in the 7 days
following the 1/3/2005 date, I need the query to return
0.
I need the query output to look like this:
Week        Count
1/3/2005 0
1/10/2005 0
1/17/2005 3
1/24/2005 2
1/31/2005 1
The reason I need to see this output this way, is that I
need to compare it to a projected set that the managers
estimated per 7 day periods starting from 1/3/2005.
Is this doable as a query? I only have read access to
the database, so I won't be able to create stored
procedures or views.
Please let me know if you have ideas.

Having a calendar table (http://www.aspfaq.com/show.asp?id=2519), you can try
something like:
declare @sd datetime, @ed datetime
set @sd = '20050103'
set @ed = '20050605'

select
    min(c.dt),
    count(t.IssueID)
from
    calendar as c
    left join
    t
    on c.dt = t.IssueDate
where
    c.dt >= @sd and c.dt < dateadd(day, 1, @ed)
group by
    datediff(day, c.dt, @sd) / 7
go
Not tested.
AMB
"Danny Mansour" wrote:
If it's possible (as your sample data indicates) to NOT have actual data in
some of the date range groups you need to output as aggregate "buckets", then
your best bet, if you can, is to create a calendar table, and do an Outer
Join from it to the data table(s)...
But you say you have only read access, so then the only real option is to,
within a batch, create a table variable that contains the range of dates you
want to group on, and join this table variable to the data table(s):
Declare @Dts Table(Dt DateTime Not Null)
Declare @DT DateTime Set @DT = '20050103'
While @DT < '20050630' Begin
    Insert @Dts (Dt) Values(@DT)
    Set @DT = DateAdd(day, 7, @DT) -- step to the next week (omitting this would loop forever)
End

Select D.Dt, Count(T.IssueDate)
From @Dts D Left Join DataTable T
    On T.IssueDate >= D.Dt And T.IssueDate < DateAdd(day, 7, D.Dt)
Group By D.Dt
-- Something like that--
Just to throw one more option onto the pile...
To generate the dates for the weeks, use the following query:
select dateadd(ww, a0+a1*2+a2*4+a3*8+a4*16, '20050103') start_dt,
dateadd(ww, 1+a0+a1*2+a2*4+a3*8+a4*16, '20050103') exclusive_end_dt from
(select 1 a0 union all select 0 ) x0
cross join (select 1 a1 union all select 0 ) x1
cross join (select 1 a2 union all select 0 ) x2
cross join (select 1 a3 union all select 0 ) x3
cross join (select 1 a4 union all select 0 ) x4
where dateadd(ww, a0+a1*2+a2*4+a3*8+a4*16, '20050103') < '20050605'
This will generate start and end dates for the target weeks, which you could
put in a table variable as CBretana suggests. You can also embed this query
in your original query like so...
select start_dt, count(issueID)
from (select dateadd(ww, a0+a1*2+a2*4+a3*8+a4*16, '20050103') start_dt,
dateadd(ww, 1+a0+a1*2+a2*4+a3*8+a4*16, '20050103') exclusive_end_dt from
(select 1 a0 union all select 0 ) x0
cross join (select 1 a1 union all select 0 ) x1
cross join (select 1 a2 union all select 0 ) x2
cross join (select 1 a3 union all select 0 ) x3
cross join (select 1 a4 union all select 0 ) x4
where dateadd(ww, a0+a1*2+a2*4+a3*8+a4*16, '20050103') < '20050605') w
left outer join test1
on issuedate >= start_dt and issuedate < exclusive_end_dt
group by start_dt
This approach is likely only useful for small sets of weeks.
A table of (nonnegative) integers is often handy.
CREATE VIEW Digits (d)
AS
SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL
SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL
SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL
SELECT 9
CREATE TABLE N
(
i INT NOT NULL CHECK (i>=0) PRIMARY KEY
)
INSERT INTO N (i)
SELECT Ones.d + 10*Tens.d + 100*Hundreds.d + 1000*Thousands.d +
10000*TenThousands.d
FROM Digits AS Ones
CROSS JOIN
Digits AS Tens
CROSS JOIN
Digits AS Hundreds
CROSS JOIN
Digits AS Thousands
CROSS JOIN
Digits AS TenThousands
CREATE VIEW Calendar (d)
AS
SELECT CAST('19000101' AS DATETIME) + N.i
FROM N
CREATE TABLE Issues
(
issue_id INT NOT NULL PRIMARY KEY,
issue_date DATETIME NOT NULL
)
-- Sample data
INSERT INTO Issues (issue_id, issue_date)
VALUES (1, '20050117')
INSERT INTO Issues (issue_id, issue_date)
VALUES (2, '20050118')
INSERT INTO Issues (issue_id, issue_date)
VALUES (3, '20050119')
INSERT INTO Issues (issue_id, issue_date)
VALUES (4, '20050124')
INSERT INTO Issues (issue_id, issue_date)
VALUES (5, '20050127')
INSERT INTO Issues (issue_id, issue_date)
VALUES (6, '20050207')
-- This is a purely relational solution. If you need to do
-- more general queries, this might be helpful.
-- start_date and end_date is the overall period
-- interval is a number of days
-- period_start and period_end is the period we're tallying where
-- period_start is inclusive and period_end is exclusive
-- issue_tally is the tally for the period
CREATE VIEW IssueTally
(start_date, end_date, interval,
period_start, period_end, issue_tally)
AS
SELECT SD.d, ED.d, L.i,
SD.d + L.i * T.i, SD.d + L.i * (T.i + 1),
COUNT(I.issue_id)
FROM Calendar AS SD -- start date
INNER JOIN
Calendar AS ED -- end date
ON ED.d > SD.d
INNER JOIN
N AS L -- interval length
ON L.i > 0
INNER JOIN
N AS T -- number of intervals
ON T.i <= DATEDIFF(DAY, SD.d, ED.d) / L.i
LEFT OUTER JOIN
Issues AS I
ON I.issue_date >= SD.d + L.i * T.i AND
I.issue_date < SD.d + L.i * (T.i + 1)
GROUP BY SD.d, ED.d, L.i, T.i
SELECT start_date, end_date, interval,
period_start, period_end, issue_tally
FROM IssueTally
WHERE start_date = '20050103' AND
end_date = '20050605' AND
interval = 7
ORDER BY period_start
JAG

Thank you very much. That helped.
Friday, March 23, 2012
Month Name
The SQL statement below works to provide the number of help desk requests submitted per month for help_company_id = 1.
The problem is that the months are returned as numbers (1 for January). I tried using the DATENAME function but all the months were returned as January. This is how I tried to use it: SELECT DATENAME(month, MONTH(help_date_submitted)) AS Month, COUNT(help_request_id) AS Requests
How do I convert the month numbers to month names?
SELECT MONTH(help_date_submitted) AS Month, COUNT(help_request_id) AS Requests
FROM help_requests
WHERE (help_company_id = 1)
GROUP BY MONTH(help_date_submitted)
ORDER BY Month
Hi,
You should pass the current date to the DATENAME function as follows:
SELECT DATENAME(mm, GETDATE())
Assuming that this application does not need to be localized, and the SQL Server is running in the language you want the output in, then:
SELECT DATENAME(month, help_date_submitted) AS Month, COUNT(help_request_id) AS Requests
will work for you.
When I try SELECT DATENAME(month, help_date_submitted) AS Month, COUNT(help_request_id) AS Requests the following error is returned:
Column 'help_requests.help_date_submitted' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
SELECT MONTH(help_date_submitted) AS MonthNo, MIN(DATENAME(month, help_date_submitted)) AS Month, COUNT(help_request_id) AS Requests
FROM help_requests
WHERE (help_company_id = 1)
GROUP BY MONTH(help_date_submitted)
ORDER BY MonthNo
Not exactly straightforward, but it'll work. The problem is that SQL Server doesn't see the month number (returned by the MONTH function) and the month name as being related. So you have to play some games with it, especially since you want it ordered by the month number rather than the month name.
Actually, as long as you focus on the grouped values you can do it. Here is a sample query to demonstrate:

select
    datename(month, dateadd(month, month(crdate) - 1, 0)) [Month]
    ,count(*) CountOfMonth
from sysobjects
group by month(crdate)
order by month(crdate)
So, using this logic for the user's query you would have:

select
    datename(month, dateadd(month, month(hr.help_date_submitted) - 1, 0)) [Month]
    ,count(hr.help_request_id) Requests
from help_requests hr
where hr.help_company_id = 1
group by month(hr.help_date_submitted)
order by month(hr.help_date_submitted)

A couple of other points - I am sure you know, but this does group multiple years into each month. Is that the desired behavior? (Just asking.)
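A side note on why the original attempt misbehaved: DATENAME(month, MONTH(help_date_submitted)) always returns January because the integer that MONTH() yields (1 through 12) is implicitly converted to a datetime that many days after the 1900-01-01 base date, and all of those dates fall in January. A quick illustration:

```sql
-- An int in a date position is treated as days since 1900-01-01,
-- so 3 becomes 1900-01-04 and the month name is always 'January'.
SELECT DATENAME(month, 3) AS MonthName
```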
month and 2 digit year
How can I get the month and 2 digit year from a date? For example, how could
I return the month/year like my DESIRED RESULTS 1 & 2 below? Both examples
equal January, 2005? DATEPART seems to only allow 4 digit years.
CODE:
SELECT DATEPART(yy, GETDATE())
Desired Results 1: 05-1
Desired Results 2: 1-05

1.
SELECT REPLACE(LEFT(CONVERT(char(8), GETDATE(), 2),5),'.','-')
2.
SELECT RIGHT(CONVERT(char(8), GETDATE(), 5),5)
"Scott" wrote:
Wednesday, March 21, 2012
Monitoring table size
We have a table whose size we wish to keep below 200 records. My thought is
to run a query every minute via SQL Agent to get the record count. What I am
wondering is if it is possible to log the result of the query as a perfmon
counter so that we can both graph it in real time in perfmon and also alert off
it in Microsoft Operations Manager.
Thanks,
Mark
"Mark Murphy" <viosk@.newsgroup.nospam> wrote in message
news:7AE5DEC0-1F6C-4D60-AC99-FB50A13EFAD9@.microsoft.com...
And how will you react on this? If there's some action that lends itself
to automation, then a trigger is probably the most appropriate means.
That way you reduce load on the db and do the checks only when they are
necessary (i.e. on insertion).
Kind regards
robert
|||Hi Mark
In SQL Server 2000, there are user defined counters that can log any value
you send as a parameter. For example, if you had this statement: EXEC
sp_user_counter1 10, you could watch SQL Server User Counter 1 in a perfmon
graph, and see the value 10.
But you have to explicitly call the procedure to set the value, and you
could do that in a trigger every time the data in the table changes. The
trigger would do a select count(*), save the result into a variable, and
pass the variable to the sp_user_counter procedure. The trigger could also
check for the value being over your limit, and take the alerting action,
rather than using SQL Server's alert mechanism.
HTH
Kalen Delaney
SQL Server MVP
www.SolidQualityLearning.com
"Mark Murphy" <viosk@.newsgroup.nospam> wrote in message
news:7AE5DEC0-1F6C-4D60-AC99-FB50A13EFAD9@.microsoft.com...
> We have a table whose size we wish to keep below 200 records. My thought is
> to run a query every minute via SQL agent to get the record count. What I am
> wondering is if it is possible to log the result of the query as a perfmon
> counter so that we can both graph it realtime in perfmon and also alert off
> it in Microsoft Operations Manager.
> Thanks,
> Mark
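A rough sketch of the trigger Kalen describes (the table name MyTable and the 200-row limit are placeholders; sp_user_counter1 is the SQL Server 2000 user-counter procedure):

```sql
-- Sketch only: MyTable stands in for the monitored table.
CREATE TRIGGER trg_MyTable_RowCount
ON MyTable
AFTER INSERT, DELETE
AS
BEGIN
    DECLARE @cnt int
    SELECT @cnt = COUNT(*) FROM MyTable
    -- Publishes the count as "SQL Server User Counter 1" in perfmon
    EXEC sp_user_counter1 @cnt
    -- Optional alerting action taken here instead of SQL Server's alert mechanism
    IF @cnt > 200
        RAISERROR ('Row count %d exceeds the 200-row limit.', 10, 1, @cnt) WITH LOG
END
```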
|||Thanks Kalen and Robert,
Exactly the advice I was looking for. We have a sproc that needs to poll
the table for new records periodically (We know it's less efficient, but it's
beyond our control). I'll set the counter there.
-Mark
Monday, February 20, 2012
Money datatype Substring
I have a sql query listed below. I would like to count the number of values
where the first 5 characters are '80438' in the iNum field.
Please help me correct the sql query listed below.
Thank You,
SELECT count(iNum) from Call_Movement
where substring(cast(iNum as money) as char(20),1,5) like '80438'
iNum Money Format
iNum Data In Call_Movement Table
803482000146220.0000
803482000147143.0000
803482000153805.0000

Try,
SELECT count(iNum)
from Call_Movement
where ltrim(iNum) like '80348%'
go
Expect a table or index scan.
AMB
"Joe K." wrote:
> I have a sql query listed below, I would like to count the number of values
> that
> the first 5 characters are '80438' from iNum field.
> Please help me correct the sql query listed below.
> Thank You,
> SELECT count(iNum) from Call_Movement
> where substring(cast(iNum as money) as char(20),1,5) like '80438'
> iNum Money Format
> iNum Data In Call_Movement Table
> 803482000146220.0000
> 803482000147143.0000
> 803482000153805.0000
>
|||Joe
create table #test (c1 varchar(100))
insert into #test values ('80438hwdgyde5787kioui')
insert into #test values ('5838hwdgyde5787kioui')
insert into #test values ('80438mjji')
select count(c1) from #test where c1 like '80438%'
"Joe K." <Joe K.@.discussions.microsoft.com> wrote in message
news:A5FC4F29-928C-4D6D-B8F6-4A964D6711B2@.microsoft.com...
> I have a sql query listed below, I would like to count the number of
> values
> that
> the first 5 characters are '80438' from iNum field.
> Please help me correct the sql query listed below.
> Thank You,
> SELECT count(iNum) from Call_Movement
> where substring(cast(iNum as money) as char(20),1,5) like '80438'
> iNum Money Format
> iNum Data In Call_Movement Table
> 803482000146220.0000
> 803482000147143.0000
> 803482000153805.0000
>
|||I received the following error when applying the query listed below.
Please help me resolve the query error listed below.
Thanks,
Error:
Implicit conversion from data type money to varchar is not allowed. Use the
CONVERT function to run this query.
SQL Query:
SELECT count(iNum)
from Call_Movement
where ltrim(iNum) like '80348%'
___________________________________________________________________
"Joe K." wrote:
> I have a sql query listed below, I would like to count the number of values
> that
> the first 5 characters are '80438' from iNum field.
> Please help me correct the sql query listed below.
> Thank You,
> SELECT count(iNum) from Call_Movement
> where substring(cast(iNum as money) as char(20),1,5) like '80438'
> iNum Money Format
> iNum Data In Call_Movement Table
> 803482000146220.0000
> 803482000147143.0000
> 803482000153805.0000
>|||Joe,
SELECT count(iNum)
from Call_Movement
where convert(varchar(50), iNum, 0) like '80348%'
AMB
"Joe K." wrote:
> I received the following error when applying the query listed below.
> Please help me resolve the query error listed below.
> Thanks,
> Error:
> Implicit conversion from data type money to varchar is not allowed. Use the
> CONVERT function to run this query.
> SQL Query:
> SELECT count(iNum)
> from Call_Movement
> where ltrim(iNum) like '80348%'
> ___________________________________________________________________
> "Joe K." wrote:
>
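Stepping back from T-SQL, the prefix test that the convert() answer performs can be mirrored in Python on the thread's sample values. A sketch, assuming the two-decimal string form matches what convert(varchar(50), iNum, 0) produces for money:

```python
# Sample iNum values from the thread, held here as plain floats.
values = [803482000146220.0000, 803482000147143.0000, 803482000153805.0000]

# Format with two decimals (money's default varchar style) and test the prefix.
count = sum(1 for v in values if f"{v:.2f}".startswith("80348"))
print(count)  # 3
```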