Showing posts with label rows. Show all posts

Wednesday, March 28, 2012

More Efficient way to Concat Rows

Hi, I have a problem with a query that processes data by concatenating
Units/Modules.
I have one view called Certificates that lists Course IDs, Candidate
Names and Units. As each certificate shows multiple units achieved by
the candidate, I need to transform this data and combine the units into
a single value, so I have one row per certificate rather than one row
per unit.
I have been using this query and several UDFs (unit names, unit
grades, unit award dates, etc.), but as the database has increased in
size the query is timing out. Is there a more efficient way to do
this? Also, the function seems to create duplicates, so I am using
DISTINCT, and this seems to slow it down too. Help!
Thanks
hals_left
SELECT DISTINCT CourseID, CandidateID,
    dbo.GetUnits(CandidateID, CourseID) AS Units
FROM Certificates

CREATE FUNCTION dbo.GetUnits
    (@CandidateID AS smallint, @CourseID AS smallint)
RETURNS varchar(1000)
AS
BEGIN
    DECLARE @str AS varchar(1000)
    SET @str = ''
    SELECT @str = @str + UnitTitle + char(13) + char(10)
    FROM Certificates
    WHERE CandidateID = @CandidateID AND CourseID = @CourseID
    RETURN @str
END|||If this is just formatting for display then it isn't obvious why you
would need to do it in the database. The simplest and most likely
fastest method is in the client or presentation tier. See:
http://msdn.microsoft.com/library/d...etstringmethod(recordset)ado.asp
David Portas
SQL Server MVP
--|||I agree, but the client application (Word 2K Mailmerge) isn't capable
of this.|||Hello, hals_left
To make sure that the function is not called too many times, use a
query like this:
SELECT CourseID, CandidateID,
dbo.GetUnits(CandidateID, CourseID) AS Units
FROM (
SELECT DISTINCT CourseID, CandidateID
FROM Certificates
) x
Razvan
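Razvan's derived table reduces how often the function runs, but the set-based alternative is to let the engine group and concatenate in a single pass, with no scalar UDF and no DISTINCT afterwards. Here is a minimal sketch of that idea using Python's sqlite3 module, with SQLite's GROUP_CONCAT standing in for the T-SQL function; the sample rows are made up, and only the table and column names mirror the post:

```python
import sqlite3

# In-memory stand-in for the Certificates view described in the post.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Certificates (CourseID INT, CandidateID INT, UnitTitle TEXT)"
)
conn.executemany(
    "INSERT INTO Certificates VALUES (?, ?, ?)",
    [(1, 10, "Unit A"), (1, 10, "Unit B"), (1, 11, "Unit A"), (2, 10, "Unit C")],
)

# One pass over the data: group by certificate, concatenate the units
# with a CR/LF separator, exactly one output row per certificate.
rows = conn.execute(
    """
    SELECT CourseID, CandidateID,
           GROUP_CONCAT(UnitTitle, char(13) || char(10)) AS Units
    FROM Certificates
    GROUP BY CourseID, CandidateID
    ORDER BY CourseID, CandidateID
    """
).fetchall()

for course_id, candidate_id, units in rows:
    print(course_id, candidate_id, units.replace("\r\n", " / "))
```

The same shape carries over to SQL Server once a grouped string-concatenation construct is available there; the point is that the grouping happens once, instead of once per output row.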

Monday, March 12, 2012

Monitoring progress on a join

I'm doing a big old join on one table with 10,000,000 rows, and
another with
400,000 rows. As you can imagine, this is taking a long time.

Is there any way to monitor the progress of the join after executing
the sql statement (more specifically, from code)?

(Oh, and any good practices for speeding up this join would be
appreciated, too).

Thanks,

Andrew|||Andrew (ajperrins@hotmail.com) writes:
> I'm doing a big old join on one table with 10,000,000 rows, and
> another with
> 400,000 rows. As you can imagine, this is taking a long time.
> Is there any way to monitor the progress of the join after executing
> the sql statement (more specifically, from code)?

If it's a SELECT statement that produces a result set, I don't think
there is much you can monitor. You can keep an eye on the cpu,
physical_io and memusage columns in sysprocesses, but since you don't
know what the target values are, you can only see that the process is
working, not how much work is left.

If you are also inserting the data into a table, a SELECT with NOLOCK
on the table can give some progress indication. On the INSERT that
is, so if finding the qualifying rows is what takes time, you will
still not see much.

> (Oh, and any good practices for speeding up this join would be
> appreciated, too).

Without any knowledge about the query and the tables, it is difficult
to give precise advice, but start with selective indexes for the WHERE
condition.
If you are not including too many columns, a covering index or two may
be worth investigating.
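The covering-index idea can be seen outside SQL Server too. The sketch below uses Python's sqlite3 and SQLite's EXPLAIN QUERY PLAN as the stand-in for an execution plan; the Orders table, index name, and query are invented for illustration, and in SQL Server you would inspect the plan in Query Analyzer instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (CustomerID INT, OrderDate TEXT, Amount REAL)")

# Index keyed on the WHERE column that also carries the selected
# columns, so the query can be answered from the index alone.
conn.execute("CREATE INDEX ix_cust ON Orders (CustomerID, OrderDate, Amount)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT OrderDate, Amount FROM Orders WHERE CustomerID = 42"
).fetchall()

# The plan reports a covering-index search: no lookup back to the table.
for row in plan:
    print(row[-1])
```

When the index does not carry every selected column, the plan falls back to an index search plus a per-row table lookup, which is the extra cost a covering index avoids.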

--
Erland Sommarskog, SQL Server MVP, sommar@algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||In addition to Erland's notes: if your query is about the only thing
running on the machine, and you have collected information from previous
runs, then you can use

select @@cpu_busy, @@io_busy

to monitor the cpu and i/o usage before and during your execution, and
deduct the progress.

Hope this helps,
Gert-Jan
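The sample-and-deduce approach Gert-Jan describes boils down to simple arithmetic: take a baseline of some monotonic work counter (rows inserted so far, or cumulative I/O), sample it again later, and extrapolate. A small sketch, with the function name and the synthetic numbers made up for illustration:

```python
import time

def estimate_progress(start_time, start_count, now_count, total_count):
    """Extrapolate completion from two samples of a monotonic work
    counter (e.g. rows already inserted, or cumulative I/O since the
    query began). Returns (fraction_done, estimated_seconds_remaining)."""
    done = now_count - start_count
    if done <= 0:
        return 0.0, None  # no measurable progress yet
    fraction = done / (total_count - start_count)
    elapsed = time.monotonic() - start_time
    eta_seconds = elapsed * (1 - fraction) / fraction
    return fraction, eta_seconds

# Synthetic example: 2,500,000 of 10,000,000 rows copied after
# roughly 60 seconds of elapsed time.
start = time.monotonic() - 60.0
fraction, eta = estimate_progress(start, 0, 2_500_000, 10_000_000)
print(f"{fraction:.0%} done, ~{eta:.0f}s remaining")
```

The estimate assumes the work rate stays roughly constant, which is Gert-Jan's caveat as well: it only holds if your query is about the only thing running on the machine.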

Andrew wrote:
> I'm doing a big old join on one table with 10,000,000 rows, and
> another with
> 400,000 rows. As you can imagine, this is taking a long time.
> Is there any way to monitor the progress of the join after executing
> the sql statement (more specifically, from code)?
> (Oh, and any good practices for speeding up this join would be
> appreciated, too).
> Thanks,
> Andrew|||On 15 Sep 2003 12:01:53 -0700 in comp.databases.ms-sqlserver,
ajperrins@hotmail.com (Andrew) wrote:

>I'm doing a big old join on one table with 10,000,000 rows, and
>another with
>400,000 rows. As you can imagine, this is taking a long time.
>Is there any way to monitor the progress of the join after executing
>the sql statement (more specifically, from code)?
>(Oh, and any good practices for speeding up this join would be
>appreciated, too).

The tools you want are in Query Analyser: Show Execution Plan, also
Index analysis (on that data you might want to leave that running
overnight <g>). Don't know about "from code" :-\

--
A)bort, R)etry, I)nfluence with large hammer.

(replace sithlord with trevor for email)

Monday, February 20, 2012

money data-type query

I have a database table with a field of datatype "money".
I need to have this populated with rows that only have data up to a
maximum of two decimal places (i.e. pounds and pence).
However, I have come across some rows that have up to three digits
after the decimal point (i.e. tenths of a penny).
I need to identify all the rows that have more than two digits after
the decimal place.
I thought:
SELECT * FROM myTable where
CAST(myField AS VARCHAR(20)) LIKE '%.[0-9][0-9][1-9]%'
would do the trick.
It doesn't.
Any suggestions would be really appreciated!
Thanks
Griff|||Sorted:
WHERE (myField <> round(myField, 2))
Griff|||"Griff" <Howling@The.Moon> wrote in message
news:%23KcVFIIEGHA.140@TK2MSFTNGP12.phx.gbl...
>I have a database table with a field of datatype "money".
> I need to have this populated with rows that only have data up to a
> maximum of two decimal places (i.e. pounds and pence).
> However, I have come across some rows that have up to 3 significant
> figures after the decimal points (i.e. 1/10th of a penny).
> I need to identify all the rows that have more than two significant
> numbers after the decimal place.
> I thought:
> SELECT * FROM myTable where
> CAST(myField AS VARCHAR(20)) LIKE '%.[0-9][0-9][1-9]%'
> would do the trick.
> It doesn't.
> Any suggestions would be really appreciated!
> Thanks
> Griff
>
Try this:
SELECT x
FROM tbl
WHERE x <> ROUND(x, 2, 1);
Be very careful when using MONEY for monetary amounts. In my opinion the
rounding errors caused by MONEY make DECIMAL a much better choice for
currency values in every case. See the example below. Do you have a good
excuse for using MONEY?
DECLARE
    @mon1 MONEY,
    @mon2 MONEY,
    @mon3 MONEY,
    @mon4 MONEY,
    @num1 DECIMAL(19,4),
    @num2 DECIMAL(19,4),
    @num3 DECIMAL(19,4),
    @num4 DECIMAL(19,4) ;
SELECT
    @mon1 = 100, @mon2 = 339, @mon3 = 10000,
    @num1 = 100, @num2 = 339, @num3 = 10000 ;
SET @mon4 = @mon1/@mon2*@mon3 ;
SET @num4 = @num1/@num2*@num3 ;
SELECT @mon4 AS money_result,
    @num4 AS decimal_result ;
Result:
money_result  decimal_result
------------  --------------
2949.0000     2949.8525
(1 row(s) affected)
David Portas
SQL Server MVP
--|||The money data type stores data up to 4 decimal places. If you only
require 2 decimal places, you could consider storing the data as
decimal or numeric instead.
HTH. Ryan
"Griff" <Howling@.The.Moon> wrote in message
news:OjWSPMIEGHA.344@.TK2MSFTNGP11.phx.gbl...
> Sorted:
> WHERE (myField <> round(myField, 2))
> Griff
>
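David's MONEY vs DECIMAL comparison above can be mimicked with Python's decimal module. This is an illustration only, not T-SQL: the `money()` helper is invented here, and it assumes MONEY-style arithmetic cuts each intermediate result to four decimal places by truncation, which is consistent with the 2949.0000 result shown in the thread:

```python
from decimal import Decimal, ROUND_DOWN

FOURDP = Decimal("0.0001")

def money(x):
    """Mimic a MONEY-style intermediate: keep only 4 decimal places.
    (Truncation is assumed; it reproduces the 2949.0000 above.)"""
    return x.quantize(FOURDP, rounding=ROUND_DOWN)

m1, m2, m3 = Decimal(100), Decimal(339), Decimal(10000)

# MONEY-style: the quotient is cut to 4 decimals before multiplying,
# so 100/339 becomes 0.2949 and the final answer loses almost a pound.
money_result = money(money(m1 / m2) * m3)

# DECIMAL-style: divide at full working precision, round once at the end.
decimal_result = (m1 / m2 * m3).quantize(FOURDP)

print(money_result, decimal_result)

# Griff's round(x, 2) trick from the thread, in the same terms:
# flag values that carry sub-penny digits.
def has_subpenny(x):
    return x != x.quantize(Decimal("0.01"))
```

The takeaway matches David's point: the damage comes from rounding intermediate results, so a type that keeps full precision until the final rounding step gives the answer you expect.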