
Friday, March 30, 2012

More Newbie Questions

Two questions --

Why is it that some of my tables are not updateable, and some are??

And I'm told that there is no "memo" type field for SQLServer, and 8000
characters is the limit. Unless I go to blob or binary. What does that
entail, and what are the consequences?

Thanks!

JA> Why is it that some of my tables are not updateable, and some are??

There could be more than one reason depending on how you are doing the
updating and what you mean by "not updateable". I'll take a guess that you
are using a UI such as Enterprise Manager or Access that allows you to edit
a table. In that case you won't be able to make changes unless your table
has a unique key (constraint or index). Every table should always have a
unique key; otherwise it isn't possible to guarantee that individual rows
can be updated. The solution is to add a primary key constraint to the table.

If I'm wrong then please explain how you are attempting to update the table
and what happens when you try (do you get an error message for example).
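For example, a primary key can be added to an existing table with ALTER TABLE (a minimal sketch; the table and column names here are hypothetical):

```sql
-- Assumes dbo.Products has a ProductId column whose values are unique and NOT NULL
ALTER TABLE dbo.Products
ADD CONSTRAINT PK_Products PRIMARY KEY (ProductId);
```

Once the constraint exists, tools such as Access or Enterprise Manager can identify individual rows, and the table becomes editable in their grids.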

> And I'm told that there is no "memo" type field for SQLServer, and 8000
> characters is the limit. Unless I go to blob or binary. What does that
> entail, and what are the consequences?

VARCHAR columns are limited to 8000 characters. For text up to 2GB in size
you can use the TEXT or NTEXT datatype. TEXT isn't as easy to use in your
SQL code as VARCHAR is and there are some limitations on what you can and
cannot do - for example some of the string functions won't work with TEXT
values. Look up the TEXT datatype in Books Online.
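A minimal sketch of what this looks like in practice (the table and column names are hypothetical):

```sql
-- TEXT holds up to 2 GB; use NTEXT for Unicode text
CREATE TABLE dbo.ProductDescriptions (
    ProductId   INT NOT NULL PRIMARY KEY,
    Description TEXT NULL
);

-- Some string functions do work with TEXT, e.g. DATALENGTH and SUBSTRING:
SELECT DATALENGTH(Description) FROM dbo.ProductDescriptions;

-- Others, such as REPLACE or LEN, do not accept TEXT arguments and will
-- raise an error -- the kind of limitation mentioned above.
```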

--
David Portas
SQL Server MVP
--
|||David,

You're right, I'm using Access 2000. I made a table with a primary key and
had no problem adding data.

I looked up TEXT and NTEXT on Books Online. I don't guess I know enough to
see the problems. The field is for product descriptions that are over 8000
characters (quite a few are). They have HTML tags in them, and are displayed
on an ASP web page. Would there be any problems with that?

Thanks, JA

"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:tMSdne7GYNyvpQXfRVn-1w@.giganews.com...
> [quoted text snipped]

Wednesday, March 28, 2012

More efficient update from linked server

I am updating data from a smaller linked table into a larger table (SQL
Server 2000 SP4, Windows Server 2003 SP1); the data had been inserted
previously. The update currently looks like:

UPDATE tableA
SET colx = (SELECT colx FROM linkedtableB rem
            WHERE rem.pkcol = tableA.pkcol),
    coly = (SELECT coly FROM linkedtableB rem
            WHERE rem.pkcol = tableA.pkcol)
WHERE EXISTS (SELECT * FROM linkedtableB AS rem
              WHERE rem.pkcol = tableA.pkcol)

This seems to run an inordinately long time (in excess of an hour for
this one table, and I have to update 40 more tables). My question is:
what would be a more efficient way to update this data? My reason for
updating this data is that, while doing the insert earlier, I may have
gathered partial data during an operation, thus requiring an update at
some later point.
Any insight into a better solution would be greatly appreciated.
Daniel
danielp,
Try this one:

UPDATE tableA
SET colx = rem.colx, coly = rem.coly
FROM linkedtableB rem
JOIN tableA ON rem.pkcol = tableA.pkcol
"danielp" <danielsmith611@.gmail.com> wrote in message
news:1162310539.241683.96330@.e3g2000cwe.googlegroups.com...
> [quoted text snipped]
|||Hi Daniel
Try
UPDATE A
SET colx = B.colx
coly = B.coly
FROM tableA A
JOIN linkedtableB B ON B.pkcol = A.pkcol
John
"danielp" wrote:
> [quoted text snipped]
|||I missed out a comma!
UPDATE A
SET colx = B.colx,
coly = B.coly
FROM tableA A
JOIN linkedtableB B ON B.pkcol = A.pkcol
John
"John Bell" wrote:
> [quoted text snipped]
|||Thanks Uri and John!
John Bell wrote:
> [quoted text snipped]
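Another option worth trying when a linked server is involved: copy the remote rows into a local temporary table first, so the join runs entirely locally instead of issuing per-row remote lookups, which is often the bottleneck in updates like the one above (a sketch only; the linked server and database names are hypothetical):

```sql
-- Pull the remote data across once...
SELECT pkcol, colx, coly
INTO #stage
FROM LINKEDSVR.RemoteDb.dbo.linkedtableB;

-- ...then do the update with a purely local join
UPDATE A
SET colx = S.colx,
    coly = S.coly
FROM tableA A
JOIN #stage S ON S.pkcol = A.pkcol;

DROP TABLE #stage;
```

Indexing #stage on pkcol before the update may help further if the tables are large.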


Monday, March 26, 2012

More connect feedback required please

[Microsoft follow-up]

I submitted a posting to connect here: https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=287213

Here is what I wrote:

A number of tasks output data to variables or text files. e.g. WMI Data Reader Task, Execute SQL Task, Web Service Task.

Consuming that output in a data-flow isn't particularly easy. You either need a Flat File source adapter (which requires an otherwise superfluous connection manager) or have to write some code in a script component to parse the recordset. There is simply no easy way to push data from these sources into the dataflow.

The built-in mechanism for passing data between different tasks is raw files. Currently they can only be used by the dataflow, but I see no reason why they couldn't be used by other tasks as well. It makes complete sense to me for a WMI Data Reader Task to push some data into a raw file and then for us to chew that data up in a dataflow.

The following response came back

Our current architecture actually doesn't have the buffer system as in Data Flow when you are in the Control Flow. What you are asking would require us to build a similar buffer system in the Control Flow, which is a fundamental architectural change. We'll not be able to take this, sorry.

I'm afraid I don't understand that response. Obviously I know that buffers are only in the data-flow - but I don't see why that's relevant. Raw files are just files on the file system, same as any other. OK, their format is very proprietary, but it's you guys who built the format. Essentially all I'm asking you to do is output the data in raw file format as opposed to flat file format. There's no notion of buffers in a raw file because it's just a lump of data. Or is there? If not, I'm afraid I don't understand your answer.

Please could you clarify the answer?

-Jamie

Hi,

Are the [Microsoft follow-up] tags not working anymore? Smile

-Jamie

|||

They do work. Sometimes it is not easy to find a brave soul to catch a hot potato. Smile

I was waiting for somebody from the group that triaged this item to respond, but let me present my view:

The raw files are basically data flow buffers streamed into files. More or less, it is a collection of "raw" data flow buffers copied byte by byte. It would be possible to mimic this format from other tasks, but it would not be a trivial task. The data other tasks are dealing with is structured differently, and it would probably need to go through chains of adaptations/conversions to make it conform to the metadata parameters understood by the data flow.

I understand your motivation behind this request and it is more than sensible. I just wish we had a standardized metadata system across runtime and pipeline that would allow us to easily implement this.

Thanks,

Bob

|||

Bob Bojanic - MSFT wrote:

They do work. Sometimes it is not easy to find a brave soul to catch a hot potato.

Fair enough. Credit to you then Bob for picking it up!

Bob Bojanic - MSFT wrote:

I was waiting for somebody from the group that triaged this item to respond, but let me present my view:

The raw files are basically data flow buffers streamed into files. More or less, it is a collection of "raw" data flow buffers copied byte by byte. It would be possible to mimic this format from other tasks, but it would not be a trivial task. The data other tasks are dealing with is structured differently, and it would probably need to go through chains of adaptations/conversions to make it conform to the metadata parameters understood by the data flow.

I understand your motivation behind this request and it is more than sensible. I just wish we had a standardized metadata system across runtime and pipeline that would allow us to easily implement this.

Thanks,

Bob

OK, that kinda makes sense. Here's another, even hotter potato for you then. Why DON'T you have "a standardized metadata system across runtime and pipeline"? It has always seemed strange to me that (e.g.) variable data types are different from pipeline data types.

-Jamie

|||

Jamie Thomson wrote:


OK, that kinda makes sense. Here's another even hotter potato for you then. Why DON'T you have "a standardized metadata system across runtime and pipeline ". It has always seemed strange to me that (e.g.) variable data types are different to pipeline data types.

-Jamie

HAHAHA! You know, that's so true. If, in your example, we had consistent data types between variables, pipelines, parameter mappings, etc... consumer adoption of SSIS would be much greater and our jobs on this forum would be made SO much easier.

|||

Jamie Thomson wrote:

Here's another even hotter potato for you then. Why DON'T you have "a standardized metadata system across runtime and pipeline ". It has always seemed strange to me that (e.g.) variable data types are different to pipeline data types.

Hmm. I can try to answer it but have to watch not to burn my fingers too much. Smile

IMHO, and I can be off since these designs/implementations date before my time with this group, we focused so much on breaking DTS into two functional modules (runtime and pipeline) that we failed to realize some obvious synergies they have. We started building two separate modules from the start and divided our resources around them. That resulted in separate designs, implementations and overall look and feel. We even had two hamsters, at some point, named "Runtime" and "Pipeline". Smile

Again, this is only my view and other people on the team might have different insights. I wish we had done a better job on this, and I believe we should do some serious thinking about making it better. It is not likely we will be able to improve much of it in Katmai, though.

Also, to be fair, coming up with a standardized data type system that could be used for many purposes is quite a complex task, given the ambiguity of implementations in existing data sources and the apparent lack of standardization. Even a seemingly simple tool like the Import/Export wizard, which is supposed to move data between all sorts of heterogeneous data sources, faces loads of problems in its quest to make transfers "simply work". In a perfect world, we would have standardized implementations of data type systems, provided by a platform (OS), that all other layers (DBs, CLR, tools and services) could plug into, and we would not need to worry about these things.

Anyway, I hope this adds some light…

Thanks,

Bob

|||

Wow. That's a very candid response Bob. Thank you very much, I really appreciate the honesty.

As always, if we in the community can help in any way to shape what happens here in the future then come and knock on our door.


-Jamie

|||

Thank you, Jamie, for keeping us honest.

The community is here to keep us both inspired and humble at the same time. I like it for that.

|||

Now here's something I keep thinking about

SSIS is trying to be all things to all people. Why? (The telephone toaster springs to mind at this point.)

Oracle and SQL are both now excellent products. Chances are your typical Oracle system will appear to be better because its design will be of higher quality. This is only because the barriers to entry are so high, unlike MS, where any muppet can come along and design some crappy Access database, migrate it using a wizard, and then think they are a DBA. This is known and historic, and as time goes by the overall design quality of SQL databases will meet that of Oracle.

This is also true because most SQL installations originally came about because people didn't, couldn't, or wouldn't spend the money on hiring Oracle and all their costly consultants to come and set up a database for them. But they also wouldn't spend the money on training people how to use SQL properly either.

This means that any company using Oracle or SQL is likely to stay doing so, and any new companies are likely to use SQL. It's "cheaper". MS cleverly understand that the extra costs of getting MS stuff going is not measurable on the balance sheet Smile

So assuming this logic, I would bet money that the number of people using SSIS in a non-SQL environment is on a par with the number of people using obscure (non-IE/FF) browsers.

If you had a website, you wouldn't even bother spending the money trying to make it work with the obscure browser. It is not worth the pain.

So assume that no CIO in their right mind is going to sanction switching a (probably working very nicely, thank you) Oracle ETL tool to SSIS. Why do MS persist with this strategy of trying to make SSIS all things to all people?

Just make all of the SSIS datatypes the same as SQL! That's where the data is going to largely end up anyway.

I would bet that 99% of SSIS connection managers are the following:

SQL connection; flat file/CSV connection; Excel connection.

For the 1% (and please, someone from MS, correct me on these figures - you do after all capture 'usage' information) of other data sources, why not have data converters to just convert them all to SQL data types?

|||Personally, I have done a fair percentage of my work in SSIS against non-SQL Server databases, so I really like their strategy. If SSIS didn't work with other databases, it would be fairly useless as an enterprise ETL tool.|||

jwelch wrote:

Personally, I have done a fair percentage of my work in SSIS against non-SQL Server databases, so I really like their strategy. If SSIS didn't work with other databases, it would be fairly useless as an enterprise ETL tool.

Couldn't agree more. Many of the SSIS packages I have built don't go near SQL Server except for logging purposes.

-Jamie

|||

I'm just basing it on my experience of the FTSE-listed companies I have worked at or consulted for since 1995.

Out of 10, only 1 was Oracle, the rest were SQL and only SQL.

I also don't remember attending any interviews at places where they were also using Oracle etc.

For every multinational giant, there are hundreds of SMEs all running SQL.

--

So if you have another DB plus SQL Server, you are still having to convert to SSIS types along the way.

At least if SSIS used SQL types, you would only have to convert once.

|||

We do not live inside SQL Server, and simply reusing the SQL types is not going to help us much under the hood. There is still an additional layer between us and the SQL engine, and conversions are not easy to avoid. It would possibly make the exposed semantics simpler, but only if we were able to consolidate the data type system across the entire product.

The devil lies in the details -- making these conversions transparent and seamless is quite a delicate business, and its complexity jumps exponentially as we increase the number of considered data sources.

Thanks,

Bob

|||

Product suggestion:

Allow the package to have an optional mode:

"CommonDataTypeMode"

which you can set to SQL Server, Oracle, etc.

Setting this would automatically default all of your data types to the chosen mode, e.g. any datetime would default to DT_DBTIMESTAMP.

I am wasting so much time having to go in and 'correct'* data type information.

What is gained by having it:

Less time spent typing into minuscule textboxes.

Fewer errors due to always having the correct type.

*I have also raised a bug on the connect site regarding SSIS changing your datatypes when you change an expression in the derived column editor. I haven't heard anything back and don't expect to.


More connect feedback required please

[Microsoft follow-up]

I submitted a posting to connect here: https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=287213

Here is what I wrote:

A number of tasks output data to variables or text files. e.g. WMI Data Reader Task, Execute SQL Task, Web Service Task.

Consuming that output in a data-flow isn't particularly easy. You either need a Flat File source adapter (which requires an othrewise superfluous connection manager) or write come code in a script component to parse the recordset. There is simply no easy way to push data from these sources into the dataflow.

Thw built-in mechanism for passing data between different tasks is raw files. Currently they can only be used by the dataflow but I see no reason why they couldn't be used by other tasks as well. It makes complete sense to me for a WMI Datareader Task to push some data into a raw file and then we chew that data up in a dataflow.

The following response came back

Our current architecture actually doesn't have the buffer system as in Data Flow, when you are in the Control Flow. What you are asking would require us to build a similar buffer system in the Control Flow, which is a fundemantal architectural change. We'll not be able to take this, sorry.

I'm afraid I don't understand that response. Obviously I know that buffers are only in the data-flow - but I don't see why that's relevant. Raw files are just files on the file system, same as any other. OK, their format is very proprietary but its you guys that built the format. Essentially all I'm asking you to do is output the data in raw file format as opposed to flat file format. There's no notion of buffers in a raw file because its just a lump of data. Or is there? If not, I'm afraid I don't understand your answer.

Please could you clarify the answer?

-Jamie

Hi,

Are the [Microsoft follow-up] tags not working anymore? Smile

-Jamie

|||

They do work. Sometimes, is not easy to find a brave soul to catch a hot potato. Smile

I was waiting for somebody from the group that triaged this item to respond, but let me present my view:

The row files are basically data flow buffers streamed into files. More or less it is collection of "raw" data flow buffers copied byte by byte. It would be possible to mimic this format from other tasks but it would not be a trivial task. The data other tasks are dealing with is differently structured and it will probably need to go through chains of adaptations/conversions to make it conform to the metadata parameters understandable by the data flow.

I understand your motivation behind this request and it is more than sensible. I just wish we had a standardized metadata system across runtime and pipeline that would allow us to easily implement this.

Thanks,

Bob

|||

Bob Bojanic - MSFT wrote:

They do work. Sometimes, is not easy to find a brave soul to catch a hot potato.

Fair enough. Credit to you then Bob for picking it up!

Bob Bojanic - MSFT wrote:

I was waiting for somebody from the group that triaged this item to respond, but let me present my view:

The row files are basically data flow buffers streamed into files. More or less it is collection of "raw" data flow buffers copied byte by byte. It would be possible to mimic this format from other tasks but it would not be a trivial task. The data other tasks are dealing with is differently structured and it will probably need to go through chains of adaptations/conversions to make it conform to the metadata parameters understandable by the data flow.

I understand your motivation behind this request and it is more than sensible. I just wish we had a standardized metadata system across runtime and pipeline that would allow us to easily implement this.

Thanks,

Bob

OK, that kinda makes sense. Here's another even hotter potato for you then. Why DON'T you have "a standardized metadata system across runtime and pipeline ". It has always seemed strange to me that (e.g.) variable data types are different to pipeline data types.

-Jamie

|||

Jamie Thomson wrote:


OK, that kinda makes sense. Here's another even hotter potato for you then. Why DON'T you have "a standardized metadata system across runtime and pipeline ". It has always seemed strange to me that (e.g.) variable data types are different to pipeline data types.

-Jamie

HAHAHA! You know, that's so true. If, in your example, we had consistent data types between variables, pipelines, parameter mappings, etc... consumer adoption of SSIS would be much greater and our jobs on this forum would be made SO much easier.

|||

Jamie Thomson wrote:

Here's another even hotter potato for you then. Why DON'T you have "a standardized metadata system across runtime and pipeline ". It has always seemed strange to me that (e.g.) variable data types are different to pipeline data types.

Hmm. I can try to answer it but have to watch not to burn my fingers too much. Smile

IMHO, and I can be off since these designs/implementations date before my time with this group, we focused too much on breaking DTS into two functional modules (runtime and pipeline) that we failed to realize some obvious synergies they have. We started building two separate modules from the start and divided our resources around them. That resulted in separate designs, implementations and overall look and feel. We even had two hamsters, at some point, named "Runtime" and "Pipeline". Smile

Again, this is only my view and other people on the team might have different insights. I wish we did a better job on this, and I believe we should have a serious thinking around making it better. It is not likely we will be able to improve much of it in Katmai, though.

Also, to be fair, coming up with standardized data type system that could be used for many purposes is quite a complex task given the ambiguity of implementations in existing data sources and apparent lack of standardization. Even seemingly simple tool like the Import/Export wizard, which is supposed to move data between all sorts of heterogeneous data sources, faces loads of problems in its quest to make the transfers "simply work". In a perfect world, we should have had standardized implementations of data type systems, provided by a platform (OS), all other layers (DBs, CLR, tools and services) can plug into, and we would not need to worry about these things.

Anyway, I hope this adds some light…

Thanks,

Bob

|||

Wow. That's a very candid response Bob. Thank you very much, I really appreciate the honesty.

As always, if we in the community can help in any way to shape what happens here in the future then come and knock on our door.


-Jamie

|||

Thank you, Jamie, for keeping us honest.

The community is here to keep us both; inspired and humble, at the same time. I like it for that.

|||

Now here's something I keep thinking about

SSIS trying to be all things to all people. Why? (the telephone toaster spring to mind at this point)

Oracle and SQL are both now excellent products. Chances are your typical Oracle system will appear to be better because the design of it will be of higher quality. This is only because the barriers to entry are so high, unlike MS where any muppet can come along and design some crappy Access database, migrate using a wizard, and then think they are a dba. This is known and historic and as time goes by the overall design quality of SQL db will meet that of Oracle.

This is also true because most SQL installations originally came about because people didn't or couldn't or wouldn't spend the money on hiring Oracle and all their costly consultants to come and set them up a db. But also they wouldn't spend the money on training people how to use SQL properly either.

This means that any company using Oracle or SQL is likely to stay doing so, and any new companies are likely to use SQL. It's "cheaper". MS cleverly understand that the extra costs of getting MS stuff going is not measurable on the balance sheet Smile

So assuming this logic, I would bet money that the number of people using SSIS in a non SQL environment is on a par with the number of people using obscure (non IE/FF) browsers.

If you had a website, you wouldn't even bother spending the money trying to make it work with the obscure browser. It is not worth the pain.

So assume that no CIO in their right mind is going to sanction the switching of a (probably working very nicely thankyou) Oracle ETL tool, to SSIS. Why do MS persist with this strategy of trying to make SSIS all things to all people?

Just make all of the SSIS datatypes the same as SQL! That's where the data is going to largely end up anyway.

I would bet that 99% of SQL connection managers are the following:

sql connection; flat file/csv connection; excel connection.

For the 1% (and please someone from MS correct me on these figures - you do after all capture 'usage' information) of other datasources, why not have dataconverters, to just convert them all to sql datatypes?

|||Personally, I have done a fair percentage of my work in SSIS against non-SQL Server databases, so I really like their strategy. If SSIS didn't work with other databases, it would be fairly useless as an enterprise ETL tool.|||

jwelch wrote:

Personally, I have done a fair percentage of my work in SSIS against non-SQL Server databases, so I really like their strategy. If SSIS didn't work with other databases, it would be fairly useless as an enterprise ETL tool.

Couldn't agree more. Many of the SSIS I have built don't go near SQL Server except for logging purposes.

-Jamie

|||

I'm just basing it on my experience of the FTSE listed companies I have worked/consulted at since 1995.

Out of 10, only 1 was Oracle, the rest were SQL and only SQL.

I also don't remember attending any interviews at places where they were also using Oracle etc.

For every multinational giant, there are hundreds of SMEs all running SQL.

--

So if you have another DB and SQL Server, you are still having to convert to SSIS types along the way.

At least if they used SQL types, you would only have to convert once.

|||

We do not live inside SQL Server, and simply reusing the SQL types is not going to help us much under the hood. There is still an additional layer between us and the SQL engine, and conversions are not easy to avoid. It would possibly make the exposed semantics simpler, but only if we were able to consolidate the data type system across the entire product.

The devil lies in the details -- making these conversions transparent and seamless is quite a delicate business, and its complexity jumps exponentially as we increase the number of considered data sources.

Thanks,

Bob

|||

Product suggestion:

Allow the package to have an optional mode.

"CommonDataTypeMode"

Which you can set to SQL Server, Oracle etc

By setting this, it will automatically default all of your datatypes to the chosen mode, e.g. any datetime will default to DT_DBTIMESTAMP.

I am wasting so much time having to go in and 'correct'* data type information.

What is gained by having it:

Less time spent typing into minuscule textboxes.

Fewer errors due to always having the correct type.

*I have also raised a bug on the connect site regarding SSIS changing your datatypes when you change an expression in the derived column editor. I haven't heard anything back and don't expect to.
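The suggested "CommonDataTypeMode" would amount to little more than a lookup table from a chosen mode's native types to SSIS pipeline types. The sketch below is hypothetical Python, purely for illustration: the mode name, function, and mapping are invented, and only the DT_* and SQL Server type names come from the actual products.

```python
# Hypothetical sketch of a "CommonDataTypeMode" set to SQL Server:
# each native SQL Server type gets a default SSIS pipeline type.
# This is not an official SSIS feature; the mapping is illustrative.
SQL_SERVER_MODE = {
    "datetime": "DT_DBTIMESTAMP",  # datetime -> DB timestamp
    "int": "DT_I4",                # 32-bit signed integer
    "varchar": "DT_STR",           # non-Unicode string
    "nvarchar": "DT_WSTR",         # Unicode string
    "money": "DT_CY",              # currency
}

def default_pipeline_type(sql_type: str, mode: dict = SQL_SERVER_MODE) -> str:
    """Return the pipeline type the chosen mode would default to."""
    try:
        return mode[sql_type.lower()]
    except KeyError:
        raise ValueError(f"no default mapping for {sql_type!r}")

print(default_pipeline_type("datetime"))  # DT_DBTIMESTAMP
```

With a table like this applied at design time, a datetime column would never need to be "corrected" by hand in the derived column editor.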

Friday, March 23, 2012

monthly database merging

Hi,

I have to set up a project in SQLServer Express that will allow the export of tables from multiple PC's running SQLServer Express each month.

These have to be loaded into a single identical database on SQLServer Express each month for combined reporting.

so it's basically

insert update on PC

export

import, overlaying last month's data (handle dup keys from the other PCs etc)

report

I've had a look at the SQLServer replication docs and got confused....

So if anyone can point me at appropriate documentation, or suggest a good method for this it would be appreciated

Thanks

SQL Server 2005 Express to SQL Server 2005 Express does NOT support replication. However there is a Import/Export tool (DTSWizard.exe) included with the Express Toolkit (Note: Requires SQL Express SP1 or higher.)

Download the Express Toolkit (and SQL Express SP2) here: http://msdn.microsoft.com/vstudio/express/sql/download/

After installing the toolkit, the Import/Export tool (called DTSWizard.exe) is located at:

"C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTSWizard.exe".

Add a shortcut to SSMSE: [Tools], [External Tools], name and add the shortcut by linking to the DTSWizard.exe file above.

In order to Schedule the process, you may have to employ the Windows Scheduler and add a Task that will run DTSRun.exe.

I recommend creating 'staging' tables, identical schemas to the actual tables. Load your data into the staging tables, then you can move the conforming data to the actual tables, leaving behind the non-conforming data for 'clean-up'.

|||

Thanks

Peter

monthly database merge

Hi,

I have to set up a project in SQLServer Express that will allow the export of tables from multiple PC's running SQLServer Express each month.

These have to be loaded into a single identical database on SQLServer Express each month for combined reporting.

so it's basically

insert update on PC

export

import, overlaying last month's data (handle dup keys from the other PCs etc)

report

I've had a look at the SQLServer replication docs and got confused....

Can I do this with replication ?

So if anyone can point me at appropriate documentation, or suggest a good method for this it would be appreciated

Thanks

Yes, it's possible to do it with replication.

Look up the topic on merge replication on MSDN.

Also check out the following links for some additional info:

http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=1527294&SiteID=17

http://msdn2.microsoft.com/en-us/library/ms151819.aspx

Gary

|||

Hello Peter,

Express Edition is not able to be a Publisher. It's capable of being a Subscriber only.

Check header "Integration and Interoperability" from the following link:

Features Comparison:

http://www.microsoft.com/sql/prodinfo/features/compare-features.mspx

and these links to learn more about SQL Server Replication from BOL:

Replication Basics for Express Edition:

http://msdn2.microsoft.com/en-us/library/ms165700.aspx

SQL Server Replication:

http://msdn2.microsoft.com/en-us/library/ms151198.aspx

Ekrem ?nsoy

|||

Unfortunately that was the documentation that confused me. Is there something a little more practical and less theoretical around?

|||

The remote PCs in my system are self-sufficient (i.e. they don't need to receive data back from the merge) and are not physically connected to the machine that aggregates the data.

The documentation seems to imply that although I can set up a copy of SQL Server Express to merge the data received on CD from the remote sites, I cannot set up the SQL Server Express remote sites to generate the replication data to the CDs.

Do you agree with that statement, from your experience?

Wednesday, March 21, 2012

monitoring tool

Which real-time monitoring tool do you use for SQL Server?
thanks
Message posted via droptable.com
http://www.droptable.com/Uwe/Forum...server/200605/1

What do you want to monitor? Perfmon (Performance Monitor) is extremely useful for looking at hardware stats and certain critical SQL metrics. The sysprocesses table is full of good info about what's running in your SQL environment.

|||We use NetIQ for monitoring critical SQL Server conditions such as SQL Server severe errors, stopped SQL instance, SQL instance restart, database that can't be used, port binding failure, SQL instance that has changed its cluster owner node, and so on.
Linchi
"jkostic via droptable.com" wrote:

> Which real-time monitoring toll you use for SQLServer ?
> thanks
> --
> Message posted via droptable.com
> http://www.droptable.com/Uwe/Forum...server/200605/1
>|||We use a suite of tools from Compuware.


Monitoring the server

I am new to the forum and have a question. I would like to monitor SQL Server 2000 to make sure that it is running fine. Sometimes our server can't be connected to even though it is running. I have to re-start SQL Server in order to solve the problem. I am not sure why this happens. It would be nice if I could monitor the server. Does anyone know how to do it? Can I monitor certain port(s)? Or ping those ports? Thanks in advance for your help
"Skylander" <Skylander@.discussions.microsoft.com> wrote in message
news:10710AC1-A2C8-45D1-9CE5-A7032970D502@.microsoft.com...
> I am new to the forum and have a question. I would like to monitor the
SQL
> Server 2000 to make sure that it is running fine. Sometimes our server
> couldn't be connected even the server is running. I have to re-start the
SQL
> Server in order to solve the problem. I am not sure why does this happen.
> That would be nice if I could monitoring the server. Does anyone know how
> to do it? Can I monitoring certain port(s)? Or ping those ports? Thanks
in
> advance for your help
There are a variety of monitoring options available, from high-end to basic:
Microsoft Operation Monitor (MOM), network monitor, performance monitor,
home grown application.
Also, you'll want to give this article a read:
http://support.microsoft.com/default...b;en-us;827422
Steve
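On the "can I monitor certain ports" question: ping alone won't tell you whether SQL Server is accepting connections, but a cheap check is to attempt a TCP connect to the instance's port (1433 by default for SQL Server). A minimal sketch in Python; the host name is a placeholder:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("dbserver", 1433) for a default SQL Server instance
```

A check like this only proves the listener is up; the server can still be wedged internally, so it complements (rather than replaces) error-log monitoring.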


Monitoring the server

I am new to the forum and have a question. I would like to monitor SQL Server 2000 to make sure that it is running fine. Sometimes our server can't be connected to even though it is running. I have to re-start SQL Server in order to solve the problem. I am not sure why this happens. It would be nice if I could monitor the server. Does anyone know how to do it? Can I monitor certain port(s)? Or ping those ports? Thanks in advance for your help
Hi
There are quite a few different ways to monitor your server, but for your
immendiate problem you will have to look at the SQL Server error log or the
event log to see if there are any messages relating to the times you are
having the problems. Going forward you may want to look at third party tools
such as Tivoli or Perfmon or SQL Server Alerts or your own scripts that
monitor various aspects of the system.
John
"Skylander" wrote:

> I am new to the forum and have a question. I would like to monitor the SQL
> Server 2000 to make sure that it is running fine. Sometimes our server
> couldn't be connected even the server is running. I have to re-start the SQL
> Server in order to solve the problem. I am not sure why does this happen.
> That would be nice if I could monitoring the server. Does anyone know how
> to do it? Can I monitoring certain port(s)? Or ping those ports? Thanks in
> advance for your help
>
|||Hi John,
Thanks for your quick reply. I have an application called 'SiteScope' that
can monitor ports and stuff (I don't know the detailed functionality of this
application). Can I set that up to monitor certain SQL Server components to
make sure they're running fine?
BTW, is SQL Server Alert part of the components of SQL Server 2000? Thanks
Vince
"John Bell" wrote:
[vbcol=seagreen]
> Hi
> There are quite a few different ways to monitor your server, but for your
> immendiate problem you will have to look at the SQL Server error log or the
> event log to see if there are any messages relating to the times you are
> having the problems. Going forward you may want to look at third party tools
> such as Tivoli or Perfmon or SQL Server Alerts or your own scripts that
> monitor various aspects of the system.
> John
> "Skylander" wrote:
|||Hi
Monitoring SQL Server is not really about monitoring the ports; that
tends to imply you have network issues or even a DOS attack!!!!
You can look at what alerts can do by reading books online or starting
at
http://msdn.microsoft.com/library/de...omaem_0uwk.asp
John
|||Hi
You may also find these useful
http://www.databasejournal.com/scrip...le.php/3494931
or SQLPing useful
http://www.sqlsecurity.com/DesktopDefault.aspx?tabid=26
John
John Bell wrote:
> Hi
> Monitoring SQL Server is not really about monitoring the ports, that
> tend to imply you have network issues or even a DOS attack!!!!
> You can look at what alerts can do by reading books online or
starting
> at
>
http://msdn.microsoft.com/library/de...omaem_0uwk.asp
> John


Monitoring SqlServer 2000 performance with Express

Hi,

I'm using Visual Web Developer Express and Management Studio Express, and my web site is on a shared web host running SQL Server 2000. I'm looking for software that enables me to monitor the server, but is it possible? The only apps I've found (and downloaded and installed and uninstalled) so far need administrative rights to the server, so they won't work on a shared web host.

All help would be welcome!

Thanks in advance,

Pettrer

Go to Administrative Tools...Performance.

Right mouse click on the graph and select "Add Counters..."

Drop down the Performance Objects listbox.

There are many performance counters installed with SQL Server, IIS and .NET.

|||

Hello,

This little app seems quite powerful, but it's only for times when one has physical access and the correct permissions to the db server, isn't it? I can only find SQL Server Express in the list of available servers, that is, my own computer's (inactive) web server.

I was thinking more of a third-party purchase, if any of those works in a hosted environment.

Thanks for replying!

Pettrer, Sweden

|||

>> but it's only for times when one has physical access and the correct permissions to the db server, isn't it?

A third party tool will also need permissions. In the Add Counters dialog, there's a pull down to select from other computers. However, I admit I've never tried or done this :)

There must be a way to get it to work. From the help:

Choosing the computer to use for monitoring
When monitoring computers remotely, you have some options for how to collect data. For example, you could run performance logging on the administrator's computer, drawing data continuously from each remote computer. In another case, you could have each computer running the service to collect data and, at regular intervals, run a batch program to transfer the data to the administrator's computer for analysis and archiving.

Centralized data collection (that is, collection on a local computer from remote computers that you are monitoring) is simple to implement because only one logging service is running. You can collect data from multiple systems into a single log file. However, it causes additional network traffic and might be restricted by available memory on the administrator's computer. To do centralized data collection, use the Add Counters dialog box to select a remote computer while running System Monitor on your local computer.

Distributed data collection (that is, data collection that occurs on the remote computers you are monitoring) does not incur the memory and network traffic problems of centralized collection. However, it does result in delayed availability of the data, requiring that the collected data be transferred to the administrator's computer for review. To do distributed data collection, use Computer Management on a local computer to select a remote computer on which to collect data.

When monitoring remote computers, note that the remote computer will only allow access to user accounts that have permission to access it. In order to monitor remote systems from your computer, you must start the Performance Logs and Alerts service using an account that has permission to access the remote computers you want to monitor. By default, the service is started under the local computer's "system" account, which generally has permission only to access services and resources on the local computer. To start this under a different account, use the Run As option when configuring data collection, or use Services under Computer Management and update the properties of the Performance Logs and Alerts service.

|||

Hi again,

Thanks for your replies. I found the option you mentioned but didn't see a "remote" option or the like. In any case, I believe I won't be able to access the server's system folder anyway. I don't know much about this, but it seems as if the majority of these apps dive into the system, which may be a prerequisite, at least for diagnosing internal performance. What I'm looking for is how well the server performs, how long before it jams and so on, and it might not be possible to do it this way.

Pettrer

Wednesday, March 7, 2012

Monitor SQL Statements

I would like to monitor any non stored procedure activity on a sql
server (2000 and 2005). Mainly I need to watch for ad hoc sql from
applications and for people running sql from query analyzer.
I think that Profiler will do this, but I have concerns about how much
overhead that will cause. At times, the server is very busy and I have
SLA in place that would cause me to pay a penalty for slow response
time.
Any suggestions?
Jim
St Louis

First off, you should never use Profiler for traces on busy systems. You can
simply use TRACE and send the results to a binary file on a disk attached
locally to the server. Then you can copy them to another machine for
processing or viewing. The easiest way (if you are not familiar with that)
to do that is to briefly use profiler. Set up the events and columns and
filters exactly how you want it. Then run the trace and immediately hit
stop. Then go to File - Script Trace - For 2000. This will generate a tsql
script that you can use to run a trace without the overhead of profiler.
Follow the instructions in the script to add the filename and adjust the
maxfilesize etc. Look up sp_trace_create in Books Online for more details. If
the sp's are being run as RPC's and not batches you can just look for Batch
Completed events and ignore all the sps.
Andrew J. Kelly SQL MVP
"Jim Youmans" <jdyoumans@.gmail.com> wrote in message
news:1150981788.688853.289590@.u72g2000cwu.googlegroups.com...
>I would like to monitor any non stored procedure activity on a sql
> server (2000 and 2005). Mainly I need to watch for ad hoc sql from
> applications and for people running sql from query analyzer.
> I think that Profiler will do this, but I have concerns about how much
> overhead that will cause. At times, the server is very busy and I have
> SLA in place that would cause me to pay a penalty for slow response
> time.
> Any suggestions?
> Jim
> St Louis

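The RPC-versus-batch distinction also applies when post-processing a trace file: keep only SQL:BatchCompleted events (ad hoc batches) and ignore RPC:Completed events (procedure calls). A rough sketch, assuming the trace rows have already been read into dicts; the row layout here is invented for illustration, though the event class names are real SQL Server trace events:

```python
# Filter trace rows down to ad hoc SQL, ignoring stored-procedure RPCs.
# Row layout is hypothetical; the EventClass names are SQL Server trace events.
rows = [
    {"EventClass": "SQL:BatchCompleted", "TextData": "SELECT * FROM Orders"},
    {"EventClass": "RPC:Completed",      "TextData": "exec usp_GetOrders"},
    {"EventClass": "SQL:BatchCompleted", "TextData": "UPDATE Orders SET ..."},
]

ad_hoc = [r for r in rows if r["EventClass"] == "SQL:BatchCompleted"]
for r in ad_hoc:
    print(r["TextData"])
```

In practice you would read the rows from the trace file copied off the server, but the filtering step is the same.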
Monday, February 20, 2012

MONEY FORMAT presentation

Hi all,
I am trying to locate the SQL Server version of Informix's DBMONEY env variable,
which is used in the Informix env by tools like ISQL and I4GL to format
values stored as type MONEY into a nice presentation pattern. Is there such a
property for SQL Server?
Why am I looking for this? Basically the application that we are building
handles all this presentation stuff at the front-end; the problem is we are trying to
rapid-prototype a number of MIS reports (in SQL Query Analyzer) and the client is
used to seeing the numbers in a pre-formatted pattern, as they currently use
ACCESS and the FORMAT function. My buddy and I are going slowly
blind/mad having to cast/convert values stored as money into strings to get
$'s and commas!
Slainte,
Taggart

Taggart,
DECLARE @m AS DECIMAL(18,3)
SET @m = 100554545.36
SELECT CONVERT(varchar, CAST(@m AS money), 1)
"Taggart" <Taggart@.discussions.microsoft.com> wrote in message
news:11E0F555-2BEF-4424-807C-3D6CEF31F6B7@.microsoft.com...
> Hi all,
> I am tryng to locate the SQLServer vesion of Informix's DBMONEY env
> variable
> which is used in the Informix env by tools like ISQL and I4GL to format
> values stored as type MONEY into a nice presentation pattern. Is there
> such a
> property for SQLServer?
> Why am I looking for this? Basically the application that we are building
> handles all this preso stuff at the front-end, problem is we are trying to
> rapid prototype a number of MIS reports (in SQLAnalyser) and the cllient
> is
> used to seeing the numbers in a pre-formatted pattern as they currently
> use
> ACCESS and the FORMAT function. Myself and my buddy are going slowly
> blind/mad having to cast/convert values stored as money into strings to
> get
> $'s and commas!
> Slainte,
> Taggart
>
>|||Hi Uri,
That's much better, but still missing the $'s. Apart from hardwiring the $
sign and then concatenating this to the result, any ideas?
What I don't get is the need to cast a column that is already set as money
(pity I didn't use decimal!?) to money, maybe we are missing something
fundamental, but isn't this just consuming CPU resources for no apparent reason?
As ex-informix guys we are somewhat puzzled by this and, on a similar point,
all the manipulation one has to undertake for datetime columns when you only
want to use the date portion - presumably something to do with no date
datatype in SQLServer, which is something we find very strange both in the
additional manipulation and also in terms of data storage?
Slainte,
Taggart|||On Sun, 4 Jun 2006 19:53:02 -0700, Taggart wrote:
>Hi Uri,
>That's much better, but still missing the $'s. Apart from hardwiring the $
>sign and then concatanating this to the result, any ideas?
Hi Taggart,
SQL Server is not intended to be used for formatting - that task is
usually handled by the front-end. As a result, SQL Server doesn't have
as much formatting features as some other tools.
If you're sure that your application is only used in dollar-using
countries, hardcoding the $ sign is probably the best solution. If the
app might be used all over the world, you'd be better off letting the
front-end determine the correct currency symbol from the user's locale
settings and append that symbol to the amount.
>What I don't get is the need to cast a column that is already set as money
>(pity I didn't use decimal!?) to money, maybe we are missing something
>fundamental but isn't this just consuming CPU resource for no apparent reason?
There's no need for the extra CAST - Uri used it because the variable he
used in his example was not money. If your column is money, you can
just use
SELECT convert(varchar, Column_Name, 1)
>As ex-informix guys we are somewhat puzzled by this and, on a similar point,
>all the manipulation one has to undertake for datetime columns when you only
>want to use the date portion - presumably something to do with no date
>datatype in SQLServer, which is something we find very strange both in the
>additional manipulation and also in terms of datastorage?
Not having separate date and time datatypes is indeed a pity.
However, there's no need for much manipulation to remove the time
portion from a datetime column. If you need the result as datetime (only
withoout time portion - or rather, with the default midnight time
portion), use
SELECT DATEADD(day, DATEDIFF(day, 0, CURRENT_TIMESTAMP), 0)
And if you need it in character format (for presentation purposes), use
CONVERT with an appropriate style parameter and define the length of the
result such that the time will be cut off, for instance
SELECT CONVERT(char(8), CURRENT_TIMESTAMP, 112)
--
Hugo Kornelis, SQL Server MVP|||Thanks Hugo, your use of CONVERT is much, much neater than my attempt!
"Hugo Kornelis" wrote:
> On Sun, 4 Jun 2006 19:53:02 -0700, Taggart wrote:
> >Hi Uri,
> >
> >That's much better, but still missing the $'s. Apart from hardwiring the $
> >sign and then concatanating this to the result, any ideas?
> Hi Taggart,
> SQL Server is not intended to be used for formatting - that task is
> usually handled by the front-end. As a result, SQL Server doesn't have
> as much formatting features as some other tools.
> If you're sure that your application is only used in dollar-using
> countries, hardcoding the $ sign is probably the best solution. If the
> app might be used all over the world, you'd be better off letting the
> front-end determine the correct currency symbol from the useer's locale
> settings and append that symbol to the amount.
> >What I don't get is the need to cast a column that is already set as money
> >(pity I didn't use decimal!?) to money, maybe we are missing something
> >fundamental but isn't this just consuming CPU resource for no apparent reason?
> There's no need for the extra CAST - Uri used it becuase the variable he
> used in his example was not money. If your column is monmey, you can
> just use
> SELECT convert(varchar, Column_Name, 1)
> >
> >As ex-informix guys we are somewhat puzzled by this and, on a similar point,
> >all the manipulation one has to undertake for datetime columns when you only
> >want to use the date portion - presumably something to do with no date
> >datatype in SQLServer, which is something we find very strange both in the
> >additional manipulation and also in terms of datastorage?
> Not having seperate date and time datatypes is indeed a pity.
> However, there's no need for much manipulation to remove the time
> portion from a datetime column. If you need the result as datetime (only
> withoout time portion - or rather, with the default midnight time
> portion), use
> SELECT DATEADD(day, DATEDIFF(day, 0, CURRENT_TIMESTAMP), 0)
> And if you need it in character format (for presentation purposes), use
> CONVERT with an appropriate style parameter and define the length of the
> result such that the time will be cut off, for instance
> SELECT CONVERT(char(8), CURRENT_TIMESTAMP, 112)
> --
> Hugo Kornelis, SQL Server MVP
>
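For comparison, the presentation work Hugo suggests pushing to the front-end is a one-liner in most client languages. A Python sketch of the dollar/comma formatting and the date-only truncation discussed in the thread (function names are mine, for illustration):

```python
from datetime import datetime

def format_money(value: float) -> str:
    """Comma-grouped, two decimals, hardwired $ sign -
    roughly CONVERT(varchar, ..., 1) plus the currency symbol."""
    return f"${value:,.2f}"

def date_only(ts: datetime) -> str:
    """Equivalent of CONVERT(char(8), ts, 112): yyyymmdd, time cut off."""
    return ts.strftime("%Y%m%d")

print(format_money(100554545.36))               # $100,554,545.36
print(date_only(datetime(2006, 6, 4, 19, 53)))  # 20060604
```

In a real app the currency symbol would come from the user's locale settings rather than being hardwired, exactly as Hugo advises.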