
Friday, March 30, 2012

More Questions....Relationships

OK, here's the scenario:

3 tables

Movie - MovieID, MovieName, Starring1, Starring2, Director

Director - MovieID, DirectorID, DirectorName

Actors - MovieID, ActorName

The table names are in bold; the primary keys (PKs) are in bold italic.

I've tried to create a relationship between:

1. Movie.MovieID to Director.MovieID (Movie is related to Director)

2. Actors.MovieID to Movie.MovieID (Actor is related to Movie)

The second relationship failed for some reason.

I'm trying to achieve this result:

Movie = Pulp Fiction

Director = Quentin Tan

Actor = Sam L Jackson

once I do reports, etc. What am I doing wrong here? I think it's the relationships I'm having difficulty with.

Thank you in advance

try:

SELECT movieName,
       DirectorName,
       actorName
FROM movie m
JOIN director d
    ON m.movieId = d.movieId
   AND m.movieName = 'Pulp Fiction'
   AND d.directorName = 'Quentin Tan'
JOIN actor a
    ON m.movieId = a.movieId
   AND a.actorName = 'Sam L Jackson'

an alternative is:

SELECT movieName,
       DirectorName,
       actorName
FROM movie m
JOIN director d
    ON m.movieId = d.movieId
JOIN actor a
    ON m.movieId = a.movieId
WHERE m.movieName = 'Pulp Fiction'
  AND d.directorName = 'Quentin Tan'
  AND a.actorName = 'Sam L Jackson'

|||Thanks, but am I on the right lines in creating these relationships the way I described, or would you do anything differently?|||

Assuming that a movie has only one director, something like this:

Director -- DirectorID (PK), DirectorName

Actor -- ActorID (PK), ActorName

Movie -- MovieID (PK), MovieName, DirectorID

Cast -- MovieID, Order, Part, ActorID (PK: MovieID, ActorID, Part)

The "Cast" table is necessary so that you can list all of the actors of the movie. I added "Order" so that you can control the "Promotion Order" or "Billing Order" or whatever you want to call it: the order in which the actors are listed in the credits.

I added in "Part" for movies in which somebody like Eddie Murphy might play multiple parts.

From my perspective the biggest problem here is the "Cast" table as I have made it. I still don't feel that it is general enough. For instance, say that you want to use this as the basis for listing movie credits. You want to include Producers, stunt coordinators, etc. It would be best to generalize the "Cast" table into a "Role" table and along the way establish "Subcategories" of the "Role" table. But now we are getting ahead of ourselves. This should get you started.
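In T-SQL, that design might be sketched like this (column types and sizes are my assumptions, and Cast and Order need brackets because they are reserved words):

```sql
-- Hypothetical DDL for the schema above; types and sizes are guesses.
CREATE TABLE Director (
    DirectorID   int PRIMARY KEY,
    DirectorName varchar(100) NOT NULL
);

CREATE TABLE Actor (
    ActorID   int PRIMARY KEY,
    ActorName varchar(100) NOT NULL
);

CREATE TABLE Movie (
    MovieID    int PRIMARY KEY,
    MovieName  varchar(200) NOT NULL,
    DirectorID int NOT NULL REFERENCES Director (DirectorID)
);

-- CAST and ORDER are reserved words, hence the brackets.
CREATE TABLE [Cast] (
    MovieID int NOT NULL REFERENCES Movie (MovieID),
    ActorID int NOT NULL REFERENCES Actor (ActorID),
    Part    varchar(100) NOT NULL,
    [Order] int NOT NULL,
    PRIMARY KEY (MovieID, ActorID, Part)
);
```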

More questions about porting from MySQL to MS SQL

1) In several tables, in my MySQL version, I created columns using
something like the following:

`ab_timestamp` timestamp NOT NULL default CURRENT_TIMESTAMP on update
CURRENT_TIMESTAMP,

This allowed me to ensure that when a record is either added or edited,
the value in the field is set to the current date and time. I.E.,
ab_timestamp is given the current date and time when a record is
created, and then it is updated to the date and time at which the
record is updated. I learned the hard way that MS SQL does not like
"on update CURRENT_TIMESTAMP". So, it looks like MS SQL will allow me
to initialize ab_timestamp to the current date and time, but not
automatically update it to the date and time at which the record is
updated. I have plenty of code to port that depends on the behaviour
supported by MySQL. Do I have to modify all that code, or is there a
way to get MS SQL to provide it? (Yes, I know 'timestamp' is
deprecated in MS SQL and that I should use datetime instead, and in
fact have already done so.)

2) I began with a single SQL script that creates all the tables, views,
functions and triggers the database needs. On trying to get MS SQL to
accept it, I encountered a number of error messages saying that CREATE
FUNCTION and CREATE VIEW need to be the first statement in a script.
Why? I know I can work around this odd constraint by putting each
function and view (and IIRC trigger) into its own script, but that
seems like a make-work effort imposed for some unknown reason by MS
SQL, unless there is another way to get around it.

3) I see, in the documentation for CREATE FUNCTION, functions are not
allowed to use a timestamp for either a parameter or a return value.
This is in reference to a pair of scalar functions I am using which
need to manipulate date and time values. For the purpose of
clarification, is this documentation referring to all date/time data
types, or only the deprecated timestamp type? As examples, consider
one function that needs to return the most recent date in a date column
in a specific table, or another function that computes a date from a
date and an offset (e.g. if called with the value returned by the first
function as the first argument and '-7' as the second, returns the date
of the day that is a week earlier than that date). These two functions
are frequently used in the SQL code I'm trying to port and I really
don't want to complicate so many of those statements if I don't have
to.

Thanks


1) You can use an UPDATE trigger to simulate the same functionality.
Usually though it is better to use stored procedures to perform all
your data access. That way you can easily include the timestamp as part
of your update procs.
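As a rough sketch of that trigger approach (tbl, its key column id, and ab_timestamp are placeholder names; this assumes recursive triggers are off, which is the default):

```sql
-- Re-stamp ab_timestamp whenever a row in tbl is updated.
CREATE TRIGGER trg_tbl_touch ON tbl
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET ab_timestamp = CURRENT_TIMESTAMP
    FROM tbl AS t
    JOIN inserted AS i ON i.id = t.id;  -- only the rows just updated
END
```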

2) You don't need separate scripts for each View / Function. You do
need separate batches. A batch is separated using the GO keyword in
Query Analyzer.
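For example, one script file can create several views as long as each CREATE VIEW starts its own batch:

```sql
-- GO makes each CREATE VIEW the first statement of its batch.
CREATE VIEW v1 AS SELECT 1 AS n;
GO
CREATE VIEW v2 AS SELECT 2 AS n;
GO
```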

3) Don't confuse TIMESTAMP with DATETIME. They are not at all the same!
You are referring to DATETIME values, which ARE permitted as parameters
and return values in functions. BTW, you don't need to write a function
to do date arithmetic - it already exists as a built-in function:
DATEADD().
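For instance, the "a week earlier" function from the question collapses to something like this (table and column names are made up):

```sql
-- Most recent date in the column, minus 7 days.
SELECT DATEADD(day, -7, MAX(OrderDate)) AS WeekBefore
FROM Orders;
```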

--
David Portas, SQL Server MVP

Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.

SQL Server Books Online:
http://msdn2.microsoft.com/library/...US,SQL.90).aspx
--|||Ted (r.ted.byers@.rogers.com) writes:

Quote:

Originally Posted by

1) In several tables, in my MySQL version, I created columns using
something like the following:
`ab_timestamp` timestamp NOT NULL default CURRENT_TIMESTAMP on update
CURRENT_TIMESTAMP,
This allowed me to ensure that when a record is either added or edited,
the value in the field is set to the current date and time. I.E.,
ab_timestamp is given the current date and time when a record is
created, and then it is updated to the date and time at which the
record is updated. I learned the hard way that MS SQL does not like
"on update CURRENT_TIMESTAMP". So, it looks like MS SQL will allow me
to initialize ab_timestamp to the current date and time, but not
automatically update it to the date and time at which the record is
updated. I have plenty of code to port that depends on the behaviour
supported by MySQL. DO I have to modify all that code, or is there a
way to get MS SQL to provide it?


You will have to modify all that code. Having to be portable is indeed
painful, and a good start is to pay attention to what is in the ANSI
standard and what is commonly supported, and not least to which features
are extensions added by a particular vendor. My knowledge of ANSI and
other engines is poor (since I live in a sheltered world where I only
need to support SQL Server), but I would suspect that the ON UPDATE
clause for the default constraint is proprietary to MySQL.

The way to do this in SQL Server is to use a trigger. Not that triggers
usually are very portable...

You can also modify the UPDATE statements so that they read:

UPDATE tbl
SET ...,
    ab_timestamp = DEFAULT
WHERE ...

I believe this syntax is portable.

Quote:

Originally Posted by

(Yes, I know 'timestamp' is deprecated in MS SQL and that I should use
datetime instead, and in fact have already done so.)


timestamp is not deprecated in SQL Server, but it's a completely different
data type, used to implement optimistic locking. A timestamp is an
8-byte value that is updated each time the row is updated, and it's
unique within the database. Furthermore, timestamp values are monotonically
increasing. But there is no correlation with time. timestamp is
proprietary to SQL Server, so you should probably stay away from it
entirely.
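For completeness, a sketch of the optimistic-locking pattern the timestamp type is meant for (table, column, and variable names are invented):

```sql
CREATE TABLE dbo.Account (
    AccountID int PRIMARY KEY,
    Balance   money NOT NULL,
    RowVer    timestamp       -- changes automatically on every write
);

-- Read the row and remember RowVer, then update only if it is unchanged;
-- 0 rows affected means someone else modified the row in between.
UPDATE dbo.Account
SET Balance = 100
WHERE AccountID = 1
  AND RowVer = @rememberedRowVer;
```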

Quote:

Originally Posted by

2) I began with a single SQL script that creates all the tables, views,
functions and triggers the database needs. On trying to get MS SQL to
accept it, I encountered a number of error messages saying that CREATE
FUNCTION and CREATE VIEW need to be the first statement in a script.
Why? I know I can work around this odd constraint by putting each
function and view (and IIRC trigger) into its own script, but that
seems like a make work effort imposed for some unknown reason by MS
SQL, unless there is another way to get around it.


Not a separate script, but a separate batch. Batches are separated with
"go" in all query tools. The graphic tools permit you to specify a
different batch separator.

Why? Because otherwise it would be difficult to tell where a procedure ends.
Say that you have:

CREATE PROCEDURE ... AS
...
CREATE TABLE ...

Is that CREATE TABLE part of the procedure or not? (Yes, with BEGIN/END
it's clear, but for legacy reasons BEGIN/END is not required in
stored procedures.)

Quote:

Originally Posted by

3) I see, in the documentation for CREATE FUNCTION, functions are not
allowed to use a timestamp for either a parameter or a return value.


As noted above, there is little reason for you to use the timestamp
data type.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

More on Full Text Searching across multiple tables

Imagine you're building a search for a movie database in which you want to index actors, quotes from scenes, and movie names into a single search. How do you accomplish this using full-text search? The closest I can think of is the following (for a sample search for "Walken"):

select RANK, actorId as Id from FREETEXTTABLE( actors, *, 'ISABOUT (+ WALKEN + WEIGHT(1.0))') a JOIN actors b on a.[key] = b.actorid
UNION ALL
select RANK, sceneId as Id from FREETEXTTABLE( scenes, *, 'ISABOUT (+ WALKEN + WEIGHT(1.0))') a JOIN scenes b on a.[key] = b.sceneId
UNION ALL
select RANK, movieId as Id from FREETEXTTABLE( movie, *, 'ISABOUT (+ WALKEN + WEIGHT(1.0))') a JOIN movie b on a.[key] = b.movieId

But it doesn't rank correctly. Suggestions?

How do you want the ranking to work?

What do you want to return? Have you tried

select RANK, actorId as Id from CONTAINSTABLE( actors, *, 'WALKEN') a JOIN actors b on a.[key] = b.actorid
UNION ALL
select RANK, sceneId as Id from CONTAINSTABLE( scenes, *, 'WALKEN') a JOIN scenes b on a.[key] = b.sceneId
UNION ALL
select RANK, movieId as Id from CONTAINSTABLE( movie, *, 'WALKEN') a JOIN movie b on a.[key] = b.movieId

The other thing to note is that RANK is not a fixed number; it is relative to a number of factors, largely the batch of data a record was indexed with. So if you have records being indexed in different batches you can end up with odd ranking.

Bottom line: a rank from tableA can't be compared with a rank from tableB.

|||Ideally, I'd like to return the item(s) in the order that they match the query string. This means if the scene is a closer match than the movie, the scene is returned first. Each should have its id returned, from which I can then construct a result.|||

Unfortunately you can't use the rank from one index to compare with a rank from another index.

Have a look in BOL under the heading "Understanding Ranking".

|||Right; what I'm looking for is a solution that will work across multiple tables. Any suggestions?

More on dependencies

It's not possible to drop tables which violate FOREIGN KEY
constraints. SQL Server, however, is completely blase about dropping
tables that are referenced by either sprocs or views.

Short of writing unit tests for every stored procedure and view, are
there any tools to test integrity? I suspect that someone will ask if
I've considered the "Tools -> Check Integrity" option. So where is it?

|||Have you considered building your databases from source code? This will find
all breakages.

Check out www.dbghost.com

|||kofisarfo@.gmail.com (Kofi) wrote in message news:<5c157557.0405130510.4c98fce@.posting.google.com>...

Have a look at sp_depends, which uses the sysdepends system table.
However, dependency metadata in MSSQL isn't completely reliable, due
to issues like deferred name resolution (you can create a proc which
references a table you haven't created yet) and dynamic SQL (the
table name may not even be in the procedure). And indeed there may be
SQL code hidden in client applications as well.

You can create views WITH SCHEMABINDING to prevent a table referenced
by the view from being dropped, but there's no equivalent for stored
procedures.

Finally, it could be argued that creating unit tests for all your code
is a good idea anyway.

Simon

|||I thought of that only moments after posting, and that's exactly what I
did in fact do. I found out about DBGhost in the process, though, so it
was still worth asking.

KS
http://www.single-blend.net/dotnet/

|||Indeed. Just when I thought I'd nailed all dependencies I found some
implicit dependencies in EXEC statements run within some stored
procedures. They won't be picked up either.

"sp_depends" ain't all that. There's "sp_MSdependencies", which goes
beyond first-level dependencies.
http://www.microsoft.com/sql/techin...splayingdependencies.asp

Unit tests would be ideal, but there are just way too many units and I
almost certainly wouldn't be able to get the go-ahead.

Cheers
KS
http://www.single-blend.net/dotnet/
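A minimal illustration of the SCHEMABINDING point (object names are invented): once the bound view exists, the DROP TABLE fails instead of silently succeeding.

```sql
CREATE TABLE dbo.t (id int NOT NULL);
GO
-- Schemabound views require two-part names and no SELECT *.
CREATE VIEW dbo.v WITH SCHEMABINDING
AS SELECT id FROM dbo.t;
GO
DROP TABLE dbo.t;   -- error: 't' is referenced by view 'v'
```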




More not in issue

Hi,

I have three tables. One is the client information table (T1), which
contains a unique id for the client. The second is the client address
table (T2), which contains different types of address for the client
(e.g. business address, home address); the address type is identified by
addtype_c, and it is linked to T1 via clientid. The third table (T3)
contains the media requests from the clients and the address (add_c)
they want the media sent to; of course, it also has a clientid field.

I need to generate the mailing label for the client according to the
address that the client wants the materials sent to. However, how can I
create a query to alert me when the address type that the client wants
the materials sent to is not in the address table?
Thanks in advance.

|||One option: change your business logic to require a ship-to address when the
client is filling in this information. Or add a WHERE clause to your query
to exclude NULL addresses, or, if you use a DEFAULT, to exclude addresses
equal to the DEFAULT.

HTH
Jerry
"qjlee" <qjlee@.discussions.microsoft.com> wrote in message news:421420FC-090A-499F-9715-32A63D9A2930@.microsoft.com...

|||On Fri, 30 Sep 2005 07:16:04 -0700, qjlee wrote:
Hi qjlee,
Since you didn't post DDL and sample data (www.aspfaq.com/5006), I'll
have to make some wild assumptions about your tables.
SELECT T3.ClientID, T3.add_c
FROM T3
WHERE NOT EXISTS
      (SELECT *
       FROM T2
       WHERE T2.ClientID = T3.ClientID
         AND T2.AddressType = T3.AddressType)

or

SELECT T3.ClientID, T3.add_c
FROM T3
LEFT JOIN T2
       ON T2.ClientID = T3.ClientID
      AND T2.AddressType = T3.AddressType
WHERE T2.ClientID IS NULL
Best, Hugo

More Newbie Questions

Two questions --

Why is it that some of my tables are not updateable, and some are??

And I'm told that there is no "memo"-type field for SQL Server, and 8000
characters is the limit unless I go to blob or binary. What does that
entail, and what are the consequences?

Thanks!

JA> Why is it that some of my tables are not updateable, and some are??

There could be more than one reason depending on how you are doing the
updating and what you mean by "not updateable". I'll take a guess that you
are using a UI such as Enterprise Manager or Access that allows you to edit
a table. In that case you won't be able to make changes unless your table
has a unique key (constraint or index). Every table should always have a
unique key; otherwise it isn't possible to guarantee that individual rows can
be updated. The solution is to add a primary key constraint to the table.
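For example, assuming a table that currently has no key (names are illustrative; the key column must be NOT NULL):

```sql
ALTER TABLE dbo.MyTable
ADD CONSTRAINT PK_MyTable PRIMARY KEY (MyTableID);
```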

If I'm wrong then please explain how you are attempting to update the table
and what happens when you try (do you get an error message for example).

> And I'm told that there is no "memo" type field for SQLServer, and 8000
> characters is the limit. Unless I go to blob or binary. What does that
> entail, and what are the consequences?

VARCHAR columns are limited to 8000 characters. For text up to 2GB in size
you can use the TEXT or NTEXT datatype. TEXT isn't as easy to use in your
SQL code as VARCHAR is and there are some limitations on what you can and
cannot do - for example some of the string functions won't work with TEXT
values. Lookup the TEXT datatype in Books Online.

--
David Portas
SQL Server MVP
--|||David,

You're right, I'm using Access 2000. I made a table with a primary key and
had no problem adding data.

I looked up TEXT and NTEXT on Books Online. I don't guess I know enough to
see the problems. The field is for product descriptions that are over 8000
characters (quite a few are). They have HTML tags in them, and are displayed
on an asp webpage. Would there be any problems with that?

Thanks, JA


Wednesday, March 28, 2012

More Info:

We have set up replication in SQL 2000.

We have a DTS package automatically pouring data into the publishing database (source tables). During this process, we want to temporarily disable certain triggers. However, the command

ALTER TABLE tbl DISABLE TRIGGER abc

errored out. The error message said:

"Cannot alter the table 'tbl' because it is being published for replication."

I've dug more into this and found that although it's not allowed to disable a trigger, SQL Server does allow deleting the trigger and recreating it.

Is there any way to disable the trigger directly?

Thanks in advance,

Don

BTW:

I've run the following SQL directly; however, the trigger still fires.

UPDATE sysobjects
SET status = status | 2048
WHERE type = 'TR'
  AND parent_obj = OBJECT_ID(@.table_name)

The only other way around it now is to create stored procedures that dynamically create the trigger. Because our triggers are normally larger than 8000 bytes, we have to create one stored procedure per trigger. This option is not acceptable because not only does it take quite some time, it is also a maintenance nightmare.

More Info: We are using transactional publication.

And trying the SQL command in Query Analyzer quickly gives us an error on a published article table.

alter table tbl disable trigger abc

|||Have you tried adding NOT FOR REPLICATION in your trigger?|||

We did try adding NOT FOR REPLICATION in my trigger, and it does not work.

This issue is about loading data for the master table.

Besides, we did not publish the triggers.

|||Disabling the trigger is not the issue. An ALTER TABLE command of ANY kind is not supported in 2000 against a replicated table. The only option in this case is to drop the trigger and recreate it, which is allowed in 2000 against a replicated table.|||
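The drop/recreate pattern looks roughly like this sketch (tbl, abc, and the trigger body are placeholders):

```sql
DROP TRIGGER abc
GO
-- ... run the DTS load against tbl here ...
GO
-- CREATE TRIGGER must be the first statement in its batch.
CREATE TRIGGER abc ON tbl
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- original trigger logic goes here
END
GO
```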

We kown this and we've found a better way to create the triggers. Nobody like this approach through.

If dropping trigger is allowed, there's no reason to not allow disable trigger.

It looks to me it's kind of lack of insight of microsoft. they have sp_repladdcolumn...

they should have a sp_replenabletrigger...

If you look at me post, I tried to modify sysobject tables trigger flag to change it directly.

However, it does not work, so my guess is Microsoft has more tables involved in this.

I was expecting somebody from Microsoft to give me a hint or an sp_replenabletrigger.

|||

sp_repladdcolumn and sp_repldropcolumn were added as very specific patches to very specific problems. The issue was quite simply not having enough time to redesign a major portion of the replication engine to allow an ALTER TABLE statement. There are no hints, no work arounds, and no bypasses. There isn't any code like you're looking for either. The reason for that is quite simple. For the last 6 or so years, all development efforts were focused on SQL Server 2005, so if SQL Server 2005 already had a feature that would address and eliminate this entire issue, why waste the time and resources to graft this into SQL Server 2000?

I saw the message in the post. You are more than welcome to hack the system tables if you choose to. I'm certainly not going to hand anyone code to do so. Hacking the system tables is completely unsupported. If you blow up the system, you will not get any support from Microsoft in fixing it.

You have exactly two options:

1. Stay with SQL Server 2000 and write the code as a drop/create trigger

2. Upgrade to SQL Server 2005 and use an ALTER TABLE to disable the trigger
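For completeness, option 2 is a one-liner in SQL Server 2005; the names follow the example earlier in the thread:

```sql
-- SQL Server 2005 and later: disable and re-enable without dropping
ALTER TABLE tbl DISABLE TRIGGER abc
-- ... load the data ...
ALTER TABLE tbl ENABLE TRIGGER abc
```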

|||

Thanks, Michael.

That's a clear message.

We do encounter other problems with recreating triggers if we use cmdshell to create the trigger, though.

For example, sp-a is the one to create the trigger.

if sp-b calls sp-a,

sp-b does this:

Begin transaction

exec sp-a

Commit Transaction

calling sp-b will get stuck in the command shell-out. That's another unpleasant finding from yesterday.

Every developer wants to upgrade to SQL 2005. Unfortunately, upgrading to SQL 2005 requires massive re-testing of all existing applications.

This could be a valid reason, but it's not enough to justify the move to SQL 2005.

I have to consider solutions other than replication now, since we are also tight on schedule.

Thanks again for the response.

Don


more empty fields

I will explain the problem as best I can (bad English, sorry).
There are 2 tables in the database; there is 1 field with the same name in both tables.
The report shows information based on this common field, which I use to filter the data.
As soon as the common field is not yet in the second table, because the user has not yet placed data there with the common field, the report goes totally blank.

I hope this explanation helps; this problem has bugged me for some time, and I have to make more reports based on this database structure.

Thanks

You need a left outer join from the first table to the second one.
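A sketch of that join, with made-up table and column names; rows from the first table survive even when the second table has no match yet:

```sql
SELECT t1.CommonField, t1.SomeColumn, t2.OtherColumn
FROM Table1 AS t1
LEFT OUTER JOIN Table2 AS t2
    ON t1.CommonField = t2.CommonField
-- When Table2 has no row for CommonField, t2.OtherColumn simply comes back
-- NULL instead of the whole row (and the report) disappearing.
```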

Monday, March 26, 2012

More DTS Help

Hello,
I'm using DTS to load two tables from a passed CSV file. I can do the transformations from the file to one table, but not to two. What I do is check whether an email is valid; if it is, I want to put it in one table, and if it isn't, in another. Here is the code I'm using for validating:

Function Main()
'esemail is the validation function
if EsEmail(DTSSource("email")) then
DTSDestination("email") = DTSSource("email")
else
'Here i need HELP!!!!!
end if

Main = DTSTransformStat_OK
End Function

So what should I put in the else statement to transfer the email to another destination table?

Thanks,
A.

Hello,

I think you are working with the DataPumpTask, aren't you?
If I'm right, then you have to change the destination table in this task when going through the ELSE-path.

For details how to change the properties of a DataPumpTask refer to http://www.sqldts.com/default.aspx?213.

Hope this helps; otherwise give me a hint!
Carsten

Originally posted by ArgenSQL
Hello,
I'm using DTS to load two tables from a passed CSV file. I can do the transformations from the file to one table, but not to two. What I do is check whether an email is valid; if it is, I want to put it in one table, and if it isn't, in another. Here is the code I'm using for validating:

Function Main()
'esemail is the validation function
if EsEmail(DTSSource("email")) then
DTSDestination("email") = DTSSource("email")
else
'Here i need HELP!!!!!
end if

Main = DTSTransformStat_OK
End Function

So what should I put in the else statement to transfer the email to another destination table?

Thanks,
A.|||You might also find it convenient to make two tasks:
one for valid emails and the other for invalid ones.

Good luck|||Thanks CarstenK, I read that sample; it helped a lot, but I'm still having a problem: I'm getting an error.
This is the code I'm using:

Function Main()
Dim oPkg, oDataPump

Set oPkg = DTSGlobalVariables.Parent
Set oDataPump = oPkg.Tasks("DTSStep_DTSDataPumpTask_1").CustomTask

if EsEmail(DTSSource("email")) then

oDataPump.DestinationObjectName = "dbo.ValidEmail"
DTSDestination("email") = DTSSource("email")

else

oDataPump.DestinationObjectName = "dbo.NonValidEmail"
DTSDestination("email") = DTSSource("email")

end if

Set oDataPump = Nothing
Set oPkg = Nothing

Main = DTSTransformStat_OK
End Function

but I'm getting this error "Task 'DTSStep_DTSDataPumpTask_1' not found"

In the link you gave me they explain how to get the name, I did it but still the same error, and I'm sure the name is correct (copy & paste).

Maybe I'm doing something wrong. I'm using this ActiveX code in the "Transform Data Task Properties", "Transformations" tab, in the arrow connecting the source column with the destination column. I think that is where the script should go...
Well... if anyone can help me with this, it would be great!

fhnth: making two tasks would not be practical because I would have to go through the table twice.

Bye,
A.|||Originally posted by ArgenSQL
Hello,
I'm using DTS to load two tables from a passed CSV file. I can do the transformations from the file to one table, but not to two. What I do is check whether an email is valid; if it is, I want to put it in one table, and if it isn't, in another. Here is the code I'm using for validating:

Could you import to a staging table and use T-SQL to move the records into the correct destination table?|||Originally posted by RogerWilco
Could you import to a staging table and use T-SQL to move the records into the correct destination table?

I could, but I don't want to do that. What I want is a DTS package that does all the work in one (or a few) steps. I hope someone with experience in that can help me...
DTS has all the functionality in the world; I want to learn how to use it so I can do things in the best way, using all the technology I have access to.

Bye,
A.|||Hello !

After reading your reply a third time, I think I do know your problem. You said
Maybe I'm doing something wrong, I'm using this ActiveX code in the "Transform Data Task Properties", "transformations" tab, in the arrow connecting the source column with the destination column.

I think you should create an ActiveX task that contains your code, runs before the actual data import, and adjusts the properties.

I hope that brings you a step further!

Greetings,
Carsten|||I am not sure whether this can be achieved in a single DTS step (one source (a .csv file) and 2 destinations (2 diff. tables).

You can make a 2 step DTS package>
1). 1st Step:
source : .csv file
dest : 1st table
ActiveX :
If email1 = email2
Main = DTSTransformStat_OK
else
Main = DTSTransformStat_SkipRow
end if

2). 2nd Step:
source : .csv file
dest : 2nd table
ActiveX :
If email1 = email2
Main = DTSTransformStat_SkipRow
else
Main = DTSTransformStat_OK
end if|||Hi!

My thoughts are going this way:

Create an ActiveX task which determines whether an email is valid or not, and then adjusts the appropriate destination for the following import task.
By going this way, you can avoid processing an email twice!

Carsten|||Originally posted by CarstenK
Create an ActiveX-Task which determines whether an email is valid or not and then adjusts the appropriate destination for the following import task.

I can't make it work. I tried every way and am still getting different errors. If I put the ActiveX code before the data pump, I don't get an error detecting the task name, but I get an error because I ask for the email in the DTSSource, and that isn't available until I get to the data part. So I can assign a different destination table, but not driven by the email type. I can't find a solution for this; I searched the web a lot but couldn't find a way without passing through the table twice, so I'm doing that. I don't like it, but I have to finish this anyway. If anyone finds a way to do this, please tell me.

Bye, thanks all for your help.
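For reference, the staging-table route suggested earlier in the thread could be sketched in T-SQL; the table names are invented, and the LIKE test is only a crude stand-in for the real EsEmail validation:

```sql
-- DTS loads everything into dbo.EmailStaging first, then:
INSERT INTO dbo.ValidEmail (email)
SELECT email FROM dbo.EmailStaging
WHERE email LIKE '%_@_%._%'

INSERT INTO dbo.NonValidEmail (email)
SELECT email FROM dbo.EmailStaging
WHERE email NOT LIKE '%_@_%._%' OR email IS NULL
```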

Friday, March 23, 2012

monthly database merging

Hi,

I have to set up a project in SQLServer Express that will allow the export of tables from multiple PC's running SQLServer Express each month.

These have to be loaded into a single identical database on SQLServer Express each month for combined reporting.

so it's basically:

insert update on PC

export

import, overlaying last month's data (handle dup keys from the other PCs, etc.)

report

I've had a look at the SQLServer replication docs and got confused....

So if anyone can point me at appropriate documentation, or suggest a good method for this, it would be appreciated.

Thanks

SQL Server 2005 Express to SQL Server 2005 Express does NOT support replication. However, there is an Import/Export tool (DTSWizard.exe) included with the Express Toolkit. (Note: requires SQL Express SP1 or higher.)

Download the Express Toolkit (and SQL Express SP2) here: http://msdn.microsoft.com/vstudio/express/sql/download/

After installing the toolkit, the Import/Export tool (called DTSWizard.exe) is located at:

"C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTSWizard.exe".

Add a shortcut in SSMSE: [Tools], [External Tools]; name it and add the shortcut by linking to the DTSWizard.exe file above.

In order to Schedule the process, you may have to employ the Windows Scheduler and add a Task that will run DTSRun.exe.

I recommend creating 'staging' tables, identical schemas to the actual tables. Load your data into the staging tables, then you can move the conforming data to the actual tables, leaving behind the non-conforming data for 'clean-up'.
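A hedged sketch of that staging step; all table and column names here are invented, and the dup-key handling assumes a single ID key column:

```sql
-- Overwrite rows that already exist in the target (dup keys from other PCs)...
UPDATE t
SET    t.Col1 = s.Col1,
       t.Col2 = s.Col2
FROM   dbo.ActualTable AS t
JOIN   dbo.StagingTable AS s ON s.ID = t.ID

-- ...then insert the rows that are new this month.
INSERT INTO dbo.ActualTable (ID, Col1, Col2)
SELECT s.ID, s.Col1, s.Col2
FROM   dbo.StagingTable AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.ActualTable AS t WHERE t.ID = s.ID)
```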

|||

Thanks

Peter

monthly database merge

Hi,

I have to set up a project in SQLServer Express that will allow the export of tables from multiple PC's running SQLServer Express each month.

These have to be loaded into a single identical database on SQLServer Express each month for combined reporting.

so it's basically:

insert update on PC

export

import, overlaying last month's data (handle dup keys from the other PCs, etc.)

report

I've had a look at the SQLServer replication docs and got confused....

Can I do this with replication ?

So if anyone can point me at appropriate documentation, or suggest a good method for this, it would be appreciated.

Thanks

Yes, it's possible to do it with replication.

Look up the topic on merge replication on MSDN.

Also check out the following links for some additional info:

http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=1527294&SiteID=17

http://msdn2.microsoft.com/en-us/library/ms151819.aspx

Gary

|||

Hello Peter,

Express Edition is not able to be a Publisher. It's capable of being a Subscriber only.

Check header "Integration and Interoperability" from the following link:

Features Comparison:

http://www.microsoft.com/sql/prodinfo/features/compare-features.mspx

and these links to learn more about SQL Server Replication from BOL:

Replication Basics for Express Edition:

http://msdn2.microsoft.com/en-us/library/ms165700.aspx

SQL Server Replication:

http://msdn2.microsoft.com/en-us/library/ms151198.aspx

Ekrem ?nsoy

|||

Unfortunately, that was the documentation that confused me. Is there something a little more practical and less theoretical around?

|||

The remote PCs in my system are self-sufficient (i.e., they don't need to receive data back from the merge) and are not physically connected to the machine that aggregates the data.

The documentation seems to imply that although I can set up a copy of SQL Server Express to merge the data received on CD from the remote sites, I cannot set up the SQL Server Express remote sites to generate the replication data to the CDs.

Do you agree with that statement, from your experience?

Wednesday, March 21, 2012

monitoring tables access

I have a SQL Server 2000 in a Windows 2003 domain.
Third-party software connects to this SQL Server, makes queries, and writes
records.
How can I keep track of this software's activity on the SQL tables?
Can I see which queries the software makes on the SQL Server?
Many thanks
> Can I see which query the software make on the sql?
SQL Profiler is your friend. Do please check it in Books Online.
Dejan Sarka, SQL Server MVP
Mentor, www.SolidQualityLearning.com
Anything written in this message represents solely the point of view of the
sender.
This message does not imply endorsement from Solid Quality Learning, and it
does not represent the point of view of Solid Quality Learning or any other
person, company or institution mentioned in this message
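If the Profiler trace is saved to a file, it can also be queried from T-SQL; the file path below is just an example:

```sql
-- SQL Server 2000 requires the :: prefix on the function name
SELECT TextData, ApplicationName, LoginName, StartTime
FROM ::fn_trace_gettable('C:\traces\app_activity.trc', DEFAULT)
WHERE TextData IS NOT NULL
```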


Friday, March 9, 2012

monitoring changes on a view

I have a number of tables, each with many fields. I put together
information via a view onto the relevant tables and fields. Now I'd like
to monitor whether data in my view has been updated or inserted. I don't care
about the fields not included in the view. If something changes, the ID
of the corresponding dataset should be inserted into a log table.
I have no idea how to handle this with a trigger. If I put a trigger on
the base tables, it will fire on every change. On the other hand, I can't
put a trigger on the view. Any ideas?
Thanks in advance
Michael
Example:
Table1
a1
a2
a3
a4
Table2
b1
b2
b3
b4
Table3
c1
c2
c3
c4
c5
View combines
a1, a2, b1, c4, c5

"Michael Schroeder" wrote:

> I have a number of tables, each with many fields. I put together
> On the other hand i can't
> put a trigger on the view. Any ideas?
>
Do you mean that certain business rules prevent you from putting a trigger
on the view? You can put an INSTEAD OF trigger on the view, and put logic
into the trigger to update the underlying base tables, as well as put an
entry into an audit log.|||I don't know what type of events you want to audit, but you can
create insert, update, or delete triggers on the base tables. Whether or
not this presents a problem depends on how complex the logic of the trigger
is. Below is an example of a simple, low-cost implementation of an
auditing trigger that I have used in the past. In this case, the table
EmployeeAudit has the same column layout as Employee but with the addition
of columns named AuditDate and AuditType (delete or insert). An update is a
delete immediately followed by an insert.
CREATE TRIGGER trgEmployeeAudit ON Employee -- trigger name must differ from the audit table's
AFTER INSERT, UPDATE, DELETE
AS
INSERT INTO EmployeeAudit SELECT GETDATE(), 'D', * FROM deleted
INSERT INTO EmployeeAudit SELECT GETDATE(), 'I', * FROM inserted
GO
"Michael Schroeder" <schroeder@.idicos.[germany]> wrote in message
news:%23zqtnQ1JGHA.1544@.TK2MSFTNGP11.phx.gbl...
>I have a number of tables, each with many fields. I put together
>information by a view onto the relevant tables and fields. Now I'd like to
>monitor if data in my view has been updated or inserted. I don't care about
>the fields not included in the view. If something changes, the ID of the
>corresponding dataset should be inserted into a log table.
> I have no idea how to handle this with a trigger. If I put a trigger on
> the basetables, it will fire on every change. On the other hand i can't
> put a trigger on the view. Any ideas?
> Thanks in advance
> Michael
> --
> Example:
> Table1
> a1
> a2
> a3
> a4
> Table2
> b1
> b2
> b3
> b4
> Table3
> c1
> c2
> c3
> c4
> c5
> View combines
> a1,a2,b1,c4,c5|||Mark Williams schrieb:
> Michael Schroeder" wrote:
>
> Do you mean that certain business rules prevent you from putting a trigger
> on the view? You can put an INSTEAD OF trigger on the view, and put logic
> into the trigger to update the underlying base tables, as well as put an
> entry into an audit log.
No. I can put an INSTEAD OF trigger on that view, but it does not fire
when something in the underlying base tables changes or something is
being inserted/deleted.
Maybe I have to mention that the base tables are being updated by
replication, and the view and its trigger are just for monitoring and
reporting.
Putting triggers on each base table is not a good idea, because there are
many of them and just a fraction of their data is interesting for
processing.
Thanks for your reply
Michael
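One compromise is a base-table trigger that logs only when view-relevant columns are touched; a sketch with invented names, assuming Table1 feeds the view through a1 and a2 and has an ID key column:

```sql
CREATE TRIGGER trg_Table1_ViewAudit ON Table1
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON
    -- Skip logging entirely unless a column the view uses was touched
    IF UPDATE(a1) OR UPDATE(a2)
        INSERT INTO dbo.ViewChangeLog (ID, ChangedAt)
        SELECT i.ID, GETDATE()
        FROM inserted AS i
END
```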

Monitoring Changes in SP, Table and Views - Server Objects

Hello,
I would like to monitor the changes in the current SQL Server database
objects like tables, stored procedures, views, and triggers. I would like to
store the information in one table and get a report.
How can I do this?

Your best bet is via 3rd-party tools. Take a look at the offerings of
lumigent. www.lumigent.com
HTH. Ryan
"Mehul Gurjar" <Mehul Gurjar@.discussions.microsoft.com> wrote in message
news:4C2A4D8D-BE04-4B66-9D95-6A631C11A6B9@.microsoft.com...
> Hello,
> I would like to monitor the changes in current SQL Server Database
> Objets like Tables, Stored Procedures, views and Triggers. I would like to
> store the information in one table and would like to get the report.
> How can i do this?
>|||Hello Ryan,
Thanks for the response. But can I do this using SQL Server only?
The third-party tool will monitor the changes using processes and will
be storing the data in some database anyway. We have jobs and stored procedures,
and we can run the processes from SQL Server itself.
Do we have any other method besides a third-party tool?
If we can do this, we can use it in all our other clients' environments
and monitor the changes.
With regards,
"Ryan" wrote:

> Your best bet is via 3rd party tools. Take a look at the offerings of
> lumigent. www.lumigent.com
>
> --
> HTH. Ryan
> "Mehul Gurjar" <Mehul Gurjar@.discussions.microsoft.com> wrote in message
> news:4C2A4D8D-BE04-4B66-9D95-6A631C11A6B9@.microsoft.com...
>
In SQL Server 2005 you can take a look at modify_date in the sys.all_objects
catalog view.
MS wrote that you can see the changes if you use an ALTER statement; however,
modify_date will also be changed if you edit the object directly via right
click -> Modify (I checked views).
"Mehul Gurjar" <MehulGurjar@.discussions.microsoft.com> wrote in message
news:8F3AF150-AB1F-423C-B863-94D5A7010AE3@.microsoft.com...
> Hello Ryan,
> Thanks for the response. But can i do this using SQL server only.
> The third party tool will either monitor the Changes using Processes and
> will
> be storing the Data in some database only. We have Jobs, Stored procedures
> and we can run the processes from the SQL server only.
> Do we have any other method except the third party tool ?
> If we can do this, we can use it to all other client's environment
> and can monitor the changes.
> With regards,
> "Ryan" wrote:
>
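A minimal query for the modify_date approach described above; the seven-day window is arbitrary:

```sql
SELECT name, type_desc, create_date, modify_date
FROM sys.all_objects
WHERE is_ms_shipped = 0
  AND modify_date > DATEADD(day, -7, GETDATE())
ORDER BY modify_date DESC
```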

Monitoring changes in any table in an instance

Is there a way to monitor changes in any tables of an instance? I have a
database-driven application that I want to reverse engineer. I want to find
out which tables in the instance a given function changes. Is
there a realistic way to do this? Thank you.
Lito Kusnadi
Lito
CREATE TABLE AuditDDLEvents
(
LSN INT NOT NULL IDENTITY,
posttime DATETIME NOT NULL,
eventtype SYSNAME NOT NULL,
loginname SYSNAME NOT NULL,
schemaname SYSNAME NOT NULL,
objectname SYSNAME NOT NULL,
targetobjectname SYSNAME NOT NULL,
eventdata XML NOT NULL,
CONSTRAINT PK_AuditDDLEvents PRIMARY KEY(LSN)
)
GO
CREATE TRIGGER trg_audit_ddl_events ON DATABASE FOR
DDL_DATABASE_LEVEL_EVENTS
AS
DECLARE @eventdata AS XML
SET @eventdata = eventdata()
INSERT INTO dbo.AuditDDLEvents(
posttime, eventtype, loginname, schemaname,
objectname, targetobjectname, eventdata)
VALUES(
CAST(@eventdata.query('data(//PostTime)') AS VARCHAR(23)),
CAST(@eventdata.query('data(//EventType)') AS SYSNAME),
CAST(@eventdata.query('data(//LoginName)') AS SYSNAME),
CAST(@eventdata.query('data(//SchemaName)') AS SYSNAME),
CAST(@eventdata.query('data(//ObjectName)') AS SYSNAME),
CAST(@eventdata.query('data(//TargetObjectName)') AS SYSNAME),
@eventdata)
GO
The trigger simply extracts all event attributes
of interest from the eventdata() function using XQuery,
and inserts those into the AuditDDLEvents table. To test the trigger,
submit a few DDL statements and query the audit table:
CREATE TABLE T1(col1 INT NOT NULL PRIMARY KEY)
ALTER TABLE T1 ADD col2 INT NULL
ALTER TABLE T1 ALTER COLUMN col2 INT NOT NULL
CREATE NONCLUSTERED INDEX idx1 ON T1(col2)
SELECT * FROM AuditDDLEvents
SELECT posttime, eventtype, loginname,
CAST(eventdata.query('data(//TSQLCommand)') AS NVARCHAR(2000))
AS tsqlcommand
FROM dbo.AuditDDLEvents
WHERE schemaname = N'dbo' AND N'T1' IN(objectname, targetobjectname)
ORDER BY posttime
"Lito Kusnadi" <LitoKusnadi@.discussions.microsoft.com> wrote in message
news:E78250B3-C634-4ADA-9262-24C85C4EFC92@.microsoft.com...
> Is there a way to monitor changes in any tables of an instance? I have a
> database driven application that I want to reverse engineer. I want to
> find
> out if a function would perform changes to which tables in the instance.
> Is
> there a realistic way to do this? Thank you.
> --
> Lito Kusnadi
>
|||In addition to Uri's recommendation, you may want to take a look at the
information available in the SQL Server 2005 default trace.
Take a look at this report in Management Studio, Reports - Standard Reports
- Schema Changes History.
Also see the path of the trace file by running
select * from sys.traces
where the default trace is usually id 1.
Hope this helps,
Ben Nevarez
"Uri Dimant" wrote:

> Lito
> CREATE TABLE AuditDDLEvents
> (
> LSN INT NOT NULL IDENTITY,
> posttime DATETIME NOT NULL,
> eventtype SYSNAME NOT NULL,
> loginname SYSNAME NOT NULL,
> schemaname SYSNAME NOT NULL,
> objectname SYSNAME NOT NULL,
> targetobjectname SYSNAME NOT NULL,
> eventdata XML NOT NULL,
> CONSTRAINT PK_AuditDDLEvents PRIMARY KEY(LSN)
> )
> GO
> CREATE TRIGGER trg_audit_ddl_events ON DATABASE FOR
> DDL_DATABASE_LEVEL_EVENTS
> AS
> DECLARE @eventdata AS XML
> SET @eventdata = eventdata()
> INSERT INTO dbo.AuditDDLEvents(
> posttime, eventtype, loginname, schemaname,
> objectname, targetobjectname, eventdata)
> VALUES(
> CAST(@eventdata.query('data(//PostTime)') AS VARCHAR(23)),
> CAST(@eventdata.query('data(//EventType)') AS SYSNAME),
> CAST(@eventdata.query('data(//LoginName)') AS SYSNAME),
> CAST(@eventdata.query('data(//SchemaName)') AS SYSNAME),
> CAST(@eventdata.query('data(//ObjectName)') AS SYSNAME),
> CAST(@eventdata.query('data(//TargetObjectName)') AS SYSNAME),
> @eventdata)
> GO
> The trigger simply extracts all event attributes
> of interest from the eventdata() function using XQuery,
> and inserts those into the AuditDDLEvents table. To test the trigger,
> submit a few DDL statements and query the audit table:
>
> CREATE TABLE T1(col1 INT NOT NULL PRIMARY KEY)
> ALTER TABLE T1 ADD col2 INT NULL
> ALTER TABLE T1 ALTER COLUMN col2 INT NOT NULL
> CREATE NONCLUSTERED INDEX idx1 ON T1(col2)
> SELECT * FROM AuditDDLEvents
> SELECT posttime, eventtype, loginname,
> CAST(eventdata.query('data(//TSQLCommand)') AS NVARCHAR(2000))
> AS tsqlcommand
> FROM dbo.AuditDDLEvents
> WHERE schemaname = N'dbo' AND N'T1' IN(objectname, targetobjectname)
> ORDER BY posttime
>
> "Lito Kusnadi" <LitoKusnadi@.discussions.microsoft.com> wrote in message
> news:E78250B3-C634-4ADA-9262-24C85C4EFC92@.microsoft.com...
>
>