Friday, March 30, 2012
More on merge replication and FK constraints
So, let me relate to my earlier post in this group about merge replication
and foreign keys.
As I've mentioned, I have one publication that pushes the 'customers' table to
the subscribers.
I have another publication that pushes the 'invoices' and 'invoices_det' tables
to the subscribers. In the invoices table there is a cust_id field which is a
foreign key to the customers table's primary key 'cust_id'. I have created all
the publications so that each table is first DROPped and then recreated, and I
have checked the options for creating primary keys and referential integrity.
First I push the CUSTOMERS publication to the subscriber, and it's OK: the
customers table is created on the subscriber, the primary key is set,
everything is fine.
Then I push the INVOICES publication to the subscriber. That's not OK: I don't
get the foreign key to the customers table in the invoices table.
The Enterprise Manager wizard suggests that I include the customers table in
the INVOICES publication to avoid problems like the one above. But I have a
problem with that approach: it would mean I'd end up with as many
publications as I have subscribers (one subscriber, one publication), which
means I'd have to publish, for instance, the 'customers' table a dozen
times if I have a dozen subscribers.
How do I solve this one?
As far as I can see, I could:
1) drop the foreign key constraints
- I'd hate to do this!
2) set up the publication snapshot to 'delete the data on the subscriber':
This seems OK, but! If I, for any reason, need to break replication,
then when setting it back up again I'll still have data on the subscriber. I
then push the CUSTOMERS publication, the snapshot tries to DELETE the data from
the 'customers' table, but, since it's referenced by a foreign key from the
'invoices' table, that raises an error.
3) set up the publication snapshot to 'leave the existing table unchanged'.
This is my best bet, as far as I can see. I only need to
make sure that the database on the remote side is fresh and clean. Which means:
run a script to do the DDL on the subscriber (with all the constraints!),
and then push the snapshots in the 'right' order (meaning first the customers,
then the invoices, in the example above).
Any help here would be much appreciated.
Mike
"I can do it quick. I can do it cheap. I can do it well. Pick any two."
Mario Splivalo
msplival@.jagor.srce.hr
For the foreign keys, can you not set them all to "Not for replication" and combine
CUSTOMERS, INVOICES and INVOICES_DET into one publication?
Jim.
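For reference, a sketch of what that looks like in T-SQL, using the table and column names from the thread (the constraint name is made up):

```sql
-- NOT FOR REPLICATION tells the constraint to ignore changes
-- applied by replication agents, while still checking user changes.
ALTER TABLE dbo.invoices
    ADD CONSTRAINT FK_invoices_customers   -- hypothetical constraint name
    FOREIGN KEY (cust_id)
    REFERENCES dbo.customers (cust_id)
    NOT FOR REPLICATION;
```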
"Mario Splivalo" wrote:
> So, let me relate to my earlier post in this group about merge replication
> and foreign keys.
> [...]
|||On 2004-12-06, Jim Breffni <JimBreffni@.discussions.microsoft.com> wrote:
> For foreign keys can you not set them all to "Not for replication", combine
> CUSTOMERS, INVOICES and INVOICES_DET in one publication?
I use 'NOT FOR REPLICATION' for the foreign keys.
Combining all of those into one publication now seems like a good thing. But I
have a dozen publications, of which 8 are non-filtered publications. Customers
is an example of such a publication: I have 30 stores around the country, all
the stores need to be able to enter new customers, and a new
customer should be visible in all locations. Invoices, for instance, is an
example of a filtered publication (I have 4 like it); only the central
database (the publisher) needs to see all the invoices (for reporting
purposes and financial stuff). The other subscribers don't see each other's
invoices. The same goes for inventory lists, some other financial things whose
English names I don't know, and so on.
So, your approach leaves me with ONE publication for each subscriber! I
don't like that approach, it just creates a mess. I'd be glad if someone could
tell me now 'stop, you're wrong here'.
My database is well normalized (is that the term for referencing data through
foreign keys?). For instance, invoices carry a payment method (cash,
credit card, ...) which is an FK to the table which holds the payment methods.
That table is updated ONLY at the publisher. Another example is stocks: each
invoice holds a stock_id, which is a foreign key to the table l_stocks. Then
users. Then tax rates. Then a lot of other things. All those tables are updated
at the publisher only, so I created a snapshot publication to push that data to
the subscribers. You suggest I put all those tables into the merge publication
just because they're referenced by FKs from tables like invoices? What is
wrong with my understanding of replication if I believe that is not the way?
Mike
"I can do it quick. I can do it cheap. I can do it well. Pick any two."
Mario Splivalo
msplival@.jagor.srce.hr
|||If you have a set of tables that rely on each other using foreign keys and
they are likely to be updated together then they should all be in the same
publication.
As I understand it, you have them separated and are getting foreign key
errors because items are not present when you need them.
You may have very good reasons for splitting them that I am not aware of, but
I see no other solution to your problem.
Jim.
"Mario Splivalo" wrote:
> On 2004-12-06, Jim Breffni <JimBreffni@.discussions.microsoft.com> wrote:
> I use 'NOT FOR REPLICATION' for foreign keys.
> [...]
|||On 2004-12-07, Jim Breffni <JimBreffni@.discussions.microsoft.com> wrote:
> If you have a set of tables that rely on each other using foreign keys and
> they are likely to be updated together then they should all be in the same
> publication.
Yes. That goes for master/detail tables. But what if I have a table with,
let's say, tax rates, and in the invoice table I have a reference via foreign
key to the tax rates table? I'm publishing the tax rates table (along with
others) via a snapshot publication. So why would I include those in the merge
publication? Yet the tables in the merge publication still reference fields in
the tax rates table, so when deploying the initial snapshot the invoices table
WON'T have its FK constraint to the tax rates table. How do I deal with that?
> As I understand it, you have them separated and are getting foreign key
> errors because items are not present when you need them.
Initial snapshot won't create foreign key constraints for tables that are
not in that publication. That is my problem.
> You may have very good reasons for splitting them that I am not aware of but
> I see no other solution to your problem.
Do you have any reference on the web where I could read more about this? I
see no point in putting all the database tables into one publication just
because almost all the tables are linked by foreign keys. Or, to put it more
simply: how do I set up snapshot and merge publications so that tables in the
merge publication have foreign keys to tables in the snapshot publication?
Mike
"I can do it quick. I can do it cheap. I can do it well. Pick any two."
Mario Splivalo
msplival@.jagor.srce.hr
|||Yes, what you are doing is logically OK - I just don't know of a way to prevent
FK errors when you split things like that.
For me, even if the table rarely changes, I include everything in a
publication. The only price I pay for that is a larger initial snapshot.
The benefit I get is simplicity and easier administration.
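One workaround, not from this thread and only a sketch: once the snapshots for both publications have been applied at the subscriber, re-create any missing constraint yourself with a script run at the subscriber (the constraint and column names below are examples):

```sql
-- Run at the subscriber after both initial snapshots have been delivered.
IF NOT EXISTS (SELECT * FROM sysobjects
               WHERE name = 'FK_invoices_taxrates' AND xtype = 'F')
    ALTER TABLE dbo.invoices
        ADD CONSTRAINT FK_invoices_taxrates   -- hypothetical name
        FOREIGN KEY (taxrate_id)              -- hypothetical column
        REFERENCES dbo.tax_rates (taxrate_id)
        NOT FOR REPLICATION;
```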
|||On 2004-12-07, Jim Breffni <JimBreffni@.discussions.microsoft.com> wrote:
> Yes, what you are doing is logical ok - I just don't know of a way to prevent
> FK errors when you split things like that.
> For me, even if the table rarely changes, I include everything in a
> publication. The only price I pay for that is a larger initial snapshot.
> The benefit I get is simplicity and easier administration.
So, you'd generally be OK with this: putting rarely changed tables as articles
into many publications, even though you could have just one snapshot
publication for them?
Mike
"I can do it quick. I can do it cheap. I can do it well. Pick any two."
Mario Splivalo
msplival@.jagor.srce.hr
|||I try to keep it to one publication, one snapshot. I then have many
subscribers - it is very easy to administer.
"Mario Splivalo" wrote:
> On 2004-12-07, Jim Breffni <JimBreffni@.discussions.microsoft.com> wrote:
> So, you'd generaly be ok with this: put rarely changed tables as articles
> into many publications, although you could have only one snapshot
> publication for those?
> Mike
> --
> "I can do it quick. I can do it cheap. I can do it well. Pick any two."
> Mario Splivalo
> msplival@.jagor.srce.hr
>
|||On 2004-12-08, Jim Breffni <JimBreffni@.discussions.microsoft.com> wrote:
> I try to keep it one publication, one snapshot. I then have many subscribers
> - it is very easy to administer.
>
Thanks. I'll see what I come up with in the end, and post it here...
Mike
"I can do it quick. I can do it cheap. I can do it well. Pick any two."
Mario Splivalo
msplival@.jagor.srce.hr
|||Mario,
I'm looking for the same answer as you. I have basically the same problem
with the initial snapshot in transactional replication and, like you, I have
also unchecked the enforce-for-replication option in the relationship dialog
for the foreign keys.
If I get any answers or more info I will post it here...
Adrian Parra.
"Mario Splivalo" wrote:
> On 2004-12-06, Jim Breffni <JimBreffni@.discussions.microsoft.com> wrote:
> I use 'NOT FOR REPLICATION' for foreign keys.
> [...]
Wednesday, March 28, 2012
More Merge Join question
All,
I want to use Merge Join to join two sources,
but the join condition should look like this:
input1.servicedate <= input2.MonthFirstDay
and input2.MonthFirstDay <= getdate()
Does Merge Join support this, or what's the best way to do it?
Thanks
Here is more detail:
After the join with input2, each record in input1 will produce multiple records, one for each month between the service date and the system date.
The query for selecting input2 is a SELECT DISTINCT from a table.
Thanks
|||Here is more detail:
In input1, the records look like this:
EmpID  Dept_ID  service_date
1      23       20060101
2      20       20060201
Input2 is a time dimension:
MonthKey  firstdayOf_month  lastdayOf_Month
200601    20060101          20060131
200602    20060201          20060228
...
200605    20060501          20060531
The data for input1 and input2 come from two different servers.
If I used a linked server, the query would look like:
Select empID, dept_ID, service_date, monthkey
From input1, linkserver.input2
Where input1.servicedate <= input2.MonthFirstDay
And input2.MonthFirstDay <= getdate()
The records returned from the query:
empID  dept_ID  service_date  monthkey
1      23       20060101      200601
1      23       20060101      200602
1      23       20060101      200603
1      23       20060101      200604
1      23       20060101      200605
2      20       20060201      200602
2      20       20060201      200603
2      20       20060201      200604
2      20       20060201      200605
In my SSIS package I am trying not to use a linked server, so I use two OLE DB sources to query the data, and the data from input1 goes through some lookup processing. At the end, input1 has to join with input2. Which transformation should I use to join those two inputs, and what kind of join should I use here?
Thanks a lot
|||I can't think of any way to do this without writing script - in this case I actually think the linked server approach may be the most efficient.
Donald Farmer
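For what it's worth, the SSIS Merge Join transformation only supports equality join conditions, which is why a range predicate like this one has to be handled elsewhere. A sketch of the linked-server form of the query (the server and database names are placeholders):

```sql
SELECT i.empID, i.dept_ID, i.service_date, t.MonthKey
FROM dbo.input1 AS i
JOIN LINKEDSRV.SourceDB.dbo.input2 AS t       -- LINKEDSRV/SourceDB are placeholders
    ON i.service_date <= t.firstdayOf_month   -- range predicate: not an equijoin
   AND t.firstdayOf_month <= GETDATE();
```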
Friday, March 23, 2012
monthly database merge
Hi,
I have to set up a project in SQL Server Express that will allow the export of tables from multiple PCs running SQL Server Express each month.
These have to be loaded into a single, identical database on SQL Server Express each month for combined reporting.
So it's basically:
insert/update on the PC
export
import, overlaying last month's data (handle dup keys from the other PCs etc.)
report
I've had a look at the SQL Server replication docs and got confused...
Can I do this with replication?
If anyone can point me at appropriate documentation, or suggest a good method for this, it would be appreciated.
Thanks
Yes, it's possible to do it with replication.
Look up the topic on merge replication on MSDN.
Also check out the following links for some additional info:
http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=1527294&SiteID=17
http://msdn2.microsoft.com/en-us/library/ms151819.aspx
Gary
|||Hello Peter,
Express Edition is not able to be a Publisher. It's capable of being a Subscriber only.
Check header "Integration and Interoperability" from the following link:
Features Comparison:
http://www.microsoft.com/sql/prodinfo/features/compare-features.mspx
and these links to learn more about SQL Server Replication from BOL:
Replication Basics for Express Edition:
http://msdn2.microsoft.com/en-us/library/ms165700.aspx
SQL Server Replication:
http://msdn2.microsoft.com/en-us/library/ms151198.aspx
Ekrem Önsoy
|||Unfortunately that was the documentation that confused me. Is there something a little more practical and less theoretical around?
|||The remote PCs in my system are self-sufficient (i.e. they don't need to receive data back from the merge) and are not physically connected to the machine that aggregates the data.
The documentation seems to imply that although I can set up a copy of SQL Server Express to merge the data received on CD from the remote sites, I cannot set up the SQL Server Express remote sites to generate the replication data to the CDs.
Do you agree with that statement, from your experience?
Monday, March 19, 2012
Monitoring replication from subscriber
I have a replication scenario where multiple laptops merge replicate with a
central server. The subscriptions are set up as push subscriptions on the
distributor. My users want a tool to tell when they last replicated, and if
possible, how many records are waiting to be replicated. They need this
while disconnected from the server. Is there a way to get this information,
without having access to the publisher/distributor?
Thanks,
John Loveland
John,
have a look on my site for a script to find pending merge changes. There is
some discussion about the reliability of the data in it, but I've found it a
useful bit of data to use.
For the last time merged using a push, perhaps look at
sysmergesubscriptions.
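A sketch of that lookup; the column names below are from memory for the SQL Server 2000-era system tables, so verify them against your build before relying on this:

```sql
-- Run in the subscription database at the subscriber.
SELECT subscriber_server, db_name, last_sync_date, last_sync_status
FROM dbo.sysmergesubscriptions
ORDER BY last_sync_date DESC;
```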
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Paul,
Thanks. That looks like exactly what I need.
John Loveland
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:%23A7Ma05FGHA.2568@.TK2MSFTNGP10.phx.gbl...
> John,
> have a look on my site for a script to find pending merge changes. There
> is some discussion about the reliability of the data in it, but I've found
> it a useful bit of data to use.
> For the last time merged using a push, perhaps look at
> sysmergesubscriptions.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>
Monday, March 12, 2012
Monitoring Merge Push Subscription from Subscriber
Hi all!
Is there an option to monitor the current state of a merge push subscription from the Subscriber, without connecting to the Publisher server? I have examined many SPs and system tables at the Subscriber, but didn't find any reliable method...
|||What we do is record the current UTC time every time the sync job completes successfully. Then another job sends out an alert if the last sync time is too far out of date. However, another thing you can do is look at MSmerge_genhistory. There is a genstatus column in there, 1 or 0 for delivered or not delivered. If the last genstatus is still zero and the coldate is X number of hours out of date, then you can send out some alerts.
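A sketch of that check (the 4-hour threshold is arbitrary; pick your own):

```sql
-- Alert if a generation is still undelivered and older than 4 hours.
IF EXISTS (SELECT 1
           FROM dbo.MSmerge_genhistory
           WHERE genstatus = 0                             -- 0 = not yet delivered
             AND coldate < DATEADD(hour, -4, GETDATE()))
    RAISERROR('Merge changes appear to be undelivered.', 16, 1);
```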
Martin
|||Thanks, Martin!
I also discovered that the table sysmergesubscriptions at the Subscriber contains some information about when the last synchronization occurred and its message.
Wednesday, March 7, 2012
monitor replication
How do I automate monitoring of the success/failure of
replication - both transactional as well as merge? Are
there any system tables based on which I can do a join?
I'd appreciate it if someone could provide a query...
regards.
bharath
|||If you want to monitor replication you need to query information from the
sysjobs and sysjobhistory tables in the msdb database.
In sysjobs the name of the job would be like this
<Remote Server Name>-<Publication name>-<Publication database>-<Local sql
server name>-<Database name>- 0
For this particular job name, find the job_id and use it to query the
sysjobhistory table. Watch the columns step_name and message.
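A sketch of that query (the LIKE pattern is a placeholder for your publication name):

```sql
SELECT j.name, h.run_date, h.run_time,
       h.run_status,                          -- 1 = succeeded, 0 = failed
       h.step_name, h.message
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
WHERE j.name LIKE '%MyPublication%'           -- placeholder pattern
ORDER BY h.run_date DESC, h.run_time DESC;
```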
Regards,
Jagan Mohan
MCP
"bharath" <barathsing@.hotmail.com> wrote in message
news:051401c3b7e0$b3f08eb0$a401280a@.phx.gbl...
> Hi,
> How do I automate the successful / error working of
> replication - both transactional as well as merge. Is
> there any system tables based on which i can do a join?
> Appreciate if someone can provide a query...
> regards.
> bharath
>
|||One method I have always used and found to be very effective and
simple is to create an end-to-end trace myself and then monitor the trace
for the end-to-end transactional replication delay.
To create such a trace, I include a dummy table in every publication. The
dummy table has a datetime column, and I schedule a job to update the dummy
table at a regular interval. Then, when the change in the dummy table has
reached the subscriber via replication, I compare the time the datetime
was updated at the publisher with the time the same row was updated at the
subscriber. The time difference gives me the end-to-end replication delay.
I can then send alerts if the delay is longer than a certain threshold.
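A minimal sketch of such a heartbeat table (the object names are made up, and the subscriber-side comparison below assumes the publisher and subscriber clocks are synchronized; otherwise capture the arrival time with a trigger as described above):

```sql
-- At the publisher: a one-row heartbeat table, included in every publication.
CREATE TABLE dbo.repl_heartbeat (
    id             int      NOT NULL PRIMARY KEY,
    publisher_time datetime NOT NULL
);
INSERT dbo.repl_heartbeat (id, publisher_time) VALUES (1, GETDATE());

-- Scheduled job at the publisher, run every few minutes:
UPDATE dbo.repl_heartbeat SET publisher_time = GETDATE() WHERE id = 1;

-- At the subscriber: approximate end-to-end delay.
SELECT DATEDIFF(second, publisher_time, GETDATE()) AS delay_seconds
FROM dbo.repl_heartbeat;
```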
--
Linchi Shea
linchi_shea@.NOSPAMml.com
"bharath" <barathsing@.hotmail.com> wrote in message
news:051401c3b7e0$b3f08eb0$a401280a@.phx.gbl...
> Hi,
> How do I automate the successful / error working of
> replication - both transactional as well as merge. Is
> there any system tables based on which i can do a join?
> Appreciate if someone can provide a query...
> regards.
> bharath
>
>
Saturday, February 25, 2012
monitor data change of merge replication
you can take advantage of the history table:
select publisher_insertcount, publisher_updatecount, publisher_deletecount,
       subscriber_insertcount, subscriber_updatecount, subscriber_deletecount
from dbo.MSmerge_history
Also, you can use performance monitor for a less granular
approach:
SQL Server:Replication Merge - Downloaded Changes
SQL Server:Replication Merge - Uploaded Changes
SQL Server:Replication Merge - Conflicts
Rgds,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
Thanks Paul.
Is it possible to trace which record and what data was changed?
Thanks
Nathan
"Paul Ibison" wrote:
> Nathan,
> you can take advantage of the history table:
> select publisher_insertcount, publisher_updatecount,
> publisher_deletecount,
> subscriber_insertcount, subscriber_updatecount,
> subscriber_deletecount from
> dbo.MSmerge_history
> Also, you can use performance monitor for a less granular
> approach:
> SQL Server:Replication Merge - Downloaded Changes
> SQL Server:Replication Merge - Uploaded Changes
> SQL Server:Replication Merge - Conflicts
> Rgds,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>
|||Not really. On a particular computer you could look at
the MSmerge_contents and MSmerge_tombstone tables and map
the GUIDs back. But to sit on a particular computer and
determine whether a row has been updated/inserted since the
last merge synchronization is not possible,
AFAIK. I suppose if you really wanted this functionality,
you could use triggers to build your own audit table.
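A sketch of such an audit trigger, using the customers table from earlier in the group as an example (the audit table and trigger names are made up):

```sql
-- Hypothetical audit table: one row per change, tagged I/U/D.
CREATE TABLE dbo.customers_audit (
    audit_id   int IDENTITY PRIMARY KEY,
    cust_id    int NOT NULL,
    action     char(1) NOT NULL,              -- 'I', 'U' or 'D'
    changed_at datetime NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_customers_audit ON dbo.customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    -- Inserts and updates: rows present in the inserted pseudo-table.
    INSERT dbo.customers_audit (cust_id, action)
    SELECT i.cust_id,
           CASE WHEN EXISTS (SELECT * FROM deleted) THEN 'U' ELSE 'I' END
    FROM inserted AS i;
    -- Pure deletes: rows only in the deleted pseudo-table.
    INSERT dbo.customers_audit (cust_id, action)
    SELECT d.cust_id, 'D'
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT * FROM inserted);
END;
```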
HTH,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
Monitor Data and Log File Growth
We have more than 20 databases and we are also merge replicating the
databases. Currently, for all databases, we set the data and log file
growth to grow automatically by percent. The size of each database is around
1277 MB.
Now, if we change the data and log file growth to restricted mode, will it
cause any problem? If not, how will we be able to monitor the
data and log file growth, and when should we increase the file size?
Thanks,
Herbert
|||Herbert,
If you open SQL Server Profiler you will find an Auto Grow event (if I
remember well), so you can track the info, as well as by querying the sysfiles
system table (size column) (not recommended).
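A sketch of that sysfiles query (the size column is stored in 8 KB pages, so dividing by 128 gives MB):

```sql
SELECT name,
       size / 128                            AS size_mb,  -- size is in 8 KB pages
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb   -- pages actually in use
FROM dbo.sysfiles;
```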
"Herbert" <Herbert@.discussions.microsoft.com> wrote in message
news:7973C596-C566-42F5-80CA-011D89D189AE@.microsoft.com...
> Hi,
> We are having more than 20 databases and we are merge replicating the
> databases also. Currently for all databases we set the Data and Log file
> growth as Automatically by percent. The size of each database is around
1277
> MB.
> Now if we Change the Data and Log File growth to resctricted mode is there
> is any problem will cause, if not then how we will be able to monitor the
> Data and Log file growth and when we should increase the File Size.
> Thanks,
> Herbert
|||Hi,
As a good practice, I recommend that you monitor the MDF and LDF files
manually once or twice daily. The best approach for automatic monitoring is:
1. Set the DB size to a maximum size and allow unrestricted growth by
percentage or MB.
2. Then set up a hard disk free space monitoring alert, so that whenever your
hard disk free space drops below a specific amount you will get an alert.
See the link below to configure hard disk monitoring.
http://support.microsoft.com/?kbid=299921
Thanks
Hari
SQL Server MVP
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:uxCBebdOFHA.3444@.tk2msftngp13.phx.gbl...
> Herbert
> If you open SQL Server Profiler you will find an event AutoGrowth (If I
> remember well) , so you can track the info as well as querying sysfiles
> system table (size column) (not recomended)
>
>
> "Herbert" <Herbert@.discussions.microsoft.com> wrote in message
> news:7973C596-C566-42F5-80CA-011D89D189AE@.microsoft.com...
>> Hi,
>> We are having more than 20 databases and we are merge replicating the
>> databases also. Currently for all databases we set the Data and Log file
>> growth as Automatically by percent. The size of each database is around
> 1277
>> MB.
>> Now if we Change the Data and Log File growth to resctricted mode is
>> there
>> is any problem will cause, if not then how we will be able to monitor the
>> Data and Log file growth and when we should increase the File Size.
>> Thanks,
>> Herbert
>|||Also read the following link
http://www.databasejournal.com/features/mssql/article.php/3339681
"Hari Prasad" <hari_prasad_k@.hotmail.com> wrote in message
news:eSLV51dOFHA.2604@.TK2MSFTNGP10.phx.gbl...
> Hi,
> [...]
|||When your free space percentage on the disk gets below 20%, it is time to
start thinking about a new allocation.
For trend analysis, use the backup repository in msdb: every time a backup
is written, the size of the database files is logged.
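A sketch of pulling that trend out of msdb (the database name is a placeholder):

```sql
SELECT bs.database_name,
       bs.backup_start_date,
       bf.logical_name,
       bf.file_size / 1048576 AS file_size_mb  -- file_size is in bytes
FROM msdb.dbo.backupset  AS bs
JOIN msdb.dbo.backupfile AS bf ON bf.backup_set_id = bs.backup_set_id
WHERE bs.database_name = 'MyDatabase'          -- placeholder
ORDER BY bs.backup_start_date;
```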
Sincerely,
Anthony Thomas
"Hari Prasad" <hari_prasad_k@.hotmail.com> wrote in message
news:eSLV51dOFHA.2604@.TK2MSFTNGP10.phx.gbl...
[...]
Monitor Data and Log File Growth
We have more than 20 databases, and we also merge replicate them. Currently,
for all of them, the data and log file growth is set to grow automatically by
percent. Each database is around 1277 MB.
If we change the data and log file growth to restricted mode, will that cause
any problems? If not, how can we monitor data and log file growth, and when
should we increase the file size?
Thanks,
Herbert|||Herbert,
If you open SQL Server Profiler you will find an AutoGrowth event (if I
remember well), so you can track that information. You can also query the
sysfiles system table (size column), though that is not recommended.
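As a sketch of the sysfiles approach Uri mentions: sysfiles stores file sizes in 8 KB pages, so dividing by 128 converts to MB. (On versions after the SQL Server 2000 era of this thread, the supported catalog view is sys.database_files; that substitution is an assumption beyond what the thread covers.)

```sql
-- Minimal sketch: current size of each file in the current database.
-- sysfiles stores size in 8 KB pages; 128 pages = 1 MB.
SELECT name,
       size / 128.0 AS size_mb
FROM sysfiles
```

Logging this periodically into a table would give the same growth history that Profiler's AutoGrowth event captures per event.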
"Herbert" <Herbert@.discussions.microsoft.com> wrote in message
news:7973C596-C566-42F5-80CA-011D89D189AE@.microsoft.com...
> Hi,
> We are having more than 20 databases and we are merge replicating the
> databases also. Currently for all databases we set the Data and Log file
> growth as Automatically by percent. The size of each database is around
1277
> MB.
> Now if we Change the Data and Log File growth to resctricted mode is there
> is any problem will cause, if not then how we will be able to monitor the
> Data and Log file growth and when we should increase the File Size.
> Thanks,
> Herbert|||Hi,
As a good practice, I recommend monitoring the MDF and LDF files manually once
or twice daily. The best approach for automatic monitoring is:
1. Set the database to a maximum size, and configure growth by percentage or
MB up to that limit.
2. Then set up a hard disk free space monitoring alert, so that whenever your
free disk space falls below a specific amount, you get an alert.
See the link below to configure hard disk monitoring:
http://support.microsoft.com/?kbid=299921
Thanks
Hari
SQL Server MVP
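The free-space alert in step 2 can also be sketched outside SQL Server; this snippet and its 20% default threshold are illustrative assumptions, not what the linked KB article configures:

```python
import shutil

def low_disk_alert(path, min_free_fraction=0.2):
    """True when free space on the drive holding `path` drops below
    the given fraction of total capacity (20% by default)."""
    usage = shutil.disk_usage(path)  # total, used, free, in bytes
    return usage.free / usage.total < min_free_fraction
```

A scheduled job could run this against the drive holding the MDF/LDF files and raise an alert whenever it returns True.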
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:uxCBebdOFHA.3444@.tk2msftngp13.phx.gbl...
> Herbert
> If you open SQL Server Profiler you will find an event AutoGrowth (If I
> remember well) , so you can track the info as well as querying sysfiles
> system table (size column) (not recomended)
>
>
> "Herbert" <Herbert@.discussions.microsoft.com> wrote in message
> news:7973C596-C566-42F5-80CA-011D89D189AE@.microsoft.com...
> 1277
>|||Also read the following link
http://www.databasejournal.com/feat...cle.php/3339681
"Hari Prasad" <hari_prasad_k@.hotmail.com> wrote in message
news:eSLV51dOFHA.2604@.TK2MSFTNGP10.phx.gbl...
> Hi,
> As a good practice, I recommend you to monitor the MDF and LDF files daily
> once or twice manually. The best approach for automatic monitoring is :-
> 1. Set the DB Size to a maximum size and make unrestricted growth by
> percentage or MB
> 2. Then set up hard disk free space monitoring alert. So when ever your
hard
> disk goes beyond a specific amount you will get an alert.
> See the below link to configure hard disk monitoring.
> http://support.microsoft.com/?kbid=299921
> Thanks
> Hari
> SQL Server MVP
> "Uri Dimant" <urid@.iscar.co.il> wrote in message
> news:uxCBebdOFHA.3444@.tk2msftngp13.phx.gbl...
the[vbcol=seagreen]
file[vbcol=seagreen]
around[vbcol=seagreen]
the[vbcol=seagreen]
>|||When your free space percentage on the disk gets below 20%, it is time to
start thinking about a new allocation.
For trend analysis, use the backup history in msdb: every time a backup is
written, the size of the database files is logged.
Sincerely,
Anthony Thomas
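Anthony's trend-analysis idea amounts to a simple rate calculation over the file sizes logged with each backup; the function names and sample numbers below are hypothetical, not from msdb:

```python
def daily_growth_mb(samples):
    """Average daily growth from (day_number, size_mb) samples,
    e.g. the file sizes logged in msdb with each backup."""
    (first_day, first_size), (last_day, last_size) = samples[0], samples[-1]
    return (last_size - first_size) / (last_day - first_day)

def days_until(limit_mb, current_mb, growth_per_day):
    """Rough days remaining before a file reaches `limit_mb`."""
    return (limit_mb - current_mb) / growth_per_day
```

Knowing the growth rate lets you extend a restricted file before it fills, instead of reacting after an autogrow or an out-of-space error.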
"Hari Prasad" <hari_prasad_k@.hotmail.com> wrote in message
news:eSLV51dOFHA.2604@.TK2MSFTNGP10.phx.gbl...
Hi,
As a good practice, I recommend you to monitor the MDF and LDF files daily
once or twice manually. The best approach for automatic monitoring is :-
1. Set the DB Size to a maximum size and make unrestricted growth by
percentage or MB
2. Then set up hard disk free space monitoring alert. So when ever your hard
disk goes beyond a specific amount you will get an alert.
See the below link to configure hard disk monitoring.
http://support.microsoft.com/?kbid=299921
Thanks
Hari
SQL Server MVP
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:uxCBebdOFHA.3444@.tk2msftngp13.phx.gbl...
> Herbert
> If you open SQL Server Profiler you will find an event AutoGrowth (If I
> remember well) , so you can track the info as well as querying sysfiles
> system table (size column) (not recomended)
>
>
> "Herbert" <Herbert@.discussions.microsoft.com> wrote in message
> news:7973C596-C566-42F5-80CA-011D89D189AE@.microsoft.com...
> 1277
>
Monitor Data and Log File Growth
We are having more than 20 databases and we are merge replicating the
databases also. Currently for all databases we set the Data and Log file
growth as Automatically by percent. The size of each database is around 1277
MB.
Now if we Change the Data and Log File growth to resctricted mode is there
is any problem will cause, if not then how we will be able to monitor the
Data and Log file growth and when we should increase the File Size.
Thanks,
Herbert
Herbert
If you open SQL Server Profiler you will find an event AutoGrowth (If I
remember well) , so you can track the info as well as querying sysfiles
system table (size column) (not recomended)
"Herbert" <Herbert@.discussions.microsoft.com> wrote in message
news:7973C596-C566-42F5-80CA-011D89D189AE@.microsoft.com...
> Hi,
> We are having more than 20 databases and we are merge replicating the
> databases also. Currently for all databases we set the Data and Log file
> growth as Automatically by percent. The size of each database is around
1277
> MB.
> Now if we Change the Data and Log File growth to resctricted mode is there
> is any problem will cause, if not then how we will be able to monitor the
> Data and Log file growth and when we should increase the File Size.
> Thanks,
> Herbert
|||Hi,
As a good practice, I recommend you to monitor the MDF and LDF files daily
once or twice manually. The best approach for automatic monitoring is :-
1. Set the DB Size to a maximum size and make unrestricted growth by
percentage or MB
2. Then set up hard disk free space monitoring alert. So when ever your hard
disk goes beyond a specific amount you will get an alert.
See the below link to configure hard disk monitoring.
http://support.microsoft.com/?kbid=299921
Thanks
Hari
SQL Server MVP
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:uxCBebdOFHA.3444@.tk2msftngp13.phx.gbl...
> Herbert
> If you open SQL Server Profiler you will find an event AutoGrowth (If I
> remember well) , so you can track the info as well as querying sysfiles
> system table (size column) (not recomended)
>
>
> "Herbert" <Herbert@.discussions.microsoft.com> wrote in message
> news:7973C596-C566-42F5-80CA-011D89D189AE@.microsoft.com...
> 1277
>
|||Also read the following link
http://www.databasejournal.com/featu...le.php/3339681
"Hari Prasad" <hari_prasad_k@.hotmail.com> wrote in message
news:eSLV51dOFHA.2604@.TK2MSFTNGP10.phx.gbl...
> Hi,
> As a good practice, I recommend you to monitor the MDF and LDF files daily
> once or twice manually. The best approach for automatic monitoring is :-
> 1. Set the DB Size to a maximum size and make unrestricted growth by
> percentage or MB
> 2. Then set up hard disk free space monitoring alert. So when ever your
hard[vbcol=seagreen]
> disk goes beyond a specific amount you will get an alert.
> See the below link to configure hard disk monitoring.
> http://support.microsoft.com/?kbid=299921
> Thanks
> Hari
> SQL Server MVP
> "Uri Dimant" <urid@.iscar.co.il> wrote in message
> news:uxCBebdOFHA.3444@.tk2msftngp13.phx.gbl...
the[vbcol=seagreen]
file[vbcol=seagreen]
around[vbcol=seagreen]
the
>
|||When your free space percentage on the disk gets below 20%, it is time to
start thinking about a new allocation.
For trend analysis, use the Backup repository in msdb. Every time a backup
is written, the size of the database files is logged.
Sincerely,
Anthony Thomas
"Hari Prasad" <hari_prasad_k@.hotmail.com> wrote in message
news:eSLV51dOFHA.2604@.TK2MSFTNGP10.phx.gbl...
Hi,
As a good practice, I recommend you to monitor the MDF and LDF files daily
once or twice manually. The best approach for automatic monitoring is :-
1. Set the DB Size to a maximum size and make unrestricted growth by
percentage or MB
2. Then set up hard disk free space monitoring alert. So when ever your hard
disk goes beyond a specific amount you will get an alert.
See the below link to configure hard disk monitoring.
http://support.microsoft.com/?kbid=299921
Thanks
Hari
SQL Server MVP
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:uxCBebdOFHA.3444@.tk2msftngp13.phx.gbl...
> Herbert
> If you open SQL Server Profiler you will find an event AutoGrowth (If I
> remember well) , so you can track the info as well as querying sysfiles
> system table (size column) (not recomended)
>
>
> "Herbert" <Herbert@.discussions.microsoft.com> wrote in message
> news:7973C596-C566-42F5-80CA-011D89D189AE@.microsoft.com...
> 1277
>