Topic: Idiomatic Expressions to Operating System Architecture
Replies: 5   Last Post: Nov 23, 2012 4:08 AM

Guest
Posted: Oct 26, 2009 6:50 AM
IDIOMATIC EXPRESSIONS
Below we list a number of idiomatic expressions together with
examples of their application.
The best way to memorize them is habitual practice, little by little.
meami - gushing, in spurts
naos - on board
nakta - to end
tanak - against the rhythm
jet - to jet
kust - by the job
tant - at the wrong time
heut - today
hidin - undercover
si - so that
idin - by dint of
katin - on all fours
ekos - in abundance, by the score
kledin - stealthily
uemi - awaiting
siak - toward there
sia - toward here
tieb - unless
luetem - starting from today
luetmin - starting from tomorrow
luet - starting from
fusno - on foot
bemni - in installments, on credit
nablu - astern
nable - at the prow
dekien - behind closed doors
get - for (ex. salget = for sale)
guogi - fed up
sa - through
nuî - sometimes
tadin - at dusk
eblei - at random
kounde - with cash
otgi - overdrawn
nakul - at the end of everything
fedi - at the front of
kliso - uncovered
et. - of (nº) in (nº)
nemit - in accordance with
hai - in agreement
kaik - somehow
abso - up and down
gud - of good quality
nemit - in conformity with
ikie - from the heart
isio - of him
ikio - of her
isui - of them
dastei - this way
ift - in fact
siok - in the same way
imai - of me
noun - in no way
inoi - of us
tuih - on tiptoe
itei - of you
kulei - anyway
etdai - three by three
nuî - from time to time
buatin - in answer to
aki - against
bligen - anyway
koukin - squatting
hiam - in body and soul
letdi - stark naked
ut - on the outside
ksen - abroad
uelmin - in the firmament
busmin - in the deep of the sea
aom - inside
ikin - in the same one
lakmin - in the museum
namins - in the name of
geasin - in the current year
begun - in the beginning
uokin - in the course of
dosta - in that moment
dasta - at this time
gusfil - in Indian file
dastin - at the present time
hemdu - in what remains of the day
eob - instead of
ubai - in my opinion
Nitbin - on Christmas Eve
batik - in particular
fusno - on foot
heok - precariously
bao - in the first place
gausin - in fact
tet - soon after
bejin - in series
nohin - in theory
daisin - in a trice
iul - in vain
salok - for sale
seibo - in view of the fact that
liabi - the high one
goug - the master of
geal - last year
geam - the next year
gogi - the interest
idi - himself
sigan - the undersigned
domla - it is not of the house
stanie - no parking
niej - there is nothing to do
jie - there is not
klasnet - don't interrupt
diet - don't liberate (diet)
kebat - I don't understand it
hest - not to bother
nied - not to need
golien - not to cross the line
golie - not to surpass
kiest - not to remember
neg - it is not admitted
negbe - animals are not admitted
habt - not to have
uakie - it doesn't have importance
keinte - you are not right
ili - for him
uli - for them
katli - for the cats
kanli - for the dogs
oli - for us
lio - so that
eli - for you
nogli - to triumph
jabli - to vary
iakli - over there
iali - for here
akli - on the contrary
ke - please
bijli - reason why
ite - apart from this
sa - by means of
belsib - enjoy your meal
febni - that it causes fever
kakob - What kind of ... is it?
hetben - Who hates the children
hetdel - Who hates the brother
kaiso - we know
meinbo? - what does it mean?
lakbo - what luck
toje? - how's it going?
Feb - to have fever
fob - to have a phobia
beas - to want
et - to have a temper
lim - to be hungry
hikt - to have hiccups
uak - to have importance
ail - to be in a hurry
mus - to have to
bois - to be thirsty
lak - to be lucky
betni - boyfriend's suit
betnik - girlfriend's dress
Sentences related to the previous expressions
Heutin kaista is gusim
Today I don't know if he will come
Baskus dekien
They met behind closed doors
Uemi niosei nem goedai lauds
Awaiting your news, receive my best greetings
Luetmin talam gahem Usik lan
Starting from tomorrow I will study the Usik language every day
Tadin Hal leket hamkel
At dusk the Sun is tinted blood red
Tasas fedi feni
I placed myself at the front of the demonstration
Koun otgi
The bill is overdrawn
Duebin kul himis glad
In spite of everything he arrived happy
Das blankbaet
This is against bacterial plaque
Nemit beb keinte
In accordance with the paper, you are not right
Das kab gud
This car is of good quality
Ibu das dom?... Isio
Whose house is this?... His
Begun an himis Hum
In the beginning man arrived on the Earth
CREATE ENDPOINT endPointName [ AUTHORIZATION login ]
STATE = { STARTED | STOPPED | DISABLED }
AS { HTTP | TCP } ( <protocol_specific_arguments> )
FOR { SOAP | TSQL | SERVICE_BROKER | DATABASE_MIRRORING } (
<language_specific_arguments> )
<AS HTTP_protocol_specific_arguments> ::=
AS HTTP (
PATH = 'url' ,
AUTHENTICATION = ( { BASIC | DIGEST | INTEGRATED | NTLM | KERBEROS } [ ,...n ] ) ,
PORTS = ( { CLEAR | SSL } [ ,...n ] )
[ SITE = { '*' | '+' | 'webSite' } , ]
[ [ , ] CLEAR_PORT = clearPort ]
[ [ , ] SSL_PORT = SSLPort ]
[ [ , ] AUTH_REALM = { 'realm' | NONE } ]
[ [ , ] DEFAULT_LOGON_DOMAIN = { 'domain' | NONE } ]
[ [ , ] COMPRESSION = { ENABLED | DISABLED } ]
)
<AS TCP_protocol_specific_arguments> ::=
AS TCP (
LISTENER_PORT = listenerPort
[ [ , ] LISTENER_IP = ALL | ( 4-part-ip ) | ( "ip_address_v6" ) ]
)
<FOR SOAP_language_specific_arguments> ::=
FOR SOAP(
[ { WEBMETHOD [ 'namespace' .] 'method_alias'
( NAME = 'database.schema.name'
[ [ , ] SCHEMA = { NONE | STANDARD | DEFAULT } ]
[ [ , ] FORMAT = { ALL_RESULTS | ROWSETS_ONLY | NONE } ]
[ [ , ] BATCHES = { ENABLED | DISABLED } ]
[ [ , ] WSDL = { NONE | DEFAULT | 'sp_name' } ]
[ [ , ] SESSIONS = { ENABLED | DISABLED } ]
[ [ , ] LOGIN_TYPE = { MIXED | WINDOWS } ]
[ [ , ] SESSION_TIMEOUT = timeoutInterval | NEVER ]
[ [ , ] DATABASE = { 'database_name' | DEFAULT }
[ [ , ] NAMESPACE = { 'namespace' | DEFAULT } ]
[ [ , ] SCHEMA = { NONE | STANDARD } ]
[ [ , ] CHARACTER_SET = { SQL | XML } ]
[ [ , ] HEADER_LIMIT = int ]
)
[other payloads are included]
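Putting the pieces together, a minimal sketch of a SOAP endpoint (the
endpoint, site, database and stored procedure names here are all
hypothetical, not from the original notes):
CREATE ENDPOINT sql_soap_endpoint
STATE = STARTED
AS HTTP (
    PATH = '/sql',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR),
    SITE = 'MyServer'
)
FOR SOAP (
    WEBMETHOD 'GetOrders' (NAME = 'FooDb.dbo.GetOrders'),
    BATCHES = DISABLED,
    WSDL = DEFAULT,
    DATABASE = 'FooDb',
    NAMESPACE = 'http://tempuri.org/'
)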
7 layers of security of an HTTP endpoint
* Endpoint type
o TCP
o HTTP
+ responds to either HTTP or HTTPS requests
* Endpoint payload
o SOAP
+ TCP and HTTP
o TSQL
+ TCP only
o SERVICE_BROKER
+ TCP only
o DATABASE_MIRRORING
+ TCP only
* Endpoint state
o STARTED
+ responds to requests
o STOPPED
+ default
+ returns an error to any connection attempt
o DISABLED
+ does not respond to any requests
* Authentication method
o Windows authentication
+ it may be set by specifying the NTLM, KERBEROS, or
NEGOTIATE option
o certificate-based authentication
+ either a certificate from a trusted authority or
Windows certificate may be used
* Encryption
o CLEAR
o SSL
* Login type (SOAP only)
o WINDOWS
o MIXED
* Endpoint permissions
o to allow a login to connect to an endpoint it may be
granted CONNECT permission on this endpoint
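For example (hypothetical endpoint and login names):
-- allow a specific login to connect to the endpoint
GRANT CONNECT ON ENDPOINT::sql_soap_endpoint TO [CORP\AppAccount]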
When creating an endpoint, Integrated authentication (the
AUTHENTICATION clause) may be used. This allows older computers (e.g.
Windows NT 4.0 Workstation) to use NTLM authentication while enabling
newer machines (e.g. Windows XP, Windows 2003) to use the stronger
Kerberos authentication.
Digest authentication is not as strong as NTLM authentication or
Kerberos authentication.
Basic authentication is the weakest.
Kerberos
* Windows 2000 and later - YES
* Windows 98, Windows NT 4.0 - NO
NTLM
* Windows 2000 and later - YES, but Kerberos is more secure
* Windows 98, Windows NT 4.0 - YES
Advanced SOAP payload parameters
* BATCHES - determines whether a connection can issue ad hoc SQL
queries against the endpoint; disabled by default
* SESSIONS - determines whether multiple SOAP request/response
pairs are treated as a single SOAP session; this allows an
application to make multiple calls to the endpoint during a single
session
* DATABASE - by default the connection to the HTTP endpoint uses a
context of the default database for the login; this option allows
you to change the context to the specified database
* SCHEMA (option of the WEBMETHOD clause) - determines whether an
inline XSD schema will be returned for the current Web method in
SOAP responses
o NONE - XSD schema is not returned for SELECT statement
results sent through SOAP
o STANDARD - XSD schema is returned for SELECT statement
results sent through SOAP
o DEFAULT - defaults to the endpoint SCHEMA option setting;
if a schema is not specified or this option is set to DEFAULT, the
SCHEMA option specified for the endpoint determines whether the schema
for the method result is returned
* FORMAT (option of the WEBMETHOD clause) - specifies whether a
row count, error messages and warnings are returned with the result
set
o ALL_RESULTS - returns a result set, a row count and error
messages and warnings in the SOAP response
o ROWSETS_ONLY - returns only the result sets; use this
option with client applications that use the Visual Studio 2005 Web
service proxy class generator, if you want the results returned as a
single dataset (System.Data.Dataset object) and not as an object array
o NONE - suppresses the return of SOAP-specific markup in
the server response; this option can be used as a mechanism to support
applications that have a stored procedure in which the response will
be returned as is, in raw mode, by the server; when this option is in
effect, the application is responsible for returning well-formed XML;
this feature can be used to control the response for a number of
reasons, for example, it could be used to create a stored procedure
that would return a WS-Policy
A single SOAP endpoint may have many Web methods.
ALTER ENDPOINT allows you to add a method to an existing endpoint.
Service Broker
Service Broker is disabled by default.
To enable it:
* create a database master key that will be used as the session
key for all conversations
* execute
ALTER DATABASE <db_name> SET ENABLE_BROKER
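A minimal sketch of the two steps (the database name and password are
placeholders):
USE FooDb
-- the session key for all conversations
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passw0rd'
ALTER DATABASE FooDb SET ENABLE_BROKER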
CREATE MESSAGE TYPE message_type_name
[ AUTHORIZATION owner_name ]
[ VALIDATION = { NONE
| EMPTY
| WELL_FORMED_XML
| VALID_XML WITH SCHEMA COLLECTION
schema_collection_name } ]
* provides a name for a message that is allowed to be sent to an
endpoint
* case sensitive
* often named by using a URL to ensure global uniqueness (e.g.
[http://MEAMI.ORG/CheckIfExists])
* the messages have a data type of varbinary(max)
CREATE CONTRACT contract_name
[ AUTHORIZATION owner_name ]
( { { message_type_name | [ DEFAULT ] }
SENT BY { INITIATOR | TARGET | ANY }
} [ ,...n ] )
* provides a list of message types that are allowed to be used in
a particular conversation
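For example, a request/reply pair of message types and a contract
tying them together (the URL-style names are placeholders):
CREATE MESSAGE TYPE [http://example.org/CheckRequest]
    VALIDATION = WELL_FORMED_XML
CREATE MESSAGE TYPE [http://example.org/CheckReply]
    VALIDATION = WELL_FORMED_XML
CREATE CONTRACT [http://example.org/CheckContract] (
    [http://example.org/CheckRequest] SENT BY INITIATOR,
    [http://example.org/CheckReply] SENT BY TARGET
)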
CREATE QUEUE <object>
[ WITH
[ STATUS = { OFF| ON } [ , ] ]
[ RETENTION = { OFF | ON } [ , ] ]
[ ACTIVATION (
[ STATUS = { ON | OFF } , ]
PROCEDURE_NAME = <procedure> ,
MAX_QUEUE_READERS = max_readers ,
EXECUTE AS { SELF | 'user_name' | OWNER }
) ] ]
[ ON { filegroup | [ DEFAULT ] } ]
* is a storage structure used to store messages that need to be
processed
* physically, it is a table (a hidden table to be precise)
o when an application submits a message, it is appended to
the bottom of the table; when another application retrieves it, it is
deleted from the table (and therefore removed from the queue)
o queues can be backed up, restored, moved between machines,
etc
* STATUS - determines whether the queue is enabled (i.e. whether
messages can be added to and/or removed from the queue)
* RETENTION - determines whether messages are automatically
removed from the queue after they are processed
* ACTIVATION - determines whether a procedure configured in the
PROCEDURE_NAME option will automatically be executed when a new
message arrives; the number of concurrently running procedures depends
on how fast new messages are arriving - if the messages are enqueued
faster than they are dequeued, another copy of the stored procedure is
launched, up to the maximum number configured in the MAX_QUEUE_READERS
option
CREATE SERVICE service_name
[ AUTHORIZATION owner_name ]
ON QUEUE [ schema_name. ]queue_name
[ ( contract_name | [DEFAULT] [ ,...n ] ) ]
* provides an abstraction layer for applications; it is tied to a
queue and restricts the types of messages that are allowed based on
contracts it is defined to use
* for effective communication to occur, two services are needed -
one for the initiator and one for the target
MESSAGE TYPE MESSAGE TYPE MESSAGE TYPE QUEUE
|______________| | |
| | |
CONTRACT CONTRACT |
|________________________|_______________|
|
SERVICE
BEGIN DIALOG [ CONVERSATION ] @dialog_handle
FROM SERVICE initiator_service_name
TO SERVICE 'target_service_name'
[ , { 'service_broker_guid' | 'CURRENT DATABASE' } ]
[ ON CONTRACT contract_name ]
[ WITH
[ { RELATED_CONVERSATION = related_conversation_handle
| RELATED_CONVERSATION_GROUP = related_conversation_group_id } ]
[ [ , ] LIFETIME = dialog_lifetime ]
[ [ , ] ENCRYPTION = { ON | OFF } ] ]
* conversations provide reliable processing of messages, even
across transactions, server restarts or disasters
* to ensure that messages are processed in the same order they are
sent (no matter in what order they are received) each message has a
sequence number
* if a message does not reach the endpoint, Service Broker
re-sends it until it is delivered
* if the dialog is not explicitly ended at both the initiator and
the target before the LIFETIME (s) time expires, an error is returned
and any open processing is rolled back
* the @dialog_handle has a data type of uniqueidentifier
CREATE ROUTE route_name
[ AUTHORIZATION owner_name ]
WITH
[ SERVICE_NAME = 'service_name', ]
[ BROKER_INSTANCE = 'broker_instance_identifier' , ]
[ LIFETIME = route_lifetime , ]
ADDRESS = 'next_hop_address'
[ , MIRROR_ADDRESS = 'next_hop_mirror_address' ]
* when a service sends a message over a dialog, Service Broker
uses routes to locate the service to receive the message; when that
service responds, Service Broker then uses routes to locate the
initiator service
SEND
ON CONVERSATION conversation_handle
[ MESSAGE TYPE message_type_name ]
[ ( message_body_expression ) ]
RECEIVE [ TOP ( n ) ]
<column_specifier> [ ,...n ]
FROM <queue>
[ INTO table_variable ]
[ WHERE { conversation_handle = conversation_handle
| conversation_group_id = conversation_group_id } ]
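A minimal end-to-end sketch (service, contract and queue names are
hypothetical and assumed to already exist):
-- initiator: open a dialog and send a message
DECLARE @h uniqueidentifier
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [InitiatorService]
    TO SERVICE 'TargetService'
    ON CONTRACT [http://example.org/CheckContract]
    WITH ENCRYPTION = OFF
SEND ON CONVERSATION @h
    MESSAGE TYPE [http://example.org/CheckRequest]
    (N'<request>check</request>')
-- target: pick the message up from the queue bound to TargetService
RECEIVE TOP (1) conversation_handle, message_type_name, message_body
FROM TargetQueue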
Full-text search
Databases created using the CREATE DATABASE command have full-text
enabled by default.
However, in case of databases created through SSMS you have to:
* check the Use full-text indexing box (in database properties,
Files pane) or
* execute sp_fulltext_database 'enable'
to enable full-text.
http://lab.msdn.microsoft.com/productfeedback/viewfeedback.aspx?feedbackid=24d1edf0-3e4c-4bac-bc6e-51b143ca5322
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=261240&SiteID=1
Full-text index population modes
* full population
o typically occurs when a full-text catalog or full-text
index is first populated
o during a full population of a full-text catalog, index
entries are built for all the rows in all the tables covered by the
catalog; if a full population is requested for a table, index entries
are built for all the rows in that table
* change tracking-based population (update population)
o SQL Server maintains a record of the rows that have been
modified in a table that has been set up for full-text indexing and
these changes are propagated to the full-text index
o the changes can be propagated:
+ manually (on a schedule by using SQL Server Agent,
or by propagating them yourself)
+ automatically as they occur
* incremental timestamp-based population
o incremental population updates the full-text index for
rows added, deleted, or modified after the last population, or while
the last population was in progress
o the requirement for incremental population is that the
indexed table must have a column of the timestamp data type; a request
for incremental population on a table without a timestamp column
results in a full population operation
o incremental population requests are also implemented as
full populations if any metadata that affects the full-text index for
the table has changed since the last population - this includes
altering any column, index, or full-text index definitions
o at the end of a population, the SQL Gatherer records a new
timestamp value; this value is equal to the largest timestamp value
that the SQL Gatherer has seen; this value is what will be used when a
subsequent incremental population starts
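The population mode can be controlled per index; for example
(hypothetical table name):
-- kick off a one-time full population
ALTER FULLTEXT INDEX ON dbo.Documents START FULL POPULATION
-- or let change tracking propagate changes automatically
ALTER FULLTEXT INDEX ON dbo.Documents SET CHANGE_TRACKING AUTO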
Query operators
* CONTAINS
o a predicate function used to search columns containing
character-based data types for precise or fuzzy (less precise) matches
to single words and phrases, the proximity of words within a certain
distance of one another, or weighted matches
o it can search for:
+ a word or phrase
+ the prefix of a word or phrase
+ a word near another word
+ a word inflectionally generated from another (for
example, the word drive is the inflectional stem of drives, drove,
driving, and driven)
+ a word that is a synonym of another word using a
thesaurus (for example, the word metal can have synonyms such as
aluminum and steel)
o this operator has many different options (FORMSOF,
ISABOUT, WEIGHT, NEAR)
o examples:
+ returns all products that contain either the phrase
"Mountain" or "Road"
SELECT Name
FROM Production.Product
WHERE CONTAINS(Name, ' "Mountain" OR "Road" ')
+ returns all product names with at least one word
starting with the prefix "Chain" in the Name column
SELECT Name
FROM Production.Product
WHERE CONTAINS(Name, ' "Chain*" ')
* FREETEXT
o a predicate function used to search columns containing
character-based data types for values that match the meaning and not
the exact wording of the words in the search condition
o when it is used, the full-text query engine internally
performs the following actions on the freetext_string, assigns each
term a weight, and then finds the matches:
+ separates the string into individual words based on
word boundaries (word-breaking)
+ generates inflectional forms of the words (stemming)
+ identifies a list of expansions or replacements for
the terms based on matches in the thesaurus
o examples:
+ searches for all documents containing the words
related to "vital", "safety", "components"
SELECT Title
FROM Production.Document
WHERE FREETEXT (Document, 'vital safety components')
* CONTAINSTABLE
o a rowset function returning a table of zero, one, or more
rows for those columns containing character-based data types for
precise or fuzzy (less precise) matches to single words and phrases,
the proximity of words within a certain distance of one another, or
weighted matches
o examples:
+ searches for all product names containing the words
"breads", "fish", or "beers", and different weightings are given to
each word; for each returned row matching this search criteria, the
relative closeness (ranking value) of the match is shown; the first
parameter of CONTAINSTABLE is a table and the second a column
SELECT FT_TBL.CategoryName, FT_TBL.Description,
KEY_TBL.RANK
FROM Categories AS FT_TBL
INNER JOIN CONTAINSTABLE(Categories, Description,
'ISABOUT (breads weight (.8), fish weight (.4),
beers weight (.2) )' ) AS KEY_TBL
ON FT_TBL.CategoryID = KEY_TBL.[KEY]
ORDER BY KEY_TBL.RANK DESC
* FREETEXTTABLE
o a rowset function returning a table of zero, one, or more
rows for those columns containing character-based data types for
values that match the meaning, but not the exact wording, of the text
in the specified freetext_string
o examples:
+ returns the category name and description of all
categories that relate to "sweet", "candy", "bread", "dry", or "meat"
SELECT FT_TBL.CategoryName, FT_TBL.Description,
KEY_TBL.RANK
FROM dbo.Categories AS FT_TBL
INNER JOIN FREETEXTTABLE(dbo.Categories,
Description,
'sweetest candy bread and dry meat') AS KEY_TBL
ON FT_TBL.CategoryID = KEY_TBL.[KEY]
FREETEXT/FREETEXTTABLE is a less precise way of querying full-text
data because it automatically searches for all forms and synonyms of a
word or words.
CONTAINS/CONTAINSTABLE allows a precise specification for a query,
including the capability to search by word proximity, weighting, and
complex pattern matching.
Full-text catalogs are stored in a directory structure external to the
database. However, they must be associated with a filegroup (which
must have at least one active file) for backup and recovery purposes -
creating backups of full-text catalogs and restoring them using BACKUP
and RESTORE statements is a new feature of SQL Server 2005.
http://technet.microsoft.com/en-us/library/ms142511.aspx
CREATE FULLTEXT CATALOG catalog_name
[ON FILEGROUP filegroup ]
[IN PATH 'rootpath']
[WITH ACCENT_SENSITIVITY = {OFF|ON}]
[AS DEFAULT]
[AUTHORIZATION owner_name ]
CREATE FULLTEXT INDEX ON table_name
[(column_name [TYPE COLUMN type_column_name] [LANGUAGE
language_term] [,...n])]
KEY INDEX index_name
[ON fulltext_catalog_name]
[WITH {CHANGE_TRACKING {MANUAL | AUTO | ON [, POPULATION]}} ]
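For example (table, key index and catalog names are hypothetical):
CREATE FULLTEXT CATALOG DocsCatalog AS DEFAULT
CREATE FULLTEXT INDEX ON dbo.Documents
    (DocumentSummary LANGUAGE 1033)   -- 1033 = US English
    KEY INDEX PK_Documents
    ON DocsCatalog
    WITH CHANGE_TRACKING AUTO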
Server administration
Before starting an in-place upgrade (to SQL Server 2005) process you
should:
* make backup copies of the databases
* reserve enough disk space
* disable all startup stored procedures (sp_procoption
'indRebuild', 'startup', 'true') and SQL Server Agent jobs
* stop replication
* run the SQL Server Upgrade Advisor - it is a tool that produces
a list of items that must be addressed before and after performing the
upgrade; this list is specific to the existing installation
Log shipping stops functioning after upgrading a SQL Server 2000 log
shipping configuration (i.e. 2 or more SQL Server 2000 machines with
configured log shipping). After the upgrade log shipping has to be
configured from scratch.
The default SQL Server instance is named MSSQLSERVER (although it is
not a named instance).
SQL Server 6.5 cannot be upgraded to SQL Server 2005.
SQL Server 7.0 must have SP4 installed before it can be upgraded to
SQL Server 2005.
To install SQL Server 2005 on Windows 2000 you must first install
Windows 2000 Service Pack 4.
SQL Server 2005 also requires:
* Internet Explorer 6.0
* .NET Framework 2.0
.NET Framework 2.0 is automatically installed with all versions of SQL
Server 2005 except for Express Edition.
.NET Framework 1.0 and 1.1 can be upgraded to .NET Framework 2.0.
.NET Framework 1.2 must be uninstalled before .NET Framework 2.0 can
be installed.
Protocols
* SQL Server does not support IPX/SPX (a NetWare protocol); newer
versions of NetWare support TCP/IP
* the VIA (Virtual Interface Adapter) protocol can only be used by
VIA hardware
* Shared Memory can only be used on the local computer
* by default, clients (I guess this is about Windows clients --
chopeen) have TCP and Named Pipes as available protocols
* of the three key network libraries, TCP/IP is the fastest and
Multi-Protocol is the slowest; because of the speed advantage, you
will want to use TCP/IP on both your servers and clients.
ALTER SCHEMA is used to transfer objects between schemas.
Database states
* ONLINE - Database is available for access. The primary filegroup
is online, although the undo phase of recovery may not have been
completed.
* OFFLINE - Database is unavailable. A database becomes offline by
explicit user action (from SSMS or ALTER DATABASE database_name SET
OFFLINE) and remains offline until additional user action is taken.
For example, the database may be taken offline in order to move a file
to a new disk. The database is then brought back online after the move
has been completed. The database cannot be modified while it is
offline.
* RESTORING - One or more files of the primary filegroup are being
restored, or one or more secondary files are being restored offline.
The database is unavailable.
* RECOVERING - Database is being recovered. The recovering process
is a transient state; the database will automatically become online if
the recovery succeeds. If the recovery fails, the database will become
suspect. The database is unavailable.
* RECOVERY PENDING - SQL Server has encountered a resource-related
error during recovery. The database is not damaged, but files may be
missing or system resource limitations may be preventing it from
starting. The database is unavailable. Additional action by the user
is required to resolve the error and let the recovery process be
completed.
* SUSPECT - At least the primary filegroup is suspect and may be
damaged. The database cannot be recovered during startup of SQL
Server. The database is unavailable. Additional action by the user is
required to resolve the problem.
* EMERGENCY - User has changed the database and set the status to
EMERGENCY. The database is in single-user mode and may be
repaired or restored. The database is marked READ_ONLY, logging is
disabled, and access is limited to members of the sysadmin fixed
server role. EMERGENCY is primarily used for troubleshooting purposes.
For example, a database marked as suspect may be set to the EMERGENCY
state. This could permit the system administrator read-only access to
the database. Only members of the sysadmin fixed server role can set a
database to the EMERGENCY state (ALTER DATABASE database_name SET
EMERGENCY).
DAC
* to establish a DAC (Dedicated Administrator Connection)
o SSMS - type ADMIN: before the server name/IP address
o sqlcmd utility - use the -A option
* by default, only local DACs are allowed (use sp_configure
'remote admin connections', 1 to change it) - see the sketch below
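A sketch of both steps (the server name is a placeholder):
-- allow remote DACs (run as sysadmin)
EXEC sp_configure 'remote admin connections', 1
RECONFIGURE
-- then connect from the command line over the DAC:
-- sqlcmd -S ADMIN:MyServer -E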
EDITION            | MEMORY     | MEMORY     | CPUs     | DATABASE SIZE
                   | (32-bit)   | (64-bit)   |          |
-------------------|------------|------------|----------|--------------
Enterprise Edition | OS maximum | OS maximum | No limit | No limit
Developer Edition  | OS maximum | 32 TB      | No limit | No limit
Standard Edition   | OS maximum | 32 TB      | 4        | No limit
Workgroup Edition  | 3 GB       | N/A        | 2        | No limit
Express Edition    | 1 GB       | N/A        | 1        | 4 GB
(!) Express Edition and Workgroup Edition are not supported on 64-bit
servers.
Built-in accounts
ACCOUNT         | LOCAL COMPUTER RESOURCES | NETWORK RESOURCES
----------------|--------------------------|----------------------------
Local System    | All                      | All
Local Service   | Limited                  | Null session with anonymous
                |                          | authentication
Network Service | Limited                  | Yes (like Local System?)
Copy Database Wizard
* SSIS must be installed on both the source and destination
servers
* supports two methods
o option Use the detach and attach method
+ detaches the database from the source server, copies
the database files (.mdf, .ndf, and .ldf) to the destination server,
and attaches the database at the destination server
+ this method is usually the faster method because the
principal work is reading the source disk and writing the destination
disk; no SQL Server logic is required to create objects within the
database, or create data storage structures
+ the database is unavailable to users during the
transfer
o option Use the SQL Management Object method
+ reads the definition of each database object on the
source database and creates each object in the destination database;
then the data is transferred from the source table to the destination
table, recreating indexes and metadata
+ moves the full-text catalog but it does not
repopulate the index
+ database users can continue to access the database
during the transfer
Command prompt utilities (a selection)
* dta - command prompt version of Database Engine Tuning Advisor;
the dta utility is designed to allow you to use Database Engine Tuning
Advisor functionality in applications and scripts
* dtexec - used to configure and execute SSIS packages; a user
interface version of this command prompt utility is called DTExecUI,
which brings up the Execute Package Utility
* profiler90 - used to start SQL Server Profiler from a command
prompt
* sac - used to import or export surface area configuration
settings between instances of SQL Server 2005
* sqlmaint - used to execute database maintenance plans created in
previous versions of SQL Server
A filegroup can be read-only.
The TRUSTWORTHY database property
* ALTER DATABASE ... SET TRUSTWORTHY { OFF | ON }
o ON - database modules (for example, user-defined functions
or stored procedures) that use an impersonation context can access
resources outside the database
o OFF - database modules in an impersonation context cannot
access resources outside the database; default
* TRUSTWORTHY is set to OFF whenever the database is attached
* by default, all system databases except the msdb database have
TRUSTWORTHY set to OFF
http://msdn2.microsoft.com/en-us/library/ms187861.aspx
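For example (hypothetical database name):
ALTER DATABASE FooDb SET TRUSTWORTHY ON
-- verify the setting
SELECT name, is_trustworthy_on FROM sys.databases WHERE name = 'FooDb'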
Performance
According to Microsoft, duration of query execution is an irrelevant
factor in tuning a query; getting a query to run faster means reducing
the amount of resources (CPU, memory, disk I/O) that it uses.
The main view of Activity Monitor has the following columns useful
when analyzing blocking problems:
* Open Transactions - number of open transactions for the process
* Blocking - indicates whether the process is blocking others (1 -
yes, 0 - no)
* Blocked By - SPID of a blocking process
* Wait Time - current wait time in ms
* Wait Type - the name of the last (?) or current wait type
* Resource - tells what resource is locked (?)
The additional view allows you to analyze locks by process and locks
by object.
SQL Server Profiler options
* Enable file rollover - automatically create new files when the
maximum file size is reached
* Server processes trace data - the data is processed by the
service that is running the trace instead of the client application;
when the server processes trace data, no events are skipped even under
stress conditions, but server performance may be affected
SQL Server Profiler events
* Locks\Lock:Deadlock Graph - this event class populates the
TextData data column in the trace with XML data about the process and
objects that are involved in the deadlock
* Locks\Lock:Deadlock Chain - this event class is produced for
each participant in a deadlock
* Auto Stats - this event class indicates that an automatic
updating of index and column statistics has occurred
Performance Monitor counters
* SQL Server:Memory Manager\Maximum Workspace Memory (KB) - memory
granted to executing processes (used primarily for sorting, hashing
and index creation operations)
* SQL Server:Plan Cache\Cache Pages - memory allocated to the plan
cache
* SQL Server:Memory Manager\Total Server Memory - memory granted
to the SQL Server instance
* System\Processor Queue Length - number of the threads in the
processor queue waiting to be executed
DISKIO_SUSPEND wait type occurs when a task is waiting to access a
file when an external backup is active. This is reported for each
waiting user process. A count larger than five per user process may
indicate that the external backup is taking too much time to finish.
Dynamic management views & functions
Naming conventions - DMV and DMF prefixes
* dm_db_* - general database statistics
* dm_exec_* - query statistics
* dm_io_* - I/O statistics
* dm_os_* - hardware-level information
Most important DMVs and DMFs
* database statistics
o sys.dm_db_index_usage_stats
+ core statistics about each index - number of seeks,
scans, lookups, updates, etc
+ shows unused indexes (!)
o sys.dm_db_index_operational_stats
+ current I/O statistics related to locking, latching
and access to the index
o sys.dm_db_index_physical_stats
+ row size and fragmentation information
o sys.dm_db_missing_index_*
+ these views show indexes that could be created and
would be beneficial for the executed queries
* query statistics
o sys.dm_exec_sessions
+ similar to sp_who2
o sys.dm_exec_requests
+ each session in SQL Server will normally be
executing a single request; however, it is possible for a single SPID
to spawn multiple requests
+ can be used to diagnose blocking
+ shows data that can be divided in four categories:
# query settings
# query execution
# transactions
# resource allocation
o sys.dm_exec_query_stats
+ statistics related to the performance of a query
o sys.dm_exec_cached_plans
+ information about cached query execution plans
o sys.dm_exec_sql_text
+ takes a parameter of an SQL handle and returns the
query that was executed in text format
o sys.dm_exec_query_plans
+ takes a parameter of a plan handle and returns an
XML showplan
* I/O statistics
o sys.dm_io_virtual_file_stats
+ physical I/O statistics for each database file
o sys.dm_io_pending_io_requests
+ information about every pending I/O request
* hardware statistics
o sys.dm_os_performance_counters
+ provides all the counters that a SQL Server instance
exposes
o sys.dm_os_wait_stats
+ returns information about the waits encountered by
threads that are in execution
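As an illustration, a typical blocking-diagnosis query combining two
of the above (this exact query is a sketch, not from the original
notes):
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0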
Views, functions, stored procedures, triggers
The sp_recompile system stored procedure forces a recompile of a
stored procedure the next time it is run (used when for example a new
index has been created).
WITH RECOMPILE option:
* creating a stored procedure that specifies the WITH RECOMPILE
option in its definition indicates that SQL Server does not cache a
plan for this stored procedure; the stored procedure is recompiled
each time it is executed
* you can force the stored procedure to be recompiled by
specifying the WITH RECOMPILE option when you execute the stored
procedure; use this option only if the parameter you are supplying is
atypical or if the data has significantly changed since the stored
procedure was created
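For example (the procedure name and parameter are hypothetical):
-- mark the procedure for recompilation on its next execution
EXEC sp_recompile 'dbo.GetOrders'
-- or force a one-off recompile for an atypical parameter value
EXEC dbo.GetOrders @CustomerId = 42 WITH RECOMPILE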
A multi-statement table-valued UDF (but not an inline table-valued
UDF) can contain multiple SELECT statements and can be referenced in a
FROM clause of a T-SQL statement.
A CLR function cannot be referenced in a FROM clause of a T-SQL
statement.
Multi-statement table-valued UDFs are sometimes called simply
table-valued UDFs.
Inline table-valued UDFs are sometimes called inline UDFs.
CREATE VIEW ... WITH CHECK OPTION
* INSERT, UPDATE, DELETE, bcp.exe (!), BULK INSERT (!) operations
can occur only on the set of rows that match the criteria in the
view's WHERE clause (data cannot 'disappear' from the view after the
modification)
* a view can be updateable without this option
Trigger types
* DML triggers execute when a user tries to modify data through
INSERT, UPDATE, or DELETE statements on a table or view
* DDL triggers execute in response to a variety of data definition
language (DDL) events - CREATE, ALTER, and DROP statements, and
certain system stored procedures that perform DDL-like operations
* Logon triggers fire in response to the LOGON event that is
raised when a user session is being established
CLR procedures should be used instead of T-SQL stored procedures for:
* calculation-intensive operations (like calculating mortgage
payments and amortization schedules)
* string manipulation
A view must meet many requirements before an index can be created on
it (and it becomes an indexed view) - some of them are:
* the view must not reference any other views, only base tables
* all base tables referenced by the view must be in the same
database as the view and have the same owner as the view
* the view must be created with the SCHEMABINDING option
* user-defined functions referenced in the view must have been
created with the SCHEMABINDING option
* all functions referenced by expressions in the view must be
deterministic
* tables and user-defined functions must be referenced by two-part
names in the view; one-part, three-part, and four-part names are not
allowed
The following requirements don't have to be met:
* the view doesn't have to be created with WITH CHECK OPTION
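A minimal sketch of an indexed view satisfying the requirements above
(table and column names are hypothetical; Amount is assumed NOT NULL):
CREATE VIEW dbo.OrderTotals
WITH SCHEMABINDING
AS
SELECT CustomerId,
       COUNT_BIG(*) AS OrderCount,   -- required for a grouped view
       SUM(Amount) AS Total
FROM dbo.Orders                      -- two-part name, base table only
GROUP BY CustomerId
GO
-- the first index on a view must be unique and clustered
CREATE UNIQUE CLUSTERED INDEX IX_OrderTotals
ON dbo.OrderTotals (CustomerId)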
All CLR user-defined types (UDTs) are required to implement a method
called ToString that returns the value of the type formatted as a
string.
BEGIN TRAN
BEGIN TRY
UPDATE dbo.FOO
SET BAR = 1
END TRY
BEGIN CATCH
ROLLBACK TRAN
RETURN
END CATCH
COMMIT TRAN
RETURN
* exits unconditionally from a query or procedure
* RETURN is immediate and complete and can be used at any point to
exit from a procedure, batch, or statement block
* statements that follow RETURN are not executed
Security, permissions
Credentials (found in SSMS under <server>\Security\Credentials)
provide a way to allow SQL Server Authentication users to have an
identity outside of SQL Server. This is primarily used to execute code
in Assemblies with EXTERNAL_ACCESS permission set. Credentials can
also be used when a SQL Server Authentication user needs access to a
domain resource, such as a file location to store a backup.
A credential can be mapped to several SQL Server logins at the same
time. A SQL Server login can only be mapped to one credential at a
time. After a credential is created, use the Login Properties (General
Page) to map a login to a credential.
Adding a login to linked server
sp_addlinkedsrvlogin [ @rmtsrvname = ] 'rmtsrvname'
[ , [ @useself = ] 'TRUE' | 'FALSE' | 'NULL']
[ , [ @locallogin = ] 'locallogin' ]
[ , [ @rmtuser = ] 'rmtuser' ]
[ , [ @rmtpassword = ] 'rmtpassword' ]
[ @useself = ] 'TRUE' | 'FALSE' | 'NULL' <-- this is the Impersonate
checkbox in SSMS
* determines whether to connect to rmtsrvname by impersonating
local logins or explicitly submitting a login and password (mapped
logins); the data type is varchar(8), with a default of TRUE
* a value of TRUE specifies that logins use their own credentials
to connect to rmtsrvname (impersonation), with the rmtuser and
rmtpassword arguments being ignored; FALSE specifies that the rmtuser
and rmtpassword arguments are used to connect to rmtsrvname for the
specified locallogin; if rmtuser and rmtpassword are also set to NULL,
no login or password is used to connect to the linked server
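For example, mapping all local logins to one fixed remote login
(server name and credentials are placeholders):
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'REMOTESRV',
    @useself = 'FALSE',
    @locallogin = NULL,        -- NULL = applies to all local logins
    @rmtuser = 'remote_user',
    @rmtpassword = 'remote_password'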
The following error during logon process to SQL server using a SQL
Server login:
Login failed for user 'username'.
The user is not associated with a trusted SQL Server connection.
(Microsoft SQL Server, Error: 18452)
means that SQL Server is configured to operate in Windows
Authentication mode and does not allow the use of SQL Server logins.
CREATE ASSEMBLY assembly_name
[ AUTHORIZATION owner_name ]
FROM { <client_assembly_specifier> | <assembly_bits> [ ,...n ] }
[ WITH PERMISSION_SET = { SAFE | EXTERNAL_ACCESS | UNSAFE } ]
CREATE ASSEMBLY assembly_name
* uploads an assembly that was previously compiled as a DLL file
from managed code for use inside an instance of SQL Server
[ WITH PERMISSION_SET = { SAFE | EXTERNAL_ACCESS | UNSAFE } ]
* specifies a set of code access permissions that are granted to
the assembly when it is accessed by SQL Server
o SAFE (default) - the most restrictive permission set; code
executed by an assembly with SAFE permissions cannot access external
system resources such as files, the network, environment variables, or
the registry
o EXTERNAL_ACCESS - enables assemblies to access certain
external system resources such as files, networks, environmental
variables, and the registry (restricted by SQL Server account
permissions unless the code explicitly impersonates the caller)
o UNSAFE - enables assemblies unrestricted access to
resources, both within and outside an instance of SQL Server; code
running from within an UNSAFE assembly can call unmanaged code
o from a security perspective, EXTERNAL_ACCESS and UNSAFE
assemblies are identical; however, EXTERNAL_ACCESS assemblies provide
various reliability and robustness protections that are not in UNSAFE
assemblies; specifying UNSAFE allows the code in the assembly to
perform illegal operations against the SQL Server process space and
hence can potentially compromise the robustness and scalability of SQL
Server
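For example (the path and assembly name are hypothetical):
CREATE ASSEMBLY MyUtilities
FROM 'C:\assemblies\MyUtilities.dll'
WITH PERMISSION_SET = SAFE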
To create a stored procedure a user needs:
* the CREATE PROCEDURE permission on the database level
* the ALTER permission for the relevant schema
* the SELECT permission for the tables from which data will be
drawn
To use bcp.exe to import data a user needs:
* the SELECT permission on the table that they want to load
* the INSERT permission on the table that they want to load
* sometimes the ALTER TABLE permission (importing identity values
with the -E option OR the table has constraints and constraint
checking is disabled OR the table has triggers and trigger execution
is disabled)
Any user who can create a database can create a database snapshot.
Password policies for SQL Server logins
* SQL Server 2005 must be running on Windows Server 2003 to use
Windows password policy mechanisms
* the following policies can be used:
o password complexity requirements
o password expiration
o users can be forced to change their password at next logon
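For example, a login that enforces the policies above (the name and
password are placeholders):
CREATE LOGIN foo WITH PASSWORD = 'b@r123!X' MUST_CHANGE,
    CHECK_POLICY = ON,
    CHECK_EXPIRATION = ON   -- required when MUST_CHANGE is used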
-- WITH PASSWORD cannot be omitted
CREATE LOGIN foo WITH PASSWORD = 'bar123'
-- these 3 snippets are equivalent (on condition that a 'foo' login
-- exists)
CREATE USER foo
CREATE USER foo FOR LOGIN foo
CREATE USER foo FROM LOGIN foo
Backup & restore
RESTORE modes
RECOVERY
* default mode
* indicates that roll back should be performed after roll forward
is completed for the current backup
NORECOVERY
* specifies that roll back does not occur; this allows roll
forward to continue with the next statement in the sequence (next log
backup restored)
STANDBY = {standby_file_name | @standby_file_name_var }
* leaves the database in a standby state, in which the database is
available for limited read+only access
* the roll back occurs (so the database is in a state like after
RECOVERY), but the undo actions are saved in a standby file so that
recovery effects can be reverted (so the database can be reverted to a
state like after NORECOVERY)
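A typical restore sequence sketch (database and file names are
placeholders):
RESTORE DATABASE FooDb FROM DISK = 'E:\backup\FooDb_full.bak'
    WITH NORECOVERY
RESTORE LOG FooDb FROM DISK = 'E:\backup\FooDb_log1.trn'
    WITH NORECOVERY
-- last log backup: recover and bring the database online
RESTORE LOG FooDb FROM DISK = 'E:\backup\FooDb_log2.trn'
    WITH RECOVERY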
"The transaction log backup must be set to be automatically truncated"
means that the database should be in the Simple recovery mode. (?)
Transaction log backup options
* NORECOVERY - backs up the tail of the log and leaves the
database in the RESTORING state; NORECOVERY is useful when failing
over to a secondary database or when saving the tail of the log before
a RESTORE operation; to perform a best-effort log backup that skips
log truncation and then take the database into the RESTORING state
atomically, use the NO_TRUNCATE and NORECOVERY options together
* STANDBY = standby_file_name - backs up the tail of the log and
leaves the database in a read-only and STANDBY state; the STANDBY
clause writes standby data (performing rollback, but with the option
of further restores); using the STANDBY option is equivalent to BACKUP
LOG WITH NORECOVERY followed by a RESTORE WITH STANDBY
* NO_TRUNCATE - specifies that the log not be truncated and causes
the Database Engine to attempt the backup regardless of the state of
the database; consequently, a backup taken with NO_TRUNCATE might have
incomplete metadata; this option allows backing up the log in
situations where the database is damaged; the NO_TRUNCATE option of
BACKUP LOG is equivalent to specifying both COPY_ONLY and
CONTINUE_AFTER_ERROR
Neither backing up the log file nor truncation of the log file reduces
its physical size; these operations only reduce the logical size
(clear the log file). To reduce the log files to a specified size DBCC
SHRINKFILE must be used.
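For example (database, logical file name and target size are
placeholders):
BACKUP LOG FooDb TO DISK = 'E:\backup\FooDb_log.trn'
DBCC SHRINKFILE (FooDb_log, 100)   -- target size in MB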
SQL Server Agent, jobs
Proxy accounts are used to grant access to subsystems (resources; e.g.
files) that will be accessed within a job step such as SSIS, CmdExec
and replication.
Both SQL Server logins and Windows logins (although some Windows
logins can access some resources because of the operating system
permissions) are added to a proxy account to be granted some
permission and then this proxy account is assigned to a job step (Run
as combo box).
Besides its main function which is executing jobs, SQL Server Agent
can also send alerts.
There are three types of alerts:
* SQL Server event alert - it is sent when a defined error (error
number OR error severity; optionally message text can be defined)
occurs
* SQL Server performance condition event alert - it is sent when a
defined SQL Server counter falls below / becomes equal to / rises
above a defined value
* WMI event alert - it is sent when a WMI query returns some
result (I didn't find an exact explanation of this type of alert. --
chopeen)
Alerts are only raised by errors/messages generated by SQL Server and
SQL Server applications that are sent to the Windows application log -
these errors/messages are:
* Severity 19 or higher errors
* Any RAISERROR statement invoked with WITH LOG syntax
* Any error modified or created using sp_altermessage
* Any event logged using xp_logevent (but @database_name for the
alert must be master)
There are two possible responses:
* executing a job
* notifying operator(s)
o e-mail
o pager
o net send
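A sketch of defining an alert and a notification (alert and operator
names are hypothetical):
EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 019 errors',
    @severity = 19
EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Severity 019 errors',
    @operator_name = N'DBA team',
    @notification_method = 1    -- 1 = e-mail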
A job can be configured to run when the CPU is idle. In the SQL Server
Agent Properties dialog box (in SSMS) there is a section that allows
you to define the threshold (%) and the length of time (s) that the
SQL Server Agent uses to determine if the CPU is idle.
sp_help_jobactivity - lists information about the runtime state of SQL
Server Agent jobs (like Activity Monitor)
sp_help_jobhistory - returns a report with the history of the
specified scheduled jobs; if no parameters are specified, the report
contains the history for all scheduled jobs
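For example (the job name is a placeholder):
EXEC msdb.dbo.sp_help_jobactivity
EXEC msdb.dbo.sp_help_jobhistory @job_name = N'Nightly backup'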
Log shipping
If both servers involved in log shipping have identical disk
configuration and a personalized initialization of a secondary
database is not required, the simplest way to initialize the secondary
database is to use the Yes, Generate A Full Backup Of The Primary
Database And Restore It Into The Secondary Database (And Create The
Secondary Database If It Doesn't Exist) option.
However, there is a Restore Options button that allows you to change
the location of database files on the secondary server.
http://msdn2.microsoft.com/en-us/library/ms189970.aspx
When initializing the secondary database it must be restored in either
NORECOVERY or STANDBY mode.
Log shipping's STANDBY mode can be configured to:
* disconnect the users (from the secondary database) during every
restore
* not disconnect them (but that means that every restore will
fail until all users disconnect from the database)
Log shipping supports multiple secondary databases in a single
configuration.
When a monitor server is added to a log shipping configuration, the
monitor server cannot be changed. If the monitor server has to be
changed, log shipping must first be removed (this task must be
performed on the primary server).
Log shipping requires SQL Server Standard Edition, SQL Server
Workgroup Edition, or SQL Server Enterprise Edition on all server
instances involved in log shipping.
To use log shipping the primary database must be in either the Full or
Bulk-Logged recovery mode.
http://msdn2.microsoft.com/en-us/library/ms188698.aspx
Database mirroring
The principal and mirror server instances cannot be the same instance
of SQL Server.
Fully-qualified TCP addresses must be used when configuring mirroring,
e.g. TCP://MEAMI.org:80.
Database mirroring operating modes
* High Availability
o durable, synchronous data transfer - a transaction is not
considered committed until SQL Server has successfully committed it to
the transaction log on both the principal and the mirror database
o requires a witness server
o provides automatic failure detection and failover
* High Performance
o asynchronous data transfer - a transaction is committed to
the principal server and then a separate process sends it to the
mirror
o does not require a witness server
o does not provide automatic failure detection and failover;
provides a warm standby configuration
* High Protection
o the same as the High Availability operating mode but
without a witness server, so there is no automatic failover
The database has to be manually backed up on the principal server and
then restored (with the NORECOVERY option) on the mirror server before
mirroring can be started - required steps (after configuring the
servers, creating endpoints, etc):
* ensure that the primary database is in the Full recovery mode;
if not, set the proper mode,
* back up the primary database,
* restore the full backup on the mirror server and do not recover
it.
It is not necessary to back up the tail of the transaction log or
apply any logs to the mirror.
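A sketch of the sequence (database name, paths and partner addresses
are placeholders; the mirroring endpoints are assumed to be already
created):
-- on the principal:
ALTER DATABASE FooDb SET RECOVERY FULL
BACKUP DATABASE FooDb TO DISK = 'E:\backup\FooDb.bak'
-- on the mirror:
RESTORE DATABASE FooDb FROM DISK = 'E:\backup\FooDb.bak'
    WITH NORECOVERY
ALTER DATABASE FooDb SET PARTNER = 'TCP://principal.example.com:5022'
-- back on the principal:
ALTER DATABASE FooDb SET PARTNER = 'TCP://mirror.example.com:5022'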
Database mirroring is better than failover clustering, because:
* database mirroring failover is faster (1-3 s) than failover
clustering one (10-15 s)
* a cluster failover requires a restart of a SQL Server instance
which causes all caches to start empty while database mirroring
contains a technology that enables the cache on the mirror to be
maintained in a semihot state
* failover clustering configuration requires a quorum resource - a
storage device that stores the cluster configuration and state data
and has to be available for every node in a cluster
* a clustered SQL Server 2005 configuration can only host one
instance per logical disk (a single SQL Server 2005 database server
can host up to 50 instances)
By leveraging the capabilities of MDAC libraries that ship with Visual
Studio 2005 it is possible to create applications with the transparent
client redirect capabilities.
Database mirroring is supported in SQL Server Standard Edition and
Enterprise Edition.
Both servers in the database mirroring configuration must be running
the same version of SQL Server 2005.
The witness server instance can run on SQL Server Enterprise Edition,
Standard Edition, Workgroup Edition, or Express Edition.
Only databases in full recovery mode can be mirrored.
Database mirroring works with any supported database compatibility
level.
Log shipping configuration can have a monitor server, whereas database
mirroring configuration can include a witness server.
Manually failing over a database mirroring session at the principal
* ALTER DATABASE <database_name> SET PARTNER FAILOVER
or
SSMS
* a failover at the principal is usually forced when maintenance
tasks have to be performed on the principal
Manually failing over a database mirroring session at the mirror
* ALTER DATABASE <mirror_database> SET PARTNER
FORCE_SERVICE_ALLOW_DATA_LOSS
or
this cannot be done in SSMS, because the mirror database is in a
recovering state
* manual failover at the mirror can only be initiated when the
principal is inaccessible and the witness is either off or connected
to the mirror
Database snapshots
A database snapshot is a point-in-time, read-only copy of a source
database.
Data can be read from a snapshot, but a snapshot cannot be backed up
or altered and data cannot be changed.
The snapshot's source database cannot be dropped, detached or restored
as long as the snapshot exists.
A database snapshot can be used to restore a database. When using a
database snapshot to revert a database, the changed pages are copied
back into the database.
However, it is possible only when:
* there is only one snapshot created against a database; if there
are many snapshots, all of them, except for the one that will be used
to restore the database, have to be dropped (this is a very fast
process; if there were many snapshots during the revert, all of them
would have to be 'synchronized' with the changes in the source
database)
* any full-text catalogs on the source database are dropped (later
they have to be manually re-created)
* the source database and the snapshot are offline during the
revert
* the transaction log is rebuilt, which breaks the log chain
Snapshots should be created for recovery purposes before routines
(e.g. import routines) that can corrupt data, because they allow a
very fast recovery.
However, snapshots cannot be used instead of database backups, because
they hold only changed pages and without the original database they're
useless.
A database snapshot can be created only against a user database or a
mirrored database (this allows you to read data from a mirror
database). It cannot be created against a system database or another
database snapshot.
Database snapshots are supported only in SQL Server 2005 Enterprise
Edition.
All recovery modes support database snapshots.
Creating a database snapshot
CREATE DATABASE FOO_SNAPSHOT_20071017_2205
ON
( NAME = FOO_Data -- logical name of the data file from the source
database
, FILENAME = 'I:\MSSQL\Data\FOO_SNAPSHOT_20071017_2205.ss' )
AS SNAPSHOT OF FOO
Dropping a database snapshot
DROP DATABASE FOO_SNAPSHOT_20071017_2205
Reverting a database to database snapshot
RESTORE DATABASE FOO
FROM DATABASE_SNAPSHOT = 'FOO_SNAPSHOT_20071017_2205'
SSMS does not allow you to create or drop database snapshots. However,
database snapshots are visible in SSMS under <server>\Databases
\Database Snapshots.
Upon snapshot creation, the snapshot file allocates space equal to the
size of the data file. However, a sparse file is used, so in the
beginning the space is not actually filled with any data.
From http://en.wikipedia.org/wiki/Sparse_file:
In computer science, a sparse file is a type of computer file that
attempts to use file system space more efficiently. When space has
been allocated to a file but not actually filled with data it is not
written to the file system. Instead, meta-information about these
"empty" regions is stored until they are filled with data.
File systems supporting sparse files include: VxFS, Apple DOS, CP/M,
NTFS, ext2, ext3, GPFS, XFS, JFS, ReiserFS, Reiser4, UFS, ZFS, VMware
VMFS, GFS, GFS2
Typically,
dd if=/dev/zero of=bigsparse bs=1MB count=1 seek=1048576 (under
Linux)
will create a sparse file of approximately 1TB with only approximately
1MB on disk, which you may format as e.g. ext3 (mkfs.ext3 -F).
The obvious advantage of sparse files is that storage is only
allocated when actually needed. Large files can be created even if
there isn't enough free space yet. A disadvantage is that sparse files
can become very fragmented. Also, filling up partitions to the maximum
can have unpleasant effects.
Indexes
When you design an index that contains many key columns, or large-size
columns, calculate the size of the index key to make sure that you do
not exceed the maximum index key size. SQL Server 2005 retains the
900-byte limit for the maximum total size of all index key columns.
This excludes nonkey columns that are included in the definition of
nonclustered indexes.
The CREATE INDEX statement uses the following algorithms to calculate
the index key size:
* If the size of all fixed key columns plus the maximum size of
all variable key columns specified in the CREATE INDEX statement is
less than 900 bytes, the CREATE INDEX statement finishes successfully
without warnings or errors.
* If the size of all fixed key columns plus the maximum size of
all variable key columns exceeds 900, but the size of all fixed key
columns plus the minimum size of the variable key columns is less than
900, the CREATE INDEX statement succeeds with a warning that a
subsequent INSERT or UPDATE statement may fail if it specifies values
that generate a key value larger than 900 bytes. The CREATE INDEX
statement fails when existing data rows in the table have values that
generate a key larger than 900 bytes. A subsequent INSERT or UPDATE
statement that specifies data values that generate a key value longer
than 900 bytes fails.
* If the size of all fixed key columns plus the minimum size of
all variable columns specified in the CREATE INDEX statement exceeds
900 bytes, the CREATE INDEX statement fails.
http://msdn2.microsoft.com/en-us/library/ms191241.aspx
CREATE INDEX
* if you don't specify a location (CREATE INDEX ... ON ...) and
the table or view is not partitioned, SQL Server creates the index on
the same filegroup as the underlying table or view
* CREATE INDEX ... INCLUDE <column, ...>
o new in SQL Server 2005
o columns specified in the INCLUDE clause are part of the
index at the leaf level only; as a result they do not count against
the 900-byte limit for an index
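For example (table and column names are hypothetical):
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, Amount)  -- nonkey columns, leaf level only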
An index can be disabled.
When an index is disabled, SQL Server does not maintain it as the data
in the table changes.
If a clustered index is disabled, the entire table becomes
inaccessible.
To enable an index again, you do not have to drop and recreate it - it
can be done with the following command:
ALTER INDEX <index_name> ON <table_name> REBUILD
A clustered index forces rows on data pages and data pages within the
doubly linked list to be sorted by the clustering key.
It does not force a physical ordering of data on the disk.
FILLFACTOR => how full should the leaf level be after index creation
or rebuild?
PAD_INDEX => when ON, applies FILLFACTOR to intermediate levels
(default: OFF)
1 table => 1 clustered index + max. 249 non-clustered indexes (max.
250 indexes altogether)
In general, every table should have a clustered index. It causes rows
to be sorted according to the clustering key. Clustered index should
also be the primary key.
DELETE, UPDATE and INSERT operations modify data rows and these
modifications in turn can modify index rows, so these operations can
affect the fragmentation level of an index.
Internal fragmentation occurs when pages are not utilizing their space
efficiently, which leads to an increase in the number of pages needed
to hold the same number of index rows.
External fragmentation is the condition in which the physical order
of the index pages does not match their logical order.
ALTER INDEX ... REORGANIZE ... operation:
* reduces fragmentation of an index while it is online
and ensures completed work is saved if the operation is interrupted
* should be run when the sys.dm_db_index_physical_stats DMF returns
the following results:
60 < avg_page_space_used_in_percent < 75 OR 10 <
avg_fragmentation_in_percent < 15
ALTER INDEX ... REBUILD ... operation:
* reduces fragmentation of an index; it runs offline unless the
ONLINE = ON option is specified; if it is interrupted, the completed
work is rolled back
* should be run when the sys.dm_db_index_physical_stats DMF returns
the following results:
avg_page_space_used_in_percent < 60 OR
avg_fragmentation_in_percent > 15
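A minimal maintenance sketch based on the thresholds above (the table
and index names are hypothetical):
-- check fragmentation of the indexes on dbo.FOO
SELECT index_id, avg_fragmentation_in_percent,
avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.FOO'),
NULL, NULL, 'SAMPLED')
-- light fragmentation: reorganize (always online)
ALTER INDEX idx_foo_bar ON dbo.FOO REORGANIZE
-- heavy fragmentation: rebuild (online only with ONLINE = ON,
-- an Enterprise Edition feature)
ALTER INDEX idx_foo_bar ON dbo.FOO REBUILD WITH (ONLINE = ON)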
DBCC INDEXDEFRAG is supported for backwards compatibility only and
should be avoided
XML
XML indexes
* the first index on an XML type column must be the primary XML
index
o to create a primary XML index, the table that contains the
indexed XML column (called the base table) must have a clustered index
on its primary key
o the primary XML index is a shredded and persisted
representation of the XML BLOBs in the XML data type column; for each
XML binary large object (BLOB) in the column, the index creates
several rows of data; the number of rows in the index is approximately
equal to the number of nodes in the XML binary large object
* once the primary index has been created, secondary indexes can be
created:
o they decrease the time SQL Server needs to search through
the primary XML index
o there are 3 types of secondary indexes:
+ PATH
+ VALUE
+ PROPERTY
http://technet.microsoft.com/en-us/library/ms191497.aspx
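A minimal sketch of creating the indexes described above (the table
and column names are hypothetical); the base table must already have a
clustered primary key:
CREATE PRIMARY XML INDEX pxi_foo_xml
ON dbo.FOO (XML_DATA)
-- a secondary index is built on top of the primary XML index
CREATE XML INDEX sxi_foo_xml_path
ON dbo.FOO (XML_DATA)
USING XML INDEX pxi_foo_xml
FOR PATH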
Storing XML in text columns
* [+] all details such as comments and white space are preserved
* [+] it does not depend on database capabilities
* [+] it reduces the processing workload on the database server
* [+] best performance for document-level operations
* [-] coding complexity is added in the middle tier
* [-] no manipulating, extracting or modifying XML data at the
node level
* [-] searching XML data always involves reading an entire
document
* [-] XML validation, well-formedness and type checking must be
executed in the middle tier
Storing XML in xml data type columns
* [+] the xml data type is fully integrated with the SQL Server
query engine and other SQL Server services
* [+] the data is stored and manipulated natively as XML
* [+] SQL Server provides fine-grained support for operations at
the node level
* [+] improved performance for data-retrieval operations because
of XML indexes
* [+] document order and structure are preserved
* [-] the maximum allowed node depth is 128 levels
* [-] textual fidelity may not be preserved
* [-] increased processing workload on the database server
SQL Server 2005 validates only some of the well-formedness
constraints; for example, a single root-level element is not required.
Therefore, an XML fragment may be stored in an xml data type variable
or column.
XML schemas
* they are declared at the database level and deployed to SQL
Server
CREATE XML SCHEMA COLLECTION FooBarSchema AS
'<schema xmlns ...</schema>'
* they can be used to validate the contents of an xml data type
variable or column
DECLARE @myXml AS xml (FooBarSchema)
Common characteristics of all FOR XML modes of formatting
* all modes of formatting return an XML fragment, not a
well-formed XML document, because no root node is provided; to add a
root node -> FOR XML ..., ROOT('RootNodeName')
* FOR XML ..., ELEMENTS options
o XSINIL - specifies that an xsi:nil attribute set to true
is created for NULL values
o ABSENT - indicates that for NULL values no corresponding
XML elements are added to the XML result (this is the default)
* FOR XML ..., TYPE
o in SQL Server 2000, the result of a FOR XML query is
always returned directly to the client in textual form
o with support for the xml data type in SQL Server 2005, you
can optionally request that the result of a FOR XML query be returned
as the xml data type by using the TYPE option
o allows processing the result of a FOR XML query on the
server (e.g. when writing nested queries)
* nested queries allow building complex XML structures, as in the
sketch below
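A minimal sketch of such a nested query (the tables dbo.CUSTOMERS and
dbo.ORDERS and their columns are hypothetical); without TYPE, the
inner result would be inserted as escaped text rather than as XML:
SELECT c.CUSTOMER_ID AS '@Id',
(SELECT o.ORDER_ID AS '@Id'
FROM dbo.ORDERS AS o
WHERE o.CUSTOMER_ID = c.CUSTOMER_ID
FOR XML PATH('Order'), TYPE)
FROM dbo.CUSTOMERS AS c
FOR XML PATH('Customer'), ROOT('Customers')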
FOR XML RAW
* <row column1="abc" column2="41" another_column="2008-09-11
19:56:00.001"/>
<row column1="abc" column2="41" another_column="2008-09-11
19:56:00.001"/>
* to rename the <row> element -> FOR XML RAW('NewRowName')
* to rename each attribute -> use column aliases in the query
* to change formatting from attribute-centric to element-centric
-> FOR XML RAW, ELEMENTS
* all columns are formatted in the same way
* one-level hierarchy
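For example, a query of the following shape (dbo.FOO and its columns
are hypothetical) returns one <Item> element per row:
SELECT FOO_ID, BAR
FROM dbo.FOO
FOR XML RAW('Item'), ROOT('Items')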
FOR XML AUTO
* for each table in the query, a new level in the XML structure is
created
* all columns are formatted in the same way
* the tags take their names from the table and column
names/aliases; no other renaming mechanism
* to change formatting from attribute-centric to element-centric
-> FOR XML AUTO, ELEMENTS
FOR XML PATH
* new in SQL Server 2005
* each column in the query has an alias that tells SQL Server
where to locate this node in the XML hierarchy; the column aliases are
declared by using pseudo-XPath expressions
* by default each row is a <row>...</row> element; to rename
the <row> element -> FOR XML PATH('NewRowName')
* full control over the number of levels
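A minimal sketch of the pseudo-XPath aliases (dbo.FOO and its columns
are hypothetical): '@Name' becomes an attribute of <Item>, while
'Details/Quantity' becomes a nested element:
SELECT BAR AS '@Name',
QTY AS 'Details/Quantity'
FROM dbo.FOO
FOR XML PATH('Item'), ROOT('Items')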
FOR XML EXPLICIT
* greatest degree of control for developers to be able to generate
complex XML structures
* the query result set must follow a specific pattern called a
Universal Table - a set of columns is provided:
o Tag column - depth in the XML structure
o Parent column - indicates the node's parent (identified by
its Tag value)
o data columns with aliases following an ElementName!
TagNumber!AttributeName!Directive pattern
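A minimal single-level Universal Table sketch (dbo.FOO is
hypothetical): Tag and Parent drive the nesting, FOO_ID becomes an Id
attribute, and the ELEMENT directive renders BAR as a child element:
SELECT 1 AS Tag,
NULL AS Parent,
FOO_ID AS [Item!1!Id],
BAR AS [Item!1!Name!ELEMENT]
FROM dbo.FOO
FOR XML EXPLICIT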
XML data type methods
* query(query_expression)
o executes an XPATH or XQUERY expression and returns the
resulting XML fragment
* value(query_expression, t_sql_data_type)
o executes an XPATH or XQUERY expression and returns a
single scalar value
o even if the query returns a single element, a [1]
predicate must be used to indicate the cardinality of the result of
executing the expression
* exists(query_expression)
o executes an XPATH or XQUERY expression to check for the
existence of nodes and returns true or false
* modify(query_expression)
o provides XML data-manipulation capabilities
o supports the following keywords:
+ insert
# use into, after or before to determine where
the nodes should be inserted
+ replace value of
+ delete
* nodes(query_expression)
o executes an XPATH or XQUERY expression and returns the
resulting XML fragment shredded into a row set
o it returns a new row for each XML node that matches a
given XPATH or XQUERY expression; the value(), query(), and exists()
methods available on the XML data type can then be used to extract
data (scalar values) from each row
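A minimal shredding sketch with nodes() (the XML content here is
hypothetical): each matching <Step> node becomes a row, and value()
extracts its text:
DECLARE @x xml
SET @x = '<Location LocationID="L1">
<Step>Manu step 1 at Loc 1</Step>
<Step>Manu step 2 at Loc 1</Step>
</Location>'
SELECT t.step.value('.', 'nvarchar(100)') AS StepText
FROM @x.nodes('/Location/Step') AS t(step)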
The most important XQUERY expression is FLWOR:
* FOR
* LET
o not supported in SQL Server 2005
* WHERE
* ORDER BY
* RETURN
Example:
declare @x xml
set @x='<ManuInstructions ProductModelID="1"
ProductModelName="SomeBike" >
<Location LocationID="L1" >
<Step>Manu step 1 at Loc 1</Step>
<Step>Manu step 2 at Loc 1</Step>
<Step>Manu step 3 at Loc 1</Step>
</Location>
<Location LocationID="L2" >
<Step>Manu step 1 at Loc 2</Step>
<Step>Manu step 2 at Loc 2</Step>
<Step>Manu step 3 at Loc 2</Step>
</Location>
</ManuInstructions>'
SELECT @x.query('
for $step in /ManuInstructions/Location[1]/Step
where count(/ManuInstructions/Location) > 2
return string($step)
')
http://msdn2.microsoft.com/en-us/library/ms190945.aspx
There are two functions - sql:variable and sql:column - that allow
including external values from the relational context (T-SQL) in the
XML expression (XPATH, XQUERY).
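A minimal sketch of sql:variable (the variable names and XML content
are hypothetical): the T-SQL variable @minQty is referenced inside the
XQuery predicate:
DECLARE @minQty int
SET @minQty = 2
DECLARE @items xml
SET @items = '<Items><Item Name="a" Qty="1"/><Item Name="b"
Qty="3"/></Items>'
SELECT @items.query('/Items/Item[@Qty >= sql:variable("@minQty")]')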
SQLXML
* a COM middle-tier API that gives client applications the
capability to extract XML data out of relational data and manipulate
it without writing T-SQL code
* allows defining:
o annotated XSD schema (AXS) - it defines a mapping between
the XML schema and a relational schema
o SQLXML XML view - an XML file that declares optional
parameters, a T-SQL query and a resulting XML structure; the result of
executing such a view is an XML fragment
o updategram - an XML fragment that declares an original and
a current view of an XML structure; by comparing these two views,
SQLXML can create the required T-SQL commands to synchronize changes
from the XML data into relational data in the database
* AXS are built by enhancing regular XSD schemas with specific
keywords from the xmlns:sql="urn:schemas-microsoft-com:mapping-schema"
namespace
* AXS allow you to:
o extract relational data and generate an XML instance
o update relational data based on changes executed over an
XML instance (the required INSERT, UPDATE and DELETE statements are
generated automatically)
o execute XPATH queries over the annotated XSD schema
o bulk load XML data from a file into a database
* the managed API to execute:
o queries against AXSs and XML views is defined inside the
Microsoft.Data.SqlXml.dll file
o updategrams is defined in the Microsoft.Data.dll file
Shredding
* converting XML data into relational data
* methods (see the OPENXML sketch below):
o OPENXML and the XML stored procedures
+ inefficient for large XML documents, because the
entire document needs to be loaded into memory
o the XML data type's nodes() method along with the APPLY
operators
o the SQLXML API to bulk load XML data
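A minimal OPENXML sketch (the document content is hypothetical); note
the prepare/remove pair that loads and then frees the in-memory DOM:
DECLARE @h int
DECLARE @doc nvarchar(max)
SET @doc = '<Items><Item Id="1" Name="widget"/></Items>'
EXEC sp_xml_preparedocument @h OUTPUT, @doc
-- flag 1 = attribute-centric mapping
SELECT Id, Name
FROM OPENXML(@h, '/Items/Item', 1)
WITH (Id int '@Id', Name nvarchar(50) '@Name')
EXEC sp_xml_removedocument @h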
Partitioning
Database objects that can be partitioned are: tables, indexes and
indexed views.
1. create a partition function
CREATE PARTITION FUNCTION part_fun(int)
AS
RANGE LEFT
FOR VALUES (1000, 2000)
RANGE LEFT or RANGE RIGHT specifies to which side of each boundary
value interval, left or right, the boundary value belongs. LEFT is the
default.
2. create a partition scheme
CREATE PARTITION SCHEME part_sch
AS
PARTITION part_fun
TO (filegroup_1, filegroup_2, filegroup_3)
filegroups:
- must exist
- cannot be read-only
- must have a file assigned
3. create a partitioned object
CREATE TABLE ... ON part_sch(TABLE_ID)
CREATE INDEX ... ON part_sch(TABLE_ID)
An index can be partitioned by a different column than the ones it is
defined on because, with the included columns feature, any columns
that make up the clustered index are automatically migrated into any
index created against the table.
CREATE NONCLUSTERED INDEX idx_address_city
ON dbo.Address (City)
ON part_sch (ADDRESS_ID)
(!) When a clustered index is dropped and re-created in a different
filegroup, SQL Server moves the entire contents of the table into the
same filegroup as the clustered index - this can be used to partition
an existing table by simply dropping its clustered index and then
re-creating it on a partition scheme.
Each nonclustered index can be partitioned using a different partition
function and partition scheme than the table.
The clustered index cannot be partitioned differently from the table.
-- partition number for a given value
SELECT $partition.part_fun(2178)
-- data from a selected partition
SELECT *
FROM dbo.FOO
WHERE $partition.part_fun(FOO_ID) = 2
-- add a new boundary point
ALTER PARTITION FUNCTION part_fun()
SPLIT RANGE (new_boundary_value)
-- remove an existing boundary point
ALTER PARTITION FUNCTION part_fun()
MERGE RANGE (old_boundary_value)
ALTER TABLE source_table SWITCH PARTITION 2
TO target_table PARTITION 4
* moves partition 2 from source_table to partition 4 (which must
be empty) of target_table
SWITCH operator:
* scales regardless of partition size (it is a metadata-only
operation)
* is nearly instantaneous - it moves no data physically on the
disk; data is moved logically, from one table to another; it only
introduces changes in the doubly linked list of pages in order to add
rows to target_table and delete them from source_table
* incurs minimal locking overhead
* cannot move data between 2 servers
ALTER PARTITION SCHEME part_sch
NEXT USED filegroup_4
* adds a new filegroup to the partition scheme
* it also designates that the next created partition will be
assigned to this filegroup
* should be used before SPLIT (see the sliding-window sketch
below)
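A minimal sliding-window sketch built from the commands above
(dbo.FOO_ARCHIVE is a hypothetical empty staging table with the same
structure, on the same filegroup as partition 1):
-- prepare a filegroup for the partition the SPLIT will create
ALTER PARTITION SCHEME part_sch
NEXT USED filegroup_4
ALTER PARTITION FUNCTION part_fun()
SPLIT RANGE (3000)
-- age out the oldest partition, then remove its boundary
ALTER TABLE dbo.FOO SWITCH PARTITION 1 TO dbo.FOO_ARCHIVE
ALTER PARTITION FUNCTION part_fun()
MERGE RANGE (1000)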
Partitioned views
* local partitioned view
o a single table is horizontally split into multiple tables,
usually all having the same structure
* cross-database partitioned view
o tables are split among different databases on the same
server instance
* distributed partitioned view
o tables participating in the view reside in different
databases which reside on different servers or different instances
o allows servers to share the query processing load
(federated databases)
There is always a view similar to the following (2-, 3- or 4-part
names must be used):
CREATE VIEW dbo.FACT
AS
SELECT <select list> FROM [server DPV].[database
DPV,CDPV].dbo.FACT2005
UNION ALL
SELECT <select list> FROM [server DPV].[database
DPV,CDPV].dbo.FACT2006
UNION ALL
SELECT <select list> FROM [server DPV].[database
DPV,CDPV].dbo.FACT2007
Although partitioned indexes can be implemented independently from
their base tables, it generally makes sense to design a partitioned
table and then create an index on the table. When you do this, SQL
Server automatically partitions the index by using the same partition
scheme and partitioning column as the table. As a result, the index is
partitioned in essentially the same manner as the table. This makes
the index aligned with the table.
Aligning an index with a partitioned table is particularly important
if you anticipate that it will expand by taking on additional
partitions, or that it will be involved in frequent partition
switches.
http://msdn2.microsoft.com/en-us/library/ms187526.aspx
Replication
Transactional replication propagates every transaction as soon as it
happens. However, it should be remembered that every transaction is
propagated to the Distributor as soon as it happens, but not
necessarily to the Subscribers. Both in push and pull replication,
depending on the configuration, there can be a delay between the
moment a transaction reaches a distribution database and the moment it
reaches the Subscriber's database.
Transactional replication supports updates at Subscribers through
updatable subscriptions and peer-to-peer replication (every
participant of the replication is both a Publisher and a Subscriber).
The following are the two types of updatable subscriptions:
* Immediate updating
o the Publisher and Subscriber must be connected to update
data at the Subscriber
o the changes are propagated immediately using the two-phase
commit protocol
o Microsoft Distributed Transaction Coordinator (MSDTC) must
be installed and configured on the Publisher and the Subscribers
* Queued updating
o the Publisher and Subscriber do not have to be connected
to update data at the Subscriber; updates can be made while the
Subscriber or Publisher is offline
o the changes are stored in a queue; the queued transactions
are then applied asynchronously at the Publisher whenever network
connectivity is available; because the updates are propagated
asynchronously to the Publisher, the same data may have been updated
by the Publisher or by another Subscriber and conflicts can occur when
applying the updates; conflicts are detected and resolved according to
a conflict resolution policy that is set when creating the publication
Pull vs. push subscriptions (replication)
* PUSH - the Publisher determines when synchronization occurs; all
agents run at the Distributor
* PULL - the Subscriber determines when synchronization occurs;
all agents run at the Subscribers
Miscellaneous
OPENROWSET(BULK N'C:\Text1.txt', SINGLE_BLOB)
* opens a file so that it can, for example, be inserted into a
varbinary(max) column
* with a SINGLE_... option the contents of the file are returned
as a single-row, single-column rowset
OPTION       | RETURNED TYPE  | DATA READ AS | USED COLLATION
-------------|----------------|--------------|-------------------------
SINGLE_BLOB  | varbinary(max) | -            | -
SINGLE_CLOB  | varchar(max)   | ASCII        | of the current database
SINGLE_NCLOB | nvarchar(max)  | UNICODE      | of the current database
* for importing XML data only the SINGLE_BLOB option should be
used, because only SINGLE_BLOB supports all Windows encoding
conversions
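A minimal import sketch (the target table and its columns are
hypothetical); the single column returned by OPENROWSET ... SINGLE_BLOB
is named BulkColumn:
INSERT INTO dbo.FOO (DOC_NAME, DOC_CONTENT)
SELECT N'Text1.txt', src.BulkColumn
FROM OPENROWSET(BULK N'C:\Text1.txt', SINGLE_BLOB) AS src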
Columns that do not allow null values can be added with ALTER TABLE
only if they have a DEFAULT constraint defined or if the table is
empty (the new column automatically loads with the default value in
each existing row).
WITH VALUES can be used to provide a value for each existing row in
the table when a nullable column with a DEFAULT constraint is added.
ALTER TABLE doc_exf
ADD AddDate smalldatetime NULL
CONSTRAINT AddDateDflt DEFAULT GETDATE() WITH VALUES
If WITH VALUES is not used, each row has the value NULL in the new
column.
/* test1_id  tekst
   --------  ---------
   1         some text
   2         Some Text */
ALTER TABLE dbo.TEST1
ADD NOT_NULL_1 int NOT NULL
/* Server: Msg 4901, Level 16, State 1, Line 1
ALTER TABLE only allows columns to be added that can contain nulls,
or have a DEFAULT definition specified, or the column being added is
an identity or timestamp column, or alternatively if none of the
previous conditions are satisfied the table must be empty to allow
addition of this column.
Column 'NOT_NULL_1' cannot be added to non-empty table 'TEST1'
because it does not satisfy these conditions. */
ALTER TABLE dbo.TEST1
ADD NOT_NULL_1 int NOT NULL
CONSTRAINT DEF_NOT_NULL_1 DEFAULT 42
/* test1_id  tekst      NOT_NULL_1
   --------  ---------  ----------
   1         some text  42
   2         Some Text  42 */
ALTER TABLE dbo.TEST1
ADD NULL_1 int NULL
CONSTRAINT DEF_NULL_1 DEFAULT 42
/* test1_id  tekst      NOT_NULL_1  NULL_1
   --------  ---------  ----------  ------
   1         some text  42          NULL
   2         Some Text  42          NULL */
ALTER TABLE dbo.TEST1
ADD NULL_2 int NULL
CONSTRAINT DEF_NULL_2 DEFAULT 43 WITH VALUES
/* test1_id  tekst      NOT_NULL_1  NULL_1  NULL_2
   --------  ---------  ----------  ------  ------
   1         some text  42          NULL    43
   2         Some Text  42          NULL    43 */
LOB_COMPACTION
* option of ALTER INDEX ... REORGANIZE
* default: ON
* specifies that all pages that contain large object (LOB) data
are compacted; the LOB data types are: image, text, ntext,
varchar(max), nvarchar(max), varbinary(max), and xml; compacting this
data can improve disk space use
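For example (the table name is hypothetical):
ALTER INDEX ALL ON dbo.FOO
REORGANIZE WITH (LOB_COMPACTION = ON)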
Case-sensitive SELECT statement
* SELECT *
FROM dbo.FOO
WHERE BAR COLLATE SQL_Latin1_General_CP1_CS_AS = 'Some Text'
* SELECT *
FROM dbo.FOO
WHERE CAST(BAR AS varbinary(50)) = CAST('Some Text' AS
varbinary(50))
http://vyaskn.tripod.com/case_sensitive_search_in_sql_server.htm
BEGIN TRAN [ { transaction_name | @tran_name_variable } ]
WITH MARK [ 'description' ]
[ { transaction_name | @tran_name_variable } ]
* name assigned to the transaction
* names longer than 32 characters are not allowed
* naming multiple transactions in a series of nested transactions
with a transaction name has little effect on the transaction; only the
first (outermost) transaction name is registered with the system
* a rollback to any other name (other than a valid savepoint name)
generates an error; none of the statements executed before the
rollback is, in fact, rolled back at the time this error occurs; the
statements are rolled back only when the outer transaction is rolled
back
WITH MARK [ 'description' ]
* specifies that the transaction is marked in the log
* if the description is a Unicode string, values longer than 255
characters are truncated to 255 characters before being stored in the
msdb.dbo.logmarkhistory table; if the description is a non-Unicode
string, values longer than 510 characters are truncated to 510
characters
* if WITH MARK is used, a transaction name must be specified
* the mark is placed in the transaction log only if the database
is updated by the marked transaction
* when nesting transactions, only one transaction can be marked
* WITH MARK allows restoring a transaction log to a named mark
(it can be used in place of a date and time)
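A minimal sketch (the database, table and backup path are
hypothetical): mark the transaction, then restore the log up to and
including the mark:
BEGIN TRAN PriceUpdate WITH MARK 'Bulk price update'
UPDATE dbo.FOO SET PRICE = PRICE * 1.1
COMMIT TRAN
-- later, during a restore sequence:
RESTORE LOG FooDb
FROM DISK = 'C:\Backup\FooDb.trn'
WITH STOPATMARK = 'PriceUpdate'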
Default database collation can be checked with: sp_helpdb
database_name (column Status).
The following statement returns a list of all valid collation names
for Windows collations and SQL collations:
SELECT *
FROM fn_helpcollations()
Failover clustering requires SQL Server 2005 Standard Edition or SQL
Server 2005 Enterprise Edition (just like database mirroring).
bcp {[[database_name.][owner].]{table_name | view_name} | "query"}
{in | out | queryout | format} data_file
[-m max_errors] [-f format_file] [-x] [-e err_file]
[-F first_row] [-L last_row] [-b batch_size]
[-n] [-c] [-N] [-w] [-V (60 | 65 | 70 | 80)] [-6]
[-q] [-C { ACP | OEM | RAW | code_page } ] [-t field_term]
[-r row_term] [-i input_file] [-o output_file] [-a packet_size]
[-S server_name[\instance_name]] [-U login_id] [-P password]
[-T] [-v] [-R] [-k] [-E] [-h "hint [,...n]"]
OPTION   | FROM       | TO
---------|------------|------------
in       | file       | table/view
out      | table/view | file
queryout | query      | file
format   | creates a format file; copies no data
BULK INSERT provides better performance than the bcp utility.
The Bulk Insert Task (SSIS) does not support data transformations, but
a format file can be used. The Bulk Insert Task (and the bcp utility
too!) supports both non-XML format files (the only format supported by
SQL Server 2000 and earlier) and XML format files (a new format file
type available in SQL Server 2005).
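A minimal round-trip sketch (the server, database and file names are
hypothetical): export with bcp in character mode, then re-import with
BULK INSERT:
bcp FooDb.dbo.FOO out C:\foo.dat -c -T -S server_name
BULK INSERT dbo.FOO
FROM 'C:\foo.dat'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n')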
Coordinated Resource Management in Sensor Networks
Sensor Networks: Coping with Limited Resources
* MAC layer: Z-MAC, Dozer, B-MAC, T-MAC, S-MAC
* OS and programming models: Pixie, Eon, Levels
* power locks; energy tracking: Quantro, PowerTOSSIM
* these efforts focus on optimizing at the node level
Sensor nodes are severely resource constrained:
* 8 MHz CPU
* 10 KB of memory
* ~100 Kbps of radio link bandwidth (best case)
* 200 mAh - 2000 mAh batteries
Coordination matters: coordination is essential to get good resource
efficiency, and OS abstractions are needed to support it.
State of the art:
* conventional distributed systems: RPC, IDLs, discovery services,
BSD sockets, TCP/IP, BPEL, JINI, Web services, XML RPC, group
communication, DHTs, multicast, Map/Reduce, BFT
* sensor networks: Active Messages, radio packets; everything else
is done in an ad hoc manner by each application
A canonical example: data collection
* resource availability is hard to predict: variable load,
variable resource availability, time-varying
* offline static solutions may be inadequate
* how much energy should a node put towards sampling? storing
data? processing? listening for and forwarding other data?
[Figure: a data-collection tree rooted at a base station; nodes closer
to the base station do more work forwarding packets, and solar-powered
nodes receive varying amounts of sunlight over time.]