Sunday, December 3, 2017

Working with XML Data Type

The XML data type was introduced in SQL Server 2005. It can store either a complete XML document or a fragment of one. Before the xml data type, XML data was typically stored in varchar or text columns, which proved to be poor choices for querying and manipulating the XML data.

When to use XML data type:
  • If you only need to store the XML document and retrieve or update it as a whole - that is, if you never need to query or modify the individual XML components - consider using one of the large object data types such as VARCHAR(MAX), NVARCHAR(MAX) or VARBINARY(MAX).
  • If you want to preserve the original form, such as legal documents, or want to retain an exact textual copy, use large object storage.
  • The rest of the time, consider using the XML data type. It ensures that the data is well formed according to the ISO standards, and it supports fine-grained queries and modifications to specific elements and attributes within the XML. You can also index an XML column and associate it with an XML schema collection to validate its content and structure.
Limitations of using XML data type:
  • It cannot be used as a key column in an index
  • An instance cannot exceed 2 GB in size
  • XML columns cannot be compared, sorted, or used in GROUP BY
  • It does not support casting or converting to either text or ntext
  • It cannot be passed to scalar or built-in functions other than ISNULL, COALESCE and DATALENGTH
Data Manipulations on an XML column: The illustrations below explain how to SELECT, INSERT, UPDATE and DELETE data in an XML column.

Below is the sample XML used to walk through the manipulations:
<Employees>
  <Employee>
    <ID>1</ID>
    <Name>John Test</Name>
    <Department>IT</Department>
    <Address>
      <Street1>4404, Main Road</Street1>
      <Street2/>
      <City>Reston</City>
      <State>VA</State>
      <Zip>20171</Zip>
    </Address>
  </Employee>
  <Employee>
    <ID>2</ID>
    <Name>King George</Name>
    <Department>Admin</Department>
    <Address>
      <Street1>3004, St. Patric Road</Street1>
      <Street2>Apt D</Street2>
      <City>Vienna</City>
      <State>VA</State>
      <Zip>20171</Zip>
    </Address>
  </Employee>
</Employees>
SELECTing Employee Name, Department and Address for the given Employee ID
DECLARE @Xml XML = N'...' -- assign the sample XML document shown above
SELECT
t.c.value('Name[1]', 'varchar(max)') as 'Employee Name',
t.c.value('Department[1]', 'varchar(max)') as 'Department',
t1.c1.value('Street1[1]', 'varchar(max)') + ', ' +
t1.c1.value('Street2[1]', 'varchar(max)') +
-- Conditionally append a comma when Street2 has a value, otherwise don't
case when ((t1.c1.value('Street2[1]', 'varchar(max)') != '')) then (', ') else ('') end +
t1.c1.value('City[1]', 'varchar(max)') + ', ' + t1.c1.value('State[1]', 'varchar(max)') + ' - ' +
t1.c1.value('Zip[1]', 'varchar(max)') as 'Address'
FROM
@Xml.nodes('//Employees/Employee') as t(c)
CROSS APPLY t.c.nodes('Address') as t1(c1)
WHERE
t.c.value('ID[1]', 'int') = 1
UPDATing Street2 of Address for the given Employee ID
DECLARE @employeeID INT
SET @employeeID = 2 -- Provide employee ID
-- Check whether Street2 already has a text value for the provided employee ID
IF @Xml.exist('/Employees/Employee/ID[text()=sql:variable("@employeeID")]/../Address/Street2/text()') = 1
BEGIN -- Street2 has a value (employeeID = 2 in the sample): replace it
PRINT 'First Part'
SET @Xml.modify('replace value of (/Employees/Employee/ID[text()=sql:variable("@employeeID")]/../Address/Street2/text())[1] with "Apt F"')
END
ELSE -- Street2 is empty or missing (employeeID = 1 in the sample): recreate it
BEGIN
PRINT 'Second Part'
SET @Xml.modify('delete (/Employees/Employee/ID[text()=sql:variable("@employeeID")]/../Address/Street2)')
SET @Xml.modify('insert (<Street2>Apt A</Street2>) after (/Employees/Employee/ID[text()=sql:variable("@employeeID")]/../Address/Street1)[1]')
END
SELECT @Xml
DELETing Street2 of Address for the given Employee ID
SET @employeeID = 1 -- Provide employee ID
BEGIN
SET @Xml.modify('delete (/Employees/Employee/ID[text()=sql:variable("@employeeID")]/../Address/Street2)')
END
SELECT @Xml
INSERTing Street3 after Street2 of Address for the given Employee ID
SET @employeeID = 1 -- Provide employee ID
BEGIN
SET @Xml.modify('insert (<Street3>Near</Street3>) after (/Employees/Employee/ID[text()=sql:variable("@employeeID")]/../Address/Street2)[1]')
END
SELECT @Xml

Friday, December 1, 2017

Performance Tuning

LINQ to Entities is the query interface of Entity Framework, an ORM for querying and managing a database. Below are some tips and tricks to keep an eye on while designing and querying a database with the Entity Framework ORM.

Cold Vs Warm Query Execution:
The very first time any query is made against a given model, Entity Framework does a lot of work behind the scenes to load and validate the model. This is called a cold query, and cold queries are slow by nature. Further queries against an already loaded model are known as warm queries and are much faster.

The sections below explain different ways of reducing the performance cost of both cold and warm queries.
Caching
Entity Framework comes with the following built-in caching mechanisms:

Object Caching: This is also called First-Level Caching. It uses the ObjectContext instance's ObjectStateManager to keep track of the objects in memory that have been retrieved using that instance.

The behavior of this caching is as follows: when EF materializes an entity returned as the result of a query, the ObjectContext checks whether an entity with the same key has already been loaded into its ObjectStateManager; if it has, EF includes the already-loaded entity in the results, although it still issues the query against the database. This repeated materialization of the same entity decreases performance and can be avoided as below:
  • Getting entities from the object cache using DbContext Find: Unlike a regular query, the Find method on DbSet performs a search in memory before issuing the query against the database. Find uses the primary key value to attempt to find an entity tracked by the context. NULL is returned if the entity is not found in the context or the database.

    Example of Find with auto-detect changes disabled:
    context.Configuration.AutoDetectChangesEnabled = false;
    var product = context.Products.Find(productId);
    context.Configuration.AutoDetectChangesEnabled = true;

    Per MSDN, the Find method should be used with the below scenarios in mind:
    • If the object is not in the cache, the benefits of Find are negated, but the syntax is still simpler than a query by key
    • If auto-detect changes is enabled, the cost of the Find method may increase by one order of magnitude, or even more, depending on the complexity of your model and the number of entities in your object cache
  • Issues when the object cache has many entities: If the object cache has a very large number of entities loaded, it may affect operations such as Add, Remove, Find, Entry and more. DetectChanges is especially negatively affected by large object caches: it synchronizes the object graph with the object state manager, and its performance is determined directly by the size of the object graph.

    Range methods like AddRange and RemoveRange should be used instead of iterating through the collection and adding or removing entities one at a time. The advantage of the range methods is that DetectChanges is called only once for the entire set of entities, as opposed to once per added entity, as shown in the sketch below.
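    A minimal sketch contrasting the two approaches (the TestContext and Product names are illustrative assumptions, not types from this post):
    var newProducts = new List<Product>
    {
        new Product { Name = "P1" },
        new Product { Name = "P2" }
    };

    using (var context = new TestContext())
    {
        // Slower: DetectChanges runs on every Add call
        // foreach (var product in newProducts)
        //     context.Products.Add(product);

        // Faster: DetectChanges runs only once for the whole collection
        context.Products.AddRange(newProducts);
        context.SaveChanges();
    }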
Query Plan Caching: Reusing the generated store command when a query is executed more than once. When the query is executed the first time, it goes through the internal plan compiler to translate the conceptual query into a store command. When query plan caching is enabled, the next time the query is executed the store command is retrieved from the query plan cache, bypassing the plan compiler. The query plan cache is shared across ObjectContext instances within the same AppDomain.
  • Query plan cache is shared for all Entity SQL, LINQ to Entities and Compiled Query types
  • Query plan caching is enabled by default for Entity SQL queries and LINQ to Entity queries, whether they are executed through an EntityCommand or through an ObjectQuery.
  • Query plan cache can be disabled by setting the EnablePlanCaching property to false as below:
    var query = from customer in context.Customer
                where customer.CustomerId == id
                select new { customer.CustomerId, customer.Name };

    ObjectQuery oQuery = query as ObjectQuery;
    oQuery.EnablePlanCaching = false;
  • For parameterized queries, changing the parameter value will still use the cached query
Metadata Caching: This is essentially caching of type information and type-to-database mapping information across different connections to the same model; the cache is unique per AppDomain.

Results Caching: This is also called Second-Level Caching. It keeps the results of queries in a local cache that can be checked before running a query against the database/store. This caching is not directly supported by EF but can be implemented using a wrapping provider.

MSDN wrapper samples can be found at: Wrappers
Avoid using One Single Entity Model:
An entity model should represent a single unit of work, not the whole database. To avoid wasted space and performance degradation, create separate entity models for related groups of DB objects, for example logs, batch processing, etc.
Disable Change Tracking for Entity
Object tracking is not required when handling read-only data, so disable it using MergeOption as below (a DbContext alternative follows the snippet):

TestContext context = new TestContext();
context.testTable.MergeOption = MergeOption.NoTracking;
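With the DbContext API (EF 5/6), a commonly used equivalent is the AsNoTracking method; a minimal sketch, assuming TestContext exposes a DbSet named TestTables:
TestContext context = new TestContext();
// Entities returned by this query are not added to the change tracker
var rows = context.TestTables.AsNoTracking().ToList();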
Using Views
The first time the ObjectContext is created in an application, Entity Framework generates a set of classes that are required to access the database. These classes are called views, and if the model is large, creating them may delay the application's response to the first request. Generate these views at compile time using T4 templates or the EdmGen.exe command-line tool to reduce the first-request cost.
Fetch only Required Fields
Avoid fetching all the fields of a database table. For example, if a table contains 25 fields and the current scenario requires only 5 fields, then just fetch only the required 5 fields instead of all the 25 fields.

Good Practice
var customer = (from cust in dataContext.Customers select new { cust.CustomerID, cust.Name, cust.Address }).ToList();
Bad Practice
var customer = (from cust in dataContext.Customers select cust).ToList();
Use Appropriate Collection
LINQ is heavily used to query the ObjectContext in EF, and query results can be held as IEnumerable, IQueryable, IList (or an implicitly typed var). Each of these has its own performance characteristics - for example, filters applied to an IQueryable are composed into the database query, whereas filters applied to an IEnumerable or IList run in memory - so choose the collection type deliberately, as illustrated below.
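A small sketch of the difference, reusing the SchoolDBEntities/Students naming that appears later in this blog (the exact entity set is an assumption here):
using (var context = new SchoolDBEntities())
{
    // IQueryable: the Where filter is composed into the SQL sent to the database
    IQueryable<Student> queryable = context.Students;
    var fromDatabase = queryable.Where(s => s.StudentName == "Bill").ToList();

    // IEnumerable: all rows are materialized first, then the filter runs in memory
    IEnumerable<Student> enumerable = context.Students.AsEnumerable();
    var fromMemory = enumerable.Where(s => s.StudentName == "Bill").ToList();
}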
Use Compiled Query
Compiled queries are helpful in scenarios where a query is known to be executed very frequently. Such a query is comparatively slow the first time it executes, but it boosts performance significantly on subsequent calls: the query is compiled once and can be reused any number of times, and it does not need to be recompiled even when the parameter values change.

Steps to create Compiled Query:
  • Create a static class
  • Add System.Data.Linq namespace
  • Use LINQ's CompiledQuery class to create compiled LINQ query
Below example illustrates creating CompiledQuery with no parameters:
static class MyCompiledQueries
{
public static Func<DataClasses1DataContext, IQueryable<Person>>
CompiledQueryForPerson = CompiledQuery.Compile((DataClasses1DataContext context) => from c in context.Persons select c);
}
Above CompiledQuery can be called as below:
DataClasses1DataContext context = new DataClasses1DataContext();
var result = MyCompiledQueries.CompiledQueryForPerson(context);

Below example illustrates creating CompiledQuery with parameters:
static class MyCompiledQueries
{
public static Func<DataClasses1DataContext, int, IQueryable<Person>>
CompiledQueryForPerson = CompiledQuery.Compile(
(DataClasses1DataContext context, int personId) => from c in context.Persons where c.PersonID == personId select c);
}
Above CompiledQuery can be called as below:
DataClasses1DataContext context = new DataClasses1DataContext();
var result = MyCompiledQueries.CompiledQueryForPerson(context, 9);
Retrieve only Required Records
To improve performance while retrieving records, retrieve only the required number of records by using the Where, Skip and Take methods.

DataClasses1DataContext context = new DataClasses1DataContext();
int pageSize = 10, startingPageIndex = 2;

// Skip to the requested page first, then take one page of rows (an ordering is required for paging)
List<Person> lstCus = context.Persons.OrderBy(p => p.PersonID).Skip(startingPageIndex * pageSize).Take(pageSize).ToList();
Avoid Contains
Try to avoid using LINQ's Contains. It is translated into a WHERE ... IN (...) clause in SQL, which can degrade performance when the list of values is large, as illustrated below.
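A small sketch of the pattern being warned about (the Student entity naming follows the examples used later in this blog):
var studentIds = new List<int> { 1, 2, 3 };

using (var context = new SchoolDBEntities())
{
    // Translates to roughly: SELECT ... FROM Students WHERE StudentId IN (1, 2, 3)
    // A large id list produces a long IN clause and an expensive query plan
    var students = context.Students
                          .Where(s => studentIds.Contains(s.StudentId))
                          .ToList();
}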
Avoid Views
SQL views can degrade LINQ query performance; queries over them are often slow and impact performance badly, so avoid using views in LINQ to Entities where performance matters.
Debug and Optimize LINQ Query
LINQPad can be used to debug and optimize LINQ queries.

Thursday, November 30, 2017

LINQ: Language Integrated Query

LINQ is an acronym for Language Integrated Query. The "Language Integrated" part means that it is part of the programming language syntax, and "Query" means, as the name suggests, a means to retrieve data from a data source.

To retrieve data from different/multiple data sources in .NET, the underlying query languages used are:

  • SQL and ADO.Net for relational databases
  • XQuery and XSLT for XML

LINQ simplifies this working model by providing a common way to express the query and get the results. A LINQ query always works with objects, so the basic code to query and transform data in XML, SQL databases, ADO.NET DataSets, collections, and any other format for which a LINQ provider is available remains the same.

LINQ Architecture

Below are different Types of LINQ:

  • LINQ to Objects
  • LINQ to XML (XLINQ)
  • LINQ to Dataset
  • LINQ to SQL (DLINQ)
  • LINQ to Entities

LINQ Syntax: Following are the two LINQ syntaxes:

  • Lambda (Method) Syntax:
  • var testWords = words.Where(w => w.Length > 10);
  • Query (Comprehension) Syntax:
  • var testWords = (from word in words where word.Length > 10 select word);

Per MSDN, a query expression consists of the below clauses (a complete example follows the list):

  • FROM refers to the data source
  • WHERE takes care of filtering
  • SELECT determines the elements to return
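A small, self-contained example that uses all three clauses in both syntaxes (the words array is illustrative):
string[] words = { "apple", "strawberry", "watermelon", "fig", "pomegranate" };

// Query (comprehension) syntax
var longWordsQuery = from word in words
                     where word.Length > 5
                     select word;

// Equivalent lambda (method) syntax
var longWordsMethod = words.Where(word => word.Length > 5);

foreach (var word in longWordsQuery)
    Console.WriteLine(word); // strawberry, watermelon, pomegranate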

Below are the Advantages and Disadvantages of using LINQ:

Advantages:
  • LINQ can be used against different data sources and it is not limited to RDBMS
  • Viewing table relationships is easy due to its hierarchical feature, which also enables composing queries that join multiple tables in less time
  • LINQ allows a single syntax for querying different data sources thanks to its unified foundation
  • LINQ is extensible, meaning it is possible to use LINQ to query new data sources
  • A single LINQ query can join several data sources
  • Easy transformation from one data format to another, e.g. SQL data to XML data
  • Debugging is easy due to its integration into C#
  • Writing more accurate queries is easy with LINQ IntelliSense
  • LINQ queries are type safe, as errors are checked at compile time
Disadvantages:
  • For every query change, the assembly needs to be recompiled and redeployed
  • LINQ queries are not precompiled by default, so extra caution is needed to handle performance
  • In some scenarios, advanced LINQ queries are hard to understand
  • There will always be some things you can do in SQL but not in LINQ
With this I am concluding the illustration. Feel free to share your feedback.

Happy Programming !!!

URL Rewrite

URL Rewrite is the process of altering the parameters in a URL (Uniform Resource Locator). The URL rewrite module is an extension to IIS which is available as a download for your stand-alone IIS Server. URL Rewrites can be managed at the server level or for individual sites as required.

Patterns are used throughout the URL Rewrite module and operate in one of three modes:
  • Exact Match
  • Wildcards - where an asterisk is used to mean "anything here" and is captured when matched
  • ECMAScript regular expressions, which are Perl compatible regular expressions

There are two types of rules:
  • Inbound rules - look at the request URLs and change them
  • Outbound rules - inspect the traffic sent out, look for URLs within it, and rewrite them as needed. This is very handy when the content may use an absolute URL that is not what the user should be receiving (especially handy in reverse proxy scenarios)

Built-in Rules: URL Rewrite supports various built-in rule templates, as below:
  • Rule with rewrite map: allows you to define a set of paths and their replacements as a simple list
  • Request blocking: disallow access to a path
  • User-friendly URL: quickly creates rules to map path segments (separated by slashes) to query strings
  • Reverse Proxy: allows the current server to reverse proxy another
  • Enforce lowercase URLs: makes the client always use lowercase URLs via an HTTP status 301 ("Permanent") redirect
  • Canonical domain name: uses an HTTP status 301 ("Permanent") redirect to ensure that clients always use the specified domain name
  • Append or remove the trailing slash symbol: will either always add or always remove the trailing slash in an URL path using an HTTP status 301 ("Permanent") redirect

Friday, April 7, 2017

Database Schema

Here are some helpful queries to compare two databases at the schema level and identify newly added, modified, or deleted database objects. The illustration below uses two databases, ExistingDB and LatestDB: LatestDB refers to the latest and greatest schema, whereas ExistingDB refers to the existing PROD database. In the scripts that follow, [@latestDB] and [@existingDB] (and the matching DECLARE lines) are simple placeholders for those two database names - replace them with your actual database names before running, since a variable cannot be used directly as the database part of a three-part name without dynamic SQL.

ADDED Tables: Query to identify newly added tables in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying ADDED TABLES'
SELECT schema_name(schema_id) as SchemaName, name as TableName FROM [@latestDB].sys.tables
EXCEPT
SELECT schema_name(schema_id) as SchemaName, name as TableName FROM [@existingDB].sys.tables
ORDER BY SchemaName, TableName
PRINT N'END OF Identifying ADDED TABLES'
DELETED Tables: Query to identify deleted tables in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying DELETED TABLES'
SELECT schema_name(schema_id) as SchemaName, name as TableName FROM [@existingDB].sys.tables
EXCEPT
SELECT schema_name(schema_id) as SchemaName, name as TableName FROM [@latestDB].sys.tables
ORDER BY SchemaName, TableName
PRINT N'END OF Identifying DELETED TABLES'
ADDED Columns: Query to identify newly added columns in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying ADDED COLUMNS'
SELECT tab.SchemaName, tab.tablename, col.name as columnname FROM [@latestDB].sys.columns col
INNER JOIN (SELECT object_id, schema_name(schema_id) SchemaName, name as tablename FROM [@latestDB].sys.tables
          INTERSECT
          SELECT object_id, schema_name(schema_id) SchemaName, name as tablename FROM [@existingDB].sys.tables) tab
ON col.object_id = tab.object_id
EXCEPT
SELECT tab.SchemaName, tab.tablename, col.name as columnname FROM [@existingDB].sys.columns col
INNER JOIN (SELECT object_id, schema_name(schema_id) SchemaName, name as tablename FROM [@existingDB].sys.tables
          INTERSECT
          SELECT object_id, schema_name(schema_id) SchemaName, name as tablename FROM [@latestDB].sys.tables) tab
ON col.object_id = tab.object_id
PRINT N'END OF Identifying ADDED COLUMNS'
DELETED Columns: Query to identify deleted columns in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying DELETED COLUMNS'
SELECT tab.SchemaName, tab.tablename, col.name as columnname FROM [@existingDB].sys.columns col
INNER JOIN (SELECT object_id, schema_name(schema_id) SchemaName, name as tablename FROM [@existingDB].sys.tables
          INTERSECT
          SELECT object_id, schema_name(schema_id) SchemaName, name as tablename FROM [@latestDB].sys.tables) tab
ON col.object_id = tab.object_id
EXCEPT
SELECT tab.SchemaName, tab.tablename, col.name as columnname FROM [@latestDB].sys.columns col
INNER JOIN (SELECT object_id, schema_name(schema_id) SchemaName, name as tablename FROM [@latestDB].sys.tables
          INTERSECT
          SELECT object_id, schema_name(schema_id) SchemaName, name as tablename FROM [@existingDB].sys.tables) tab
ON col.object_id = tab.object_id
PRINT N'END OF Identifying DELETED COLUMNS'
ADDED Procedures: Query to identify newly added procedures in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying ADDED PROCEDURES'
SELECT schema_name(schema_id) as SchemaName, prc.name as ProcedureName FROM [@latestDB].sys.procedures prc
EXCEPT
SELECT schema_name(schema_id) as SchemaName, prc.name as ProcedureName FROM [@existingDB].sys.procedures prc
ORDER BY SchemaName, ProcedureName
PRINT N'END OF Identifying ADDED PROCEDURES'
DELETED Procedures: Query to identify deleted procedures in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying DELETED PROCEDURES'
SELECT schema_name(schema_id) as SchemaName, prc.name as ProcedureName FROM [@existingDB].sys.procedures prc
EXCEPT
SELECT schema_name(schema_id) as SchemaName, prc.name as ProcedureName FROM [@latestDB].sys.procedures prc
ORDER BY SchemaName, ProcedureName
PRINT N'END OF Identifying DELETED PROCEDURES'
MODIFIED Procedures: Query to identify modified procedures in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying MODIFIED PROCEDURES'
SELECT schema_name(prc.schema_id) as SchemaName, prc.name as ProcedureName, prc.modify_date FROM [@latestDB].sys.procedures prc
JOIN [@existingDB].sys.procedures prc2 on prc.name = prc2.name AND object_definition(prc.object_id) <> object_definition(prc2.object_id)

SELECT schema_name(prc.schema_id) as SchemaName, prc.name as ProcedureName, prc.modify_date FROM [@existingDB].sys.procedures prc
JOIN [@latestDB].sys.procedures prc2 on prc.name = prc2.name AND object_definition(prc.object_id) <> object_definition(prc2.object_id)
PRINT N'END OF Identifying MODIFIED PROCEDURES'
ADDED Views: Query to identify newly added views in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying ADDED VIEWS'
SELECT schema_name(schema_id) as SchemaName, vie.name as ViewName FROM [@latestDB].sys.views vie
EXCEPT
SELECT schema_name(schema_id) as SchemaName, vie.name as ViewName FROM [@existingDB].sys.views vie
ORDER BY SchemaName, ViewName
PRINT N'END OF Identifying ADDED VIEWS'
DELETED Views: Query to identify deleted views in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying DELETED VIEWS'
SELECT schema_name(schema_id) as SchemaName, vie.name as ViewName FROM [@existingDB].sys.views vie
EXCEPT
SELECT schema_name(schema_id) as SchemaName, vie.name as ViewName FROM [@latestDB].sys.views vie
ORDER BY SchemaName, ViewName
PRINT N'END OF Identifying DELETED VIEWS'
ADDED Indexes: Query to identify newly added indexes in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying ADDED INDEXES'
SELECT i.Name as IndexName, OBJECT_NAME(i.object_ID) as TableName, c.Name as ColumnName, i.type_desc FROM [@latestDB].sys.indexes i
INNER JOIN [@latestDB].sys.index_columns ic ON i.index_id = ic.index_id AND i.object_id = ic.object_id
INNER JOIN [@latestDB].sys.columns c ON ic.column_id = c.column_id AND ic.object_id = c.object_id
EXCEPT
SELECT i.Name as IndexName, OBJECT_NAME(i.object_ID) as TableName, c.Name as ColumnName, i.type_desc FROM [@existingDB].sys.indexes i
INNER JOIN [@existingDB].sys.index_columns ic ON i.index_id = ic.index_id AND i.object_id = ic.object_id
INNER JOIN [@existingDB].sys.columns c ON ic.column_id = c.column_id AND ic.object_id = c.object_id
ORDER BY TableName, ColumnName
PRINT N'END OF Identifying ADDED INDEXES'
DELETED Indexes: Query to identify deleted indexes in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying DELETED INDEXES'
SELECT i.Name as IndexName, OBJECT_NAME(i.object_ID) as TableName, c.Name as ColumnName, i.type_desc FROM [@existingDB].sys.indexes i
INNER JOIN [@existingDB].sys.index_columns ic ON i.index_id = ic.index_id AND i.object_id = ic.object_id
INNER JOIN [@existingDB].sys.columns c ON ic.column_id = c.column_id AND ic.object_id = c.object_id
EXCEPT
SELECT i.Name as IndexName, OBJECT_NAME(i.object_ID) as TableName, c.Name as ColumnName, i.type_desc FROM [@latestDB].sys.indexes i
INNER JOIN [@latestDB].sys.index_columns ic ON i.index_id = ic.index_id AND i.object_id = ic.object_id
INNER JOIN [@latestDB].sys.columns c ON ic.column_id = c.column_id AND ic.object_id = c.object_id
PRINT N'END OF Identifying DELETED INDEXES'
ADDED Triggers: Query to identify newly added triggers in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying ADDED TRIGGERS'
SELECT object_name(parent_id) as TableName, trg.name as TriggerName FROM [@latestDB].sys.triggers trg
EXCEPT
SELECT object_name(parent_id) as TableName, trg.name as TriggerName FROM [@existingDB].sys.triggers trg
ORDER BY TableName, TriggerName
PRINT N'END OF Identifying ADDED TRIGGERS'
DELETED Triggers: Query to identify deleted triggers in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying DELETED TRIGGERS'
SELECT object_name(parent_id) as TableName, trg.name as TriggerName FROM [@existingDB].sys.triggers trg
EXCEPT
SELECT object_name(parent_id) as TableName, trg.name as TriggerName FROM [@latestDB].sys.triggers trg
ORDER BY TableName, TriggerName
PRINT N'END OF Identifying DELETED TRIGGERS'
ADDED Functions: Query to identify newly added functions in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying ADDED FUNCTIONS'
SELECT schema_name(schema_id) as SchemaName, fun.name as FunctionName, fun.type as FunctionType FROM [@latestDB].sys.objects fun
WHERE (type = 'TF' or type = 'FN' or type = 'IF')
EXCEPT
SELECT schema_name(schema_id) as SchemaName, fun.name as FunctionName, fun.type as FunctionType FROM [@existingDB].sys.objects fun
WHERE (type = 'TF' or type = 'FN' or type = 'IF')
ORDER BY SchemaName, FunctionName, FunctionType
PRINT N'END OF Identifying ADDED FUNCTIONS'
DELETED Functions: Query to identify deleted functions in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying DELETED FUNCTIONS'
SELECT schema_name(schema_id) as SchemaName, fun.name as FunctionName, fun.type as FunctionType FROM [@existingDB].sys.objects fun
WHERE (type = 'TF' or type = 'FN' or type = 'IF')
EXCEPT
SELECT schema_name(schema_id) as SchemaName, fun.name as FunctionName, fun.type as FunctionType FROM [@latestDB].sys.objects fun
WHERE (type = 'TF' or type = 'FN' or type = 'IF')
ORDER BY SchemaName, FunctionName, FunctionType
PRINT N'END OF Identifying DELETED FUNCTIONS'
ADDED Schemas: Query to identify newly added schemas in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying ADDED SCHEMAS'
SELECT schema_name(schema_id) as SchemaName FROM [@latestDB].sys.schemas
EXCEPT
SELECT schema_name(schema_id) as SchemaName FROM [@existingDB].sys.schemas
ORDER BY SchemaName
PRINT N'END OF Identifying ADDED SCHEMAS'
DELETED Schemas: Query to identify deleted schemas in LatestDB
DECLARE @latestDB = 'latestDB' -- latestDB
DECLARE @existingDB = 'existingDB' -- existing DB

PRINT N'START OF Identifying DELETED SCHEMAS'
SELECT schema_name(schema_id) as SchemaName FROM [@existingDB].sys.schemas
EXCEPT
SELECT schema_name(schema_id) as SchemaName FROM [@latestDB].sys.schemas
ORDER BY SchemaName
PRINT N'END OF Identifying DELETED SCHEMAS'
With this I am concluding the illustration. Feel free to share your feedback.

Happy Programming !!!

Wednesday, February 1, 2017

.NET Communication

.NET Provides different types of communication technologies like Web Services, WCF and Web API as below:

Web Service

A web service is any piece of software that makes itself available over the internet and uses a standardized XML messaging system. XML is used to encode all communications to a web service.

  • Based on SOAP and returns data in XML form
  • It supports only the HTTP protocol
  • Consumed by any client that understands XML SOAP services
  • Hosted only in IIS
  • It is not open source

WCF

Windows Communication Foundation (WCF) is a framework for building service-oriented applications. Using WCF, one can send data as asynchronous messages from one service endpoint to another. A service endpoint can be part of a service hosted by IIS, or be a service hosted within an application. A minimal service contract sketch follows the list below.

  • Based on SOAP and returns XML data, which is heavier than JSON
  • Supports multiple protocols like TCP, HTTP, HTTPS, Named Pipes and MSMQ via configuration
  • Consumed by any client that is able to understand XML SOAP services
  • Can be hosted via self-hosting, in IIS, or in a Windows Service
  • Supports only the GET and POST verbs, via [WebGet] and [WebInvoke]
  • Supports XML, JSON and ATOM (an XML-based format which defines XML elements and their meaning) data formats
  • Can be configured to support REST over HTTP by enabling webHttpBinding
  • More reliable when both client and server are on .NET
  • Its implementation and configuration are complex, and it adds overhead on the network
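Below is a minimal sketch of a WCF service contract and its implementation (the IGreetingService name and operation are illustrative, not from this post); the endpoint, binding and hosting would still be supplied via configuration or code:
using System.ServiceModel;

[ServiceContract]
public interface IGreetingService
{
    // Exposed as a SOAP operation on whichever binding the endpoint uses
    [OperationContract]
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name)
    {
        return "Hello, " + name;
    }
}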

Web API

Web API is a framework to build HTTP services that reach a broad range of clients, including browsers and mobile devices, and is an ideal platform for building RESTful applications on the .NET Framework.

  • Designed specifically for building HTTP RESTful services on the .NET Framework
  • Supports all features of HTTP like URIs, request/response, headers, caching and versioning
  • Supports the GET, POST, PUT and DELETE HTTP verbs
  • Can be self-hosted, hosted within an application, or hosted on IIS
  • Supports all MVC features such as controllers, action results, routing, filters, model binders, and IoC containers / dependency injection, which makes it simple and robust
  • OWIN (Open Web Interface for .NET) is used for self-hosting
  • Supports OData query options to sort, filter and page query results
  • Responses can be formatted by Web API's MediaTypeFormatters into JSON, XML or any other format for which you add a MediaTypeFormatter
  • It is stateless, and JSON responses are lightweight and easy to read
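To make the above concrete, here is a minimal Web API 2 controller sketch (GreetingsController is an illustrative name; it relies on the default api/{controller}/{id} route):
using System.Collections.Generic;
using System.Web.Http;

public class GreetingsController : ApiController
{
    // GET api/greetings
    public IEnumerable<string> Get()
    {
        return new[] { "Hello", "Hi" };
    }

    // GET api/greetings/5
    public IHttpActionResult Get(int id)
    {
        if (id <= 0)
            return BadRequest();

        return Ok("Hello #" + id);
    }
}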

What to choose between Web API or WCF?

Here are some of the differences between WCF and Web API:


  • Choose WCF when you want to create a service that should support special scenarios such as one way messaging, message queues, duplex communication etc.
  • Choose WCF when you want to create a service that can use fast transport channels when available, such as TCP, Named Pipes, or maybe even UDP (in WCF 4.5), and you also want to support HTTP when all other transport channels are unavailable.
  • Choose Web API when you want to create resource-oriented services over HTTP that can use the full features of HTTP (like URIs, request/response headers, caching, versioning, various content formats).
  • Choose Web API when you want to expose your service to a broad range of clients including browsers, mobile devices, iPhones and tablets.

Tuesday, January 24, 2017

Adding / Modifying Database objects to EDMX

Entity Framework has the ability to automatically build native commands for the database based on your LINQ to Entities or Entity SQL queries, as well as build the commands for inserting, updating, or deleting data. You may want to override these steps and use your own predefined database objects instead.

Entity Framework lets you override its built-in functionality and use your own predefined database objects. In this illustration I walk you through working with stored procedures and table-valued functions.

Stored Procedures in Entity Framework:

Database objects like Stored procedures and user-defined functions (UDFs) are represented as functions in Entity Framework. So these will not have any defined entity representation in the EDM designer.

For this example walk-through of how to work with stored procedures in EF, we will consider overriding EF's "find all courses for a student" functionality with an existing stored procedure called "GetCoursesByStudentId", as below:

  • Create a new ADO.NET Entity Data Model (.edmx) by picking the "EF Designer from Database" option in the Choose Model Contents section, and click "Next"
  • On the Choose Your Database Objects and Settings section, select the "GetCoursesByStudentId" stored procedure by navigating to the object as below. Don't forget to check the option "Import selected stored procedures and functions into the entity model", and click "Finish"
  • You will see GetCoursesByStudentId added under Function Imports, with a new complex type GetCoursesByStudentId_Result, in the Model Browser. Whenever you import a stored procedure into a model, it creates a new complex type named {sp name}_Result by default.
  • There is no need to add a new complex type for the data returned by GetCoursesByStudentId, as it returns the same fields defined in the Course entity. You can change the return type by right-clicking GetCoursesByStudentId under Function Imports and selecting Edit. Check Entities and select Course from the dropdown in the popup window, as shown below:
  • You will see the function in the context class for GetCoursesByStudentId as shown below:
  • Now, GetCoursesByStudentId function can be called as below to get the result, which is equivalent to
    exec [dbo].[GetCoursesByStudentId] @StudentId=1
  • using (var context = new SchoolDBEntities())
    {
    var courses = context.GetCoursesByStudentId(1);

    foreach (Course cs in courses)
    Console.WriteLine(cs.CourseName);
    }
Table-Valued Functions in Entity Framework

Table-valued functions are similar to stored procedures with one key difference: the result of a TVF is composable, which means the results of a TVF can be used in a LINQ query.

  • Here is how we can include a Table-Valued function to our EDM. Right click on the designer → select Update Model from the Database.
  • Expand Stored Procedures and Functions node → expand schema node (dbo schema in our case) → select 'GetCourseListByStudentID' and click Finish. Make sure that checkbox for 'Import selected procedures and functions into the entity model' is checked (this will import the function automatically).
  • After you imported the function, you can verify it: Open Model Browser → expand Function Imports → right click on imported function 'GetCourseListByStudentID' → click Edit:
  • You can see that EDM has automatically created the complex type GetCourseListByStudentID_Result as a return collection type.
  • You can also select an existing entity as a return type if TVF returns the same columns as entity:
  • Now, the created table-valued function can be used in code as below:
  • using (var ctx = new SchoolDBEntities())
    {
    //Execute TVF and filter result
    var courseList = ctx.GetCourseListByStudentID(1).Where(c => c.Location == "City1")
    .ToList();

    foreach (GetCourseListByStudentID_Result cs in courseList)
    Console.WriteLine("Course Name: {0}, Course Location: {1}", cs.CourseName, cs.Location);
    }
Modifying / Refreshing Database Objects

Following below steps, we can apply the changes made to the existing database objects:
  • Make sure that the connection string used in the .edmx is current by saving the config file (even if no changes were made to it)
  • Open the .edmx file, open the "Model Browser" window by right-clicking the designer, and pick the "Update Model from Database" option by right-clicking "Stored Procedures / Functions" in the Model Browser window
  • Pick the "Refresh" tab in the "Update Wizard" window, select the database object to refresh by expanding the "Stored Procedures and Functions" node, and click "Finish", as below:

With this I am concluding the illustration. Feel free to share your feedback.

Happy Programming !!!

Friday, January 20, 2017

EF Architecture and Fundamentals

ADO.NET Entity Framework is an Object/Relational Mapping (ORM) framework which:
  • Enables developers to work with relational data as domain-specific objects, eliminating the need for most of the data access plumbing code that developers usually need to write; put simply, it is an enhancement to ADO.NET that gives developers an automated mechanism for accessing and storing data in the database and for working with the results, in addition to DataReader and DataSet
  • Provides services like change tracking, identity resolution, lazy loading, and query translation so that developers can focus on their application-specific business logic rather than the data access fundamentals
  • Is a tool for storing data from domain objects to a relational database like MS SQL Server in an automated way, without much programming
  • Allows us to keep the database design separate from the domain class design. This makes the application maintainable and extendable. It also automates the standard CRUD operations (Create, Read, Update & Delete) so that the developer doesn't need to write them manually
  • Includes three main parts: domain class objects, relational database objects, and mapping information on how domain objects map to relational database objects (tables, views & stored procedures)

Architecture

EDM (Entity Data Model): consists of the below three parts:
  • Conceptual Model: The conceptual model contains the model classes and their relationships. It is independent of your database table design
  • Storage Model: The storage model is the database design model, which includes tables, views, stored procedures and their relationships and keys
  • Mapping: Mapping consists of information about how the conceptual model is mapped to the storage model

LINQ to Entities: LINQ to Entities is a query language used to write queries against the object model. It returns entities, which are defined in the conceptual model. You can use your LINQ skills here.

Entity SQL: Entity SQL is another query language, just like LINQ to Entities. However, it is a little more difficult than L2E, and the developer needs to learn it separately.

Object Services: Object Services is the main entry point for accessing data from the database and returning it back. Object Services is responsible for materialization, which is the process of converting the data returned by the Entity Client data provider (the next layer) into an entity object structure.

Entity Client Data Provider: The main responsibility of this layer is to convert L2E or Entity SQL queries into a SQL query which is understood by the underlying database. It communicates with the ADO.Net data provider which in turn sends or retrieves data from database.

ADO.Net Data Provider: This layer communicates with database using standard ADO.Net.


Types of Entities: There were four types of entities in Entity Framework 4.x, but from 5.0 onward the list is limited to POCO and POCO Proxy entities.
  • EntityObject
  • POCO (Plain Old CLR Object): It is like any other normal .NET class. POCOs support most of the same query, insert, update, and delete behaviors as entity types that are generated by the Entity Data Model
  • POCO Proxy (dynamic proxy): A dynamic proxy is a runtime proxy class of a POCO entity. A POCO entity becomes a POCO Proxy entity if it meets certain requirements that enable lazy loading and automatic change tracking. At runtime EF derives a proxy type from your POCO class that performs the change tracking and lazy loading
  • Self-Tracking Entities

POCO entity should meet the following requirements to become a POCO proxy:
  • A custom data class must be declared with public access.
  • A custom data class must not be sealed (NotInheritable in Visual Basic)
  • A custom data class must not be abstract (MustInherit in Visual Basic).
  • The ProxyCreationEnabled option must not be set to false (the default is true) in the context class
  • Each navigation property must be declared as public, virtual

By default dynamic proxy is enabled for every entity. However, you can disable dynamic proxy by setting the ProxyCreationEnabled option to false in context class. context.Configuration.ProxyCreationEnabled = false;

EDM generates POCO entities which satisfy the above requirements for a dynamic proxy by default. You can use ObjectContext.GetObjectType() to find the actual type of dynamic proxy.


Entity Properties
  • Scalar properties: Scalar properties are properties whose actual values are contained in the entity. For example, the Student entity has scalar properties such as StudentId and StudentName; these correspond to the Student table columns.
  • Navigation properties: Navigation properties are pointers to other related entities. The Student entity has a Standard navigation property that enables the application to navigate from a Student to the related Standard entity, as sketched below.
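A minimal sketch of such a POCO entity (the property list is illustrative); note that the virtual navigation property also satisfies the POCO proxy requirements listed earlier:
public class Student
{
    // Scalar properties - values stored in the Student table columns
    public int StudentId { get; set; }
    public string StudentName { get; set; }

    // Navigation property - declared public virtual so EF can create a
    // dynamic proxy that supports lazy loading and change tracking
    public virtual Standard Standard { get; set; }
}

public class Standard
{
    public int StandardId { get; set; }
    public string StandardName { get; set; }

    public virtual ICollection<Student> Students { get; set; }
}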

Querying with EDM

You can query EDM mainly in three ways: LINQ to Entities, Entity SQL, and Native SQL

LINQ to Entities: L2E query syntax is easier to learn than Entity SQL, and you can reuse your existing LINQ skills for querying the EDM. Queries can be written either in LINQ method syntax with lambda expressions or in LINQ query syntax, as in the examples below:

LINQ Method Syntax:
using (var context = new SchoolDBEntities())
{
var L2EQuery = context.Students.Where(s => s.StudentName == "Bill");
}
LINQ Query Syntax:
using (var context = new SchoolDBEntities())
{
var L2EQuery = (from st in context.Students
where st.StudentName == "Bill"
select st);
}
Entity SQL: Entity SQL is another way to create a query. It is processed by the Entity Framework’s Object Services directly. It returns ObjectQuery instead of IQueryable. You need ObjectContext to create a query using Entity SQL. The following code snippet shows the same query result as L2E query above:
string sqlString = "SELECT VALUE st FROM SchoolDBEntities.Students AS st WHERE st.StudentName == 'Bill'";

var ctx = new SchoolDBEntities();
var objctx = (ctx as IObjectContextAdapter).ObjectContext;

ObjectQuery<Student> student = objctx.CreateQuery<Student>(sqlString);
Student newStudent = student.First();

You can also use EntityConnection and EntityCommand to execute Entity SQL as shown below:

using (var con = new EntityConnection("name=SchoolDBEntities"))
{
con.Open();
EntityCommand cmd = con.CreateCommand();

cmd.CommandText = "SELECT VALUE st FROM SchoolDBEntities.Students as st where st.StudentName='Bill'";
Dictionary<int, string> dict = new Dictionary<int, string>();

using (EntityDataReader rdr = cmd.ExecuteReader(CommandBehavior.SequentialAccess | CommandBehavior.CloseConnection))
{
while (rdr.Read())
{
int a = rdr.GetInt32(0);
var b = rdr.GetString(1);
dict.Add(a, b);
}
}
}
Native SQL: You can execute native SQL queries for a relational database as shown below:
using (var ctx = new SchoolDBEntities())
{
var student = ctx.Students.SqlQuery("Select StudentId, StudentName, StandardId from Student where StudentName = 'Bill'").FirstOrDefault();
}
With this I am concluding this article. Feel free to share your feedback.

Happy Programming !!!

Monday, January 16, 2017

RequiredIf RegEx - Conditional Validation Attribute using MVC / Web API

This topic illustrates how to extend ValidationAttribute to enforce a customized validation: checking whether a field's value matches a pattern (regular expression) when a dependent field's value matches a specific target value.

Scenario: Address.Phone should accept only the digits 0-9 (a 10-digit number, per the pattern used below) when Address.Country's value is "USA".

We can achieve this by following below steps, and the illustration is developed using MVC 5, Web API 2, EF 6 and Mock

As a first step, Create a new class with name "RequiredIfRegExValidator" (preferably in a common location of the solution ie. either in Model or a Common project, as applied) and copy below code:
public class RequiredIfRegExValidator : ValidationAttribute
{
private RequiredAttribute _innerAttribute = new RequiredAttribute();
private readonly Regex _regex; // pattern the validated field must match
public string DependentProperty { get; set; }
public object TargetValue { get; set; }

public RequiredIfRegExValidator(string pattern, string dependentProperty, object targetValue)
{
this._regex = new Regex(pattern);
this.DependentProperty = dependentProperty;
this.TargetValue = targetValue;
}

protected override ValidationResult IsValid(object value, ValidationContext validationContext)
{
// get a reference to the property this validation depends upon
var containerType = validationContext.ObjectInstance.GetType();
var field = containerType.GetProperty(this.DependentProperty);

if (field != null)
{
var phoneNumber = Convert.ToString(value, CultureInfo.CurrentCulture);

// get the value of the dependent property
object dependentValue = field.GetValue(validationContext.ObjectInstance, null);

// trim spaces and convert dependent value to uppercase to support case-insensitive comparison
if (dependentValue != null && dependentValue is string)
{
dependentValue = (dependentValue as string).Trim();
dependentValue = (dependentValue as string).Length == 0 ? null : (dependentValue as string).ToUpper();
}

// trim spaces and convert TargetValue to uppercase to support case-insensitive comparison
if (TargetValue != null && TargetValue is string)
{
TargetValue = (TargetValue as string).Trim();
TargetValue = (TargetValue as string).Length == 0 ? null : (TargetValue as string).ToUpper();
}

// run the validation only when the dependent value matches the target value
if ((dependentValue == null && this.TargetValue == null) ||
(dependentValue != null && dependentValue.Equals(this.TargetValue)))
{
// the field is required and must match the regular expression
if (!_innerAttribute.IsValid(value) || !_regex.IsMatch(phoneNumber))
{
     // validation failed - return an error
     return new ValidationResult(FormatErrorMessage(validationContext.DisplayName), new[] { validationContext.MemberName });
}
}
}
// validation success - return success
return ValidationResult.Success;
}
}
We need to create or make changes to Model(s) and Controller(s) as below to implement the created custom validation.

POCO / Model: Create or modify the Address class to apply the custom validation on the Phone property as below, so that Phone must be a 10-digit number when the selected Country is USA.
public class Address {
public int AddressId {get; set; }

public string Line1 { get; set; }

public string Line2 { get; set; }

public string City { get; set; }

public string State { get; set; }

public string Zip { get; set; }

public string Country { get; set; }

[RequiredIfRegExValidator(@"^\d{10}$", "Country", "USA", ErrorMessage = "Phone Requires Numbers between 0-9")]
public string Phone { get; set; }
}
Controller: The controller for Address entity with a POST method will look like below.

Here, before persisting the changes to the database, I am forcing the process to re-validate the model state by calling "this.Validate()", which helps identify and throw an error upfront instead of making a call to the DB.
public class AddressController : ApiController
{
[ResponseType(typeof(Address))]
public IHttpActionResult PostAddress(Address address)
{
// use below to force validation before doing anything
this.Validate(address);

if (!ModelState.IsValid)
{
return BadRequest(ModelState);
}
db.Address.Add(address);
db.SaveChanges();

return CreatedAtRoute("DefaultApi", new { id = address.AddressId }, address);
}
}
Testing: Below TestMethods will help testing the post method to check for the expected validation error:

Here, the first test method represents the case forcing this.Validate(), and the second the case without forcing this.Validate() in the controller's POST method.
// Forcing this.Validate() - in this case the controller will throw the error
[TestMethod]
public void RequiredPhoneinNumbersWhenCountryIsUSATest()
{
Address address = new Address { Line1 = "200 E Main St", Line2 = "Apt B", State = "VA", Country = "USA", Phone="732654ABC" };
AddressController addressController = new AddressController();
addressController.Request = new HttpRequestMessage();
addressController.Request.Properties["MS_HttpConfiguration"] = new HttpConfiguration();

addressController.PostAddress(address);

Assert.IsFalse(addressController.ModelState.IsValid);
Assert.IsTrue(addressController.ModelState.Count == 1, "Phone Requires Numbers between 0-9");
}
// Without forcing this.Validate() - in this case the DB layer will throw the error
[TestMethod]
public void RequiredPhoneinNumbersWhenCountryIsUSAWithoutValidateTest()
{
Address address = new Address { Line1 = "200 E Main St", Line2 = "Apt B", State = "VA", Country = "USA", Phone="732654ABC" };
try
{
AddressController addressController = new AddressController();
addressController.Request = new HttpRequestMessage();
addressController.Request.Properties["MS_HttpConfiguration"] = new HttpConfiguration();

addressController.PostAddress(address);
}
catch (System.Data.Entity.Validation.DbEntityValidationException dbEx)
{
foreach (var validationErrors in dbEx.EntityValidationErrors)
{
foreach (var validationError in validationErrors.ValidationErrors)
{
Assert.AreEqual("Phone Requires Numbers between 0-9", validationError.ErrorMessage);
}
}
}
}
With this I am concluding the illustration. Feel free to share your feedback.

Happy Programming !!!

RequiredIf Not RegEx - Conditional Validation Attribute using MVC / Web API

This topic illustrates how to extend ValidationAttribute to enforce a customized validation: checking whether a field's value matches a pattern (regular expression) when a dependent field's value does not match a specific target value.

Scenario: Address.Phone should accept only the digits 0-9 (a 10-digit number, per the pattern used below) when Address.Country's value is not "USA".

We can achieve this by following below steps, and the illustration is developed using MVC 5, Web API 2, EF 6 and Mock

As a first step, Create a new class with name "RequiredIfRegExValidator" (preferably in a common location of the solution ie. either in Model or a Common project, as applied) and copy below code:
public class RequiredIfNotRegExValidator : ValidationAttribute
{
private RequiredAttribute _innerAttribute = new RequiredAttribute();
private readonly Regex _regex; // pattern the validated field must match
public string DependentProperty { get; set; }
public object TargetValue { get; set; }

public RequiredIfNotRegExValidator(string pattern, string dependentProperty, object targetValue)
{
this._regex = new Regex(pattern);
this.DependentProperty = dependentProperty;
this.TargetValue = targetValue;
}

protected override ValidationResult IsValid(object value, ValidationContext validationContext)
{
// get a reference to the property this validation depends upon
var containerType = validationContext.ObjectInstance.GetType();
var field = containerType.GetProperty(this.DependentProperty);

if (field != null)
{
var phoneNumber = Convert.ToString(value, CultureInfo.CurrentCulture);

// get the value of the dependent property
object dependentValue = field.GetValue(validationContext.ObjectInstance, null);

// trim spaces and convert dependent value to uppercase to support case-insensitive comparison
if (dependentValue != null && dependentValue is string)
{
dependentValue = (dependentValue as string).Trim();
dependentValue = (dependentValue as string).Length == 0 ? null : (dependentValue as string).ToUpper();
}

// trim spaces and convert TargetValue to uppercase to support case-insensitive comparison
if (TargetValue != null && TargetValue is string)
{
TargetValue = (TargetValue as string).Trim();
TargetValue = (TargetValue as string).Length == 0 ? null : (TargetValue as string).ToUpper();
}

// run the validation only when the dependent value does not match the target value
if ((dependentValue == null && this.TargetValue != null) ||
(dependentValue != null && !dependentValue.Equals(this.TargetValue)))
{
// the field is required and must match the regular expression
if (!_innerAttribute.IsValid(value) || !_regex.IsMatch(phoneNumber))
{
     // validation failed - return an error
     return new ValidationResult(FormatErrorMessage(validationContext.DisplayName), new[] { validationContext.MemberName });
}
}
}
// validation success - return success
return ValidationResult.Success;
}
}
We need to create or make changes to Model(s) and Controller(s) as below to implement the created custom validation.

POCO / Model: Create or modify the Address class to apply the custom validation on the Phone property as below, so that Phone must be a 10-digit number when the selected Country is not USA.
public class Address {
public int AddressId {get; set; }

public string Line1 { get; set; }

public string Line2 { get; set; }

public string City { get; set; }

public string State { get; set; }

public string Zip { get; set; }

public string Country { get; set; }

[RequiredIfRegExValidator(@"^\d{10}$", "Country", "USA", ErrorMessage = "Phone Requires Numbers between 0-9")]
public string Phone { get; set; }
}
Controller: The controller for Address entity with a POST method will look like below.

Here, before persisting the changes to the database, I am forcing the process to re-validate the model state by calling "this.Validate()", which helps identify and throw an error upfront instead of making a call to the DB.
public class AddressController : ApiController
{
[ResponseType(typeof(Address))]
public IHttpActionResult PostAddress(Address address)
{
// use below to force validation before doing anything
this.Validate(address);

if (!ModelState.IsValid)
{
return BadRequest(ModelState);
}
db.Address.Add(address);
db.SaveChanges();

return CreatedAtRoute("DefaultApi", new { id = address.AddressId }, address);
}
}
Testing: Below TestMethods will help testing the post method to check for the expected validation error:

Here, the first test method represents the case forcing this.Validate(), and the second the case without forcing this.Validate() in the controller's POST method.
// Forcing this.Validate() - in this case the controller will throw the error
[TestMethod]
public void RequiredPhoneinNumbersWhenCountryIsNotUSATest()
{
Address address = new Address { Line1 = "200 E Main St", Line2 = "Apt B", State = "VA", Country = "Canada", Phone="732654ABC" };
AddressController addressController = new AddressController();
addressController.Request = new HttpRequestMessage();
addressController.Request.Properties["MS_HttpConfiguration"] = new HttpConfiguration();

addressController.PostAddress(address);

Assert.IsFalse(addressController.ModelState.IsValid);
Assert.IsTrue(addressController.ModelState.Count == 1, "Phone Requires Numbers between 0-9");
}
// Without forcing this.Validate() - in this case the DB layer will throw the error
[TestMethod]
public void RequiredPhoneinNumbersWhenCountryIsNotUSAWithoutValidateTest()
{
Address address = new Address { Line1 = "200 E Main St", Line2 = "Apt B", State = "VA", Country = "Canada", Phone="732654ABC" };
try
{
AddressController addressController = new AddressController();
addressController.Request = new HttpRequestMessage();
addressController.Request.Properties["MS_HttpConfiguration"] = new HttpConfiguration();

addressController.PostAddress(address);
}
catch (System.Data.Entity.Validation.DbEntityValidationException dbEx)
{
foreach (var validationErrors in dbEx.EntityValidationErrors)
{
foreach (var validationError in validationErrors.ValidationErrors)
{
Assert.AreEqual("Phone Requires Numbers between 0-9", validationError.ErrorMessage);
}
}
}
}
With this I am concluding the illustration. Feel free to share your feedback.

Happy Programming !!!

Monday, January 9, 2017

MVVM Pattern

The Model-View-ViewModel (MVVM) pattern is a natural fit for providing separation of concerns on XAML platforms like WPF. Below are some of the advantages of using this pattern:

  • It provides separation of concerns. A clean separation between application logic and the UI will make an application easier to test, maintain, and evaluate.
  • It improves code re-use opportunities and enables the developer-designer workflow.
  • When the UI XAML is not tightly coupled to the code-behind, it is easy for designers to exercise the freedom they need to be creative and make a good product.
  • It increases the application's testability. Moving the UI logic to a separate class that can be instantiated independently of a UI technology makes unit testing much easier.

There are three core components in the MVVM pattern: the Model, the View, and the View Model. Each serves a distinct and separate role. Below explains the relationship between these three components (image courtesy MSDN):

The components are decoupled from each other, thus enabling: Components to be swapped, Internal implementation to be changed without affecting the others, Components to be worked on independently, and Isolated unit testing.

At a high level, the components interact as follows: the view "knows about" the view model, and the view model "knows about" the model, but the model is unaware of the view model, and the view model is unaware of the view.

View: The View is responsible for defining the structure, layout, and appearance of what the user sees on the screen. This is defined purely with XAML with limited code-behind that does not contain business logic. A view can have its own view model, or it can inherit its parent's view model, and gets data through bindings, or by invoking methods on the view model.

Model: This is the representation of the Domain model that includes a data model along with business and validation logic. Examples of model objects include repositories, business objects, data transfer objects (DTOs), Plain Old CLR Objects (POCOs), and generated entity and proxy objects.

View Model: This acts as an intermediary between the view and the model and is responsible for handling the view logic. Typically, the view model interacts with the model to retrieve data and then makes the data available to the view, reformatting it into a form that is simpler for the view to handle. The view model may also be responsible for defining logical state changes that affect some aspect of the display in the view, such as an indication that some operation is pending. A minimal sketch follows.
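A minimal view model sketch using INotifyPropertyChanged so that the view's bindings pick up state changes automatically (the OrderViewModel name and its property are illustrative):
using System.ComponentModel;

public class OrderViewModel : INotifyPropertyChanged
{
    private bool _isBusy;

    // Logical state the view can bind to, e.g. to show a progress indicator
    public bool IsBusy
    {
        get { return _isBusy; }
        set
        {
            if (_isBusy != value)
            {
                _isBusy = value;
                OnPropertyChanged("IsBusy");
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}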

Sunday, January 8, 2017

Repository Pattern

The Repository pattern separates the data access logic from the business logic and maps the data to the domain entities. The domain entities, the data access logic and the business logic talk to each other through interfaces, which hides the details of data access from the business logic. In other words, the business logic can access the data objects without having knowledge of the underlying data access architecture.

Using this pattern, the business logic is not aware of whether the application is using LINQ to SQL or the ADO.NET Entity Framework. In the future, the underlying data source or architecture can be changed without affecting the business logic.

Some advantages of using the Repository pattern: it eases domain-driven development, business logic and database access logic can be tested separately, code duplication is avoided, and maintenance is easier because data access logic is centralized.

Repository Pattern illustration:
As a first step we will create a base entity interface with common properties, a Book entity, and a base repository interface with CRUD operations
// Entity Base Interface with common properties
public interface IEntityBase
{
int ID { get; set; }
DateTime? CreateDate { get; set; }
DateTime? ModifiedDate { get; set; }
}

// Base Repository Interface with CRUD operations
public interface IBaseRepository<T> where T : IEntityBase
{
IEnumerable<T> All();
T Create(T entity);
void Update(T entity);
T Delete(T entity);
Task<int> Save();
}

// Book Entity implementing IEntityBase
public class Book : IEntityBase
{
public int ID { get; set; }
public DateTime? CreateDate { get; set; }
public DateTime? ModifiedDate { get; set; }
public string Title { get; set; }
public decimal Price { get; set; }
}
Generic Implementation of Repository Pattern
Create a BookRepository class implementing IBaseRepository<T> as below:
public class BookRepository<T> : IBaseRepository<T> where T : class, IEntityBase
{
private DotNetLearningContext context;
protected readonly IDbSet<T> _dbset;

public BookRepository(DotNetLearningContext context)
{
this.context = context;
this._dbset = context.Set<T>();
}

public virtual IEnumerable<T> All()
{
return _dbset.AsEnumerable();
}

public IEnumerable<T> FindBy(System.Linq.Expressions.Expression<Func<T, bool>> predicate)
{
IEnumerable<T> query = _dbset.Where(predicate).AsEnumerable();
return query;
}

public virtual T Create(T entity)
{
return context.Set<T>().Add(entity);
}

public virtual void Update(T entity)
{
context.Entry(entity).State = System.Data.Entity.EntityState.Modified;
}

public virtual T Delete(T entity)
{
return context.Set<T>().Remove(entity);
}

public async Task<int> Save()
{
return await context.SaveChangesAsync();
}
}
You can use / call this repository class as below:
IBaseRepository<Book> bookRepository = new BookRepository<Book>(new DotNetLearningContext());
var result = bookRepository.All();

foreach (var r in result)
{
Console.WriteLine(r.Title);
}
Dependency Injection Implementation of Repository Pattern
Create a BaseRepository class implementing IBaseRepository<T> as below:
public class BaseRepository<T> : IBaseRepository<T> where T : class, IEntityBase
{
private DotNetLearningContext context;
protected readonly IDbSet<T> _dbset;

public BaseRepository(DotNetLearningContext context)
{
this.context = context;
this._dbset = context.Set<T>();
}

public virtual IEnumerable<T> All()
{
return _dbset.AsEnumerable();
}

public IEnumerable<T> FindBy(System.Linq.Expressions.Expression<Func<T, bool>> predicate)
{
IEnumerable<T> query = _dbset.Where(predicate).AsEnumerable();
return query;
}

public virtual T Create(T entity)
{
return context.Set<T>().Add(entity);
}

public virtual void Update(T entity)
{
context.Entry(entity).State = System.Data.Entity.EntityState.Modified;
}

public virtual T Delete(T entity)
{
return context.Set<T>().Remove(entity);
}

public async Task<int> Save()
{
return await context.SaveChangesAsync();
}
}
Using a UnityContainer, register the interface and class implementations so they can be resolved and injected as dependencies, as below in the App/WebApiConfig classes:
// Dependency Injection resolver
var container = new UnityContainer();

container.RegisterType(typeof(IBaseRepository<>), typeof(BaseRepository<>));

config.DependencyResolver = new UnityResolver(container);
Inject the resolved repositories into controllers as below:
public class BookController : ApiController
{
private readonly IBaseRepository<Book> _bookRepository;

public BookController(IBaseRepository<Book> bookRepository)
{
// The repository instance is supplied by the Unity container registered above
_bookRepository = bookRepository;
}
}
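As a sketch of how the injected repository might be used, the action methods below could be added inside the BookController above (the GET actions and routes are assumptions, not part of the original illustration):
// GET api/book
public IEnumerable<Book> GetBooks()
{
    return _bookRepository.All();
}

// GET api/book/5
public IHttpActionResult GetBook(int id)
{
    // All() returns IEnumerable<Book>, so this lookup runs in memory;
    // a dedicated FindBy/GetById repository method would push it to the database
    var book = _bookRepository.All().FirstOrDefault(b => b.ID == id);
    if (book == null)
        return NotFound();

    return Ok(book);
}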