Posted by Belbinson Toby, 3:31 AM
This is an agile HTML parser that builds a read/write DOM and supports plain XPath or XSLT (you don't actually have to understand XPath or XSLT to use it). It is a .NET code library that allows you to parse "out of the web" HTML files. The parser is very tolerant of "real world" malformed HTML. The object model is very similar to what System.Xml proposes, but for HTML documents (or streams).
Html Agility Pack now supports LINQ to Objects (via a LINQ to XML-like interface). Check out the new beta to play with this feature.
Sample applications:
- Page fixing or generation. You can fix a page the way you want, modify the DOM, add nodes, copy nodes, well... you name it.
- Web scanners. You can easily get to img/src or a/href attributes with a handful of XPath queries.
- Web scrapers. You can easily scrape any existing web page into an RSS feed, for example, with just an XSLT file serving as the binding. An example of this is provided.
There is no dependency on anything other than .NET's XPath implementation. There is no dependency on Internet Explorer's MSHTML DLL, W3C's HTML Tidy, ActiveX/COM objects, or anything like that. There is also no adherence to XHTML or XML, although you can actually produce XML using the tool. The version posted here on CodePlex is for the .NET Framework 2.0. If you need the old version, please go to the old page or drop me a note.
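As a minimal sketch of the web-scanner scenario (a hypothetical fragment; it assumes the Html Agility Pack assembly is referenced):

```csharp
using HtmlAgilityPack; // NuGet package: HtmlAgilityPack

class ScannerSketch
{
    static void Main()
    {
        // Parse a fragment of (deliberately malformed) HTML from a string;
        // the parser tolerates the missing </a> and </html> tags.
        var doc = new HtmlDocument();
        doc.LoadHtml("<html><body><a href='https://example.com'>link</body>");

        // XPath query against the in-memory DOM, much like System.Xml.
        foreach (var link in doc.DocumentNode.SelectNodes("//a[@href]"))
        {
            System.Console.WriteLine(link.GetAttributeValue("href", ""));
        }
    }
}
```

The same SelectNodes call works for the img/src case by swapping the XPath expression for "//img[@src]".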
Posted by Belbinson Toby, Saturday, March 28, 2015, 7:39 AM
A critical aspect of COM is how clients and servers interact. A COM client is whatever code or object gets a pointer to a COM server and uses its services by calling the methods of its interfaces. A COM server is any object that provides services to clients; these services are in the form of COM interface implementations that can be called by any client that is able to get a pointer to one of the interfaces on the server object.
There are two main types of servers, in-process and out-of-process. In-process servers are implemented in a dynamic linked library (DLL), and out-of-process servers are implemented in an executable file (EXE). Out-of-process servers can reside either on the local computer or on a remote computer. In addition, COM provides a mechanism that allows an in-process server (a DLL) to run in a surrogate EXE process to gain the advantage of being able to run the process on a remote computer. For more information, see DLL Surrogates.
The COM programming model and constructs have now been extended so that COM clients and servers can work together across the network, not just within a given computer. This enables existing applications to interact with new applications and with each other across networks with proper administration, and new applications can be written to take advantage of networking features.
COM client applications do not need to be aware of how server objects are packaged, whether they are packaged as in-process objects (in DLLs) or as local or remote objects (in EXEs). Distributed COM further allows objects to be packaged as service applications, synchronizing COM with the rich administrative and system-integration capabilities of Windows.
Note: Throughout this documentation the acronym COM is used in preference to DCOM. This is because DCOM is not separate; it is just COM with a longer wire. In cases where what is being described is specifically a remote operation, the term distributed COM is used.
COM is designed to make it possible to add the support for location transparency that extends across a network. It allows applications written for single computers to run across a network and provides features that extend these capabilities and add to the security necessary in a network. (For more information, see Security in COM.)
COM specifies a mechanism by which the class code can be used by many different applications.
For more information, see the following topics:
- Getting a Pointer to an Object
- Creating an Object Through a Class Object
- COM Server Responsibilities
- Persistent Object State
- Providing Class Information
- Inter-Object Communication
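From managed code, the client side of this model can be sketched through COM interop; the ProgID below is hypothetical, and Activator.CreateInstance ends up calling CoCreateInstance under the covers:

```csharp
using System;

class ComClientSketch
{
    static void Main()
    {
        // Hypothetical ProgID; substitute any registered COM server.
        // The client neither knows nor cares whether the server is an
        // in-process DLL or an out-of-process EXE (location transparency).
        Type comType = Type.GetTypeFromProgID("Sample.Server");
        if (comType == null)
        {
            Console.WriteLine("COM server not registered.");
            return;
        }

        // The class object creates the instance and hands back an
        // interface pointer, surfaced here as a runtime callable wrapper.
        object server = Activator.CreateInstance(comType);
        Console.WriteLine("Created: " + server.GetType());
    }
}
```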
Related topics
Posted by Belbinson Toby, Wednesday, March 4, 2015, 4:47 AM
Introduction
Type casting is one of the unavoidable things in software development. In many situations we need to convert one object (type) to another, and sometimes we get an exception like this: "Cannot implicitly convert type 'Object one' to 'object two'". To avoid this kind of exception and to check object compatibility up front, C# provides two operators: is and as.
is operator
The is operator in C# checks an object's type and returns a bool value: true if the object is of that type and false if not. For null references, it returns false.
Syntax:
bool isobject = (Object is Type);
Example:
namespace IsAndAsOperators
{
    // Sample Student class
    class Student
    {
        public int stuNo { get; set; }
        public string Name { get; set; }
        public int Age { get; set; }
    }

    // Sample Employee class
    class Employee
    {
        public int EmpNo { get; set; }
        public string Name { get; set; }
        public int Age { get; set; }
        public double Salary { get; set; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Student stuObj = new Student();
            stuObj.stuNo = 1;
            stuObj.Name = "Siva";
            stuObj.Age = 15;

            Employee EMPobj = new Employee();
            EMPobj.EmpNo = 20;
            EMPobj.Name = "Rajesh";
            EMPobj.Salary = 100000;
            EMPobj.Age = 25;

            // is operator: check whether EMPobj is of type Student
            bool isStudent = (EMPobj is Student);
            System.Console.WriteLine("Empobj is a Student ?: {0}", isStudent.ToString());

            // Check whether stuObj is of type Student
            isStudent = (stuObj is Student);
            System.Console.WriteLine("Stuobj is a Student ?: {0}", isStudent.ToString());

            stuObj = null;
            // For a null reference, is returns false
            isStudent = (stuObj is Student);
            System.Console.WriteLine("Stuobj(null) is a Student ?: {0}", isStudent.ToString());
            System.Console.ReadLine();
        }
    }
}
Output
Empobj is a Student ?: False
Stuobj is a Student ?: True
Stuobj(null) is a Student ?: False
as operator
The as operator does the same job as the is operator, but instead of a bool it returns the object converted to the target type if the types are compatible, and null otherwise.
Syntax:
Type obj = Object as Type;
Example:
namespace IsAndAsOperators
{
    // Sample Student class
    class Student
    {
        public int stuNo { get; set; }
        public string Name { get; set; }
        public int Age { get; set; }
    }

    // Sample Employee class
    class Employee
    {
        public int EmpNo { get; set; }
        public string Name { get; set; }
        public int Age { get; set; }
        public double Salary { get; set; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Student stuObj = new Student();
            stuObj.stuNo = 1;
            stuObj.Name = "Praveen";
            stuObj.Age = 15;

            Employee EMPobj = new Employee();
            EMPobj.EmpNo = 20;
            EMPobj.Name = "Rajesh";
            EMPobj.Salary = 100000;
            EMPobj.Age = 25;

            System.Console.WriteLine("Empobj is a Student ?: {0}", CheckAndConvertobject(EMPobj));
            System.Console.WriteLine("StuObj is a Student ?: {0}", CheckAndConvertobject(stuObj));
            System.Console.ReadLine();
        }

        public static string CheckAndConvertobject(dynamic obj)
        {
            // If obj is of type Student, 'as' assigns the converted value
            // to stuobj; otherwise it assigns null.
            Student stuobj = obj as Student;
            if (stuobj != null)
                return "This is a Student and his name is " + stuobj.Name;
            return "Not a Student";
        }
    }
}
Output:
Empobj is a Student ?: Not a Student
StuObj is a Student ?: This is a Student and his name is Praveen
Advantage of 'as' over 'is'
With the is operator, a type cast takes two steps:
- Check the type using is.
- If the result is true, perform the cast.
This affects performance, because each time the CLR walks the inheritance hierarchy, checking each base type against the specified type. The as operator does the check and the conversion in one step. Prefer as when you need the converted object, and use is only when you just want to check the type.
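As an aside, later C# versions (7.0 onwards) let the is operator do the check and the conversion in a single step via a declaration pattern, avoiding the double type check described above. A minimal sketch:

```csharp
class Student
{
    public string Name { get; set; }
}

class Program
{
    static void Describe(object obj)
    {
        // 'is' with a declaration pattern: checks the type and,
        // on success, binds the converted value to 's' in one step.
        if (obj is Student s)
            System.Console.WriteLine("Student: " + s.Name);
        else
            System.Console.WriteLine("Not a Student");
    }

    static void Main()
    {
        Describe(new Student { Name = "Siva" });
        Describe("just a string");
    }
}
```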
Posted by Belbinson Toby, Tuesday, March 3, 2015, 4:21 AM
CREATE PROCEDURE [usp_Customer_INS_By_XML]
    @Customer_XML XML
AS
BEGIN
    DECLARE @xmldoc INT;
    EXEC sp_xml_preparedocument @xmldoc OUTPUT, @Customer_XML;

    -- OPENXML example of inserting multiple customers into a table.
    INSERT INTO CUSTOMER (First_Name, Middle_Name, Last_Name)
    SELECT First_Name, Middle_Name, Last_Name
    FROM OPENXML(@xmldoc, '/ArrayOfCustomers[1]/Customer', 2)
    WITH (
        First_Name  VARCHAR(50),
        Middle_Name VARCHAR(50),
        Last_Name   VARCHAR(50)
    );

    EXEC sp_xml_removedocument @xmldoc;
END
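A call to this procedure might look like the following sketch (the element-centric layout matches the flags value of 2 passed to OPENXML; the sample data is invented):

```sql
-- Hypothetical invocation: two customers in one XML document.
EXEC usp_Customer_INS_By_XML @Customer_XML = N'
<ArrayOfCustomers>
  <Customer>
    <First_Name>John</First_Name>
    <Middle_Name>Q</Middle_Name>
    <Last_Name>Public</Last_Name>
  </Customer>
  <Customer>
    <First_Name>Jane</First_Name>
    <Middle_Name>R</Middle_Name>
    <Last_Name>Doe</Last_Name>
  </Customer>
</ArrayOfCustomers>';
```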
Posted by Belbinson Toby, 4:21 AM
https://msdn.microsoft.com/en-us/data/dn456843.aspx
What EF does by default
In all versions of Entity Framework, whenever you execute SaveChanges() to insert, update or delete on the database the framework will wrap that operation in a transaction. This transaction lasts only long enough to execute the operation and then completes. When you execute another such operation a new transaction is started.
Starting with EF6, Database.ExecuteSqlCommand() by default will wrap the command in a transaction if one was not already present. There are overloads of this method that allow you to override this behavior if you wish. Also in EF6, execution of stored procedures included in the model through APIs such as ObjectContext.ExecuteFunction() does the same (except that the default behavior cannot at the moment be overridden).
In either case, the isolation level of the transaction is whatever isolation level the database provider considers its default setting. By default, for instance, on SQL Server this is READ COMMITTED.
Entity Framework does not wrap queries in a transaction.
This default functionality is suitable for a lot of users and if so there is no need to do anything different in EF6; just write the code as you always did.
However some users require greater control over their transactions – this is covered in the following sections.
How the APIs work
Prior to EF6, Entity Framework insisted on opening the database connection itself (it threw an exception if it was passed a connection that was already open). Since a transaction can only be started on an open connection, this meant that the only way a user could wrap several operations into one transaction was either to use a TransactionScope or to use the ObjectContext.Connection property and start calling Open() and BeginTransaction() directly on the returned EntityConnection object. In addition, API calls which contacted the database would fail if you had started a transaction on the underlying database connection on your own.
Note: The limitation of only accepting closed connections was removed in Entity Framework 6. For details, see Connection Management (EF6 Onwards).
Starting with EF6 the framework now provides:
Database.BeginTransaction() : An easier method for a user to start and complete transactions themselves within an existing DbContext – allowing several operations to be combined within the same transaction and hence either all committed or all rolled back as one. It also allows the user to more easily specify the isolation level for the transaction.
Database.UseTransaction() : which allows the DbContext to use a transaction which was started outside of the Entity Framework.
Database.BeginTransaction() has two overrides – one which takes an explicit IsolationLevel and one which takes no arguments and uses the default IsolationLevel from the underlying database provider. Both overrides return a DbContextTransaction object which provides Commit() and Rollback() methods which perform commit and rollback on the underlying store transaction.
The DbContextTransaction is meant to be disposed once it has been committed or rolled back. One easy way to accomplish this is the using(…) {…} syntax which will automatically call Dispose() when the using block completes:
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;
using System.Transactions;
namespace TransactionsExamples
{
class TransactionsExample
{
static void StartOwnTransactionWithinContext()
{
using (var context = new BloggingContext())
{
using (var dbContextTransaction = context.Database.BeginTransaction())
{
try
{
context.Database.ExecuteSqlCommand(
@"UPDATE Blogs SET Rating = 5" +
" WHERE Name LIKE '%Entity Framework%'"
);
var query = context.Posts.Where(p => p.Blog.Rating >= 5);
foreach (var post in query)
{
post.Title += "[Cool Blog]";
}
context.SaveChanges();
dbContextTransaction.Commit();
}
catch (Exception)
{
dbContextTransaction.Rollback();
}
}
}
}
}
}
Note: Beginning a transaction requires that the underlying store connection is open. So calling Database.BeginTransaction() will open the connection if it is not already opened. If DbContextTransaction opened the connection then it will close it when Dispose() is called.
Passing an existing transaction to the context
Sometimes you would like a transaction which is even broader in scope and which includes operations on the same database but outside of EF completely. To accomplish this you must open the connection and start the transaction yourself and then tell EF a) to use the already-opened database connection, and b) to use the existing transaction on that connection.
To do this you must define and use a constructor on your context class which inherits from one of the DbContext constructors which take i) an existing connection parameter and ii) the contextOwnsConnection boolean.
Note: The contextOwnsConnection flag must be set to false when called in this scenario. This is important, as it informs Entity Framework that it should not close the connection when it is done with it (note the contextOwnsConnection: false argument in the snippet below):
conn.Open();
using (var context = new BloggingContext(conn, contextOwnsConnection: false))
{
    // ...
}
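The constructor described above can be sketched as follows (a hypothetical BloggingContext; the Post entity is reduced to the bare minimum needed here):

```csharp
using System.Data.Common;
using System.Data.Entity;

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BloggingContext : DbContext
{
    // contextOwnsConnection: false tells Entity Framework not to
    // close or dispose the externally owned connection when the
    // context itself is disposed.
    public BloggingContext(DbConnection existingConnection, bool contextOwnsConnection)
        : base(existingConnection, contextOwnsConnection)
    {
    }

    public DbSet<Post> Posts { get; set; }
}
```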
Furthermore, you must start the transaction yourself (including the IsolationLevel if you want to avoid the default setting) and let Entity Framework know that there is an existing transaction already started on the connection (the Database.UseTransaction() call in the example below).
Then you are free to execute database operations either directly on the SqlConnection itself, or on the DbContext. All such operations are executed within one transaction. You take responsibility for committing or rolling back the transaction and for calling Dispose() on it, as well as for closing and disposing the database connection. E.g.:
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;
using System.Transactions;
namespace TransactionsExamples
{
class TransactionsExample
{
static void UsingExternalTransaction()
{
using (var conn = new SqlConnection("..."))
{
conn.Open();
using (var sqlTxn = conn.BeginTransaction(System.Data.IsolationLevel.Snapshot))
{
try
{
var sqlCommand = new SqlCommand();
sqlCommand.Connection = conn;
sqlCommand.Transaction = sqlTxn;
sqlCommand.CommandText =
@"UPDATE Blogs SET Rating = 5" +
" WHERE Name LIKE '%Entity Framework%'";
sqlCommand.ExecuteNonQuery();
using (var context =
new BloggingContext(conn, contextOwnsConnection: false))
{
context.Database.UseTransaction(sqlTxn);
var query = context.Posts.Where(p => p.Blog.Rating >= 5);
foreach (var post in query)
{
post.Title += "[Cool Blog]";
}
context.SaveChanges();
}
sqlTxn.Commit();
}
catch (Exception)
{
sqlTxn.Rollback();
}
}
}
}
}
}
Notes:
- You can pass null to Database.UseTransaction() to clear Entity Framework’s knowledge of the current transaction. Entity Framework will neither commit nor rollback the existing transaction when you do this, so use with care and only if you’re sure this is what you want to do.
- You will see an exception from Database.UseTransaction() if you pass a transaction:
- When the Entity Framework already has an existing transaction
- When Entity Framework is already operating within a TransactionScope
- Whose connection object is null (i.e. one which has no connection – usually this is a sign that that transaction has already completed)
- Whose connection object does not match the Entity Framework’s connection.
Using transactions with other features
This section details how the above transactions interact with:
- Connection resiliency
- Asynchronous methods
- TransactionScope transactions
Connection Resiliency
The new Connection Resiliency feature does not work with user-initiated transactions. For details, see Limitations with Retrying Execution Strategies.
Asynchronous Programming
The approach outlined in the previous sections needs no further options or settings to work with the asynchronous query and save methods. But be aware that, depending on what you do within the asynchronous methods, this may result in long-running transactions, which can in turn cause deadlocks or blocking that is bad for the performance of the overall application.
TransactionScope Transactions
Prior to EF6 the recommended way of providing larger scope transactions was to use a TransactionScope object:
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;
using System.Transactions;
namespace TransactionsExamples
{
class TransactionsExample
{
static void UsingTransactionScope()
{
using (var scope = new TransactionScope(TransactionScopeOption.Required))
{
using (var conn = new SqlConnection("..."))
{
conn.Open();
var sqlCommand = new SqlCommand();
sqlCommand.Connection = conn;
sqlCommand.CommandText =
@"UPDATE Blogs SET Rating = 5" +
" WHERE Name LIKE '%Entity Framework%'";
sqlCommand.ExecuteNonQuery();
using (var context =
new BloggingContext(conn, contextOwnsConnection: false))
{
var query = context.Posts.Where(p => p.Blog.Rating > 5);
foreach (var post in query)
{
post.Title += "[Cool Blog]";
}
context.SaveChanges();
}
}
scope.Complete();
}
}
}
}
The SqlConnection and Entity Framework would both use the ambient TransactionScope transaction and hence be committed together.
Starting with .NET 4.5.1, TransactionScope has been updated to also work with asynchronous methods via the TransactionScopeAsyncFlowOption enumeration:
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;
using System.Threading.Tasks;
using System.Transactions;
namespace TransactionsExamples
{
class TransactionsExample
{
public static async Task AsyncTransactionScope()
{
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
using (var conn = new SqlConnection("..."))
{
await conn.OpenAsync();
var sqlCommand = new SqlCommand();
sqlCommand.Connection = conn;
sqlCommand.CommandText =
@"UPDATE Blogs SET Rating = 5" +
" WHERE Name LIKE '%Entity Framework%'";
await sqlCommand.ExecuteNonQueryAsync();
using (var context = new BloggingContext(conn, contextOwnsConnection: false))
{
var query = context.Posts.Where(p => p.Blog.Rating > 5);
foreach (var post in query)
{
post.Title += "[Cool Blog]";
}
await context.SaveChangesAsync();
}
}
}
}
}
}
There are still some limitations to the TransactionScope approach:
- Requires .NET 4.5.1 or greater to work with asynchronous methods.
- It cannot be used in cloud scenarios unless you are sure you have one and only one connection (cloud scenarios do not support distributed transactions).
- It cannot be combined with the Database.UseTransaction() approach of the previous sections.
- It will throw exceptions if you issue any DDL (e.g. because of a Database Initializer) and have not enabled distributed transactions through the MSDTC Service.
- It will automatically upgrade a local transaction to a distributed transaction if you make more than one connection to a given database or combine a connection to one database with a connection to a different database within the same transaction (note: you must have the MSDTC service configured to allow distributed transactions for this to work).
- Ease of coding. If you prefer the transaction to be ambient and dealt with implicitly in the background rather than explicitly under you control then the TransactionScope approach may suit you better.
In summary, with the new Database.BeginTransaction() and Database.UseTransaction() APIs above, the TransactionScope approach is no longer necessary for most users. If you do continue to use TransactionScope then be aware of the above limitations. We recommend using the approach outlined in the previous sections instead where possible.
Posted by Belbinson Toby, Sunday, March 1, 2015, 6:37 AM
https://msdn.microsoft.com/en-us/library/aa354509%28v=vs.110%29.aspx
The Membership and Role Provider sample demonstrates how a service can use the ASP.NET membership and role providers to authenticate and authorize clients.
In this sample, the client is a console application (.exe) and the service is hosted by Internet Information Services (IIS).
Note: The setup procedure and build instructions for this sample are located at the end of this topic.
The sample demonstrates how:
- A client can authenticate by using the username-password combination.
- The server can validate the client credentials against the ASP.NET membership provider.
- The server can be authenticated by using the server's X.509 certificate.
- The server can map the authenticated client to a role by using the ASP.NET role provider.
- The server can use the PrincipalPermissionAttribute to control access to certain methods that are exposed by the service.
The membership and role providers are configured to use a store backed by SQL Server. A connection string and various options are specified in the service configuration file. The membership provider is given the name SqlMembershipProvider while the role provider is given the name SqlRoleProvider.
<!-- Set the connection string for SQL Server -->
<connectionStrings>
<add name="SqlConn"
connectionString="Data Source=localhost;Integrated Security=SSPI;Initial Catalog=aspnetdb;" />
</connectionStrings>
<system.web>
<!-- Configure the Sql Membership Provider -->
<membership defaultProvider="SqlMembershipProvider" userIsOnlineTimeWindow="15">
<providers>
<clear />
<add
name="SqlMembershipProvider"
type="System.Web.Security.SqlMembershipProvider"
connectionStringName="SqlConn"
applicationName="MembershipAndRoleProviderSample"
enablePasswordRetrieval="false"
enablePasswordReset="false"
requiresQuestionAndAnswer="false"
requiresUniqueEmail="true"
passwordFormat="Hashed" />
</providers>
</membership>
<!-- Configure the Sql Role Provider -->
<roleManager enabled ="true"
defaultProvider ="SqlRoleProvider" >
<providers>
<add name ="SqlRoleProvider"
type="System.Web.Security.SqlRoleProvider"
connectionStringName="SqlConn"
applicationName="MembershipAndRoleProviderSample"/>
</providers>
</roleManager>
</system.web>
The service exposes a single endpoint for communicating with the service, which is defined by using the Web.config configuration file. The endpoint consists of an address, a binding, and a contract. The binding is configured with a standard wsHttpBinding, which defaults to using Windows authentication. This sample sets the standard wsHttpBinding to use username authentication. The behavior specifies that the server certificate is to be used for service authentication. The server certificate must contain the same value for the SubjectName as the findValue attribute in the <serviceCertificate> of the <serviceCredentials> configuration element. In addition, the behavior specifies that authentication of username-password pairs is performed by the ASP.NET membership provider and role mapping is performed by the ASP.NET role provider, by specifying the names defined for the two providers.
<system.serviceModel>
<protocolMapping>
<add scheme="http" binding="wsHttpBinding" />
</protocolMapping>
<bindings>
<wsHttpBinding>
<!-- Set up a binding that uses Username as the client credential type -->
<binding>
<security mode ="Message">
<message clientCredentialType ="UserName"/>
</security>
</binding>
</wsHttpBinding>
</bindings>
<behaviors>
<serviceBehaviors>
<behavior>
<!-- Configure role based authorization to use the Role Provider -->
<serviceAuthorization principalPermissionMode ="UseAspNetRoles"
roleProviderName ="SqlRoleProvider" />
<serviceCredentials>
<!-- Configure user name authentication to use the Membership Provider -->
<userNameAuthentication userNamePasswordValidationMode ="MembershipProvider"
membershipProviderName ="SqlMembershipProvider"/>
<!-- Configure the service certificate -->
<serviceCertificate storeLocation ="LocalMachine"
storeName ="My"
x509FindType ="FindBySubjectName"
findValue ="localhost" />
</serviceCredentials>
<!--For debugging purposes set the includeExceptionDetailInFaults attribute to true-->
<serviceDebug includeExceptionDetailInFaults="false" />
<serviceMetadata httpGetEnabled="true"/>
</behavior>
</serviceBehaviors>
</behaviors>
</system.serviceModel>
When you run the sample, the client calls the various service operations under three different user accounts: Alice, Bob, and Charlie. The operation requests and responses are displayed in the client console window. All four calls made as user "Alice" should succeed. User "Bob" should get an access denied error when trying to call the Divide method. User "Charlie" should get an access denied error when trying to call the Multiply method. Press ENTER in the client window to shut down the client.
To set up, build, and run the sample
- To build the C# or Visual Basic .NET edition of the solution, follow the instructions in Running the Windows Communication Foundation Samples.
- Ensure that you have configured the ASP.NET Application Services Database.
Note: If you are running SQL Server Express Edition, your server name is .\SQLEXPRESS. This server should be used when configuring the ASP.NET Application Services Database as well as in the Web.config connection string.
Note: The ASP.NET worker process account must have permissions on the database that is created in this step. Use the sqlcmd utility or SQL Server Management Studio to do this.
- To run the sample in a single- or cross-computer configuration, use the following instructions.
To run the sample on the same computer
- Make sure that the path includes the folder where Makecert.exe is located.
- Run Setup.bat from the sample install folder in a Visual Studio command prompt run with administrator privileges. This installs the service certificates required for running the sample.
- Launch Client.exe from \client\bin. Client activity is displayed on the client console application.
- If the client and service are not able to communicate, see Troubleshooting Tips.
To run the sample across computers
- Create a directory on the service computer. Create a virtual application named servicemodelsamples for this directory by using the Internet Information Services (IIS) management tool.
- Copy the service program files from \inetpub\wwwroot\servicemodelsamples to the virtual directory on the service computer. Ensure that you copy the files in the \bin subdirectory. Also copy the Setup.bat, GetComputerName.vbs, and Cleanup.bat files to the service computer.
- Create a directory on the client computer for the client binaries.
- Copy the client program files to the client directory on the client computer. Also copy the Setup.bat, Cleanup.bat, and ImportServiceCert.bat files to the client.
- On the server, open a Visual Studio command prompt with administrative privileges and run setup.bat service. Running setup.bat with the service argument creates a service certificate with the fully-qualified domain name of the computer and exports the service certificate to a file named Service.cer.
- Edit Web.config to reflect the new certificate name (in the findValue attribute in the <serviceCertificate> of <serviceCredentials>), which is the same as the fully-qualified domain name of the computer.
- Copy the Service.cer file from the service directory to the client directory on the client computer.
- In the Client.exe.config file on the client computer, change the address value of the endpoint to match the new address of your service.
- On the client, open a Visual Studio command prompt with administrative privileges and run ImportServiceCert.bat. This imports the service certificate from the Service.cer file into the CurrentUser - TrustedPeople store.
- On the client computer, launch Client.exe from a command prompt. If the client and service are not able to communicate, see Troubleshooting Tips.
To clean up after the sample
- Run Cleanup.bat in the samples folder after you have finished running the sample.
Note: This script does not remove service certificates on a client when running this sample across computers. If you have run Windows Communication Foundation (WCF) samples that use certificates across computers, be sure to clear the service certificates that have been installed in the CurrentUser - TrustedPeople store. To do this, use the following command: certmgr -del -r CurrentUser -s TrustedPeople -c -n <Fully Qualified Server Machine Name>. For example: certmgr -del -r CurrentUser -s TrustedPeople -c -n server1.contoso.com.
The Setup Batch File
The Setup.bat batch file included with this sample allows you to configure the server with relevant certificates to run a self-hosted application that requires server certificate-based security. This batch file must be modified to work across computers or to work in a non-hosted case.
The following provides a brief overview of the different sections of the batch files so that they can be modified to run in the appropriate configuration.
- Creating the server certificate. The following lines from the Setup.bat batch file create the server certificate to be used. The %SERVER_NAME% variable specifies the server name; change this variable to specify your own server name. The batch file defaults it to localhost. The certificate is stored in the My (Personal) store under the LocalMachine store location.

echo ************
echo Server cert setup starting
echo %SERVER_NAME%
echo ************
echo making server cert
echo ************
makecert.exe -sr LocalMachine -ss MY -a sha1 -n CN=%SERVER_NAME% -sky exchange -pe

- Installing the server certificate into the client's trusted certificate store. The following lines in the Setup.bat batch file copy the server certificate into the client trusted people store. This step is required because certificates generated by Makecert.exe are not implicitly trusted by the client system. If you already have a certificate that is rooted in a client trusted root certificate (for example, a Microsoft-issued certificate), this step of populating the client certificate store with the server certificate is not required.
certmgr.exe -add -r LocalMachine -s My -c -n %SERVER_NAME% -r CurrentUser -s TrustedPeople
Posted by Belbinson Toby, 6:31 AM
Authorization determines whether an identity should be granted the requested type of access to a given resource.
ASP.NET implements authorization through authorization providers, the modules that contain the code to authorize access to a given resource. ASP.NET includes the following authorization modules.
| ASP.NET Authorization Provider | Description |
|---|---|
| File authorization | File authorization is performed by the FileAuthorizationModule, and is active when the application is configured to use Windows authentication. It checks the access control list (ACL) of the file to determine whether a user should have access to the file. ACL permissions are verified for the Windows identity or, if impersonation is enabled, for the Windows identity of the ASP.NET process. For more information, see ASP.NET Impersonation. |
| URL authorization | URL authorization is performed by the UrlAuthorizationModule, which maps users and roles to URLs in ASP.NET applications. This module can be used to selectively allow or deny access to arbitrary parts of an application (typically directories) for specific users or roles. |
Configuring authorization using the <authorization> section
To enable URL authorization for a given directory (including the application root directory), you need to set up a configuration file that contains an authorization section for that directory. The general syntax for the authorization section is as follows:

```
<authorization>
  <[allow|deny] users roles verbs />
</authorization>
```
The allow or deny element is required, and either the users or the roles attribute must be specified. Both can be included, but both are not required. The verbs attribute is optional.
The allow and deny elements grant and revoke access, respectively. Each element supports three attributes, which are defined in the following table.
| Attribute | Description |
|---|---|
| roles | Identifies a targeted role for this element. For more information, see ASP.NET Roles. |
| users | Identifies the targeted identity names (user accounts) for this element. For more information, see ASP.NET Membership. |
| verbs | Defines the HTTP verbs to which the action applies, such as GET, HEAD, or POST. The default is "*", which specifies all verbs. |
In addition to identity names, there are two special identities, as shown in the following table.
| Identity | Description |
|---|---|
| * | Refers to all identities |
| ? | Refers to the anonymous identity |
To allow John and deny everyone else, one might construct the following configuration section:

```
<authorization>
  <allow users="John" />
  <deny users="*" />
</authorization>
```
The following example grants access to Mary and members of the Admins role, while denying access to John (unless John is a member of the Admins role) and to all anonymous users:

```
<authorization>
  <allow users="Mary" />
  <allow roles="Admins" />
  <deny users="John" />
  <deny users="?" />
</authorization>
```
Both users and roles can refer to multiple entities by using a comma-separated list such as in the following:
```
<allow users="John, Mary, redmond\bar" />
```
Notice that the domain account (redmond\bar) must include both the domain and user name.
The following example lets everyone do a GET, but only Mary can use POST:

```
<authorization>
  <allow verbs="GET" users="*" />
  <allow verbs="POST" users="Mary" />
  <deny verbs="POST" users="*" />
</authorization>
```
Rules are applied using the following heuristics:
- Rules defined in application-level configuration files take precedence over inherited rules. The system determines which rule takes precedence by constructing a merged list of all rules for a URL, with the most recent rules ( those nearest in the hierarchy ) at the head of the list.
- Given a set of merged rules for an application, ASP.NET starts at the head of the list and checks rules until the first match is found.
- If a match is found and the match is an <allow> element, the module grants access to the request.
- If a match is found and the match is a <deny> element, the request is returned with a 401 HTTP status code.
- If no rules match, the request is allowed unless otherwise denied.
Notice that in the last situation, the request is allowed even if no rules were matched. This happens because the default configuration for ASP.NET defines an <allow users="*"> element, which authorizes all users; by default, this rule is applied last. To prevent this behavior, define a <deny users="*"> element at the application level.
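The heuristics above can be sketched as a simplified model in Python (an illustrative approximation, not ASP.NET's actual implementation; the rule format and function names are invented):

```python
# Simplified model of the first-match URL-authorization check.
# Each rule is (action, users, roles, verbs); "*" = everyone, "?" = anonymous.

DEFAULT_RULE = ("allow", ["*"], [], [])  # the implicit <allow users="*">, applied last

def is_authorized(rules, user, roles, verb, anonymous=False):
    """Walk the merged rule list head-first; the first matching rule wins.

    In the real system, `rules` is the merged list for the URL, with rules
    from configuration files nearest in the hierarchy at the head.
    """
    for action, rule_users, rule_roles, rule_verbs in rules + [DEFAULT_RULE]:
        if rule_verbs and verb not in rule_verbs:
            continue  # rule is restricted to other HTTP verbs
        user_match = ("*" in rule_users
                      or ("?" in rule_users and anonymous)
                      or user in rule_users)
        role_match = any(r in rule_roles for r in roles)
        if user_match or role_match:
            return action == "allow"  # a deny match means HTTP 401
    return True  # unreachable: DEFAULT_RULE matches everyone

# The GET/POST example from above:
rules = [
    ("allow", ["*"], [], ["GET"]),
    ("allow", ["Mary"], [], ["POST"]),
    ("deny", ["*"], [], ["POST"]),
]
print(is_authorized(rules, "Mary", [], "POST"))  # True
print(is_authorized(rules, "John", [], "POST"))  # False
print(is_authorized(rules, "John", [], "GET"))   # True
```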
Like all other configuration settings, the access permissions established for a directory also apply to all of its subdirectories, unless explicitly overridden in a child configuration file.
Configuring authorization using the <location> element
Instead of defining access permissions in separate directory configuration files, you can also define one or more location elements in a root configuration file to specify the particular files or directories to which authorization settings defined in that location element should apply.
The following code example demonstrates how to allow an anonymous user to gain access to the Logon.aspx page:

```
<configuration>
  <location path="Logon.aspx">
    <system.web>
      <authorization>
        <allow users="?" />
      </authorization>
    </system.web>
  </location>
</configuration>
```
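Under the merged-list model described earlier, a location element effectively places its rules ahead of the inherited application-level rules. A minimal hypothetical sketch (simplified rules of the form (action, identity); not ASP.NET code):

```python
# Sketch: location-specific rules are merged ahead of the application's rules,
# so an <allow users="?"> for Logon.aspx is checked before an app-wide deny.
app_rules = [("deny", "?")]     # application root: deny anonymous users
logon_rules = [("allow", "?")]  # the <location path="Logon.aspx"> section

def first_match(rules, identity):
    """First matching rule wins; "*" matches everyone."""
    for action, who in rules:
        if who in ("*", identity):
            return action == "allow"
    return True  # the implicit <allow users="*"> applies last

# Anonymous request ("?") for Logon.aspx: the location rules go first.
print(first_match(logon_rules + app_rules, "?"))  # True (allowed)
# Anonymous request for any other page: only the app rules apply.
print(first_match(app_rules, "?"))                # False (denied)
```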