
Seeding an Entity Framework Database From File Data


The migrations feature of the Entity Framework includes a Seed method where you can populate the database with the initial static data an application needs. For example, a list of countries.

 

protected override void Seed(Movies.Infrastructure.MovieData context)
{
    context.Countries.AddOrUpdate(c => c.Name,
        new Country { Name = "India" },
        new Country { Name = "Japan" },
        // ..
    );
}

You'll find the Seed method in the migration configuration class created when you "enable-migrations" in a project. Since the migrations framework calls the Seed method with every database update you make, you have to be sure you don't create duplicate records in the database. The AddOrUpdate method helps prevent duplicate data. The first parameter to the method specifies a property the framework can use to identify each entity. The identifier could be the primary key value of the entity, but if your database is generating the value, as it would with identity columns, you won't know the value ahead of time. To manage a list of countries, we just want to make sure we don't duplicate a country, so identifying a record by the name of the country will suffice.
 
Behind the scenes, AddOrUpdate first queries the database to see if the record exists and, if so, updates it. If the record doesn't exist, migrations will insert a new record. AddOrUpdate is not something you'd want to use for millions of records, but it works well for reasonably sized data sets. However, sometimes you already have the data you need in a CSV or XML file and don't want to translate the data into a mountain of C# code. Or perhaps you already have initialization scripts written in SQL and just need to execute the scripts during the Seed method.
 
It's relatively easy to parse data from an XML file, or read a SQL file and execute a command. What can be tricky with the Seed method is knowing your execution context. Seed might be called as part of running "update-database" in the Visual Studio Package Manager Console window. But the Seed method might also be called when a web application launches in an IIS worker process and migrations are turned on by configuration. These are two entirely different execution contexts that complicate the otherwise mundane chore of locating and opening a file.

Putting It Somewhere Safe

One solution is to embed the data files you need into the assembly with the migration code. Add the files into the project, open the properties window, and set the Build Action property to "Embedded Resource".
 
Embedding Resources
 
Now you can open the file via GetManifestResourceStream. The Seed method might look like:
 
protected override void Seed(CitationData context)
{
    var resourceName = "Citations.App_Data.Countries.xml";
    var assembly = Assembly.GetExecutingAssembly();
    var stream = assembly.GetManifestResourceStream(resourceName);
    var xml = XDocument.Load(stream);
    var countries = xml.Element("Countries")
                       .Elements("Country")
                       .Select(x => new Country
                       {
                           Name = (string)x.Element("Name")
                       }).ToArray();

    context.Countries.AddOrUpdate(c => c.Name, countries);
}

 

Notes

The name of the resource is derived from the location of the resource ({default_namespace}.{folder}.{subfolder}.{filename}). In the example above, Countries.xml lives in the App_Data folder of the project, and the project's default namespace is "Citations". If you don't like the name, you can change it.
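If the computed name doesn't match what you expect, a quick way to check (a small sketch using the standard reflection API, assuming using directives for System.Reflection and System.Diagnostics) is to dump every resource name the assembly actually contains:

var assembly = Assembly.GetExecutingAssembly();
foreach (var name in assembly.GetManifestResourceNames())
{
    // prints names like "Citations.App_Data.Countries.xml"
    Debug.WriteLine(name);
}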
 
For .sql scripts you can use context.Database.ExecuteSqlCommand, but you'll need to break the file into separate commands if there are delimiters inside (like "GO").
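Here is a minimal sketch of that approach. The script resource name is hypothetical, and the regular expression split on "GO" lines is naive but works for simple scripts (assuming using directives for System.IO, System.Linq, System.Reflection, and System.Text.RegularExpressions):

var resourceName = "Citations.App_Data.Countries.sql"; // hypothetical embedded script
var assembly = Assembly.GetExecutingAssembly();
using (var stream = assembly.GetManifestResourceStream(resourceName))
using (var reader = new StreamReader(stream))
{
    var script = reader.ReadToEnd();

    // split on lines that contain only "GO"
    var commands = Regex.Split(script, @"^\s*GO\s*$",
        RegexOptions.Multiline | RegexOptions.IgnoreCase);

    foreach (var command in commands.Where(c => !string.IsNullOrWhiteSpace(c)))
    {
        context.Database.ExecuteSqlCommand(command);
    }
}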
 
Finally, if the data is truly static you might consider executing the data load during the Up method of a migration.
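As a sketch of that idea (the migration class name is made up, but Sql is the standard DbMigration helper for running raw SQL during a migration):

public partial class SeedCountries : DbMigration
{
    public override void Up()
    {
        Sql("INSERT INTO Countries (Name) VALUES ('India')");
        Sql("INSERT INTO Countries (Name) VALUES ('Japan')");
    }

    public override void Down()
    {
        Sql("DELETE FROM Countries WHERE Name IN ('India', 'Japan')");
    }
}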

ELMAH and MiniProfiler In ASP.NET MVC 4


If you are new to ASP.NET MVC you might not know about ELMAH and MiniProfiler. These are two distinct OSS projects that you can easily install with NuGet, and both provide valuable debugging and diagnostic services. Scott Hanselman has blogged about both projects in the past (see NuGet Package of the Week #7, and #9), but this post will highlight some of the updates since Scott's posts and still provide a general overview of the features of each project.

ELMAH

ELMAH gives you "application-wide error logging that is completely pluggable". In other words, you can easily log and view all the errors your application experiences without making any changes to your code. All you need to get started is to "install-package elmah" from the Package Manager Console in Visual Studio and ELMAH will be fully configured and available. Navigate to /elmah.axd in your application, and you'll see all the errors (this will only work for local requests, by default).

ELMAH Error Log

ELMAH will keep all the errors in memory, so an application restart will clear the log. If you want the error information to stick around longer, you can configure ELMAH to store information in a more permanent location, like inside of a database. The elmah.sqlservercompact package will add all the configuration you need to store the error log in a SQL Compact database file inside the App_Data directory of the app. There are also packages to log errors to SQL Server (elmah.sqlserver), MongoDB (elmah.mongodb), and more. Most of these packages require you to go into the web.config file to configure simple connection string settings, and possibly set up a schema (see the files dropped into App_Readme for more details).
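For example, with the SQL Server package in place, the storage configuration ultimately comes down to an errorLog entry in web.config like the following sketch (the connection string name here is an assumption, so check what the package actually adds):

<elmah>
  <errorLog type="Elmah.SqlErrorLog, Elmah"
            connectionStringName="elmah-sqlserver" />
</elmah>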

ELMAH and HandleErrorAttribute

For ASP.NET MVC projects you might want to start by installing the elmah.mvc package. The elmah.mvc package will install an ElmahController so you can view the error log by visiting the /elmah URL. More importantly, elmah.mvc will install an action filter to log errors even when the MVC HandleErrorAttribute (the one that most applications use as a global action filter) marks an exception as "handled". The HandleErrorAttribute marks exceptions as handled and renders a friendly error view when custom errors are enabled in web.config, which will usually be true when an app runs in production. These handled exceptions are invisible to ELMAH, but not when elmah.mvc is in place.

MiniProfiler

MiniProfiler is a "simple but effective" profiler you can use to help find performance problems and bottlenecks. The project has come a long way over the last year. To get started with MiniProfiler, you can install the MiniProfiler.MVC3 package (which also works in ASP.NET MVC 4). After installation, add a single line of code to your _Layout view(s) before the closing </body> tag:

@StackExchange.Profiling.MiniProfiler.RenderIncludes() 

Update:

You'll also need to add a handler so MiniProfiler can retrieve the scripts it needs:

<system.webServer>
  ...
  <handlers>
    <add name="MiniProfiler" path="mini-profiler-resources/*" verb="*"
         type="System.Web.Routing.UrlRoutingModule" resourceType="Unspecified"
         preCondition="integratedMode" />
  </handlers>
</system.webServer>


Once everything is ready, you'll start to see the MiniProfiler results appear in the top left of the browser window (for local requests only, by default). Click on the time measurement to see more details:

MiniProfiler results

MiniProfiler can tell you how long it takes for actions to execute and views to render, and provides an API if you need to instrument specific pieces of code. The MiniProfiler install will also add a MiniProfiler.cs file in your App_Start directory. This file is where most of the MiniProfiler configuration takes place. From here you can enable authorization and database profiling.
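For example, here is a minimal sketch of the instrumentation API (the step names are invented; Step returns an IDisposable that records the elapsed time of the block, and it tolerates a null profiler when profiling is disabled):

using StackExchange.Profiling;

public ActionResult Index()
{
    var profiler = MiniProfiler.Current; // null when profiling is disabled

    using (profiler.Step("Load movies"))
    {
        // ... query the database ...
    }
    using (profiler.Step("Build view model"))
    {
        // ... shape the results for the view ...
    }
    return View();
}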

Two great projects to check out, if you haven't seen them before.

Perils of the MVC4 AccountController

The final release of ASP.NET MVC 4 brought some changes to how membership works in the MVC Internet Project template. While pre-release versions of MVC 4 all used the traditional ASP.NET membership providers for forms authentication, the final release relies on the WebSecurity and OAuthWebSecurity classes from the Web Matrix and Web Pages assemblies. Behind the scenes, the SimpleMembershipProvider and the ExtendedMembershipProvider, as well as DotNetOpenAuth are all at work.
These changes are a good move, because many web sites no longer want to store user credentials locally. Instead, they want to use OAuth and OpenID so someone else is responsible for keeping passwords safe, and people coming to the site have one less password to invent (or one less place to share an existing password). With these changes it is easy to authenticate a user via Facebook, Twitter, Microsoft, or Google. All you need to do is plug in the right keys.

However, there are a few issues to be aware of if you build an application on top of the existing code in the AccountController.

1. The InitializeSimpleMembershipAttribute

The InitializeSimpleMembershipAttribute (let’s call it ISMA) is part of the code generated by the MVC 4 Internet template. ISMA exists to provide a lazy initialization of the underlying providers, as well as potentially create the database for storing membership, roles, and OAuth logins. You’ll find it decorating the AccountController as an action filter, so it can jump in and initialize all the bits before the controller starts to interact with the security related classes.

[Authorize]
[InitializeSimpleMembership]
public class AccountController : Controller
{
    // ...
}

As an action filter, ISMA hooks into OnActionExecuting to perform the lazy initialization work, but this can be too late in the life cycle. The Authorize attribute will need the providers to be ready earlier if it needs to perform role based access checks (during OnAuthorization). In other words, if the first request to a site hits a controller action like the following:

[Authorize(Roles="Sales")]

...then you'll get an exception when the filter checks the user's role, because the providers aren't initialized. We'll talk about concerns over the exception message details later.

My recommendation is to remove ISMA from the project, and initialize WebSecurity during the application start event.
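A minimal sketch of that setup in global.asax.cs follows. The table and column names match the defaults in the Internet template, but treat them as assumptions to verify against your own UserProfile class (WebSecurity lives in WebMatrix.WebData):

protected void Application_Start()
{
    // ... existing route, filter, and bundle registration ...

    WebSecurity.InitializeDatabaseConnection(
        "DefaultConnection", "UserProfile", "UserId", "UserName",
        autoCreateTables: true);
}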

2. Usage of the Entity Framework

The MVC 4 Internet project template uses EF code-first classes to manage user profiles, so it includes a DbContext derived class (UsersContext) and a UserProfile entity, both of which are defined in the Models\AccountModel.cs file.

If you plan on customizing the UserProfile class, make sure you let the application create the database for you. You can see the database creation code in the ISMA also.

If you create a database yourself, make sure to include all the tables needed for the WebSecurity class. There are no published scripts that I have found, but you can always generate DDL scripts after the application runs.

webpages membership tables

If you create an empty database and run the application to let WebSecurity create its own tables, it will not include any of your user profile customizations (it will only do that for you if the database doesn’t exist).

There is a strange tension in a new application because it’s not clear who is responsible for getting all the pieces to work together. WebSecurity can create the database schema it needs to operate, but can miss creating something you add to the UserProfile entity, which is code you can customize, but you wouldn’t know that at a first glance.

If you are using the Entity Framework for an application, my recommendation is to remove the DbContext derived class from AccountModel.cs and take ownership of the UserProfile class. You'll need to fix the places in the AccountController that use the template's context class, replacing those pieces of code with your own context class.

If you are not using the Entity Framework, you’ll want to remove the DbContext derived class in any case.

3. Impact on EF Migrations

Because the Internet project template includes an Entity Framework DbContext class in the project, if you add your own DbContext to the project and try to enable EF migrations, you’ll be greeted with an error.

PM> enable-migrations
More than one context type was found in the assembly.
To enable migrations for [Context], use Enable-Migrations -ContextTypeName [Context].

The problem is easy to solve. As the error states, you can use the -ContextTypeName flag to specify your context class name. Note that you can only have migrations for one context in a project, so if you want migrations for both contexts you'll need to move one to a different project. Again, my recommendation is to remove the existing UsersContext the Internet project template creates, and take ownership of the user profile in your own context.

4. Impact on Seeding Data

The previous ASP.NET membership providers were easy to work with from the Seed method in an EF migrations class. With this new approach you need to perform some additional configuration steps. I covered these steps in a previous post: “Seeding Membership and Roles in ASP.NET MVC 4”.

5. Impact of WebMatrix Assemblies

It’s possible to run into an exception message like the following (like when the ISMA code runs too late):

You must call the "WebSecurity.InitializeDatabaseConnection" method before you call any other method of the "WebSecurity" class. This call should be placed in an _AppStart.cshtml file in the root of your site.

We don’t use an _AppStart.cshtml file in ASP.NET MVC, so look to add code to Application_Start in global.asax.cs instead.

6. Impact of the WebMatrix Style

A few people have written me to ask if the AccountController created by the Internet application template is the future of ASP.NET MVC coding. A representative sample appears below:

public ActionResult Register(RegisterModel model)
{
    if (ModelState.IsValid)
    {
        // Attempt to register the user
        try
        {
            WebSecurity.CreateUserAndAccount(model.UserName, model.Password);
            WebSecurity.Login(model.UserName, model.Password);
            return RedirectToAction("Index", "Home");
        }
        catch (MembershipCreateUserException e)
        {
            ModelState.AddModelError("", ErrorCodeToString(e.StatusCode));
        }
    }

    // If we got this far, something failed, redisplay form
    return View(model);
}

Specifically, people are asking about the number of static method calls through WebSecurity and trying to figure out the impact on testing and extensibility, as well as the impact on impressionable young minds who might read too much into the code.

I’ve assured everyone the future is not full of static methods. Or, at least not my future.

Where Are We?

In some upcoming posts we’ll explore an alternative approach to membership, roles, and OAuth in ASP.NET MVC 4, and see if there is an approach that is simple, testable, and extensible enough to work with more than a relational database for storage.

Build Your Own Membership System For ASP.NET MVC - Part I


Building a piece of software to manage users is easy, but only if you know exactly what you want. After all, most of the code inside the various existing ASP.NET providers consists of straightforward parameter validation and data access. While this membership code is simple in isolation, there is still value inside the existing providers. The providers have proven themselves in production for thousands of web sites.

Unfortunately, it is difficult to derive value from the existing providers and reuse just the parts you need when building a custom membership solution for an application. The providers entangle a number of responsibilities and require a relational database. This has always been a source of frustration when building YACMP (yet another custom membership provider). My typical approach is to start from scratch by deriving from the abstract MembershipProvider class.

However, starting with the abstract MembershipProvider class doesn’t give me any inherent benefits in an ASP.NET MVC or Web API application. There are no custom controls to drag from the toolbox that will automatically integrate with a custom provider, and other than the Authorize attribute (which works against the roles provider), there is no implicit dependency on Membership.Provider or Roles.Provider, which are the typical static gateways to membership and role features.

There are actually drawbacks to building custom providers with ASP.NET MVC. The provider model doesn’t easily cooperate with the dependency resolution features of MVC and Web API. Also, the API is a bit dated and doesn’t have the ability to work with OAuth or OpenID.

The solution to the OAuth problem in a new MVC 4 Internet application is to combine a new membership provider (the SimpleMembershipProvider) with some Web Matrix bits (the WebSecurity class) into something that works with OAuth and still allows a user to register locally with a password. Unfortunately, the result still depends on a relational database and is complicated to understand, extend, and debug (search for MVC 4 SimpleMembership and you’ll find more questions on StackOverflow than anything else).

Given that the traditional provider model doesn’t provide many benefits for MVC and WebAPI, what would it look like to build a membership system and not start by deriving from MembershipProvider? That’s the topic for the next post.

Build Your Own Membership System For ASP.NET MVC - Part II


MemFlex is a look at what is possible in an ASP.NET MVC application if you eschew the existing ASP.NET providers for membership, roles, and OAuth. MemFlex doesn’t use the existing providers, but does use classes from the .NET framework and DotNetOpenAuth to build a membership and roles system with some simple requirements:

- Support all the actions of the MVC 4 Internet AccountController (register, login, login with OAuth).

- Be test friendly (by providing interface definitions for both clients and dependencies)

- Run without ASP.NET (for integration tests and database migrations, as two examples).

- Work with a variety of data sources (two common scenarios for storing user accounts these days involve document databases and web services).

Here are parts of the project as they exist today.

Sample Application

The sample application is a typical MVC 4 Internet application where the AccountController and database migrations use the FlexMembershipProvider. There are classes provided for working with both the Entity Framework and RavenDB, and you can (almost) switch between the two by adjusting an assembly reference and the namespaces you use.

After you’ve defined what a User model should look like, the first step is configuring a FlexMembershipUserStore to work with your custom user type.

using FlexProviders.EF;

namespace LogMeIn.Models
{
    public class UserStore : FlexMembershipUserStore<User>
    {
        public UserStore(MovieDb db) : base(db)
        {
        }
    }
}

The above code snippet, which uses EF, just requires a generic type parameter (your user type), and a DbContext object to work with. The UserStore is then plugged into the FlexMembershipProvider. You can do this by hand, or let an IoC container take care of the work.

var membership = new FlexMembershipProvider(
    new UserStore(context),
    new AspnetEnvironment());

Once the FlexMembershipProvider is initialized inside a controller, you can use an API that looks a bit like the traditional ASP.NET Membership API.

_membershipProvider.Login(model.UserName, model.Password);
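To give a feel for the consuming side, here is a sketch of a Login action built around the provider. The constructor injection and the error message wording are assumptions, not code from the project:

public class AccountController : Controller
{
    private readonly IFlexMembershipProvider _membershipProvider;

    public AccountController(IFlexMembershipProvider membershipProvider)
    {
        _membershipProvider = membershipProvider;
    }

    [HttpPost]
    public ActionResult Login(LoginModel model)
    {
        if (ModelState.IsValid &&
            _membershipProvider.Login(model.UserName, model.Password))
        {
            return RedirectToAction("Index", "Home");
        }

        ModelState.AddModelError("", "The user name or password is incorrect.");
        return View(model);
    }
}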

Everything Else

The FlexProviders part of the project consists of 4 pieces: the integration tests, a RavenDB user store, an EF user store, and the FlexProviders themselves.

FlexProviders

The FlexProviders project defines the basic abstractions for a flexible membership system. For example, the interface definition to work with locally registered users (for now, a separate interface provides the OAuth functionality):

public interface IFlexMembershipProvider
{
    bool Login(string username, string password);
    void Logout();
    void CreateAccount(IFlexMembershipUser user);
    bool HasLocalAccount(string username);
    bool ChangePassword(string username, string oldPassword, string newPassword);
}

There is also a concrete implementation of a flexible membership provider:

public class FlexMembershipProvider : IFlexMembershipProvider,
                                      IFlexOAuthProvider,
                                      IOpenAuthDataProvider
{
    public FlexMembershipProvider(
        IFlexUserStore userStore,
        IApplicationEnvironment applicationEnvironment)
    {
        _userStore = userStore;
        _applicationEnvironment = applicationEnvironment;
    }

    public bool Login(string username, string password)
    {
        var user = _userStore.GetUserByUsername(username);
        if (user == null)
        {
            return false;
        }

        // ... omitted for brevity ...
    }

    // ...
}

Of course since the membership provider requires an IFlexUserStore dependency, the operations required for data access are defined in this project in the IFlexUserStore interface. There is also an AspnetEnvironment class that removes hard dependencies on test unfriendly bits like HttpContext.Current.

public class AspnetEnvironment : IApplicationEnvironment
{
    public void IssueAuthTicket(string username, bool persist)
    {
        FormsAuthentication.SetAuthCookie(username, persist);
    }

    // ...
}

FlexProviders.EF and FlexProviders.Raven

It’s relatively straightforward to build classes that will take care of the data access required by a membership provider. For both Raven and EF, all you really need is a generic type parameter and a unit of work. For Raven:

namespace FlexProviders.Raven
{
    public class FlexMembershipUserStore<TUser>
        : IFlexUserStore where TUser : class, IFlexMembershipUser, new()
    {
        private readonly IDocumentSession _session;

        public FlexMembershipUserStore(IDocumentSession session)
        {
            _session = session;
        }

        public IFlexMembershipUser GetUserByUsername(string username)
        {
            return _session.Query<TUser>()
                           .SingleOrDefault(u => u.Username == username);
        }

        // ...
    }
}

And the EF version:

namespace FlexProviders.EF
{
    public class FlexMembershipUserStore<TUser>
        : IFlexUserStore where TUser : class, IFlexMembershipUser, new()
    {
        private readonly DbContext _context;

        public FlexMembershipUserStore(DbContext context)
        {
            _context = context;
        }

        public IFlexMembershipUser GetUserByUsername(string username)
        {
            return _context.Set<TUser>()
                           .SingleOrDefault(u => u.Username == username);
        }

        // ...
    }
}

FlexProviders.Tests

This project is a set of integration tests to verify the EF and Raven providers actually put and retrieve data with real databases. The EF tests require the DTC to be running (net start msdtc). The tests are configured to use SQL 2012 LocalDB (for EF) by default, while the Raven tests use Raven’s in-memory embedded mode.

None of the code is vetted or hardened and still needs some work, but if you find it to be an inspiration or have some feedback or pull requests, let me know.

There are no NuGet packages available, as yet.

Conclusion

I said in the last post that building a membership system is easy if you know exactly what you want. You can mostly rely on other pieces of the framework for the hard parts (creating secure cookies and handling cryptography, for example, and on DotNetOpenAuth for the OAuth heavy lifting).

The hard part of building a membership system is when you try to build it for unknown customers and unknown scenarios. I don’t envy Microsoft in the sense that if they build a membership system that is simple, 60% of their customers will say it doesn’t work for their application. If they build a membership system that is sophisticated enough to work with most applications, 60% of their customers will say it looks like WCF. Somewhere there is a sweet spot that will make a majority of customers happy.

What I have here will work for most of the applications I’ve been associated with, provides some flexibility in the data store, and still remains, I think, relatively easy to understand.

Two Great New Conferences


DevIntersection – Las Vegas

DevIntersection is the final stop in the .NET Rocks! road trip and runs from December 9th to the 12th in Las Vegas. In addition to conference sessions I’ll be doing an ASP.NET MVC 4 workshop on December 13th. Register by November 15th to receive a Windows 8 tablet!

DevIntersection Las Vegas

 

Warm Crocodile – Copenhagen

The Warm Crocodile Developer Conference is a 2 day conference in Copenhagen, Denmark with a great selection of speakers and sessions. In the words of the organizers: "We want to create a brand, a conference brand that promises to deliver on a set of great things, both people and content, but also, and just as much, fun and partying and networking."

I hope to see you there!

Abstractions, Patterns, and Interfaces


Someone recently asked me how to go about building an application that processes customer information. The customer information might live in a database, but it also might live in a .csv file.

The interesting thing is I’m in the middle of building a tool for DBAs that one day will be a fancy WPF desktop application integrated with source control repositories and relational databases, but for the moment is a simple console application using only the file system for storage. Inside I’ve faced many scenarios similar to the question being asked, scenarios I’ve run into numerous times over the years.

There are 101 ways to solve the problem, but let’s work through one solution together.

Getting Started

We might start by writing some tests, or we might start by jumping in and trying to display a list of all customers in a data source, but either way we’ll eventually find ourselves with the following code, which contains the essence of the question:

var customers = // how do I get customers?

 

To make things more concrete, let’s take that line of code and put it in a class that will do something useful: a class that writes all customer names to the output of a console program.

public class CustomerDump
{
    public void Render()
    {
        var customers = // how ?
        foreach (var customer in customers)
        {
            Console.WriteLine(customer.Name);
        }
    }
}


Although we might not know how to retrieve the customer data, we probably do know what data we need about each customer. We’ll go ahead and define a Customer class for objects to hold customer data.

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Location { get; set; }
}

Now we can work on the main question. The business has told us we need to be flexible with the customer data, so how will we go about retrieving customers?

Defining an Interface

Interfaces are wonderful for a language like C#. Interfaces give us everything we need to work with an object in a strongly-typed manner, but place the least number of constraints on the object implementing the interface. Interfaces make the C# compiler happy without forcing us to pay an inheritance tax for working with a class hierarchy. We’ll define an interface that describes exactly how we want to fetch customers and how we want the customers packaged for us to consume.

public interface ICustomerDataSource
{
    IList<Customer> FetchAllCustomers();
}

There are many subtleties to interface design. Even the simple interface here required us to make a number of decisions.

First, what is the name of the operation? Do we want to FetchAllCustomers? SelectAllCustomers? GetCustomers? I believe names are important at this level, but you don’t want to give too much away. A name like SelectAllCustomers is biased towards working with a relational database, and we know we’ll be working with more than just a SQL database.

Often the name is influenced by what we know about the project and the business. Fortunately, refactoring tools make names easy to change.

Another design decision is the return type. When you are trying to abstract away some operation, you have to decide if you’ll go for the lowest common denominator (anything can return IEnumerable), or something that might only be achieved by an advanced data source (like IQueryable). In this example we are forcing all implementations to return a list, which has some tradeoffs, but at least we know we’ll be getting a specific type of data structure. IEnumerable would target the lowest common denominator and make the interface easier to implement, but we might not have all the convenience features we need.

Once again, knowing a bit about the direction of the project and being in tune with the business needs will help in determining when to add flexibility and when to enforce constraints. 

Implementing the Interface

One question we might have had in the back of our mind is how to provide an implementation of the data loading interface when some implementations might need parameters like a database connection string, while other implementations might need file system details, like the path to the .csv file with customers inside.

When designing an interface we need to put those thoughts in the back of our mind and focus entirely on the client’s needs first. Just watch how this unfolds as we build a class to read customer data from a .csv file.

class CustomerCsvDataSource : ICustomerDataSource
{
    public CustomerCsvDataSource(string path)
    {
        _path = path;
    }

    public IList<Customer> FetchAllCustomers()
    {
        return File.ReadAllLines(_path)
                   .Select(line => line.Split(','))
                   .Select((values, index) =>
                       new Customer
                       {
                           Id = index,
                           Name = values[0],
                           Location = values[1]
                       }).ToList();
    }

    readonly string _path;
}


This isn’t the most robust CSV parser in the world (it won’t deal with embedded commas, so we might want to get some help), but it does demonstrate a pattern I’ve been using over and over again recently. The class implements the interface, stores constructor parameters in read-only fields, exposes methods to implement the interface, and above all keeps things simple, small, and focused.

Here is the pattern again, this time in a class that uses Mark Rendle’s Simple.Data to access SQL Server, but we could do the same thing with raw ADO.NET, the Entity Framework, or even MongoDB.

class CustomerDbDataSource : ICustomerDataSource
{
    public CustomerDbDataSource(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IList<Customer> FetchAllCustomers()
    {
        var db = Database.OpenConnection(_connectionString);
        return db.Customers.All().ToList<Customer>();
    }

    readonly string _connectionString;
}


We can see now that worrying about connection strings and file names while defining the interface was premature worrying. These were all implementation details the interface isn’t concerned with, as the interface only exposes the operations clients need, like the ability to fetch customers.

Instead, these classes are “programmed” with implementation specific instructions given by constructor parameters, and the instructions give them everything they need to do the work required by the interface. The classes never change the instructions (they are all saved in read-only fields), but they use the instructions to produce new results.

We have now reached the point where we have two different classes to deal with two different sources of data, but how do we use them?

Consuming the Interface

Returning to our CustomerDump class, one obvious approach to producing results is the following.

public class CustomerDump
{
    public void Render()
    {
        var dataSource = new CustomerCsvDataSource("customers.csv");
        var customers = dataSource.FetchAllCustomers();

        foreach (var customer in customers)
        {
            Console.WriteLine(customer.Name);
        }
    }
}

The above approach can work, but we’ve tied the CustomerDump class to the CSV data source by instantiating CustomerCsvDataSource directly. If we need CustomerDump to only work with a CSV data source, this is reasonable, but we know most of the application needs to work with different data sources so we’ll need to avoid this approach in most places.

Instead of CustomerDump choosing a data source and coupling itself to a specific class, we’ll force someone to give CustomerDump the data source to use.

public class CustomerDump
{
    public CustomerDump(ICustomerDataSource dataSource)
    {
        _dataSource = dataSource;
    }

    public void Render()
    {
        var customers = _dataSource.FetchAllCustomers();

        foreach (var customer in customers)
        {
            Console.WriteLine(customer.Name);
        }
    }

    readonly ICustomerDataSource _dataSource;
}

Now, any logic we have inside of CustomerDump can work with customers from anywhere, and we can add new data sources in the future. We’ve gained a lot of flexibility in an area where the business demands flexibility, and hopefully didn’t build a mountain of abstractions where none were required. All the pieces are small and focused, and the way they fit together depends on the application you are building. Which leads to the next question: who is responsible for putting CustomerDump together?

At the top level of every application built in this fashion you’ll have some bootstrapping code to arrange all the pieces and set them in motion. For a console mode application it might look like this:

static void Main(string[] args)
{
    // arrange
    var connectionString = @"server=(localdb)\v11.0;database=Customers";
    var dataSource = new CustomerDbDataSource(connectionString);
    var dump = new CustomerDump(dataSource);

    // execute
    dump.Render();
}

Here we have hard-coded values again, but you can imagine hard-coded connection strings and class names getting intermingled or replaced with if/else statements and settings from the app.config file. As the application becomes more complex, we could turn to tools like MEF or StructureMap to manage the construction of the building blocks we need.
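As a sketch of that intermediate stage, before reaching for a container (the "customerSource" app setting and "Customers" connection string name are hypothetical, and ConfigurationManager requires a reference to System.Configuration):

var sourceType = ConfigurationManager.AppSettings["customerSource"];

ICustomerDataSource dataSource;
if (sourceType == "csv")
{
    dataSource = new CustomerCsvDataSource("customers.csv");
}
else
{
    dataSource = new CustomerDbDataSource(
        ConfigurationManager.ConnectionStrings["Customers"].ConnectionString);
}

var dump = new CustomerDump(dataSource);
dump.Render();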

Going Further

One of the biggest challenges in building well factored software is knowing when to stop adding abstractions. For example, we can say the CustomerDump class is currently tied too tightly to Console.Out. To remove the dependency we’ll instead inject a Stream for CustomerDump to use.

public CustomerDump(ICustomerDataSource dataSource,
                    Stream output)
{
    _dataSource = dataSource;
    _output = new StreamWriter(output);
}
Alternatively, we could say CustomerDump shouldn’t be responsible for both getting and formatting each customer as well as sending the result to the screen. In that case we’ll just have CustomerDump create the formatted string, and leave it to the caller to decide what to do with the result.
public string CreateDump()
{
    var builder = new StringBuilder();
    var customers = _dataSource.FetchAllCustomers();

    foreach (var customer in customers)
    {
        builder.AppendFormat("{0} : {1}",
            customer.Name, customer.Location);
    }

    return builder.ToString();
}

Now we might look at the code and decide that getting and formatting are two different responsibilities, so we’ll need someone to pass the list of customers to format instead of having the method use the data source directly. And so on, and so on.
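A sketch of that last step, where the method formats whatever list it is given and no longer touches the data source:

public string CreateDump(IEnumerable<Customer> customers)
{
    var builder = new StringBuilder();
    foreach (var customer in customers)
    {
        builder.AppendFormat("{0} : {1}{2}",
            customer.Name, customer.Location, Environment.NewLine);
    }
    return builder.ToString();
}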

Where do we stop?

That’s where most samples break down because the right place to stop is the place where we have just enough abstraction to make things work and still meet our requirements for testability, maintainability, scalability, readability, extensibility, and all the other ilities we need. Samples like this can show you the patterns you can use to achieve specific results, but only in the context of a specific application do we know the results we need. We need to apply both YAGNI and SRP in the right places and at the right time.

GroupBy With Maximum Size


I recently needed to group some objects, which is easy with GroupBy, but I also needed to enforce a maximum group size, as demonstrated by the following test.

public void Splits_Group_When_GroupSize_Greater_Than_MaxSize()
{
    var items = new[] { "A1", "A2", "A3", "B4", "B5" };

    var result = items.GroupByWithMaxSize(i => i[0], maxSize: 2);

    Assert.True(result.ElementAt(0).SequenceEqual(new[] { "A1", "A2" }));
    Assert.True(result.ElementAt(1).SequenceEqual(new[] { "A3" }));
    Assert.True(result.ElementAt(2).SequenceEqual(new[] { "B4", "B5" }));
}
The following code is not the fastest or cleverest solution, but it does make all the tests turn green.
public static IEnumerable<IEnumerable<T>> GroupByWithMaxSize<T, TKey>(
    this IEnumerable<T> source, Func<T, TKey> keySelector, int maxSize)
{
    var originalGroups = source.GroupBy(keySelector);

    foreach (var group in originalGroups)
    {
        if (group.Count() <= maxSize)
        {
            yield return group;
        }
        else
        {
            var regroups = group.Select((item, index) => new { item, index })
                                .GroupBy(g => g.index / maxSize);
            foreach (var regroup in regroups)
            {
                yield return regroup.Select(g => g.item);
            }
        }
    }
}
In this case I don’t need the Key property provided by IGrouping, so the return type is a generically beautiful IEnumerable<IEnumerable<T>>.
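As a usage sketch, batching a hypothetical list of orders by customer so no group exceeds 100 items (the orders collection and ProcessBatch are made up for illustration):

var batches = orders.GroupByWithMaxSize(o => o.CustomerId, maxSize: 100);
foreach (var batch in batches)
{
    // each batch shares a customer id and holds at most 100 orders
    ProcessBatch(batch);
}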

Two ASP.NET MVC 4 Courses


Now on Pluralsight:

The ASP.NET MVC 4 Fundamentals training course spends most of its time on new features for version 4 of the framework, including:

- Mobile display modes, display providers, and browser overriding

- Async programming with C# 5 and the async / await keywords

- The WebAPI

- Bundling and minification with the Web Optimization bits

The Building Applications with ASP.NET MVC 4 training course is a start to finish introduction to programming with ASP.NET MVC 4. Some of the demos in the 7+ hours of content include:

- Using controllers, action results, action filters and routing

- Razor views, partial views, and layout views

- Models, view models, data annotations, and validation

- Custom validation attributes and self-validating models

- Entity Framework 5 code-first programming

- Entity Framework migrations and seeding

- Security topics including mass assignment and cross site request forgeries

- Using JavaScript and jQuery to add paging, autocompletion, async form posts, and async searches

- Taking control of Simple Membership

- Using OAuth and OpenID

- Caching, localization, and diagnostics

- Error logging with ELMAH

- Unit testing with Visual Studio 2012

- Deploying to IIS

- Deploying to a Microsoft Windows Azure web site

Enjoy!

Flood Filling In A Canvas


Canvasfill is a demo for a friend who wants to flood fill a clicked area in an HTML 5 canvas.

A couple notes:

JavaScript loads a PNG image into the canvas when the page loads.

var img = new Image();
img.onload = function () {
    canvas.width = img.width;
    canvas.height = img.height;
    context.drawImage(this, 0, 0);
};
img.src = "thermometer_01.png";

The image and the JavaScript must load from the same domain for the sample to work, otherwise you’ll run into security exceptions (unless you try to CORS enable the image, which doesn’t work everywhere).

The code uses a requestAnimationFrame polyfill from Paul Irish for efficient animations.

The code uses getImageData and putImageData to get and color a single pixel on each iteration.

image = context.getImageData(point.x, point.y, 1, 1);
var pixel = image.data;

This is not the most efficient approach to using the canvas, so if you need speed you’ll want to look at grabbing the entire array of pixels. With the current approach it is easier to “see” how the flood fill algorithm works since you can watch as pixels change colors in specific directions.

The flood fill algorithm itself is an extremely primitive queue-based (non-recursive) algorithm. It doesn’t deal well with anti-aliased images, for example, so you might need to look at more advanced algorithms if the image is not a blocky clip art image or a screen shot of Visual Studio 2012 with the default color scheme.

Seeding an Entity Framework Database From File Data

$
0
0

The migrations feature of the Entity Framework includes a Seed method where you can populate the database with the initial static data an application needs. For example, a list of countries.

 

protectedoverridevoid Seed(Movies.Infrastructure.MovieData context)
{
context.Countries.AddOrUpdate(c => c.Name,
new Country {Name = "India"},
new Country {Name = "Japan"},
// ..
);
}

You'll find the Seed method in the migration configuration class when you "enable-migrations" in a project. Since the migrations framework will call the Seed method with every database update you make, you have to be sure you don't create duplicated records in the database. The AddOrUpdate method will help to prevent duplicate data. With the first parameter to the method you specify a property the framework can use to identify each entity. The identifier could be the primary key value of the entity, but if your database is generating the value, as it would with identity columns, you won't know the value. To manage a list of countries, we just want to make sure we don't duplicate a country, so identifying a record using the name of the country will suffice.
 
Behind the scenes, AddOrUpdate will ultimately need to first query the database to see if the record exists and if so, update the record. If the record doesn't exist migrations will insert a new record. AddOrUpdate is not something you'd want to use for millions of records, but it works well for reasonable sized data sets.  However, sometimes you already have the data you need in a CSV or XML file and don't want to translate the data into a mountain of C# code. Or, perhaps you already have initialization scripts written in SQL and just need to execute the scripts during the Seed method.
 
It's relatively easy to parse data from an XML file, or read a SQL file and execute a command. What can be tricky with the Seed method is knowing your execution context. Seed might be called as part of running "update-database" in the Visual Studio Package Manager Console window. But the Seed method might also be called when a web application launches in an IIS worker process and migrations are turned on by configuration. These are two entirely different execution contexts that complicate the otherwise mundane chore of locating and opening a file.

Putting It Somewhere Safe

One solution is to embed the data files you need into the assembly with the migration code. Add the files into the project, open the properties window, and set the Build Action property to "Embedded Resource".
 
Embedding Resources
 
Now you can open the file via GetManifestResourceStream. The Seed method might look like:
 
protectedoverridevoid Seed(CitationData context)
{

var resourceName = "Citations.App_Data.Countries.xml";
var assembly = Assembly.GetExecutingAssembly();
var stream = assembly.GetManifestResourceStream(resourceName);
var xml = XDocument.Load(stream);
var countries = xml.Element("Countries")
.Elements("Country")
.Select(x => new Country
{
Name = (string)x.Element("Name"),
}).ToArray();
context.Countries.AddOrUpdate(c=>c.Name, countries);
}

 

Notes

The name of the resource is derived from the location of the resource ({ddefault_namespace}.{folder}.{subfolder}.{filename}). In the example above, countries.xml lives in the App_Data folder of the project, and the project's default namespace is "Citations". If you don't like the name, you can change it.
 
For .sql scripts you can use context.Database.ExecuteSqlCommand, but you'll need to break the file into separate commands if there are delimiters inside (like "GO").
 
Finally, if the data is truly static you might consider executing the data load during the Up method of a migration.

ELMAH and MiniProfiler In ASP.NET MVC 4

$
0
0

If you are new to ASP.NET MVC you might not know about ELMAH and MiniProfiler. These are two distinct OSS projects that you can easily install with NuGet, and both provide valuable debugging and diagnostic services. Scott Hanselman has blogged about both projects in the past (see NuGet Package of the Week #7, and #9), but this post will highlight some of the updates since Scott's post and still provide a general overview of the  features for each project.

ELMAH

ELMAH gives you "application-wide error logging that is completely pluggable". In other words, you can easily log and view all the errors your application experiences without making any changes to your code. All you need to get started is to "install-package elmah" from the Package Manager Console in Visual Studio and ELMAH will be fully configured and available. Navigate to /elmah.axd in your application, and you'll see all the errors (this will only work for local requests, by default).

ELMAH Error Log

ELMAH will keep all the errors in memory, so an application restart will clear the log. If you want the error information to stick around longer, you can configure ELMAH to store information in a more permanent location, like inside of a database. The elmah.sqlservercompact package will add all the configuration you need to store the error log in a SQL Compact database file inside the App_Data directory of the app. There are also packages to log errors to SQL Server (elmah.sqlserver), Mongo (elmah.mongodb) and more. Most of these packages require you to go into the web.config file to configure simple connection string settings, and possibly setup a schema (see the files dropped into App_Readme for more details).

ELMAH and HandleErrorAttribute

For ASP.NET MVC projects you might want to start by installing the elmah.mvc package. The elmah.mvc package will install an ElmahController so you can view the error log by visiting the /elmah URL. More importantly, elmah.mvc will install an action filter to log errors even when the MVC HandleErrorAttribute (the one that most applications use as a global action filter) marks an exception as "handled". The HandleErrorAttribute marks exceptions as handled and renders a friendly error view when custom errors are enabled in web.config, which will usually be true when an app runs in production. These handled exceptions are invisible to ELMAH, but not when elmah.mvc is in place.

MiniProfiler

MiniProfiler is a "simple but effective" profiler you can use to help find performance problems and bottlenecks. The project has come a long way over the last year. To get started with MiniProfiler, you can install the MiniProfiler.MVC3 package (which also works in ASP.NET MVC 4). After installation, add a single line of code to your _Layout view(s) before the closing </body> tag:

@StackExchange.Profiling.MiniProfiler.RenderIncludes() 

Update:

You'll also need to add a handler so MiniProfiler can retrieve the scripts it needs:

<system.webServer>
...
<handlers>
<addname="MiniProfiler"path="mini-profiler-resources/*"verb="*"type="System.Web.Routing.UrlRoutingModule"resourceType="Unspecified"preCondition="integratedMode"/>
</handlers>
</system.webServer>


Once everything is ready, you'll start to see the MiniProfiler results appear in the top left of the browser window (for local request only, by default). Click on the time measurement to see more details:

MiniProfiler results

MiniProfiler can tell you how long it takes for actions to execute and views to render, and provides an API if you need to instrument specific pieces of code. The MiniProfiler install will also add a MiniProfiler.cs file in your App_Start directory. This file is where most of the MiniProfiler configuration takes place. From here you can enable authorization and database profiling.

Two great projects to check out, if you haven't seen them before.

Perils of the MVC4 AccountController

$
0
0
The final release of ASP.NET MVC 4 brought some changes to how membership works in the MVC Internet Project template. While pre-release versions of MVC 4 all used the traditional ASP.NET membership providers for forms authentication, the final release relies on the WebSecurity and OAuthWebSecurity classes from the Web Matrix and Web Pages assemblies. Behind the scenes, the SimpleMembershipProvider and the ExtendedMembershipProvider, as well as DotNetOpenAuth are all at work.
These changes are a good move, because many web sites no longer want to store user credentials locally. Instead, they want to use OAuth and OpenID so someone else is responsible for keeping passwords safe, and people coming to the site have one less password to invent (or one less place to share an existing password). With these changes it is easy to authenticate a user via Facebook, Twitter, Microsoft, or Google. All you need to do is plugin the right keys.

However, there are a few issues to be aware of if you build an application on top of the existing code in the AccountController.

1. The InitializeSimpleMembershipAttribute

The InitializeSimpleMembershipAttribute (let’s call it ISMA) is part of the code generated by the MVC 4 Internet template. ISMA exists to provide a lazy initialization of the underlying providers, as well as potentially create the database for storing membership, roles, and OAuth logins. You’ll find it decorating the AccountController as an action filter, so it can jump in and initialize all the bits before the controller starts to interact with the security related classes.

[Authorize]
[InitializeSimpleMembership]
publicclass AccountController : Controller
{
// ...
}

As an action filter, ISMA hooks into OnActionExecuting to perform the lazy initialization work, but this can be too late in the life cycle. The Authorize attribute will need the providers to be ready earlier if it needs to perform role based access checks (during OnAuthorization). In other words, if the first request to a site hits a controller action like the following:

[Authorize(Roles="Sales")]

.. then you’ll have an exception as the filter checks the user’s role but the providers aren’t initialized. We’ll talk about concerns over the exception message details later.

My recommendation is to remove ISMA from the project, and initialize WebSecurity during the application start event.

2. Usage of the Entity Framework

The MVC 4 Internet project template uses EF code-first classes to manage user profiles, so it includes a DbContext derived class (UsersContext) and a UserProfile entity, both of which are defined in the Models\AccountModel.cs file.

If you plan on customizing the UserProfile class, make sure you let the application create the database for you. You can see the database creation code in the ISMA also.

If you create a database yourself, make sure to include all the tables needed for the WebSecurity class. There are not scripts published that I have found, but you can always generate DDL scripts after the application runs.

webpages membership tables

If you create an empty database and run the application to let WebSecurity create its own tables, it will not include any of your user profile customizations (it will only do that for you if the database doesn’t exist).

There is a strange tension in a new application because it’s not clear who is responsible for getting all the pieces to work together. WebSecurity can create the database schema it needs to operate, but can miss creating something you add to the UserProfile entity, which is code you can customize, but you wouldn’t know that at a first glance.

If you are using the Entity Framework for an application, my recommendation is to remove the DbContext derived class from AccountModel.cs and take ownership of the UserProfile class. You’ll need to fix places in the AccountController that are looking to use the context class, but replace those pieces of code with your own context class.

If you are not using the Entity Framework, you’ll want to remove the DbContext derived class in any case.

3. Impact on EF Migrations

Because the Internet project template includes an Entity Framework DbContext class in the project, if you add your own DbContext to the project and try to enable EF migrations, you’ll be greeted with an error.

PM> enable-migrations
More than one context type was found in the assembly.
To enable migrations for [Context], use Enable-Migrations –ContextTypeName [Context].

The problem is easy to solve. As the error states, you can use the –ContextTypeName flag to specify your context class name. Note that you can only have migrations for one context in a project, so if you want to have migrations for both contexts you’ll need to move one to a different project. Again, my recommendation is to just remove the existing UsersContext the Internet project template creates, and take ownership of the user profile in your own context.

4. Impact on Seeding Data

The previous ASP.NET membership providers were easy to work with from the Seed method in an EF migrations class. With this new approach you need to perform some additional configuration steps. I covered these steps in a previous post: “Seeding Membership and Roles in ASP.NET MVC 4”.

5. Impact of WebMatrix Assemblies

It’s possible to run into an exception message like the following (like when the ISMA code runs too late):

You must call the "WebSecurity.InitializeDatabaseConnection" method before you call any other method of the "WebSecurity" class. This call should be placed in an _AppStart.cshtml file in the root of your site.

We don’t use an _AppStart.cshtml file in ASP.NET MVC, so look to add code to Application_Start in global.asax.cs instead.

6. Impact of the WebMatrix Style

A few people have written me to ask if the AccountController created by the Internet application template is the future of ASP.NET MVC coding. A representative sample appears below:

public ActionResult Register(RegisterModel model)
{
if (ModelState.IsValid)
{
// Attempt to register the user
try
{
WebSecurity.CreateUserAndAccount(model.UserName, model.Password);
WebSecurity.Login(model.UserName, model.Password);
return RedirectToAction("Index", "Home");
}
catch (MembershipCreateUserException e)
{
ModelState.AddModelError("", ErrorCodeToString(e.StatusCode));
}
}

// If we got this far, something failed, redisplay form
return View(model);
}

Specifically, people are asking about the number of static method calls through WebSecurity and trying to figure out the impact on testing and extensibility, as well as the impact on impressionable young minds who might read too much into the code.

I’ve assured everyone the future is not full of static methods. Or, at least not my future.

Where Are We?

In some upcoming posts we’ll explore an alternative approach to membership, roles, and OAuth in ASP.NET MVC 4, and see if there is an approach that is simple, testable, and extensible enough to work with more than a relational database for storage.

Build Your Own Membership System For ASP.NET MVC - Part I

$
0
0

Membership Provider Base ClassBuilding a piece of software to manage users is easy, but only if you know exactly what you want. After all, most of the code inside the various existing ASP.NET providers consists of straightforward parameter validation and data access. While this membership code is simple in isolation, there is still value inside the existing providers. The providers have proven themselves in production for thousands of web sites.

Unfortunately, it is difficult to derive value from the existing providers and reuse just the parts you need when building a custom membership solution for an application. The providers entangle a number of responsibilities and require a relational database. This has always been a source of frustration when building YACMP (yet another custom membership provider). My typical approach is to start from scratch by deriving from the abstract MembershipProvider class.

However, starting with the abstract MembershipProvider class doesn’t give me any inherent benefits in an ASP.NET MVC or Web API application. There are no custom controls to drag from the toolbox that will automatically integrate with a custom provider, and other than the Authorize attribute (which works against the roles provider), there is no implicit dependency on Membership.Provider or Roles.Provider, which are the typical static gateways to membership and role features.

There are actually drawbacks to building  custom providers with ASP.NET MVC. The provider model doesn’t easily cooperate with the dependency resolution features of MVC and Web API. Also, the API is a bit dated and doesn’t have the ability to work with OAuth or OpenID.

The solution to the OAuth problem in a new MVC 4 Internet application is to combine a new membership provider (the SimpleMembershipProvider) with some Web Matrix bits (the WebSecurity class) into something that works with OAuth and still allows a user to register locally with a password, but unfortunately still depends on a relational database and is complicated to understand, extend, and debug (search for MVC 4 SimpleMembership and you’ll find more questions on StackOverflow than anything else).

Given that the traditional provider model doesn’t provide many benefits for MVC and WebAPI, what would it look like to build a membership system and not start by deriving from MembershipProvider? That’s the topic for the next post.

Build Your Own Membership System For ASP.NET MVC - Part II

$
0
0

MemFlex is a look at what is possible in an ASP.NET MVC application if you eschew the existing ASP.NET providers for membership, roles, and OAuth. MemFlex doesn’t use the existing providers, but does use classes from the .NET framework and DotNetOpenAuth to build a membership and roles system with some simple requirements:

- Support all the actions of the MVC 4 Internet AccountController (register, login, login with OAuth).

- Be test friendly (by providing interface definitions for both clients and dependencies).

- Run without ASP.NET (for integration tests and database migrations, as two examples).

- Work with a variety of data sources (two common scenarios for storing user accounts these days involve document databases and web services).

Here are parts of the project as they exist today.

Sample Application

The sample application is a typical MVC 4 Internet application where the AccountController and database migrations use the FlexMembershipProvider. There are classes provided for working with both the Entity Framework and RavenDB, and you can (almost) switch between the two by adjusting an assembly reference and the namespaces you use.

After you’ve defined what a User model should look like, the first part would be configuring a FlexMembershipUserStore to work with your custom user type.
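As a point of reference, a minimal user type might look like the following sketch. This class is hypothetical; the Username and Password members are assumptions inferred from how IFlexMembershipUser is used later in this post.

public class User : IFlexMembershipUser
{
    public int Id { get; set; }

    // assumed members of IFlexMembershipUser, based on usage later in the post
    public string Username { get; set; }
    public string Password { get; set; }
}

With a user type in place, the store configuration follows: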

using FlexProviders.EF;

namespace LogMeIn.Models
{
    public class UserStore : FlexMembershipUserStore<User>
    {
        public UserStore(MovieDb db) : base(db)
        {
        }
    }
}

The above code snippet, which uses EF, just requires a generic type parameter (your user type), and a DbContext object to work with. The UserStore is then plugged into the FlexMembershipProvider. You can do this by hand, or let an IoC container take care of the work.

var membership = new FlexMembershipProvider(
    new UserStore(context),
    new AspnetEnvironment());

Once the FlexMembershipProvider is initialized inside a controller, you can use an API that looks a bit like the traditional ASP.NET Membership API.

_membershipProvider.Login(model.UserName, model.Password);
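
To put the call in context, a Login action in a hypothetical AccountController might look like the following sketch (LoginModel and the error message are assumptions modeled on the MVC 4 Internet template, not code from the MemFlex project):

public class AccountController : Controller
{
    private readonly IFlexMembershipProvider _membershipProvider;

    public AccountController(IFlexMembershipProvider membershipProvider)
    {
        _membershipProvider = membershipProvider;
    }

    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Login(LoginModel model)
    {
        if (ModelState.IsValid &&
            _membershipProvider.Login(model.UserName, model.Password))
        {
            return RedirectToAction("Index", "Home");
        }

        // if we got this far, the credentials didn't check out
        ModelState.AddModelError("", "The user name or password provided is incorrect.");
        return View(model);
    }
}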

Everything Else

The FlexProviders part of the project consists of four pieces: the integration tests, a RavenDB user store, an EF user store, and the FlexProviders themselves.

FlexProviders

The FlexProviders project defines the basic abstractions for a flexible membership system. For example, the interface definition to work with locally registered users (for now, a separate interface provides the OAuth functionality):

public interface IFlexMembershipProvider
{
    bool Login(string username, string password);
    void Logout();
    void CreateAccount(IFlexMembershipUser user);
    bool HasLocalAccount(string username);
    bool ChangePassword(string username, string oldPassword, string newPassword);
}
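
Because controllers depend only on this interface, a unit test can swap in a trivial fake. Here is a sketch of such a fake (not part of the project):

// a hypothetical hand-rolled fake for unit testing controllers
public class FakeMembershipProvider : IFlexMembershipProvider
{
    public bool LoginResult { get; set; }

    public bool Login(string username, string password) { return LoginResult; }
    public void Logout() { }
    public void CreateAccount(IFlexMembershipUser user) { }
    public bool HasLocalAccount(string username) { return true; }
    public bool ChangePassword(string username, string oldPassword, string newPassword) { return true; }
}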

There is also a concrete implementation of a flexible membership provider:

public class FlexMembershipProvider : IFlexMembershipProvider, 
                                      IFlexOAuthProvider,
                                      IOpenAuthDataProvider
{
    public FlexMembershipProvider(
        IFlexUserStore userStore,
        IApplicationEnvironment applicationEnvironment)
    {
        _userStore = userStore;
        _applicationEnvironment = applicationEnvironment;
    }

    public bool Login(string username, string password)
    {
        var user = _userStore.GetUserByUsername(username);
        if (user == null)
        {
            return false;
        }

        // ... omitted for brevity ...
    }
    // ...
}

Of course, since the membership provider requires an IFlexUserStore dependency, the operations required for data access are defined in this project by the IFlexUserStore interface. There is also an AspnetEnvironment class that removes hard dependencies on test-unfriendly bits like HttpContext.Current.

public class AspnetEnvironment : IApplicationEnvironment
{
    public void IssueAuthTicket(string username, bool persist)
    {
        FormsAuthentication.SetAuthCookie(username, persist);
    }

    // ...
}
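
That indirection pays off in tests, where a recording fake can stand in for the real environment. A sketch (only IssueAuthTicket appears in this post, so any other members of IApplicationEnvironment would need stubs as well):

// hypothetical test double that records the ticket instead of setting a cookie
public class FakeEnvironment : IApplicationEnvironment
{
    public string LastTicketUsername { get; private set; }
    public bool LastTicketPersisted { get; private set; }

    public void IssueAuthTicket(string username, bool persist)
    {
        LastTicketUsername = username;
        LastTicketPersisted = persist;
    }

    // ... any remaining IApplicationEnvironment members stubbed here ...
}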

FlexProviders.EF and FlexProviders.Raven

It’s relatively straightforward to build classes that will take care of the data access required by a membership provider. For both Raven and EF, all you really need is a generic type parameter and a unit of work. For Raven:

namespace FlexProviders.Raven
{
    public class FlexMembershipUserStore<TUser>
        : IFlexUserStore where TUser : class, IFlexMembershipUser, new()
    {
        private readonly IDocumentSession _session;

        public FlexMembershipUserStore(IDocumentSession session)
        {
            _session = session;
        }

        public IFlexMembershipUser GetUserByUsername(string username)
        {
            return _session.Query<TUser>().SingleOrDefault(u => u.Username == username);
        }

        // ...
    }
}

And the EF version:

namespace FlexProviders.EF
{
    public class FlexMembershipUserStore<TUser>
        : IFlexUserStore where TUser : class, IFlexMembershipUser, new()
    {
        private readonly DbContext _context;

        public FlexMembershipUserStore(DbContext context)
        {
            _context = context;
        }

        public IFlexMembershipUser GetUserByUsername(string username)
        {
            return _context.Set<TUser>().SingleOrDefault(u => u.Username == username);
        }

        // ...
    }
}
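
Swapping stores is then mostly a matter of changing what you construct. Wiring the provider to RavenDB might look like the following sketch (how you obtain the IDocumentSession depends on your document store setup):

// the document store setup is omitted; OpenSession is standard RavenDB client API
IDocumentSession session = documentStore.OpenSession();

var membership = new FlexMembershipProvider(
    new FlexMembershipUserStore<User>(session), // the FlexProviders.Raven store
    new AspnetEnvironment());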

FlexProviders.Tests

This project is a set of integration tests to verify the EF and Raven providers actually put and retrieve data with real databases. The EF tests require the DTC to be running (net start msdtc). The tests are configured to use SQL 2012 LocalDB (for EF) by default, while the Raven tests use Raven’s in-memory embedded mode.

None of the code is vetted or hardened and still needs some work, but if you find it to be an inspiration or have some feedback or pull requests, let me know.

There are no NuGet packages available, as yet.

Conclusion

I said in the last post that building a membership system is easy if you know exactly what you want. You can mostly rely on other pieces of the framework for the hard parts, like creating secure cookies and cryptography, and rely on DotNetOpenAuth for the OAuth heavy lifting.

The hard part of building a membership system is when you try to build it for unknown customers and unknown scenarios. I don’t envy Microsoft in the sense that if they build a membership system that is simple, 60% of their customers will say it doesn’t work for their application. If they build a membership system that is sophisticated enough to work with most applications, 60% of their customers will say it looks like WCF. Somewhere there is a sweet spot that will make a majority of customers happy.

What I have here will work for most of the applications I’ve been associated with, provides some flexibility in the data store, and still remains, I think, relatively easy to understand.


Two Great New Conferences


DevIntersection – Las Vegas

DevIntersection is the final stop in the .NET Rocks! road trip and runs from December 9th to the 12th in Las Vegas. In addition to conference sessions I’ll be doing an ASP.NET MVC 4 workshop on December 13th. Register by November 15th to receive a Windows 8 tablet!


Warm Crocodile – Copenhagen

The Warm Crocodile Developer Conference is a two-day conference in Copenhagen, Denmark with a great selection of speakers and sessions. In the words of the organizers: “We want to create a brand, a conference brand that promises to deliver on set of great things, both people and content, but also, and just as much fun and partying and networking.”

I hope to see you there!

Abstractions, Patterns, and Interfaces


Someone recently asked me how to go about building an application that processes customer information. The customer information might live in a database, but it also might live in a .csv file.

The interesting thing is I’m in the middle of building a tool for DBAs that one day will be a fancy WPF desktop application integrated with source control repositories and relational databases, but for the moment is a simple console application using only the file system for storage. Inside I’ve faced many scenarios similar to the question being asked, and these are scenarios I’ve faced numerous times over the years.

There are 101 ways to solve the problem, but let’s work through one solution together.

Getting Started

We might start by writing some tests, or we might start by jumping in and trying to display a list of all customers in a data source, but either way we’ll eventually find ourselves with the following code, which contains the essence of the question:

var customers = // how do I get customers?

To make things more concrete, let’s take that line of code and put it in a class that will do something useful: a class that writes all customer names to the output of a console program.

public class CustomerDump
{
    public void Render()
    {
        var customers = // how ?
        foreach (var customer in customers)
        {
            Console.WriteLine(customer.Name);
        }
    }
}


Although we might not know how to retrieve the customer data, we probably do know what data we need about each customer. We’ll go ahead and define a Customer class for objects to hold customer data.

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Location { get; set; }
}

Now we can work on the main question. The business has told us we need to be flexible with the customer data, so how will we go about retrieving customers?

Defining an Interface

Interfaces are wonderful for a language like C#. Interfaces give us everything we need to work with an object in a strongly-typed manner, but place the least number of constraints on the object implementing the interface. Interfaces make the C# compiler happy without forcing us to pay an inheritance tax for working with a class hierarchy. We’ll define an interface that describes exactly how we want to fetch customers and how we want the customers packaged for us to consume.

public interface ICustomerDataSource
{
    IList<Customer> FetchAllCustomers();
}

There are many subtleties to interface design. Even the simple interface here required us to make a number of decisions.

First, what is the name of the operation? Do we want to FetchAllCustomers? SelectAllCustomers? GetCustomers? I believe names are important at this level, but you don’t want to give too much away. A name like SelectAllCustomers is biased towards working with a relational database, and we know we’ll be working with more than just a SQL database.

Often the name is influenced by what we know about the project and the business. Fortunately, refactoring tools make names easy to change.

Another design decision is the return type. When you are trying to abstract away some operation, you have to decide if you’ll go for the lowest common denominator (anything can return IEnumerable) or something that might only be achieved by an advanced data source (like IQueryable). In this example we are forcing all implementations to return a list, which has some tradeoffs, but at least we know we’ll be getting a specific type of data structure. IEnumerable would target the lowest common denominator and make the interface easier to implement, but we might not have all the convenience features we need.

Once again, knowing a bit about the direction of the project and being in tune with the business needs will help in determining when to add flexibility and when to enforce constraints. 

Implementing the Interface

One question we might have had in the back of our mind is how to provide an implementation of the data loading interface when some implementations might need parameters like a database connection string, while other implementations might need file system details, like the path to the .csv file holding the customers.

When designing an interface we need to put those thoughts in the back of our mind and focus entirely on the client’s needs first. Just watch how this unfolds as we build a class to read customer data from a .csv file.

class CustomerCsvDataSource : ICustomerDataSource
{
    public CustomerCsvDataSource(string path)
    {
        _path = path;
    }

    public IList<Customer> FetchAllCustomers()
    {
        return File.ReadAllLines(_path)
                   .Select(line => line.Split(','))
                   .Select((values, index) =>
                       new Customer
                       {
                           Id = index,
                           Name = values[0],
                           Location = values[1]
                       }).ToList();
    }

    readonly string _path;
}


This isn’t the most robust CSV parser in the world (it won’t deal with embedded commas, so we might want to get some help), but it does demonstrate a pattern I’ve been using over and over again recently: a class implements an interface, stores constructor parameters in read-only fields, exposes methods to implement the interface, and above all keeps things simple, small, and focused.
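
If quoted fields and embedded commas matter, one option is to lean on TextFieldParser from the Microsoft.VisualBasic.FileIO namespace (add a reference to Microsoft.VisualBasic.dll) instead of writing a parser by hand. A sketch of FetchAllCustomers rewritten that way:

using Microsoft.VisualBasic.FileIO; // reference Microsoft.VisualBasic.dll

public IList<Customer> FetchAllCustomers()
{
    var customers = new List<Customer>();
    using (var parser = new TextFieldParser(_path))
    {
        parser.TextFieldType = FieldType.Delimited;
        parser.SetDelimiters(",");
        parser.HasFieldsEnclosedInQuotes = true;

        var index = 0;
        while (!parser.EndOfData)
        {
            var values = parser.ReadFields(); // handles quoted fields and embedded commas
            customers.Add(new Customer
            {
                Id = index++,
                Name = values[0],
                Location = values[1]
            });
        }
    }
    return customers;
}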

Here is the pattern again, this time in a class that uses Mark Rendle’s Simple.Data to access SQL Server, but we could do the same thing with raw ADO.NET, the Entity Framework, or even MongoDB.

class CustomerDbDataSource : ICustomerDataSource
{
    public CustomerDbDataSource(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IList<Customer> FetchAllCustomers()
    {
        var db = Database.OpenConnection(_connectionString);
        return db.Customers.All().ToList<Customer>();
    }

    readonly string _connectionString;
}


We can see now that worrying about connection strings and file names while defining the interface was premature. These were all implementation details the interface isn’t concerned with, as the interface only exposes the operations clients need, like the ability to fetch customers.

Instead, these classes are “programmed” with implementation specific instructions given by constructor parameters, and the instructions give them everything they need to do the work required by the interface. The classes never change the instructions (they are all saved in read-only fields), but they use the instructions to produce new results.

We have now reached the point where we have two different classes to deal with two different sources of data, but how do we use them?

Consuming the Interface

Returning to our CustomerDump class, one obvious approach to producing results is the following.

public class CustomerDump
{
    public void Render()
    {
        var dataSource = new CustomerCsvDataSource("customers.csv");
        var customers = dataSource.FetchAllCustomers();

        foreach (var customer in customers)
        {
            Console.WriteLine(customer.Name);
        }
    }
}

The above approach can work, but we’ve tied the CustomerDump class to the CSV data source by instantiating CustomerCsvDataSource directly. If we need CustomerDump to only work with a CSV data source, this is reasonable, but we know most of the application needs to work with different data sources so we’ll need to avoid this approach in most places.

Instead of CustomerDump choosing a data source and coupling itself to a specific class, we’ll force someone to give CustomerDump the data source to use.

public class CustomerDump
{
    public CustomerDump(ICustomerDataSource dataSource)
    {
        _dataSource = dataSource;
    }

    public void Render()
    {
        var customers = _dataSource.FetchAllCustomers();

        foreach (var customer in customers)
        {
            Console.WriteLine(customer.Name);
        }
    }

    readonly ICustomerDataSource _dataSource;
}

Now, any logic we have inside of CustomerDump can work with customers from anywhere, and we can add new data sources in the future. We’ve gained a lot of flexibility in an area where the business demands flexibility, and hopefully didn’t build a mountain of abstractions where none were required. All the pieces are small and focused, and the way they fit together depends on the application you are building. Which leads to the next question: who is responsible for putting CustomerDump together?

At the top level of every application built in this fashion you’ll have some bootstrapping code to arrange all the pieces and set them in motion. For a console mode application it might look like this:

static void Main(string[] args)
{
    // arrange
    var connectionString = @"server=(localdb)\v11.0;database=Customers";
    var dataSource = new CustomerDbDataSource(connectionString);
    var dump = new CustomerDump(dataSource);

    // execute
    dump.Render();
}

Here we have hard-coded values again, but you can imagine hard-coded connection strings and class names getting intermingled or replaced with if/else statements and settings from the app.config file. As the application becomes more complex, we could turn to tools like MEF or StructureMap to manage the construction of the building blocks we need.
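
For example, a StructureMap registration might look like the following sketch (this assumes the 2.6-era ObjectFactory API, and the connection string value is illustrative):

ObjectFactory.Initialize(x =>
{
    x.For<ICustomerDataSource>()
     .Use<CustomerDbDataSource>()
     .Ctor<string>("connectionString")
     .Is(@"server=(localdb)\v11.0;database=Customers");
});

// concrete types like CustomerDump resolve automatically,
// with the registered ICustomerDataSource injected
var dump = ObjectFactory.GetInstance<CustomerDump>();
dump.Render();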

Going Further

One of the biggest challenges in building well factored software is knowing when to stop adding abstractions. For example, we can say the CustomerDump class is currently tied too tightly to Console.Out. To remove the dependency we’ll instead inject a Stream for CustomerDump to use.

public CustomerDump(ICustomerDataSource dataSource,
                    Stream output)
{
    _dataSource = dataSource;
    _output = new StreamWriter(output);
}
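
To finish the thought, Render would then write to the injected stream instead of the console. A sketch:

public void Render()
{
    var customers = _dataSource.FetchAllCustomers();
    foreach (var customer in customers)
    {
        _output.WriteLine(customer.Name);
    }
    _output.Flush(); // StreamWriter buffers, so flush before the caller reads the stream
}
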
Alternatively, we could say CustomerDump shouldn’t be responsible for both getting and formatting each customer as well as sending the result to the screen. In that case we’ll just have CustomerDump create the formatted string, and leave it to the caller to decide what to do with the result.

public string CreateDump()
{
    var builder = new StringBuilder();
    var customers = _dataSource.FetchAllCustomers();

    foreach (var customer in customers)
    {
        builder.AppendFormat("{0} : {1}",
            customer.Name, customer.Location);
        builder.AppendLine();
    }
    return builder.ToString();
}

Now we might look at the code and decide that getting and formatting are two different responsibilities, so we’ll need someone to pass the list of customers to format instead of having the method use the data source directly. And so on, and so on.

Where do we stop?

That’s where most samples break down because the right place to stop is the place where we have just enough abstraction to make things work and still meet our requirements for testability, maintainability, scalability, readability, extensibility, and all the other ilities we need. Samples like this can show you the patterns you can use to achieve specific results, but only in the context of a specific application do we know the results we need. We need to apply both YAGNI and SRP in the right places and at the right time.

GroupBy With Maximum Size


I recently needed to group some objects, which is easy with GroupBy, but I also needed to enforce a maximum group size, as demonstrated by the following test.

public void Splits_Group_When_GroupSize_Greater_Than_MaxSize()
{
    var items = new[] { "A1", "A2", "A3", "B4", "B5" };

    var result = items.GroupByWithMaxSize(i => i[0], maxSize: 2);

    Assert.True(result.ElementAt(0).SequenceEqual(new[] { "A1", "A2" }));
    Assert.True(result.ElementAt(1).SequenceEqual(new[] { "A3" }));
    Assert.True(result.ElementAt(2).SequenceEqual(new[] { "B4", "B5" }));
}

The following code is not the fastest or cleverest solution, but it does make all the tests turn green.  

public static IEnumerable<IEnumerable<T>> GroupByWithMaxSize<T, TKey>(
    this IEnumerable<T> source, Func<T, TKey> keySelector, int maxSize)
{
    var originalGroups = source.GroupBy(keySelector);

    foreach (var group in originalGroups)
    {
        if (group.Count() <= maxSize)
        {
            yield return group;
        }
        else
        {
            var regroups = group.Select((item, index) => new { item, index })
                                .GroupBy(g => g.index / maxSize);
            foreach (var regroup in regroups)
            {
                yield return regroup.Select(g => g.item);
            }
        }
    }
}

In this case I don’t need the Key property provided by IGrouping, so the return type is a generically beautiful IEnumerable<IEnumerable<T>>.
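
As a usage sketch, imagine batching outgoing mail so no more than 50 messages go to any one domain at a time (the names here are hypothetical):

var batches = recipients.GroupByWithMaxSize(r => r.Domain, maxSize: 50);
foreach (var batch in batches)
{
    SendBatch(batch); // hypothetical helper that emails one batch
}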

Two ASP.NET MVC 4 Courses


Now on Pluralsight:

The ASP.NET MVC 4 Fundamentals training course spends most of its time on new features for version 4 of the framework, including:

- Mobile display modes, display providers, and browser overriding

- Async programming with C# 5 and the async / await keywords

- The WebAPI

- Bundling and minification with the Web Optimization bits

The Building Applications with ASP.NET MVC 4 training course is a start to finish introduction to programming with ASP.NET MVC 4. Some of the demos in the 7+ hours of content include:

- Using controllers, action results, action filters and routing

- Razor views, partial views, and layout views

- Models, view models, data annotations, and validation

- Custom validation attributes and self-validating models

- Entity Framework 5 code-first programming

- Entity Framework migrations and seeding

- Security topics including mass assignment and cross site request forgeries

- Using JavaScript and jQuery to add paging, autocompletion, async form posts, and async searches

- Taking control of Simple Membership

- Using OAuth and OpenID

- Caching, localization, and diagnostics

- Error logging with ELMAH

- Unit testing with Visual Studio 2012

- Deploying to IIS

- Deploying to a Microsoft Windows Azure web site

Enjoy!

Flood Filling In A Canvas


Canvasfill is a demo for a friend who wants to flood fill a clicked area in an HTML 5 canvas.

A couple notes:

JavaScript loads a PNG image into the canvas when the page loads.

var img = new Image();
img.onload = function () {
    canvas.width = img.width;
    canvas.height = img.height;
    context.drawImage(this, 0, 0);
};
img.src = "thermometer_01.png";

The image and the JavaScript must load from the same domain for the sample to work, otherwise you’ll run into security exceptions (unless you try to CORS-enable the image, which doesn’t work everywhere).

The code uses a requestAnimationFrame polyfill from Paul Irish for efficient animations.

The code uses getImageData and putImageData to get and color a single pixel on each iteration.

image = context.getImageData(point.x, point.y, 1, 1);
var pixel = image.data;

This is not the most efficient approach to using the canvas, so if you need speed you’ll want to look at grabbing the entire array of pixels. With the current approach it is easier to “see” how the flood fill algorithm works since you can watch as pixels change colors in specific directions.

The flood fill algorithm itself is an extremely primitive queue-based (non-recursive) algorithm. It doesn’t deal well with anti-aliased images, for example, so you might need to look at more advanced algorithms if the image is not a blocky clip art image or a screen shot of Visual Studio 2012 with the default color scheme.
