Microsoft Teams / Skype for Business coexistence and interoperability

For some time now, Microsoft has made no bones about the fact that Microsoft Teams is the future replacement for Skype for Business (S4B).  In fact, there isn’t much you can do in S4B that isn’t already in Teams today.

By default, you have S4B and Teams side by side, and most organizations don’t change this at the outset.  Fortunately, Microsoft has now published an article, Understand Microsoft Teams and Skype for Business coexistence and interoperability, that explains all the options for how this can work.  Note that best practices recommend a controlled rollout of Microsoft Teams once usage guidelines and a governance strategy have been defined for the organization.

The issue most organizations face is how to migrate their users over to Teams.  By default, most users will continue to use S4B until they find that the people they want to communicate with are already communicating on Teams.  At that point they will use both until everyone they communicate with has been migrated.

This is generally okay, as most organizations want some pilot or leading-edge users to try it out before migrating everyone.  The trick here is that you may want to control who has access to Teams until you are confident or comfortable enough to migrate everyone.  This means a controlled rollout of Microsoft Teams, a topic intimately tied to how you manage Office 365 Groups, and one that has been discussed and documented well over the last year.  Once the pilot is complete, you can formalize a migration plan consistent with the usage guidelines and governance strategy.

Multi-Factor Authentication & SharePoint

WHY MULTI-FACTOR AUTHENTICATION

Multi-factor authentication has long been overdue for most internet-facing sites, as most of them today are insecure, relying on single-factor authentication.  Bad actors have long found ways to intercept identities and passwords (due to lax password rules and policies, identity breaches, spyware, and social engineering), making single-factor authentication insufficient security for most organizations in today’s world.

Most internet-facing SharePoint sites never had to worry too much about this, as most traditional on-premise internet-facing SharePoint implementations are extranet sites using reverse proxy solutions and AD identities.  These identities most often had stronger passwords, policies, and encryption, buffering them from most bad-actor efforts.  They are, however, still vulnerable to identity breaches, spyware, and social engineering attacks.

However, things are changing…

Going forward, most SharePoint sites will be public facing in some form or another.  Consider that Claims authentication can be delegated to Facebook or LinkedIn, that (as of SharePoint 2013 SP1) on-premise users can have access to OneDrive, that provider-hosted SharePoint apps may be hosted in the cloud, and that SharePoint farms might be hybrid implementations utilizing Office 365 or may even exist entirely in a cloud infrastructure such as Azure.  Sure, Microsoft has built security using standards that are effective and secure for single-factor authentication, but this doesn’t stop bad actors from breaking in using identity breaches, spyware, and social engineering.  This is where multi-factor authentication shines.

By forcing users not only to enter identity information but also to validate it using another communication method such as SMS, email, or even a voice call (among others), multi-factor authentication prevents most identity breach, spyware, and social engineering attacks.  This is becoming more and more important as more of our information (including personally identifiable information [PII]) continues to move to the cloud, including information in SharePoint.


IMPLEMENTATION OPTIONS FOR ON PREMISE MULTI-FACTOR AUTHENTICATION

So the next step is to figure out how to implement multi-factor authentication for an on-premise SharePoint site.  Currently I can see only four options (if you know of others, please let me know):


Option 1: Use simple Azure Multi-Factor authentication

This requires that you store your user identities in Azure AD.  This is usually a non-starter, as most organizations typically store their identities in on-premise AD.  There are ways to perform AD syncing in order to replicate on-premise identities in the cloud, but this is neither simple nor, in most cases, free of governance issues.

This is the approach I would use if it were acceptable to store user identities in Azure AD, as in typical Office 365 scenarios.

See Multi-Factor Authentication documentation for details: http://azure.microsoft.com/en-us/documentation/services/multi-factor-authentication/


Option 2: Use ADFS

ADFS will authenticate based on user certificates from the local certificate store or on claims providers.  This will, however, require extensive configuration of ADFS and the implementation of a trusted identity provider inside SharePoint.  This may get simpler in the next version of Windows Server.

As it stands today, this should be chosen only for non-cloud-based Single Sign-On applications, and not for a simpler scenario such as typical multi-factor authentication, due to the complexity of the implementation.  If, however, you want to implement the secondary authentication method via a third-party secure provider (such as RSA SecurID), this is likely the approach you should take.

See Under the hood tour on Multi-Factor Authentication in ADFS for details: http://blogs.msdn.com/b/ramical/archive/2014/01/30/under-the-hood-tour-on-multi-factor-authentication-in-ad-fs-part-1-policy.aspx


Option 3: Implement forms authentication and customize the login page to implement Multi-Factor authentication

First you authenticate the user against your favorite identity store (such as AD or an ASP.NET membership provider), and then you apply custom logic for SMS, email, or voice call authentication.  A team of skilled developers should be able to implement this; however, you will need a provider service to send and receive the secondary authentication communications.

This should be the solution if you want to implement multi-factor authentication entirely in-house.  A minimal sketch of what the custom second-factor logic might look like follows.
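To make the idea concrete, here is a minimal C# sketch of the second-factor step, assuming the first factor (user name and password) has already been validated against your membership provider.  Everything here is illustrative: the class names and the ISmsGateway interface are hypothetical stand-ins for whatever provider service you contract, not a real API.

    using System;
    using System.Security.Cryptography;

    // Hypothetical stand-in for the contracted SMS/email/voice provider service.
    public interface ISmsGateway { void Send(string phoneNumber, string message); }

    public class OneTimeCode
    {
        public OneTimeCode(string userName, string code, DateTime expiresUtc)
        { UserName = userName; Code = code; ExpiresUtc = expiresUtc; }
        public string UserName { get; private set; }
        public string Code { get; private set; }
        public DateTime ExpiresUtc { get; private set; }
    }

    public class SecondFactorService
    {
        private static readonly TimeSpan CodeLifetime = TimeSpan.FromMinutes(5);

        // Generate a six-digit one-time code and send it out-of-band after the
        // first factor has validated successfully. (Modulo bias is acceptable
        // for a sketch; a production implementation should avoid it.)
        public OneTimeCode IssueCode(string userName, string phoneNumber, ISmsGateway gateway)
        {
            byte[] buffer = new byte[4];
            using (RandomNumberGenerator rng = RandomNumberGenerator.Create())
                rng.GetBytes(buffer);
            int code = (int)(BitConverter.ToUInt32(buffer, 0) % 1000000);

            string codeText = code.ToString("D6");
            gateway.Send(phoneNumber, "Your sign-in code is " + codeText);
            return new OneTimeCode(userName, codeText, DateTime.UtcNow.Add(CodeLifetime));
        }

        // Second page of the custom login flow: compare the user's entry against
        // the issued code and reject anything expired.
        public bool Validate(OneTimeCode issued, string userEntry)
        {
            return DateTime.UtcNow <= issued.ExpiresUtc
                && string.Equals(issued.Code, userEntry, StringComparison.Ordinal);
        }
    }

The same pattern applies to email or voice verification; only the delivery channel behind the gateway changes.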


Option 4: Implement an Azure Multi-Factor Authentication Server in your on premise environment and use the Azure Multi-Factor Authentication Service

This is really a combination of options 1 and 3.  It uses the Azure Multi-Factor Authentication Service in the cloud and the Azure Multi-Factor Authentication Server installed on premise on a server with internet access.  The benefit here is that you don’t have to do custom development or maintain any code; rather, you perform a server installation and configuration only.

This should be the solution if you want to implement multi-factor authentication with no development involved, using user identities in your on-premise AD store.  It is also the solution to consider for cloud-based Single Sign-On applications.

The following overview video shows how the process works:

[Video: Azure Multi-Factor Authentication with an on-premise Multi-Factor Authentication Server]


See Enabling Multi-Factor Authentication for On-Premises Applications and Windows Server for details: http://technet.microsoft.com/en-au/library/dn249467.aspx


In most on-premise SharePoint use cases, Option 4 will be the best solution…

Implementing ECTs in SPD using Stored Procedures

If you plan to build your External Content Types (ECTs) using stored procedures, you will need a separate stored procedure for each CRUD operation.  In addition, you will need separate stored procedures for any associations you might need.  It is important to note that each Read List, Read Item, and Association stored procedure needs to return all the fields that will be required by any other stored procedure defined for that ECT.  In other words, the Read List, Read Item, and Association stored procedures need to return exactly the same fields.  If they don’t, you will get runtime errors.

Since most examples center on tables, you will often not see a detailed discussion of the fields required for all the operations, as tables always return all of their fields.  So, to avoid unintended runtime errors with your ECTs, always make sure that your stored procedures return all the fields you think you might need, even if you expect not to need them in a particular ECT operation definition.  SharePoint Designer (SPD) then allows you to define which of these fields should be included in the ECT definition.

The following is a list of field issues that you should be aware of:

  • Unique identifiers: Each stored procedure needs to provide a unique identifier of type integer.  SPD will allow you to use other types of unique identifiers, but you will run into runtime errors if you try to perform any association, create, update, or delete operations.  You need these integer identifiers to avoid issues, even if they are completely meaningless to your solution.
  • Limit filters (Read List operations): If your data can return more than two thousand records, this will become a big problem down the line, because BCS has a 2000-item throttling limit by default.  This limit can be changed; see BCS PowerShell: Introduction and Throttle Management.  You can go without limit filters in development and not see any issue, even if your database has hundreds of thousands of records, because External lists implement paging by default.  Just understand that if you are using the object model (BCS Runtime or Client object models) to access your data, all records will be returned to you.  This can be a major cause of performance degradation, and you will likely not see it until you are in a production environment with greater latency (such as distributed servers, zones, and SSL implementations that you are unlikely to have in development).  One important thing to note is that a limit filter on its own just limits the items returned; without another filter type you can only ever access a subset of your data.  For example, if you want to limit the number of books returned by a query to 100, you would add a limit filter plus another filter such as a Wildcard filter (say, on a book’s partial title or publish date); you would then get back at most 100 books matching the Wildcard filter.  So, in order to implement limit filtering on a Read List operation, your Read List stored procedure needs an input parameter to use as an additional filter criterion.  A code sketch of filtered access through the object model follows after this list.
  • Nullable field types: SPD will give you warnings if it finds fields that are nullable, but it can handle them just fine.  Be careful, though: External lists will try to return empty strings for these fields if the fields are not required, which is a real problem if the field is not of a CHAR, VARCHAR, or some other string type; you will get runtime errors.  If you are using these fields via the object model (BCS Runtime or Client object models), you can handle this by returning nulls for these field types.
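As promised above, here is a minimal server-side sketch of filtered access through the BCS Runtime object model.  The entity namespace and name, the filter positions, and the field names are all assumptions for illustration; match them to your own model.

    using Microsoft.BusinessData.MetadataModel;
    using Microsoft.BusinessData.MetadataModel.Collections;
    using Microsoft.BusinessData.Runtime;
    using Microsoft.SharePoint;
    using Microsoft.SharePoint.BusinessData.SharedService;

    public static class BookReader
    {
        // Runs inside a SharePoint web context, so SPServiceContext.Current is available.
        public static void ReadSomeBooks()
        {
            BdcService service = SPFarm.Local.Services.GetValue<BdcService>(string.Empty);
            IMetadataCatalog catalog =
                service.GetDatabaseBackedMetadataCatalog(SPServiceContext.Current);

            // Assumed entity namespace and name.
            IEntity entity = catalog.GetEntity("http://contoso/books", "Books");
            ILobSystemInstance lobInstance =
                entity.GetLobSystem().GetLobSystemInstances()[0].Value;

            // Without setting these filters, object model reads pull back every
            // record the throttle allows, unlike an External list, which pages.
            IFilterCollection filters = entity.GetDefaultFinderFilters();
            ((LimitFilter)filters[0]).Value = 100;           // assumes filter 0 is the limit filter
            ((WildcardFilter)filters[1]).Value = "SharePoint"; // assumes filter 1 targets the title

            IEntityInstanceEnumerator books = entity.FindFiltered(filters, lobInstance);
            while (books.MoveNext())
            {
                IEntityInstance book = books.Current;
                // Work with book["Title"], book["PublishDate"], and so on.
            }
        }
    }

The same filters drive the External list experience; the difference is that the list applies paging for you, while object model code must ask for limits explicitly.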

Authenticating a BCS Solution to an External System

The first questions you should ask yourself are how you want to access the data in SQL Server and which accounts you want to use to get it.  Most examples you will find use PassThrough, which essentially passes the logged-on user’s credentials to SQL Server.  This is a problem when you have thousands of users: do you really want to give customers direct access to the database?  Okay, what are our other options then?

We could use RevertToSelf.  This means we would use the identity of the application pool to get our data.  This is a viable option if we treat all users of our application the same.  Unfortunately, my client wanted customers to be able to perform CRUD operations on their own data, but no one else’s.  If we used RevertToSelf, the database would not know whether the current request is for a user who should or should not be able to update the requested information.  So that leaves us with one final option: Impersonation.

Impersonation is implemented using the Secure Store Service (SSS) in SharePoint 2010.  The idea is that the SSS detects the current logged-on user’s identity and, based on permission rules we create in the Secure Store, the request to SQL Server is permitted or denied.  If permitted, the SSS uses an impersonated identity defined in the Secure Store to make the request to SQL Server.  This approach is ideal if you are going to deploy the solution to multiple environments, as users and their permissions can differ between environments; it pushes security into a configuration step, abstracting it from the solution itself.  Another benefit is that the client wanted to authenticate customers via Claims authentication but wanted their staff to log in using AD.  The SSS allows us to use Claims groups as well as AD groups, and gives us the ability to assign Claims users database permissions that differ from the AD users’ permissions.
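For the declarative ECT this mapping is pure configuration, but if you ever need the same impersonation from your own code (a custom web part or connector, for example), the Secure Store exposes it through the ISecureStoreProvider API.  A minimal sketch follows; the target application ID "CustomerDb" is an assumption for illustration.

    using Microsoft.BusinessData.Infrastructure.SecureStore;
    using Microsoft.Office.SecureStoreService.Server;

    public static class SecureStoreLookup
    {
        // Resolves the impersonation credentials that the Secure Store maps to
        // the current logged-on user. "CustomerDb" is an assumed target application ID.
        public static void ReadMappedCredentials()
        {
            ISecureStoreProvider provider = SecureStoreProviderFactory.Create();

            // GetCredentials resolves against the current user's identity, which is
            // how Claims users and AD users can map to different stored accounts.
            using (SecureStoreCredentialCollection credentials =
                       provider.GetCredentials("CustomerDb"))
            {
                foreach (ISecureStoreCredential credential in credentials)
                {
                    if (credential.CredentialType == SecureStoreCredentialType.WindowsUserName)
                    {
                        // credential.Credential is a SecureString; decrypt it only
                        // at the moment you build the connection to SQL Server.
                    }
                }
            }
        }
    }

If no mapping exists for the calling user, the lookup fails, mirroring the permit-or-deny behavior described above.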

This is the approach we took for authenticating to SQL Server, and I will go into further detail about it in a future blog post.  For now, you can find more information on Authenticating to Your External System on the BCS Team Blog.

Why choose a BCS solution?

Surfacing external data in SharePoint enables users to build composite applications that give access to critical information and make their interactions with that information more convenient.  Business Connectivity Services (BCS) is the SharePoint service that allows surfacing of external data from SQL Server, web services, or a .NET Assembly Connector.  SharePoint even provides no-code BCS solutions to surface the external data via SharePoint Designer, allowing for rapid development, and provides External lists for quickly interacting with that data.  You can also secure the data by setting permissions on who can create, read, update, and delete (CRUD) it.  You can even crawl that data using SharePoint Enterprise Search and set up a profile page to render the search results in a meaningful way.  In short, you can rapidly develop a feature-rich front end for your external data.

Search-enabling your BCS solution will, however, require more than SharePoint Designer to develop.  The same is true if you want to deploy your solution to multiple environments.  This does not mean that you have to write code, but it does mean that you will find yourself in Visual Studio modifying the declarative markup that SharePoint Designer produces and packaging that markup into WSP solution packages.  This approach lets you develop external data solutions quickly, deploy them to multiple environments consistently, and give your users the ability to immediately search and render meaningful results.

Now, if you determine that you need to integrate external data with SharePoint but still want custom forms or a richer user experience for interacting with that data, you could write code using the SharePoint API against External lists.  This is often touted as one of the great things about External lists: you can treat them like any other SharePoint list (a minimal sketch follows below).  This approach works for small datasets, but for large datasets, and any time you are concerned about the performance of your forms, you will really want to write your code against the BCS Runtime or Client object models.  This is a very powerful approach, as you can develop a very rich custom user interface to interact with your external data within the SharePoint context.  You can even access your external data from other applications via the BCS Client object model.
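To illustrate the treat-it-like-any-list point, here is a minimal client object model sketch; the site URL and the list title "Customers" are assumptions.  This is exactly the convenience, and the trap: the query syntax is ordinary, but every item still travels the full BCS round trip to the external system.

    using Microsoft.SharePoint.Client;

    public static class ExternalListReader
    {
        public static void ReadCustomers()
        {
            // An External list is queried exactly like a normal list.
            using (ClientContext context = new ClientContext("http://server/sites/crm"))
            {
                List customers = context.Web.Lists.GetByTitle("Customers");

                CamlQuery query = new CamlQuery();
                query.ViewXml = "<View><RowLimit>100</RowLimit></View>";

                ListItemCollection items = customers.GetItems(query);
                context.Load(items);
                context.ExecuteQuery();

                foreach (ListItem item in items)
                {
                    // item["Name"], item["Phone"], etc.: the same field access
                    // you would use against any other SharePoint list item.
                }
            }
        }
    }

For large datasets, the equivalent read through the BCS object models (as sketched earlier) lets you apply limit and wildcard filters explicitly instead of relying on list paging.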

As I have discussed, SharePoint provides for rapid development against external data and lets you apply security and search to that data.  It also provides the capability to build very powerful, very rich custom user experiences on top of it.  Deploying these solutions to different environments and making your application production ready presents some challenges that are not well documented.  In my next few blog posts, I will go through the process of building an external data solution from beginning to end and show you techniques that will make your solution more stable and production ready.