Monday, December 14, 2009

Troubleshooting Certificate Issues

For the past few days, we’ve been working on SAML 2.0 interoperability with OIOSAML and had to dig pretty deep to troubleshoot some issues we were running into. OIOSAML is an implementation of a SAML 2.0-compliant service provider for Java and J2EE applications that runs on Apache Tomcat. The issue wasn’t rocket science; however, if we could not resolve it, federated authentication couldn’t be enabled for this application. The only error on the SP was:

Stack Trace:

Caused by: dk.itst.oiosaml.sp.model.validation.ValidationException: The response is not signed correctly

at com.sf.sfv4.authentication.saml2.extend.SFSAML2Response.validateResponse(SFSAML2Response.java:97)

at com.sf.sfv4.authentication.saml2.SFSAML2AssertionConsumerHandler.handleSAMLResponse(SFSAML2AssertionConsumerHandler.java:392)

... 45 more

A Google search on the error turned up some possible leads. Given the message, one would immediately suspect the signing certificate, and that suspicion is reasonable, but you still have to prove it.

When the SP consumes the SAMLResponse from an issuing IdP, the SP checks the IdP metadata file for a valid issuer, or entity ID. This value is stored in the local metadata .xml of an STS, within the EntityDescriptor element under the entityID attribute. For example, when publishing federation metadata in ADFSv2, these values are within the first element:

<EntityDescriptor wsu:Id="8e0d3ee9-0865-49c7-9c05-c8c64399757f" entityID="https://xxx.xxxx.com/Trust" xmlns="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">

Understanding what happens under the hood helps enormously in troubleshooting problems. When an SP (RP-STS) consumes an incoming SAMLResponse, it checks its policy store for a valid entity ID, which tells the token issuance service which certificate to use when validating the authenticity of the message via the digital signature within the response or assertion sections of the SAML token.
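As an illustration of the metadata lookup (this is not part of OIOSAML or ADFS, just a standalone sketch), pulling the entityID out of a SAML metadata document takes only a few lines; here it is in Python using the standard library:

```python
import xml.etree.ElementTree as ET

METADATA_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

def get_entity_id(metadata_xml):
    """Extract the entityID attribute from a SAML 2.0 metadata document."""
    root = ET.fromstring(metadata_xml)
    # EntityDescriptor may be the document root, or nested one level down.
    if root.tag == f"{{{METADATA_NS}}}EntityDescriptor":
        return root.get("entityID")
    descriptor = root.find(f"{{{METADATA_NS}}}EntityDescriptor")
    return descriptor.get("entityID") if descriptor is not None else None
```

Comparing this value against the Issuer in the incoming SAMLResponse is the first cheap check before digging into signatures.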

However, in my scenario we never exchanged federation metadata; we only provided parameters manually, because the delivery method was IdP-initiated POST binding. Therefore, we had to prove the signature was bad. To do so, we wrote an application and published it as an RP to our IP-STS. The core functionality of signing an XML document is covered in Rebecca Croft’s blog, Apollo Jack, using the System.Security.Cryptography and System.Security.Cryptography.Xml namespaces.

The SAMLResponse you consume can be displayed in a simple web form, where you can write code to validate the SAML formatting and digital signature. The message will be Base64-encoded, so you need to decode it before checking the signature.

To decode the message, you can use this method:

public static string decodeMessage(string samlResponse)
{
    byte[] encodedDataAsBytes = System.Convert.FromBase64String(samlResponse);
    string decodedSAMLResponse = System.Text.Encoding.UTF8.GetString(encodedDataAsBytes);
    return decodedSAMLResponse;
}
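For quick ad-hoc checks outside of .NET, the same decode is a one-liner in other languages as well; here is an illustrative Python equivalent (not part of the application above):

```python
import base64

def decode_message(saml_response):
    """Base64-decode a SAMLResponse POST parameter back into its XML text."""
    return base64.b64decode(saml_response).decode("utf-8")
```

Once decoded, you can eyeball the XML or feed it to whatever signature-validation code you have on hand.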



To check a signature, you’d use the public key portion of the signing certificate and the SignedXml.CheckSignature method. From there, you can be assured the signing certificate will validate on the RP side.

Sunday, December 13, 2009

Troubleshooting the MSIS7012


Visualizing WS-Federation and SAML Profiles

SAML and WS-Federation within ADFSv2 may (or may not) introduce new concepts to the AD administrator. The immediate reaction may be, “I’m not a developer!” However, understanding the technology and how to implement it in the enterprise is no different than understanding the Kerberos authentication protocol used by Active Directory. Travis Spencer published a very good slide deck which covers all the dance steps for both WS-Federation and SAML profiles. It’s really good and helps visualize what happens behind all the federation acronyms and terms.

Animated Explanation of SAML

Friday, December 4, 2009

Implementing Single Sign-on with SalesForce.com

Getting single sign-on to work with Salesforce.com is pretty straightforward. They support both SAML 1.1 and now 2.0 formats using IdP-initiated POST bindings. ADFSv2 Beta 2 currently does not support IdP-initiated SSO; although, from what I’ve heard, it will in RTM. So to get this working today, you can do it through a custom STS.

All Salesforce.com requires is the Username or Federation ID to be passed as an assertion within the SAML token. This can be presented either in the Subject or as an Attribute value. The simplest method is to do it through the Subject. On the Salesforce.com side, the federated single sign-on using SAML settings should be:

  1. SAML Enabled: yes
  2. SAML User ID Type: Username
  3. SAML User ID Location: Subject
  4. SAML Version: 2.0
  5. The public key of your Token Signing Certificate needs to be uploaded for validating token authenticity

Salesforce.com provides a validation tool to compare your generated SAML Response against the SSO settings on their server. This is pretty helpful in working through errors.

Getting a properly formatted SAML Response is probably the most important key for federated authentication to work. If there is something not standard or funky in the token, the STS that attempts to consume it will likely throw an error.

Below is a working SAML 2.0 token we’re using for Salesforce.com (note that I’ve removed my signature information):

<?xml version="1.0" encoding="UTF-8"?>
<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" Destination="https://login.salesforce.com" ID="fhlaclpimfkgkjpbdijjcjahbhbldojhekcojnog" IssueInstant="2009-12-04T09:52:35Z" Version="2.0">
  <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">……</Signature>
  <saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">mycompany.com</saml:Issuer>
  <samlp:Status>
    <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success" />
  </samlp:Status>
  <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" ID="baofcgcmpmmekakjkkbomfbefcfdljgbkbdifohm" IssueInstant="2009-12-04T09:52:35Z" Version="2.0">
    <saml:Issuer>mycompany.com</saml:Issuer>
    <saml:Subject>
      <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">ccalderon@mycompany.com</saml:NameID>
      <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
        <saml:SubjectConfirmationData NotOnOrAfter="2009-12-05T09:52:35Z" Recipient="https://login.salesforce.com" />
      </saml:SubjectConfirmation>
    </saml:Subject>
    <saml:Conditions NotBefore="2009-12-04T01:52:22Z" NotOnOrAfter="2009-12-05T09:52:35Z">
      <saml:AudienceRestriction>
        <saml:Audience>https://saml.salesforce.com</saml:Audience>
      </saml:AudienceRestriction>
    </saml:Conditions>
    <saml:AuthnStatement AuthnInstant="2009-12-04T09:52:35Z">
      <saml:AuthnContext>
        <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified</saml:AuthnContextClassRef>
      </saml:AuthnContext>
    </saml:AuthnStatement>
    <saml:AttributeStatement>
      <saml:Attribute Name="ssoStartPage" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
        <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">https://localhost/SalesForce.SSO/SSOLogin.aspx</saml:AttributeValue>
      </saml:Attribute>
      <saml:Attribute Name="logoutURL" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
        <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">https://www.salesforce.com</saml:AttributeValue>
      </saml:Attribute>
    </saml:AttributeStatement>
  </saml:Assertion>
</samlp:Response>
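A quick structural sanity check of a response like the one above can be scripted before you ever touch the validation tool. Here’s an illustrative Python sketch (not a Salesforce tool, and no signature checking, just the structural essentials Salesforce keys off):

```python
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def check_response(xml_text):
    """Return a list of structural problems found in a decoded SAMLResponse."""
    root = ET.fromstring(xml_text)
    problems = []
    status = root.find(f"{{{SAMLP}}}Status/{{{SAMLP}}}StatusCode")
    if status is None or not status.get("Value", "").endswith(":Success"):
        problems.append("StatusCode is missing or not Success")
    assertion = root.find(f"{{{SAML}}}Assertion")
    if assertion is None:
        problems.append("no Assertion element")
    else:
        if assertion.find(f"{{{SAML}}}Issuer") is None:
            problems.append("Assertion has no Issuer")
        if assertion.find(f"{{{SAML}}}Subject/{{{SAML}}}NameID") is None:
            problems.append("Subject has no NameID (Salesforce matches on this)")
    return problems
```

An empty list doesn’t mean the token is valid, only that the parts the SP will look for are actually present.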

Wednesday, November 11, 2009

Can ADFSv2 Beta2 work with ZXID?

This week, we configured interoperability with an STS running ZXID for SP-initiated SSO. ZXID is an open source IdM for SAML SSO. It’s basically an Apache httpd auth module for SAML SSO. It uses pure SAML 2.0 and ID-WSF Web Services, and other language bindings are supported through SWIG. More on the product, and how it works, can be found here:

OpenLiberty Secure Identity Web Service
Apache with mod_auth_saml Recipe

Thoughts? Pretty cool…

Friday, October 16, 2009

FIM RC1: Access to the requested resource(s) is denied

A common attribute used in ILM projects is the “Employee Status” attribute. In RC1, this value does not exist for the user resource type within the portal. Additionally, there might be more attributes you need to create and associate with any resource type; therefore, after going through the procedures documented in the “Introduction to Schema Management” guide, you’ll probably experience the following error when exporting data from the FIM MA:

"failed-web-motification-error"

Type: Microsoft.ResourceManagement.WebServices.Client.PermissionDeniedException

Message: Access to the requested resource(s) is denied

Stack Trace:    at Microsoft.ResourceManagement.WebServices.Client.UninitializedResource.PerformUpdate()
   at Microsoft.ResourceManagement.WebServices.Client.UninitializedResource.Update()
   at MIIS.ManagementAgent.RavenMA.ExportObjectModification(DataSourceObject dsObject, SchemaManager schemaManager)
   at MIIS.ManagementAgent.RavenMA.Export(DataSourceObject dsObject)

As Joe mentions on the forums, in RC1 the default MPRs list explicit attribute values within the list of resource attributes versus just saying “All Attributes.” Any custom attribute needs to be added in order for the synchronization account to update it during an export procedure. To do so, just add the attribute to the “Synchronization: Synchronization account controls users it synchronizes” MPR. Not sure if this is relevant, but I had to cycle my FIM Service for it to apply immediately.

Wednesday, September 16, 2009

Automating MOSS 2007 installs

Let’s build onto the process described in my last post. This time, let’s look at automating the setup of MOSS 2007. Here is a link which describes the process for “slipstreaming” the MOSS setup files with SP1; therefore, I’m going to skip that. Apparently, the Product Group has released a downloadable version also, so this might be useful for future service packs and updates.

Depending on where you want to go, if you still need to install SQL…you can bolt this process right on top of the unattended SQL installation procedure. For me, this comes in handy when re-building my farm(s) for development.

The prerequisites for installing MOSS 2007 on Windows 2008 are the web server role with the IIS 6 management components. You can install these by using servermanagercmd.exe with the -i switch plus the [Web-WebServer] and [Web-Mgmt-Compat] components. For example:

servermanagercmd -i Web-WebServer

servermanagercmd -i Web-Mgmt-Compat

Assuming SQL is already provisioned and you have a slipstreamed install directory, you can proceed to set up a configuration file for setup. Install the prerequisites first; then use the /config [path and file name] switch to reference a Config.xml file that drives the MOSS 2007 setup. If you’ve slipstreamed your installation files with SP1, the updates will be applied during the installation. Here is the TechNet link on how to use Config.xml for controlling installs or doing more advanced installations.

Below is a sample configuration file I use for simple farm installation, using the following syntax:

\\..\...\MOSS2007_FullSP1\x86Setup\setup.exe /config "config.xml"

<Configuration>
  <Package Id="sts">
    <Setting Id="LAUNCHEDFROMSETUPSTS" Value="Yes" />
    <Setting Id="REBOOT" Value="ReallySuppress" />
    <Setting Id="SETUPTYPE" Value="CLEAN_INSTALL" />
  </Package>
  <Package Id="spswfe">
    <Setting Id="SETUPCALLED" Value="1" />
    <Setting Id="REBOOT" Value="ReallySuppress" />
    <Setting Id="OFFICESERVERPREMIUM" Value="1" />
  </Package>
  <DATADIR Value="C:\Data" />
  <Logging Type="verbose" Path="%temp%" Template="Office Server Setup(*).log" />
  <Display Level="none" CompletionNotice="Yes" SuppressModal="Yes" AcceptEULA="Yes" />
  <PIDKEY Value="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX" />
  <Setting Id="SERVERROLE" Value="APPLICATION" />
  <Setting Id="USINGUIINSTALLMODE" Value="0" />
</Configuration>

Automating SQL 2008 w/SP1 installs

Building customer solutions can require the maintenance of many development environments; therefore, I’d rather not spend my whole day watching the progress bar of some app install. In addition, each development environment may differ slightly in configuration; therefore I need the ability to just point and click for installs, yet I still need the flexibility to change the installation configuration when needed. There are many ways to do this. I know folks who have built elaborate deployment tools that leverage either SQL or XML to store configurations; however, here are some ideas for how I do things using sysprepped VM images. This process can easily be packaged and integrated into a nice automated build process using something like SCCM.

Slipstreaming Source Installation Binaries: Installing prerequisite software is a pain, especially if you have to go back around and patch or apply a service pack. As a best practice, build using the most current advertisements and patches. Applying patches is something that won’t go away; however, if I can reduce my deployment time by merging in service packs (which always take a long time), I can make my process more efficient. Here is a post that provides the steps for slipstreaming SQL 2008 with SP1.

Unattended Installation: Unattended installation methods provide value through automation, in addition to ensuring consistency in the configuration of a system. For example, say I’m deploying across many systems, such as a web farm. I’d want the build automated versus going to each machine. SQL 2008 supports unattended installs by using a configuration file, which provides the ability to deploy SQL throughout the enterprise with the same configuration. Here is the MSDN link which covers installing SQL using configuration files.
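To give a feel for the file format, here is a minimal ConfigurationFile.ini sketch. The option names come from SQL Server 2008 setup; the accounts and values below are placeholders, not our actual configuration:

```ini
[SQLSERVER2008]
ACTION="Install"
FEATURES=SQLENGINE,SSMS
INSTANCENAME="MSSQLSERVER"
SQLSVCACCOUNT="DOMAIN\sqlservice"
AGTSVCACCOUNT="DOMAIN\sqlagent"
SQLSYSADMINACCOUNTS="DOMAIN\Administrators"
QUIET="True"
```

Note that service account passwords are not stored in the file; they’re passed on the setup.exe command line as shown later.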

How To: Create an installation directory to store the source installation files. Within that directory, you can store any prerequisites. For example, mine is: \\XXX.XXX.XXX.XXX\Source\SQLServer2008Ent_FullSP1\Source

Within the pre-requisites directory (\\XXX.XXX.XXX.XXX\Source\SQLServer2008Ent_FullSP1\Pre-Req), I keep the following support files:

  1. .NET 3.5 SP1 (Full)
  2. KB959209 (Updates for .NET 3.5 SP1)
  3. Windows Installer 4.5

The installation directory (\\XXX.XXX.XXX.XXX\Source\SQLServer2008Ent_FullSP1\Setup) maintains:

  1. Installation Files
  2. Configuration File (ConfigurationFile.ini) 

The following commands can be wrapped up into a batch file or installation package to be executed by the installation process.

  1. dotnetfx35.exe /qb /norestart
  2. NDP35SP1-KB958484-x86.exe /q /v /norestart
  3. wusa Windows6.0-KB942288-v2-x86.msu /quiet (will require reboot)
  4. setup.exe /SQLSVCPASSWORD="********" /AGTSVCPASSWORD="********" /ConfigurationFile="%Path to ConfigurationFile.INI%"

Thursday, June 25, 2009

Transitioning to Geneva Framework and Server

This week, I’m getting the opportunity to play catch-up and get my feet wet with Geneva. So far, it’s awesome because there is so much material already out! As soon as all my pre-reqs were installed, integration with VS 2008 immediately worked {Per DL “huh, a Beta product working” =-)}! Yep, the option to “Create a new STS project in the current solution” is pretty slick. Developers can begin building an application immediately without having to wait for the IT guy, keeping everything within VS until it’s time to deploy a build.

If you’ve already played with the federation stuff, I suggest watching Channel 9’s interview with Donovan Follette on making the shift from ADFS v1 to Geneva and Jan Alexander on the claims transformation language in Geneva Server Beta 2. Both address all the important things you need to know to get started such as the new concepts Geneva introduces and how they relate to the old concepts used in ADFS v1.

Check it out, the links to Channel 9 are above!

Wednesday, April 8, 2009

AD PowerShell Cmdlets & AD WebServices

New features coming out for Windows Server 2008 R2 that I’m really interested in are the AD PowerShell cmdlets and AD Web Services. This evening, I happened to stumble on the PG’s blog, “Active Directory PowerShell Blog,” which provided some valuable info on what’s coming soon! Of course, the first thing I did after reading the first few posts was begin my download of R2 so I can play with them myself. So much to learn, so little time…

Let me summarize what’s new:

Basically, the AD PsH cmdlets will immediately support 4 categories (Account, Topology, DS Object, Providers) for AD administration. Here is a link which breaks down the actual cmdlets. Just with what you see, you can bet there is a lot of opportunity for extensibility (or as they refer to it, “Advanced Functions”)!

The next thing is the AD WebServices, which support both ADAM and AD upon installation.

Here is the link to their blog:

Active Directory PowerShell Blog

Saturday, March 7, 2009

Using PowerShell and S.DS.AD to create Sites and Service objects

System.DirectoryServices.ActiveDirectory (S.DS.AD) is a .NET namespace available for performing common tasks related to Active Directory Domain Services. S.DS.AD differs from S.DS in that it is a pure .NET interface which allows us to extend deeper into DS development. See S.DS.AD Scenarios here.

With PowerShell (PSH), we can leverage the classes in this namespace to script common manual tasks. For example, in a migration scenario, managing AD sites and services can be time consuming to set up. Here are some functions I wrote which allow you to automate these processes using PSH.

To do bulk creations of site objects, you would store your configuration in a CSV file and use them as parameters to each PSH function.

Say we need to 1. Create Sites, 2. Create Subnets, 3. Create SiteLinks, and 4. Configure our SiteLinks. Using Excel, you can create 4 CSV source files for each task, then use the Import-CSV and ForEach-Object cmdlets to call each function for each record.
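A source file is just a header row matching the parameters you pass, plus one row per object. For instance, the site-link file (hypothetical names and values) might look like:

```csv
SiteLinkName,Site,Cost,Interval
HQ-Branch1,Branch1,100,180
HQ-Branch2,Branch2,200,180
```

Import-Csv exposes each column as a property on the pipeline object (e.g. $_.SiteLinkName), which is how the function calls below pick up their arguments.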

For example:

Import-Csv C:\importFile.csv | ForEach-Object {Create-Site $_.SiteName}
Import-Csv C:\importSubnets.txt | ForEach-Object {Create-SubNet $_.SubNet $_.SiteName}
Import-Csv C:\importSiteLinks.txt | ForEach-Object {Create-SiteLink $_.SiteLinkName $_.Site $_.Cost $_.Interval}

Here are the PSH functions:

Creating AD Sites

Function Create-Site{Param ($siteName)
$forest = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()
$type = [System.DirectoryServices.ActiveDirectory.DirectoryContextType]"forest"
$contextType = New-Object System.DirectoryServices.ActiveDirectory.DirectoryContext($type,$forest)
$site = New-Object System.DirectoryServices.ActiveDirectory.ActiveDirectorySite($contextType,$siteName)
$site.Options = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySiteOptions]::GroupMembershipCachingEnabled
$site.Save()
Write-Host "Creating site object $siteName..." }

Creating AD Subnets

Function Create-SubNet{Param($subNetName,$siteName)
$forest = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()
$type = [System.DirectoryServices.ActiveDirectory.DirectoryContextType]"forest"
$contextType = New-Object System.DirectoryServices.ActiveDirectory.DirectoryContext($type,$forest)
$site = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::FindByName($contextType, $siteName)
$subnet = New-Object System.DirectoryServices.ActiveDirectory.ActiveDirectorySubnet($contextType,$subNetName,$site)
$subnet.Save()
Write-Host "Creating subnet object $subNetName..." }

Creating AD SiteLinks

Function Create-SiteLink{Param($siteLinkName,$siteName,$siteCost,$repInterval)
$forest = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()
$type = [System.DirectoryServices.ActiveDirectory.DirectoryContextType]"forest"
$contextType = New-Object System.DirectoryServices.ActiveDirectory.DirectoryContext($type,$forest)
$trans = [System.DirectoryServices.ActiveDirectory.ActiveDirectoryTransportType]::Rpc
$site = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::FindByName($contextType,$siteName)
$link = New-Object System.DirectoryServices.ActiveDirectory.ActiveDirectorySiteLink($contextType,$siteLinkName,$trans)
$link.Cost = $siteCost
$link.ReplicationInterval = $repInterval
$d = $link.Sites.Add($site)
$link.Save()
Write-Host "Creating siteLink object $siteLinkName..." }

Adding Sites to a SiteLink

Function Add-SitetoSiteLink{Param($siteName,$siteLinkName)
$forest = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()
$type = [System.DirectoryServices.ActiveDirectory.DirectoryContextType]"forest"
$contextType = New-Object System.DirectoryServices.ActiveDirectory.DirectoryContext($type,$forest)
$site = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::FindByName($contextType,$siteName)
$link = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySiteLink]::FindByName($contextType,$siteLinkName)
$link.Sites.Add($site)
$link.Save() }