Sunday, November 27, 2011

IAG vs provisioning

November has been a very busy month, which has resulted in very few posts.

Part of the busyness was a trip to San Diego for the Gartner IAM 2011 conference. Overall the conference was great fun and I met lots and lots of old friends. Going to IAM conferences is a bit like going to a high school reunion. The food was also excellent, and as the conference was held in a small hotel I actually got to go outside and get some fresh air every now and then. This is a rare luxury at the typical Las Vegas gathering.

In my opinion the overall trend was a continuation of the direction that was announced in last year's Gartner IAM Magic Quadrants. You could also see the same trend in the Forrester Role Management And Access Recertification Wave. The concept has gotten a brand new TLA in the form of IAG (I still like RMAR) and will now have its own little quadrant.

So what is IAG? Well, the core concept revolves around the simple fact that it seems to be very hard to get ROI on conventional provisioning-driven IAM projects. In theory IAM projects are supposed to provide ROI by lowering operational costs. In practice this has turned out to be an elusive goal.

As we all like to stay employed we now have to figure out something else to sell to the business, and the new pitch is service catalogs, access recertification, transparency and governance. The core user needs to shift from the IT department gnome (a.k.a. the sysadmin) to the actual business users.

What does this mean for the applications? Primarily it means that they need to be prettier and easier to use. The Amazon shopping cart analogy seems to be very popular for access requests, as do credit-score-like risk assessment numbers. Access recertification as well as approval workflows need to be appealing and easy to use for non-IT users. Enterprise role management seems to have fallen out of fashion and we are back to handling entitlements, albeit nicer-named entitlements with better tans (i.e. no AD group names like fap0503dfg).

The current leaders in this space seem to be Aveksa and Sailpoint, but the big boys are starting to notice and are trying to catch up. IBM has some very interesting stuff in the role space coming into general availability very soon (although they may change the branding of that specific functionality now that roles aren't cool anymore). Oracle just updated Oracle Identity Analytics and I am sure that there is more to come soon.

Sunday, October 16, 2011

Forrester security forum

It is gearing up to be time for the Forrester security forum, but as per usual I can't attend: one of the disadvantages of having a wife who chairs an event is that you kind of have to stay at home and take care of the kids. If I were able to go I would be interested in the following talks:

I am interested in what Chenxi Wang's talk "Securing The Extended Enterprise — Protect Your Information Anywhere, Anytime, And On Any Device" is actually going to contain. It is always good when you get intrigued by the talk abstract. Is she going to talk about BYOD for mobile devices? Or is she setting the stage for Andras Cser and Eve Maler?


Andras will probably continue the very strong (in my opinion) line of reasoning around access governance that he laid the groundwork for in the Forrester Role Management And Access Recertification Wave, but focusing on social networks. Authorization in social networks is not an easy task, and if you add that the user identity may actually reside in another social network, so you might just have a federated user object to authorize on, the problem becomes even more complex.


My guess is that Eve's talk on "Securing And Identity-Enabling Monster Mashups" will focus on OAuth, and I think that is a story that really deserves a continued spotlight. I recently watched a webinar where Eve was one of the speakers, and OAuth clearly can be used in very interesting ways to lift the security of the internal as well as the external ESB: not only supporting authorization at the service account level but taking the authorization to the internal user or even end user level.


There are also some very promising keynotes. Scott Gerlach's piece on how to involve your customers looks really interesting. Most infosec professionals, myself included, normally have trouble involving even the business in IT security, so getting the customers engaged is clearly a new and interesting perspective. The DigiNotar affair has not really gotten very much attention outside of nerd circles so I am very happy that it is being talked about more. The CIO-CISO Partnership: Partnering To Protect Our Customers is another good keynote topic with a really promising abstract.


Full disclosure note: The section at the start of this posting was not a joke. I am married to Laura Koetzle who chairs the event.

Monday, September 12, 2011

XACML training workshop in Washington DC

On September 19-20 Axiomatics will be arranging an XACML workshop in Washington DC. I will be there and perhaps I will meet some of my readers.

In my opinion the most interesting aspect of this workshop is that Axiomatics has managed to establish a fully featured ecosystem around their product. I started looking at the product back in 2009 and at that point it was a useful and very interesting PDP.

The use case I was looking at was online health records for use in pre- and post-FDA-approval registries, and given that Axiomatics had been used in the national Swedish healthcare implementation they had a substantial edge in that the system actually was in production. The main issue with Axiomatics at that point was that getting access to the rest of the pieces you would need for an actual production implementation required using components that were built or heavily configured by companies that really didn't have any global delivery capability. If you needed the stuff delivered in the Nordic market then it worked fine, but if you needed it in the US or Asia Pacific you basically needed to use other products.

Over the last three years Axiomatics has managed to pick up some really smart people, including Gerry Gebel from Burton Group. Gerry and the rest of Axiomatics have worked really hard on establishing connections with other product companies whose products fit very well with the PDP, as well as with professional services organizations that can manage the implementation.

The result of this work can be seen in the speaker list for the XACML workshop. Sailpoint will be there to talk about how you use Sailpoint IdentityIQ to not only provision users to the central user and attribute repository, perhaps an LDAP server from Radiant Logic, but also manage the entire lifecycle of the user, including access recertifications. Layer 7 will talk about how to integrate Axiomatics into your corporate web service gateway or your enterprise SOA platform. Well done Axiomatics!

Sunday, September 4, 2011

RMAR is the word!

If you have been in the IAM space for a while you kind of recognize the waves that regularly hit the industry. One example is the provisioning wave that started picking up speed back in 2004-2005, when most provisioning vendors were simple startups with a few customers and rather rudimentary products. Over the next 24 months each major player (IBM, Sun, Oracle, CA) built or acquired a product in the space, which in turn meant that suddenly the sales and marketing resources available to sell the products increased by a factor of 10-100. Unfortunately the delivery capability of the professional services organizations didn't really grow as fast, which led to some "unfortunate" implementation projects, some of which I was part of.

Go forward a couple of years to 2006-2007 and the hot product is now role management. The same pattern plays out again. Sun buys Vaau RBACx, Oracle buys Bridgestream, IBM stays on the sideline and uses partnerships. There are a couple of independent players that align with the big boys (Aveksa, Sailpoint), but when the economy started to fall apart things started to look bleak for the independents and they were forced to shed staff and cut down on R&D as their customers no longer could afford to start new projects or even keep already initiated projects moving.

A few more years forward and we are now in mid-2011, and Forrester is publishing a new and shiny Role Management And Access Recertification Wave (get it at Sailpoint) that places Aveksa and Sailpoint as the leaders. Certainly not the result I would have expected back in 2008, so I would like to congratulate both Aveksa and Sailpoint on their placement. They have done a very impressive job and shown that a relatively small independent shop can outperform the big boys. Well done!

One major change in the marketplace is that role management and access recertification are getting more and more exposure as a central part of any IAM strategy. Gartner prefers the term IAI (Identity and Access Intelligence) and our Germanic friends at Kuppinger Cole use GRC (Governance, Risk and Compliance). Andras actually doesn't coin his own TLA or eTLA in the report, which I am very disappointed about. Doesn't RMAR sound like something that would conquer the world?

The Forrester take on the subject is that:
As a security and risk leader, if you only have one dollar to spend on identity management, spend it on access governance.
Undoubtedly a very strong endorsement that will result in lots of end-user companies spending even more money in this area.

If we look at the competitive landscape, what does this wave mean? Sailpoint and Aveksa are of course going to get a very substantial boost. They both have really good and mature products, so I am not surprised that they fared very well.

When it comes to the players that fared less well I am not surprised at all about IBM's ranking. IBM is in the process of bringing a brand new product to the market and their current offering really is close to non-existent. I got a sneak peek at the new IBM role manager at Pulse this spring and I am quite convinced that IBM will be a top player once this hits the market in late 2011 or early 2012, but at the moment they deserve the scoring.

I am in a way surprised about Oracle's scoring. Oracle has been trying to come up with a viable offering for a long time and after a couple of false starts (first an internal product that was killed before hitting the market, then Bridgestream/ORM, which was killed after a quite bad showing in the market) they finally got a good product in the form of OIA (ex Vaau RBACx, ex Sun Role Manager). Perhaps the many name changes of the product give a clue about why it no longer is a top notch offering? Take a good product, spend a couple of years integrating it into a major IAM vendor's stack (Sun), and then that vendor promptly gets acquired by another major IAM vendor (Oracle). The new owner spends another couple of years integrating the product into its own stack, and at the end the world has simply moved on and what was a good product is now just run of the mill.

The most interesting conclusion is perhaps that the era when the base for any IAM strategy was the implementation of one of the huge provisioning-centered IAM stacks (Oracle, Sun (RIP), IBM and CA) may be over. Perhaps we are entering a world where provisioning isn't the centerpiece and where the independent players take a bigger part of the market? Another alternative is of course that Larry gets fed up and buys Sailpoint, CA buys Aveksa and the IAM stacks get one more mandatory component.

(Full disclosure note: my wife was one of the editors of this report)

Wednesday, June 29, 2011

IAM project painpoints

In my experience IAM projects generally have severe pain points in three areas:

  1. Processes
  2. Data
  3. Technology
On the process side it is often unclear if the new system should reflect how things should be done or how things are actually done. You also have the built-in conflict between operations (things should be done as simply and straightforwardly as possible) and audit/compliance/security (the processes should provide adequate safeguards). One safe way to fail an IDM project is to not get your processes defined and accepted by the key stakeholders at an early stage of the project, but rather discover this issue during UAT.

If your data is dirty it doesn't really matter how good your provisioning and/or access logic is. Data ownership is often a huge issue, as the owners, if they even exist, usually are blissfully unaware of how bad the data actually is. Data issues are interesting because there are lots of different kinds of data problems. In some cases the data lacks clear referential integrity between different systems, which will hit you during the initial load. Another data issue that may surface if you use user names to generate things like logins and email addresses is that names can cause problems. In many cases you need a reporting structure to be able to communicate with the user's manager. If you don't really know who the manager is, which isn't that uncommon among contractors, then you will have a problem.
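As an illustration of the name problem, here is a minimal sketch of login generation with diacritics stripped and collisions handled. The format (one initial plus five letters of the surname plus a counter) is just an assumption for the example:

```python
import re
import unicodedata

def normalize(part: str) -> str:
    """Strip accents and non-letter characters so names with diacritics survive as logins."""
    ascii_part = unicodedata.normalize("NFKD", part).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-z]", "", ascii_part.lower())

def generate_login(first: str, last: str, existing: set) -> str:
    """Build a login like msands01 and bump the counter on collisions."""
    base = (normalize(first)[:1] + normalize(last)[:5]) or "user"
    for i in range(1, 100):
        candidate = f"{base}{i:02d}"
        if candidate not in existing:
            existing.add(candidate)
            return candidate
    raise ValueError(f"login namespace exhausted for {base}")

# usage
taken = {"msandr01"}
print(generate_login("Magnus", "Sandström", taken))  # msands01 (or the next free number)
```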

Finally, the technology part is about having a vendor that has experience not only in standing up the technology itself but also in integrating it with the target applications. It is not uncommon to spend 2-3 weeks on implementing the technical part of an integration the first time you do it, while it takes you 2-3 days (or even 2-3 hours) the second time. Experienced, high quality technical resources are key to a quick and efficient implementation, but right now there are many more projects than qualified engineers and architects.

Sunday, June 26, 2011

JIT provisioning - the compliance view

JIT provisioning gives you significant advantages in operational agility, as the cost of integrating provisioning with an application, measured in time and effort, becomes a lot smaller than with the conventional provisioning approach. As always there are of course downsides with JIT provisioning, so let's talk about the issues and how to mitigate them.

The reason provisioning systems exist is basically to make onboarding, offboarding and maintenance of the access profile of the corporate user more efficient. The efficiency gain comes partly from automating the actual provisioning and deprovisioning operations and partly from automating compliance activities (who has access to what). It is clear that JIT addresses basic operations in an efficient manner, but what about compliance?

Conventional provisioning systems offer the ability to see what a user has access to and also why the user has access to these resources. The answer to the why question may be that "because the user is an employee the provisioning policy dictates that resource X should be granted" or "the user's manager raised a request for resource Y and the resource owner granted it". Some provisioning systems also support access recertification ("on May 15 2011 the user's manager thought that the user should have access to resource Y"). The access information is often exposed through reporting functions and/or a pretty web interface so auditors can get the information they need without having to understand the inner workings of the provisioning system.

In the JIT world things get a bit more complex. In essence the authorization is based on what the guy on the other end is claiming to be true. In its simplest form anyone who comes over from the partner application would have full access to your application. In a more complex situation you may have the partner sending you either raw user information attributes (user y is in cost center x) or some form of role attributes (user y has the role of broker level two). The application then makes an authorization decision based on this information. This two-tiered authorization model makes the auditor's life substantially harder, but there are ways to increase transparency (i.e. use XACML instead of embedding the authorization decision in code).
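To make the two tiers concrete, here is a minimal sketch of an application deciding purely on what the partner asserts. The resource names, attribute names and policy table are all made up for illustration; the point is that keeping the rules in a declarative table, or better yet in a XACML PDP, rather than buried in application code is what gives the auditor something reviewable:

```python
# Declarative rules keyed on the attributes the partner asserts about the user.
# In a real deployment these would live in a XACML PDP, not in application code.
POLICY = [
    {"resource": "trade-blotter", "require": {"role": "broker", "level": "2"}},
    {"resource": "account-summary", "require": {"role": "broker"}},
]

def is_permitted(asserted: dict, resource: str) -> bool:
    """Permit if any rule for the resource is satisfied by the asserted claims; default deny."""
    for rule in POLICY:
        if rule["resource"] == resource and all(
            asserted.get(attr) == value for attr, value in rule["require"].items()
        ):
            return True
    return False

# The partner told us user y is a level two broker; we never saw the user record itself.
claims = {"subject": "user-y", "role": "broker", "level": "2"}
print(is_permitted(claims, "trade-blotter"))   # True
print(is_permitted(claims, "payroll-admin"))   # False -- no rule, default deny
```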

Even with transparency measures in place, answering the questions "who has access to resource X" and "what resources does user Y have access to" becomes really tricky in a JIT world. If you also need to answer the why questions you are in real trouble. It is going to be really interesting to see which vendor in the access governance space will be the first to address this need.

Tuesday, June 21, 2011

JIT provisioning - the application view

In JIT provisioning I looked at how you could create a just in time provisioning system. In that posting I discussed the case from the identity hub's point of view. Now let's take the other viewpoint and be the app instead.

As the app you basically have made the choice to trust that the identity hub has done a good job with authentication and authorization. You don't really have any other choice than to trust the hub. If you are an application that doesn't need to persist state between sessions your life becomes very simple. You serve the content based on the information provided in the request.

On the other hand, if you need to persist state you basically need to create a new account every time someone with a new primary key attribute shows up. You would also need some kind of mechanism to invalidate accounts that haven't been used for a while, as you would otherwise just accumulate active accounts indefinitely. The disablement could be done through straight ageing (no usage for one month results in the account being put in disabled status or perhaps even deleted) or by querying the identity hub. The query could either be a delta recon (what has been disabled/deleted since I last asked) or a full recon (get all accounts from the hub and see what accounts are present on your side but not on the hub side).
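A minimal sketch of what the stateful-app side could look like, assuming a simple in-memory store and a 30-day ageing rule (both the store and the numbers are placeholders; a real app would persist this and probably combine ageing with a recon against the hub):

```python
from datetime import datetime, timedelta

MAX_IDLE = timedelta(days=30)  # assumption: disable accounts unused for a month

accounts = {}  # local store keyed on the asserted primary key attribute

def on_federated_login(subject, asserted_attrs):
    """Create the local account the first time a subject shows up, then refresh it."""
    account = accounts.get(subject)
    if account is None:
        account = {"subject": subject, "status": "active", "created": datetime.utcnow()}
        accounts[subject] = account
    account.update(asserted_attrs)           # keep a local copy of what the hub claims
    account["last_login"] = datetime.utcnow()
    account["status"] = "active"
    return account

def age_out_stale_accounts(now=None):
    """Straight ageing: disable anything not used within MAX_IDLE."""
    now = now or datetime.utcnow()
    disabled = []
    for subject, account in accounts.items():
        if account["status"] == "active" and now - account["last_login"] > MAX_IDLE:
            account["status"] = "disabled"
            disabled.append(subject)
    return disabled
```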

One interesting aspect of this is that in a company-to-company situation it should be of interest to the hub company to be able to show the application partners what the hub authorization logic really is, as they are in fact trusting the hub blindly. This is a very interesting use case for XACML, as it is much easier to review some XACML than hundreds or even thousands of lines of Java/C#.

Monday, June 6, 2011

JIT provisioning

Let's take a break from the checklists and take a look at another interesting subject: Just In Time (JIT) provisioning.

Over the last couple of years SAML has emerged as the de facto standard for federated authentication and authorization. If you are working with a partner the first question is usually "Do you support SAML?".

Incoming SAML makes it possible to essentially outsource the process of authentication and authorization to a business partner. The partner vouches for the identity of the user and you can essentially use this information to give the user access to your system. This solves the runtime part, but in most cases you still need a "back channel" provisioning process. Getting a SAML assertion telling you that the user "msandr01" would like to log in to your application is good, but most applications need more information to create a working system account.
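As a rough illustration of where that extra information would come from, here is a sketch that pulls the NameID and the attribute statement out of a SAML 2.0 assertion. Signature validation, audience and time-window checks are deliberately left out; a real SP must do all of them, normally via a federation product or library rather than hand-rolled XML parsing:

```python
import xml.etree.ElementTree as ET

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def extract_identity(assertion_xml: str) -> dict:
    """Pull the NameID and any asserted attributes out of a SAML 2.0 assertion."""
    root = ET.fromstring(assertion_xml)
    name_id = root.findtext(".//saml:Subject/saml:NameID", namespaces=SAML_NS)
    attributes = {}
    for attr in root.findall(".//saml:AttributeStatement/saml:Attribute", SAML_NS):
        values = [v.text for v in attr.findall("saml:AttributeValue", SAML_NS)]
        attributes[attr.get("Name")] = values[0] if len(values) == 1 else values
    return {"name_id": name_id, "attributes": attributes}

# "msandr01" alone is rarely enough; the attribute statement is where the email,
# display name and cost center needed to build a working local account would go.
```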

Nishant Kaushik published a very good four-part series of blog postings on this subject about a year ago that I highly recommend. I ran into the problem in a discussion at Wisegate a couple of weeks ago and I am also looking into it for a couple of use cases at work.

The use case we talked about at Wisegate was a bank that had outsourced all of their customer facing applications to various vendors. One vendor did retail banking, another brokerage, a third investment banking etc. All of the different vendors had their own user repositories and own SSO solutions, so a single bank customer could have multiple logins and multiple passwords and would have to log in to each application separately. The business of course didn't like this and wanted an SSO solution.

The true high tech JIT solution would be to use a federated authentication product such as IBM TFIM and do SAML with the apps all at runtime. The hub would be truly lightweight and not persist any information about the users. A typical user interaction with the hub would look like:
  1. Take the request from the user
  2. Figure out which application this user is trying to login to
  3. Figure out if the user has any account in any of the apps by asking the apps
  4. Authenticate the user
  5. Authorize the user
  6. Create the SAML assertion and send it to the app
  7. Act as a reverse proxy in the interaction between the user and the app

In theory this is a great idea but there are some practical considerations.

One issue is latency. Given that this is an online, person-facing transaction the login should ideally not take more than three seconds (or so), and if we end up pushing 10-15 seconds the business will start screaming. The SSO hub and the apps are physically in different places, which means that you will get latency even if you have lightning-fast machines that process the requests.
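A quick back-of-the-envelope calculation shows how fast the three second budget gets eaten; all the numbers below are assumptions for illustration:

```python
# Rough latency budget for the "ask every app at login" hub, under assumed numbers:
# 80 ms round trip between the hub and each vendor-hosted app, four apps queried
# sequentially, plus authentication and assertion generation overhead.
rtt_ms = 80
apps = 4
lookup = apps * rtt_ms   # "does this user exist anywhere?" fan-out
authn = 400              # password check / MFA round trip (assumed)
assertion = 50           # signing and redirecting (assumed)
total = lookup + authn + assertion
print(f"{total} ms")     # 770 ms -- fine on paper, but double the RTT or add a
                         # retry or two and the three second budget disappears fast
```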

Another issue is complexity. This does require quite a lot of bleeding edge technology and there are plenty of things that could go wrong.

In the end the discussion landed on the conclusion that it is probably safer to go for a more conventional approach where the user populations of the apps are reconciled back to a central repository in the SSO hub using a metadirectory product. SAML would still be used to communicate assertions to the applications, but this solution is a lot faster and eliminates a lot of the unknowns. This solution pattern is very common among a number of vendors, including Symplified.

Not as high tech and cool, but it is guaranteed to work and won't cause hard-to-fix latency-based performance problems.

Sunday, June 5, 2011

Checklist manifesto part two - requirements gathering

This posting is a continuation of Checklist manifesto. In that post I discussed how the concept of checklists can be applied to IAM projects on the overall delivery methodology level. Let's talk a bit about how checklists can be used in the different parts of the delivery methodology.

Let's assume that you are using a classical waterfall. This gives you the following steps:

  1. Requirements gathering
  2. Design
  3. Implementation
  4. Test
  5. Go live
  6. Maintenance

In this post I will focus on how you would use checklists in the requirements gathering phase.

One thing I have noticed over my IAM implementations is that if you take a use-case-driven approach, most provisioning projects will contain almost the same use cases. Depending on how you slice and dice your cases and what your scope is, you usually end up with 30-50 core use cases which tend to cover the same subjects.

The use cases may be very different, as each and every company seems to like to do things their own special way, but you will cover the same overall business processes.

This means that a mature implementation organization should be able to come up with a list of use cases that can be given to the more junior resources that will perform the actual requirements gathering. If you are a customer I would definitely include this as a question in the RFP. If you are a junior resource I would speak to your seniors and check whether they have a list of use cases on their hard drives, or whether they can quickly create one based on their previous projects.

More about my experiences in the lovely world of requirements gathering can be found in the post UAT and requirements gathering.

Tuesday, May 31, 2011

Checklist manifesto

Management books are often boring and mostly not really applicable to your situation, so they rarely make good material for a blog post. A while ago my wife was told to read "The Checklist Manifesto" by her boss, and once she was done with it I took a look and actually really liked it.

The core concept in the checklist manifesto is that there are certain series of rather simple steps that need to happen in order to perform a complex process. The core example is the surgery checklist, but Atul Gawande (the author) also uses examples from other fields such as aviation.

Can this principle be used in identity and access management projects? I would say definitely yes.

Good examples of why checklists are useful are my first two IAM projects. In the first project I basically had no clue what I was doing, neither on the process side nor on the technical side. Luckily I was in a very junior position so my lack of knowledge didn't doom the project. Interestingly, many of the senior resources also didn't have any knowledge of the product, but luckily the client realized this and hired a very seasoned person straight from the product manufacturer who managed to get the project back on track and utilized the quite impressive domain knowledge of the senior resources to create a very good solution. The project also managed to pick up some very talented technical resources along the way, which helped quite a lot.

In my second project I had a very senior role. In fact I really wasn't ready to take on the role and the project suffered from my lack of experience. We as an organization also had some other issues that we had to work out, and in the end both I and the organization ended up much stronger, but it took a lot of hard work.

In both projects we ended up creating checklists. In the first the main contractor already had a semi-formal checklist for how to run IAM projects. They didn't know how to run an OIM project, but they applied their general IAM checklist to the project and were decently successful. The initial design that they created was totally unimplementable in OIM, but at least they had a design that after a few months of tweaks and changes ended up being implementable.

In the second project we didn't really have any checklists or any form of process until we got some help from one of the senior PMs. He had a general checklist for how to do IAM engagements and we also developed a form of general checklist for how to do offshore development in IAM projects.

Once we had our checklists in place for how to develop, test and migrate the code we could start delivering code that did what the design said it would do. Now the problem morphed into gathering the correct requirements so that you can create a design that solves the actual business problem. The requirements and design gathering process is much harder to codify, so that was much more of a challenge and it took me a few more years to get to a point where I think I am getting a good handle on it.

Monday, May 16, 2011

Pass through authentication

One of my readers remarked that one of the hardest technical challenges is to migrate things that can't be migrated.

Prime suspects here are password hashes. In most well designed systems you don't store passwords encrypted in a reversible format but rather in the form of one-way hashes (preferably with some salt mixed in). This means that the only way to migrate the passwords is through mass cracking, which usually isn't feasible, or at least shouldn't be.

In TDS there is a very interesting solution to this problem in the form of pass through authentication. You essentially leave the password field empty and specify that when the user tries to authenticate, the directory simply authenticates against the old system. If the authentication is successful the new password is then set in TDS. It is a very good solution, and the design pattern can easily be implemented even if your "authentication repository of choice" doesn't support this functionality natively.
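Here is a minimal sketch of the pattern for a repository that doesn't support it natively: an empty local hash means "delegate to the legacy system and migrate the hash on the first successful login". The legacy_bind callable and the entry layout are placeholders for whatever your directory and old system actually expose:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Salted one-way hash; the cleartext never needs to be stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt.hex() + "$" + digest.hex()

def authenticate(entry, password, legacy_bind):
    """Pass-through pattern: no local hash yet -> try the old system, set the hash on success."""
    stored = entry.get("password_hash")
    if stored:  # already migrated, verify locally
        salt_hex, _ = stored.split("$")
        return hash_password(password, bytes.fromhex(salt_hex)) == stored
    if legacy_bind(entry["uid"], password):  # placeholder for a bind against the old repository
        entry["password_hash"] = hash_password(password)  # migrate on first successful login
        return True
    return False
```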

Saturday, May 14, 2011

Sun set

One of the things my professor in system development at Chalmers told me was to never put anything related to "new" or "next generation" in a system name because at some point the system will be old and will need to be replaced. Having to name the next system "even newer" simply looks silly.

Currently there are a lot of Sun IDM owners contemplating upgrading to something that actually is moving forward. Oracle has promised continuing support and probably will deliver, at a steep price, but with the platform not being improved, investing more money to add new capabilities doesn't make sense. It will also be harder and harder to find staff that can support the system. I recently heard about a Sun client where the staff had spent three months trying to install a new instance of the access manager agent. The client simply couldn't get it to work and had to give up in the end, which of course resulted in some serious angst among the senior leadership: it meant that if the access management agent broke in production, the entire application would be out of order for at least an extended period of time and potentially permanently.

We are currently in the final part of the first phase of our migration off Sun and I thought it may be interesting to talk a bit about the experience.

When you do a major upgrade of an IDM stack you basically have two choices. The first approach is to try to do some form of automated or semi-automated upgrade. You essentially look at the upgrade as a big patch, so you apply the patch in dev and do some regression testing. If everything looks good you start to walk the environments through test, stage and finally prod. In theory this works well and there are even cases when it works in practice (see MIIS to ILM). In the case of going from Sun IDM to Oracle IDM I think it would take a small miracle and a very skilled team to get this approach to work. The product stacks are simply too different and too complex, so there are going to be cases where the conversion simply isn't possible. If you also have substantial customizations in play I would consider the approach close to setting yourself up for failure.

The second approach is to try to leverage as much of your existing documentation as possible but essentially look at the project as a refactoring exercise. If you have good business requirements and workflow documentation then use that, but anything from design and upwards I would strongly recommend that you rework. Most IDM installations don't have good requirements to start with, and given that you usually end up creating delta releases rather than updating the original documentation, even the installations where the docs were good at one point are usually victims of the "here are the original docs and here are the docs for the 5-30 delta releases that we have put in over the last five years" syndrome.

Update: Identigral's blog recently covered this issue. It is interesting to see that their conclusions are quite closely aligned with this posting.

Thursday, April 21, 2011

XACML for eHealth

If you have been reading this blog for a while you may have noticed that I have an interest in XACML and especially in using XACML in the healthcare sector.

If you have similar interests you may want to take a look at a webinar that Axiomatics will be running on May 5 2011.

Friday, March 25, 2011

CISSP

There are certain things in life that you plan to do for a very long time before you actually get around to doing them. In my case CISSP is one of those things that I had thought about doing for more than five years, but something always interfered. In December I finally managed to attend a test and it went well, so now I am a CISSP.

I actually didn't study very much for the test, as the twins eat most of my spare time. I did take the time to read through Shon Harris's All-in-One Exam Guide, which I think helped quite a bit. Really good reading and actually worth taking a look at even if you are not going up for the CISSP.

The best piece of advice I got was to remember to bring some snacks and something to drink. The exam is six hours and you really need something to keep you from keeling over from exhaustion. It has been many years since I felt as totally knackered as I did after that test.

Monday, February 28, 2011

Monitoring

Most enterprise IDM systems are very complicated and complex beasts, with redundancy not only in the presentation layer but also in the application layer and the data persistence layer. This can make it hard to answer the simple question "Is the system working properly or not?". In most cases you also want to be able to spot issues early so that you can fix them before they become a problem that may take down the entire system.

The best way to ensure that all components are healthy and all services are up is to implement a comprehensive monitoring program, so what are the things that you tend to want to monitor?

The most basic monitoring policy is the "wait until the end user yells" approach. In this approach you simply wait for the end user to start screaming, and as long as no one is screaming then things must be fine. This approach has some significant limitations, so it is not the way I would recommend.

Once you start talking monitoring you usually discover that the corporation has some kind of standardized monitoring tool that you should/must use. These tools can usually provide the following functionality:

  • Host monitoring (is the server OS up)
  • CPU/Memory/disk monitoring
  • Process monitoring
Monitoring can be done either with a monitoring agent that is installed on each server or through an agentless approach. Having basic infrastructure monitoring can be very useful, as it will alert you about creeping issues such as small memory leaks or logs that slowly but steadily eat up all available disk space. The trick is to make sure you actually can fine tune the alarm threshold and response level as you go along. In most cases you do want to be told if the CPU suddenly spikes from a max of 25% to 95%, but being woken up at 2 am every second Wednesday because the CPU load spikes for a few seconds during a batch load is not ideal for you (or your marriage), so you do want the ability to put exceptions into the logic.
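The exception logic can be as simple as a list of known batch windows; a sketch, with the window and threshold numbers made up for illustration:

```python
from datetime import datetime, time

# Assumed maintenance windows during which CPU spikes are expected and should not page anyone.
EXPECTED_SPIKES = [
    {"weekday": 2, "start": time(2, 0), "end": time(2, 30)},  # Wednesday 2 am batch load
]

def should_alert(cpu_percent, now: datetime, threshold=90.0):
    """Alert on a CPU spike unless it falls inside a known batch window."""
    if cpu_percent < threshold:
        return False
    for window in EXPECTED_SPIKES:
        if now.weekday() == window["weekday"] and window["start"] <= now.time() <= window["end"]:
            return False  # known batch load, suppress the 2 am page
    return True
```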

In most cases you will have some kind of network or port monitoring as part of your load balancer setup. Given that port oriented network monitoring configuration tends to get very complex I will write about this in a separate post.

The next step is to look at application-aware monitoring. This is usually accomplished by looking at the application logs and escalating entries that follow a certain pattern (i.e. whose log level is ERROR). You can also look for specific error messages that you know are thrown when a specific error condition occurs.
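A sketch of the log-pattern approach; the patterns themselves are invented for illustration, and the interesting part is really the ignore list, which is what keeps nuisance errors from paging anyone:

```python
import re

# Patterns worth escalating: anything logged at ERROR, plus specific messages we know
# correspond to real failure modes (the messages here are made up for illustration).
ESCALATE = [
    re.compile(r"\bERROR\b"),
    re.compile(r"Connection to HR feed .* refused"),
]
IGNORE = [
    re.compile(r"ERROR .* password policy violation"),  # user error, not a system fault
]

def lines_to_escalate(log_lines):
    """Yield log lines that match an escalation pattern and no ignore pattern."""
    for line in log_lines:
        if any(p.search(line) for p in ESCALATE) and not any(p.search(line) for p in IGNORE):
            yield line

# usage: forward anything this yields to your ticketing or paging system, e.g.
# for line in lines_to_escalate(open("/var/log/idm/server.log")):
#     page_on_call(line)   # page_on_call is a placeholder for the real hook
```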

Once you have monitoring in place you should be able to sleep better at night. At least as long as your monitoring doesn't wake you up reporting nuisance errors.

Wednesday, February 9, 2011

UAT and requirements gathering

In provisioning projects, requirements gathering and UAT testing are always interesting areas. The general problem with UAT and system level testing is that in order for the testing to be useful you really need to know which result is the correct result and which is a failure.

When you write bespoke software you can usually find the answer to this question in the design, which in turn is derived from the functional requirements, which are derived from the business requirements. In a typical IAM implementation, on the other hand, you tend to use very feature-rich platforms that provide huge amounts of functionality, most of which you will never use in any given project. It is also often hard to shut down this "bonus functionality", so the functionality is often available even if it really shouldn't be used. Many times when you get to UAT the testers start touching all kinds of buttons and levers, and unless you have an experienced team that has spent a substantial amount of time on actively locking things down you will have some issues with this.

Business process wise, IAM implementations can fall anywhere on a scale ranging from totally transformative to literal refactoring where the business process doesn't change at all. In the first case the challenge is that the UAT needs to reflect the new business process and you also need to ensure that the new business process actually supports all the functions that the business needs. In the second case things are easier because you basically just need to capture an already existing process.

No matter the strategic focus of the IAM project, a proper UAT needs to be business process focused. The whole point of UAT is to ensure that all business critical processes are present and are working as expected, or at least in a way that is acceptable. The irony is of course that if you discover substantial process breaks in the UAT you are usually in deep trouble, as it usually is too late to fix things before the planned go live.

The best way to avoid this situation is to ensure that you do a form of dry-run UAT very early in the process. In my experience one of the best ways to do this is to utilize simple Visio workflows. You start with the requirements, which usually are more or less useless, and create Visio workflows. You then show the workflows to the people that know, or should know, the business process. These people can rarely tell you what is needed, but they can usually tell you if you get it wrong or tell you things that you should take into consideration. After a couple of cycles you usually have a pretty good process and you have also gained buy-in from one of the stakeholders.

If your architect is experienced he or she can usually produce a pretty good set of flows that can act as a starting point. Most more sophisticated consultancy organizations will be able to provide a standard set of flows as well. If your implementation team starts coding without first establishing, vetting and communicating the business process it is time to start getting alarmed. The project may still deliver successfully but it will most likely have a very painful UAT or even worse a very painful go live in front of it.

Tuesday, February 8, 2011

Pulse 2011

I will be in a panel at Pulse 2011, so if you want to hear me and some very distinguished co-panelists talk about IAM you can visit session 1925, Identity and Access Management, 2-3 pm on Mon Feb 28 in room 123, MGM Conference Center level 1.

My talk will be about the challenges that BCBS MA currently is facing in the IAM space, which is largely the same thing as what I am writing about in this blog, so if you like this blog you may enjoy meeting me at Pulse as well.

Thursday, January 20, 2011

[TIM] Contractor life cycle management in TIM

In IBM Tivoli Identity Manager the most common design for supporting the contractor lifecycle consists of a termination date on the user form and a "lifecycle rule" that basically disables/terminates any contractors who have termination dates in the past.

The update of the termination dates can either be handled by a helpdesk or done directly by the managers, as TIM has a very detailed and good user form access engine in the form of ACLs and views, which means that you can actually grant access down to the individual field.

The lifecycle rule (scheduled task in OIM speak) is also very easy to implement, as you can define an LDAP filter that gives you the users that are ready for termination.
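The filter itself is straightforward; a sketch below, where the attribute names are placeholders for whatever your person schema actually defines for employment type and contract end date (dates formatted as LDAP generalized time):

```python
from datetime import datetime, timezone

def expired_contractor_filter(now=None) -> str:
    """Build the kind of LDAP filter a lifecycle rule would use.
    The attribute names below are placeholders -- use whatever your
    TIM person schema actually defines for contractor type and end date."""
    now = now or datetime.now(timezone.utc)
    cutoff = now.strftime("%Y%m%d%H%M%SZ")  # LDAP generalized time
    return f"(&(employeetype=contractor)(contractend<={cutoff}))"

print(expired_contractor_filter())
# (&(employeetype=contractor)(contractend<=20110120120000Z)) or similar
```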

Not very hard at all. At least not as long as your requirements are as simple and straightforward as this.

Sunday, January 9, 2011

Contractor life cycle management

Most corporations have two general types of internal users.

The first category is employees or associates. The defining criterion of an employee is that they are present in the HR system. The HR system will tell us when a person joins the company, when a person's personal data changes or when the person leaves or gets fired. This is of course especially true if the HR system also feeds payroll. In most cases the IDM system will be directly connected to the HR system, which means that the IDM system will automatically be notified about any changes.

The second category is people who aren't directly hired by the corporation but rather are hired by another entity, or who are hired by the corporation but in a different employment form. This means that this category of people is usually not present in the HR system. The users are still working for the corporation as consultants, contractors, contingent labor or partner employees. These users still need access to IT systems, so we need to somehow include them in the IDM system so that we can control their access.

If you are lucky the corporation actually has a database or some kind of application that tracks these users. In some cases it may not be one application but anywhere from a couple to a dozen applications. In this case you can just connect to these systems and you can handle these users the same way you handle employees.

In many cases there simply isn't any system that keeps track of the contractors, so they need to be entered by someone directly into your IDM system. Not only do they need to be created, but the contractor information also needs to be maintained. Bearing this in mind, what kind of lifecycle events do we typically need to support? In my experience these are the most important:
  1. Creation
  2. Update of personal data
  3. Change of manager
  4. Termination
Creation is usually very similar to creation of an associate, but there may be a need for an approval step and you may have problems getting access to specific information about the user, such as a unique serial number. Creation is usually driven by an employee that will act as the contractor's manager.

Update of personal data is always really hard to actually get up and running. In most cases personal information about a contractor tends to get stale, which may or may not be an actual problem for the IDM system. For example, if a contractor gets married and changes their name, the name information in the IDM system should be updated, but in many cases this information is never entered into the IDM system.

Change of manager is a critical use case if you are using the manager as a source of truth about the contractor. It is common that a contractor initially gets hired to work for one manager but later switches projects and ends up working for another manager. If you don't have updated and correct information about which manager each contractor reports to, you will have a problem. Another problem that you will encounter is that the contractor's manager may have left the company without first reassigning their reports to another employee. It is often a good idea to include a check for reporting contractors in the termination process for an employee.

Terminations are of course just as critical as creations. We do need to remove the access for contractors that no longer work for the company, but there is nothing that motivates anyone to share the fact that a contractor has left. The contractor is gone so the contractor won't tell us. The manager is unlikely to tell us as the manager has nothing to gain.

The standard solution to this problem is to implement mandatory recertification of contractors. When a contractor joins, the contractor gets a finite life span. Typical life spans are somewhere between 90 days and a year. Unless the contractor gets extended, the contractor will be terminated once the end date is hit.

In most cases the extension is done by the contractor's manager, either directly in a self service interface or by calling the helpdesk. In order to ensure that the manager remembers to extend the user you usually implement a reminder process that sends out a number of reminder emails, typically starting 30 days before the termination is supposed to happen.
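A sketch of what the nightly reminder-and-terminate job boils down to; the 30/14/7/1 day schedule, the send_mail hook and the record layout are all assumptions for illustration:

```python
from datetime import date, timedelta

REMINDER_OFFSETS = [30, 14, 7, 1]  # days before termination; adjust to taste

def reminders_due(contract_end: date, today: date):
    """Which of the reminder points fall on today's run."""
    return [d for d in REMINDER_OFFSETS if today == contract_end - timedelta(days=d)]

def handle_contractor(contractor: dict, today: date, send_mail) -> None:
    """Nightly job: remind the manager (and the user), then terminate on the end date.
    send_mail and the dict keys are placeholders for the real IDM hooks."""
    end = contractor["contract_end"]
    if today >= end:
        contractor["status"] = "terminated"
        return
    for days_left in reminders_due(end, today):
        send_mail(to=[contractor["manager_email"], contractor["email"]],
                  subject=f"Contractor {contractor['name']} expires in {days_left} days")
```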

If the user no longer works for the manager that the IDM system has listed then you have a problem. This is especially true if the manager has left the company. In many cases it is therefore a good idea to send the warning email to the user as well as the manager. The user is usually motivated to talk to their current manager, who will then make sure that the IDM system is updated and that the user is extended.

Once you have these processes in place your contractor life cycles are covered and the audit and corporate information security departments should be happy. Or at least happy enough to leave you in peace for a while.