Sunday, February 20, 2011

FIM 2010: Build 4.0.3573.2 Performance Improvements, part 2

In the previous installment, FIM 2010: Build 4.0.3573.2 Performance Improvements, part 1, I documented the base configuration of my Hyper-V test machine. In this post I'll document the configuration of the virtual machines themselves and share the results of the initial disk tuning for the patched RTM release, build 4.0.3531.2.

Virtual Machine Configuration

  • Dedicated AD DC (2008 R2)
  • Dedicated SQL Server (2008 R2 10.50.1600) w/Dual Processors and 4GB RAM
    • Separate OS (4k allocation unit), DB (64k), Logs (64k), and TempDB (64k) drives within the VM, but all VHDs on a single 4-drive RAID set
    • All VHD files were dedicated (fully expanded), not dynamic
  • FIM Sync/Service Server (2008 R2) w/Dual Processors and 2GB RAM
  • FIMService and FIMSynchronization databases set to Simple recovery and pre-grown to 4GB (DB and Logs); see the T-SQL sketch after this list
  • No autogrowth observed throughout the load on either DB
  • All NICs (virtual and physical) have Large Send Offload disabled
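
The recovery model and pre-sizing steps above can be scripted before the load begins. Below is a minimal T-SQL sketch; the database and logical file names are assumptions based on the FIM defaults (FIMService, FIMSynchronizationService), so verify them in sys.database_files for your own environment first.

  -- Sketch only: set Simple recovery and pre-grow data and log files to 4 GB.
  -- Database and logical file names are assumed FIM defaults; verify them in
  -- sys.database_files before running.
  ALTER DATABASE [FIMService] SET RECOVERY SIMPLE;
  ALTER DATABASE [FIMService] MODIFY FILE (NAME = N'FIMService',     SIZE = 4096MB);
  ALTER DATABASE [FIMService] MODIFY FILE (NAME = N'FIMService_log', SIZE = 4096MB);

  ALTER DATABASE [FIMSynchronizationService] SET RECOVERY SIMPLE;
  ALTER DATABASE [FIMSynchronizationService] MODIFY FILE (NAME = N'FIMSynchronizationService',     SIZE = 4096MB);
  ALTER DATABASE [FIMSynchronizationService] MODIFY FILE (NAME = N'FIMSynchronizationService_log', SIZE = 4096MB);

Checking the file sizes in sys.database_files again after the export is an easy way to confirm that no autogrowth occurred during the load.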

Initial Load Scenario

In my initial load scenario testing, the FIM Service is loaded bare, with no additional sets, policies, or workflows added, just as you'd expect prior to migrating any policy over. In my own testing, I've seen 44% faster load times simply by importing all of your objects into a pristine system rather than loading your policy first.

So, we have all of the FIM Services running on a single VM and all of the databases hosted on a single SQL Server, both joined to a domain hosted by a dedicated AD Domain Controller. Next, I will illustrate the disk configuration.

In the first example we have a poor disk I/O configuration, with no caching and RAID 5; this leads to high disk queue lengths and high disk latency, making the disk subsystem a clear bottleneck. Disk Configuration 2 is somewhat tuned: we've added disk caching, moved the system partition to an SSD, and moved to a more efficient RAID 10. From the results below we can see that the disk is no longer a bottleneck.

  Metric                                                               | 4.0.3531.2 Disk Configuration 1 | 4.0.3531.2 Disk Configuration 2
  ---------------------------------------------------------------------|---------------------------------|--------------------------------
  Records (8 attributes/record)                                        | 11,251                          | 11,251
  FIM MA Export Only Elapsed Time (mins)                               | 585                             | 214
  FIM MA Objects Exported/sec                                          | 0.319                           | 0.684
  Processor Time - miiserver                                           | 0.40%                           | 0.72%
  Processor Time - fimservice                                          | 14.35%                          | 0.63%
  Logical Disk (SQL) - Average Disk Queue Length                       | 2.256                           | 0.001
  Logical Disk (SQL) - Average Disk sec/Transfer (ms)                  | 108                             | 3
  Objects Exported/sec Improvement Factor over Previous Configuration  | n/a                             | 2.14
  Elapsed Time Improvement over Previous Configuration (mins)          | n/a                             | 371

Baseline Results

The baseline results clearly show that the disk subsystem can have a severe effect on FIM performance, especially in the initial load scenario. With some simple disk tuning we were able to reduce the run time by 371 minutes and achieve a 2.14x improvement in the rate at which the same records were exported. An average disk queue length below 1 should not indicate a bottleneck, and the drop in average latency from 108 ms to 3 ms backs this up; in general you want to keep disk latency under 10 ms, and certainly no more than 20 ms. I would like to point out that while the SQL Server disk is broken down into separate volumes, all of the VHDs from all of the VMs sit on the same RAID volume in both configurations, which would be typical of SAN deployments that split LUNs across all spindles.
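
If you want to cross-check the Perfmon latency numbers from inside SQL Server, the virtual file stats DMV gives rough per-file read and write latencies. The sketch below is only an approximation, since the DMV accumulates from the last SQL Server restart rather than sampling a window like Perfmon, and the database names in the filter are assumed FIM defaults.

  -- Rough per-file I/O latency (ms) for the FIM databases and tempdb,
  -- cumulative since the last SQL Server restart. Database names are
  -- assumed defaults; adjust for your environment.
  SELECT  DB_NAME(vfs.database_id)  AS database_name,
          mf.physical_name,
          vfs.num_of_reads,
          vfs.num_of_writes,
          CASE WHEN vfs.num_of_reads  = 0 THEN 0
               ELSE vfs.io_stall_read_ms  / vfs.num_of_reads  END AS avg_read_ms,
          CASE WHEN vfs.num_of_writes = 0 THEN 0
               ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_ms
  FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
  JOIN    sys.master_files AS mf
          ON  mf.database_id = vfs.database_id
          AND mf.file_id     = vfs.file_id
  WHERE   DB_NAME(vfs.database_id) IN (N'FIMService', N'FIMSynchronizationService', N'tempdb')
  ORDER BY avg_write_ms DESC;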

In your deployments you should be working with at least workgroup-class hardware: real servers with performance-class SAS/SCSI drives in the 10k-15k RPM range behind caching RAID controllers. With that, you should be able to achieve similar results in your initial baseline. In fact, the improved numbers here match very closely what I've obtained running on IBM production-class hardware with fibre-attached SAN (NetApp). I have not personally been able to break 0.7 objects exported/sec for an initial load scenario on any configuration running 4.0.3531.2 (RTM with Update 1). I believe these results indicate that the FIM Service now becomes the clear bottleneck, as no other counters point to a processor, memory, or network bottleneck.

In the next installment I'll look at how installing the new 4.0.3573.2 hotfix improves times on the same Disk Configuration 2.
