When does it break?
We have targets, and our tools have limits. Using our programmatic interface, here is a variation of the earlier post's object-creation run: a tool that doubles its output of random hosts and IPs on each pass. A few updates and deletes happen at random along the way, but this API client is all about aggressively building policy objects.
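A minimal sketch of that kind of generator, assuming a doubling batch size per round and made-up names like `generate_batch` and `host_<round>_<n>` (the real tool and object naming are not shown in the post):

```python
import random

def generate_batch(round_num, base=100, seed=None):
    """Build a batch of random host objects; output doubles each round."""
    rng = random.Random(seed)
    count = base * (2 ** round_num)  # round 0 -> base, round 1 -> 2x, ...
    batch = []
    for i in range(count):
        # Random dotted-quad IP; no uniqueness guarantee, just load.
        ip = ".".join(str(rng.randint(1, 254)) for _ in range(4))
        batch.append({"name": f"host_{round_num}_{i}", "ip-address": ip})
    return batch
```

Each batch would then be fed to the management API as object-creation calls, with a small fraction resubmitted as updates or deletes to mimic churn.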
vSec To Scale : Target 10k
Taking hardware and system load out of the equation, we can consistently push over 10 thousand vSec objects. That gives us a high-water mark: if a system falls below it, check system resources first, because the software itself is capable of accepting 10k objects. Just not all at once. Tuning factors such as slowing down polling intervals and reducing update times to the gateway can improve scale.
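The "just not all at once" point can be sketched as a throttled submission loop. This is an illustration, not the vendor's tooling: `push_fn`, `batch_size`, and `delay_s` are hypothetical names for whatever call and knobs your client exposes.

```python
import time

def push_with_throttle(objects, push_fn, batch_size=500, delay_s=2.0):
    """Submit objects in batches, pausing between batches so the
    management server's poll/update cycle can absorb each one."""
    for start in range(0, len(objects), batch_size):
        batch = objects[start:start + batch_size]
        push_fn(batch)
        if start + batch_size < len(objects):
            time.sleep(delay_s)  # give the gateway update cycle room
```

Raising `delay_s` or shrinking `batch_size` trades throughput for stability, which is exactly the lever described above.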
vSec Stressed : Target 25k
From unsynced objects to an entire dump of the vSec table, here are two attempts to break the 10k barrier.
It can be done, but you risk instability when you push changes faster than the management can poll and apply the new information. It is also unrealistic to expect the system to constantly load objects at this scale and speed; a real environment would be more fluid. Services come and go and get updated, but at a much slower rate.
SLAPI Operations
A little more like a cloud environment.
Longer-term testing for memory leaks and table misses will help us identify failure conditions and, potentially, write around them. For example, to improve performance and lower overall system delay, rather than scaling one large SLAPI group up, we are better off scaling across: distribute the updates and polling across multiple repositories.
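The scale-across idea amounts to sharding objects over several repositories so no single group absorbs all the updates and polling. A hedged sketch, with `distribute` and the repository count as assumptions of mine:

```python
import hashlib

def shard(name, num_repos):
    """Deterministically map an object name to a repository index."""
    digest = hashlib.md5(name.encode()).hexdigest()
    return int(digest, 16) % num_repos

def distribute(objects, num_repos=4):
    """Split objects across repositories; each repo then gets its own
    (smaller, faster) update and polling cycle."""
    repos = {i: [] for i in range(num_repos)}
    for obj in objects:
        repos[shard(obj["name"], num_repos)].append(obj)
    return repos
```

Because the mapping is deterministic, later updates and deletes for the same object land in the same repository, which keeps each group's table small and its poll cycle short.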