
CWD-4179: Synchronization of large groups in Crowd times out and starts to remove users from the cache

    • Type: Bug
    • Resolution: Cannot Reproduce
    • Priority: Medium
    • Fix Version/s: None
    • Affects Version/s: 2.4.2, 2.8
    • Component/s: Directory - LDAP
    • Environment: Active Directory

      This issue seems to happen when synchronization of large groups times out; Crowd then appears to remove the users from the cache as a result.

      The issue can be resolved by restarting Crowd, after which a full synchronization must be performed before users can log in again.


            Mark Silhavy added a comment -

            As a result of the extremely long synchronization times, the resource utilization of the machine will also shoot up.

            We have multiple applications that synchronize their local caches with Crowd on 70-90 minute intervals.

            After upgrading to 2.8.4, the CPU utilization shot up to 100%. We then realized that each crowd-to-app sync was taking over 1 hour to complete (previously took <5 min each).

            Since the operations began to overlap and compete with one another, the load average shot up to 12+ on a 2 CPU machine.

            After changing all of the directories back to the old default of 'aggregate group memberships' and waiting a few hours for all of the in-progress sync processes to complete, the load average went back to normal (below 1.0).

            This is definitely a problem with the new default behavior of 'non-aggregating membership'. My advice: if you have a lot of directories or tons of people in your group memberships, stay away from the non-aggregating feature until it is better optimized.


            William Ing added a comment -

            If you are looking for confirmations, read CWDSUP-11774.


            William Ing added a comment -

            Is anyone working on this, or is the next step to beg Atlassian to accept patches from customers that are hit by product bugs?


            Ricksoft Co., Ltd. added a comment -

            We have two groups that have 20,000+ users.
            Our full sync between JIRA 6.3.12 and Crowd 2.8.3 takes more than 60 minutes.
            This problem occurs in Crowd version 2.8.
            This problem does not occur in Crowd version 2.7 and older.

            Workaround

            On the Directories tab, check "Aggregate group memberships across directories" to use the 'aggregating membership' scheme.
            Please see Effective memberships with multiple directories - Crowd 2.8 - Atlassian Documentation.

            We think the problem is in the filter processing.

            2015-06-29 23:31:04,122 http-bio-9105-exec-14 INFO [rest.service.controller.MembershipsController] Timed call for membership of jira-users for test/test took 2065984ms. Users: 15676, groups: 0

            atlassian-crowd/components/crowd-rest/crowd-rest-plugin/src/main/java/com/atlassian/crowd/plugin/rest/service/controller/MembershipsController.java
            95    private Iterable<Membership> getMemberships(final Application application, Iterable<String> groupNames)
            ...
            106                ImmutableMembership membership = new ImmutableMembership(groupName,
            107                        applicationService.searchDirectGroupRelationships(application, userNames),
            108                        applicationService.searchDirectGroupRelationships(application, childGroupNames));
            109
            110                long ms = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
            111
            112                if (ms >= TIMED_LOG_THRESHOLD_MILLIS)
            113                {
            114                    log.info(LOG_MESSAGE, groupName, application.getName(), ms, membership.getUserNames().size(), membership.getChildGroupNames().size()); <- this
            115                }
            116                else
            117                {
            118                    log.debug(LOG_MESSAGE, groupName, application.getName(), ms, membership.getUserNames().size(), membership.getChildGroupNames().size());
            119                }
            
            atlassian-crowd/components/crowd-core/src/main/java/com/atlassian/crowd/manager/application/ApplicationServiceGeneric.java
            2240    @Override
            2241    public <T> List<T> searchDirectGroupRelationships(final Application application, final MembershipQuery<T> query)
            ....
            2252        if (application.isMembershipAggregationEnabled()) <- aggregating membership process
            2253        {
            2254            for (final Directory directory : getActiveDirectories(application))
            2255            {
            2256                results.addAll(doDirectDirectoryMembershipQuery(query, directory.getId()));
            2257            }
            2258        }
            2259        else <- non-aggregating membership (since 2.8)
            2260        {
            2261            if (query.isFindChildren())
            2262            {
            2263                // find all non-shadowed children
            2264                for (final Directory directory : getActiveDirectories(application))
            2265                {
            ...
            2276                    else if (query.getReturnType() == String.class && query.getEntityToReturn().equals(EntityDescriptor.user()))
            2277                    {
            2278                        Iterable<DirectoryEntity> users = usersInDirectory(directory.getId(), (Iterable<String>) doDirectDirectoryMembershipQuery(query, directory.getId()));
            2279                        results.addAll((Iterable<T>) Iterables.transform(Iterables.filter(users, isCanonicalEntity(application)), NAME_FUNCTION));
            2280                    }
            2281                    else if (query.getReturnType() == String.class && query.getEntityToReturn().equals(EntityDescriptor.group()))
            2282                    {
            2283                        Iterable<DirectoryEntity> groups = groupsInDirectory(directory.getId(), (Iterable<String>) doDirectDirectoryMembershipQuery(query, directory.getId()));
            2284                        results.addAll((Iterable<T>) Iterables.transform(Iterables.filter(groups, isCanonicalEntity(application)), NAME_FUNCTION));
            2285                    }
            ...
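
            To make the suspected cost difference concrete, below is a minimal, self-contained Java sketch. It is not Crowd code: fetchDirectMembers and isCanonical are hypothetical stand-ins for the bulk membership query and the per-entity canonicality filter shown in the excerpt above. Under that assumption, the aggregating scheme issues one bulk query per directory, while the non-aggregating scheme adds roughly one extra lookup per member, so a group with ~15,000 users turns a handful of queries into tens of thousands of lookups.

            import java.util.*;

            public class MembershipCostSketch {
                static int lookups = 0;

                // One bulk query per directory returns the group's direct user members.
                static List<String> fetchDirectMembers(String directory, String group) {
                    lookups++;
                    List<String> users = new ArrayList<>();
                    for (int i = 0; i < 15_000; i++) {
                        users.add(directory + "-user" + i);
                    }
                    return users;
                }

                // Hypothetical per-user check: is this directory the canonical (non-shadowed)
                // source for the user? Modelled here as one extra lookup per member.
                static boolean isCanonical(String user, String directory) {
                    lookups++;
                    return true; // simplified; a real check would consult the other directories
                }

                public static void main(String[] args) {
                    List<String> directories = List.of("dirA", "dirB");

                    // Aggregating scheme: merge the direct members from every directory.
                    lookups = 0;
                    Set<String> aggregated = new LinkedHashSet<>();
                    for (String dir : directories) {
                        aggregated.addAll(fetchDirectMembers(dir, "jira-users"));
                    }
                    System.out.println("aggregating lookups:     " + lookups);   // 2

                    // Non-aggregating scheme: same bulk queries, plus one canonicality
                    // check for each returned member.
                    lookups = 0;
                    Set<String> nonAggregated = new LinkedHashSet<>();
                    for (String dir : directories) {
                        for (String user : fetchDirectMembers(dir, "jira-users")) {
                            if (isCanonical(user, dir)) {
                                nonAggregated.add(user);
                            }
                        }
                    }
                    System.out.println("non-aggregating lookups: " + lookups);   // 30,002
                }
            }

            If each of those extra lookups costs on the order of 100-150 ms against the database or LDAP, the 2,066-second figure in the log line above is in the range this simple model would predict for a 15,676-user group.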
            


            Foo Sim (Inactive) added a comment -

            From a customer:

            1. Open the application that serves JIRA or Confluence.
            2. Set the option "Aggregate group memberships".
            3. Disable incremental synchronization of the directory in JIRA/Confluence.
            4. Force a full resync in JIRA/Confluence.
              It looks like the aggregating scheme works faster when there are many groups/directories.
              The problem we had before (many clients complained about losing access to some projects) was solved by re-enabling the "use nested groups" option in the directory, because after upgrading Crowd this option had been disabled automatically for an unknown reason.
              So for us, aggregate group memberships + re-enabled nested groups = synchronization is all fine. The sync now works much faster than before: 20-30 seconds versus 1-2 hours.


            Foo Sim (Inactive) added a comment -

            2015-06-29 23:31:04,122 http-bio-9105-exec-14 INFO [rest.service.controller.MembershipsController] Timed call for membership of jira-users for test/test took 2065984ms. Users: 15676, groups: 0
            

            that's about 2,066 seconds (roughly 34 minutes) for one group containing 15,676 users


            James E. Hunt [ASRC Federal] added a comment -

            I too can confirm that this is happening. On our largest instance we have five groups that have 25,000+ users, and we routinely have users complain that they have lost access to group-based information. Our full sync between JIRA 6.3.8 and Crowd 2.8 takes around 45 to 60 minutes.


            intersol_old added a comment -

            This problem occurred again this morning and caused serious downtime for several services. You now have a new set of log files inside https://support.atlassian.com/servicedesk/customer/portal/16/CWDSUP-10063

            I do think that this excerpt from the logs (see the support ticket) says it all:

            cat atlassian-crowd.log |grep '2015-02-10 06:5' |grep 'scheduler_Worker-3'
            2015-02-10 06:53:06,890 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteDirectory] INCREMENTAL synchronisation for directory [ 7274497 ] starting
            2015-02-10 06:53:06,890 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteDirectory] Attempting INCREMENTAL synchronisation for directory [ 7274497 ]
            2015-02-10 06:53:07,985 scheduler_Worker-3 INFO [crowd.directory.ldap.SpringLdapTemplateWrapper] Timed call for search with handler on dc=citrite,dc=net took 1018ms
            2015-02-10 06:53:12,612 scheduler_Worker-3 INFO [directory.ldap.cache.UsnChangedCacheRefresher] scanned and compared [ 0 ] users to delete, [ 0 ] users to add, [ 192 ] users to update in DB cache in [ 5712ms ]
            2015-02-10 06:53:12,618 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteChangeOperations] deleting [ 0 ] users
            2015-02-10 06:53:12,620 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteChangeOperations] deleted [ 0 ] users in [ 2ms ]
            2015-02-10 06:53:13,289 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteChangeOperations] scanning [ 192 ] users to add or update
            2015-02-10 06:53:13,397 scheduler_Worker-3 INFO [atlassian.crowd.directory.DirectoryCacheImplUsingChangeOperations] scanned and compared [ 192 ] users for update in DB cache in [ 775ms ]
            2015-02-10 06:53:13,397 scheduler_Worker-3 INFO [atlassian.crowd.directory.DirectoryCacheImplUsingChangeOperations] synchronised [ 192 ] users in [ 775ms ]
            2015-02-10 06:53:14,098 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteChangeOperations] scanning [ 0 ] users to add or update
            2015-02-10 06:53:14,211 scheduler_Worker-3 INFO [atlassian.crowd.directory.DirectoryCacheImplUsingChangeOperations] scanned and compared [ 0 ] users for update in DB cache in [ 809ms ]
            2015-02-10 06:53:14,211 scheduler_Worker-3 INFO [atlassian.crowd.directory.DirectoryCacheImplUsingChangeOperations] synchronised [ 0 ] users in [ 809ms ]
            2015-02-10 06:53:15,327 scheduler_Worker-3 INFO [crowd.directory.ldap.SpringLdapTemplateWrapper] Timed call for search with handler on dc=citrite,dc=net took 1115ms
            2015-02-10 06:53:15,988 scheduler_Worker-3 INFO [directory.ldap.cache.UsnChangedCacheRefresher] found [ 3 ] changed remote groups in [ 1777ms ]
            2015-02-10 06:53:15,988 scheduler_Worker-3 INFO [atlassian.crowd.directory.DirectoryCacheImplUsingChangeOperations] scanning [ 3 ] groups to add or update
            2015-02-10 06:53:15,994 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteChangeOperations] scanned and compared [ 3 ] groups for update in DB cache in [ 6ms ]
            2015-02-10 06:53:15,995 scheduler_Worker-3 INFO [atlassian.crowd.directory.DirectoryCacheImplUsingChangeOperations] synchronized [ 3 ] groups in [ 7ms ]
            2015-02-10 06:53:18,220 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteChangeOperations] Could not remove user [ananthak] from group [confluence-users]. User was not found.
            2015-02-10 06:53:20,317 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteChangeOperations] Could not remove user [shengl] from group [confluence-users]. User was not found.
            2015-02-10 06:53:43,130 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteChangeOperations] removed [ 1686 ] user members from [ confluence-users ] in [ 26160ms ]
            2015-02-10 06:53:45,611 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteChangeOperations] Could not remove user [ananthak] from group [jira-users]. User was not found.
            2015-02-10 06:54:14,747 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteChangeOperations] removed [ 1854 ] user members from [ jira-users ] in [ 30661ms ]
            2015-02-10 06:54:16,868 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteChangeOperations] Could not remove user [ananthak] from group [jira-internal]. User was not found.
            2015-02-10 06:54:43,889 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteChangeOperations] removed [ 1791 ] user members from [ jira-internal ] in [ 28317ms ]
            2015-02-10 06:54:43,898 scheduler_Worker-3 INFO [directory.ldap.cache.AbstractCacheRefresher] migrated memberships for group - (3/3 - 100.0%) 87903ms elapsed
            2015-02-10 06:54:43,899 scheduler_Worker-3 INFO [directory.ldap.cache.UsnChangedCacheRefresher] scanned and compared [ 0 ] groups for delete in DB cache in [ 0ms ]
            2015-02-10 06:54:43,899 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteChangeOperations] removing [ 0 ] groups
            2015-02-10 06:54:43,900 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteChangeOperations] removed [ 0 ] groups in [ 1ms ]
            2015-02-10 06:54:43,900 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteDirectory] INCREMENTAL synchronisation complete for directory [ 7274497 ] in [ 97010ms ]
            

            The same thing happened with confluence-users, jira-internal, and others, and obviously nothing like this happened in LDAP.

            What other kind of confirmations do you need? Should I book a plane ticket and come to Sydney for a full hands-on experience?


            intersol_old added a comment -

            I am digging through the logs now to find exactly when it happened. It only happened once with the jira-users group, so there should be something visible in the logs, with lots of users being removed from this group.


            joe added a comment -

            The support request suggests that:

            We notice the following occurring due to operation timed out during sync

            but the sample log shows only:

            INFO [crowd.directory.ldap.SpringLdapTemplateWrapper] Timed call for search with handler on XXXXX took YYYYYms
            

            which is harmless diagnostic logging of long-running LDAP queries (CWD-4082), rather than an indication of failure.

            I see there may be a related issue to investigate with inconsistencies during synchronisation, but I can't see any evidence here that failed LDAP queries or timeouts are being treated as successful.


              Assignee: Pawel Niegowski (Inactive)
              Reporter: Der Lun
              Affected customers: 23
              Watchers: 39
