Summary

      Full synchronisation between Crowd and any of the Atlassian applications fails when multiple users share the same external ID.

      Steps to Replicate

      It is unclear how Crowd ends up with duplicate external IDs in its database without manual intervention, but to replicate this problem:

      1. Select one of the Atlassian applications and connect it to Crowd as the User Management platform. For this scenario, Confluence was chosen.
      2. With that completed, add two users to Crowd, specifically to the directory where Confluence is reading users from. For this scenario, we added User1 and User2.
      3. Open the Crowd database and run the following query to give both users the same external ID:
        update cwd_user set external_id = (select external_id from cwd_user where lower_user_name = 'user1')
        where lower_user_name = 'user2';
        
      4. With that completed, run a synchronisation through the Confluence User Directories screen; it will fail.
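      The replication steps above can be sketched with a throwaway sqlite database. The table is a heavily simplified stand-in for Crowd's cwd_user (the ext-* values are made up), and the dict mirrors the map keyed by external ID that Crowd's sync builds internally, which fails on duplicate keys just like Guava's ImmutableMap in the stack trace:

```python
import sqlite3

# Throwaway stand-in for Crowd's cwd_user table (schema heavily simplified).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cwd_user (id INTEGER, lower_user_name TEXT, "
             "external_id TEXT, directory_id INTEGER)")
conn.executemany("INSERT INTO cwd_user VALUES (?, ?, ?, ?)",
                 [(1, "user1", "ext-aaa", 950274),
                  (2, "user2", "ext-bbb", 950274)])

# Step 3: give both users the same external ID, as in the UPDATE above.
conn.execute("UPDATE cwd_user SET external_id = "
             "(SELECT external_id FROM cwd_user WHERE lower_user_name = 'user1') "
             "WHERE lower_user_name = 'user2'")

# Step 4: the sync maps users by external_id; a duplicate key is fatal,
# analogous to ImmutableMap.Builder.build() rejecting duplicate keys.
sync_error = None
try:
    users_by_external_id = {}
    for ext_id, name in conn.execute(
            "SELECT external_id, lower_user_name FROM cwd_user ORDER BY id"):
        if ext_id in users_by_external_id:
            raise ValueError("Multiple entries with same key: " + ext_id)
        users_by_external_id[ext_id] = name
except ValueError as exc:
    sync_error = str(exc)

print(sync_error)  # Multiple entries with same key: ext-aaa
```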

      Log Evidence

      The synchronisation fails with the following error: Multiple entries with same key:

      2018-06-11 15:08:35,851 ERROR [Caesium-1-4] [atlassian.crowd.directory.DbCachingDirectoryPoller] pollChanges Error occurred while refreshing the cache for directory [ 950274 ].
      java.lang.IllegalArgumentException: Multiple entries with same key: 458753:3bfe6523-ba06-471a-bb77-ad2a623dc7eb=com.atlassian.crowd.model.user.InternalUser@4577c9c[id=1081348,name=user,createdDate=2018-06-11 15:07:22.002,updatedDate=2018-06-11 15:07:22.002,active=true,emailAddress=user1@userinc.com,firstName=User,lastName=Uno,displayName=User Uno,credential=com.atlassian.crowd.embedded.api.PasswordCredential@d11ef10[credential=********,encryptedCredential=true],lowerName=user1,lowerEmailAddress=user1@userinc.com,lowerFirstName=user,lowerLastName=uno,lowerDisplayName=user uno,directoryId=950274,externalId=458753:3bfe6523-ba06-471a-bb77-ad2a623dc7eb] and 458753:3bfe6523-ba06-471a-bb77-ad2a623dc7eb=com.atlassian.crowd.model.user.InternalUser@6a222bb[id=1081347,name=dude,createdDate=2018-06-11 15:07:22.002,updatedDate=2018-06-11 15:07:22.002,active=true,emailAddress=user2@userinc.com,firstName=User,lastName=Dos,displayName=User Dos Uno,credential=com.atlassian.crowd.embedded.api.PasswordCredential@39b45c78[credential=********,encryptedCredential=true],lowerName=user2,lowerEmailAddress=user2@userinc.com,lowerFirstName=user,lowerLastName=dos,lowerDisplayName=user dos,directoryId=950274,externalId=458753:3bfe6523-ba06-471a-bb77-ad2a623dc7eb]
      	at com.google.common.collect.ImmutableMap.checkNoConflict(ImmutableMap.java:150)
      	at com.google.common.collect.RegularImmutableMap.checkNoConflictInBucket(RegularImmutableMap.java:104)
      	at com.google.common.collect.RegularImmutableMap.<init>(RegularImmutableMap.java:70)
      	at com.google.common.collect.ImmutableMap$Builder.build(ImmutableMap.java:254)
      	at com.atlassian.crowd.directory.DbCachingRemoteChangeOperations.mapUsersByExternalId(DbCachingRemoteChangeOperations.java:1194)
      	at com.atlassian.crowd.directory.DbCachingRemoteChangeOperations.getUsersToAddAndUpdate(DbCachingRemoteChangeOperations.java:1106)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:498)
      	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:302)
      	at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
      	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
      	at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:99)
      	at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:281)
      	at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
      	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
      	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:208)
      	at com.atlassian.crowd.directory.$Proxy2496.getUsersToAddAndUpdate(Unknown Source)
      	at com.atlassian.crowd.directory.DirectoryCacheImplUsingChangeOperations.addOrUpdateCachedUsers(DirectoryCacheImplUsingChangeOperations.java:55)
      	at com.atlassian.crowd.directory.ldap.cache.RemoteDirectoryCacheRefresher.synchroniseAllUsers(RemoteDirectoryCacheRefresher.java:88)
      	at com.atlassian.crowd.directory.ldap.cache.AbstractCacheRefresher.synchroniseAll(AbstractCacheRefresher.java:56)
      	at com.atlassian.crowd.directory.ldap.cache.EventTokenChangedCacheRefresher.synchroniseAll(EventTokenChangedCacheRefresher.java:69)
      	at com.atlassian.crowd.directory.DbCachingRemoteDirectory.synchroniseCache(DbCachingRemoteDirectory.java:1186)
      	at com.atlassian.crowd.manager.directory.DirectorySynchroniserImpl.synchronise(DirectorySynchroniserImpl.java:74)
      	at com.atlassian.crowd.directory.DbCachingDirectoryPoller.pollChanges(DbCachingDirectoryPoller.java:50)
      	at com.atlassian.crowd.manager.directory.monitor.poller.DirectoryPollerJobRunner.runJob(DirectoryPollerJobRunner.java:96)
      

      Steps to Diagnose and Workaround

      To confirm that your instance is affected by this issue, you need to see the above stack trace in your logs, and the query below should return results. Don't forget to replace the ID of the affected directory:

      SELECT id, active, user_name, external_id, directory_id FROM cwd_user WHERE external_id IN 
      (SELECT external_id FROM cwd_user WHERE directory_id = 'YOUR_DIRECTORY_ID_HERE' GROUP BY external_id HAVING COUNT(external_id) > 1) order by 4;
      

      If you don't have the ID of the affected directory, run the query below to find it:

      select * from cwd_directory;
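      As an illustration (not something to run against production), here is the diagnostic query exercised against a throwaway sqlite database, with directory id 950274 standing in for YOUR_DIRECTORY_ID_HERE and made-up ext-* values:

```python
import sqlite3

# Minimal stand-in for cwd_user with one pair of conflicting users.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cwd_user (id INTEGER, active TEXT, user_name TEXT, "
             "external_id TEXT, directory_id INTEGER)")
conn.executemany("INSERT INTO cwd_user VALUES (?, ?, ?, ?, ?)", [
    (1, "T", "user1", "ext-aaa", 950274),
    (2, "T", "user2", "ext-aaa", 950274),   # duplicate external_id
    (3, "T", "user3", "ext-ccc", 950274),
])

# The diagnostic query from above: rows whose external_id appears more
# than once within the given directory.
rows = conn.execute(
    "SELECT id, active, user_name, external_id, directory_id FROM cwd_user "
    "WHERE external_id IN (SELECT external_id FROM cwd_user "
    "WHERE directory_id = ? GROUP BY external_id "
    "HAVING COUNT(external_id) > 1) ORDER BY 4",
    (950274,),
).fetchall()
print(rows)  # the two conflicting rows, user1 and user2
```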
      

      Once you have the results handy, you have to:

      1. Access Crowd and remove the affected users from the directory that Confluence (or the other application) is syncing with.
        1. If you are using an LDAP connector (as opposed to a Delegated Directory or a Crowd Internal Directory), you need to run the clean-up in your LDAP server too, or filter out the duplicated users through an LDAP query.
      2. If those users are not supposed to have access anymore, proceed to the next step. Otherwise, re-add them to Crowd and grant them access to the directory Confluence (or the other application) is syncing with. This will generate new external IDs for them.
      3. Synchronise the directories through the Confluence (or other application) user directory screen.

      Suggestion

      Perhaps this bug could be fixed by adding a constraint to the cwd_user table enforcing one external_id per directory_id; a unique key constraint already enforces one lower_user_name per directory. This feature request is logged here: https://jira.atlassian.com/browse/CWD-3882
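      To illustrate the suggestion, a unique index over (directory_id, external_id) would reject the duplicate at insert time. This is a sqlite sketch with a made-up index name, not Atlassian's actual schema change:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cwd_user (id INTEGER, lower_user_name TEXT, "
             "external_id TEXT, directory_id INTEGER)")
# Hypothetical constraint: one external_id per directory.
conn.execute("CREATE UNIQUE INDEX uk_user_extid_dir "
             "ON cwd_user (directory_id, external_id)")

conn.execute("INSERT INTO cwd_user VALUES (1, 'user1', 'ext-aaa', 950274)")
try:
    # Second user with the same external_id in the same directory.
    conn.execute("INSERT INTO cwd_user VALUES (2, 'user2', 'ext-aaa', 950274)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

print(duplicate_rejected)  # True: the duplicate external_id is rejected
```

      In practice such a constraint would need to tolerate NULL external_ids, which unique indexes in most databases already do.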

      Workaround

      1. Examine the duplicate records returned by the diagnostic query above and determine which user_name entry you want to keep for the user.
      2. For the other record, set the external_id to a placeholder text value such as 'invalid'. For example:
        update cwd_user set external_id = 'invalid' where id = <the record ID of the entry with the user_name that shouldn't be used>;
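      The two workaround steps can be sketched together against a throwaway sqlite database; record id 2 plays the role of the entry whose user_name shouldn't be used:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cwd_user (id INTEGER, user_name TEXT, "
             "external_id TEXT, directory_id INTEGER)")
conn.executemany("INSERT INTO cwd_user VALUES (?, ?, ?, ?)", [
    (1, "user1", "ext-aaa", 950274),
    (2, "user2", "ext-aaa", 950274),   # the record to neutralise
])

# Step 2: overwrite the unwanted record's external_id with a sentinel.
conn.execute("UPDATE cwd_user SET external_id = 'invalid' WHERE id = 2")

# The duplicate check from the diagnostic query now comes back empty.
remaining = conn.execute(
    "SELECT external_id FROM cwd_user GROUP BY external_id "
    "HAVING COUNT(external_id) > 1").fetchall()
print(remaining)  # []: no duplicate external_ids remain
```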
        

            [CWD-5182] Sync Failures due to duplicated External IDs

            Fixed in:

            • Crowd 4.4.0
            • Jira 9.0
            • Confluence 8.0
            • Bitbucket 8.0
            • Bamboo 8.1

            Pawel Gruszczynski (Inactive) added a comment.

            We have also run into this issue, and the proposed workaround of updating the "external_id" value for the conflicting user resolved the issue right away.

            We were provided with the following query to help identify anyone who is a conflicting user:

            SELECT id, user_name, external_id, directory_id FROM cwd_user WHERE external_id IN 
            (SELECT external_id FROM cwd_user WHERE directory_id = $dirId GROUP BY external_id HAVING COUNT(external_id) > 1); 

            Kristopher Drew Perez added a comment.

            Having done multiple migrations and encountered this issue ~12 times now, we've learned how to replicate it. It typically happens when there are two colliding users with the same email and username in two separate directories, and one of the users has some sort of change made to it.

            This can also happen when a user has a name change (e.g. due to marriage) within the same directory and the second user comes before the first, but their email is left the same.

            I believe the sync has to be interrupted at some point, after it has gotten past one of these two checks in the second scenario, for it to end up with the duplicates and start having its fit (e.g. when a unique foreign key constraint gets hit on a group due to another bug where a group may be added with a trailing space, and Crowd/Jira complain about duplicate key entries for the group).

            For larger instances, this is definitely not a 'small' issue: we've encountered it at least a dozen times, enough that we now have the fix/downtime procedure documented, since there is not really a good workaround for it.

            Xaviar Steavenson added a comment.

            Karen added a comment -

            We are experiencing this same issue with Confluence 6.6. We are not using Crowd; we are using the out-of-the-box (OOB) Confluence LDAP sync. Is there a suggested workaround for our use case? For now, we will delete the duplicate users manually, but we want to prevent this going forward.


              Assignee: Unassigned
              Reporter: mhorlle (Marcelo Horlle)
              Affected customers: 15
              Watchers: 46