
DbCachingRemoteDirectory / RemoteDirectoryCacheRefresher noisy logging


      I see logs like this in PUG every hour:

      @4000000054666294350ce114 2014-11-15 07:14:02,887 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DbCachingRemoteDirectory] synchroniseCache INCREMENTAL synchronisation for directory [ 163841 ] starting
      @4000000054666294350ceccc 2014-11-15 07:14:02,888 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DbCachingRemoteDirectory] synchroniseCache Attempting INCREMENTAL synchronisation for directory [ 163841 ]
      @4000000054666294350cf0b4 2014-11-15 07:14:02,889 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DbCachingRemoteDirectory] synchroniseCache Incremental synchronisation for directory [ 163841 ] was not completed, falling back to a full synchronisation
      @400000005466629435475dbc 2014-11-15 07:14:02,889 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DbCachingRemoteDirectory] synchroniseCache INCREMENTAL synchronisation for directory [ 163841 ] was not successful, attempting FULL
      @4000000054666296076666cc 2014-11-15 07:14:04,123 INFO [scheduler_Worker-4] [directory.ldap.cache.RemoteDirectoryCacheRefresher] findAllRemoteUsers found [ 1388 ] remote users in [ 1233 ms ]
      @4000000054666296102c5f8c 2014-11-15 07:14:04,270 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DbCachingRemoteChangeOperations] deleteCachedUsersNotIn scanned and compared [ 1388 ] users for delete in DB cache in [ 145ms ]
      @4000000054666296103091c4 2014-11-15 07:14:04,271 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DbCachingRemoteChangeOperations] deleteCachedUsersNotIn scanned for deleted users in [ 146ms ]
      @400000005466629612a9abfc 2014-11-15 07:14:04,312 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DbCachingRemoteChangeOperations] getUsersToAddAndUpdate scanning [ 1388 ] users to add or update
      @400000005466629613ddcae4 2014-11-15 07:14:04,332 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DirectoryCacheImplUsingChangeOperations] addOrUpdateCachedUsers scanned and compared [ 1388 ] users for update in DB cache in [ 50ms ]
      @400000005466629613e7f08c 2014-11-15 07:14:04,333 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DirectoryCacheImplUsingChangeOperations] addOrUpdateCachedUsers synchronised [ 1388 ] users in [ 51ms ]
      @40000000546662961938bbd4 2014-11-15 07:14:04,422 INFO [scheduler_Worker-4] [directory.ldap.cache.RemoteDirectoryCacheRefresher] findAllRemoteGroups found [ 27 ] remote groups in [ 89 ms ]
      @40000000546662961938c78c 2014-11-15 07:14:04,422 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DirectoryCacheImplUsingChangeOperations] addOrUpdateCachedGroups scanning [ 27 ] groups to add or update
      @400000005466629619b4602c 2014-11-15 07:14:04,430 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DbCachingRemoteChangeOperations] findGroupsToUpdate scanned and compared [ 27 ] groups for update in DB cache in [ 7ms ]
      @400000005466629619c25a4c 2014-11-15 07:14:04,431 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DirectoryCacheImplUsingChangeOperations] addOrUpdateCachedGroups synchronized [ 27 ] groups in [ 8ms ]
      @400000005466629619fb19a4 2014-11-15 07:14:04,435 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DbCachingRemoteChangeOperations] deleteCachedGroupsNotIn scanned and compared [ 27 ] groups for delete in DB cache in [ 3ms ]
      @400000005466629a10470bac 2014-11-15 07:14:08,271 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DbCachingRemoteDirectory] synchroniseCache FULL synchronisation complete for directory [ 163841 ] in [ 5384ms ]
      

      Looking at these messages, I'm not convinced there's anything particularly INFO-worthy in there. Sure - it found some users in the remote directory, it found some groups in the remote directory, it tried to see if any local users/groups needed to be added or removed. It didn't find any.

      I would propose the following:

      • A single INFO level log on the hour to indicate that a sync is taking place (basically, the first log line in the output above is fine).
      • Everything else should be DEBUG level.
      • Any user deletion / creation should be INFO level.


            Atlassian Update - 27 April 2025

            Hello,

            Thank you for submitting this suggestion. We appreciate you taking the time to share your ideas for improving our products, as many features and functions come from valued customers such as yourself.

            Atlassian is committed to enhancing the security and compliance of our Data Center products, with an emphasis on sustainable scalability and improving the product experience for both administrators and end-users. We periodically review older suggestions to ensure we're focusing on the most relevant feedback. This suggestion is being closed due to a lack of engagement in the last four years, including no new watchers, votes, or comments. This inactivity suggests low impact. Therefore, this suggestion is not in consideration for our future roadmap.

            Please note the comments on this thread are not being monitored.

            You can read more about our approach to highly voted suggestions here and how we prioritize what to implement here.

            To learn more about our recent investments in Crowd Data Center, please check our public roadmap.

            Kind regards,
            Crowd Data Center

            Aakrity Tibrewal added a comment - Atlassian Update - 27 April 2025

            Chris Kiick (Inactive) added a comment - From the cloud log analyzer page, the top 25 INFO logs from production Confluence are all crowd.directory. https://cloud-log-tools.internal.domain.dev.atlassian.io/logs/top?index=logs-unicorn-confluence-prd-YYYY.week.WW&logLevel=INFO&startDate=2015%2F08%2F18&endDate=2015-08-27T00%3A00%3A00-05%3A00&limit=25
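The kind of analysis Chris describes can be reproduced offline against any application log. A minimal sketch (the regex assumes the log4j layout shown in the excerpts above; the sample lines are truncated copies of them):

```python
import re
from collections import Counter

# Matches the "LEVEL [thread] [logger.Category]" portion of the log lines above.
LOG_RE = re.compile(r"\b(INFO|WARN|ERROR|DEBUG)\s+\[[^\]]+\]\s+\[([^\]]+)\]")

def top_loggers(lines, level="INFO", n=25):
    """Count log lines per logger category at the given level, busiest first."""
    counts = Counter()
    for line in lines:
        m = LOG_RE.search(line)
        if m and m.group(1) == level:
            counts[m.group(2)] += 1
    return counts.most_common(n)

sample = [
    "2014-11-15 07:14:02,887 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DbCachingRemoteDirectory] synchroniseCache ...",
    "2014-11-15 07:14:04,123 INFO [scheduler_Worker-4] [directory.ldap.cache.RemoteDirectoryCacheRefresher] findAllRemoteUsers ...",
    "2014-11-15 07:14:04,270 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DbCachingRemoteChangeOperations] deleteCachedUsersNotIn ...",
    "2014-11-15 07:14:04,271 INFO [scheduler_Worker-4] [atlassian.crowd.directory.DbCachingRemoteChangeOperations] deleteCachedUsersNotIn ...",
]
print(top_loggers(sample))
```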

            Diego Berrueta added a comment -

            Hi msymons. In many cases, those messages are normal. We have recently made some improvements in this area (CWD-4245), and future releases of Crowd, JIRA, Confluence and Stash will be more precise in their logging.

            Mark Symons added a comment -

            And what about the "not successful" events for INCREMENTAL synch?

            Similar to the example above, I always see these:

            2015-03-19 11:45:24,190 INFO [scheduler_Worker-5] [atlassian.crowd.directory.DbCachingRemoteDirectory] synchroniseCache Attempting INCREMENTAL synchronisation for directory [ 12877825 ]
            2015-03-19 11:45:24,234 INFO [scheduler_Worker-5] [atlassian.crowd.directory.DbCachingRemoteDirectory] synchroniseCache Incremental synchronisation for directory [ 12877825 ] was not completed, falling back to a full synchronisation
            2015-03-19 11:45:24,235 INFO [scheduler_Worker-5] [atlassian.crowd.directory.DbCachingRemoteDirectory] synchroniseCache INCREMENTAL synchronisation for directory [ 12877825 ] was not successful, attempting FULL

            Is this normal? Something that needs to be logged separately? Why would the INCREMENTAL synch not succeed?
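The control flow implied by those three messages is a try-incremental-then-fall-back pattern: the incremental path gives up whenever the remote changes cannot be applied incrementally (for example, no usable change log, or cached state too stale), and the sync still completes via the slower full path. An illustrative sketch of that pattern, not Crowd's actual code:

```python
import logging

log = logging.getLogger("DbCachingRemoteDirectory")

def synchronise_cache(directory_id, incremental_sync, full_sync):
    """Try an incremental sync; fall back to a full sync if it cannot complete.

    incremental_sync and full_sync are callables; incremental_sync returns
    True only when it was able to apply the remote changes incrementally.
    """
    log.info("Attempting INCREMENTAL synchronisation for directory [ %s ]", directory_id)
    if incremental_sync():
        return "INCREMENTAL"
    # Any condition the incremental path cannot handle lands here.
    log.info("INCREMENTAL synchronisation for directory [ %s ] was not successful, "
             "attempting FULL", directory_id)
    full_sync()
    return "FULL"
```

In this model the "was not successful" lines are informational rather than errors: the cache still ends up synchronised, just via the full scan.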

            joe added a comment -

            CWD-3132 is about a specific case (the first synchronisation) that may be better kept chattier than the others.


            Samuel Day (Inactive) added a comment -

            Okay, so it looks like this sync process was previously even more chatty than it is today, which was resolved in CWD-3044. I don't quite understand CWD-3132; it seems tangential to this ticket.

            joe added a comment -

            This was last visited in CWD-3044, and CWD-3132 (reported during the same work) has not been fixed yet.


            Richard Atkins added a comment -

            I think the first time we sync a server, we should give detailed logging on what we found, as it is extremely helpful in diagnosing configuration issues. However, steady-state logging should be silent unless debug logging is enabled, as the server is effectively not doing anything at that point.

            Whether that detailed logging should happen the first time after a restart, or the first time ever, is the real question.
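Richard's proposal amounts to choosing the level per message based on whether this is the directory's first sync. A minimal sketch of that policy, assuming "first" means first since process startup (the restart interpretation; the names here are hypothetical, not Crowd's API):

```python
import logging

log = logging.getLogger("directory.sync")

_synced = set()  # directory ids synced since this process started

def sync_log_level(directory_id):
    """INFO for a directory's first sync after startup, DEBUG thereafter."""
    first = directory_id not in _synced
    _synced.add(directory_id)
    return logging.INFO if first else logging.DEBUG

def log_sync_detail(directory_id, message):
    # Detailed messages surface once per directory per restart, then go quiet.
    log.log(sync_log_level(directory_id), message)
```

The "first time ever" variant would persist the seen-directory set instead of keeping it in memory, at the cost of losing the diagnostics after a config change.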

            Samuel Day (Inactive) added a comment - /cc richatkins

              Assignee: Unassigned
              Reporter: Samuel Day (Inactive) (sday@atlassian.com)
              Votes: 2
              Watchers: 12