Improve AI Search caching for large tenants in Confluence Cloud


    • Confluence

      Problem

      When users perform their first AI-powered search in Confluence Cloud, the response time is noticeably slower than for subsequent searches.

      Identical AI search queries can also return different results for different users, particularly during the initial search. Search results are filtered based on each user's permissions, so two users with different access rights to spaces or pages will see different results even for the same query.

      This scenario is especially pronounced for large tenants, where search performance and reliability are critical for productivity and user trust. The slower initial performance is due to backend processes such as indexing.
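      To illustrate why identical queries cannot simply share a single cached result, the sketch below keys the cache on both the normalized query and a fingerprint of the user's effective permissions, so users with identical access can share a cache hit while others cannot. This is an illustrative assumption, not Atlassian's implementation; the class PermissionAwareSearchCache, its methods, and the permission fingerprint are hypothetical.

      import java.util.Objects;
      import java.util.Set;
      import java.util.concurrent.ConcurrentHashMap;

      // Hypothetical sketch: cache AI search results per (query, permission fingerprint)
      // so permission-filtered results are never shared across users who can see
      // different content, while users with identical access still get cache hits.
      final class PermissionAwareSearchCache {

          // Key combines the normalized query with a fingerprint of the spaces the user can read.
          record CacheKey(String normalizedQuery, int permissionFingerprint) {}

          private final ConcurrentHashMap<CacheKey, String> cache = new ConcurrentHashMap<>();

          CacheKey keyFor(String query, Set<String> readableSpaceKeys) {
              // The fingerprint is illustrative only; a real implementation would use a stable
              // hash of the user's effective permissions, invalidated when permissions change.
              return new CacheKey(query.trim().toLowerCase(), Objects.hash(readableSpaceKeys));
          }

          String getOrCompute(CacheKey key, java.util.function.Supplier<String> search) {
              return cache.computeIfAbsent(key, k -> search.get());
          }
      }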

      Suggested Solution

      Implement enhanced caching mechanisms specifically tailored for large tenants to optimize the performance of AI search queries.

      This could include pre-caching frequently accessed data and improving the efficiency of backend processes to reduce initial query latency.
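      One way to realize pre-caching is a scheduled warm-up job that re-runs a tenant's most frequent queries so indexes and caches are already hot before a user's first search. The sketch below is an assumption rather than an existing Confluence API; SearchCacheWarmer, popularQueries, and the warm callback are hypothetical names.

      import java.time.Duration;
      import java.util.List;
      import java.util.concurrent.Executors;
      import java.util.concurrent.ScheduledExecutorService;
      import java.util.concurrent.TimeUnit;

      // Hypothetical sketch of a pre-warming job: periodically re-run the most frequent
      // queries for a large tenant so the first AI search of the day hits warm caches.
      final class SearchCacheWarmer {

          private final ScheduledExecutorService scheduler =
                  Executors.newSingleThreadScheduledExecutor();

          void start(List<String> popularQueries, Duration interval,
                     java.util.function.Consumer<String> warm) {
              // e.g. warm = query -> searchService.search(query), where searchService is hypothetical
              scheduler.scheduleAtFixedRate(
                      () -> popularQueries.forEach(warm),
                      0, interval.toMillis(), TimeUnit.MILLISECONDS);
          }

          void stop() {
              scheduler.shutdown();
          }
      }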

      Why This Is Important

      For large organizations, efficient and consistent search is essential to ensure users can quickly find relevant information. Improving the caching strategy for initial AI searches would help deliver a more reliable and performant experience for all users, regardless of when or by whom the search is performed.

      Addressing these discrepancies can enhance user satisfaction and productivity, ensuring that all users receive accurate and timely search results.

      Workaround

      Currently, there is no known workaround for this behavior. A workaround will be added here when available.

            Assignee:
            Unassigned
            Reporter:
            Edson B [Atlassian Support]
            Votes:
            0
            Watchers:
            4
