Jira Software Data Center / JSWSERVER-20190

Due to large request size Jira receives error 400 from Bitbucket and/or Fisheye when refreshing dev panel cache

      Issue Summary

      Due to the large request size, Jira receives error 400 (Bad Request) from Bitbucket and/or Fisheye when refreshing the dev panel cache.

      Steps to Reproduce

      1. Integrate Bitbucket and/or Fisheye with Jira using application links.
      2. Execute a GET request against the <bitbucket_url>/rest/remote-link-aggregation/latest/aggregation endpoint with a query string longer than 8 KB (a sketch of such a request follows these steps), for example:
        /rest/remote-link-aggregation/latest/aggregation?globalId=TEST-6224&globalId=TEST3-6224&globalId=TEST4-216&globalId=TEST2-18464&globalId=TEST-6543&globalId=TEST3-6543&globalId=TEST5-931&globalId=TEST1-674&globalId=TEST1-675&globalId=TEST2-17213&globalId=TEST2-14338&globalId=TEST10-1126&globalId=TEST11-1126&globalId=TEST2-18466&globalId=TEST5-1625&globalId=TEST5-2285&globalId=TEST5-2286&globalId=TEST5-2287&globalId=TEST5-2727&globalId==TEST13-215&globalId==TEST13-216&globalId=TEST1-620&globalId=TEST1-621&globalId=TEST5-1633&globalId=TEST5-1634&globalId=TEST6-1006&globalId=TEST5-1635&globalId=TEST2-17898&globalId=TEST14-536&globalId=TEST2-15631&globalId=TEST5-661&globalId=TEST2-17222&globalId=TEST4-341&globalId==TEST12-772&globalId=TEST-6246&globalId=TEST3-6246&globalId=TEST-5570&globalId=TEST3-5570&globalId=TEST1-452&globalId=TEST2-14347&globalId=TEST1-622&globalId=TEST2-17907&globalId=TEST-4506&globalId=TEST3-4506&globalId=TEST14-609&globalId=TEST5-1251&globalId=TEST2-18837&globalId=TEST5-1253&globalId=TEST4-266&globalId=TEST2-14351&globalId=TEST2-17913&globalId=TEST2-17914&globalId=TEST5-2736&globalId=TEST-6254&globalId=TEST3-6254&globalId=TEST1-719&globalId=TEST2-17235&globalId=TEST-4878&globalId=TEST3-4878&globalId=TEST2-18477&globalId=TEST-7177&globalId=TEST3-7177&globalId=TEST2-18478&globalId=TEST1-739&globalId=TEST5-3128&globalId=TEST-2058&globalId=TEST3-2058&globalId=TEST1-720&globalId=TEST2-16660&globalId=TEST2-18485&globalId=TEST-6560&globalId=TEST3-6560&globalId=TEST5-592&globalId=TEST-5577&globalId=TEST3-5577&globalId=TEST-6561&globalId=TEST3-6561&globalId=TEST5-593&globalId=TEST2-14368&globalId=TEST4-179&globalId=TEST2-17925&globalId=TEST7-163&globalId=TEST6-854&globalId=TEST-1606&globalId=TEST3-1606&globalId=TEST5-726&globalId=TEST1-740&globalId=TEST2-12037&globalId=TEST2-17249&globalId=TEST-7282&globalId=TEST3-7282&globalId=TEST16-541&globalId=TEST17-541&globalId=TEST18-541&globalId=TEST2-14387&globalId=TEST-7183&globalId=TEST3-7183&globalId=TEST2-17253&globalId=TEST2-17254&globalId=TEST2-18849&globalId=TEST-7184&globalId=TEST3-7184&globalId=TEST4-325&globalId=TEST2-18497&globalId=TEST2-16673&globalId=TEST2-16674&globalId=TEST2-17941&globalId=TEST2-16675&globalId=TEST2-16676&globalId=TEST1-742&globalId=TEST2-16678&globalId=TEST2-16679&globalId=TEST2-18851&globalId=TEST2-16680&globalId=TEST2-17944&globalId=TEST2-16681&globalId=TEST7-564&globalId=TEST2-17261&globalId=TEST2-18852&globalId=TEST2-16691&globalId=TEST-7058&globalId=TEST3-7058&globalId==TEST13-217&globalId=TEST14-507&globalId=TEST5-1680&globalId=TEST-5582&globalId=TEST3-5582&globalId=TEST5-1683&globalId=TEST1-455&globalId=TEST2-17273&globalId=TEST-6832&globalId=TEST3-6832&globalId=TEST1-571&globalId=TEST5-3185&globalId=TEST7-490&globalId=TEST2-17959&globalId=TEST2-17297&globalId=TEST5-2340&globalId=TEST6-1016&globalId==TEST13-218&globalId=TEST2-17299&globalId=TEST2-17300&globalId=TEST5-1690&globalId=TEST5-3191&globalId=TEST14-494&globalId=TEST-6576&globalId=TEST3-6576&globalId=TEST5-1044&globalId=TEST2-17303&globalId=TEST6-1237&globalId=TEST6-1238&globalId=TEST6-1239&globalId==TEST13-193&globalId==TEST13-194&globalId=TEST2-17963&globalId=TEST2-17311&globalId=TEST-851&globalId=TEST3-851&globalId=TEST7-491&globalId=TEST-6579&globalId=TEST3-6579&globalId=TEST14-538&globalId=TEST14-539&globalId=TEST7-221&globalId=TEST2-18872&globalId=TEST14-541&globalId=TEST7-224&globalId=TEST5-1707&globalId=TEST5-3199&globalId=TEST7-226&globalId=TEST10-1130&globalId=TEST11-1130&globalId=TEST5-1709&globalId=TEST2-17319&globalId=TE
ST5-2349&globalId=TEST2-17320&globalId=TEST5-1710&globalId=TEST5-2350&globalId=TEST5-1712&globalId=TEST5-1300&globalId=TEST2-17322&globalId=TEST5-2357&globalId=TEST-879&globalId=TEST3-879&globalId=TEST1-624&globalId=TEST6-1252&globalId=TEST2-17326&globalId=TEST-886&globalId=TEST3-886&globalId=TEST7-252&globalId=TEST2-14483&globalId=TEST2-15702&globalId=TEST2-17327&globalId=TEST5-3208&globalId=TEST7-452&globalId=TEST8-971&globalId=TEST-4200&globalId=TEST3-4200&globalId=TEST-902&globalId=TEST3-902&globalId=TEST5-536&globalId=TEST19-84&globalId=TEST2-17340&globalId=TEST-6318&globalId=TEST3-6318&globalId=TEST5-1721&globalId=TEST1-721&globalId=TEST2-13258&globalId=TEST5-1722&globalId=TEST5-1723&globalId=TEST2-17345&globalId=TEST2-17346&globalId=TEST5-1724&globalId=TEST15-19&globalId=TEST6-1257&globalId=TEST5-1725&globalId=TEST2-17349&globalId=TEST2-17996&globalId=TEST1-692&globalId=TEST19-80&globalId=TEST2-18506&globalId=TEST2-18507&globalId=TEST-6939&globalId=TEST3-6939&globalId=TEST5-1731&globalId=TEST2-14513&globalId=TEST5-1726&globalId=TEST6-953&globalId=TEST2-18508&globalId=TEST2-17352&globalId=TEST-7294&globalId=TEST3-7294&globalId=TEST2-17355&globalId=TEST2-18510&globalId=TEST4-255&globalId=TEST4-327&globalId=TEST5-1727&globalId=TEST5-1728&globalId=TEST2-13276&globalId=TEST6-787&globalId=TEST5-2780&globalId=TEST2-18515&globalId=TEST2-13281&globalId=TEST2-17368&globalId==TEST12-570&globalId=TEST2-18519&globalId=TEST2-17371&globalId=TEST-7062&globalId=TEST3-7062&globalId=TEST-987&globalId=TEST3-987&globalId=TEST5-680&globalId=TEST8-527&globalId=TEST7-546&globalId=TEST2-15753&globalId=TEST-3011&globalId=TEST3-3011&globalId=TEST7-549&globalId=TEST2-17374&globalId=TEST19-77&globalId=TEST2-17375&globalId=TEST2-17376&globalId=TEST5-3299&globalId=TEST-6606&globalId=TEST3-6606&globalId=TEST5-2363&globalId=TEST5-2364&globalId=TEST-1002&globalId=TEST3-1002&globalId=TEST5-677&globalId=TEST5-2365&globalId=TEST5-1755&globalId=TEST2-18918&globalId=TEST7-454&globalId=TEST2-17382&globalId=TEST2-14588&globalId=TEST14-610&globalId=TEST-3015&globalId=TEST3-3015&globalId=TEST-3459&globalId=TEST3-3459&globalId=TEST2-17386&globalId=TEST1-633&globalId=TEST5-1776&globalId=TEST5-1777&globalId=TEST14-581&globalId=TEST1-464&globalId=TEST14-582&globalId=TEST8-771&globalId=TEST2-17389&globalId=TEST5-1779&globalId=TEST-6942&globalId=TEST3-6942&globalId=TEST-4576&globalId=TEST3-4576&globalId=TEST2-15792&globalId=TEST6-1023&globalId=TEST2-17392&globalId=TEST8-850&globalId=TEST4-110&globalId=TEST2-18537&globalId=TEST5-1784&globalId=TEST1-723&globalId=TEST-7217&globalId=TEST3-7217&globalId=TEST7-497&globalId=TEST-7219&globalId=TEST3-7219&globalId=TEST-7221&globalId=TEST3-7221&globalId=TEST16-560&globalId=TEST17-560&globalId=TEST18-560&globalId=TEST2-14638&globalId=TEST2-13347&globalId=TEST2-18539&globalId=TEST5-2801&globalId=TEST2-15809&globalId=TEST5-1787&globalId=TEST2-18543&globalId=TEST2-17403&globalId=TEST-4939&globalId=TEST3-4939&globalId=TEST8-853&globalId=TEST5-1789&globalId=TEST5-1790&globalId=TEST-5624&globalId=TEST3-5624&globalId=TEST14-553&globalId=TEST-6333&globalId=TEST3-6333&globalId=TEST2-17408&globalId=TEST5-2376&globalId=TEST2-18043&globalId=TEST5-2378&globalId=TEST2-17409&globalId=TEST6-1093&globalId=TEST2-17413&globalId=TEST-7314&globalId=TEST3-7314&globalId=TEST2-13391&globalId=TEST-6626&globalId=TEST3-6626&globalId=TEST5-1322&globalId=TEST-6341&globalId=TEST3-6341&globalId=TEST7-552&globalId=TEST5-2814&globalId=TEST1-466&globalId=TEST-6344&globalId=TEST3-6344&globalId=TEST2-16768&globalId=TEST10-1323&
globalId=TEST11-1323&globalId=TEST5-1796&globalId=TEST5-1797&globalId=TEST4-328&globalId=TEST-7070&globalId=TEST3-7070&globalId=TEST2-18049&globalId=TEST2-14716&globalId=TEST5-2826&globalId=TEST2-15876&globalId=TEST2-15883&globalId=TEST5-620&globalId=TEST5-2836&globalId=TEST5-2837&globalId=TEST5-1812&globalId=TEST1-724&globalId=TEST4-223&globalId==TEST12-394&globalId==TEST12-395&globalId==TEST12-396&globalId==TEST12-401&globalId==TEST12-404&globalId=TEST2-15897&globalId=TEST5-1818&globalId=TEST5-1820&globalId=TEST2-13450&globalId=TEST5-2397&globalId=TEST2-16792&globalId=TEST5-2398&globalId=TEST5-2399&globalId=TEST5-2400&globalId=TEST-241&globalId=TEST3-241&globalId=TEST5-2401&globalId=TEST-4626&globalId=TEST3-4626&globalId=TEST4-256&globalId=TEST2-18053&globalId=TEST1-500&globalId=TEST7-269&globalId=TEST7-500&globalId=TEST8-792&globalId=TEST2-18055&globalId=TEST5-1842&globalId=TEST5-1843&globalId=TEST14-611&globalId=TEST2-18057&globalId=TEST5-1848&globalId=TEST5-2404&globalId=TEST-7087&globalId=TEST3-7087&globalId=TEST4-257&globalId=TEST-4232&globalId=TEST3-4232&globalId=TEST1-665&globalId=TEST2-18062&globalId=TEST6-1031&globalId=TEST2-18063&globalId=TEST6-1105&globalId=TEST2-18065&globalId=TEST5-1855&globalId=TEST6-1108&globalId=TEST5-1856&globalId=TEST5-1857&globalId=TEST2-18069&globalId=TEST-7088&globalId=TEST3-7088&globalId=TEST-1108&globalId=TEST3-1108&globalId=TEST-7089&globalId=TEST3-7089&globalId=TEST5-2868&globalId=TEST8-550&globalId=TEST7-379&globalId==TEST12-793&globalId=TEST5-1863&globalId=TEST14-431&globalId=TEST5-1864&globalId=TEST7-506&globalId=TEST7-507&globalId=TEST5-2870&globalId=TEST5-1865&globalId=TEST7-270&globalId=TEST5-2872&globalId=TEST5-2873&globalId=TEST2-16834&globalId=TEST2-18080&globalId=TEST5-2408&"
        • or wait until Jira requests an update automatically.
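
      The following is a minimal sketch, assuming Python with the "requests" library, of how such an oversized GET request could be reproduced manually. The base URL, credentials and issue keys below are placeholders, not values from this report.

        # Sketch: build an aggregation URL with enough globalId parameters to exceed 8 KB.
        import requests

        BASE_URL = "https://bitbucket.example.com"  # assumption: your Bitbucket or Fisheye base URL
        AUTH = ("admin", "admin")                   # assumption: basic auth; application links normally use OAuth

        issue_keys = [f"TEST-{i}" for i in range(1, 600)]       # hypothetical issue keys
        params = [("globalId", key) for key in issue_keys]

        response = requests.get(
            f"{BASE_URL}/rest/remote-link-aggregation/latest/aggregation",
            params=params,
            auth=AUTH,
        )
        print(len(response.request.url), response.status_code)
        # With the default 8 KB header limit, a request this long is rejected with 400/413/414.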

      Expected Results

      • When the cURL request is executed manually, Bitbucket and/or Fisheye returns the results properly.
      • When Jira requests an update, the dev panel is refreshed and no error is logged.

      Actual Results

      • After executing the cURL request against Bitbucket and/or Fisheye, the following response is received by Jira:
        </head><body><h1>HTTP Status 400 – Bad Request</h1><hr class="line" /><p><b>Type</b> Exception Report</p><p><b>Message</b> Request header is too large</p><p><b>Description</b> The server cannot or will not process the request due to something that is perceived to be a client error (e.g., malformed request syntax, invalid request message framing, or deceptive request routing).</p><p><b>Exception</b></p><pre>java.lang.IllegalArgumentException: Request header is too large
        
      • When Jira requests an update, one of the following errors is logged on the Jira side:
        • Example 1:
          2019-06-06 21:16:34,102 Caesium-1-3 ERROR ServiceRunner     [c.a.j.p.devstatus.provider.DefaultDevSummaryPollService] Refresh failure
          com.atlassian.jira.plugin.devstatus.provider.DataProviderRefreshFailure: Data Provider refresh failed with error code -1 and message - com.atlassian.sal.api.net.ResponseException: org.apache.http.ConnectionClosedException: Premature end of chunk coded message body: closing chunk expected]
          	at com.atlassian.jira.plugin.devstatus.provider.DefaultCachingProviderHelper.refreshProvider(DefaultCachingProviderHelper.java:79)
          	at com.atlassian.jira.plugin.devstatus.provider.DefaultDevSummaryPollService.handlePollingSuccess(DefaultDevSummaryPollService.java:69)
          	at com.atlassian.jira.plugin.devstatus.provider.DefaultDevSummaryPollService.lambda$performPull$1(DefaultDevSummaryPollService.java:51)
          	at com.atlassian.jira.plugin.devstatus.provider.source.applink.PollResult$PollResultSuccess.fold(PollResult.java:51)
          	at com.atlassian.jira.plugin.devstatus.provider.DefaultDevSummaryPollService.performPull(DefaultDevSummaryPollService.java:47)
          	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
          
        • Example 2:
          2020-05-04 17:06:47,242+0200 Caesium-1-2 ERROR anonymous     [c.a.j.p.devstatus.provider.DefaultDevSummaryPollService] Refresh failure
          com.atlassian.jira.plugin.devstatus.provider.DataProviderRefreshFailure: Data Provider [Id: aec41341-413c-35bd-b05e-4ae260594c8c, Type: stash] refresh failed with error code -1 and message - com.atlassian.sal.api.net.ResponseException: javax.net.ssl.SSLProtocolException: Connection reset]
          	at com.atlassian.jira.plugin.devstatus.provider.DefaultCachingProviderHelper.refreshProvider(DefaultCachingProviderHelper.java:78)
          	at com.atlassian.jira.plugin.devstatus.provider.DefaultDevSummaryPollService.handlePollingSuccess(DefaultDevSummaryPollService.java:69)
          	at com.atlassian.jira.plugin.devstatus.provider.DefaultDevSummaryPollService.lambda$performPull$1(DefaultDevSummaryPollService.java:51)
          	at com.atlassian.jira.plugin.devstatus.provider.source.applink.PollResult$PollResultSuccess.fold(PollResult.java:51)
          
        • Example 3:
          2020-04-21 23:29:46,249+0200 Caesium-1-2 ERROR ServiceRunner [c.a.j.p.devstatus.provider.DefaultDevSummaryPollService] Refresh failure
          com.atlassian.jira.plugin.devstatus.provider.DataProviderRefreshFailure: Data Provider [Id: 3817a292-a5ca-3d31-8e6c-78ce507f0da4, Type: stash] refresh failed with error code 414 and message - HTTP status 414 Request-URI Too Large]
          at com.atlassian.jira.plugin.devstatus.provider.DefaultCachingProviderHelper.refreshProvider(DefaultCachingProviderHelper.java:78)
          

      Note

      The dev status plugin has a guaranteed delivery feature: if Jira does not receive an update from Bitbucket and/or Fisheye for some time, Jira reaches out to Bitbucket and/or Fisheye and requests an update, to which Bitbucket and/or Fisheye replies with a list of issues that have been modified. Jira then reaches out again to fetch all of those updates, and it is this second request that fails in this case.
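
      As a rough, illustrative calculation (the ~20 bytes per key is an assumption based on typical issue key lengths), each issue adds roughly "globalId=PROJ-12345&" to the query string, so a guaranteed-delivery batch of a few hundred issues is already close to Tomcat's and Jetty's default 8192-byte limit:

        # Back-of-the-envelope estimate of the aggregation URL size for a given batch.
        bytes_per_key = len("globalId=") + len("PROJ-12345") + len("&")   # ~20 bytes per issue
        base = len("/rest/remote-link-aggregation/latest/aggregation?")
        for issue_count in (100, 400, 1000):
            print(issue_count, "issues ->", base + issue_count * bytes_per_key, "bytes")
        # A batch of roughly 400 issue keys already approaches the default 8192-byte limit.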

      Workarounds

      Option 1 (preferred)

      Decrease the fetch size used for guaranteed delivery so that, even if some fetches triggered by webhooks fail, guaranteed delivery will pick them up later. To do that:

      1. On the Bitbucket side, add the following to the bitbucket.properties file:
        plugin.jira-development-integration.issues.updated.max=200
        
      2. Adjust this number up or down to find the largest fetch size that doesn't fail in your particular case, remembering that issue keys vary in length, so more or less text may be added to the URL (a rough sizing sketch follows this list).
      3. Restart Bitbucket.
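
      A rough sizing sketch for picking a value of plugin.jira-development-integration.issues.updated.max; the ~15-character key length and the 1 KB of headroom for path, cookies and other headers are assumptions, not measured values:

        # Estimate the largest batch size that keeps the aggregation URL under the header limit.
        header_limit = 8192                                  # Tomcat/Jetty default maximum header size
        overhead = 1024                                      # assumption: path, cookies, other headers
        bytes_per_key = len("globalId=") + 15 + len("&")     # assumption: ~15-character issue keys
        print("suggested max =", (header_limit - overhead) // bytes_per_key)   # ~286 in this example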

      Option 2 - change the header size

      If there is a reverse-proxy in front of Bitbucket and / or Fisheye, follow both Step 1 and Step 2. Otherwise, follow only Step 1.

      Step 1

      1. On the Bitbucket server, increase the Tomcat header size by editing bitbucket.properties and adding:
        server.max-http-header-size=32768
        

        On the Fisheye side, increase the Jetty header size by setting the jetty.http.requestHeaderSize system property to the value 32768.

      2. Restart Bitbucket and/or Fisheye.
      3. This changes Tomcat's / Jetty's maximum HTTP header size to 32 KB. If you still experience the error, set a higher header size (a quick verification sketch follows this list).
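
      A quick way to check that the new limit is in effect is sketched below, assuming Python with the "requests" library; the base URL is a placeholder and the /status endpoint is an assumption (substitute any lightweight URL on your instance, e.g. for Fisheye):

        # Probe the effective request-size limit by sending deliberately oversized query strings.
        import requests

        BASE_URL = "https://bitbucket.example.com"   # assumption: adjust to your instance

        for size in (16_000, 40_000):
            r = requests.get(f"{BASE_URL}/status", params={"probe": "x" * size})
            print(size, "bytes ->", r.status_code)
        # After raising the limit to 32768, the ~16 KB request should no longer fail,
        # while the ~40 KB request is still expected to be rejected (400/413/414).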


      Please note:

      When implementing the workaround in Bitbucket, if you have server.tomcat.max-http-header-size defined in your bitbucket.properties, you should remove it and replace it with server.max-http-header-size.

      The values are functionally the same, so to avoid conflicts, only one should be present. As server.tomcat.max-http-header-size will be removed in a future version of Bitbucket, we recommend you go with server.max-http-header-size.
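
      For example, a bitbucket.properties sketch with only the new property present (value as in Step 1 above):

        # server.tomcat.max-http-header-size=32768   <- remove this legacy property if it exists
        server.max-http-header-size=32768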

      Step 2

      If you are using a reverse proxy, you also need to make sure its request line / header size limit is set to the same size (32768 bytes or larger), for example:
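
      The snippets below are sketches for two common reverse proxies; which one applies (and the exact file to edit) depends on the proxy actually in front of your instance:

        # nginx: allow request lines / headers up to 32 KB
        large_client_header_buffers 4 32k;

        # Apache httpd: allow request lines and header fields up to 32768 bytes
        LimitRequestLine 32768
        LimitRequestFieldSize 32768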


            Wim Kerstens added a comment -

            I upgraded the Development Integration Plugin from version 5.5.4 to 5.5.6 on our recently upgraded Jira (from 7.9.2 to 8.7.1) as described in this KB https://confluence.atlassian.com/jirakb/development-integration-plugin-gets-stuck-fetching-updated-issues-from-bitbucket-server-1014273314.html
            But after the update of that plugin, still the same errors in the atlassian-jira.log
            In this article https://jira.atlassian.com/browse/JSWSERVER-20612 there is a possible workaround, but before applying this one, I opened up a ticket with Atlassian Support...

            Wim Kerstens added a comment -

            Hi Hamdy,

            We are facing the same issue. We just upgraded our on-prem Jira from v7.x to 8.7.1.
            2 Fisheye/Crucible, 1 Bitbucket and 1 Confluence application links are connected.

            If you find a solution, please update it here, I'll keep an eye on it.

            regards,
            Wim

            Akash added a comment -

            Hello,

            What action is to be taken on the Jira end? I can see the update on BB but nothing on the Jira end.

            Best,

            Akash

            Hamdy Atakora added a comment - edited

            My error is a bit different, as I get a 413 error instead, coming from Fisheye it seems (Type: fecru):

            [c.a.j.p.devstatus.provider.DefaultDevSummaryPollService] Refresh failure
            com.atlassian.jira.plugin.devstatus.provider.DataProviderRefreshFailure: Data Provider [Id: c1700123-6791-3c82-9f84-14d51bb1aab8, Type: fecru] refresh failed with error code 413 and message - HTTP status 413 FULL head]
            	at com.atlassian.jira.plugin.devstatus.provider.DefaultCachingProviderHelper.refreshProvider(DefaultCachingProviderHelper.java:78)
            	at com.atlassian.jira.plugin.devstatus.provider.DefaultDevSummaryPollService.handlePollingSuccess(DefaultDevSummaryPollService.java:69)
            	at com.atlassian.jira.plugin.devstatus.provider.DefaultDevSummaryPollService.lambda$performPull$1(DefaultDevSummaryPollService.java:51)
            	at com.atlassian.jira.plugin.devstatus.provider.source.applink.PollResult$PollResultSuccess.fold(PollResult.java:51)
            	at com.atlassian.jira.plugin.devstatus.provider.DefaultDevSummaryPollService.performPull(DefaultDevSummaryPollService.java:47)
            	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
            	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
            	at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
            	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
            	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
            	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
            	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
            	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
            	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
            	at com.atlassian.jira.plugin.devstatus.provider.DevSummaryUpdateJob.runJob(DevSummaryUpdateJob.java:85)
            	at com.atlassian.scheduler.core.JobLauncher.runJob(JobLauncher.java:134)
            	at com.atlassian.scheduler.core.JobLauncher.launchAndBuildResponse(JobLauncher.java:106)
            	at com.atlassian.scheduler.core.JobLauncher.launch(JobLauncher.java:90)
            	at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.launchJob(CaesiumSchedulerService.java:435)
            	at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeClusteredJob(CaesiumSchedulerService.java:430)
            	at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeClusteredJobWithRecoveryGuard(CaesiumSchedulerService.java:454)
            	at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeQueuedJob(CaesiumSchedulerService.java:382)
            	at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.executeJob(SchedulerQueueWorker.java:66)
            	at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.executeNextJob(SchedulerQueueWorker.java:60)
            	at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.run(SchedulerQueueWorker.java:35)
            	at java.lang.Thread.run(Thread.java:748)

            Peter-Dave Sheehan added a comment -

            I have this same issue with fisheye integration.

            For fisheye, you want to set the system property jetty.http.headerbuffersize (for 4.6 or before) or jetty.http.requestHeaderSize (for 4.7).

            But with 4.6, I seem to have perhaps hit a hardcoded limit. Even with the size set to 48k, requests larger than 32k are failing with:

             2020-02-04 11:45:00,192 WARN  [qtp871790326-288 ] org.eclipse.jetty.http.HttpParser HttpParser-fill - HttpParser Full for SCEP@3a7b7bf3{l(/0:0:0:0:0:0:0:1:53714)<->r(/0:0:0:0:0:0:0:1:8060),d=true,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=0r}-{AsyncHttpConnection@6950c593,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=-10,l=0,c=-3},r=0}

            We may also have a limitation on our F5 load balancer, I only get the error above when I bypass it.

            So generally, it would be nice to have these "guaranteed delivery" requests batched/paged so that they are never too large.

            Alex [Atlassian,PSE] added a comment -

            When implementing the workaround, if you have server.tomcat.max-http-header-size defined in your bitbucket.properties, you should remove it and replace it with server.max-http-header-size. The values are functionally the same, so to avoid conflicts, only one should be present. As server.tomcat.max-http-header-size will be removed in a future version of Bitbucket, we recommend you go with server.max-http-header-size.

            Nabil Sayegh added a comment -

            Here's a stacktrace, maybe you can create a regex for hercules loganalyzer from it:

            2019-09-16 11:05:51,276 Caesium-1-3 ERROR ServiceRunner     [c.a.j.p.devstatus.provider.DefaultDevSummaryPollService] Refresh failure
            com.atlassian.jira.plugin.devstatus.provider.DataProviderRefreshFailure: Data Provider refresh failed with error code 400 and message - HTTP status 400 ]
            	at com.atlassian.jira.plugin.devstatus.provider.DefaultCachingProviderHelper.refreshProvider(DefaultCachingProviderHelper.java:79)
            	at com.atlassian.jira.plugin.devstatus.provider.DefaultDevSummaryPollService.handlePollingSuccess(DefaultDevSummaryPollService.java:69)
            	at com.atlassian.jira.plugin.devstatus.provider.DefaultDevSummaryPollService.lambda$performPull$1(DefaultDevSummaryPollService.java:51)
            	at com.atlassian.jira.plugin.devstatus.provider.source.applink.PollResult$PollResultSuccess.fold(PollResult.java:51)
            	at com.atlassian.jira.plugin.devstatus.provider.DefaultDevSummaryPollService.performPull(DefaultDevSummaryPollService.java:47)
            	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
            	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
            	at java.util.Iterator.forEachRemaining(Iterator.java:116)
            	at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
            	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
            	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
            	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
            	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
            	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
            	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
            	at com.atlassian.jira.plugin.devstatus.provider.DevSummaryUpdateJob.runJob(DevSummaryUpdateJob.java:85)
            	at com.atlassian.scheduler.core.JobLauncher.runJob(JobLauncher.java:153)
            	at com.atlassian.scheduler.core.JobLauncher.launchAndBuildResponse(JobLauncher.java:118)
            	at com.atlassian.scheduler.core.JobLauncher.launch(JobLauncher.java:97)
            	at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.launchJob(CaesiumSchedulerService.java:443)
            	at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeClusteredJob(CaesiumSchedulerService.java:438)
            	at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeClusteredJobWithRecoveryGuard(CaesiumSchedulerService.java:462)
            	at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeQueuedJob(CaesiumSchedulerService.java:390)
            	at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService$1.consume(CaesiumSchedulerService.java:285)
            	at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService$1.consume(CaesiumSchedulerService.java:282)
            	at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.executeJob(SchedulerQueueWorker.java:65)
            	at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.executeNextJob(SchedulerQueueWorker.java:59)
            	at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.run(SchedulerQueueWorker.java:34)
            	at java.lang.Thread.run(Thread.java:748)

            Aron Felberbaum added a comment -

            We are also being affected by this issue on smart commits, confirmed on a support ticket.
