Bitbucket Data Center / BSERV-11636

Loading pull request metadata for branches can time out unexpectedly


Details

    Description

      Issue Summary

      When viewing the "Branches" page, if any branch has a large number of open pull requests, retrieving pull request counts can time out.

      Steps to Reproduce

      • Visit $BASE_URL/projects/KEY/repos/slug/branches
      • Observe that the pull request column is empty

      Expected Results

      Pull request counts and the "highest" pull request state should be shown for each branch that has pull requests (regardless of state).

      Actual Results

      The pull request column is empty.

      When retrieving pull request metadata times out, the following message appears in atlassian-bitbucket.log:

      2019-03-02 11:09:46,588 INFO  [http-nio-7990-exec-288] jdoe *1REX68Sx669x10388721x3 xc133i 1.1.1.1,2.2.2.2 "GET /rest/api/latest/projects/KEY/repos/slug/branches HTTP/1.1" c.a.s.i.r.PluginRefMetadataMapProvider Timed out when retrieving ref metadata for com.atlassian.bitbucket.server.bitbucket-ref-metadata:outgoing-pull-request-metadata
      

      Notes

      If retrieving metadata times out in the middle of a database query, the database connection may be interrupted. This is a recoverable error and should not be cause for concern. The interrupted connection will be closed by the database pool and a replacement connection will be opened. Since the pool typically has many connections available, this process shouldn't impact anything else the server is doing.
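      The log lines below suggest a bounded-wait pattern: the metadata query runs on a worker thread, the caller waits up to the configured timeout, and on timeout the worker is cancelled, which interrupts any blocking JDBC socket read. The class and method below are hypothetical illustrations of that pattern, not Bitbucket's actual code; `Thread.sleep` stands in for the blocking database read.

      ```java
      import java.util.concurrent.*;

      public class RefMetadataTimeoutSketch {
          // Hypothetical stand-in for the bounded wait around a metadata query:
          // the "query" blocks for queryMillis, but the caller only waits timeoutMillis.
          static String loadMetadata(long queryMillis, long timeoutMillis) throws Exception {
              ExecutorService pool = Executors.newSingleThreadExecutor();
              try {
                  Future<String> metadata = pool.submit(() -> {
                      try {
                          Thread.sleep(queryMillis); // plays the role of a blocking JDBC socket read
                          return "pull-request-counts";
                      } catch (InterruptedException e) {
                          // A real JDBC driver surfaces the interrupt differently, e.g. Oracle's
                          // java.sql.SQLRecoverableException: IO Error: Socket read interrupted
                          return "interrupted";
                      }
                  });
                  try {
                      return metadata.get(timeoutMillis, TimeUnit.MILLISECONDS);
                  } catch (TimeoutException e) {
                      metadata.cancel(true); // interrupts the worker, breaking its connection
                      return "timed out";    // the UI then renders an empty pull request column
                  }
              } finally {
                  pool.shutdownNow();
              }
          }

          public static void main(String[] args) throws Exception {
              System.out.println(loadMetadata(400, 100)); // prints "timed out"
          }
      }
      ```

      The key point for this issue: cancelling the future is what leaves the broken connection behind, and the pool (HikariCP in the stack trace below) detects and replaces it.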

      If a database connection is interrupted, it may produce an error similar to the following. (This is for Oracle; the error on other RDBMSes would look different, but will contain similar stack frames.)

      2019-03-02 11:09:46,589 WARN  [threadpool:thread-1] jdoe *1REX68Sx669x10388721x3 xc133i 1.1.1.1,2.2.2.2 "GET /rest/api/latest/projects/KEY/repos/slug/branches HTTP/1.1" c.zaxxer.hikari.pool.ProxyConnection bitbucket - Connection oracle.jdbc.driver.T4CConnection@4e0d50d4 marked as broken because of SQLSTATE(08006), ErrorCode(17002)
      java.sql.SQLRecoverableException: IO Error: Socket read interrupted
      	at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:761)
      	at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:904)
      	at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1082)
      	at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3780)
      	at oracle.jdbc.driver.T4CPreparedStatement.executeInternal(T4CPreparedStatement.java:1343)
      	at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3822)
      	at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1165)
      	at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeQuery(ProxyPreparedStatement.java:52)
      	at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeQuery(HikariProxyPreparedStatement.java)
      	at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:60)
      	at org.hibernate.loader.plan.exec.internal.AbstractLoadPlanBasedLoader.getResultSet(AbstractLoadPlanBasedLoader.java:419)
      	at org.hibernate.loader.plan.exec.internal.AbstractLoadPlanBasedLoader.executeQueryStatement(AbstractLoadPlanBasedLoader.java:191)
      	at org.hibernate.loader.plan.exec.internal.AbstractLoadPlanBasedLoader.executeLoad(AbstractLoadPlanBasedLoader.java:121)
      	at org.hibernate.loader.plan.exec.internal.AbstractLoadPlanBasedLoader.executeLoad(AbstractLoadPlanBasedLoader.java:86)
      	at org.hibernate.loader.collection.plan.AbstractLoadPlanBasedCollectionInitializer.initialize(AbstractLoadPlanBasedCollectionInitializer.java:87)
      	at org.hibernate.persister.collection.AbstractCollectionPersister.initialize(AbstractCollectionPersister.java:688)
      	at org.hibernate.event.internal.DefaultInitializeCollectionEventListener.onInitializeCollection(DefaultInitializeCollectionEventListener.java:75)
      	at org.hibernate.internal.SessionImpl.initializeCollection(SessionImpl.java:2185)
      	at org.hibernate.collection.internal.AbstractPersistentCollection$4.doWork(AbstractPersistentCollection.java:565)
      	at org.hibernate.collection.internal.AbstractPersistentCollection.withTemporarySessionIfNeeded(AbstractPersistentCollection.java:247)
      	at org.hibernate.collection.internal.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:561)
      	at org.hibernate.collection.internal.AbstractPersistentCollection.read(AbstractPersistentCollection.java:132)
      	at org.hibernate.collection.internal.PersistentSet.iterator(PersistentSet.java:163)
      	at java.util.Spliterators$IteratorSpliterator.estimateSize(Spliterators.java:1821)
      	at java.util.Spliterator.getExactSizeIfKnown(Spliterator.java:408)
      	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:480)
      	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
      	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
      	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
      	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
      	at com.atlassian.stash.internal.watcher.InternalWatcherMapping.toWatchers(InternalWatcherMapping.java:112)
      	at com.atlassian.stash.internal.pull.InternalPullRequest.initialize(InternalPullRequest.java:393)
      	at com.atlassian.stash.internal.HibernateUtils.initialize(HibernateUtils.java:92)
      	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
      	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
      	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
      	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
      	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
      	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
      	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
      	at com.atlassian.bitbucket.util.PageImpl.transform(PageImpl.java:137)
      	at com.atlassian.bitbucket.util.PageImpl.transform(PageImpl.java:15)
      	at com.atlassian.stash.internal.HibernateUtils.initializePage(HibernateUtils.java:124)
      	at com.atlassian.stash.internal.pull.HibernatePullRequestDao.search(HibernatePullRequestDao.java:291)
      	at com.atlassian.stash.internal.pull.DefaultPullRequestService.search(DefaultPullRequestService.java:1811)
      	at com.atlassian.stash.internal.pull.DefaultPullRequestService.search(DefaultPullRequestService.java:877)
      	at com.atlassian.plugin.util.ContextClassLoaderSettingInvocationHandler.invoke(ContextClassLoaderSettingInvocationHandler.java:26)
      	at com.atlassian.stash.internal.web.pull.PullRequestMetadataProvider$1.get(PullRequestMetadataProvider.java:47)
      	at com.atlassian.bitbucket.util.PagedIterable$PagedIterator.computeNext(PagedIterable.java:54)
      	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
      	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
      	at com.atlassian.stash.internal.web.pull.PullRequestMetadataProvider.getMetadata(PullRequestMetadataProvider.java:56)
      	at com.atlassian.stash.internal.repository.PluginRefMetadataMapProvider.lambda$null$3(PluginRefMetadataMapProvider.java:97)
      	at com.atlassian.stash.internal.concurrent.DefaultTransferableStateManager$StateTransferringCallable.call(DefaultTransferableStateManager.java:102)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
      	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.lang.Thread.run(Thread.java:748)
      	... 49 frames trimmed
      Caused by: java.net.SocketTimeoutException: Socket read interrupted
      	at oracle.net.nt.TimeoutSocketChannel.read(TimeoutSocketChannel.java:152)
      	at oracle.net.ns.NIOHeader.readHeaderBuffer(NIOHeader.java:82)
      	at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:139)
      	at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:101)
      	at oracle.net.ns.NIONSDataChannel.readDataFromSocketChannel(NIONSDataChannel.java:80)
      	at oracle.net.ano.CryptoNIONSDataChannel.readDataFromSocketChannel(Unknown Source)
      	at oracle.jdbc.driver.T4CMAREngineNIO.prepareForReading(T4CMAREngineNIO.java:98)
      	at oracle.jdbc.driver.T4CMAREngineNIO.unmarshalUB1(T4CMAREngineNIO.java:534)
      	at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:485)
      	at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)
      	at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:612)
      	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:226)
      	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:59)
      	at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:747)
      	... 61 common frames omitted
      

      Workaround

      Increase ref.metadata.timeout in bitbucket.properties until the timeout no longer occurs. The default is ref.metadata.timeout=2, so try a higher value, such as ref.metadata.timeout=5, and raise it further if timeouts persist.
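      As a sketch, the change is a single line in bitbucket.properties (located in the Bitbucket home directory; the value appears to be in seconds, matching the default of 2). A restart is typically required for property changes to take effect.

      ```properties
      # bitbucket.properties
      # Allow ref metadata providers (e.g. pull request counts on the Branches
      # page) more time before giving up. Default: 2
      ref.metadata.timeout=5
      ```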


            People

              bturner Bryan Turner (Inactive)