Bitbucket Data Center / BSERV-19490

Mesh sidecar process doesn't work when process.timeout.execution value is set to 0 or a negative value


      Issue Summary

      According to the Configuration properties documentation, process.timeout.execution can be set to 0 or a negative value to fully disable the execution timeout; however, doing so causes the mesh sidecar process to restart repeatedly.

      This is reproducible on Data Center: yes

      Steps to Reproduce

      • Download the Bitbucket 8.9.6 tar file (or any later version) from the Atlassian website.
      • Install and start the Bitbucket application, then connect it to a database.
      • Shut down the Bitbucket application and set the process.timeout.execution=0 parameter in the <Bitbucket-Home-Directory>/shared/bitbucket.properties file.
      • Start the Bitbucket application.
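      The steps above can be sketched as shell commands. The start/stop script paths are assumptions based on a default tarball install, and a stand-in home directory is used here so the snippet is runnable anywhere:

```shell
# Stop Bitbucket before editing (assumed default tarball script location):
# <bitbucket-install-dir>/bin/stop-bitbucket.sh

# Append the documented "disable timeout" setting to shared/bitbucket.properties.
# BITBUCKET_HOME is a stand-in path for illustration.
BITBUCKET_HOME="${BITBUCKET_HOME:-/tmp/bitbucket-home}"
mkdir -p "$BITBUCKET_HOME/shared"
echo "process.timeout.execution=0" >> "$BITBUCKET_HOME/shared/bitbucket.properties"

# Confirm the property was written
grep "process.timeout.execution" "$BITBUCKET_HOME/shared/bitbucket.properties"

# Restart Bitbucket to pick up the change:
# <bitbucket-install-dir>/bin/start-bitbucket.sh
```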

      Expected Results

      Bitbucket should fully disable the timeout when git is generating output, and the mesh sidecar process shouldn't crash.

      Actual Results

      Bitbucket repeatedly restarts the mesh sidecar process, resulting in a "Repository not available offline" error when a user attempts to access repositories. The following messages appear just before Bitbucket restarts the mesh process:

      2024-06-16 16:42:50,381 INFO  [mesh-sidecar-monitor:thread-2]  c.a.s.i.s.g.m.DefaultSidecarManager Failed to send gRPC ping
      2024-06-16 16:43:00,363 INFO  [mesh-sidecar-monitor:thread-2]  c.a.s.i.s.g.m.DefaultSidecarManager Failed to send gRPC ping
      2024-06-16 16:43:10,362 INFO  [mesh-sidecar-monitor:thread-2]  c.a.s.i.s.g.m.DefaultSidecarManager Failed to send gRPC ping
      2024-06-16 16:43:20,406 INFO  [mesh-sidecar-monitor:thread-2]  c.a.s.i.s.g.m.DefaultSidecarManager Failed to send gRPC ping
      2024-06-16 16:43:30,391 INFO  [mesh-sidecar-monitor:thread-2]  c.a.s.i.s.g.m.DefaultSidecarManager Failed to send gRPC ping
      2024-06-16 16:43:40,439 INFO  [mesh-sidecar-monitor:thread-2]  c.a.s.i.s.g.m.DefaultSidecarManager Failed to send gRPC ping
      2024-06-16 16:43:50,425 INFO  [mesh-sidecar-monitor:thread-2]  c.a.s.i.s.g.m.DefaultSidecarManager Failed to send gRPC ping
      2024-06-16 16:44:00,385 INFO  [mesh-sidecar-monitor:thread-2]  c.a.s.i.s.g.m.DefaultSidecarManager Failed to send gRPC ping
      2024-06-16 16:44:10,363 INFO  [mesh-sidecar-monitor:thread-2]  c.a.s.i.s.g.m.DefaultSidecarManager Failed to send gRPC ping
      2024-06-16 16:44:14,375 WARN  [mesh-sidecar-monitor:thread-2]  c.a.s.i.s.g.m.DefaultSidecarManager gRPC ping response hasn't been received for 5 periods (94018ms ago), attempting restart
      2024-06-16 16:44:14,563 INFO  [mesh-grpc-request:thread-1]  c.a.s.i.s.g.m.DefaultMeshSidebandRegistry Sidecar#0 (http://localhost:7777): Sideband channel closed
      2024-06-16 16:44:14,563 INFO  [mesh-grpc-request:thread-1]  c.a.s.i.s.g.m.DefaultMeshSidebandRegistry Sidecar#0 (http://localhost:7777): Reopening sideband channel
      2024-06-16 16:44:14,739 INFO  [mesh-grpc-request:thread-5]  c.a.s.i.mesh.DefaultMeshNodeRegistry Node Sidecar (http://localhost:7777) went offline
      2024-06-16 16:44:14,723 INFO  [mesh-grpc-request:thread-6]  c.a.s.i.s.g.m.DefaultMeshSidebandRegistry Sidecar#0 (http://localhost:7777): Sideband channel closed because node is unavailable
      io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
      	at io.grpc.Status.asRuntimeException(Status.java:535)
      	at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:487)
      	at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
      	at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
      	at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
      	at com.atlassian.stash.internal.scm.git.mesh.LastSeenClientInterceptor$LastSeenClientListener.onClose(LastSeenClientInterceptor.java:40)
      	at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
      	at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
      	at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
      	at com.atlassian.stash.internal.scm.git.mesh.StatefulClientCallListener.onClose(StatefulClientCallListener.java:34)
      	at com.atlassian.stash.internal.scm.git.mesh.ErrorHandlingClientInterceptor$ErrorHandlingCall$1.onClose(ErrorHandlingClientInterceptor.java:149)
      	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
      	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
      	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
      	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
      	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
      	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.lang.Thread.run(Thread.java:750)
      	... 1 frame trimmed
      Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:7777
      Caused by: java.net.ConnectException: Connection refused
      	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
      	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
      	at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337)
      	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
      	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)
      	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
      	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
      	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
      	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
      	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
      	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
      	at java.lang.Thread.run(Thread.java:750)
      2024-06-16 16:44:15,674 INFO  [mesh-sidecar-monitor:thread-2]  c.a.s.i.s.g.m.DefaultSidecarManager Sidecar has stopped (Exit code: 0)
      2024-06-16 16:44:21,027 WARN  [mesh-sidecar-monitor:thread-2]  c.a.s.i.s.g.m.DefaultSidecarManager atlassian-mesh.log: WARN  [main] - c.a.b.m.g.hook.DefaultGitHookService No native hook callback is available for this platform (Linux 6.2.15-100.fc36.aarch64 aarch64). Falling back to the perl version.
      2024-06-16 16:44:21,027 INFO  [mesh-sidecar-monitor:thread-2]  c.a.s.i.s.g.m.DefaultSidecarManager Sidecar started after 5353ms
      2024-06-16 16:44:23,716 INFO  [mesh-grpc-request:thread-6]  c.a.s.i.s.g.m.DefaultMeshSidebandRegistry Sidecar#0 (http://localhost:7777): Reopening sideband channel
      2024-06-16 16:44:24,169 INFO  [mesh-grpc-request:thread-6]  c.a.s.i.mesh.DefaultMeshNodeRegistry Node Sidecar (http://localhost:7777) came online
      

      Workaround

      • Remove the process.timeout.execution parameter from the bitbucket.properties and mesh.properties files, then restart the Bitbucket application.
      • Alternatively, if the parameter was added to disable the timeout in order to resolve a specific issue, set it to a high positive value instead of 0 or a negative value.
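
      A minimal sketch of the second workaround, assuming a one-hour limit is acceptable (the 3600-second value is illustrative only and is not taken from this ticket):

```properties
# <Bitbucket-Home-Directory>/shared/bitbucket.properties
# Use a large positive value instead of 0 or a negative value,
# which trigger the sidecar restart loop described above.
process.timeout.execution=3600
```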

              Dyon Georgopoulos
              Aman Shrivastava