When a very long, complex URL is present in a wiki page, Confluence throws a java.lang.StackOverflowError after the page is submitted.
The exception is thrown while the URLs are being extracted from the page content via ConfluenceLinkResolver.extractLinkTextList().
I was able to identify the problem as JDK bug 6337993.
Until the JDK bug is fixed, there are two ways to work around this issue in Confluence.
The -Xss Approach
Increase the thread stack size via the -Xss JVM option, for example:
-Xss512k
The optimal stack size is platform and JVM dependent, so some research or consultation is needed before changing this value. Increasing the stack size makes it possible to resolve longer URLs, but only until the next limit is hit.
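The effect of a larger stack can also be demonstrated in plain Java, without restarting the JVM, by running the match on a thread created with an explicit stack size. This is a hypothetical demo (not Confluence code); the pattern below inlines the protocols part of PURE_URL_PATTERN as "https?://" for simplicity:

```java
import java.util.regex.Pattern;

public class BigStackDemo {
    // The alternation-based URL pattern from UrlFilter, with the
    // protocols alternation inlined as "https?://" (an assumption).
    static final Pattern URL = Pattern.compile(
        "((https?://)(%[\\p{Digit}A-Fa-f][\\p{Digit}A-Fa-f]"
        + "|[-_.!~*';/?:@#&=+$,\\p{Alnum}\\[\\]\\\\])+)");

    public static void main(String[] args) throws InterruptedException {
        StringBuilder sb = new StringBuilder("http://example.com/");
        for (int i = 0; i < 20_000; i++) sb.append('a'); // a very long URL
        final String longUrl = sb.toString();

        // Run the match on a thread with a 64 MB stack (the fourth
        // constructor argument), analogous to raising -Xss for the JVM.
        Thread t = new Thread(null, () -> {
            boolean ok = URL.matcher(longUrl).matches();
            System.out.println("matched: " + ok);
        }, "big-stack", 64L * 1024 * 1024);
        t.start();
        t.join();
    }
}
```

The per-thread stack size is only a hint on some platforms, which mirrors the caveat above: how much -Xss actually buys you is platform and JVM dependent.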
Regex Pattern Approach
The Confluence code can be modified to prevent the issue from occurring by simplifying the URL pattern.
Instead of (confluence/confluence/src/java/com/atlassian/confluence/renderer/radeox/filters/UrlFilter.java)
PURE_URL_PATTERN = "((" + protocols + ")(%[\\p{Digit}A-Fa-f][\\p{Digit}A-Fa-f]|[-_.!~*';/?:@#&=+$,\\p{Alnum}\\[\\]\\\\])+)";
you could use
PURE_URL_PATTERN = "((" + protocols + ")\\S+)";
or some similar, simple pattern.
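A quick sketch of why this helps (again with protocols inlined as "https?://", which is an assumption): \S+ is a single character-class repetition, which the JDK regex engine handles iteratively, so even a huge URL matches without blowing the stack:

```java
import java.util.regex.Pattern;

public class SimplePatternDemo {
    // Hypothetical stand-in for the simplified PURE_URL_PATTERN,
    // with the protocols part inlined as "https?://".
    static final Pattern SIMPLE_URL = Pattern.compile("((https?://)\\S+)");

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("http://example.com/path?");
        for (int i = 0; i < 1_000_000; i++) sb.append('x'); // ~1M-char URL
        // \S+ is matched without per-character recursion, so no
        // StackOverflowError even at this length.
        System.out.println("matched: " + SIMPLE_URL.matcher(sb).matches());
    }
}
```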
The downside of this approach is that the matching is not as strict as before, which may or may not break something else. The Atlassian team should be able to determine this.
I'm attaching a simple Java app that I wrote to simulate what happens inside Confluence when this error occurs. When you uncomment the second pattern, the exception is not thrown and the URL is matched.
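For readers without the attachment, a minimal sketch along these lines (it may differ from the attached app, and it again inlines protocols as "https?://") looks like this:

```java
import java.util.regex.Pattern;

public class OverflowSimulation {
    // The alternation-based pattern from UrlFilter, with the
    // protocols part inlined as "https?://" (an assumption).
    static final Pattern COMPLEX_URL = Pattern.compile(
        "((https?://)(%[\\p{Digit}A-Fa-f][\\p{Digit}A-Fa-f]"
        + "|[-_.!~*';/?:@#&=+$,\\p{Alnum}\\[\\]\\\\])+)");

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("http://example.com/");
        for (int i = 0; i < 1_000_000; i++) sb.append('a');
        try {
            System.out.println("matched: "
                + COMPLEX_URL.matcher(sb).matches());
        } catch (StackOverflowError e) {
            // Each iteration of the repeated alternation group adds stack
            // frames, so a long enough URL exhausts the thread stack.
            System.out.println("StackOverflowError caught");
        }
    }
}
```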
A stack trace captured when the exception was thrown is attached as well.