- Bug
- Resolution: Not a bug
- High
- None
- Severity 3 - Minor
Hi everyone,
Let me shed some light on this issue.
I confirm that the change of the maxResults value for the search API was intentional, and it's not a bug. We decided to reduce the maxResults value not only because of the increased performance and memory impact on Jira, but also after observing the volume of REST API requests that failed with timeout errors. While we appreciate that this solution may not be ideal, and we keep investing in improving the performance of Jira searches, it was necessary to reduce the number of failed API requests without further delay.
According to the Atlassian REST API policy, default and maximum sizes of paged data are not considered part of the API and may change without a deprecation notice. That said, I completely understand the impact this change has had on REST API clients that relied on the anticipated value of 1000 results, and I apologize for the lack of prior communication.
In principle, our recommended solution is to rely on pagination to retrieve the desired number of results in chunks, for any API that supports the startAt parameter. We also recommend that REST API clients systematically confirm the maxResults value when making requests, to prevent disruptions whenever these limits change. We have also updated the related Knowledge Base article, which inaccurately suggested a default value of 1000 results.
Regarding the problems with startAt offsets above 1000 in combination with an expanded changelog, we need to investigate them as a separate issue (see JRACLOUD-67458).
Thank you for your understanding.
Eve Stankiewicz
Jira Cloud Product Management
Summary
JIRA Cloud REST API /rest/api/latest/search?maxResults=1000 is returning only 100 results.
Steps to Reproduce
- Call the search REST API with maxResults=1000
Expected Results
The API returns up to 1000 results.
Actual Results
The API returns only 100 results.
Notes:
This is impacting add-on dashboards and reporting.
Changing maxResults Parameter for JIRA REST API.
Workaround
You can use the startAt parameter to retrieve the remaining results (when more than 100 match). The /search endpoint implements a pagination API that can be used to retrieve all of the required data.
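The workaround above can be sketched as a simple loop, here in Python. This is a minimal sketch, not an official client: `fake_search` stands in for the real HTTP call to /rest/api/2/search, and the server-side cap of 100 is an assumption based on the behavior reported in this ticket. The key point is to advance startAt by the maxResults value echoed in each response, not by the value requested.

```python
# Sketch of paginating /rest/api/2/search with startAt, assuming the server
# may clamp maxResults (e.g. to 100) regardless of what the client requests.
# fake_search below stands in for the real HTTP call; all names illustrative.

SERVER_CAP = 100                                         # hypothetical clamp
DATASET = [{"key": f"EXP-{i}"} for i in range(1, 264)]   # 263 fake issues

def fake_search(start_at, max_results):
    """Mimic the search endpoint: clamp the page size, return one page."""
    page_size = min(max_results, SERVER_CAP)
    page = DATASET[start_at:start_at + page_size]
    return {"startAt": start_at, "maxResults": page_size,
            "total": len(DATASET), "issues": page}

def fetch_all_issues(requested_page_size=1000):
    """Collect every issue, trusting the maxResults echoed in each
    response rather than the value we asked for."""
    issues, start_at = [], 0
    while True:
        body = fake_search(start_at, requested_page_size)
        issues.extend(body["issues"])
        start_at += body["maxResults"]   # advance by the real page size
        if start_at >= body["total"]:
            return issues

all_issues = fetch_all_issues()
```

Because the loop reads maxResults and total from each response, it keeps working if Atlassian changes the cap again.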
- duplicates
  - JSWCLOUD-16236 Jira Cloud API search method maxResults error BUGGGGGG!!! (Closed)
- is related to
  - JRACLOUD-67722 JIRA Cloud REST API increase the limit for maxResults (Gathering Interest)
- is caused by
  - ADDON-230
  - CRANE-1177
- relates to
  - HOT-80655
  - DEVHELP-1027
[JRACLOUD-67570] JIRA Cloud REST API /rest/api/latest/search?maxResults=1000 is returning only 100 results.
This is a difficult limitation for our system, which is built on top of Jira systems. The given limitation and reasons are not acceptable. Please consider increasing the limits, or at the very least provide an offset parameter to the API so multiple calls can be a workaround.
I really regret that I migrated from Server to Cloud. Now it's hard to go back. This is not acceptable.
Jira is missing tons of basic cloud features that other cloud software in the market already offers. THIS IS A REALLY BAD APPROACH FROM JIRA
I understand your point, and I agree it would be much more convenient to be able to export everything in one shot. On the other hand, I have to agree with the developers that it is very difficult to determine the average amount of data each issue may contain, and since we have seen cases in the past of people literally DoSsing their own instances by trying to retrieve way too many issues at the same time, they decided to set some kind of 'safe' limit.
For more details on this, see:
- https://ecosystem.atlassian.net/browse/DEVHELP-1027
- https://ecosystem.atlassian.net/browse/ADDON-230
Finally, there is actually an open feature request to have the limit increased; it was created more than a year ago but has only one vote so far:
I have linked the feature request to this ticket so that maybe more people will be aware of it and will vote for it, giving it a chance to get back on the developers' table at some point in time.
Incorrect behavior attributed to an "optional" parameter is just making excuses. A policy that allows you to avoid fixing a product people have paid for is just an excuse with paperwork.
@Dario Bonotto, I guess you understand my point: people are very disappointed with this approach. I implemented a REST API solution for exporting 2.6K issues, and because of this 100-record limitation it takes me about 5 minutes to get all of them into Excel. Using the CSV export option, which also has a limit (about 1000, not 10K), takes much less time, but in order to load the information into Excel I need to build different logic for downloading the file, uploading it, parsing the information, and dealing with the fact that the Excel columns are not consistent among files, because an additional column is created per Sprint (when the issue has been in more than one sprint) and per label (when there is more than one). The CSV file format therefore changes dynamically based on the information downloaded, without changing the columns to export.
Now I need to deal with both the logic for getting more than 1K records using the Excel export (because of the performance) and the logic for using the REST API for small amounts of data.
Having a limit of 1K for both ways would at least be more consistent, as would a CSV file format that does not change dynamically based on the number of labels, components, or Sprints, as it does now; that makes the parsing logic more difficult. For example, a delimited string of elements in a single column would be easier to handle than dynamically added columns.
david.leal As mentioned in the status update at the top of the description, you can use pagination to get all the results. A similar approach can be used to get more than 10000 results when exporting to CSV (or the old export to Excel).
I believe that no doctor will ever give you all your medical records in one single page (unless it is very little data) since there is a limited amount of text you can put on a page before it becomes unreadable. The same concept applies here.
For details see:
How can Atlassian put a restriction like this? It means we don't own our data in Jira. Imagine I go to the doctor to request my medical records and get the same answer: we can give you only the first two pages of your records, but you can visit us every day and we will give you the next two pages until you have all your records. This is insane. We already have a limitation for exporting into an Excel file, and now this low limit on the REST API.
Hi mike.loux,
As mentioned in the documentation, the 'isLast' property is optional. This means that it is not returned by all the endpoints.
E.g.
if I call the search endpoint indeed I don't get it:
{"expand":"schema,names","startAt":0,"maxResults":50,"total":7,"issues":.....
While if I call rest/agile/1.0/board/ID/sprint:
{"maxResults":50,"startAt":0,"isLast":true,"values".......
However, while checking the documentation I noticed that the example for the 'search' endpoint is wrong. I have opened a request to get the documentation fixed:
Cheers,
Dario
Came upon this issue yesterday and decided to implement paging in my query (issue search via jql). It seems to work as the documentation specified, EXCEPT that I am not seeing the isLast field get returned, which contradicts what the documentation says. I can get around this by repeating until startAt + maxResults >= total, but the isLast parameter would have been a lot easier to handle. Anybody else seeing this?
Apparently setting maxResults also does not work on Jira Server (v7.7) ?
@doug indeed, don't try this method within a custom function; the number of queries made in this way and their spacing will need some experimentation, balancing simultaneous and chained consecutive calls.
@Jonathon, I was hopeful for this approach. However, I am getting a "429 - Too many requests" response
Hi dmoore2, we are sorry for the issue seen on your instance. As the endpoint should work, I tried to find related logs, but there are none for GET /rest/api/2/issue/*/worklog requests in the last 7 days. That said, I might have incorrectly identified your instance.
May we ask you to raise a support ticket for that exact problem, i.e. the Google Apps Script being unable to fetch all issue worklogs? In the support ticket, could you please provide any logs from the Google Apps Script? Thanks!
@Doug - w.r.t. Google Apps Script, make use of the new UrlFetchApp.fetchAll(requests) API within Apps Script. You build the request objects and it executes them all in parallel.
Admittedly, I've recently started hitting a 'maximum queries in 5 minutes' cap that I have never seen before.
We are using the API in conjunction with Google Apps Script; we are attempting to import large amounts of worklog records for financial auditing. Due to this new limitation, Google Apps Script times out before the sixty requests are able to complete (6,000 or so records at 100 apiece). There is no option to "use pagination": I am doing so, but I hit a request timeout due to the limitation.
This was a monumentally stupid decision on Atlassian's part. You broke your API's contract with users by introducing a breaking change like this.
jambrose1, please note that you are using the changelog endpoint and not the search one, and what you are reporting here seems related to:
JRACLOUD-59998: Limit changelog to 100 entries, and add /changelog endpoint for full history
- https://confluence.atlassian.com/jirakb/jira-cloud-rest-api-limits-the-number-of-changelogs-returned-938840277.html
Also, if pagination does not work, then this sounds like a bug. Please open a support request for this in support.atlassian.com so that we can further look into this.
Best Regards,
Dario
Atlassian Cloud Support
"/search endpoint implements pagination api which can be used to retrieve all the required data".
This is not true. Some endpoints, such as the issue changelog, return:
{
"self": "https://xtivans.atlassian.net/rest/api/2/issue/XOD-17/changelog?maxResults=100&startAt=0",
"nextPage": "https://xtivans.atlassian.net/rest/api/2/issue/XOD-17/changelog?maxResults=100&startAt=100",
"maxResults": 100,
"startAt": 0,
"total": 127,
"isLast": false,
"values": [ ...
}
The search API does NOT return this pagination metadata, so I have to write two different pagination handlers.
{
"expand": "schema,names",
"startAt": 0,
"maxResults": 10,
"total": 48,
"issues": [...
}
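Given the two response shapes quoted above, one way to avoid writing two pagination handlers is a single termination check that uses isLast when present and falls back to startAt/maxResults/total otherwise. The field names follow the JSON samples in this thread; everything else is illustrative.

```python
# One termination check covering both paged response shapes shown above:
# some endpoints return isLast, while the search endpoint only returns
# startAt / maxResults / total.

def is_last_page(body):
    """True when this page is the final one, whichever metadata exists."""
    if "isLast" in body:
        return body["isLast"]
    return body["startAt"] + body["maxResults"] >= body["total"]

# Shape with isLast (like the changelog resource sample above):
changelog_page = {"maxResults": 100, "startAt": 100,
                  "total": 127, "isLast": True}
# Shape without it (like the search endpoint sample above):
search_page = {"startAt": 0, "maxResults": 10, "total": 48}
```

A paging loop can then call `is_last_page(body)` after each request regardless of which endpoint produced the response.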
This change to the API accomplishes nothing except adding a few steps of extra complexity for those of us using Power BI... Everyone that needs it will simply do as mentioned above...
Anyways...
I noticed the API search returns a total property. I am not well versed in M, but surely it can be used to control the loop and avoid empty records, although I haven't seen it used in the solutions mentioned in previous comments.
@Dwight Thompson It's the same problem with refresh for me. I would think it's because "Query1" doesn't use a "Source =" that is valid for scheduled refresh in the Power BI web service.
I saw this blog post on the subject:
I will work on a fix for my reports, but not this week, I'm afraid. I will post any enhancements to the template I come across.
@anders.fredriksson, thank you so much for your solution. It's working well! However, I am running into one problem. Refreshing the data source works fine when I refresh in Power BI Desktop, but when I attempt to refresh in the Power BI Service, I get the following error: "You can't schedule refresh for this dataset because one or more sources currently don't support refresh." Have you noticed this? I wonder if there's any way around this. Thanks again.
@Saurabh Garg, I recommend you follow this method:
However, you can use the M code below to follow my (now very inefficient) original method (just paste it into the Advanced Editor and modify the Source and Appended Query steps as required). Note that the queries listed in the Appended Query step have the same first 3 steps, with the only difference being the startAt parameter. You can append them all together once they are converted to tables.
let
    Source = Json.Document(Web.Contents("https://yourdomain.atlassian.net/rest/api/2/search?jql=project%20%3D%20EXP&startAt=0&maxResults=100")),
    issues = Source[issues],
    #"Converted to Table" = Table.FromList(issues, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
    #"Appended Query" = Table.Combine({#"Converted to Table", #"JIRA 101-200", #"JIRA 201-300"}),
    #"Expanded Column1" = Table.ExpandRecordColumn(#"Appended Query", "Column1", {"key", "fields"}, {"key", "fields"})
in
    #"Expanded Column1"
@ben.arjomandi77604661, can you give me more detail on your comment below?
How are you going to convert all those queries to tables?
@John, you can edit the "fJiraApiUrl" query. Select Advanced Editor, and then edit the relevant portions of the URL. Ensure you don't overwrite the "&StartAt &" portion in the middle of the URL.
Where can I change the Domain Name and edit the search criteria in PowerBI to apply Anders solution?
@Anders Fredriksson, just wanted to thank you for a fantastic solution for those of us using PowerQuery/PowerBI! Works like a charm!
Feedback from customer:
Is there any possibility to revert this change, or to increase the maximum limit value? In our instance we maintain a few services/tools which fetch the details of every task updated in JIRA over a whole day and then process them accordingly. In this case the total result count is approximately 3000 to 4000 tasks, but with a limit of 100 we need to make nearly 40 REST API requests to get the full result, which may cause performance problems on our JIRA instance, whereas previously we fetched the data in 4 calls. Moreover, we have many services and tools in our instance for internal purposes which are being badly affected by this sudden behavior change in the REST API.
We use PowerBI to retrieve JIRA results and at this point the data is useless and only represents a very small subset of what we are looking for. The JIRA plugin for PowerBI should be removed or users should be made aware of the results so they are not wasting time trying to understand why they are not seeing the entire dataset.
Hello Eve Stankiewicz and developers,
We understand that the REST API return value has been limited to 100. But in every Atlassian account, each and every project will have more than 1000 tickets at some point. From a business point of view, the administrator should be able to get the exact count and data for each project. Moreover, hardly anyone will request more than 1000 tickets unless they are an administrator, so some extra time to load the page would be acceptable.
In our project we have to measure the performance of each project, and for that we need to pull all open and fixed tickets every week across all projects. In a real scenario we have 7400+ tickets in one project, and we need to pull just the tickets created in 2017 into a new page we created. There we could not fetch more than 100 tickets at a time. So we need support for getting more than 1000 tickets; even 1000 at a time would be useful for now, but 100 is far too few to fetch data from JIRA.
https://getsupport.atlassian.com/servicedesk/customer/portal/23/JST-327017
Kindly check our exact requirement and issues from the above link.
Kindly let us know whether there are any updates on this so we can work efficiently.
Thanks,
Arutkavi.
hi eve, thanks for the additional comments.
I see the other open issue and will track that. I'm wondering when this was introduced, though, as I just started seeing this in our processing coincident with this issue... it also seems sporadic (fails for some data sets and not others).
I'll move my comments to the other ticket since we had already implemented paging and while the reduced result set size is inconvenient from a request/time perspective, it's not a real issue.
thanks again.
dg17 and others who experience failed search responses when expanding the changelog - we're working on a solution to this problem separately under JRACLOUD-67458. This is an unrelated problem, and it would be impacting search queries even if there had been no change to the maxResults limit. The fix is underway and we expect to release it soon; however, it also relies on limiting the number of changelogs returned to 100.
As an alternative solution to prevent the amount of data in changelogs from failing /rest/api/2/search requests, you can consider retrieving changelogs for individual issues using the following resource:
/rest/api/2/issue/{issueIdOrKey}/changelog
If you experience any other problems using pagination for /rest/api/2/search requests, I advise opening individual support cases so that we can investigate the root cause and address them properly, independently of the maxResults limit values.
Regards,
Eve Stankiewicz
Jira Cloud Product Management
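Eve's suggestion of retrieving changelogs per issue can be sketched by following the nextPage links in the paged response shape quoted elsewhere in this thread (maxResults / startAt / isLast / values / nextPage). This is an illustrative sketch, not an official client: `fake_get` stands in for the real GET against /rest/api/2/issue/{issueIdOrKey}/changelog, and the shape of the simulated responses is an assumption based on the sample JSON in this thread.

```python
# Hedged sketch: collect a full per-issue changelog by following nextPage
# links until the last page. fake_get simulates the paged endpoint.

HISTORY = [f"entry-{i}" for i in range(127)]   # 127 fake changelog entries
PAGE = 100                                     # assumed server page size

def fake_get(url):
    """Simulate GET on the changelog resource for a given startAt."""
    start = int(url.split("startAt=")[1])
    body = {"startAt": start, "maxResults": PAGE,
            "total": len(HISTORY),
            "isLast": start + PAGE >= len(HISTORY),
            "values": HISTORY[start:start + PAGE]}
    if not body["isLast"]:
        body["nextPage"] = f"/changelog?startAt={start + PAGE}"
    return body

def fetch_changelog(first_url="/changelog?startAt=0"):
    """Follow nextPage links; the link is absent on the final page."""
    values, url = [], first_url
    while url:
        body = fake_get(url)
        values.extend(body["values"])
        url = body.get("nextPage")
    return values

log = fetch_changelog()
```

Following the server-provided link, rather than computing offsets locally, keeps the client working even if the page size changes again.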
The issue when expanding the changelog is caused by a known bug and it is tracked as:
JRACLOUD-67458: rest/api/2/search endpoint returns error 500 when expanding the changelog
dg17, regarding the problems with pagination: if you can either provide some more specific examples or open a support ticket at support.atlassian.com, we can investigate further.
Cheers,
Dario
If I can add weight to the above comments: we have been using our Power BI dashboard to manage a very large project, and it now no longer fully refreshes. This is disappointing and I would hope the decision is reversed.
istankiewicz We need a solution for this, as we are completely blocked by this change.
@Joshua/Polyanna: https://drive.google.com/file/d/0BxZAFHVVfQ6pNkNoUkdIUVBScW8/view?usp=sharing
Please change the JIRA domain to match your environment and edit the search criteria.
It is delivered as is...
@Anders, could you please send your PBI file (or only the scripts, in case of sensitive data)? pollyannaogoncalves@gmail.com Thanks!!
Hi all Jira+PowerBI users!
Please take a look at http://datachix.com/2014/05/22/power-query-functions-some-scenarios/
It's a great example of implementing automatic "paging" through web records.
I personally had to rewrite our Power BI queries to adapt to the new API limitations (max 100 records).
Briefly: you can add a function that loops through a list (table) of startAt values, so one query makes as many 100-row calls as needed. One thing to note is that you have to add a "remove empty" step in case you make more calls than there are records.
I can share a .pbix example if anyone is interested.
Eve, thanks for the update. However, pagination also seems to be problematic after this update. It was working before (returning 500 or 1000 results per page), but since the change it does not work anymore.
Thanks for the update.
I can provide examples of issues related to offset failures when including the changelog, if that helps. It seems to be tied to result size, as I can run the queries fine if I further reduce maxResults. Ex: if I set maxResults to 25, I am able to complete the query, paging through all 1000+ results.
I'm facing the same issue since yesterday morning! Nothing was said about this limit update/bug on Atlassian's page. I'm using Power BI to pull this data, and now I'm getting only 100 rows and pagination is not working anymore!
It is a bug, because you changed the behavior of the REST API in an incompatible way. No notification was given, meaning that people were broken overnight. Furthermore, there was no error thrown to indicate that maxResults values greater than 100 are no longer supported; you had to discover that there was missing data to know that something was wrong. Nor has the documentation been changed to indicate that 100 is now the new upper limit. Obviously the original bug was NOT having a predefined upper limit like most APIs do (or perhaps there was one before and you just lowered it).
Bottom line is, this change should have been handled much better. That said, it no longer matters to me, as we published a fix as soon as it was known to us; it was just as easy to use 100 as it was before to use 250.
Ben, thanks for running the query. Very strange. I get a 500 error unless I remove the change log or modify the maxResults.
Further, I get failures at very specific startAt offsets (1100, 1101 fail)
Hopefully someone from Atlassian chimes in with some thoughts. Happy to provide specific examples.
dg, I was able to obtain those results without issue using various startAt values.
Ben, thanks for your comment.
I think my offset issues may be related to the fact that I am also requesting to expand the change log.
Would you do me a favor and see if you get errors when including the expand=changelog parameter?
If I set maxResults to 50 (which seems to be the new default), I can make requests, but I see failures at different startAt offsets.
dg, my workaround is for PowerQuery/PowerBI.
I tried various startAt values and was able to obtain the next 100 records without issue.
/rest/api/2/search?jql=created%20is%20not%20EMPTY%20ORDER%20BY%20created%20ASC&startAt=1200&maxResults=100
We have been using Google Sheets to pull this data. This unexpected change completely kills our JIRA BI work. The good news is that my management has now approved us to use www.smartsheet.com with JIRA integration. An added bonus is that we can also stop using Portfolio, since Smartsheet can do better scheduling.
Ben, your solution may or may not work depending on the instance. I have found that even paging thru the results does not work for some data sets.
I get deterministic failures at specific offsets for different JIRA instances/projects. One fails at startAt=1100, another at 700, and a third at 5300. This may relate to another change that seems to prevent you from paging through more than 1000 results in the JIRA app itself.
Search: created is not EMPTY ORDER BY created ASC
and try to page past 1000 results. The app hangs on the 951- page, and trying to edit the URL results in a redirect to the 951- page.
This is a ridiculous change with no notice to users of a paid service who are relying on it. Changing the maxResults from 1000 to 100 seems like a lazy solution to the problem raised above.
For those using PowerBI/PowerQuery, the workaround is to create multiple different queries (0-100, 100-200, 200-300, etc) using the startAt parameter, Convert to Table, and then append them all into one query before continuing with the field expansions and further steps.
This is extremely annoying though. For example, I have a report which pulls up to 6000 records (in 6 different queries). Now I have to update this to 60!!! different queries and append them all together. It will take forever to refresh that report...
@dario using the startAt parameter is not an option either, it seems, as pagination breaks at some point when you are paging through issues. This may be related to the max, as it happens with startAt=1100 in an example I am testing. Oddly, I can request results with startAt=1200.
You can see this in the JIRA app itself if you page to a result set that has > 1000 results. ex:
https://domain/issues/?jql=created%20is%20not%20EMPTY%20ORDER%20BY%20created%20ASC&startIndex=950
Paging from here hangs, and if you manually edit the startIndex parameter you are redirected back to startIndex=950.
This issue is preventing any integrations that request result sets > 1000 even if you are paging at the maxResults=100 limit...
Am I to understand that the API max results (previously 1000) has been permanently reduced to a "maxResults" parameter value of 100?
Where is this documented?
1000 to 100 is a huge leap and is breaking many customer implementations - obviously! How can you make such changes without warning anyone and without updating the docs?
Hello,
You can use the startAt parameter in order to get the other results. However, this seems to be a short-term solution, and more details on this will follow.
I am getting in touch with the developers that implemented this change in order to get an official comment in here.
Best Regards,
Dario
Atlassian Cloud Support
Are you going to assist customers who are impacted by this change to resolve their issues and offer workarounds or other solutions? Like for example Power Bi Data pull from Jira?
Thanks ,
Regards,
Lukas
Note that pagination and "start-at" mechanisms are inherently broken... Assume we have items sequentially numbered 1 thru 5000, with pagination set to 25 items.
So the expectation is:
first call: 1 thru 25
second call: 26 thru 50
BUT if the collection changes between the calls, say #3 is deleted after the first call, then "skipping 25" would result in 27 thru 51 being returned and #26 never being seen.
ATOMIC results require some persistent (Atlassian server-side) snapshot to ensure that all of the (potentially hundreds of) sequential requests relate to the data as it existed at a single point in time.
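The drift described above can be demonstrated with a scaled-down toy (items 1 thru 50, pages of 25, pure Python, no real API): deleting one item between page requests makes naive offset pagination skip an element.

```python
# Toy demonstration of pagination drift: items numbered 1..50, pages of
# 25; deleting item 3 between the two page requests causes item 26 to be
# skipped by naive offset-based pagination.

collection = list(range(1, 51))

def page(start_at, size=25):
    """Naive offset pagination over the live (mutable) collection."""
    return collection[start_at:start_at + size]

first = page(0)            # items 1..25
collection.remove(3)       # a concurrent deletion between the two calls
second = page(25)          # now returns 27..50; item 26 is never seen

seen = set(first) | set(second)
missing = [x for x in collection if x not in seen]
```

Here `missing` ends up containing item 26, illustrating why atomic results would need a server-side snapshot.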