Suggestion
Resolution: Unresolved
Our product teams collect and evaluate feedback from a number of different sources. To learn more about how we use customer feedback in the planning process, check out our new feature policy.
Summary
As an administrator, I would like a REST API endpoint that I can call to export users to CSV: the same CSV export you can get from instance user management when you click 'Export all users', as documented on the page below:
Workaround
It is possible to grab the cookies from an active site-admin browser session and use them to authenticate a REST API POST request to the internal endpoint below (replace <CLOUD-ID> with your real cloud instance ID):
https://admin.atlassian.com/gateway/api/adminhub/um/site/<CLOUD-ID>/users/export
As the request body, you can send something like:
```json
{
  "includeApplicationAccess": true,
  "includeGroups": false,
  "includeInactiveUsers": true,
  "selectedGroupIds": []
}
```
Please note that this is just a workaround: the endpoint is not official, not documented, and may change over time. You may find a sample Python script in this link.
There is an updated Python script provided by the DEV team, as the old endpoint API is no longer valid: link here
[ID-8451] REST API endpoint to export all users to CSV
Update for orgs with new user management experience
```python
import argparse
import sys
import time

import browser_cookie3
import requests

cookies = browser_cookie3.chrome()  # use browser_cookie3.load() for an arbitrary browser

parser = argparse.ArgumentParser(description='Download users from a Cloud Org')
parser.add_argument('cloud_org_id', type=str, help='Unique ID of the Cloud Org')
parser.add_argument('output_file', type=str, help='Output file name')
args = parser.parse_args()

# Update this according to your org
url = 'https://admin.atlassian.com/gateway/api/adminhub/um/org/{}/exports/users'.format(args.cloud_org_id)

# Update according to your needs
data = {
    "includeApplicationAccess": "true",
    "includeGroups": "true",
    "includeInactiveUsers": "true",
    "selectedGroupIds": []
}

response = requests.post(url, json=data, cookies=cookies)
if not response.ok:
    print(f"Failed to export csv with code: {response.status_code} {response.text}")
    sys.exit(1)

exportKey = response.headers.get('exportKey')
# Get the export ID (A/B/C -> C)
exportID = exportKey.split('/').pop()

# Wait for the background export task to complete - adjust based on the number of users
time.sleep(60)  # wait long enough here

# Update this if needed; you can find the region by hovering over the
# 'Download CSV file' button in the export email
region = 'us-west-2'

# Hit the download endpoint
downloadUrl = 'https://admin.atlassian.com/gateway/api/adminhub/um/org/{}/exports/{}?region={}'.format(args.cloud_org_id, exportID, region)
response = requests.get(downloadUrl, cookies=cookies)
if response.ok:
    with open(args.output_file, "w") as cloudcsvfile:
        cloudcsvfile.write(response.text)
    print("Successfully exported csv to file")
else:
    print(f"Failed to export csv with code {response.status_code}")
```
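The fixed `time.sleep(60)` in the script above is fragile for large orgs. Below is a hedged sketch that isolates the exportKey parsing and replaces the fixed sleep with a bounded polling loop; `parse_export_id` and `wait_for_export` are illustrative helpers, not part of any official API, and the retry interval is a guess:

```python
import time


def parse_export_id(export_key: str) -> str:
    """Extract the export ID from an exportKey header shaped like 'A/B/C' -> 'C'."""
    return export_key.rstrip("/").split("/")[-1]


def wait_for_export(fetch_export, max_attempts=10, delay=30):
    """Poll instead of sleeping a fixed 60 seconds.

    fetch_export is any callable that returns the CSV text once the download
    endpoint starts serving it, and None (or empty) while the background
    export is still running.
    """
    for _ in range(max_attempts):
        result = fetch_export()
        if result:
            return result
        time.sleep(delay)
    raise TimeoutError("export did not complete within the polling window")
```

In the script above, `fetch_export` could be a small closure around the `requests.get(downloadUrl, ...)` call that returns `response.text` only when `response.ok` is true.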
All comments
The 4326607df174 workaround worked well; however, with the latest browser updates that encrypt cookies, the browser_cookie3 approach has been blocked, so I think it will not be possible for a while.
Has anyone thought of another workaround?
And following d108c8e54c86's comment, are there any updates on when this issue will be addressed and an official API made available? This export is a critical step for project control.
Using this internal endpoint works for us; however, there are instances where the 'region' specified does not match the correct S3 storage, causing the run to fail on the second if/else condition. Could we kindly request the list of regions being used (e.g., 'us-west-2', 'eu-east-1') for this endpoint?
Additionally, do you have any updates on when this will be addressed and if there will be an official API available?
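Until the list of regions is published, one defensive option is to try a handful of candidate regions in order rather than hardcoding one. The region list below is purely a guess, and `fetch` is any caller-supplied function (e.g. a wrapper around the download GET) that returns None when the lookup fails for a given region:

```python
def download_with_region_fallback(fetch, regions=("us-west-2", "us-east-1", "eu-west-1")):
    """Try each candidate AWS region until one serves the export.

    fetch(region) should return the CSV text on success and None when the
    download endpoint rejects that region (the failure mode described above).
    Returns the (region, csv_text) pair that worked.
    """
    for region in regions:
        result = fetch(region)
        if result is not None:
            return region, result
    raise RuntimeError("no candidate region returned the export")
```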
It looks like the API request was changed; in our case this one works:
https://admin.atlassian.com/gateway/api/adminhub/um/org/{}/exports/users
so you need to change /site/ to /org/.
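Based on the two endpoint variants quoted in this thread, the only difference is the scope segment, which can be captured in a small helper. `export_post_url` is an illustrative name, not an official client; both paths are undocumented and may change:

```python
def export_post_url(scope: str, scope_id: str) -> str:
    """Build the (undocumented) export endpoint URL.

    scope is 'site' for the old user management experience and 'org' for the
    new one, per the endpoint variants reported in this thread.
    """
    if scope not in ("site", "org"):
        raise ValueError("scope must be 'site' or 'org'")
    return f"https://admin.atlassian.com/gateway/api/adminhub/um/{scope}/{scope_id}/exports/users"
```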
Praneeth, that's a cool workaround, but it assumes execution on a developer laptop; unfortunately, we need a solution that exports users from a fully automated procedure.
Doesn't work for me :/
Updated Python script, as the old endpoint API will be removed.

```python
import argparse
import sys
import time

import browser_cookie3
import requests

cookies = browser_cookie3.chrome()  # use browser_cookie3.load() for an arbitrary browser

parser = argparse.ArgumentParser(description='Download users from a Cloud site')
parser.add_argument('cloud_site_id', type=str, help='Unique ID of the Cloud Site')
parser.add_argument('output_file', type=str, help='Output file name')
args = parser.parse_args()

# Update this according to your site
url = 'https://admin.atlassian.com/gateway/api/adminhub/um/site/{}/exports/users'.format(args.cloud_site_id)

# Update according to your needs
data = {
    "includeApplicationAccess": "true",
    "includeGroups": "true",
    "includeInactiveUsers": "true",
    "selectedGroupIds": []
}

response = requests.post(url, json=data, cookies=cookies)
if not response.ok:
    print(f"Failed to export csv with code: {response.status_code} {response.text}")
    sys.exit(1)

exportKey = response.headers.get('exportKey')
# Get the export ID (A/B/C -> C)
exportID = exportKey.split('/').pop()

# Wait for the background export task to complete - adjust based on the number of users
time.sleep(60)  # wait long enough here

# Update this if needed; you can find this information in the export email
region = 'us-west-2'

# Hit the download endpoint
downloadUrl = 'https://admin.atlassian.com/gateway/api/adminhub/um/site/{}/exports/{}?region={}'.format(args.cloud_site_id, exportID, region)
response = requests.get(downloadUrl, cookies=cookies)
if response.ok:
    with open(args.output_file, "w") as cloudcsvfile:
        cloudcsvfile.write(response.text)
    print("Successfully exported csv to file")
else:
    print(f"Failed to export csv with code {response.status_code}")
```
Not only do we get no feedback on an ETA for this feature, but they also announced to me that they want to close down the POST /co/authenticate endpoint, which we were using to automate token extraction.
Is there a forecast of when it will be available? It would be very important for our customer.
Is this endpoint working? This would make the work so much easier. Also, I'd like to know where I can find my Cloud ID.
I guess they are too busy to respond. Honestly, I'm not fond of products that are not user-friendly and give no feedback. I saw open BUG reports that had been opened 10+ years ago. As the admin of this zoo in my company, I will vote to switch to an alternative product.
Hi Robert,
When you open the URL directly in a browser, you get the error message, right? If you manually export users from the user management portal, you can see a 'Last seen' column in the exported CSV file. It's really strange that Atlassian doesn't add it to the user management list.
Our code (thanks to Sky Moore's and Veera's code) helps you do this automatically. It simulates a user request and saves the returned values to local disk. That explains why opening the URL directly shows an error: you are missing the cookies and other parts of the request headers and body, so the server cannot handle your request.
Hey Tianshu,
Thanks for the response. I'm sure I have the correct url now but I get a bunch of output like this:
```json
{"cause":null,"stackTrace":[{"classLoaderName":null,"moduleName":null,"moduleVersion":null,"methodName":"of","fileName":"Error.java","lineNumber":55,"className":"com.atlassian.usermanagement.nextgen.model.error.Error","nativeMethod":false},{"classLoaderName":null,"moduleName":null,"moduleVersion":null,"methodName":"lambda$getUserDetails$4","fileName":"UserResource.java","lineNumber":250,"className":"com.atlassian.usermanagement.nextgen.web.resources.enduser.UserResource","nativeMethod":false},
...
```
If I go to https://admin.atlassian.com/gateway/api/adminhub/um/site/<CLOUD-ID>/users/ (no export), I do see my users, but the JSON data doesn't include the 'last seen' fields that I get when I manually export the user data (this is a piece I really need). So am I missing something?
Make sure your request URL is correct, especially the CLOUD-ID:
https://admin.atlassian.com/gateway/api/adminhub/um/site/<CLOUD-ID>/users/export
The response.status_code I get back from the s.post is 403. When I put the URL in my browser, I get:
```json
{"key":"forbidden","context":{"message":"Request is not tenanted properly"}}
```
What is the fix for this?
Hey Guys,
I found a workaround that simulates user login; it can also be configured as a scheduled task in Windows.
You can download chromedriver here: https://chromedriver.chromium.org/downloads
```python
from selenium import webdriver
import time
import requests

usrname = 'xxxxxx'
psd = 'xxxxxxxxxx'
url = 'https://admin.atlassian.com/gateway/api/adminhub/um/site/{}/users/export'  # format with your cloud site ID
data = {
    "includeApplicationAccess": "true",
    "includeGroups": "true",
    "includeInactiveUsers": "true",
    "selectedGroupIds": []
}

# Log in with username and password
driver = webdriver.Chrome(executable_path=r"C:\chrome-driver\94.0.4606.61\chromedriver.exe")  # replace with your chromedriver location
driver.delete_all_cookies()  # clean up prior login sessions
driver.get('https://id.atlassian.com/login')
driver.find_element_by_xpath('//*[@id="username"]').send_keys(usrname)
driver.find_element_by_xpath('//*[@id="login-submit"]').click()
time.sleep(3)
driver.find_element_by_xpath('//*[@id="password"]').send_keys(psd)
driver.find_element_by_xpath('//*[@id="login-submit"]').click()
time.sleep(3)

# Copy the browser session cookies into a requests session
cookies = driver.get_cookies()
s = requests.session()
for cookie in cookies:
    s.cookies.set(cookie['name'], cookie['value'])

response = s.post(url, json=data)
if response.ok:
    with open("cloud-user-export.csv", "w") as cloudcsvfile:
        cloudcsvfile.write(response.text)
    print("Successfully exported csv to file")
else:
    print(f"Failed to export csv with code {response.status_code}")

driver.close()  # close the browser
```
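If you would rather reuse the Selenium session with the standard library instead of requests, the list of cookie dicts that `driver.get_cookies()` returns can be flattened into a single Cookie request header. `cookie_header` is an illustrative helper, not part of Selenium's API:

```python
def cookie_header(selenium_cookies) -> str:
    """Flatten Selenium's get_cookies() output (a list of dicts with 'name'
    and 'value' keys) into one Cookie header value suitable for
    urllib.request or any other HTTP client."""
    return "; ".join(f"{c['name']}={c['value']}" for c in selenium_cookies)
```

The resulting string can then be passed as `headers={"Cookie": cookie_header(driver.get_cookies()), ...}` on the POST.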
Hello guys, has anyone managed to make this work with JavaScript or React?
@Chris Ottinger
How did you schedule the extraction of user info using the READ API?
Hi Veera,
This script is pretty good, but in my environment response.text contains only the header line "id,name,email,User status,Added to org,Org role,Last seen in Jira Software" and no other values.
Do you know how to solve this?
Thanks.
sky.moore,
We need to use the json parameter in the request instead of data.
Here is the sample Python script I have created. It accepts two command-line arguments: the Cloud site ID and the output filename.
```python
import argparse

import browser_cookie3
import requests

cookies = browser_cookie3.chrome()

parser = argparse.ArgumentParser(description='Download users from a Cloud site')
parser.add_argument('cloud_site_id', type=str, help='Unique ID of the Cloud Site')
parser.add_argument('output_file', type=str, help='Output file name')
args = parser.parse_args()

url = 'https://admin.atlassian.com/gateway/api/adminhub/um/site/{}/users/export'.format(args.cloud_site_id)
data = {
    "includeApplicationAccess": "true",
    "includeGroups": "true",
    "includeInactiveUsers": "true",
    "selectedGroupIds": []
}

response = requests.post(url, json=data, cookies=cookies)
if response.ok:
    with open(args.output_file, "w") as cloudcsvfile:
        cloudcsvfile.write(response.text)
    print("Successfully exported csv to file")
else:
    print(f"Failed to export csv with code {response.status_code}")
```
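For context on why `json=` fixes the 415: requests' `data=` parameter form-encodes a dict (Content-Type: application/x-www-form-urlencoded), while `json=` serialises it and sets Content-Type: application/json, which is what this endpoint appears to require. A stdlib sketch of the equivalent request shape; the URL's CLOUD-ID placeholder must be replaced, and nothing here is an official API contract:

```python
import json
import urllib.request

body = {
    "includeApplicationAccess": "true",
    "includeGroups": "true",
    "includeInactiveUsers": "true",
    "selectedGroupIds": [],
}

# json.dumps + an explicit Content-Type header is what requests' json=
# parameter does for you; sending the dict as form data is what triggers
# the 415 Unsupported Media Type.
req = urllib.request.Request(
    "https://admin.atlassian.com/gateway/api/adminhub/um/site/CLOUD-ID/users/export",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
```

(The Request object is only constructed here, not sent; session cookies would still need to be attached before it would authenticate.)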
Getting a 415 Unsupported Media Type with the workaround implemented as follows.
```python
#!/usr/bin/env python3
import requests as req
import browser_cookie3 as bc3

def cloud_export_csv(cloud_id):
    cookies = bc3.firefox()
    url = f"https://admin.atlassian.com/gateway/api/adminhub/um/site/{cloud_id}/users/export"
    data = {
        "includeApplicationAccess": "true",
        "includeGroups": "true",
        "includeInactiveUsers": "true",
        "selectedGroupIds": []
    }
    resp = req.post(url, data=data, cookies=cookies)
    if resp.ok:
        with open("cloud-user-export.csv", "w") as cloudcsvfile:
            cloudcsvfile.write(resp.text)
        print("Successfully exported csv to file")
    else:
        print(f"Failed to export csv with code {resp.status_code}")
```
Lots of applications for this when moving from ground to cloud, among many others. It would certainly be helpful for Atlassian Partners who develop automation.
+1 for a REST API endpoint for pulling user details in bulk. In my case, a json formatted payload would be preferable as I normalise records from different systems based on json fields from the different services.
I had a scheduled nightly extract of user info using the READ API, including the business email address, for audit purposes, e.g. "who had access to what on day X".
I have users from different groups/departments with different email domains. The email address in the audit extract makes it possible to identify which groups/departments have access to what. The business email address also forms the unique key across services.
It would be nice to have this implemented in the REST API.
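For the fully automated, token-based extraction several commenters ask for, the closest official surface is Atlassian's organizations REST API, which pages org users with a cursor-style `links.next` pointer (whether it exposes every CSV column, such as 'Last seen', is not guaranteed). A hedged sketch with the page-following logic factored out so it can be driven by any fetcher, e.g. one that GETs each URL with an org admin API key:

```python
def collect_all_users(fetch_page):
    """Follow cursor-style pagination and accumulate all user records.

    fetch_page(url_or_none) is any callable that returns one page as a dict
    with a 'data' list and an optional 'links': {'next': <url>} pointer --
    the response shape used by the Atlassian org users endpoint. Passing
    None asks the fetcher for the first page.
    """
    users, next_url = [], None
    while True:
        page = fetch_page(next_url)
        users.extend(page.get("data", []))
        next_url = (page.get("links") or {}).get("next")
        if not next_url:
            return users
```

In production the fetcher would call the documented endpoint with a Bearer token, which keeps the workflow off browser cookies entirely.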