Suggestion
Resolution: Unresolved
None
UIS: 2
Support reference count: 7
Problem
It's not possible to analyze the usage of custom fields on large instances with over 1000 custom fields (e.g. 5000+), as the scheduled job CustomFieldUsageRecalculationJob fails when querying the database.
The error that can be seen in the Scheduler:
com.atlassian.jira.issue.fields.usage.CustomFieldUsageRecalculationJob
Type: Runnable
Parameters: {}
Run mode: once per cluster
Schedule: 0 0 3 * * ?
Last run: Fri Nov 18 03:00:00 CET 2022
Last run duration: 8 minutes
Next run: Sat Nov 19 03:00:00 CET 2022
MessageQueryException: Caught PSQLException for select count(distinct CUSTOM_FIELD_VALUE.issue), CUSTOM_FIELD_VALUE.customfield from public.customfieldvalue CUSTOM_FIELD_VALUE where CUSTOM_FIELD_VALUE.customfield in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?,
Suggestion
We'd like the CustomFieldUsageRecalculationJob reworked so that it can analyze the usage of all custom fields, regardless of how many exist, and without the negative system-wide impact that comes with increasing the socket timeout setting.
Workarounds
Option #1:
Remove the socketTimeout, or increase it to a value that covers the time the query takes to run.
- Keep in mind this setting affects all interactions with the database, not only the batch job, so every query will run with the increased (or unlimited) timeout.
- This is covered on this KB: Postgres queries fail with java.net.SocketTimeoutException: Read timed out
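For reference, the socketTimeout is set on the JDBC URL in Jira's dbconfig.xml. A minimal sketch assuming PostgreSQL, with placeholder host, port, and database names (pgJDBC's socketTimeout parameter is in seconds; pick a value that exceeds your longest query):

```xml
<!-- Fragment of <jira-home>/dbconfig.xml; host, port and database name are placeholders. -->
<jdbc-datasource>
  <!-- socketTimeout=600 lets a statement run up to 10 minutes before the driver aborts the read. -->
  <url>jdbc:postgresql://dbserver:5432/jiradb?socketTimeout=600</url>
  <driver-class>org.postgresql.Driver</driver-class>
</jdbc-datasource>
```

Removing the socketTimeout parameter from the URL entirely disables the driver-side read timeout.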
Option #2:
If you'd prefer to keep the socketTimeout, an alternative is to modify the job so that it fetches a smaller set of custom fields at a time.
By default, the job fetches all issue data for up to 1000 custom fields per query. Changing the com.atlassian.jira.issue.fields.usage.query.max.ids parameter to 100 makes the job run more queries, but each one is more likely to complete within the timeout.
This setting can be changed by adding the following line to your jira-config.properties:
com.atlassian.jira.issue.fields.usage.query.max.ids=100
- Restart the nodes one at a time after modifying the file for the change to take effect.
See KB Custom Field usage not available due to CustomFieldUsageRecalculationJob timing out for more info.
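The effect of lowering the max.ids setting can be sketched as follows. This is a hypothetical illustration of the batching idea, not Jira's actual implementation: the custom field IDs are split into batches of at most max.ids, and one parameterized IN (...) query (mirroring the truncated statement from the Scheduler error above) is issued per batch, so each statement is small enough to finish within the socket timeout.

```python
# Illustrative sketch only -- not Jira's actual code.

def chunk(ids, max_ids):
    """Split a list of custom field IDs into batches of at most max_ids."""
    return [ids[i:i + max_ids] for i in range(0, len(ids), max_ids)]

def build_usage_query(batch):
    """Build one parameterized usage-count query for a batch of field IDs."""
    placeholders = ", ".join("?" for _ in batch)
    return (
        "select count(distinct CUSTOM_FIELD_VALUE.issue), "
        "CUSTOM_FIELD_VALUE.customfield "
        "from public.customfieldvalue CUSTOM_FIELD_VALUE "
        f"where CUSTOM_FIELD_VALUE.customfield in ({placeholders})"
    )

field_ids = list(range(5000))             # e.g. an instance with 5000 custom fields
default_batches = chunk(field_ids, 1000)  # default max.ids: 5 large queries
small_batches = chunk(field_ids, 100)     # max.ids=100: 50 short queries
print(len(default_batches), len(small_batches))  # 5 50
```

With 5000 custom fields, the default setting produces 5 queries of 1000 IDs each; max.ids=100 produces 50 queries of 100 IDs each, trading more round trips for shorter individual statements.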
- links to
[JSDSERVER-12302] Analyze the usage of custom fields for large systems with over 1000 custom fields
UIS | Original: 1 | New: 2 |
Support reference count | Original: 6 | New: 7 |
Support reference count | New: 6 |
UIS | New: 1 |
Labels | New: ril |
Remote Link | New: This issue links to "Internal ticket (Web Link)" [ 978295 ] |
Description |
Original:
h2. Problem
It's not possible to [Analyze the usage of custom fields|https://confluence.atlassian.com/adminjiraserver/analyzing-the-usage-of-custom-fields-1047552722.html] on large instances with over 1000 custom fields (e.g. over 5000+) as the scheduled job CustomFieldUsageRecalculationJob errors when querying the database. The error that can be seen in the Scheduler: {code:java} com.atlassian.jira.issue.fields.usage.CustomFieldUsageRecalculationJob Type: Runnable Parameters: {} Run mode: once per cluster Schedule: 0 0 3 * * ? Last run: Fri Nov 18 03:00:00 CET 2022 Last run duration: 8 minutes Next run: Sat Nov 19 03:00:00 CET 2022 MessageQueryException: Caught PSQLException for select count(distinct CUSTOM_FIELD_VALUE.issue), CUSTOM_FIELD_VALUE.customfield from public.customfieldvalue CUSTOM_FIELD_VALUE where CUSTOM_FIELD_VALUE.customfield in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, {code} h2. Suggestion We'd like to have the CustomFieldUsageRecalculationJob reworked to be able to analyze the usage of *all* custom fields regardless of the amount and without impacting the rest of the system negatively in the case of increasing the socket timeout setting. h2. Workarounds *Option #1:* Remove or increase the socketTimeout to a setting that will cover the time it takes for the query to run. * Keep in mind this setting affects _all_ interactions with the database, not only for the batch job meaning all queries will have a unlimited/greater timeout value. *Option #2:* Since there are times you'd prefer to keep the socketTimeout, an alternative is to modify the job so it fetches a smaller set of custom fields at a time. The default setting will fetch all issue data related to a maximum of 1000 custom fields. By changing the {{com.atlassian.jira.issue.fields.usage.query.max.ids}} parameter to 100, the job will run more times but each run is likely to fit into the timeout setting. 
This setting can be changed by adding the following line to your jira-config.properties: com.atlassian.jira.issue.fields.usage.query.max.ids=100 * Restart the nodes one at a time after modifying the file for the change to take effect. See KB [Custom Field usage not available due to CustomFieldUsageRecalculationJob timing out|https://confluence.atlassian.com/jirakb/custom-field-usage-not-available-due-to-customfieldusagerecalculationjob-timing-out-1115149494.html] for more info. |
New:
h2. Problem
It's not possible to [Analyze the usage of custom fields|https://confluence.atlassian.com/adminjiraserver/analyzing-the-usage-of-custom-fields-1047552722.html] on large instances with over 1000 custom fields (e.g. over 5000+) as the scheduled job CustomFieldUsageRecalculationJob errors when querying the database. The error that can be seen in the Scheduler: {code:java} com.atlassian.jira.issue.fields.usage.CustomFieldUsageRecalculationJob Type: Runnable Parameters: {} Run mode: once per cluster Schedule: 0 0 3 * * ? Last run: Fri Nov 18 03:00:00 CET 2022 Last run duration: 8 minutes Next run: Sat Nov 19 03:00:00 CET 2022 MessageQueryException: Caught PSQLException for select count(distinct CUSTOM_FIELD_VALUE.issue), CUSTOM_FIELD_VALUE.customfield from public.customfieldvalue CUSTOM_FIELD_VALUE where CUSTOM_FIELD_VALUE.customfield in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, {code} h2. Suggestion We'd like to have the CustomFieldUsageRecalculationJob reworked to be able to analyze the usage of *all* custom fields regardless of the amount and without impacting the rest of the system negatively in the case of increasing the socket timeout setting. h2. Workarounds *Option #1:* Remove or increase the socketTimeout to a setting that will cover the time it takes for the query to run. * Keep in mind this setting affects _all_ interactions with the database, not only for the batch job meaning all queries will have a unlimited/greater timeout value. * This is covered on this KB: [Postgres queries fail with java.net.SocketTimeoutException: Read timed out|https://confluence.atlassian.com/jirakb/postgres-queries-fail-with-java-net-sockettimeoutexception-read-timed-out-1114812768.html] *Option #2:* Since there are times you'd prefer to keep the socketTimeout, an alternative is to modify the job so it fetches a smaller set of custom fields at a time. The default setting will fetch all issue data related to a maximum of 1000 custom fields. 
By changing the {{com.atlassian.jira.issue.fields.usage.query.max.ids}} parameter to 100, the job will run more times but each run is likely to fit into the timeout setting. This setting can be changed by adding the following line to your jira-config.properties: com.atlassian.jira.issue.fields.usage.query.max.ids=100 * Restart the nodes one at a time after modifying the file for the change to take effect. See KB [Custom Field usage not available due to CustomFieldUsageRecalculationJob timing out|https://confluence.atlassian.com/jirakb/custom-field-usage-not-available-due-to-customfieldusagerecalculationjob-timing-out-1115149494.html] for more info. |