Details
- Bug
- Resolution: Tracked Elsewhere
- High
- 6.4.0.9, 6.4.4
- None
- 6.04
Description
I was asked to investigate performance problems for a customer with a relatively large instance who has provided access to their NewRelic data. They had taken a profile while the LexoRank balancer was running, and I saw this:
Although the customer is on MySQL, we see similarly poor behaviour for this style of query on our Postgres-based test cluster, which has roughly 279K issues and 5 ranking fields, resulting in nearly 1.4M lexorank rows:
cluster=# EXPLAIN UPDATE "AO_60DB71_LEXORANK" SET "LOCK_HASH"=NULL WHERE "LOCK_HASH" = '42';
                                QUERY PLAN
-------------------------------------------------------------------------
 Seq Scan on "AO_60DB71_LEXORANK"  (cost=0.00..31078.31 rows=1 width=52)
   Filter: (("LOCK_HASH")::text = '42'::text)
(2 rows)
A sequential scan with a cost of 31078.31 is not acceptable, especially for such a common operation. This appears to be due to the lack of an @Indexed annotation on LockEntity.getLockHash(). When I create the index manually, the query cost drops as expected:
cluster=# CREATE INDEX index_ao_60db71_lex_crf ON "AO_60DB71_LEXORANK" ("LOCK_HASH");
CREATE INDEX
cluster=# EXPLAIN UPDATE "AO_60DB71_LEXORANK" SET "LOCK_HASH"=NULL WHERE "LOCK_HASH" = '42';
                                            QUERY PLAN
-----------------------------------------------------------------------------------------------------
 Index Scan using index_ao_60db71_lex_crf on "AO_60DB71_LEXORANK"  (cost=0.00..8.42 rows=1 width=52)
   Index Cond: (("LOCK_HASH")::text = '42'::text)
(2 rows)
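For reference, the fix on the entity side would presumably be just the missing annotation. A minimal sketch, assuming the Active Objects entity looks roughly like this — the interface and accessor names come from this ticket, everything else is illustrative and not the actual JIRA Agile source:

```java
import net.java.ao.Entity;
import net.java.ao.schema.Indexed;

// Hypothetical shape of the AO lock entity backing AO_60DB71_LEXORANK.
public interface LockEntity extends Entity {
    // @Indexed tells AO's schema management to generate an index on the
    // LOCK_HASH column, so UPDATE ... WHERE "LOCK_HASH" = ? becomes an
    // index scan instead of a sequential scan.
    @Indexed
    String getLockHash();

    void setLockHash(String lockHash);
}
```

With the annotation in place, AO would create and own the index itself during schema migration, avoiding the questions below about a manually created one.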
This would seem to be a valid workaround; however:
- I'm not sure AO's automatic schema management will ignore the extra index. It seems like it should, but I'm not sufficiently familiar with AO's behaviour here to know.
- Similarly, this will presumably be fixed in JIRA Agile. When that happens, the manually created index could conflict with creating the JIRA Agile-supplied one, and I don't know what the result of that would be.