Upgrading from 2.28 to 2.32 - calculation of program indicators and speed of pivot table output

We’re performing an upgrade of a tracker instance from 2.28 to 2.32. The first thing we’ve noticed so far is that a number of program indicators that previously (incorrectly, it seems) reported 0 are now reporting numbers correctly (or so it appears from preliminary testing). I’ll confirm later at which version along the upgrade path the numbers were corrected.

The second glaring difference is the speed at which the pivot table spits out data. The 2.28 instance is considerably faster than 2.32. This isn’t a high-resource machine; it’s a test VM with 16 GB of RAM. Both versions share the same PostgreSQL instance, each with its own database, and we run each Tomcat instance in a separate Docker container on a different port for side-by-side comparison.
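For anyone wanting to reproduce the side-by-side layout, here's a rough sketch of how the two containers can be run against one shared PostgreSQL server. Note this is a hypothetical illustration, not our exact commands: the image tags, host ports, config paths, and mount point are placeholders you'd need to adapt to your own setup.

```shell
# Two DHIS2 versions, each Tomcat in its own container on its own host port,
# both pointing (via their own dhis.conf) at separate databases on the same
# PostgreSQL server. Paths and image tags below are placeholders.
docker run -d --name dhis2-228 -p 8028:8080 \
  -v /opt/dhis2-228/config:/DHIS2_home dhis2/core:2.28

docker run -d --name dhis2-232 -p 8032:8080 \
  -v /opt/dhis2-232/config:/DHIS2_home dhis2/core:2.32
```

Each mounted config directory holds a dhis.conf whose connection.url names a different database (e.g. dhis228 vs dhis232), which is what keeps the two instances isolated while sharing one PostgreSQL process.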
I did read that there were some changes to caching, though I don’t know whether that’s related. Has anyone else experienced this?

EDIT While updating 10 program indicators in the pivot table, the PostgreSQL container’s CPU usage skyrockets and stays there; in this case all 4 cores are pegged at 100%.
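For anyone chasing a similar CPU spike, one way to see which statements are actually chewing through the cores is to query pg_stat_activity on the shared PostgreSQL server while the pivot table is loading. This is standard PostgreSQL, nothing DHIS2-specific:

```sql
-- Currently running statements, longest-running first
SELECT pid,
       now() - query_start AS runtime,
       state,
       left(query, 120) AS query_start_text
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY runtime DESC;
```

In our case I'd expect this to surface the analytics queries generated for the program indicators, which can then be EXPLAINed individually.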

This doesn’t appear to be happening on the same data set under 2.28.

EDIT In testing, a simple program indicator with an enrollment count, V{enrollment_count}, and a simple filter like
#{zGKdKVHSUEg.XkiIFMNFBAw} != 'T' && #{xNj5Us55o9q.nvOujO6NswT} == 'Traitement Terminé'
is taking 2 minutes to process under 2.32.
The same program indicator outputs in 4 seconds in 2.28!

EDIT OK, doing some data maintenance seems to have alleviated the issue. I’m not sure which step was responsible, but I assume clearing the analytics tables and then re-running analytics did the trick.
Under Maintenance, I cleared the analytics tables (the SQL log showed a ton of errors about missing relationships during this step), updated category option combinations (COC), reloaded apps, cleared the app cache, cleared and recreated SQL views, and updated org unit paths; then I performed an analytics table update. Considering that analytics ran at midnight, I’m a bit lost as to why this was necessary, but I’m looking into running analytics for the two instances at different times, since they both use the same DB server. I’ll post an update if anything changes again.

Another quirk: on restart, I’m getting these messages in the SQL log:
2020-05-07 18:52:41.570 UTC [28] ERROR: relation "reporttable_indicators" does not exist at character 40
2020-05-07 18:52:41.570 UTC [28] STATEMENT: select reporttableid, indicatorid from reporttable_indicators order by reporttableid, sort_order
2020-05-07 18:52:41.570 UTC [28] ERROR: current transaction is aborted, commands ignored until end of transaction block
2020-05-07 18:52:41.570 UTC [28] STATEMENT:
2020-05-07 18:52:41.572 UTC [28] ERROR: relation "reporttable_orgunitgroups" does not exist at character 61
2020-05-07 18:52:41.572 UTC [28] STATEMENT: select distinct d.reporttableid, gsm.orgunitgroupsetid from reporttable_orgunitgroups d inner join orgunitgroupsetmembers gsm on d.orgunitgroupid=gsm.orgunitgroupid
2020-05-07 18:52:41.572 UTC [28] ERROR: current transaction is aborted, commands ignored until end of transaction block

Also, when I run 'Analyze analytics tables', I get these errors in the SQL log:
2020-05-07 19:19:36.186 UTC [80] ERROR: relation "analytics_completeness" does not exist
2020-05-07 19:19:36.186 UTC [80] STATEMENT: analyze analytics_completeness
2020-05-07 19:19:45.063 UTC [80] ERROR: relation "analytics_validationresult" does not exist
2020-05-07 19:19:45.063 UTC [80] STATEMENT: analyze analytics_validationresult

I’ll look into those in the morning - if anyone has any suggestions, I’m all ears, thanks!