I have upgraded my 2.28 instance to version 2.29, build revision 7c81fba, build date 2018-06-28 06:46. I ran the upgrade scripts beforehand as instructed in the guide. My server runs Ubuntu 14.04 and PostgreSQL 9.5.5.
However, since then analytics is not running, and I am receiving a number of system emails with subjects such as "Job ‘Credentials expiry alert’ failed", "Job ‘inMemoryAnalyticsJob’ failed", and "Analytics table process failed".
Attached are some screenshots of the messages. I tried to clear the analytics tables and run analytics again, but it is not improving.
The online 2.29 demo also shows the same message, but with different contents.
Please share hints on how I can deal with this, as my system has to send out notification messages when thresholds are reached.
I think you need to shut down the instance and remove your current exploded WAR folder, e.g. ROOT or dhis, depending on the name of your WAR file.
Then switch to your Postgres database and manually delete all the analytics tables. Restart the instance and monitor the log at startup and while running analytics again.
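The "manually delete all those analytics tables" step might look like this in psql. This is a sketch only: take a database backup first and review the generated statements before running them. It excludes analyticsperiodboundary, which is a regular metadata table rather than a generated analytics table.

```sql
-- Emit a reviewable DROP statement for each generated analytics table;
-- paste the output back into psql once you have checked it.
SELECT 'DROP TABLE IF EXISTS ' || quote_ident(tablename) || ' CASCADE;'
FROM pg_tables
WHERE schemaname = 'public'
  AND tablename LIKE 'analytics%'
  AND tablename <> 'analyticsperiodboundary'
ORDER BY tablename;
```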
Hi Thomas and Dan, thanks for sharing your experience.
I have dropped the analytics and analytics_temp tables and deployed the most recent WAR file. After that, analytics ran successfully once, but afterwards I started receiving the same message as in the attachment.
I suspect the jobs that are created by default in the new "Scheduler" feature. Can anyone explain the reason for having these? And please share more hints on how to deal with this problem.
On Tue, 3 Jul 2018 at 16:57, Kamugunga Adolphe kaadol@gmail.com wrote:
I think they are still in completedatasetregistration. Please run the query below so that we know the quantity.
We will have to delete them there and in the analytics tables again.
dhis=# SELECT p.startdate, count(dv.datasetid) AS recno FROM completedatasetregistration dv INNER JOIN period p ON p.periodid = dv.periodid GROUP BY p.startdate ORDER BY p.startdate;
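If the counts show registrations in periods that should not exist, the clean-up could be sketched like this. The cut-off date is a hypothetical example; run it inside a transaction so it can be rolled back after inspecting the affected row count.

```sql
BEGIN;
-- Example only: remove completions whose period starts before 2004.
-- Adjust the cut-off to whatever the count query above actually shows.
DELETE FROM completedatasetregistration dv
USING period p
WHERE p.periodid = dv.periodid
  AND p.startdate < DATE '2004-01-01';
-- Check the reported row count, then run COMMIT; (or ROLLBACK;).
```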
That is true, but the errors are still being generated for the various analytics tables. That is why I wanted him to check those tables and, if there is data in them, delete it.
Is it possible that the Reports app runs analytics differently from what the Scheduler does, not dropping/creating the same tables? We added a JIRA issue describing this: the Scheduler cannot complete analytics generation once Reports has created analytics beforehand.
A workaround for now is to clear analytics via Data Administration > Maintenance and then use the Scheduler only to run it, either manually or on a schedule.
Hi, we tried clearing analytics and it didn’t work for us; we are now trying to free some disk space to see whether analytics works then. The attached screenshot shows that our PostgreSQL server was at 95.3% disk usage when we logged in.
As you said, we have cron jobs on the server machine automating analytics. However, after upgrading to 2.29 we noticed there is also a scheduled job running analytics every hour, which might overwhelm the database. Can anyone explain whether these scheduled system jobs are defaults, and whether they need to be removed on instances whose crontabs already contain similar jobs?
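A quick way to confirm the disk-space suspicion from the command line (the Postgres data directory path and the database name are assumptions; adjust them to your layout):

```shell
# Free space on the root partition and the Postgres data directory.
df -h / /var/lib/postgresql 2>/dev/null || df -h /
# Size of the DHIS2 database itself (database name "dhis2" is an
# assumption; substitute your own):
# psql -U dhis -d dhis2 -c "SELECT pg_size_pretty(pg_database_size('dhis2'));"
```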
On Wed, 4 Jul 2018 at 14:43, Kamugunga Adolphe kaadol@gmail.com wrote:
Disk use above 95% shows that you are running out of disk space, and analytics requires a lot of temporary space while running; Jason is perfectly right. This is the prime suspect: extend the space and the problem is most likely to be resolved.
David also pointed out that additional processes like the Reports app use analytics in a different way, which can also create problems. Did you check that? Before running analytics hourly, you should check how long a full analytics run takes to complete.
The system schedule and custom cron jobs might also conflict; you can stop all custom cron jobs and check.
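Checking for the suspected overlap between the built-in Scheduler and server-side cron could start with something like this (the locations are common Ubuntu defaults):

```shell
# List cron entries for the current user that might trigger analytics
# alongside the Scheduler; fall back to a note when none are found.
crontab -l 2>/dev/null | grep -i analytics || echo "no analytics cron entries for this user"
# Also check the system-wide cron locations:
grep -ri analytics /etc/cron.d /etc/crontab 2>/dev/null || true
```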
I tried your workaround, but analytics ran successfully only once and compiled aggregate values for my weekly reports only; the dashboards with case-based data (program indicators) have no values. The strange thing is that in the Event Reports app I can only list/display events reported before the upgrade, and when I change the period to "This month" no values are displayed.
I ran analytics again, but it failed with a similar message. If someone has another way to fix this, please let me know! I am completely stuck.
The instance is updated to today’s build:
Version: 2.29
Build revision: 153207c
Build date: 2018-07-31 06:50
Jasper reports version: 6.3.1
Below is the error message content:
Job ‘inMemoryAnalyticsJob’ failed
System title:Integrated Disease Surveillance
Base URL:
Time: 2018-07-31T16:04:01.496+02:00
Message: StatementCallback; uncategorized SQLException for SQL [drop table analytics]; SQL state [2BP01]; error code [0]; ERROR: cannot drop table analytics because other objects depend on it
Detail: table analytics_2004 depends on table analytics
table analytics_2005 depends on table analytics
table analytics_2007 depends on table analytics
table analytics_2008 depends on table analytics
table analytics_2009 depends on table analytics
table analytics_2010 depends on table analytics
table analytics_2011 depends on table analytics
table analytics_2012 depends on table analytics
table analytics_2013 depends on table analytics
table analytics_2014 depends on table analytics
table analytics_2015 depends on table analytics
Hint: Use DROP … CASCADE to drop the dependent objects too.; nested exception is org.postgresql.util.PSQLException: ERROR: cannot drop table analytics because other objects depend on it
Detail: table analytics_2004 depends on table analytics
table analytics_2005 depends on table analytics
table analytics_2007 depends on table analytics
table analytics_2008 depends on table analytics
table analytics_2009 depends on table analytics
table analytics_2010 depends on table analytics
table analytics_2011 depends on table analytics
table analytics_2012 depends on table analytics
table analytics_2013 depends on table analytics
table analytics_2014 depends on table analytics
table analytics_2015 depends on table analytics
Hint: Use DROP … CASCADE to drop the dependent objects too.
Cause: org.postgresql.util.PSQLException: ERROR: cannot drop table analytics because other objects depend on it
Detail: table analytics_2004 depends on table analytics
table analytics_2005 depends on table analytics
table analytics_2007 depends on table analytics
table analytics_2008 depends on table analytics
table analytics_2009 depends on table analytics
table analytics_2010 depends on table analytics
table analytics_2011 depends on table analytics
table analytics_2012 depends on table analytics
table analytics_2013 depends on table analytics
table analytics_2014 depends on table analytics
table analytics_2015 depends on table analytics
Hint: Use DROP … CASCADE to drop the dependent objects too.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2422)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2167)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266)
at com.mchange.v2.c3p0.impl.NewProxyStatement.execute(NewProxyStatement.java:75)
at org.springframework.jdbc.core.JdbcTemplate$1ExecuteStatementCallback.doInStatement(JdbcTemplate.java:436)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:408)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:445)
at org.hisp.dhis.analytics.table.AbstractJdbcTableManager.executeSilently(AbstractJdbcTableManager.java:341)
at org.hisp.dhis.analytics.table.AbstractJdbcTableManager.swapTable(AbstractJdbcTableManager.java:508)
at org.hisp.dhis.analytics.table.AbstractJdbcTableManager.swapTable(AbstractJdbcTableManager.java:178)
at sun.reflect.GeneratedMethodAccessor2130.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:333)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
at com.sun.proxy.$Proxy142.swapTable(Unknown Source)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableService.lambda$swapTables$5(DefaultAnalyticsTableService.java:373)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableService.swapTables(DefaultAnalyticsTableService.java:373)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableService.update(DefaultAnalyticsTableService.java:171)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableGenerator.generateTables(DefaultAnalyticsTableGenerator.java:115)
at org.hisp.dhis.analytics.table.scheduling.AnalyticsTableJob.execute(AnalyticsTableJob.java:70)
at org.hisp.dhis.scheduling.DefaultJobInstance.executeJob(DefaultJobInstance.java:145)
at org.hisp.dhis.scheduling.DefaultJobInstance.execute(DefaultJobInstance.java:59)
at org.hisp.dhis.scheduling.DefaultSchedulingManager.lambda$internalExecuteJobConfiguration$2(DefaultSchedulingManager.java:237)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
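The hint embedded in the error points at a way out: the stale parent table has to go together with its dependent year tables, which a plain `drop table analytics` refuses to do. A manual sketch (back up first; this removes the aggregate analytics data until the next successful run):

```sql
-- Follow the Postgres hint from the log: CASCADE drops analytics
-- together with the dependent analytics_2004 .. analytics_2015 tables.
DROP TABLE IF EXISTS analytics CASCADE;
-- Then re-run the analytics job from the Scheduler.
```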
Try to drop all the analytics tables manually (EXCEPT analyticsperiodboundary!), and preferably also the resource tables whose names start with an underscore. Then try again.
Regards
Calle
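Calle’s advice could be turned into reviewable statements like this (same caveats as before: run in psql as the database owner, back up first, and check the output before executing it):

```sql
-- Generate DROP statements for all analytics tables except
-- analyticsperiodboundary, plus the resource tables whose names start
-- with an underscore (escaped, since _ is a wildcard in LIKE).
SELECT 'DROP TABLE IF EXISTS ' || quote_ident(tablename) || ' CASCADE;'
FROM pg_tables
WHERE schemaname = 'public'
  AND ( (tablename LIKE 'analytics%' AND tablename <> 'analyticsperiodboundary')
        OR tablename LIKE '\_%' )
ORDER BY tablename;
```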
On 31 July 2018 at 18:55, Kamugunga Adolphe kaadol@gmail.com wrote:
Dear Dave,
I tried your workaround but analytics run once successfully and compile aggregate values of my weekly reports only, the dashboard with cases based data(program indicators) don’t have values. The strange situation is that from Event Reports app, i list/display events reported before upgrade only and when i change the period to “This month” no values displayed.
I run again the analytics but it failed with similar message. If someone has another alternative to fix this plz let me know! I am completely stack?
The instance is updated with the today build version
Version:
2.29
Build revision:
153207c
Build date:
2018-07-31 06:50
Jasper reports version:
6.3.1
Down here is the error message content:
Job ‘inMemoryAnalyticsJob’ failed
System title:Integrated Disease Surveillance
Base URL:
Time: 2018-07-31T16:04:01.496+02:00
Message: StatementCallback; uncategorized SQLException for SQL [drop table analytics]; SQL state [2BP01]; error code [0]; ERROR: cannot drop table analytics because other objects depend on it
Detail: table analytics_2004 depends on table analytics
table analytics_2005 depends on table analytics
table analytics_2007 depends on table analytics
table analytics_2008 depends on table analytics
table analytics_2009 depends on table analytics
table analytics_2010 depends on table analytics
table analytics_2011 depends on table analytics
table analytics_2012 depends on table analytics
table analytics_2013 depends on table analytics
table analytics_2014 depends on table analytics
table analytics_2015 depends on table analytics
Hint: Use DROP … CASCADE to drop the dependent objects too.; nested exception is org.postgresql.util.PSQLException: ERROR: cannot drop table analytics because other objects depend on it
Detail: table analytics_2004 depends on table analytics
table analytics_2005 depends on table analytics
table analytics_2007 depends on table analytics
table analytics_2008 depends on table analytics
table analytics_2009 depends on table analytics
table analytics_2010 depends on table analytics
table analytics_2011 depends on table analytics
table analytics_2012 depends on table analytics
table analytics_2013 depends on table analytics
table analytics_2014 depends on table analytics
table analytics_2015 depends on table analytics
Hint: Use DROP … CASCADE to drop the dependent objects too.
Cause: org.postgresql.util.PSQLException: ERROR: cannot drop table analytics because other objects depend on it
Detail: table analytics_2004 depends on table analytics
table analytics_2005 depends on table analytics
table analytics_2007 depends on table analytics
table analytics_2008 depends on table analytics
table analytics_2009 depends on table analytics
table analytics_2010 depends on table analytics
table analytics_2011 depends on table analytics
table analytics_2012 depends on table analytics
table analytics_2013 depends on table analytics
table analytics_2014 depends on table analytics
table analytics_2015 depends on table analytics
Hint: Use DROP … CASCADE to drop the dependent objects too.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2422)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2167)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266)
at com.mchange.v2.c3p0.impl.NewProxyStatement.execute(NewProxyStatement.java:75)
at org.springframework.jdbc.core.JdbcTemplate$1ExecuteStatementCallback.doInStatement(JdbcTemplate.java:436)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:408)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:445)
at org.hisp.dhis.analytics.table.AbstractJdbcTableManager.executeSilently(AbstractJdbcTableManager.java:341)
at org.hisp.dhis.analytics.table.AbstractJdbcTableManager.swapTable(AbstractJdbcTableManager.java:508)
at org.hisp.dhis.analytics.table.AbstractJdbcTableManager.swapTable(AbstractJdbcTableManager.java:178)
at sun.reflect.GeneratedMethodAccessor2130.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:333)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
at com.sun.proxy.$Proxy142.swapTable(Unknown Source)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableService.lambda$swapTables$5(DefaultAnalyticsTableService.java:373)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableService.swapTables(DefaultAnalyticsTableService.java:373)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableService.update(DefaultAnalyticsTableService.java:171)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableGenerator.generateTables(DefaultAnalyticsTableGenerator.java:115)
at org.hisp.dhis.analytics.table.scheduling.AnalyticsTableJob.execute(AnalyticsTableJob.java:70)
at org.hisp.dhis.scheduling.DefaultJobInstance.executeJob(DefaultJobInstance.java:145)
at org.hisp.dhis.scheduling.DefaultJobInstance.execute(DefaultJobInstance.java:59)
at org.hisp.dhis.scheduling.DefaultSchedulingManager.lambda$internalExecuteJobConfiguration$2(DefaultSchedulingManager.java:237)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Is it possible that running analytics from the Reports app does something different from what the Scheduler does, i.e. not dropping/creating the same tables? We added a JIRA issue describing this: the Scheduler cannot complete analytics generation once Reports has created analytics beforehand.
A workaround for now is to clear analytics via Data Admin > Maintenance, then use only the Scheduler to run analytics, either manually or on a schedule.
That is true, but the errors still point into the various analytics tables. That is why I wanted him to check those tables and, if there is data in them, delete it.
I think the records are still in completedatasetregistration. Please run the query below so that we can see how many there are.
We will have to delete them there and in the analytics tables again.
SELECT p.startdate, count(dv.datasetid) AS recno
FROM completedatasetregistration dv
INNER JOIN period p ON p.periodid = dv.periodid
GROUP BY p.startdate
ORDER BY p.startdate;
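If that query shows stale registrations, a deletion along these lines would clear them for a given period. This is only a hedged sketch: the start date below is a placeholder for whatever dates the query above flags, and it is worth backing up the table first.

```sql
-- Hypothetical cleanup: remove completeness registrations for one
-- period flagged by the query above ('2015-01-01' is a placeholder).
DELETE FROM completedatasetregistration
WHERE periodid IN (
    SELECT periodid FROM period WHERE startdate = '2015-01-01'
);
```

Running it inside a transaction (BEGIN; ... then COMMIT or ROLLBACK) lets you check the row count before making the deletion permanent.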
On Wed, 4 Jul 2018 at 14:43, Kamugunga Adolphe kaadol@gmail.com wrote:
Hi Thomas and Dan, thanks for sharing your experience.
I have dropped the analytics and analytics_temp tables and deployed the most recent war file. Analytics then ran successfully, but afterwards I started receiving the same message as in the attachment.
I suspect the jobs that are created by default in the new Scheduler feature. Can anyone explain why these exist, and share more hints on how to deal with this problem?
I think you need to shut down the instance and remove your current war file folder (e.g. ROOT or dhis, depending on the name of your war file).
Switch to your Postgres database and manually delete all those analytics tables. Then restart the instance and monitor the log at startup and while running analytics again.
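For anyone following this thread, here is a sketch of what that manual cleanup could look like in psql. This is hedged: the exact set of analytics tables varies per instance and per the years present in your data, so list them first and adjust accordingly.

```sql
-- List the analytics tables present in this instance:
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
  AND table_name LIKE 'analytics%';

-- DROP ... CASCADE also removes the year partitions
-- (analytics_2004, analytics_2005, ...) that depend on the master table:
DROP TABLE IF EXISTS analytics CASCADE;
DROP TABLE IF EXISTS analytics_temp CASCADE;
```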
On Tue, 3 Jul 2018 at 16:57, Kamugunga Adolphe kaadol@gmail.com wrote:
Hi,
I have upgraded my 2.28 instance to version 2.29 (build revision 7c81fba, build date 2018-06-28 06:46). I ran the upgrade scripts beforehand as instructed in the guide. My server runs Ubuntu 14.04 and PostgreSQL 9.5.5.
However, since then analytics is not running, and I am receiving a number of system emails with subjects like "Job ‘Credentials expiry alert’ failed", "Job ‘inMemoryAnalyticsJob’ failed" and "Analytics table process failed".
Attached are some screenshots of the message. I tried to clear the analytics tables and run analytics again, but it is not improving.
The online 2.29 demo also shows the same message, but with different contents.
Please share hints on how I can deal with this, as my system has to send out notification messages when thresholds are reached.
By strange coincidence, Lamin (from the Gambia) was seeing the same error yesterday, though the sequence is reversed from what David has described: the nightly scheduled run was completing successfully, but the manual run was failing with this error about dependent analytics tables.
@Lamin, you did something regarding period selection and then it seemed to work. Can you explain to the list?
Regards
Bob
On 31 July 2018 at 17:55, Kamugunga Adolphe kaadol@gmail.com wrote:
Dear Dave,
I tried your workaround, but analytics ran successfully only once and compiled aggregate values for my weekly reports only; the dashboards with case-based data (program indicators) have no values. The strange thing is that from the Event Reports app I can only list/display events reported before the upgrade, and when I change the period to “This month” no values are displayed.
I ran analytics again, but it failed with a similar message. If someone has another way to fix this, please let me know; I am completely stuck.
The instance is updated with today's build:
Version: 2.29
Build revision: 153207c
Build date: 2018-07-31 06:50
Jasper reports version: 6.3.1
Below is the error message content:
Job ‘inMemoryAnalyticsJob’ failed
System title:Integrated Disease Surveillance
Base URL:
Time: 2018-07-31T16:04:01.496+02:00
Message: StatementCallback; uncategorized SQLException for SQL [drop table analytics]; SQL state [2BP01]; error code [0]; ERROR: cannot drop table analytics because other objects depend on it
Detail: table analytics_2004 depends on table analytics
table analytics_2005 depends on table analytics
table analytics_2007 depends on table analytics
table analytics_2008 depends on table analytics
table analytics_2009 depends on table analytics
table analytics_2010 depends on table analytics
table analytics_2011 depends on table analytics
table analytics_2012 depends on table analytics
table analytics_2013 depends on table analytics
table analytics_2014 depends on table analytics
table analytics_2015 depends on table analytics
Hint: Use DROP … CASCADE to drop the dependent objects too.; nested exception is org.postgresql.util.PSQLException: ERROR: cannot drop table analytics because other objects depend on it
Detail: table analytics_2004 depends on table analytics
table analytics_2005 depends on table analytics
table analytics_2007 depends on table analytics
table analytics_2008 depends on table analytics
table analytics_2009 depends on table analytics
table analytics_2010 depends on table analytics
table analytics_2011 depends on table analytics
table analytics_2012 depends on table analytics
table analytics_2013 depends on table analytics
table analytics_2014 depends on table analytics
table analytics_2015 depends on table analytics
Hint: Use DROP … CASCADE to drop the dependent objects too.
Cause: org.postgresql.util.PSQLException: ERROR: cannot drop table analytics because other objects depend on it
Detail: table analytics_2004 depends on table analytics
table analytics_2005 depends on table analytics
table analytics_2007 depends on table analytics
table analytics_2008 depends on table analytics
table analytics_2009 depends on table analytics
table analytics_2010 depends on table analytics
table analytics_2011 depends on table analytics
table analytics_2012 depends on table analytics
table analytics_2013 depends on table analytics
table analytics_2014 depends on table analytics
table analytics_2015 depends on table analytics
Hint: Use DROP … CASCADE to drop the dependent objects too.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2422)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2167)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266)
at com.mchange.v2.c3p0.impl.NewProxyStatement.execute(NewProxyStatement.java:75)
at org.springframework.jdbc.core.JdbcTemplate$1ExecuteStatementCallback.doInStatement(JdbcTemplate.java:436)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:408)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:445)
at org.hisp.dhis.analytics.table.AbstractJdbcTableManager.executeSilently(AbstractJdbcTableManager.java:341)
at org.hisp.dhis.analytics.table.AbstractJdbcTableManager.swapTable(AbstractJdbcTableManager.java:508)
at org.hisp.dhis.analytics.table.AbstractJdbcTableManager.swapTable(AbstractJdbcTableManager.java:178)
at sun.reflect.GeneratedMethodAccessor2130.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:333)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
at com.sun.proxy.$Proxy142.swapTable(Unknown Source)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableService.lambda$swapTables$5(DefaultAnalyticsTableService.java:373)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableService.swapTables(DefaultAnalyticsTableService.java:373)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableService.update(DefaultAnalyticsTableService.java:171)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableGenerator.generateTables(DefaultAnalyticsTableGenerator.java:115)
at org.hisp.dhis.analytics.table.scheduling.AnalyticsTableJob.execute(AnalyticsTableJob.java:70)
at org.hisp.dhis.scheduling.DefaultJobInstance.executeJob(DefaultJobInstance.java:145)
at org.hisp.dhis.scheduling.DefaultJobInstance.execute(DefaultJobInstance.java:59)
at org.hisp.dhis.scheduling.DefaultSchedulingManager.lambda$internalExecuteJobConfiguration$2(DefaultSchedulingManager.java:237)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
We recently made the upgrade from 2.26 to 2.29 (build revision 9c273a5). The upgrade was done by the BAO team.
The problem did not appear immediately after the upgrade, but when I tried to correct a wrong event date: one of the org units had entered a date in the year 2015 instead of 2016.
That wrong event was the only event in 2015.
I have temporarily worked around the problem by restoring the old (wrong) date of the event and excluding the year 2015 from the event report statistics, but the problem remains.
Please let me know when the problem is solved, so that we can upgrade our instance and correct the wrong date.
Regards,
Antonia
This is the initial part of the message; I attach the entire message received.
We have extended the disk and increased the RAM, so I think it is no longer a disk or memory space issue.
Postgres is located on a separate server and hosts databases for other instances on 2.28, for which analytics run perfectly.
I have removed the existing war file and redeployed a fresh one as suggested. I am still exploring whether other jobs are interfering. I have stopped the Scheduler analytics job, but there are still other built-in jobs in 2.29 (resource table, credentials expiry) that remain active, as they are set on the demo. Do I have to stop these as well?
Thanks, Calle, for the suggestions. I tried this option before, but did not drop the resource tables starting with an underscore. Analytics runs and reports that it is successful, but from Event Reports I cannot even list the facility cases reported! Has anyone tried disabling all jobs in the Scheduler, and can you share the effect of that?
Try to drop all analytics tables manually (EXCEPT analyticsperiodboundary!), and preferably also the resource tables starting with an underscore. Then try again.
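A sketch of how those drops could be generated from psql, assuming the tables live in the default public schema. Review the generated statements before running them:

```sql
-- Generate DROP statements for the analytics output tables and the
-- underscore-prefixed resource tables, skipping analyticsperiodboundary.
-- CASCADE also removes the yearly partitions (analytics_2004, ...) that
-- otherwise block a plain "drop table analytics", as in the error below.
SELECT 'DROP TABLE IF EXISTS ' || quote_ident(tablename) || ' CASCADE;'
FROM pg_tables
WHERE schemaname = 'public'
  AND (tablename LIKE 'analytics%' OR tablename LIKE '\_%')
  AND tablename <> 'analyticsperiodboundary'
ORDER BY tablename;
```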
On 31 July 2018 at 18:55, Kamugunga Adolphe kaadol@gmail.com wrote:
Dear Dave,
I tried your workaround, but analytics ran successfully only once and compiled the aggregate values of my weekly reports only; the dashboards with case-based data (program indicators) have no values. The strange thing is that in the Event Reports app I can only list/display events reported before the upgrade, and when I change the period to “This month” no values are displayed.
I ran analytics again, but it failed with a similar message. If someone has another way to fix this, please let me know; I am completely stuck.
The instance is updated to today’s build version:
Version: 2.29
Build revision: 153207c
Build date: 2018-07-31 06:50
Jasper reports version: 6.3.1
Here is the error message content:
Job ‘inMemoryAnalyticsJob’ failed
System title:Integrated Disease Surveillance
Base URL:
Time: 2018-07-31T16:04:01.496+02:00
Message: StatementCallback; uncategorized SQLException for SQL [drop table analytics]; SQL state [2BP01]; error code [0]; ERROR: cannot drop table analytics because other objects depend on it
Detail: table analytics_2004 depends on table analytics
table analytics_2005 depends on table analytics
table analytics_2007 depends on table analytics
table analytics_2008 depends on table analytics
table analytics_2009 depends on table analytics
table analytics_2010 depends on table analytics
table analytics_2011 depends on table analytics
table analytics_2012 depends on table analytics
table analytics_2013 depends on table analytics
table analytics_2014 depends on table analytics
table analytics_2015 depends on table analytics
Hint: Use DROP … CASCADE to drop the dependent objects too.; nested exception is org.postgresql.util.PSQLException: ERROR: cannot drop table analytics because other objects depend on it
Detail: table analytics_2004 depends on table analytics
table analytics_2005 depends on table analytics
table analytics_2007 depends on table analytics
table analytics_2008 depends on table analytics
table analytics_2009 depends on table analytics
table analytics_2010 depends on table analytics
table analytics_2011 depends on table analytics
table analytics_2012 depends on table analytics
table analytics_2013 depends on table analytics
table analytics_2014 depends on table analytics
table analytics_2015 depends on table analytics
Hint: Use DROP … CASCADE to drop the dependent objects too.
Cause: org.postgresql.util.PSQLException: ERROR: cannot drop table analytics because other objects depend on it
Detail: table analytics_2004 depends on table analytics
table analytics_2005 depends on table analytics
table analytics_2007 depends on table analytics
table analytics_2008 depends on table analytics
table analytics_2009 depends on table analytics
table analytics_2010 depends on table analytics
table analytics_2011 depends on table analytics
table analytics_2012 depends on table analytics
table analytics_2013 depends on table analytics
table analytics_2014 depends on table analytics
table analytics_2015 depends on table analytics
Hint: Use DROP … CASCADE to drop the dependent objects too.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2422)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2167)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266)
at com.mchange.v2.c3p0.impl.NewProxyStatement.execute(NewProxyStatement.java:75)
at org.springframework.jdbc.core.JdbcTemplate$1ExecuteStatementCallback.doInStatement(JdbcTemplate.java:436)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:408)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:445)
at org.hisp.dhis.analytics.table.AbstractJdbcTableManager.executeSilently(AbstractJdbcTableManager.java:341)
at org.hisp.dhis.analytics.table.AbstractJdbcTableManager.swapTable(AbstractJdbcTableManager.java:508)
at org.hisp.dhis.analytics.table.AbstractJdbcTableManager.swapTable(AbstractJdbcTableManager.java:178)
at sun.reflect.GeneratedMethodAccessor2130.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:333)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
at com.sun.proxy.$Proxy142.swapTable(Unknown Source)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableService.lambda$swapTables$5(DefaultAnalyticsTableService.java:373)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableService.swapTables(DefaultAnalyticsTableService.java:373)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableService.update(DefaultAnalyticsTableService.java:171)
at org.hisp.dhis.analytics.table.DefaultAnalyticsTableGenerator.generateTables(DefaultAnalyticsTableGenerator.java:115)
at org.hisp.dhis.analytics.table.scheduling.AnalyticsTableJob.execute(AnalyticsTableJob.java:70)
at org.hisp.dhis.scheduling.DefaultJobInstance.executeJob(DefaultJobInstance.java:145)
at org.hisp.dhis.scheduling.DefaultJobInstance.execute(DefaultJobInstance.java:59)
at org.hisp.dhis.scheduling.DefaultSchedulingManager.lambda$internalExecuteJobConfiguration$2(DefaultSchedulingManager.java:237)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
So after deleting both the analytics tables and the resource tables (which might have become corrupted previously, e.g. due to lack of hard disk space), you are now running analytics successfully, with no errors, BUT you are now not getting any output from Event Reports?
In that case, take a good look at the enrollment analytics table(s) and the event analytics table(s), and verify that the facility cases you are trying to display in Event Reports:
(1) exist in the relevant enrollment and/or event tables, and
(2) have valid values in all the typical lookup fields in those tables (e.g. compare with the same table in another well-functioning instance).
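As a concrete check of the first point, a query along these lines shows whether recent periods are populated in a given event analytics table. The table name and the monthly column are illustrative; event analytics tables are typically named per program, so substitute the actual table name from your database:

```sql
-- Row counts per month in one event analytics table; if the months after
-- the upgrade are missing here, Event Reports will have nothing to show.
SELECT monthly, count(*) AS events
FROM analytics_event_abc123def45  -- hypothetical program table name
GROUP BY monthly
ORDER BY monthly DESC;
```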
Regards
Calle