Indonesia Promotes Meaningful Data Use through Localisation of Regional Data Use Academy

The DHIS2 Level 1 Data Use Academy was designed to help participants improve their data analysis, data quality, and data use skills and knowledge. In global and regional trainings, this type of training was previously conducted using either anonymized or generated organisation units, data elements, indicators, and data. This made it difficult for participants to transfer their skills back to their countries, and the academy was eventually deprecated and replaced with the Level 1 Analytics Tools academy.

The need to tailor data use training around a country's live data and database, in order to build applicable skills, has been understood for some time. Indonesia was therefore identified as the first country to pilot a more standardized approach to training that could be adapted in-country, with the aim of supporting the implementation of data use training in additional countries going forward.


Figure 1. Practicing data analysis with the Data Visualizer app

Why adapt the regional academy into a country-specific academy?

In Indonesia, DHIS2 (locally called ASDK) acts as the integrated data warehouse, pulling data from multiple programs including HIV, TB, Malaria, MCH, Immunisation, and others. An assessment conducted in 2018 showed that many of the dashboards had critical problems that hindered optimal data and information use for program monitoring, planning, and decision making.

By adapting the regional Level 1 Data Use Academy to the local context, the implementation team expected more meaningful capacity building, as participants would work with their own live data. This allowed real data quality challenges to be reviewed, and conclusions about program performance to be examined, in greater detail than a regional or global academy allows.


Figure 2. Each health programme presented the data elements, indicators, and analytics that it routinely monitors

How did it work?

The in-country academy is based on five key principles identified in the WHO guidelines for facility analysis:

  1. Identifying/reviewing the key indicators used in the program
  2. Reviewing the quality of the data collected in the program
  3. Reviewing the targets/denominators that are used in the program
  4. Visualizing and interpreting/analysing the data in the program
  5. Communicating and using the findings to enhance program data quality and service delivery

The agenda and curriculum were therefore designed to ensure that all of these core principles were covered during the training.
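To make principle 2 a little more concrete: completeness is one of the data quality measures that can be checked directly against a live DHIS2 database through its analytics Web API. The snippet below is a minimal sketch only; the base URL, credentials, and UIDs are hypothetical placeholders, not values from the Indonesian instance.

```python
# A minimal sketch of principle 2: pull the monthly reporting rate
# (completeness) for one data set from the DHIS2 analytics Web API.
# The instance URL, credentials, and UIDs below are placeholders.
import requests

BASE_URL = "https://dhis2.example.org"  # hypothetical instance
AUTH = ("admin", "district")            # placeholder credentials

DATA_SET_UID = "abc123def45"  # hypothetical data set UID
ORG_UNIT_UID = "fgh678ijk90"  # hypothetical district UID

resp = requests.get(
    f"{BASE_URL}/api/analytics.json",
    params={
        "dimension": [
            f"dx:{DATA_SET_UID}.REPORTING_RATE",  # % of expected reports received
            "pe:LAST_12_MONTHS",
            f"ou:{ORG_UNIT_UID}",
        ]
    },
    auth=AUTH,
)
resp.raise_for_status()

# Each row is [dx, period, orgUnit, value], matching the dimension order above.
for dx, period, org_unit, value in resp.json()["rows"]:
    print(f"{period}: {value}% of expected reports received")
```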


Figure 3. Participants tested their understanding by explaining what they had learned to other participants

What was done differently?

Several adaptations were made to the academy, including:

  • At the beginning of the academy, each health program from the Ministry (such as HIV, TB, Immunisation, HRH) presented its data use routines (visualisation, analytics, and monitoring). On the last day, reflecting on the academy, each program proposed how ASDK/DHIS2 could be used to further enhance its program data review
  • We used the local database (data elements, indicators, maps, data values, population targets and estimates) for demonstrations and hands-on exercises
  • Local trainers, trained by WHO and the University of Oslo, facilitated the majority of sessions, using the local language whenever possible.


Figure 4. HIV program staff proposed uses of ASDK for HIV data review

Conclusion

  • In-country data use trainings have a stronger impact than regional or global trainings, but they must be well structured, applying adult learning principles and well-researched training methodologies in a localized context. A detailed review of the data, to identify data quality issues and create relevant analytical outputs that can be reviewed and discussed, is needed prior to this type of training; sufficient time for these various tasks should be allotted.

This use case was prepared by Aprisa Chrysantina and Shurajit Dutta.


@aprisa, @Shurajit_Dutta,

I am really, really happy to see that the key principle of localised, program-specific, and “practical” training and skills development is starting to overtake the historic/current focus on more generic regional/global academies using dummy databases like the SL demo. HISP-SA has followed the same principle for the last 15-20 years, always training people locally using their own data.

We must just realise that it is more resource-intensive to develop and conduct such localised training - AND it won’t be popular with health information managers and staffers who have been relying on the extra income from (relative to their salary) “fat” per diems etc. when travelling to academies abroad.

There is one major aspect, nearly always overlooked, missing from your revised approach: what should happen after the end of the training? Or in other words, HOW DO YOU MEASURE THE IMPACT OF THE TRAINING ON THE DAILY WORK OF EACH PARTICIPANT? Because at the end of the day, that is the only success criterion you should have. Participants’ own feelings of satisfaction, their ability to pass tests at the end, even their ability to teach others - all of those are important process aspects, but the actual IMPACT should be measured using methods that are (largely) trend analyses from the database ecosystem (see the sketch after this list):

  • improved input-side meta-data (de-duplication; reduced and streamlined data sets and programs; streamlined mix of routine data, sentinel data, survey data) - easy to monitor
  • improved processing/output side meta-data (ALL data elements used for REAL indicators with numerators and denominators; national, provincial, district, and possibly facility annual TARGETS for all relevant indicators) - easy to monitor
  • More complete data (can be monitored)
  • more timely data (can be monitored)
  • fewer/no obvious errors (outlier analysis, gap analysis)
  • no inconsistent data (validation rules)
  • increased frequency of usage (usage analytics)
  • increased analysis and outputs (dashboards, favourites, sql views, push analytics, etc)
  • increased messaging/discussions around outputs (interpretations)
  • increased use of system by managers NOT formally trained (usage analytics), i.e. secondary effects
  • increased distribution of outputs to stakeholders (decision-makers; researchers; media; NGOs; public)
    and similar.
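To make the usage-analytics items concrete: a minimal sketch along the following lines could trend monthly view counts from the standard DHIS2 dataStatistics endpoint and compare activity before and after a training. The base URL, credentials, and date ranges are placeholders, not values from any real instance.

```python
# A minimal sketch of the "usage analytics" idea: trend monthly view
# counts from the DHIS2 dataStatistics Web API to compare activity
# before and after a training. URL, credentials, and dates are placeholders.
import requests

BASE_URL = "https://dhis2.example.org"  # hypothetical instance
AUTH = ("admin", "district")            # placeholder credentials

def monthly_usage(start_date: str, end_date: str) -> list:
    """Return monthly usage snapshots (dashboard, chart, map views, etc.)."""
    resp = requests.get(
        f"{BASE_URL}/api/dataStatistics",
        params={"startDate": start_date, "endDate": end_date, "interval": "MONTH"},
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()

# Illustrative date ranges: six months before and after an academy.
for label, period in (
    ("before", monthly_usage("2018-07-01", "2018-12-31")),
    ("after", monthly_usage("2019-01-01", "2019-06-30")),
):
    views = sum(month.get("dashboardViews", 0) for month in period)
    print(f"Dashboard views {label} the academy: {views}")
```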

Each individual course participant does not have power over all the impacts mentioned, of course - but the course participants as a community must have those powers (otherwise the training is leaving out key stakeholders, which in reality will radically limit its potential impact).

my 2c worth

Regards
Calle


Great work and a very interesting read @aprisa @Shurajit_Dutta. It seems like a meaningful way to train. And I agree with @Calle_Hedberg - we should really look into the effects this is having on data use, both by looking at the actual data and outputs, and through some qualitative approaches, such as interviewing participants before and a while after the academy.

Did you have participants that had attended other “global” academies previously that could note the difference?


@Calle_Hedberg @AnneThorseng this is great feedback. Implementing effective transfer-of-learning strategies and evaluation after the fact is something we admittedly still struggle with. In this case, there were some challenges because many programs attended the training together; we have suggested that these trainings should initially be conducted program by program instead. A critical follow-up point we have identified is for program-specific training to be conducted so that further evaluation can occur.

There are a couple of items you have highlighted that we have identified previously and will follow up on, particularly around data quality. We also tried to emphasise the inclusion of minimum data sets. As many programs in Indonesia are only starting their integration path into DHIS2, some of these are hard to examine or drive home, but I agree overall with the need to measure impact more effectively.

We are coming up with some guidelines so we can continue to conduct more of these types of trainings in other countries, and we would be happy to have your (and others’) input on how we can more effectively evaluate how this type of training affects participants’ day-to-day routines after it is completed. We have a lot of ideas on how to create the content, review the data quality, and marry the principles of using DHIS2 to create outputs with the idea of interpretations and drawing meaningful conclusions, particularly with the aid of the WHO facility analysis guidelines and curricula; evaluation should definitely be a part of this.

@AnneThorseng yes, there were some participants who had attended other academies outside Indonesia. I am not sure to what extent we evaluated this.


Thank you @Calle_Hedberg and @AnneThorseng. These are useful inputs. In addition to what you suggest, for now we’re thinking of evaluating outputs and outcomes such as:

  • perceptions towards the course (qualitatively, through a survey - done - and in-depth interviews - to be done)
  • dashboards produced/revised (some participants already have dashboards - we have analysed these; see the sketch after this list)
  • through the interviews (and, if possible, site visits) we also want to know how participants follow up on this academy, and what kind of support, challenges, and local management (provincial/district/MoH division) play a role in their follow-up actions
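For the dashboard analysis specifically, a minimal sketch like the one below could inventory dashboards created or updated since the academy through the standard DHIS2 metadata API. The base URL, credentials, and cutoff date are hypothetical placeholders.

```python
# A minimal sketch that inventories dashboards created or updated since
# the academy, grouped by owner, via the DHIS2 metadata Web API.
# URL, credentials, and cutoff date are placeholders.
from collections import Counter

import requests

BASE_URL = "https://dhis2.example.org"  # hypothetical instance
AUTH = ("admin", "district")            # placeholder credentials

resp = requests.get(
    f"{BASE_URL}/api/dashboards.json",
    params={
        "fields": "id,name,lastUpdated,user[username]",
        "filter": "lastUpdated:ge:2019-01-01",  # illustrative academy end date
        "paging": "false",
    },
    auth=AUTH,
)
resp.raise_for_status()
dashboards = resp.json()["dashboards"]

# Count dashboards per owner to see which participants are producing outputs.
by_owner = Counter(d["user"]["username"] for d in dashboards if d.get("user"))
for username, count in by_owner.most_common():
    print(f"{username}: {count} dashboard(s) created/updated since the academy")
```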

I would add that the instructional design, materials development, and logistics preparation were themselves a very interesting process :) More respect for local wisdom, capacity, and integrity should be taken into account when we are localising.
