Automating DHIS2

Hi. I was wondering if anyone has done any scripting to automate
certain tasks (namely the datamart) with DHIS2. I came across this
utility for Firefox, which was extremely easy to use and did pretty
much what I wanted.

http://www.iopus.com/imacros/firefox/?ref=fxtab

Usually, I would do this sort of thing through a shell script, but I
have not really figured out how to log in to DHIS2 through a command
line tool like wget. Basic HTTP authentication does not seem to be
supported, but I thought I had seen a commit about this some time ago.

Basically, I think it would be very useful to be able to
programmatically call URLs, for instance to regenerate the datamart and
report tables on a periodic, scheduled basis. This iMacros extension
is free, but only the paid version can be used through the command
line or Windows Task Scheduler. Anyone else have some ideas how this
could be done?

Regards,
Jason


--
Jason P. Pickering
email: jason.p.pickering@gmail.com
tel:+260968395190

Hi Jason,

Scheduling of data mart exports is already a blueprint targeted for 2.0.6:
https://blueprints.launchpad.net/dhis2/+spec/datamart-jobs

The idea is to have this built-in and configurable from the DHIS UI.


Ola Hodne Titlestad (Mr)
HISP
Department of Informatics
University of Oslo

Mobile: +47 48069736
Home address: Vetlandsvn. 95B, 0685 Oslo, Norway. Googlemaps link


Hi Jason,

Scheduling of data mart exports is already a blueprint targeted for 2.0.6:
https://blueprints.launchpad.net/dhis2/+spec/datamart-jobs

The idea is to have this built-in and configurable from the DHIS UI.

That's good. However, I think the more general principle of being able
to access DHIS2 functionality from command line tools would be very
valuable, particularly through RESTful URLs.

Knut


--
Cheers,
Knut Staring

I think the automated data mart is a good feature, and might fit for
some users, but I can imagine situations where multiple actions might
need to take place, such as executing the data mart, generating some
report tables, exporting some data to CSV and saving it in a certain
place, running a data integrity check and saving the results as
HTML, etc.

I saw the commit from Jo on basic HTTP authentication. Jo, could you
explain if and how this works?

Regards,
Jason


--
Jason P. Pickering
email: jason.p.pickering@gmail.com
tel:+260968395190

Hi

Hi . I was wondering if anyone has done any scripting to automate
certain tasks (namely the datamart) with DHIS2. I came across this
utility for Firefox which was extremely easy to use and did pretty
much what I wanted.

http://www.iopus.com/imacros/firefox/?ref=fxtab

I think Selenium probably has the same or more functionality,
particularly for testing, but I think it could also be used for
general purpose browser automation.

But I have to agree that automating the browser is probably the second
best solution where there is no other API. There are quite a few
services we need to define clean web APIs for.

I have worked around basic auth using python scripts and maintaining
state with cookies like the browser does, but it's a bit awkward and
I'm not sure I'd recommend it.

I'll try and rummage back to find that (thrown away?) code.

Cheers
Bob
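For reference, the cookie-maintaining approach Bob describes can be sketched with just the Python standard library. The login URL and the form field names below are illustrative assumptions, not confirmed DHIS2 endpoints; check your own instance before relying on them:

```python
# Sketch of the pre-2.0.5 cookie-based login workaround: POST the login
# form once, then reuse the opener so the session cookie travels with
# every later request, just as a browser would.
import http.cookiejar
import urllib.parse
import urllib.request


def make_session():
    """Build an opener that stores session cookies like a browser."""
    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(jar))
    return opener, jar


def login(opener, base_url, username, password):
    """POST the login form; the server should reply with a session cookie."""
    form = urllib.parse.urlencode(
        {"j_username": username, "j_password": password}).encode("ascii")
    # Hypothetical form-login endpoint and field names -- adjust for
    # whatever your DHIS2 version actually exposes.
    return opener.open(
        base_url + "/dhis-web-commons/security/login.action", data=form)
```

Once login() has run, the same opener carries the session cookie on subsequent requests; keeping that state by hand is the awkward part mentioned above.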


There is also Selenium-RC, which "uses the full power of programming
languages to create more complex tests like reading and writing files,
querying a database, and emailing test results".

k


This guy seems to have been doing something similar:
http://solobonite.wikidot.com/seleniumrunner-tutorial


Hi Bob,
Would be good to see what you did with Python. One man's trash is
another's treasure. :)

Selenium seems a bit overkill perhaps. A simple script called from the
command line with cron/task scheduler would probably be more than
enough for me at this point, if I could figure out how to authenticate
easily.

Regards,
Jason


--
Jason P. Pickering
email: jason.p.pickering@gmail.com
tel:+260968395190

Use at your own risk :) Really just me figuring out how to do
cookies with Python.

pydhis.py (751 Bytes)


From 2.0.5, Basic authentication should work if you send the Authorization header with the request, so there should be no need to do the cookie/forms login stuff from scripts.

I.e. curl -v -u user:password url

We don't serve up a 401 challenge unless you send the header, as we want the form-based authentication to be the default (replying with a redirect).

With the next version of Spring Security (3.1), I think it will be easier to have multiple authentication configurations. I was hoping that we could then have Basic as the default for the path/to/dhis/api/ namespace, and maybe slowly start providing more clearly defined API services. If people have simple and clearly defined services they need, I can try to take a stab at implementing them as prototype services there. It has to be simple, though, as I don't have much time and don't know the domain model considerations that well.

I'm not completely sure if we really want to have Basic enabled as a default for other URLs. Basically, I think we need to have this kind of stuff more generally configurable, but that is then a bigger task. There is of course also the question of whether Basic (and forms login) is acceptable for a web app without SSL, which people don't seem to use in the wild. I have been contemplating whether we should optionally support digest authentication (OpenROSA is considering making that a requirement for their mobile API [1]), but that is a headache when it comes to storing the passwords.

[1] http://groups.google.com/group/openrosa-workgroup/browse_thread/thread/f7a431b7f50ddb3

Jo
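Concretely: because the server never sends a 401 challenge, a script has to attach the Authorization header up front, just as curl -u does. A minimal Python sketch of the same preemptive trick (no specific DHIS2 URL is assumed here; you supply whichever one you want to trigger):

```python
# Preemptive Basic auth from a script, mirroring "curl -u user:password url".
# Note that urllib's HTTPBasicAuthHandler would never fire here, because
# it waits for a 401 challenge that the server will not send.
import base64
import urllib.request


def basic_auth_header(username, password):
    """Build the Authorization header value that curl -u would send."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8"))
    return "Basic " + token.decode("ascii")


def fetch(url, username, password):
    """GET a URL with the Basic auth header attached on the first request."""
    req = urllib.request.Request(url)
    req.add_header("Authorization", basic_auth_header(username, password))
    return urllib.request.urlopen(req)
```

Pointing such a script (or the curl one-liner) at the relevant URLs from cron or Windows Task Scheduler then covers the kind of scheduled datamart/report table runs discussed above.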


Thanks Jo. :) Guess I should have read the curl manual. :) That
worked like a charm. I have very, very basic needs at this point,
which consist essentially of being able to trigger data mart export
and report table generation at a scheduled time. That's it. Perhaps
2.0.6 will take care of it, but in the meantime, curl can press the
button for me. Of course, having a more cleanly defined series of REST
services would be very useful I think, but I am not sure the demand is
huge.

Regards,
Jason


--
Jason P. Pickering
email: jason.p.pickering@gmail.com
tel:+260968395190

Thanks Jo. :) Guess I should have read the curl manual. :) That
worked like a charm. I have very, very basic needs at this point,
which consist essentially of being able to trigger data mart export
and report table generation at a scheduled time. That's it.

Curl is definitely a good tool for the job :)

Perhaps 2.0.6 will take care of it, but in the meantime, curl can press the
button for me. Of course, having a more cleanly defined series of REST
services would be very useful I think, but I am not sure the demand is
huge.

I think the demand will slowly grow with time, so we should probably try to take at least some baby steps soon.

Required note: as long as we don't call it REST :) REST implies a hypermedia-driven application, so let's stick to calling it what it would probably be: a simple web API.

Jo


Required note: as long as we don't call it REST :) REST implies a hypermedia-driven application, so let's stick to calling it what it would probably be: a simple web API.

Hey, be a bit more visionary :) I think this is a great thought. We are getting more and more requests from people who want to use their own presentation layer (Ifakara folks in Tanzania will "make a web-based query tool on top of dhis2", Uganda folks are integrating dhis2 with a CMS, etc.). I'm envisioning methods for:

  • getting all data elements/indicators with (a bit extended) DXF and HTML format responses with embedded links to URLs pointing to a method giving you the full details for each as HTML or PDF.

  • getting all indicators with DXF/HTML responses with links to URLs pointing to a PNG chart giving the aggregated values for the 12 last months.

  • getting all report tables as DXF/HTML with links to URLs pointing to SDMX-HD/HTML/PDF/Excel representations of the table.

  • getting all orgunits for a given parent as DXF/HTML with links to URLs pointing to GIS PNG images, and so on…

There you have your hypermedia-driven application that moves from one state to the next by examining and choosing from among the alternative state transitions in the current set of representations.

This kind of stuff will give potential users an elegant way of integrating dhis2 data into whatever tool they prefer, and avoid hacking into the database or fumbling with the source code. If you don't want your users to leave, make it easy for them to do so :)

Lars

Required note: as long as we don't call it REST :) REST implies a hypermedia-driven application, so let's stick to calling it what it would probably be: a simple web API.

Hey, be a bit more visionary :) I think this is a great thought.

Ok, if you say so :slight_smile:

We are getting more and more requests from people who want to use their own presentation layer (Ifakara folks in Tanzania will "make a web-based query tool on top of dhis2", Uganda folks are integrating dhis2 with a CMS, etc.). I'm envisioning methods for:

  • getting all data elements/indicators with (a bit extended) DXF and HTML format responses with embedded links to URLs pointing to a method giving you the full details for each as HTML or PDF.
  • getting all indicators with DXF/HTML responses with links to URLs pointing to a PNG chart giving the aggregated values for the 12 last months.
  • getting all report tables as DXF/HTML with links to URLs pointing to SDMX-HD/HTML/PDF/Excel representations of the table.
  • getting all orgunits for a given parent as DXF/HTML with links to URLs pointing to GIS PNG images, and so on…

There you have your hypermedia-driven application that moves from one state to the next by examining and choosing from among the alternative state transitions in the current set of representations.

This kind of stuff will give potential users an elegant way of integrating dhis2 data into whatever tool they prefer, and avoid hacking into the database or fumbling with the source code. If you don't want your users to leave, make it easy for them to do so :)

I had hoped that the mobile case could serve as a starting point for exploration in this area, but basically the mobile use case makes more sense as a "custom protocol" as it is now, so it has ended up as little more than an introduction of Jersey (which I still think is the right kind of tool for this). So, at a practical level, I think we should start by identifying a specific use case where we can explore how such an API might make sense, without too many custom requirements for how it should be built.

Distilling your list above might be a good start to get a sense of how to model an "application level" model (but we should probably have some use cases for making changes through the API, as well). I would certainly be interested in working on this, but I do have other commitments and don't really know the domain well enough to get the modeling right by myself. You would of course have to get Bob onboard (I'm not sure what he's wasting his time on these days, but I'm guessing it has to do with Excel :), and probably be prepared for some changes to the way we map metadata to xml :)

One problem with the hypermedia part is that there aren't really many mature tools that easily support this kind of API building. With JAX-RS we have basically gotten a better alternative to servlets, but the way to build decent linkable representations and map to standardized content types hasn't really settled down into solid best practices. And with the amount of time it has taken for the REST community to come up with this kind of tool support, I'm not really sure it will materialize anytime soon. There are people building more innovative solutions out there, but those tools are either more bleeding edge or move into technologies too different from our current stack.

There are also some difficult "ground rules" we have to make the right trade-off for, if we want to give this a go. We have to make a rough cut as to what makes sense to target for such a web interface versus more batch-oriented import/export and low-level interfaces for performance. We have to make a decent 80/20 trade-off for what would be the important use cases to model support for in this way. And we need to have a sense of how much weight we want to put into supporting old-school SOAP stuff (I know Ime has a little requirement for some support there, but I'm not sure how many others are still subscribing to that way of modeling APIs).

Basically, I think it is a difficult challenge to both support larger import/export structures (where size is a main concern) and more fine-grained representations (where it is more about finding the right granularity of representations and integrating links in a natural way). I'm not sure how easy it is to model these two use cases with the same set of document structures.

Jo


All this sounds good.

I have no real experience of creating REST apps, so take this with a grain of salt, but I have been dabbling a bit with Spring MVC and its REST support lately. XML representations can be achieved simply by including an org.springframework.oxm.jaxb.Jaxb2Marshaller in the context and adding spring-oxm to the classpath. JSON representations can be achieved by including Jackson on the classpath through the MappingJacksonJsonView. Creating generic Excel, PDF, JasperReports and Atom/RSS representations is of course hard to do, but support classes for writing application-specific views for all these are available:

http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/web/servlet/view/document/package-summary.html

http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/web/servlet/view/jasperreports/

http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/web/servlet/view/feed/

Mapping to content types (content negotiation) happens by looking at (1) the "file extension", (2) the Accept header, and (3) a "format" request parameter.

Something to consider, but for later; let's get the more pressing issues out of the way first :)

Lars


I would totally agree that this is by no means pressing for me. Nice
to have, but Jo's suggestion of curl will do what I need to accomplish.

···

On Wed, Nov 10, 2010 at 12:33 AM, Lars Helge Øverland <larshelge@gmail.com> wrote:

On Tue, Nov 9, 2010 at 6:58 AM, Jo Størset <storset@gmail.com> wrote:

Den 8. nov. 2010 kl. 23.43 skrev Lars Helge Øverland:

Required note: As long as we don't call it REST :slight_smile: REST imples a
hypermedia-driven application, so let's stick to calling it what it would
probably be: a simple web api.

Hey be a bit more visionary:) I think this is a great thought.

Ok, if you say so :slight_smile:

We are getting more and more requests from people who want to use their
own presentation layer (Ifakara folks in Tanzania will "make a web-based
query tool on top of dhis2", Uganda folks are integrating dhis2 with a CMS
etc). I'm envisioning methods for:
- getting all data elements/indicators with (a bit extended) DXF and HTML
format responses with embedded links to URLs pointing to a method giving you
the full details for each as HTML or PDF.
- getting all indicators with DXF/HTML responses with links to URLs
pointing to a PNG chart giving the aggregated vales for the 12 last months.
- getting all report tables as DXF/HTML with links to URLs pointing to
SDMX-HD/HTML/PDF/Excel representations of the table.
- getting all orgunits for a given parent as DXF/HTML with links to URLs
pointing to GIS PNG images, and so on...
There you have your hypermedia-driven application that moves from one
state to the next by examining and choosing from among the alternative state
transitions in the current set of representations.
This kind of stuff will give potential users an elegant way of integrating
dhis2 data into whatever tool they prefer and avoid hacking into the
database or fumbling with the source code. If you don't want your users to
leave, make it easy for them to do so :)
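To make the link-following idea above concrete, here is a small Python sketch of a client consuming such a representation and choosing its next state from the links it finds. The XML shape, element names and `rel` values are invented for illustration; DXF as it stands carries no such links.

```python
import xml.etree.ElementTree as ET

# Hypothetical response shape -- all names here are invented for illustration.
SAMPLE = """
<indicators>
  <indicator name="ANC coverage">
    <link rel="details" href="/api/indicators/1.html"/>
    <link rel="chart" href="/api/indicators/1/last12months.png"/>
  </indicator>
</indicators>
"""

def links_by_rel(xml_text):
    """Collect link targets keyed by (indicator name, rel), so a client can
    pick its next request from the representation itself instead of from
    hard-coded URL patterns."""
    root = ET.fromstring(xml_text)
    out = {}
    for ind in root.findall("indicator"):
        for link in ind.findall("link"):
            out[(ind.get("name"), link.get("rel"))] = link.get("href")
    return out
```

A CMS or query tool would then fetch, say, the "chart" link directly, never needing to know how the server constructs its URLs.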

I had hoped that the mobile case could serve as a starting point for
exploration in this area, but basically the mobile use case makes more sense
as a "custom protocol" as it is now, so it has ended up as little more than
an introduction of jersey (which I still think is the right kind of tool for
this). So, at a practical level, I think we should start by identifying a
specific use case where we can explore how such an api might make sense,
without too many custom requirements for how it should be built.
Distilling your list above might be a good start to get a sense of how to
model an "application level" api (but we should probably have some use
cases for making changes through the api, as well). I would certainly be
interested in working on this, but I do have other commitments and don't
really know the domain well enough to get the modeling right by myself. You
would of course have to get Bob onboard (I'm not sure what he's wasting his
time on these days, but I'm guessing it has to do with excel :), and
probably be prepared for some changes to the way we map metadata to xml :)
One problem with the hypermedia part is that there aren't really many
mature tools that easily support this kind of api building. With jax-rs we
have basically gotten a better alternative to servlets, but still the ways to
build decent linkable representations and map to standardized content
types haven't really settled down into solid best practices. And with the
amount of time it has taken for the rest community to come up with this kind
of tool support, I'm not really sure it will materialize anytime soon. There
are people building more innovative solutions out there, but those tools
are more bleeding edge or would move us into technologies too different
from our current stack.
There are also some difficult "ground rules" we have to make the right
trade off for, if we want to give this a go. We have to make a rough cut as
to what makes sense to target for such a web interface versus more
batch-oriented import/export and low-level interfaces for performance. We
have to make a decent 80/20 trade off for what would be the important use
cases to model support for in this way. And we need to have a sense of how
much weight we want to put into supporting old-school soap stuff (I know Ime
has a little requirement for some support there, but not sure how many
others are still subscribing to that way of modeling apis).
Basically, I think it is a difficult challenge to both support larger
import/export structures (where size is a main concern) and more fine
grained representations (where it is more about finding the right
granularity representations and integrating links in a natural way). I'm not
sure how easy it is to model these two use cases with the same set of
document structures.

All this sounds good.
I have no real experience of creating REST apps, so take this with a grain of
salt, but I have been dabbling a bit with Spring MVC and its REST support
lately. XML representations can be achieved simply by including a
org.springframework.oxm.jaxb.Jaxb2Marshaller in the context and adding
spring-oxm to the classpath. JSON representations can be achieved by
including Jackson on the classpath through the MappingJacksonJsonView.
Creating generic Excel, PDF, JasperReports and atom/rss representations is
of course harder to do, but support classes for writing application-specific
views for all of these are available:
org.springframework.web.servlet.view.document
org.springframework.web.servlet.view.jasperreports
org.springframework.web.servlet.view.feed
Mapping to content types/content negotiation happens by looking at 1. the
"file extension" 2. the Accept header 3. a "format" request parameter.
Something to consider, but for later; let's get the more pressing issues out
of the way first :)
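That three-step resolution order (extension first, then Accept header, then a format parameter) can be sketched in a few lines; this is in Python rather than Spring, and the type mappings are illustrative, not Spring's actual defaults.

```python
import os

# Illustrative mappings, not Spring's real configuration tables.
EXT_TO_TYPE = {".xml": "application/xml", ".json": "application/json",
               ".pdf": "application/pdf", ".xls": "application/vnd.ms-excel"}
FORMAT_TO_TYPE = {"xml": "application/xml", "json": "application/json"}

def negotiate(path, accept=None, format_param=None):
    """Resolve a response content type the way the mail describes:
    1. file extension, 2. Accept header, 3. 'format' request parameter."""
    ext = os.path.splitext(path)[1]
    if ext in EXT_TO_TYPE:                            # 1. extension wins
        return EXT_TO_TYPE[ext]
    if accept and accept in EXT_TO_TYPE.values():     # 2. then Accept header
        return accept
    if format_param in FORMAT_TO_TYPE:                # 3. then ?format=...
        return FORMAT_TO_TYPE[format_param]
    return "text/html"                                # default representation
```

So /api/indicators.json ignores any Accept header, while a bare /api/indicators falls through the later steps.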
Lars

--
Jason P. Pickering
email: jason.p.pickering@gmail.com
tel:+260968395190

Hi


I think it's definitely possible (and highly desirable) to use the
same xml bindings for dhis entities for all use cases within dhis.
Larger structures are just compositions of the finer grained ones.
Having 3 different xml renditions of a dataelement structure in a
single application is not efficient. So I would certainly like to see
a common set of jaxb bindings to an xml which we can all agree is
useful and which can be used for mobile, import/export and ajax
exchange (the 3 cases I see now).

There is certainly a case for having stream based input through a
common import point as well as finer grained services. What is
preventing this being done comfortably at this point is the fact that
we haven't yet resolved the problem of identifiers satisfactorily. So
going outside the stream based import we are obliged to either put our
heads in the sand regarding database ids or we have to take steps to
ensure that we have a very, very closed system. Of course if we are
talking about the purity or otherwise of REST(ish) approaches, this
question of URIs is absolutely fundamental. More so than some of the
other concerns raised about how restful is restful. Solving the
problem of interacting with 3rd party systems (fine-grained, restful or
otherwise) still fundamentally comes down to solving the problem of
identification.

My own sense, after having looked at the range of
uniqueness-based-on-name vs database integer id vs urn vs uuid ...,
is that we probably need to resign ourselves to the fact that integer
ids are just plain useful and efficient, so we need to address two
problems:
(i) stabilizing them (save explicitly, rather than depend on auto sequences)
(ii) qualifying them to give them global uniqueness outside of internal
scope; e.g. what is known to the world as
http://dhi2.org/ke/dataelement/id/4 is known internally as 4.
The composition of the URI is just an example. Lots of politics around how this
uri is formed. In fact I can even see a possible real world use for
the WHO indicator metadata repository here.
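Point (ii) can be sketched in Python: qualify stable internal integer ids with a base URI for global uniqueness, and map back again at the boundary. The base URI below simply reuses the example from the mail; how that base is actually composed is the political question.

```python
# Base URI taken from the example in the mail above -- purely illustrative.
BASE = "http://dhi2.org/ke"

def to_global(entity, internal_id, base=BASE):
    """Qualify an internal integer id as a globally unique URI."""
    return f"{base}/{entity}/id/{internal_id}"

def to_internal(uri, base=BASE):
    """Recover the internal integer id from a qualified URI, or None if
    the URI belongs to some other system's scope."""
    if not uri.startswith(base + "/"):
        return None
    return int(uri.rsplit("/", 1)[1])
```

Exchange with 3rd party systems would then speak only in the qualified URIs, while the database keeps its plain integers.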

There are a couple of pressing needs for non-browser based interaction
with dhis. One being the downloading of updates to aggregatedatavalue
tables for offline pivot table cache refresh. Though rather than
place a requirement on the web api to be functional first, I intend to
proceed with libcurl for now.

Regards
Bob

···

On 9 November 2010 05:58, Jo Størset <storset@gmail.com> wrote:

On 8 Nov 2010 at 23:43, Lars Helge Øverland wrote:

Jo
_______________________________________________
Mailing list: DHIS 2 developers in Launchpad
Post to : dhis2-devs@lists.launchpad.net
Unsubscribe : DHIS 2 developers in Launchpad
More help : ListHelp - Launchpad Help

Hi everyone in the discussion,

Thanks for this.
Yes, interesting to begin to look at the Data as a Service (DaaS) capabilities of DHIS. It would be really useful to be able to have different clients handling data off DHIS, especially with the current 'enterprise architecture' craze.

But Lars, how does your expoze webservice fit into this?

Ime

···

--- On Mon, 11/8/10, Lars Helge Øverland <larshelge@gmail.com> wrote:

When we revive this discussion for 2.0.7, we may want to take a look
at what the Worldbank has done on data.worldbank.org.

Particularly for exposing their data and even offering a competition
to use their REST API to develop apps on top, including mobile ones:

http://data.worldbank.org/developers

Perhaps someone you know would even like to make 15k USD creating a cool WB app:

http://appsfordevelopment.challengepost.com/

If we adopt parts of their API, we may even be able to point people to
the WB apps for dissemination and visualization.

Knut

···

On Wed, Nov 10, 2010 at 12:13 PM, Bob Jolliffe <bobjolliffe@gmail.com> wrote:


--
Cheers,
Knut Staring

Thanks for sharing the information on what other developers have done. I believe we can make a good implementation matrix of what is required to be added/fixed in dhis 2.0.6.

I wish you all a nice week.

Regards

···

On Tue, Nov 23, 2010 at 5:14 PM, Knut Staring knutst@gmail.com wrote:

When we revive this discussion for 2.0.7, we may want to take a look
at what the Worldbank has done on data.worldbank.org.

Particularly for exposing their data and even offering a competition
to use their REST API to develop apps on top, including mobile ones:

http://data.worldbank.org/developers

Perhaps someone you know would even like to make 15k USD creating a cool WB app:

http://appsfordevelopment.challengepost.com/

If we adopt parts of their API, we may even be able to point people to
the WB apps for dissemination and visualization.

Knut



Samuel Cheburet
Ministry Of Health
P.O. Box 20781
Nairobi, Kenya
Mobile- 0721624338