API automation framework for DHIS2

Dear Nalinikanth,

Thanks for your mail; I am pleased to see others starting to think about this. We know we need much better testing.

I think these are all good ideas. I am somewhat less concerned with the framework than with the actual tests themselves. We now find ourselves in the somewhat difficult position of having to write tests (and thus, to some extent, specs) after the code has been written. This has some advantages, since we can write the tests based on what we expect the intended system behavior to be, but the downside, of course, is that it is very easy to miss all the cases which should be tested and the various permutations thereof.

Mark Polak and I started working with another framework called “pyresttest”, which is Python based. It is under active development but seems somewhat less mature than some of the others. Nonetheless, I like its approach and its extensible nature. It is quite a different type of framework, mostly because the tests can be developed in relatively plain language. For instance, here is a simple test for the “me” endpoint:

https://github.com/dhis2/dhis2-api-testing/blob/master/tests/me.yaml

These tests could be developed by a business analyst as opposed to a developer. Mark wrote some more code to automatically generate these tests from the “schema” endpoint itself. This was really for the purposes of self-consistency within the application (to be sure that everything in the schema was actually present in the endpoints themselves) and for regression testing between versions. With the move to everything being apps in DHIS2, we have already seen the potentially brittle nature of this approach: when things change on the server side, it can break apps (and downstream external services) quite easily. So going forward, watching for changes to the API is going to be very critical.
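The rough idea was something like the following, sketched here in JavaScript with chakram rather than Mark’s Python (the relativeApiEndpoint property and the demo instance details are assumptions, not the actual generator):

var chakram = require('chakram');
var expect = chakram.expect;
var base = 'https://play.dhis2.org/demo'; // assumed instance
var auth = { auth: { user: 'admin', pass: 'district' } }; // assumed credentials

describe('Schema self-consistency', function () {
  it('every schema collection endpoint is reachable', function () {
    this.timeout(60000); // one request per schema
    return chakram.get(base + '/api/schemas.json', auth).then(function (res) {
      var checks = res.body.schemas
        // not every schema exposes a collection endpoint
        .filter(function (s) { return s.relativeApiEndpoint; })
        .map(function (s) {
          return expect(chakram.get(base + s.relativeApiEndpoint + '.json', auth))
            .to.have.status(200);
        });
      return Promise.all(checks);
    });
  });
});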

Another type of test is the kind you mentioned, with data. I developed this one by hand (https://github.com/dhis2/dhis2-api-testing/blob/master/tests/crud_operations/dataElementOperations.yaml) to test a scenario of creating and modifying a data element.

I think all of these could of course be done in the frameworks which you mentioned; we discussed this in Oslo a few months back and, I think, all agreed on those frameworks instead of pyresttest. Having said that, I still like the simplicity of the pyresttest approach, and the fact that people who do not really write JavaScript can also write the tests.

We also considered some BDD tests. https://github.com/dhis2/dhis2-integration-tests/tree/master/features.

Here is a simple test, written in Gherkin syntax, to change the title of the application:

Feature: System Settings
  In order to customize my DHIS2 instance
  As a system administrator
  I want to change the system appearance in different ways

  Scenario Outline: Change application title
    Given that I am a system admin
    And I am logged into the system
    When I access the Appearance tab of the Settings app
    And change the Application title to "<Name>"
    Then I should see that the settings were updated
    And when I log out of the system, the title should be changed

    Examples:
      | Name                      |
      | My DHIS Instance          |
      | برمجة معلومات قسم الصحة 2 |

Again, I like it because I can read and write it, and it is clear what the feature is supposed to do. However, the move to BDD is of course about a lot more than just testing.

As I said, in the end I am less concerned about the framework, and perhaps there are good reasons to have multiple frameworks which perform different testing tasks. E2E testing with things like Selenium, for instance, would point to the need for multiple testing frameworks.

Of course, testing data scenarios gets really tricky, but we need to do it. We started some work with Paolo Gracio on setting up Docker environments with DHIS2. Setting up a known environment is going to be crucial to testing these more complex scenarios. For instance, a typical test might be: I create some metadata, upload some data, and then run analytics. I then develop a test to ensure that what the analytics return, in the form of aggregated data, is actually what it should be. For all of that, we need to be able to spin up an environment with a known state (well, really for all of the integration tests!). So I think developing metadata and environments with known characteristics is also going to be a crucial component.
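To make that concrete, here is a minimal sketch of such a test, assuming chakram against the DHIS2 Web API; the instance URL, credentials, fixture files, and the dimension UIDs in the analytics query are all placeholders, and a real test would need to poll until the analytics run completes:

var chakram = require('chakram');
var expect = chakram.expect;
var base = 'https://play.dhis2.org/demo'; // assumed instance
var auth = { auth: { user: 'admin', pass: 'district' } }; // assumed credentials

describe('Aggregated data scenario', function () {
  before(function () {
    this.timeout(120000); // analytics runs are slow
    // 1. create metadata, 2. upload data values, 3. trigger the analytics tables run
    return chakram.post(base + '/api/metadata', require('./data/metadata.json'), auth)
      .then(function () {
        return chakram.post(base + '/api/dataValueSets', require('./data/dataValues.json'), auth);
      })
      .then(function () {
        // NB: this kicks off an asynchronous job; a real test would poll for completion
        return chakram.post(base + '/api/resourceTables/analytics', {}, auth);
      });
  });

  it('returns the expected aggregated value', function () {
    // placeholder UIDs for data element (dx), period (pe) and org unit (ou)
    var url = base + '/api/analytics.json?dimension=dx:deabcdefghA&dimension=pe:2015&dimension=ou:ouabcdefghA';
    var response = chakram.get(url, auth);
    expect(response).to.have.status(200);
    // the expected aggregate would be asserted against response.body.rows here
    return chakram.wait();
  });
});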

One last thought is about choosing the tests which we think are most important. As I said before, it's going to be a lot of work to develop all the tests which need to be developed, and determining which ones to prioritize is going to be key.

Let's keep up the conversation, and better yet, if you could get some of your ideas down in code so that we can all start to collaborate on it, that would be even better.

Best regards,

Jason

···

On Thu, Mar 24, 2016 at 10:59 AM, Nalinikanth Meesala nalinim@thoughtworks.com wrote:

Hi Jason

Hope you are doing well.

I am Nalinikanth, a QA on the MSF-OCA project at ThoughtWorks India. I received recommendations on API testing frameworks from Vanya, who was recently in Vietnam working with Morten.

Based on their discussion and our own exploration, I evaluated three different JS frameworks; the details are at the end of this mail.

Please have a look at the observations below and provide your feedback.

We are planning to add automated test cases for Web API testing (all methods: GET/POST/DELETE, etc.). There is a need to assert the status, the data in the body, and a few other things (other HTTP status codes, the response body, etc.).

1. Using a framework which is simple and easy to maintain. That is why we are inclined towards JS: we can then leverage any library that JS supports. We finally narrowed it down to chakram/supertest; both are quite similar.
2. A data model for the automation. Currently I am looking to have a specific data model for automation; there are two ways to do this:

i. Export the metadata already on the environment and write tests which assert against that particular metadata.

ii. Keep dedicated metadata for automation: clear the database every time you run the automation scripts, import the metadata, and then run the tests.

Of these two, I would go ahead with the second way, as it is easier to do and will make our tests maintainable. We can keep adding metadata as and when we write new tests (see the sketch after this list).

3. The last thing I was looking at is separating the data from the tests, say having:

Utils - where we keep the environment details common to all the test cases.

Data - the metadata that needs to be posted or asserted, kept in a file or files.

Tests - where all the tests are written.

We can get this done just by using require in JS, which is what I loved 🙂 It makes the repository maintainable: we can add tests as and when we want, and we can run against any environment just by changing the environment file.
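As a minimal sketch of points 2 and 3 together, assuming chakram with Mocha (the file names, credentials, and the /api/metadata import call are illustrative assumptions, not a settled layout):

// utils/env.js - environment details shared by all tests (placeholder values)
module.exports = {
  baseUrl: 'https://play.dhis2.org/demo',
  auth: { user: 'admin', pass: 'district' }
};

// tests/dataElements.js
var chakram = require('chakram');
var expect = chakram.expect;
var env = require('../utils/env');
var metadata = require('../data/dataElements.json'); // the metadata fixture

describe('Data element metadata', function () {
  // import the known metadata set before the suite runs (approach ii above)
  before(function () {
    return chakram.post(env.baseUrl + '/api/metadata', metadata, { auth: env.auth });
  });

  it('returns the imported data elements', function () {
    var response = chakram.get(env.baseUrl + '/api/dataElements.json', { auth: env.auth });
    expect(response).to.have.status(200);
    return chakram.wait();
  });
});

Switching environments is then just a matter of pointing utils/env.js somewhere else.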

Please let me know your point of view on it, or on any thing which I missed

Details of frameworks that I spiked on:

Chakram:

Chakram allows you to write clear and comprehensive tests, ensuring JSON REST endpoints work correctly as you develop and in the future.

Chakram extends Chai.js, adding HTTP-specific assertions.

It allows simple verification of returned status codes, the compression used, cookies, headers, returned JSON objects, and the schema of the JSON response (using the JSON Schema specification).

API testing is naturally asynchronous, which can make tests complex and unwieldy. Chakram fully exploits JavaScript promises, resulting in clear asynchronous tests.

Using BDD formatting and hooks, complex tests can be constructed with the necessary setup and tear down operations, all described in natural language for improved clarity and maintenance.

It also allows new assertions to be added to the framework, which is ideal for adding edge-case HTTP assertions or project-specific assertions.
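For instance, a minimal chakram test against the “me” endpoint might look like this (the demo URL and credentials are assumptions):

var chakram = require('chakram');
var expect = chakram.expect;

describe('DHIS2 me endpoint', function () {
  it('returns the current user as JSON', function () {
    var response = chakram.get('https://play.dhis2.org/demo/api/me.json', {
      auth: { user: 'admin', pass: 'district' }
    });
    // several assertions can be queued against the same response promise
    expect(response).to.have.status(200);
    expect(response).to.have.header('content-type', /json/);
    return chakram.wait(); // resolves once all queued assertions have run
  });
});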

Chakram requires Node.js and npm to be installed. It is available as an npm module. Ideally, Chakram should be added to your testing project’s devDependencies. This can be achieved with the following command:

npm install chakram --save-dev

The Mocha test runner is used to run Chakram tests; it can be installed globally or as a development dependency. The following command installs Mocha globally:

npm install -g mocha

To run tests, simply call the Mocha command line tool. By default, this will run the tests located in the ‘test’ directory. Mocha can export the test results in many different formats, satisfying the majority of continuous integration platforms.

http://dareid.github.io/chakram/jsdoc/index.html

Frisby:

http://frisbyjs.com/

It is a JS framework. It does not use Chai or any other external assertion library; it comes with the built-in expectation methods below, and if you need any other assertions you have to pull in another library, which is a bit cumbersome, as you have to fit your code into the framework's modularity.

expectHeader

expectHeaderContains

expectJSON

expectJSONTypes

expectJSONLength

expectBodyContains

expectStatus

Frisby uses Jasmine to run its tests.
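For comparison, the same check in Frisby (0.x, Jasmine-based) might look like this; the URL and credentials are again assumptions:

var frisby = require('frisby');

frisby.create('GET the me endpoint')
  .get('https://play.dhis2.org/demo/api/me.json')
  // basic auth via a header, since the fluent API centres on expectations
  .addHeader('Authorization', 'Basic ' + new Buffer('admin:district').toString('base64'))
  .expectStatus(200)
  .expectHeaderContains('content-type', 'application/json')
  .toss();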

SuperTest: It is somewhat similar to the Chakram framework; it is used with Mocha to run the tests and Chai for assertions.

https://github.com/visionmedia/supertest
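A minimal SuperTest version of the same check might look like this (URL and credentials are assumptions):

var request = require('supertest');

describe('DHIS2 me endpoint', function () {
  it('returns the current user as JSON', function (done) {
    request('https://play.dhis2.org/demo')
      .get('/api/me.json')
      .auth('admin', 'district') // basic auth, via superagent
      .expect('Content-Type', /json/)
      .expect(200, done);
  });
});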

Thanks & Regards,

Nalinikanth M

Quality Analyst

Email: nalinim@thoughtworks.com
Telephone: +91 9052234588
ThoughtWorks

Jason P. Pickering
email: jason.p.pickering@gmail.com
tel: +46764147049