Creating Sync between Linode (External Server) and Local Server

Dear All,
Sierra Leone wants to finally migrate to an online server (an external
server hosted outside the Ministry), but we would like to create a daily
backup of that server locally in case anything goes wrong.
My questions:

1. We need help with a script that can create a sync between the
External Server and the Local Server (at least twice a day)

2. Is there anything we should know, from past experience, about
hosting servers on the cloud?

Please feel free to share anything; I will be grateful to learn new
things about DHIS 2.

···

--
Regards,

Gerald

Hello,

Take a look at the features of this hosting provider: Why BAO Hosting? | BAO Systems (DHIS 2 hosting in the cloud, www.baosystems.com)


Sincerely

···

=========================
EKANI Guy

Cameroon

On Thursday, 18 December 2014 at 06:07, gerald thomas gerald17006@gmail.com wrote:


Hi Gerald

We tested this when I was in Sierra Leone and we were finding serious problems with bandwidth getting the data back to Sierra Leone.

So you are going to have to think carefully about when and how often to synch. Currently your database files are very small as you don’t have much data on your cloud server, but it will soon grow. I suspect “at least twice a day” sounds unrealistic.

The way I typically do it is to first create an account on the backup server. Make sure that the account running your dhis instance can log in to the backup server without a password by creating an ssh key pair and installing the public key on the backup server account. Then you can simply rsync the backups directory (e.g. /var/lib/dhis2/dhis/backups) to a directory on the backup server using cron. In fact, if you look in /usr/bin/dhis2-backup you will see that the commands are already there to do this, just commented out. This would synch with the backup server after taking the nightly backup.
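
For illustration, the one-time setup and the nightly push might look something like this (user names, host names and paths are placeholders, not from this thread, and the commented-out commands in /usr/bin/dhis2-backup will differ in detail):

    # one-time setup, run as the user that runs the dhis instance
    ssh-keygen -t rsa                      # create the ssh key pair (no passphrase)
    ssh-copy-id backup@backup.example.org  # install the public key on the backup account

    # nightly push, run from cron shortly after the backup has been taken, e.g.
    # 30 2 * * * rsync -az /var/lib/dhis2/dhis/backups/ backup@backup.example.org:/srv/dhis2/backups/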

This simple (and slightly lazy) setup has worked fine, and continues to work, in a number of places. But there are a number of reasons you might want to do something different.

(i) You might want to pull from the backup server rather than push to it, particularly as the backup server might not be as reliably online as the production server. This would require a slightly different variation on the above, but using the same principle of creating an ssh keypair and letting rsync do the work.
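
The pull variant is just the mirror image: the keypair lives on the backup server's account instead, and a cron job there might run something like this (host and paths again placeholders):

    # on the backup server, e.g. 0 3 * * * in the backup account's crontab
    rsync -az dhis@cloud.example.org:/var/lib/dhis2/dhis/backups/ /srv/dhis2/backups/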

(ii) rsync is a really great and simple tool, but it is sadly quite slow. If you are bandwidth-stressed and your database is growing it might not be the best solution. It works fine when bandwidth is not a critical issue. The trouble is it doesn’t really take into account the incremental nature of the data, i.e. you back up everything every time (besides the ephemeral tables like analytics, aggregated etc.). In which case you need to start thinking smarter, and maybe a little bit more complicated. One approach I have been considering (but not yet tried) is to make a copy of the metadata export every night and then just pull all the datavalues with a lastupdated greater than the last time you pulled. That is going to reduce the size of the backup quite considerably. In theory this is probably even possible to do through the API rather than directly through psql, which might be fine if you choose the time of day/night carefully. I’d probably do it with psql at the back end.
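
As a rough sketch of that untried idea (the datavalue table and lastupdated column are from the DHIS2 schema; the stamp file, paths and database name are illustrative):

    #!/bin/bash
    # incremental pull: export only datavalues changed since the last run
    STAMP=/var/lib/dhis2/last_pull
    SINCE=$(cat "$STAMP" 2>/dev/null || echo '1970-01-01')
    date --iso-8601=seconds > "$STAMP"   # stamp before querying: overlap is safer than a gap
    psql -d dhis2 -c "\copy (SELECT * FROM datavalue WHERE lastupdated > '$SINCE') TO '/tmp/datavalue_increment.csv' WITH CSV HEADER"
    gzip -f /tmp/datavalue_increment.csv
    # then ship the increment with rsync/scp as above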

So there are a few options. The first being the simplest and also the crudest. Any other thoughts?

Cheers

Bob

···

On 18 December 2014 at 05:07, gerald thomas gerald17006@gmail.com wrote:


Bob,
My suggestion:
All local servers must be on the 2.15 war file; then we create an SFTP
account on the cloud server, and we can use FileZilla from the local
server to download the backup from the cloud server.
I know it is crude, but it would help for now.
What is your take, Bob?

···

On 12/18/14, Bob Jolliffe <bobjolliffe@gmail.com> wrote:


I wouldn’t do it that way. FileZilla is a GUI app; you need something automated if you are to rely on the offsite backup.

If you want to use a GUI now, you could already use WinSCP on Windows, for example, or an ssh location in the Nautilus file browser on Linux, so there is no need for a separate SFTP setup.

···

On 18 December 2014 at 12:13, gerald thomas gerald17006@gmail.com wrote:


We’ve set this up for clients: a scripted, fully automated DB move from one server to another.

Typically we have scripts that will do a dump without analytics (e.g. pg_dump -T analytics* -T completeness* dhis2 | /usr/bin/gzip -c > /tmp/dhis2.backup.gz), so the backup is much smaller to move (since bandwidth is a consideration). We then transfer that securely using key pairs to the new server, where a script drops the existing DB (after backing it up) and imports the new one. We typically schedule this transfer as a cron job from a bash script, running nightly or during off-peak hours, since analytics also needs to be re-run on the local server once the DB is moved.

There are many ways to script this: rsync can work; pg_dump can also back up on one machine and restore to another (but we highly recommend keys to keep it secure); scp; etc.
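
Under the same assumptions (a database named dhis2, key-based ssh already in place, and a hypothetical owner role named dhis), one nightly cycle of this could be sketched roughly as:

    # on the cloud server: dump without the generated tables, compress, push
    pg_dump -T 'analytics*' -T 'completeness*' dhis2 | gzip -c > /tmp/dhis2.backup.gz
    scp /tmp/dhis2.backup.gz backup@local.example.org:/srv/dhis2/incoming/

    # on the local server (its own cron job): keep the old DB, then swap in the new one
    pg_dump dhis2 | gzip -c > /srv/dhis2/pre-restore-$(date +%F).gz
    dropdb dhis2 && createdb -O dhis dhis2
    gunzip -c /srv/dhis2/incoming/dhis2.backup.gz | psql -d dhis2
    # analytics must then be re-run from within DHIS 2 on the local server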

Steffen Tengesdal

BAO Systems

···

On Dec 18, 2014, at 7:13 AM, gerald thomas gerald17006@gmail.com wrote:


Hi Steffen

That makes sense and is pretty close to what I am suggesting as well. Do you have any thoughts about taking incremental backups of the datavalues tables? Even with the analytics and the like removed, some of these databases start to get quite big.

Bob

···

On 18 December 2014 at 12:27, Steffen Tengesdal steffen@tengesdal.com wrote:



Hi Gerald,

As Bob pointed out, FileZilla is a GUI tool and it does not support scheduling of downloads. Your local server should not have a GUI on it if it is a production system. Is your local host a Linux system? If so, you can create a simple bash script on the localhost system that uses the sftp or scp command line to connect and download a backup. A script for that would not be very complicated.
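
For instance, a minimal sketch of such a script, assuming key-based ssh from the local host to the cloud server (user names, host names and paths are placeholders):

    #!/bin/bash
    # pull the latest backups from the cloud server; schedule with cron, e.g.
    # 0 4 * * * /usr/local/bin/pull-dhis2-backup
    scp dhis@cloud.example.org:/var/lib/dhis2/dhis/backups/*.gz /srv/dhis2/backups/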

Steffen

···

On Dec 18, 2014, at 7:47 AM, gerald thomas gerald17006@gmail.com wrote:


Hi All,

I think there are two concerns being discussed here.

  1. Making sure there is a reliable backup in case something goes wrong.

The first problem is pretty straightforward: one can create another instance in another region, with another provider, or locally, then schedule a regular backup to that server. Though I don’t recommend that the local server actively run DHIS 2, because any changes made to that server will be lost on the next update from the cloud instance. Merging DBs is a difficult problem and causes more headache than it is worth.

Depending on how far back you’d like your backups to go, this will start to consume a lot of disk space.

If the cloud server goes down, you can be assured that your data is safe, because you’ll have a copy of the database either on another cloud server or locally.

Incremental backups can be good for low bandwidth, but my concerns are restore time, and that if one of the increments is corrupted it can cause a lot of problems.

Some cloud providers also offer storage/backup solutions that can address this concern.

  2. Failover in the event the cloud server goes down.

This is a more complex problem and can be addressed by having standby servers in different regions. This allows for failover in the event of an outage, but it has to be carefully planned and starts to get expensive, as you’ve essentially doubled or tripled the number of instances/servers you’d need available. It also requires careful planning to make sure there is a clear failover plan, in addition to a clear plan to restore to the initial setup.

···

Executive summary

  1. Reliable backups are pretty straightforward and can be cost-effective.

  2. Failover can be addressed, but it is a complex problem and starts to get expensive.

Lastly, and most importantly, test on a regular basis to make sure that you are able to restore from your backups in the event of a failure.

Thanks,

Dan

Dan Cocos
BAO Systems | www.baosystems.com

T: +1 202-352-2671 | skype: dancocos

On Dec 18, 2014, at 7:53 AM, Steffen Tengesdal steffen@tengesdal.com wrote:


I think Steffen put his finger on it when he said that the backup should be restored (and hence tested) as part of the same scripted operation. But you make a good point about not having a dhis2 instance running live against that database, as it would disturb the integrity of the backup.

It’s also important to have a notion of generations of backup. If you just have the production database and the backup, then when things go bad on the production server you don’t want to overwrite your good backup with a bad one.

You can’t keep daily backups forever as you will rapidly run out of space or budget. My preference is to keep:

6 days of daily backups

6 weeks of weekly backups

some number of monthly backups

etc

This way as you roll into the future your disk usage doesn’t grow too rapidly.
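
One hedged way to get that rollover with plain cron and find (the directory layout and retention numbers here are illustrative):

    #!/bin/bash
    # run daily after the backup arrives; promote, then prune
    BK=/srv/dhis2/backups
    TODAY=$(date +%F)
    [ "$(date +%u)" = 7 ]  && cp "$BK/daily/dhis2-$TODAY.gz" "$BK/weekly/"   # Sundays
    [ "$(date +%d)" = 01 ] && cp "$BK/daily/dhis2-$TODAY.gz" "$BK/monthly/"  # 1st of month
    find "$BK/daily"  -name '*.gz' -mtime +6  -delete    # keep 6 days of dailies
    find "$BK/weekly" -name '*.gz' -mtime +42 -delete    # keep 6 weeks of weeklies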

···

On 18 December 2014 at 13:27, Dan Cocos dan@dancocos.com wrote:


One way of solving the problem of backups and disk space is to push your backups to Amazon Glacier. That way, you can be sure that you have a “secure” offsite backup someplace. Once it is on Glacier, you can download the backup to your backup machine. From a security standpoint it might be better as well, as you do not need direct interaction between the backup server and the production cloud server. Of course, it costs more, but you solve the problem of having a secure backup away from both the production and backup servers. Currently, at $0.01 per gigabyte per month, it is likely much cheaper than what it would cost you in-house to worry about this.
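
For what it’s worth, a one-line sketch with the AWS CLI (the vault name is illustrative; note that Glacier retrievals take hours, so this suits archival rather than quick restore):

    # upload last night's dump to a Glacier vault (assumes the aws cli is configured with credentials)
    aws glacier upload-archive --account-id - --vault-name dhis2-backups \
        --archive-description "dhis2 $(date +%F)" --body /tmp/dhis2.backup.gz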

Regards,

Jason

···

On Thu, Dec 18, 2014 at 2:45 PM, Bob Jolliffe bobjolliffe@gmail.com wrote:


Jason P. Pickering
email: jason.p.pickering@gmail.com
tel:+46764147049

Bob,

Sorry about the GUI application I recommended. I was only trying to make a point to explain my idea, and I was also thinking of those regional servers (because they are using desktop Ubuntu).

Bob,

Your account is still there and you can ssh in.

Sorry, all, for my late responses, but I will be giving more input in an hour or two from now.

Regards,

Gerald

···

On 18 December 2014 at 13:27, Dan Cocos dan@dancocos.com wrote:

Hi All,

I think there are two concerns being discussed here.

  1. Making sure there is a reliable backup in case something goes wrong.

The first problem is pretty straight forward, one can create another instance in another region, another provider or locally. Then schedule a regular backup to that server. Though I don’t recommend that the local actively run DHIS 2 because any changes made to that server will be lost on the next update from the cloud instance. Merging DBs is a difficult problem and causes more headache than it is worth.

Depending on how far back you’d like your backups to go this will start to consume a lot of disk space.

If the cloud server goes down you can be assured that your data is safe because you’ll have a copy of the database either on another cloud server or locally.

Incremental backups can be good for low bandwidth but my concerns are restore time and if one of the increments is corrupted it can cause a lot of problems.

Some cloud providers also offer storage/backup solutions that can address this concern.

  1. Failover in the event the cloud server goes down.

This is a more complex problem and can be addressed by having stand by servers in different regions, this will allow for failover in the event of an outage but has to be carefully planned and starts to get expensive as you’ve essentially doubled or tripled the number of instances/servers you’d need available. It also requires careful planning to make sure there is clear failover plan in addition to a clear plan to restore to the initial setup.

Executive summary

  1. Reliable backups are pretty straightforward and can be cost effective.
  2. Failover can be addressed, but it is a complex problem and starts to get expensive.

Lastly, and most importantly, test on a regular basis to make sure that you are able to restore from backups in the event of a failure.

Thanks,

Dan

Dan Cocos
BAO Systems | www.baosystems.com

T: +1 202-352-2671 | skype: dancocos

On Dec 18, 2014, at 7:53 AM, Steffen Tengesdal steffen@tengesdal.com wrote:

Hi Gerald,

As Bob pointed out, FileZilla is a GUI tool and it does not support scheduling of downloads. Your local server should not have a GUI on it if it is a production system. Is your local host a Linux system? If so, you can create a simple bash script on the localhost system that uses the sftp or scp command line to connect and download a backup; see the sketch below. A script for that would not be very complicated.
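A minimal sketch of such a script (assuming key-based ssh login is already set up for a backup account, and that the dumps live in /var/lib/dhis2/dhis/backups as mentioned earlier in the thread; the account and host names below are placeholders):

#!/bin/bash
# pull-dhis2-backup.sh: download the DHIS2 dumps from the cloud server.
REMOTE=backup@dhis.example.org           # placeholder account and host
REMOTE_DIR=/var/lib/dhis2/dhis/backups   # backup directory on the cloud server
LOCAL_DIR=/var/backups/dhis2

mkdir -p "$LOCAL_DIR"
# -p preserves modification times; the quoted glob is expanded on the remote side
scp -p "$REMOTE:$REMOTE_DIR/*" "$LOCAL_DIR/"

Scheduled twice a day from cron, it would answer Gerald’s original request, e.g.:

0 2,14 * * * /usr/local/bin/pull-dhis2-backup.sh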

Steffen

On Dec 18, 2014, at 7:47 AM, gerald thomas gerald17006@gmail.com wrote:

Bob,
My suggestion: all local servers must be on the 2.15 war file; then we create an SFTP
account on the cloud server, and we can use FileZilla from the local
server to download the backup from the cloud server.
I know it is crude, but it will help for now.
What is your take, Bob?

On 12/18/14, Bob Jolliffe bobjolliffe@gmail.com wrote:

[…]

So there are a few options, the first being the simplest and also the
crudest. Any other thoughts?

Cheers
Bob
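For concreteness, the simplest (push) option above in sketch form; the host and account names are placeholders, and the commented-out commands in /usr/bin/dhis2-backup remain the real reference:

# one-time setup, as the account running DHIS2 on the production server:
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     # key pair without a passphrase, so cron can use it
ssh-copy-id backup@backup.example.org        # placeholder account on the backup server

# then push the dumps after each nightly backup, e.g. from cron:
rsync -az /var/lib/dhis2/dhis/backups/ backup@backup.example.org:/srv/dhis2-backups/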


Dear Jason

Please advise on the import of option sets. I find the option set itself is imported but its members (the options) are not when I use CSV import. Any ideas? See the example below.

name,uid,code,option
Color,"Blue"
Color,"Green"
Gender,"Female"
Gender,"Male"

Regards

Simon Muyambo


I think maybe you have to have something in the code field

···

On 19 Dec 2014 02:54, “Simon Muyambo” smmuyambo@gmail.com wrote:

[…]

Hi Simon

I think Knut is right. You must have a code, and it must be unique across all options (not only the ones in the option set you are importing).
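For illustration only (this is a guess rather than a recipe; the authoritative column layout is the CSV metadata import section of the user guide for your version), a file that repeats a single code per option set and keeps every option name unique across the whole system might look like:

name,uid,code,option
"Colorsm",,"COLORSM","Blue2"
"Colorsm",,"COLORSM","Green2"
"Gendersm",,"GENDERSM","Female2"
"Gendersm",,"GENDERSM","Male2"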

Regards, Jason

···

On Dec 19, 2014 6:02 AM, “Knut Staring” knutst@gmail.com wrote:

I think maybe you have to have something in the code field

[…]

Sorry about the previous mail.

Dear Bob,
I want to set up a Test Server so that we can test the various
scenarios highlighted and see which one will work best for Sierra
Leone. Basically the Test Server will be acting as our Central Server
for this test case. I will send you all the information once
the setup has been completed.
It is better to test something than to do nothing.

Thanks in advance for your cooperation.

···

On 12/18/14, gerald thomas <gerald17006@gmail.com> wrote:

[…]

--
Regards,

Gerald

Dear Knut and Jason

Thanks for the advice. I have run some tests with unique codes, but I am not sure what the errors below mean.

name,uid,code,option
"Colorsm",1b,"Blue1"
"Colorsm",2g,"Green1"
"Gendersm",3f,"Female1"
"Gendersm",4m,"Male1"

Import summary

Import count: 2 Imported, 2 Updated, 0 Ignored

Type Summary

Type       Imported  Updated  Ignored
OptionSet  2         2        0

Conflicts

OptionSet Gendersm: Unknown reference to IdentifiableObject{id=0, uid='Ja0hb0d1XGK', code='Female1', name='Female1', created=Fri Dec 19 21:34:19 CAT 2014, lastUpdated=Fri Dec 19 21:34:19 CAT 2014} (Option) on object IdentifiableObject{id=0, uid='qvy5Iof8De3', code='3f', name='Gendersm', created=null, lastUpdated=null} (OptionSet).

OptionSet Colorsm: Unknown reference to IdentifiableObject{id=0, uid='jPyLUhETxWH', code='Green1', name='Green1', created=Fri Dec 19 21:34:19 CAT 2014, lastUpdated=Fri Dec 19 21:34:19 CAT 2014} (Option) on object IdentifiableObject{id=0, uid='JGfUphTsLuk', code='2g', name='Colorsm', created=null, lastUpdated=null} (OptionSet).

OptionSet Gendersm: Unknown reference to IdentifiableObject{id=0, uid='mJsHyly4UoH', code='Male1', name='Male1', created=Fri Dec 19 21:34:19 CAT 2014, lastUpdated=Fri Dec 19 21:34:19 CAT 2014} (Option) on object IdentifiableObject{id=0, uid='IxdnSBPyo9y', code='4m', name='Gendersm', created=null, lastUpdated=null} (OptionSet).

OptionSet Colorsm: Unknown reference to IdentifiableObject{id=0, uid='pSmhTjz0yIS', code='Blue1', name='Blue1', created=Fri Dec 19 21:34:19 CAT 2014, lastUpdated=Fri Dec 19 21:34:19 CAT 2014} (Option) on object IdentifiableObject{id=0, uid='MVHPvOLojgd', code='1b', name='Colorsm', created=null, lastUpdated=null} (OptionSet).

Regards

Simon Muyambo

···

From: Knut Staring [mailto:knutst@gmail.com]
Sent: 19 December 2014 07:03
To: Simon Muyambo
Cc: Jason Pickering; dhis2-users@lists.launchpad.net
Subject: Re: [Dhis2-users] inport meta data to option set

I think maybe you have to have something in the code field

[…]

I think you need to have quotes around any text strings, including the codes

···

On Fri, Dec 19, 2014 at 8:40 PM, Simon Muyambo smmuyambo@gmail.com wrote:

[…]

Knut Staring

Dept. of Informatics, University of Oslo

Liberia: +231 770 496 123 or +231 886 146 381

Norway: +4791880522

Skype: knutstar

http://dhis2.org

Dear Knut

I have created an option set and exported it as JSON. I then deleted the option set and tried to import it again. Below are the same errors I am getting with the CSV file: it never seems to be able to import the members of the option set.

Current user: admin
Version: 2.17
Build revision: 17561
Build date: 2014-11-21 12:21

{
  "created": "2014-12-20T23:44:06.855+0000",
  "optionSets": [{
    "id": "etvtSKUhoSZ",
    "name": "prog",
    "created": "2014-12-20T23:42:20.447+0000",
    "lastUpdated": "2014-12-20T23:43:21.551+0000",
    "options": [
      { "id": "v0DNS4VKf40", "code": "g1", "name": "George",
        "created": "2014-12-20T23:43:07.726+0000", "lastUpdated": "2014-12-20T23:43:07.726+0000" },
      { "id": "Ghi9DN5UYfu", "code": "t1", "name": "Tom",
        "created": "2014-12-20T23:43:21.550+0000", "lastUpdated": "2014-12-20T23:43:21.550+0000" }
    ],
    "version": 1
  }]
}

Import summary

Import count: 1 Imported, 0 Updated, 0 Ignored

Type Summary

Type       Imported  Updated  Ignored
OptionSet  1         0        0

Conflicts

OptionSet prog: Unknown reference to IdentifiableObject{id=0, uid='v0DNS4VKf40', code='g1', name='George', created=Sun Dec 21 01:43:07 CAT 2014, lastUpdated=Sun Dec 21 01:43:07 CAT 2014} (Option) on object IdentifiableObject{id=3, uid='etvtSKUhoSZ', code='null', name='prog', created=Sun Dec 21 01:42:20 CAT 2014, lastUpdated=Sun Dec 21 02:09:50 CAT 2014} (OptionSet).

OptionSet prog: Unknown reference to IdentifiableObject{id=0, uid='Ghi9DN5UYfu', code='t1', name='Tom', created=Sun Dec 21 01:43:21 CAT 2014, lastUpdated=Sun Dec 21 01:43:21 CAT 2014} (Option) on object IdentifiableObject{id=3, uid='etvtSKUhoSZ', code='null', name='prog', created=Sun Dec 21 01:42:20 CAT 2014, lastUpdated=Sun Dec 21 02:09:50 CAT 2014} (OptionSet).

Regards

Simon Muyambo

···

From: Knut Staring [mailto:knutst@gmail.com]
Sent: 19 December 2014 21:49
To: Simon Muyambo
Cc: Jason Pickering; dhis2-users@lists.launchpad.net
Subject: Re: [Dhis2-users] inport meta data to option set

I think you need to have quotes around any text strings, including the codes

[…]