Amazon EC2 AMI

More cool stuff with Amazon Web Services. In my latest experiment with AWS, I have been using “Elastic Beanstalk”. It is a little bit strange to get your head around, but here it is…

http://aws.amazon.com/elasticbeanstalk/

So, basically, what you do is upload a WAR file. AWS then deploys it to a selected environment and creates auto-scaling triggers, elastic load balancers, and all of that stuff that most people (including me!!) have no idea about. The basic idea is that as demand on the app grows, you need to scale it, and “Elastic Beanstalk” does all of this for you. It costs nothing more than the normal services on Amazon and takes care of the auto-scaling.
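For what it is worth, it looks like the upload-and-deploy step could also be scripted with the AWS SDK for Java rather than clicking through the console. A minimal sketch only (untested); the bucket, application name, environment name, version label and credentials below are all made up, and it assumes the Beanstalk application and environment already exist:

```java
import java.io.File;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient;
import com.amazonaws.services.elasticbeanstalk.model.CreateApplicationVersionRequest;
import com.amazonaws.services.elasticbeanstalk.model.S3Location;
import com.amazonaws.services.elasticbeanstalk.model.UpdateEnvironmentRequest;
import com.amazonaws.services.s3.AmazonS3Client;

public class DeployWar {
    public static void main(String[] args) {
        BasicAWSCredentials creds = new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY");

        // 1. Upload the WAR to S3 so Beanstalk can pick it up.
        AmazonS3Client s3 = new AmazonS3Client(creds);
        s3.putObject("my-deploy-bucket", "dhis2-v1.war", new File("dhis.war"));

        // 2. Register it as a new application version...
        AWSElasticBeanstalkClient eb = new AWSElasticBeanstalkClient(creds);
        eb.createApplicationVersion(new CreateApplicationVersionRequest()
                .withApplicationName("dhis2")
                .withVersionLabel("v1")
                .withSourceBundle(new S3Location("my-deploy-bucket", "dhis2-v1.war")));

        // 3. ...and point the running environment at it. Beanstalk handles the
        //    Tomcat deployment, the load balancer and the auto-scaling group.
        eb.updateEnvironment(new UpdateEnvironmentRequest()
                .withEnvironmentName("dhis2-env")
                .withVersionLabel("v1"));
    }
}
```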

I assume it would be possible somehow to embed the connection properties for a database somewhere inside the WAR file? You do not have direct access to the environment, so there is no way to specify a database connection through the hibernate.properties file after deployment, but it seems it should be possible to do this somehow.
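One option (untested on my side, so take it as a guess) would be to bundle a hibernate.properties on the classpath inside the WAR itself, for example under WEB-INF/classes, pointing straight at the RDS endpoint. Something along these lines, where the endpoint, database name and credentials are all placeholders:

```
# Sketch of a hibernate.properties packaged inside the WAR (e.g. WEB-INF/classes).
# The endpoint, database name and credentials below are placeholders.
hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect
hibernate.connection.driver_class = org.postgresql.Driver
hibernate.connection.url = jdbc:postgresql://mydb.xxxxxxxx.us-east-1.rds.amazonaws.com:5432/dhis2
hibernate.connection.username = dhis
hibernate.connection.password = secret
```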

So, basically, all you have to do is upload a WAR file, and boom… you are in the cloud, with basically nothing to do in terms of configuration.

Check it out here:

http://dhis2.elasticbeanstalk.com/

This is a really stripped-down WAR that does nothing. I was only going to use it for a fleet of data entry interfaces to a cloud-backed Amazon RDS datasource, but I first need to figure out how to specify the connection to the DB inside the WAR file itself.

Regards,
Jason


On Tue, Mar 15, 2011 at 10:14 AM, Jason Pickering jason.p.pickering@gmail.com wrote:

Another experiment I conducted over the weekend was a hybrid approach. Linode seems to be quite good at being persistent and is relatively cheap. I think the drawback of it is that it is not at all as easily scalable as Amazon. I tried a setup where I used a Linode as the backend DB and then a cluster of AWS instances (which acted as the DHIS2 frontend) connected to the remote PostgreSQL database (running on Linode). Obviously, latency is an issue here, as packets have to get out of Amazon and over to Linode and back, but I guess these pipes are pretty big.
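Out of curiosity, the Amazon-to-Linode round trip could be measured directly from one of the frontend instances with a trivial JDBC probe. A minimal sketch, assuming the PostgreSQL JDBC driver is on the classpath; the hostname and credentials are made up:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LatencyProbe {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        // Placeholder hostname and credentials for the Linode-hosted PostgreSQL.
        String url = "jdbc:postgresql://mylinodebox.example.com:5432/dhis2";
        Connection con = DriverManager.getConnection(url, "dhis", "secret");
        Statement st = con.createStatement();
        for (int i = 0; i < 10; i++) {
            long start = System.nanoTime();
            st.execute("SELECT 1"); // one full round trip out of EC2 and back
            long micros = (System.nanoTime() - start) / 1000;
            System.out.println("round trip " + (i + 1) + ": " + micros + " microseconds");
        }
        st.close();
        con.close();
    }
}
```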

The experiment worked quite well. Again, I am not really sure whether this architecture would be advantageous, but perhaps it would be a simple way to scale capacity up and down depending on peaks of usage (for instance during the data entry period). It would be nice to test it if we could. Did anyone ever come up with a way to do load testing on DHIS2? I heard rumors about Selenium or JMeter, but I am not really sure if there is anything out there.
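On the load testing point, even a crude hand-rolled client gives a first impression of throughput before reaching for the real tools. A minimal sketch, nothing DHIS2-specific; the URL, thread count and request count are arbitrary:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.CountDownLatch;

public class CrudeLoadTest {
    public static void main(String[] args) throws Exception {
        final String target = "http://dhis2.elasticbeanstalk.com/"; // placeholder URL
        final int threads = 20;
        final int requestsPerThread = 50;
        final CountDownLatch done = new CountDownLatch(threads);

        long start = System.currentTimeMillis();
        for (int t = 0; t < threads; t++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        for (int i = 0; i < requestsPerThread; i++) {
                            HttpURLConnection con =
                                    (HttpURLConnection) new URL(target).openConnection();
                            con.getResponseCode(); // fire the request, wait for the status line
                            con.disconnect();
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    } finally {
                        done.countDown();
                    }
                }
            }).start();
        }
        done.await();

        long elapsed = System.currentTimeMillis() - start;
        int total = threads * requestsPerThread;
        System.out.println(total + " requests in " + elapsed + " ms ("
                + (1000L * total / elapsed) + " req/s)");
    }
}
```

For anything more serious, JMeter in non-GUI mode (something like jmeter -n -t plan.jmx -l results.jtl) against a recorded test plan would be the obvious next step.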

Regards,
Jason

On Sun, Mar 13, 2011 at 2:43 PM, Jason Pickering jason.p.pickering@gmail.com wrote:

On Sun, Mar 13, 2011 at 2:18 PM, Jo Størset storset@gmail.com wrote:

Took it off-list, but maybe others are also interested?

Oops.

On 13 March 2011 at 10:54, Jason Pickering wrote:

Looking more into costs, they seem to be quite significant. Costs are calculated per instance-hour: if it is up and running, it is billed. Testing on a micro instance showed that performance is unacceptably slow. Scaling up to an instance with 17 GB of memory improved things significantly. Latency with the RDS service seems significant, but that could be related to the relatively small size of the database (5 GB). Keeping an instance up and running 24x7 for a month will cost you several hundred bucks, it seems, significantly more than Linode.

How did you go about calculating this? Linode is comparable to running EC2 without RDS, I guess. I get $87.84/month for a one-year large reserved instance, that is.

There is a cost calculator… http://calculator.s3.amazonaws.com/calc5.html

and I was reading this: https://forums.aws.amazon.com/message.jspa?messageID=114409

What - Linode - EC2
Price - $159.95 - $87.84
Memory - 4096 MB - 7.5 GB
Disk - 128 GB - 850 GB
Processing - ? - 4 EC2 Compute Units (1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor)
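As a rough sanity check on the instance-hour arithmetic (the rates here are illustrative, not taken from the price list): a monthly figure is simply hourly rate × hours in the month, so at, say, $0.12 per instance-hour you get roughly 0.12 × 732 ≈ $87.84 for a month of usage, with any one-time reservation fee for a reserved instance coming on top of that. A noticeably higher on-demand rate is what pushes a 24x7 instance into the “several hundred bucks” range mentioned above.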

It is a bit difficult to compare, but I’m not sure Amazon doesn’t make sense even for stand-alone instances. And for flexibility and services offered, especially if running several services in coordination, it certainly seems to make sense. While a micro instance has “low” IO performance, a large instance has “high” IO performance, so I think your latency issues might go away as well…

The interesting thing is that the micro instances with the RDS backend actually work quite well for data entry, which is the main use case for me when considering Amazon (given various regulatory issues). I am thinking that if we created a stripped-down version of the DHIS2 WAR, which consisted only of the stuff needed by data entry personnel, micro instances, which are easily created and load balanced, could serve as a powerful way to scale up and down based on demand. This is of course possible with Linode as well, but it is just so easy to do on Amazon. I doubt this is going to support things like the data mart, import and other memory-hungry tasks, but it is simple to create a new (temporary) instance which could be used for these “heavier” tasks.
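To make the “temporary instance for the heavier tasks” idea concrete, here is a rough sketch using the AWS SDK for Java to start an instance from an AMI and terminate it again once the job is done; the AMI id, instance type and credentials are placeholders:

```java
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.TerminateInstancesRequest;

public class TemporaryWorker {
    public static void main(String[] args) throws Exception {
        AmazonEC2Client ec2 = new AmazonEC2Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // Launch one large instance from the (placeholder) DHIS2 AMI.
        String instanceId = ec2.runInstances(new RunInstancesRequest()
                        .withImageId("ami-xxxxxxxx")
                        .withInstanceType("m1.large")
                        .withMinCount(1)
                        .withMaxCount(1))
                .getReservation().getInstances().get(0).getInstanceId();

        System.out.println("Started " + instanceId + " - run the heavy task, then...");

        // ...terminate it so it stops accruing instance-hours.
        ec2.terminateInstances(new TerminateInstancesRequest()
                .withInstanceIds(instanceId));
    }
}
```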

Am I missing any significant details?

I am not sure at this point. I think we need to do some testing.

If you want to test yourself, give me your Amazon WS customer ID, and I will make the AMI available to you.

Done. Obviously, there are some issues related to the RDS backend. This is obviously connected to my instance, which I assume you should have no authority for. It would be nice to have a stripped-down AMI with just Tomcat and HTTP (for the reverse proxy); it might perform better. It seems you can import virtual machines into Amazon as well, but I have not figured out that part yet. :-)

Regards,
Jason



Jason P. Pickering
email: jason.p.pickering@gmail.com
tel:+260974901293