If PostgreSQL is running out of memory during the analytics run, in my experience this usually happens while it is creating indexes. The obvious solution, of course, is to give it more RAM. But another workaround worth trying is to reduce the number of postgres workers (which might all be creating indexes simultaneously). This can make the job slower overall, but it will use less memory.
The relevant setting is *Number of database server CPUs* (System settings - DHIS2 Documentation).
Set it to something low (say 2) and see whether the analytics run completes without a memory failure. If it does, you can turn it up gradually until you find the sweet spot between performance and failure.
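If you want to watch the concurrency for yourself while a run is in progress, you can count the backends that are busy building indexes. This is only a diagnostic sketch to run in psql; the `ILIKE` match on the query text is an assumption about how the index builds show up in `pg_stat_activity`:

```sql
-- How many backends are currently building indexes?
SELECT count(*) AS index_builds
FROM pg_stat_activity
WHERE query ILIKE 'create index%';

-- Each such session can use up to maintenance_work_mem of RAM,
-- so peak usage is roughly index_builds * maintenance_work_mem.
SHOW maintenance_work_mem;
```

That rough product (concurrent builds times `maintenance_work_mem`) is why lowering the worker count trades speed for memory headroom.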
Running out of disk space is a different issue. I would need to know more about your environment to be sure how best to proceed.
PostgreSQL 13.4 is worth looking at if you are on an earlier version. Experience seems to be mixed: some things are slower and some are faster. But indexes are smaller, and people do report lower disk usage (as much as 50%). There are also good grounds to believe that PostgreSQL 13 is less prone to out-of-memory errors than earlier versions, though sometimes at a cost to performance if not properly configured.
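If you do trial PostgreSQL 13 on a test server, it is worth measuring the index footprint before and after the upgrade rather than guessing. A sketch; `analytics_2021` is a placeholder table name, substitute one of your own analytics tables:

```sql
-- Total size of all indexes on one table
SELECT pg_size_pretty(pg_indexes_size('analytics_2021'));

-- Or the ten largest indexes in the database
SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
WHERE relkind = 'i'
ORDER BY pg_relation_size(oid) DESC
LIMIT 10;
```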
But I would start by fiddling with the system setting. If you have a separate test server, you can have a go with PostgreSQL 13.
Let us know how you get on, and maybe give a bit more context on the system.