Hello fellows,
I'm writing to you in search of advice. I have a decision to make and would appreciate your input. We have a product developed as a web application. The software performs well and can handle many users. We have no problems with either memory or database performance (at least so far).
The problem I am facing is with connection bandwidth and latency. Our biggest client has offices in many countries, and some of their workstations currently have very slow connections. Our interface is pure JavaScript (somewhat similar to GWT), and a lot of JavaScript has to be downloaded. Even with caching, access is slow because of the poor connection. What I would like to do is cluster our application so that, at the client's slow sites, a local server provides a fast, nearly local connection.
The architecture of our application is rather "simple": DWR + Google Guice 2 + Hibernate 3.2 + Hibernate Search. We distribute it bundled with Jetty 6 rather than installing it in the client's web container.
Almost all of our services have a public interface, which is visible for DWR to call; a private interface, which is used to connect the different services to one another; and an implementation class that implements both interfaces. Guice handles the dependency injection for these services.
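To make the layout concrete, here is a minimal sketch of that pattern. All names (UserServicePublic, UserServiceInternal, UserServiceImpl) are made up for illustration; the Guice bindings shown in the comment are the usual way to wire both interfaces to one implementation, not our actual module code.

```java
// Public interface: the only surface DWR is allowed to call remotely.
interface UserServicePublic {
    String getDisplayName(long userId);
}

// Private interface: extra operations that sibling services use internally.
interface UserServiceInternal {
    boolean hasPermission(long userId, String permission);
}

// One implementation class backs both interfaces. In a Guice module you
// would bind both interface types to the same implementation, e.g.:
//   bind(UserServicePublic.class).to(UserServiceImpl.class);
//   bind(UserServiceInternal.class).to(UserServiceImpl.class);
class UserServiceImpl implements UserServicePublic, UserServiceInternal {
    @Override
    public String getDisplayName(long userId) {
        return "user-" + userId;
    }

    @Override
    public boolean hasPermission(long userId, String permission) {
        return "read".equals(permission);
    }
}

public class ServiceLayoutSketch {
    public static void main(String[] args) {
        UserServiceImpl impl = new UserServiceImpl();
        UserServicePublic pub = impl;        // what DWR sees
        UserServiceInternal internal = impl; // what other services see
        System.out.println(pub.getDisplayName(42));
        System.out.println(internal.hasPermission(42, "read"));
    }
}
```

The point of the split is that DWR only ever exposes the public interface to the browser, while the richer private interface stays server-side.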
At some points we hold objects in memory. This is information that almost never changes; when a write operation is needed, we write both to the entity and to the in-memory copy. It consists of country information, organizational structure information, and permission profiles. We have a really complex permission algorithm, which is why we keep the profiles in memory.
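For clarity, the write-through pattern described above looks roughly like this. This is only a sketch: CountryCache, saveCountry, and the persist() stub are hypothetical stand-ins for our real Hibernate-backed services, not actual code from the product.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Write-through cache sketch: reads are served from memory; writes go to
// both the persistent entity and the in-memory map, keeping them in sync.
public class CountryCache {
    private final Map<Long, String> countriesById = new ConcurrentHashMap<>();

    // Persist the entity first, then refresh the cached copy, so the
    // in-memory state never gets ahead of the database.
    public void saveCountry(long id, String name) {
        persist(id, name);            // e.g. a Hibernate saveOrUpdate()
        countriesById.put(id, name);  // keep the in-memory copy in sync
    }

    public String getCountry(long id) {
        return countriesById.get(id); // reads never touch the database
    }

    private void persist(long id, String name) {
        // Hibernate call would go here; omitted in this sketch.
    }

    public static void main(String[] args) {
        CountryCache cache = new CountryCache();
        cache.saveCountry(1L, "Brazil");
        System.out.println(cache.getCountry(1L));
    }
}
```

This per-JVM map is exactly what makes clustering tricky: once the application runs on more than one node, a write on one node must somehow invalidate or update the copies held on the others.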
My question to you is: how can I cluster this scenario? Which approach would you take? Please keep in mind that my biggest concern is time + human resources (= money).
If you need more information, please do ask.
Thanks for your time, Guilherme