A faster forum using accurate modelling techniques
I remember some thread about sporadic OzoneAsylum downtimes and slowdowns. Stuff happens, web applications have to withstand large numbers of users and such, but... there is something that can be done about it. I've learnt that planning is extremely important when it comes to software development, and the immense effort by TP has gone through so many changes, tips, ideas, etc. It has been such an awkward process that, while I fully trust his coding ability, I am wondering whether the grail is giving its full potential or not.

So, my idea here is to:

- lay down UML diagrams for the grail as it currently is (a bunch of hours; two or three max)
- correctly and comprehensively stress test the Asylum to identify bottlenecks (a one-hour test/log phase)
- relate the diagrams to the bottlenecks to reliably pinpoint areas of improvement

Such bottlenecks cannot be summed up as this or that procedure being too slow; I am talking about general software design here. All the things I used to investigate and test manually in the past can be forecast and planned with a proper use of modelling techniques. I am not calling the grail as it is into question, either, just wanting to optimize what can be optimized.

------------------------

A real-world example:

- it's a known fact that a db connection in a web app should be held as briefly as possible
- in general, this translates to web pages that release a connection back to the connection pool as soon as it is no longer in use

Workflow: user does action => action creates query => a connection is opened => query is committed => the connection is released

That's great. BUT:

- writing to a text file is in general faster than accessing a db
- therefore, SQL queries could be "packed" and cached in one big file
- every once in a while, a cron job would call a PHP script that would swallow the cached-queries file and commit the associated SQL queries (a rough sketch follows at the end of this post)

Workflow: users do a lot of actions => actions create text file entries => a connection is opened every once in a while => stored queries are committed => the connection is released

The global load would be rebalanced towards file storage and away from db storage and connections, and the global memory management would be improved.

This quick real-world sample is not flawless; one might wonder, "how do we go about making the cached data directly available to the public?" or things like that. I have answers, but I'll stop before this post gets too long to read.

The idea behind this whole thing: a faster forum using accurate modelling techniques.

Thoughts?
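For the spool-and-flush idea above, here is a minimal PHP sketch of what the two halves could look like. Everything in it is hypothetical, not part of the grail: the spool path, the DSN and credentials, and the function names are made up for illustration, and raw SQL lines stand in for whatever serialization would really be used.

[code]
<?php
// Hypothetical spool location; not a real grail path.
define('SPOOL_FILE', '/tmp/query_spool.txt');

// Request side: instead of opening a db connection per request,
// append the query to a flat file. FILE_APPEND | LOCK_EX makes
// each line-sized write atomic enough on most filesystems.
function spool_query($sql)
{
    file_put_contents(SPOOL_FILE, $sql . "\n", FILE_APPEND | LOCK_EX);
}

// Cron side: run periodically, e.g.
//   */5 * * * * php /path/to/flush_spool.php
function flush_spool()
{
    if (!file_exists(SPOOL_FILE)) {
        return; // nothing queued since the last run
    }

    // Rename first so concurrent requests start a fresh spool
    // while this batch is being processed.
    $batch = SPOOL_FILE . '.processing';
    rename(SPOOL_FILE, $batch);

    // One connection and one transaction for the whole batch.
    $db = new PDO('mysql:host=localhost;dbname=grail', 'user', 'pass');
    $db->beginTransaction();
    foreach (file($batch, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $sql) {
        $db->exec($sql);
    }
    $db->commit();
    unlink($batch);
}

flush_spool();
?>
[/code]

One caveat worth flagging in this sketch: exec()ing raw SQL strings from a file forfeits prepared-statement parameter binding, so a serious version would spool structured entries (serialized arrays, say) rather than literal query text.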