Why don't web browsers do this?
In the '80s, computers started instantly. They were READY to go the moment you turned them on.
Over the next few decades, people wanted to do more things and operating systems got slower to initialize. To solve this, OS and hardware manufacturers created hibernate and standby modes.
Now, many people have stopped using native applications and moved to the web. When I load Facebook or Gmail, it takes dozens of seconds to start up, and minutes over a slower connection. During this time:
- the source files for the application are loaded from the server,
- the source code is compiled and run,
- requests are made to retrieve the application state from the server, and
- the DOM is manipulated to present the state to the user.
Or, without any cooperation from standards bodies, browsers can do this RIGHT NOW and snapshot commonly used pages instead of discarding them when users close a tab. When the URL is re-entered, from the application's perspective it is just as if the machine went into standby and then resumed. The browser could take cookie expiration into account, or, to be totally safe, web pages could opt in with a meta tag.
That way you could hide the implementation entirely and just stream a remote view of Gmail running on the server to the client, like VNC with limited features.
Maybe tomorrow we will have a cloud environment where everyone logs in to their own cloud server running fast web pages, rather than a small desktop.
I would use this; the lack of it is the main reason I keep 30 or 40 browser tabs open at a time.
FWIW, Opera snapshots the DOM & co. to provide its instant Back & Forward feature. Parts of this are also saved in the session for the next time you start Opera or switch back to that session, but I'm not sure how much. Sorry.
While not very impressive by itself, what always amazed me is that if you went to another URL in the middle of the generation, you could always hit the back button and watch the generation resume from _exactly_ where you left off!
Web applications like GMail actually try very hard to subvert this natural behavior. It is this very subversion that the author is actually complaining about here.
Then again, just imagine how poorly this blog post would have been received if the author had made the correct claim that, "Google, for all their server-side prowess, can't seem to engineer web apps correctly."
Apache tells the browser that every CSS, JS, and image file doesn't expire for 10 years after access (so the user loads those files from cache).
Whenever a change is made to the CSS or JS files, they get run through a script using yui-compressor, which combines and minifies them.
Then, in the program that gets compiled, a version number based on the modified time gets appended to the filename (e.g. style.123434343.css). What that boils down to is: if the file is newer, the browser grabs it from the server instead of the cache.
Then Apache uses mod_rewrite to strip the appended version number and serve the real file, since no actual file with that name exists; the rule ignores any file or directory that does exist.
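The version-stamping trick above can be sketched in a few lines of Python; the function names and the regex are mine, not from any particular framework, and the stripping step stands in for what the mod_rewrite rule does on the server:

```python
import os
import re


def versioned_url(path):
    """Append the file's mtime to its name, e.g. style.css -> style.123434343.css."""
    mtime = int(os.path.getmtime(path))
    base, ext = os.path.splitext(path)
    return f"{base}.{mtime}{ext}"


def strip_version(url):
    """Server-side counterpart: drop the numeric stamp to find the real file."""
    return re.sub(r"\.\d+(\.\w+)$", r"\1", url)
```

Because the mtime changes whenever the file does, the browser sees a brand-new URL and bypasses its cache; unchanged files keep their old URL and stay cached for the full 10 years.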
Finally, most of the assets are gzipped when served, which in some cases can worsen performance depending on the file type, so do some testing.
If I wanted to go further, I could implement a manifest file and use local storage to keep some heavy files on the user's computer, but that spec is still newish, at least in terms of implementation.
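For reference, an HTML5 application cache manifest looks roughly like this (the filenames are hypothetical):

```
CACHE MANIFEST
# v1 -- bump this comment to force clients to refetch

CACHE:
style.css
app.js
logo.png

NETWORK:
*
```

The page opts in via `<html manifest="cache.manifest">`, after which the listed files are stored locally and served even without a connection.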
Other than that, you just have to optimize any database queries and tight loops in your code. Some web apps do load relatively fast. Here's hoping for faster broadband soon!
1. The source files for the application are loaded from the server

Not necessarily; it depends on the server settings and whether you already have the source in your cache. But usually you want the latest version of the website, because new data might rely on it, and it's usually better and less buggy. Downloading this is quite fast anyway, since it's static and can be minified into very few requests. If this is slow, it's because of a bad server setup or bad coding.
2. The source code is compiled and run
3. Requests are made to retrieve the application state from the server
Can be done already. Again, it depends on the server's cache settings. And again, you usually want the latest application state.
4. the DOM is manipulated to present the state to the user.
Negligible (except in IE)
Number 3 would be the heaviest operation here, especially if it involves images and the like, or if the server has a slow database. But otherwise, all 4 steps above can be done in under 250 ms on a decent connection, without any of the caching I mentioned above.
But I agree that as we move more and more apps to the web, it would certainly be nice to be able to just pop open an app to peek at old data without reloading all the new data. I mean, you don't want to wait a second every time you alt-tab between windows. So here I would say that web developers should take some responsibility and write applications that are cache-aware and can be made to start up without any connection to the server.
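The cache-aware startup being argued for boils down to a freshness check before touching the network. A minimal Python sketch, assuming only a `Cache-Control` header (real HTTP caching has many more rules; `is_fresh` is an illustrative name, not any library's API):

```python
import re
import time


def is_fresh(response_headers, fetched_at, now=None):
    """Return True if a cached response can be reused without contacting the server."""
    now = time.time() if now is None else now
    cc = response_headers.get("Cache-Control", "")
    if "no-store" in cc or "no-cache" in cc:
        return False
    m = re.search(r"max-age=(\d+)", cc)
    if m:
        # Fresh while the response's age is within its declared lifetime.
        return (now - fetched_at) < int(m.group(1))
    return False  # no explicit lifetime: revalidate with the server
```

With the "10 years" setting described above (`Cache-Control: max-age=315360000`), every repeat visit is served locally; without an explicit lifetime, the app must at least make a conditional request before it can show anything.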