Every year seems to highlight another school of fashion in mobile-application architectures as the focus swings from fat client to thin client and back again. Each shift is accompanied by a sheaf of arguments purporting to prove why a particular architecture delivers the best network performance, a characteristic that is key to the user experience. Sometimes the culprits are vendors promoting the technology they happen to support that year, but just as often they are designers hoping for a quick fix to an inherently challenging problem. The reality is that no one architecture is "right." As in life, one size does not fit all.
One reason mobile applications are so interesting--and so challenging--is that, unlike previous delivery channels, wireless has seen many architectures play a variety of roles. The client-server model was originally pitched as offering a future of networked, distributed services providing discrete, componentized business functions integrated via clean, standardized interfaces (a picture remarkably similar to the pitch for today's Web services). The reality turned out to be, with a few honorable exceptions, fat clients exchanging SQL with remote databases.
Then along came the Internet and with it an entire generation of thin-client HTML interfaces, which have slowly grown richer over time. The latest wave is Web services, which will probably feature prominently in both server-to-server and client-to-server exchanges. Intriguingly, the short history of mobile devices in enterprise IT has rapidly recapitulated this history, having passed through its own fat client and thin client waves, and the current wave of misinformation about the relative network characteristics of fat and thin clients is largely driven by advocates for Web services on mobile devices.
Some have argued that rich clients using Web services are better than thin clients because they don't waste bandwidth transmitting both data and apps as HTML does, while thin-client proponents argue that their model is better because it only has to transmit screen updates, not all of the application data. Both arguments sound superficially plausible, suggesting that deeper examination is required.
In reality it's an oversimplification to believe that any single architecture is inherently better than another in this regard. A designer has to consider three dimensions--the overall architecture, the design of the specific app, and user behavior--to determine how efficiently a given application communicates and what the trade-offs are.
Consider a simple example: an application that provides the user with a list of flight times and allows searching and sorting. Suppose the user performs a search that identifies 100 flights, sorted by price. A simple thin-client application would transmit just enough data to fill the screen of a Wireless Application Protocol phone with the first four hits. Now the user decides to view by departure time: The application discards the displayed rows, asks the server to re-sort, and another four hits are delivered.
A more sophisticated application might transparently fetch the next four hits in a "cache behind" design, ready to display if the user scrolls down the list, thus reducing the apparent latency at the cost of sometimes fetching rows that aren't needed. In general, the thin-client application designer can trade latency for network efficiency by increasing the size of the cache. Make the cache too large and you waste network traffic fetching unneeded data. Make the cache too small and you make the user wait between pages.
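The "cache behind" design described above can be sketched as follows. This is a minimal illustration, not a real protocol: fetch_rows is a hypothetical stand-in for the server round trip, and the page and cache sizes match the four-row screen from the example.

```python
PAGE = 4    # rows that fit on the phone screen
CACHE = 4   # extra rows prefetched behind the visible page

def fetch_rows(offset, limit):
    # Hypothetical stand-in for the network call to the server;
    # here it just fabricates flight rows for illustration.
    return [f"flight-{i}" for i in range(offset, offset + limit)]

class ThinClientPager:
    def __init__(self):
        self.offset = 0
        # Fetch the first screen plus the cache in one round trip.
        self.rows = fetch_rows(0, PAGE + CACHE)
        self.rows_fetched = PAGE + CACHE

    def visible(self):
        # Only the first PAGE rows are ever displayed.
        return self.rows[:PAGE]

    def scroll_down(self):
        # Serve the next page from cache immediately (low latency),
        # then refill the cache behind it (possibly wasted traffic).
        self.offset += PAGE
        self.rows = self.rows[PAGE:]
        if len(self.rows) < PAGE + CACHE:
            need = PAGE + CACHE - len(self.rows)
            self.rows += fetch_rows(self.offset + len(self.rows), need)
            self.rows_fetched += need

pager = ThinClientPager()
print(pager.visible())   # first four hits
pager.scroll_down()      # next four appear with no fetch delay
print(pager.visible())
```

Setting CACHE larger trades more speculative traffic for fewer visible waits; setting it to zero recovers the simple thin client that fetches only on demand.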
By contrast, a simple rich-client application would fetch all 100 flights, a much larger data transfer than the thin client. If the user views just a few rows, then discards them in favor of a different search, that bandwidth was wasted. However, if the user repeatedly scrolls, re-sorts, or filters the data, the rich client doesn't need to keep fetching rows, and the total network usage may eventually be less than with the thin client.
As with the thin client, the application designer can take steps to reduce the network traffic by fetching rows on demand (sometimes called a lazy fetch). Taken to the limit, the rich client matches the thin client, fetching only what is to be displayed, but in so doing it sacrifices the ability to sort locally. Whether the thin client, thin client with cache, rich client, or rich client with lazy fetch ultimately produces less network traffic will depend on the users of the application. Despite what the hype might suggest, inserting Web services into the middle of the exchange doesn't change this fundamental point.
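The dependence on user behavior can be made concrete with a back-of-envelope tally of rows transferred, using the 100-hit, four-row-screen example. The function names and the two usage patterns below are illustrative assumptions, not measurements.

```python
TOTAL = 100   # hits returned by the search
PAGE = 4      # rows per screen

def thin_client_rows(sorts, pages_per_sort):
    # The server re-sorts on request, so the client re-fetches
    # every page it displays after every re-sort.
    return sorts * pages_per_sort * PAGE

def rich_client_rows(sorts, pages_per_sort):
    # All rows are fetched once up front; re-sorting and
    # scrolling are then purely local operations.
    return TOTAL

# A user who glances at one page and moves on: thin client wins.
print(thin_client_rows(1, 1), rich_client_rows(1, 1))

# A user who re-sorts ten times, viewing three pages each time:
# the rich client's one-time transfer is now the cheaper option.
print(thin_client_rows(10, 3), rich_client_rows(10, 3))
```

The crossover point depends entirely on how much the user re-sorts and scrolls, which is exactly why neither architecture is inherently more network-efficient.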
Ultimately, the choice between thin client and rich client shouldn't be determined by arguments about network round trips. More important criteria include availability (rich clients may offer offline functionality) and ease of deployment (no download or install to manage for markup applications), as well as basic capability (rich clients are, after all, richer). Resolving those trade-offs, as well as predicting the network behavior of the application, comes not from academic discussions but from getting close to actual users and understanding their requirements and their behaviors--or in one word, usability.
Carl Zetie is the VP of Giga Information Group