The Old Blog Archive, 2005-2009

What Is Boast, and What Isn’t

Two quick thoughts:

There is a certain kind of boast, or a personality given to boasting, that emphasizes “how good I have (or the thing has) become because of my choice of tools”. Like, “the photo is great because of the camera I bought”, or “the code has become faster, the lines fewer, the scalability better, because of the language I mastered”. The fact is there is no good tool or bad tool, only tools that fit the situation and tools that don’t. A masterpiece can be taken with a point-and-shoot camera (think of Nobuyoshi Araki), and talking about how many megapixels there are is a platitude.

There is a certain kind of travelogue that I stopped reading long ago. I seem to have lost interest in what is happening in a city now and am only interested in the history of a city over a certain period of time. Any travelogue that tries too hard to fit in too many now-happenings loses my readership for good. My criteria for good travel literature now: curiosity, slowness, acceptance of fateful encounters (or the lack thereof), and, the most important quality, silence.

The Plague Comes to Vista

Some Cangjie (倉頡, long mistransliterated as Changjei, an important Chinese input method) users reported that Vista screwed up this 30-plus-year-old method by requiring users to enter a whole phrase instead of sending out each character as it is completed. This adds an extra ENTER key press to finish a phrase, much to users’ annoyance.

Microsoft is just one of many multinationals that screw up CJK input methods from time to time. Many decisions are made from an engineering point of view, not a usability one. They are made out of technical superiority, not respect for established usage and history. This fits once again one of the theses of this blog: that a system is a historical construct, and we must respect that history. A usability fiasco is the direct result of engineering haughtiness.

In terms of the C of the CJK trio (or of the CJKV… quartet?), there are many more such cases, and it’s not just Traditional Chinese that is affected. An interesting antidote (or anticlimax) to the notion that “the whole world is learning Chinese” (actually, that’s Mandarin) and “Chinese is becoming the language of the 21st century”.

Why Redesign Is Hard

Another short entry. The short answer: because now you have to both observe the old behavior and respect the history.

Many system designers ignore the fact that a system is a historical construct. No, we are not talking about some 1970s French philosophy-cum-literary-criticism stuff. But truly, a system is not just designed. It is formed over time. Many systems, by the time they finally reach the stage where they become inadequate and even burdensome, were at first designed with a clear mission and structure. But many small changes have crept in. Many last-minute changes, or small tweaks (made so as to keep the system working), become the norm.

Any redesign must honor the old system’s total functionality, and that totality is very detailed and hard to figure out.

Of course another way of redesigning is to refine the mission, the problematics, and the solution/functionality of the system. But that usually involves a more time-consuming rethinking. Any rethinking done with haste or haughtiness is often doomed to produce a design even more inferior to the old one.

As a programmer, I’d say it’s the debugging, and the quick fixes and tweaks that come along with the debugging process, that lubricate a software system. Those lubricants are added over time. And that’s why I said a system is a historical construct. Many fixes and tweaks do not belong to the initial thinking (the design stage). They are not accommodated in an abstract model and often pop up only in the running stage. It’s tremendously challenging for a rethinking (rearchitecting, refactoring, reforming) to accommodate them.

The implication here is: be wary of the tweaks and quick fixes that will be part of a system’s life. A system that absorbs and digests those tweaks better will usually have a longer life (ironically, that also makes it harder to regenerate [ie. for the system to start anew], but that’s a topic for another day).

Against Toolcentrism

This is going to be a short entry. Lately I’ve found a pattern at a number of places I’d previously worked. I call it the tendency to “use more tools to solve problems created by tools”. It goes like this: we needed groupware, but nobody really used it, so we needed to arrange training sessions. But it was hard to resolve schedule collisions, so we needed another group calendar (in some cases, a plug-in for the aforementioned groupware). Installation and usage were hard, so we set up a wiki to document them. Too many systems, so we added yet another homepage / blog / resource-management system, whatever, to manage them all.

See the problem here? One of my ex-bosses, a professor I really admire, said: “We only use one ‘groupware’ tool: a giant blackboard, with lots of large white poster paper.” And guess which is more efficient?

A female friend of mine called this kind of “tool worship” a problem of men. I don’t know about other cultures, but in Taiwan it’s quite true. Reductionism is not en vogue here.

On Professionalism

Lately I have been troubled by the attitude of “I do it for free, so take it or leave it”. It is true that no one should burden the contributor who does it for free. But by the same token, the contributor should claim no more than having made the contribution. Some “savior” types like to claim they’re doing an important job, serving many, saving poor souls from platform X, and enjoying the fame. The slightest request, however, results in “I have done so much, platform X sucks, and nothing is for free: write the damn code yourself if you want that feature!” As if the world owes him or her an apology (enough paeans have been sung).

Interestingly, such arrogance can reflect itself in the code. I’ve seen a few cases where the correlation is high. Legacy features are treated as, well, “legacy”, and are taken away without first understanding why they are there. Variables and functions are named carelessly. No boundary check is in place. No real coding style is enforced, which is both an aesthetic and a management disaster. Dangerous memory copying and pointer arithmetic are done in a way that makes people who really know their K&R frown. And remember? There is no boundary check. All for the sake of the really non-existent problem of “needing to be efficient”. But for those who know better, it’s really just some bricolage in a sorry state.

That’s plain wrong. That I do it for free does not mean I can do it however I wish. Especially if you claim both to be a serious software developer and to have the intention to help others.

It’s like NGOs and charities. That you have good intentions doesn’t relieve you of the responsibility of doing things the right way. That is what professionalism means. Even amateurs know what it means. There are even hobbyist activities that require high skill, attention to detail, and utter seriousness.

Some software can be more serious than other software. System software is one example. It is not just expected to be reliable. It must be. Tons of application code and logic depend on those foundations. They must be solid as a rock. And they need to be very well organized, clear in what they do, with sound logical flows. That’s why open-source system software projects are run in a serious manner, however dispersed and loose their organization is. Coding style is enforced; review and criticism are needed, even welcomed. Interestingly, individuality still plays an important role. Personality clashes can happen. But whatever the argument, in system software reckless coding is like reckless driving.

Serving many can be a tricky business. Or as the saying goes… “With great power comes great responsibility”. Please, please don’t be a savior if you are actually a reckless developer.

Snippets of Thoughts

  • Common sense and good reasoning are both rare.
  • Quality is built-in. It’s never something you can hire a consultant or designer to supply.
  • Worse, face-lifts hide future implosions.
  • In the end, the work emanates from one’s belief or philosophy of building things. To ask “why does a system behave like that” or “why does a system suck like that” (ie. to trace the expression of a system’s workings), we must go back to how the system was designed, thought out, and built.
  • Sadly, asking such “why” can offend.
  • Many system designers (many of them incidental designers; they didn’t know they were laying the foundation) get angry if you ask them a commonsensical why.
  • If a person tells you “I’ve lately read this and this, and I fully agree with methodology X, and we should adopt it”, be wary. A person who is easily converted will be easily converted again.
  • On the other hand, there’s no talking with someone who is entrenched in a given belief, especially when reality is in conflict with it.
  • To say that a system is an expression of its founding thought seems to be a kind of idealism (ie. it’s the idea that counts). Of course a system is always designed within its constraints (ie. “the material basis”), but within the given boundary, it’s really the thought that counts.
  • Be very vigilant of the initial thought you put into the design of a system.

Details

There are simply too many details around any given idea, if it is ever to be carried out. So many, and so many devils among them, that the brilliance of the idea, the fervor, the buoyant feeling of how great it will be, are all put to trial. Therefore focus is a necessity, and that entities should not be multiplied beyond necessity is something we ought to bear in mind.

The Desktop

Despite the talk in the past few years about a future that belongs to the web app, the desktop app has also evolved. The umbrella term “usability” has become the new focus for desktop app developers. And even if Gmail is great, there will always be times when offline (that is, client-side) mail reading is desirable.

I attended this year’s Apple WWDC. Apple seems to have a clear view of the market segmentation. Namely, there are actually three, not two, separate areas, or platforms, on which software apps compete: desktop, web, and media. The desktop is still the center around which things happen. Talk about thin clients or terminals never really materialized (some of the ideas found their way into the media platform, though). Desktop is about responsiveness, capacity, and device connectivity (something the web definitely is not [1]). Web, as we all know, is about ubiquity, zero configuration, and quick deployment (from an app developer’s point of view). The media platform is somewhat trickier as it’s more diverse, but from Apple’s point of view, handsets, the PSP, and set-top boxes all belong to this category. This used to be the game of appliance makers, but during the past decade traditional PC software makers (Microsoft, needless to say, but many more) have noticed this development, and they have been unleashing their technological prowess into this arena.

For the time being, though, the most succinct comment comes from Wil Shipley (the Chief Monster of Delicious Monster). While tons of money is needed to build a mass web service (as a software application per se; it’s another story with web services that function as a platform for something else, like selling books), only the biggest players get anything. Now, the desktop software market may seem small (and the independent Mac software market even smaller), but survival there seems more attainable.

One reassuring thing is that the ecosystem of software is big enough that, if you’re good, it’s not hard to get a piece of the pie. The rise of web apps doesn’t have to mean the decline of all desktop apps (and, frankly, I can’t think of any desktop app that was done in by a web app equivalent).

Better still, the rise of web APIs (not a Web OS; rather, the programmable web is the way) should only make desktop apps better and stronger. Now everything old is new again. Good news for desktop app developers.

[1] One must remember that the reason Windows is such a big player is that it has accommodated and created a whole industry: the peripherals. Even Apple’s computers nowadays use standardized devices, and the company is trying to make it easy to develop device drivers. For all the talk about a Web OS (which I think is a non-topic), this is something clearly missing. Honestly, though, how many app developers have ever cared about system programming issues?

The Philosophy behind Design Philosophies

Had afternoon tea with two old friends yesterday. One of them is doing math at UW, and I visited him last November in Seattle. The talk was around topics like “the philosophy behind design philosophies”, or in plain words, “why and how do you come to design it that way?” Modularization and abstraction are what pro developers do every day, but how, and why, you modularize or abstract a module or class this way or that is not a natural thing. It is first something to be learned, and later, as one learns (if one is willing) more than one strategy for doing it, the question becomes choosing the right pathway.

That’s not just scholastic muttering. When you design a language or a framework (be it a programming framework, a legal framework, or a business framework; an institution, that is), this meta-philosophy applies. Because you are designing something that allows the generation of other things. Or, to put it the other way, you are about to design something that must be able to accommodate design philosophies. You design C++ to accommodate design patterns and different paradigms (procedural, modular, object-oriented, generic, etc.). You design commerce laws to accommodate all walks of business entities.

So one can’t be too careful, nor too audacious, in coming up with a philosophy behind design philosophies. It’s hard, as abstract thinking always is.

Study-Oriented Development

Lately I find that my way of developing an application is taking shape. I call it “study-oriented development”. The thing is that before I start doing anything real, I do a great number of studies. For the average Rails app I’ve done in the past, the number has been 5 to 10. Usually the more study apps I’ve done, the better the result has been. For a Cocoa app, the number varies more, but the safe bet is no fewer than 6. A project that has been the main theme of my past three quarters has already accumulated around 20 studies.

Interestingly, there’s another side to this discovery: if I’m not able to do studies (due to time constraints, or because an app has too many dependencies), then my output will generally… deteriorate, unfortunately.

Studies, or études in art talk, are a luxury, though. They indeed take more time than just working on the app or the problem itself, and you need many clean slates. Another issue is the “boilerplate” code (all the common initialization code, basic plug-ins, config parsing, etc.) that you have to write in order to start a study. Fortunately for me no two studies are the same, and I try to limit the scope of a study, preventing it, on purpose, from growing into a fully-fledged app.

One place where studies shine is when you have a number of similar plug-ins or pathways that achieve a given goal. Plug-ins are a good thing, but they can be tricky, as they are not necessarily easy to pull out. (For a Cocoa or any C-family project, it’s the pain of pulling them out of your source tree; for Rails projects, cleaning up generator-produced-then-manually-customized code can be a disaster.) Many plug-ins can quickly start to grow into your code. And then when a plug-in goes wrong, you find yourself putting out fires everywhere. A study of the different options can prepare me with an exit strategy.
