A URI by any other name...

Posted by Jim Jagielski on Thursday, October 4. 2007 in Programming

Many people are buzzing about Amazon's Dynamo, and for good reason. The buzz is almost dual in nature: not only is it very cool technology, but it also has real and perceived impacts on other architectural designs. After all, as noted by others as well, what we're really seeing is a RESTful DB, and how it maps onto a RESTful web architecture. The key is the URI and the data is the resource; it is a very natural fit. And when the "DB" is better, faster and more scalable due to its simplicity, we see a mutual self-validation between the two: the success of REST implies that Dynamo is the right approach, and the power and capability of Dynamo indicate that RESTful architectures should, of course, continue to be taken seriously (if not more seriously). Of course, one could pretty much claim that the web itself is a key/value distributed DB implementation, and one would be right. I don't agree that RDBMSes are going away anytime soon, but mostly that's because I don't believe in one-size-fits-all solutions. I like having a wide range of tools in my utility belt, so I can use the technology that works instead of shoe-horning something in, which is the real danger, IMO, of tunnel-vision technology.
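The URI-as-key, data-as-resource mapping can be sketched in a few lines. This is a toy in-memory illustration of the idea, not Dynamo's actual interface; all names here are hypothetical:

```python
# A toy illustration of the "web as key/value store" idea:
# the URI is the key, the representation is the value.
# Hypothetical class and method names; Dynamo's real API differs.

class RestfulStore:
    def __init__(self):
        self._data = {}

    def put(self, uri, resource):
        # HTTP PUT semantics: create or replace the resource at this URI
        self._data[uri] = resource

    def get(self, uri):
        # HTTP GET semantics: retrieve the resource, or None (a "404")
        return self._data.get(uri)

    def delete(self, uri):
        # HTTP DELETE semantics: remove the resource if present
        self._data.pop(uri, None)

store = RestfulStore()
store.put("/users/jim", {"name": "Jim"})
print(store.get("/users/jim"))  # {'name': 'Jim'}
```

The point is how little is left over once the URI does the addressing: the "database schema" is just the URI space.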

Experts and open source

Posted by Jim Jagielski on Monday, October 1. 2007 in Programming

Now that Ben and Rod's blog-oriented discussion regarding open source support is over, it's a perfect opportunity for me to add my 2 cents. This isn't in response to either of their PoVs, or their companies, but rather my take on the whole topic. First of all, I agree with the point that one can be an expert on a software package without being a committer on that project (although it certainly does help). What is a danger, of course, is when someone or some company claims to be "expert" in a codebase when they are no such thing, basing their claim on the assumption that, since they have access to the source code, they can suss through it and find and fix any bugs, which is what you need an "expert" for anyway. Access to the source does not create expertise, nor does expertise in one codebase automatically imply expertise in others.

The best way to illustrate that, I think, is via Ben's auto mechanic analogy. Certainly, if I have a Toyota and it's having trouble, going to a verified Toyota mechanic is a safe course of action. But even if I have a Ford, I'd still feel pretty comfortable doing that, since, when all is said and done, the designs of the cars are so similar that expertise in one pretty much translates to expertise in the other. But this is not the case in software. The differences between software codebases are staggering. It would be like calling your auto mechanic when your refrigerator or dishwasher went on the blink. After all, they are both mechanical devices, right? I can only think of a small number of cases where that would even remotely be acceptable. I guess if you had time to wait, you could handle the downtime involved with someone reading the code and getting at least somewhat up to speed on it before they could even start debugging the issue at hand. But certainly not in production environments.
And enterprise customers, with production environments, need aggressive SLAs, and they require and deserve that demonstrated expertise. In open source support, it's depth that is critical and key, not breadth.

One has to laugh

Posted by Jim Jagielski on Thursday, August 23. 2007 in Programming

I saw this link on Sanjiva's blog, titled Building distributed systems is hard, with which I agree 100%. But when I read the following statement from Stefan, who is summarizing a blog post from Dan Diephouse, I couldn't help but laugh: building your own protocol using Web services is a lot easier than understanding and using HTTP correctly. Huh? Are there simpler protocols than HTTP? For sure. But is it really easier to "build" your own, which hopefully has at least a small fraction of the capability of HTTP, than to take some time and understand, well, the protocol used by the web itself? And doesn't HTTP have almost universal acceptance? Don't people build and use HTTP clients every day? Web services are supposed to be used, after all, aren't they? And do we really need yet another protocol simply because one can't be bothered to "understand and use HTTP correctly"? Good thing that Stefan understands all this, since he finishes up with: which IMO means that investing the time and effort to learn and apply RESTful HTTP is an investment that pays off very quickly. So it looks like he sees the light. Yeah, you might be able to create a simpler protocol, but you've given away way too much when you do that. People claim a common issue with WS is that it "ignores" the web; but it hardly does any good dismissing it, as if HTTP is a set of shackles, rubbing you raw. Why even bother calling it Web Services anymore? Just call it Network Services and be done with it. And if it's hard to do something architecturally logical (like create a web service using the web) in Java, then maybe it's either time to look at another language, or to look at dropping all that nasty "overhead" that makes it hard.
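For perspective on how small a surface "understanding and using HTTP correctly" actually requires, here is the wire format built and parsed by hand. This is an illustrative sketch of the request-line/header/status-line shapes only, not a full HTTP implementation:

```python
# HTTP/1.1 on the wire is plain text: a request line, headers,
# and a blank line. Building a minimal, correct GET takes a
# handful of lines -- hardly worth replacing with a new protocol.

def build_get(host, path):
    # The Host header and the terminating blank line are
    # required by HTTP/1.1.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n"
            f"\r\n")

def parse_status(response):
    # Status line grammar: HTTP-version SP status-code SP reason-phrase
    status_line = response.split("\r\n", 1)[0]
    version, code, reason = status_line.split(" ", 2)
    return int(code), reason

req = build_get("example.org", "/index.html")
print(parse_status("HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"))
```

A homegrown protocol would need its own equivalent of all this, plus the caching, content negotiation, and intermediary behavior HTTP already defines.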

The trouble with ohloh

Posted by Jim Jagielski on Thursday, August 2. 2007 in Programming

The trouble with ohloh (and other such sites) is this: inaccurate and misleading metrics are more dangerous, and less useful, than ones that aren't readily available. Until "value" parameters can somehow be applied (e.g.: this 10-line patch did "more" than this 100-line patch that did almost nothing), these types of "statistics" are fun to look at, but really useless.
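The 10-line-versus-100-line problem can be made concrete with a toy example (hypothetical authors and numbers, purely for illustration):

```python
# Raw line counts rank the 100-line near-no-op patch ten times
# "higher" than the 10-line fix -- exactly backwards from any
# reasonable notion of value.

patches = [
    {"author": "alice", "lines": 10,  "bugs_fixed": 3},  # small, valuable
    {"author": "bob",   "lines": 100, "bugs_fixed": 0},  # big, near no-op
]

by_lines = max(patches, key=lambda p: p["lines"])
by_value = max(patches, key=lambda p: p["bugs_fixed"])

print(by_lines["author"])  # bob   -- what line-counting rewards
print(by_value["author"])  # alice -- what a "value" weighting would reward
```

The hard part, of course, is that "bugs_fixed" is itself a stand-in; nobody has a good automated proxy for the value of a change, which is the whole point.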

Apache and Tomcat connectivity

Posted by Jim Jagielski on Monday, July 16. 2007 in Programming

Mladen has a nice post comparing mod_jk and mod_proxy_ajp from his point of view. His recommendation is that, for most people, mod_jk makes more sense than mod_proxy_ajp (m_p_a), and I can certainly understand that POV. He does mention, however, that one reason for his recommendation is the lack of parity between mod_jk and m_p_a, such as in the load balancer code; and although mod_jk does have an additional method (busyness), he fails to mention that the design of mod_proxy_balancer is such that one can easily extend and add additional load balancing methods to mod_proxy with no changes to the core Apache code. Since the load balancer implementation in mod_proxy uses normal Apache providers, one can create additional mod_proxy sub-modules that implement new providers which the load balancer code can then use. With mod_jk, the load balancer methods are very, very tightly entwined, making adding new methods very hard and certainly not something one can do without major changes to the mod_jk source.

Mladen also mentions the "maintenance" aspect of mod_jk, which, again, could be a useful feature for mod_proxy itself. However, the maintenance itself does incur a noticeable overhead, which, at least in my opinion and in testing, offsets any such "improvements" in the proposed efficiency of the whole scheme. It also, again at least in my opinion, tries to introduce a time scale (and thus, some sense of "history") into a design which should be time-ignorant. In other words, the decay aspects of the maintenance implementation are such that "older" values have less importance than more "recent" ones. IMO, load balancing should be akin to a quick, efficient state table; the selection and hand-off should be as quick as possible, and the more you try to "perfect" the "correctness" of load balancing, the slower the actual handling of the pass-off will be.
If you do want that kind of history, I can envision several LB methods where it makes sense, but making it a universal aspect of the core balancer itself seems almost unwise. Why, for example, should a "by_randomness" or "by_roundrobin" method even care? In any case, I do agree that there are some aspects of current mod_jk functionality (including some LB concepts) that would be useful to backport to mod_proxy itself, because that also means that not only AJP but also HTTP proxies would benefit. And this also implies that the "best" way to connect Apache and Tomcat is via AJP itself, a statement which I may not totally agree with :-)
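For readers who haven't used it, the provider-based balancer looks roughly like this in httpd configuration. A minimal sketch, assuming two Tomcat backends on the standard AJP port; hostnames are made up, and the set of lbmethod names available depends on your httpd version:

```apache
# Hedged sketch of mod_proxy_balancer fronting two Tomcat
# instances over AJP. The lbmethod value selects a load
# balancing provider; a new method can ship as its own
# sub-module without touching core Apache code.
<Proxy balancer://tccluster>
    BalancerMember ajp://tc1.example.com:8009
    BalancerMember ajp://tc2.example.com:8009
    ProxySet lbmethod=byrequests
</Proxy>
ProxyPass /app balancer://tccluster/app
```

Swapping the selection strategy is a one-line change to lbmethod, which is exactly the extensibility point mod_jk's entwined methods lack.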
