Applying the Web to Enterprise IT


Switched Blog Software

I have set up more comfortable blogging software (hopefully this will help me post more frequently :o). Please look here.

When copying my old Blosxom directory, the file creation dates changed, so the order of the posts is now pretty much random. This was the simplest way to keep the URIs of the old postings stable, though, and I do not currently have the time to migrate the Blosxom postings to WordPress.

See you over there; bye.
permanent link

News Search in Google Suggest Style

Just found this nice search interface that uses JavaScript and HTTP requests to produce a list of topics based on the prefix you type.


permanent link

Unmasquerading Google

Are you (like me) upset by Google's URL masquerading? Here is the brute force Google Unmasquerading Bookmarklet:

javascript:a=document.getElementsByTagName("a");for(i=0;i<a.length;i++)a[i].onmousedown=null;void(0)

Edit the above code to be on a single line and create a bookmark in your browser's toolbar with the code as the bookmark's target. Click the bookmark after your Google search.
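Spelled out over multiple lines, the bookmarklet's idea (assuming, as a reconstruction, that the masking is done via onmousedown handlers attached to the result links) looks like this:

```javascript
// Strip the onmousedown handlers attached to result links so that
// clicking a result follows the plain href instead of the masked
// redirect URL. (That the rewriting happens via onmousedown is an
// assumption of this sketch.)
function unmasquerade(anchors) {
  for (var i = 0; i < anchors.length; i++) {
    anchors[i].onmousedown = null; // disable the URL-rewriting handler
  }
  return anchors;
}

// In a bookmarklet this would be invoked as:
//   unmasquerade(document.getElementsByTagName("a"));
```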

permanent link

MONITOR my Blog!

At the bottom of this page you can now enter your email address and subscribe to this blog. The POST request emulates a MONITOR request, and what eventually happens is that my server sees:

  MONITOR /blog HTTP/1.1
  Reply-To: mailto:[email protected]
and creates a monitor for you (you'll be supplied with a URL to manage and delete your monitor).

The other way to subscribe is to download and install Apache::MONITOR and to run the following command line:

  $ perl -MApache::MONITOR -e SUBSCRIBE \
                   mailto:[email protected]

If you run your own server, you can use Apache::MONITOR to provide MONITOR yourself, but also to monitor remote resources. Just send a MONITOR proxy request to your server:
  telnet yourhost 80
  MONITOR http://remotehost/some/resource HTTP/1.1
  Reply-To: mailto:[email protected]
Enjoy ;-) (If you seriously want to use Apache::MONITOR yourself, contact me first, as the code still needs quite some cleanup.)

permanent link

Pretending that the network isn't there....

Another very nice posting about REST: REST wars by Bill de hÓra.

I especially like:

The problem is that we don't, generally speaking, do it right - we keep trying to treat web apps like desktop apps and keep trying to pretend the network is not there.
permanent link

What Do you Do?

From dm-discuss: an interesting take on how to approach data modeling: What Do you do?.
permanent link

Screen Traversal by Foreign Keys

In What Not How C.J. Date writes:
[...] the system can automatically determine the next form to be displayed, thanks to its knowledge of primary and foreign keys. In other words, the overall application can be perceived from a high-level point of view as a collection of forms, and foreign key relationships have a large role to play in defining (and automating) those legal transitions and supporting that high level perception.
(Chapter 3, Presentation Rules, p. 23)

Note how the combination of RDF (being an alternative data model) and Web technology (e.g. Browsers) enables exactly this - traversal along key references.

permanent link


Today I managed to put together a 0.01 version of my Apache::MONITOR module.
permanent link

Is Fowler a RESTafarian?

While giving parts of Martin Fowler's Patterns of Enterprise Application Architecture another close read, I came across this subtle sentence (from the Service Layer pattern; pp. 135/136):
[...] many of the use cases in an enterprise application are fairly boring "CRUD" (create, read, update, delete) use cases on domain objects - create one of these, read a collection of those, update this other thing. My experience is that there is almost always a one-to-one correspondence between CRUD use cases and Service Layer operations.
That's a pretty strong argument for REST's uniform interface in an EA context I'd say....:o)
permanent link

The Hypertext Application Model in XTM

Yesterday I suddenly noticed something interesting about XML syntaxes that use hyperlinks to represent a serialized knowledge representation graph: the links only connect the graph nodes together; their purpose is not to be traversed by the application.

It turns out that this linking element actually does make XTM the engine of application state for application/xtm+xml-aware user agents.

I am sure there are some clever ways to use, and probably extend, its capabilities.

permanent link

SPARQL Protocol for RDF

Just saw this working draft.

From the introduction:

This document describes SPARQL, a protocol for accessing RDF data and for conveying RDF queries from query clients to query processors.
Hmm....a new protocol for communicating with Semantic Web 'data providing services'? I thought the Semantic Web already had an application protocol. I hope I manage to analyse this proposal in detail soon.

From a quick look I see that the authors provide a mapping for the abstract return codes to HTTP response status codes. But then all abstract operations are mapped to HTTP GET, which seems like a semantic overloading of GET.

I can't see right now whether the authors define a true mapping to HTTP or only define how HTTP is to be used for transport. If update operations were part of the proposed protocol, this would definitely be easier to evaluate.

Regarding updates: I just saw that there are the abstract return codes GraphCreated and NoDeletionPerformed but I cannot see (from a text search) that creation or deletion is mentioned in the text in other places. Will this be added in a revision?

permanent link

Robert Bartha on Topic Maps and other Data Models

This morning, my inbox made me extremely happy. In a reply on the sc34wg3 list, Robert put Topic Maps into a comparison context with other data models.

I had not expected to finally see my understanding that Topic Maps ARE a data model expressed by somebody else.


permanent link

HTTP Requests Per JavaScript

Jim Ley provides a nice article about issuing HTTP requests via JavaScript.
permanent link

JSR 170 White Paper

Roy Fielding describes JSR 170 in JSR 170 Overview: Standardizing the Content Repository Interface. It is a great example of applying lessons learned from the Web to enterprise IT.


Most significantly, however, JCR holds great promise for eliminating the information silos created by application-specific storage mechanisms and proprietary APIs. Content management systems based on the JCR interface, such as Day Software's Communique©, will be able to manage all of the content within all of the applications that make use of the repository, thus unifying information access, workflow, and presentation across the entire enterprise.

permanent link

How to set up your CMDB?

If you are involved in designing an IT infrastructure data model, the paper Modeling The Enterprise IT Infrastructure is definitely a good read.

Being an OO model it does have the downside of allowing unlimited hierarchies but the overall design makes it easy to improve that aspect.

I bet that Charlie will have something to say about it :o).

permanent link

Building Web Services

In the first issue of ACM Queue there is an interesting interview with Adam Bosworth. The issue has been lying in my cupboard since March and I only discovered this interview yesterday.

Here is a quote:

There also are transactional integrity issues to consider. A database can maintain a transaction over a short period of time in a client/server setting. But if you have one application going to another to ask for 1,000 different things at 1,000 different points in time, the odds that consistency will be maintained are pretty slim. So what would make a lot more sense would be to ask for everything you want from that medical record at the same time. That way, it's just a single transaction over the Web. And then inside the hospital system, they can pull together all the needed data and hand it back as one big screen chunk. Now that model is very efficient for the Web and it doesn't require you to maintain state. Basically, you've got to take a coarse-grain approach like that or your system just isn't going to scale. That's true in the database world and it's even truer in the application world.

Interestingly, the interview sounds as if Bosworth were arguing for REST-style services...

permanent link

Topic Maps and RDF...

I just found out about a key difference between RDF and XTM.
permanent link

httpRange-14 Resolved!

The TAG has resolved httpRange-14.

   a) If an "http" resource responds to a GET request with a
      2xx response, then the resource identified by that URI
      is an information resource;

   b) If an "http" resource responds to a GET request with a
      303 (See Other) response, then the resource identified
      by that URI could be any resource;

   c) If an "http" resource responds to a GET request with a
      4xx (error) response, then the nature of the resource
      is unknown.
Update: From the minutes.

The Architecture of the World Wide Web, Volume 1 defines information resource as follows:
By design a URI identifies one resource. We do not limit the scope of what might be a resource. The term "resource" is used in a general sense for whatever might be identified by a URI. It is conventional on the hypertext Web to describe Web pages, images, product catalogs, etc. as resources. The distinguishing characteristic of these resources is that all of their essential characteristics can be conveyed in a message. We identify this set as information resources.

I'm not sure yet what the implications of the above resolution are for building enterprise information systems according to WWW architecture. The immediate one seems to be that it is not good practice to create 2xx resources for things like employees or printers...

Let's do some investigation:

  1. Here is an information resource.
  2. Here is a 'could be any' resource. (The URI identifies my bike, and a GET redirects via 303 to an information resource.)
  3. Here is a fragment identifier resource. My server sends a 200 for the base URI, but does that imply a response code for the fragment URI that qualifies under the httpRange-14 resolution? That said: does the URI identify my boat or an information resource?
  4. Suppose I wanted to say something about the W3C (the organisation); how do I do that? IIRC people often use http://www.w3.org/Consortium/ for this purpose. Let's see what that URI identifies:
    $ telnet www.w3.org 80
    GET /Consortium/ HTTP/1.1
    HTTP/1.1 200 OK
    Content-Type: text/html; charset=utf-8
    Given the httpRange-14 resolution I conclude that the URI identifies an information resource and not the W3C. Suppose I wanted to do something like this:
        <http://www.w3.org/Consortium/> ex:director "Tim Berners-Lee"
    Is that wrong now? If not wrong, how am I to interpret the statement, given that I now know that the resource is an information resource? Is it now against good practice to use URIs identifying 2xx resources to refer to non-information resources?

    Question: Who is going to mint and maintain all the URIs to talk about dogs?
As a side issue, I am curious how the httpRange-14 resolution could be baked into the Topic Maps distinction between information resource and non-addressable resource, but not today...

permanent link

The fallacy of business objects

Yet another great article by Sean McGrath:

"Enterprise computing is simple really. All business data processing can be analyzed and decomposed into a set of so-called "resources". These are the fundamental real world concepts that make up your business (e.g. a customer, an account, a product and so on)."
100% agreed!

permanent link

Don Knuth vs. W3C Validator

A must read on the effects of change.

To change all these pages will cost me a week's time. I don't want to delay The Art of Computer Programming by an unnecessary week; I've been working on it for 43 years and I have 20 more years of work to do,...

Validator folks, do not slow down that man! I want to read the rest of the books some day... given that I manage to work my way through the three existing volumes in those 20 years :o)

permanent link

Back & a New Company

I am finally back to my blog, after two very enjoyable weeks of snowboarding in February, chasing lots of moving requirements in client projects, and becoming the father of a cute daughter two weeks ago.

Today I have started the Website for my new company, which is probably interesting enough to serve as a start to get back to the blog...

The company will provide consulting services for solving enterprise IT problems with Semantic Web technologies and also provide software for solving several issues I've discovered in this area over the past months.

permanent link


Via Mark I just discovered this interesting new mailing list.
permanent link

What XML Means

Just found What XML Means which provides a good overview of the XML-as-a-datamodel POV. It aligns nicely with what I said yesterday on these issues.

permanent link

Comparison between Topic Maps and ER

I moved the table to my Wiki; it was just too wide for the blog. Also, I will add the bunch of other semantic data models I know when time permits:
  • Entity Relationship Model
  • RM/T
  • Topic Maps
  • RDF
  • Object Oriented Model
  • LDAP's information model
  • XML
permanent link

Web Services For IT Management

There is an interesting article in the latest issue of ACM Queue.

The authors discuss the suitability of Web Services for managing complex IT systems. Some parts are interesting from a Web Services vs. REST perspective:

The real power of SOA and Web services becomes apparent when various constituents are added, removed, replaced, or upgraded without adversely impacting the whole system. This is just not possible when each part of the architecture relies on intimate knowledge of the inner workings of every other part and shares code, in the form of language-specific libraries, for processing messages.
This completely matches the experiences I have had over the last year in a large IT service management project. But the authors miss the additional benefit that REST would bring to their scenario: using a uniform API would even eliminate the need for intimate knowledge of the various service interface semantics. A proper versioning strategy and the principle of partial understanding would take the evolvability of the overall system to yet another level.
Within SOA for IT management, it is reasonable to think of each managed resource to be proxied by an identifiable Web service and expose the manageability interface as a Web service interface. Within this framework, an agent is responsible for hosting the Web service representation of the resource. This architecture is very similar to accessing a resource (i.e., a Web page) where the Web server provides transparent access to uniquely identifiable resources.
Yep! Very similar indeed. And I see no reason why one would need the additional complexity of Web Services based wrappers when ordinary Web server+CGI scripts do the job.

They go on saying this:

Although the basic standards referred to earlier are adequate for a variety of SOA solutions, IT management has some special needs that these standards do not meet:
Ah! Let's see what HTTP could do for them today :o)
The Web service representing an IT resource should be able to send alerts or notifications to those managers that have expressed interest. Another way of saying this is that these Web services should be able to support not only a request-response pattern but also a publish-subscribe pattern of interaction.
Fine, I think the MONITOR (link to come) extension to HTTP would do a perfect job here.
It should be possible for a manager to retrieve the state of the underlying IT resource, represented by a collection of named and typed properties, from the corresponding Web service in an efficient manner.
REST inherently addresses this issue by optimizing the amount of data returned for the common case. This is also a classic data access pattern (see (33) 'Active Domain Object') for reducing the number of requests by transferring a bulk of data upon the initial request. Nothing new here.
Traditionally, the Web services have been considered permanent, meaning their lifetime is essentially infinite and the clients do not have to worry about when a service comes into existence or when it goes away. This means there is no standard way of notifying clients of life-cycle events such as creation, upgrade, and destruction. Real IT resources, however, are commissioned and decommissioned on a need basis; hence, their Web service representations also have to be ephemeral.
This is a non-issue in a RESTful design, since clients should not make use of a priori knowledge about resource URIs but discover them from the hypermedia responses they process, except for a very limited set of 'entry' resources.

In addition, HTTP provides return codes to communicate resource unavailability (404) or movement (3xx) to the clients. To be continued soon...

permanent link

Impossible Query Complexity

Via Mark Baker I came across an interesting paragraph in one of Mike Champion's blog entries:

Probably the most unconventional topic Bosworth spent time on was the importance of getting XML data models and APIs suitable for handling the synchronization of intermittently connected devices to Web-based master databases or applications. He noted that he spends much of his business week using only his Blackberry device. Effective use of such web-enabled, but slow and UI-challenged devices will require better synchronization tools: queries are difficult to generate with a handheld UI, and their limited bandwidth (if connected at all) means that it is important for queries to be very optimized to return back only the information the user really wants. Since this is so far beyond the state of the art, it may be easier for the device to anticipate the types of data the user will want, and trickle that information into the device in advance, as bandwidth is available, rather than on demand.

This nicely relates to my opinion that querying topic maps with a traditional query language will not work due to the sheer complexity of the task.

My opinion is that every Topic Map Application (TMA) (recently renamed to Topic Map Disclosure) implicitly defines a number of kinds of collections of topics that form interesting portions of a given topic map. Among those collection types that come with the Standard Application Model (SAM) are Index, SuperclassSubclass-Hierarchy and AllClasses (I plan to publish (my) definitions of these and others next month).

I am deeply convinced that interactions between components of any system that is a deployment of the Topic Map Paradigm will happen in terms of well-known topic collection types as opposed to querying.

Here is an example conversation between a TM-client and a TM-Server:

Client: Hey, Server!
Server: Hey, dude. Here is the list of TMAs that govern my topic map: Standard Application Model, Dublin Core.
Client: Ah, so... since you got DC, I assume you hold an AuthorIndex. Let me see it, but only from G to K.
  [Note: I am sure you see the corresponding GET request, don't you?
    GET /indexes/authorIndex?range=G_K HTTP/1.0 ]
Server: (sends representation of index in some negotiated content type)
Client: (drills down in the index, using embedded hyperlinks, probably requesting papers from a certain author within a certain time frame)
Server: (sends representations of index portion)
Client: (bookmarks identifier for this index portion for further use) Thanks, bye!
permanent link

Just because XML is easy to write in an editor...

Came across a nice statement from Thomas Passin a few minutes ago:

Just because XML is easy to write in an editor doesn't mean that all the years of experience with databases and data models is non-applicable. All those issues are still there - keys, independent vs dependent identity, data normalization, data integrity - they don't go out the window. And they're still hard. They could even be harder in XML because the database engine isn't taking care of them for us any more.

permanent link

Apache::MagicPOST Error

Arrgh! I made a severe mistake when implementing Apache::MagicPOST: poking into the POST request data makes that data unavailable for all subsequent handlers, meaning that no POST request to that server will work properly.

There is no real solution with mod_perl1 so I'll port the module to mod_perl2 once I have some spare time.

Bottom line: do not use Apache::MagicPOST!

permanent link

Got my new phone today!

I agreed to continue my existing mobile contract and today received the new phone that was included in the deal (with additional payment of course). I decided to explore the mobile space a bit this time and opted for a Nokia 6670. The design is really nice and the photo/video camera is a really enjoyable time-waster :o) I am really looking forward to checking out the series 60 Python port.

But here is the reason for me writing this: I want to extend an existing Web app to serve WML (or do I say WAP these days?) and checked out the HTTP headers my phone would send. Believe it or not, here is what is sent across to my server:

accept: application/vnd.ces-quicksheet,audio/wav,audio/x-wav,audio/basic,\
accept-charset: iso-8859-1,utf-8,iso-10646-ucs-2;q=0.600,*;q=0.001
accept-encoding: gzip,identity;q=0.800,*;q=0.001
accept-language: de
Pragma: no-cache
user-agent: Nokia6670/2.0 (4.0437.4) SymbianOS/7.0s Series60/2.1 Profile/MIDP-2.0 Configuration/CLDC-1.0

That's about 1.5 KB of data just for the headers... are those guys nuts?

And, hey....

Did we really (for example) need those?

Besides the bandwidth....payment is by the KB and so the headers are even significant when it comes to the invoice. Duh....

permanent link

More on Using XTM as the Engine of Application State

When I access the entry page of an information service with my browser, I'll usually be supplied with a number of links that I can follow to navigate or drill down into the information managed by that service. A simple but sufficient example is the Open Directory Project.

Now consider a user agent that is capable of understanding the application/xtm+xml mime type and interacts with a server by exchanging XTM topic maps. How would the server embed hyperlinks in the XTM messages to guide the user agent to more information?

Yesterday I already mentioned the element that, at least in some sense, fulfills this purpose in XTM. The question is how it can be made more suitable to provide an engine of application state.

What if the Open Directory server, for example, was capable of serving XTM and our user agent would issue the following request:

GET / HTTP/1.1
Accept: application/xtm+xml
What could the server respond with, to provide the user agent in XTM with information equivalent to what it serves in HTML? What about:

HTTP/1.1 200 Ok
Content-type: application/xtm+xml


Since the intention is not to have the user agent instantly process all the referenced resources, an additional attribute on the element would be needed to indicate that the links are intended as options for further navigation.

What is also missing is a way to communicate to the user agent what the nature of the referenced resources is. One possible way to do this would be RDF Forms or an XTM equivalent that used equivalent semantics.

The underlying idea is that certain kinds of access patterns exist that are used to interact with topic maps (or any information service for that matter). If these patterns were publicly defined, servers and clients would be provided with an initial base for automatic interaction. One link type, for example, can be considered an instance of hierarchical navigation, while another would refer to the well-known interaction pattern 'hitlist'.

Based on such information, the user agent could choose to follow only those links that refer to resources that the user agent knows it understands or seek human interaction for the ones that are unknown.

All this certainly needs some more thoughts, but it is a start.

permanent link

JavaScript Epiphany

When I started working in IT back in '99, we used to use JavaScript a lot at the New Media Shop I was working for. Like everyone else, we enjoyed trying the impossible and came up with all this fancy (and sort of useless :o) stuff like image-driven multilevel menus.

After that I shifted to backend programming and completely lost track of browser development and all the CSS and JavaScript incompatibility madness I had experienced.

During the last year I figured that I eventually had to produce better (read 'prettier') user interfaces in my client projects and took a look at CSS. Man, what a surprise! I ran to get Eric Meyer's books and simply could not believe what had happened over the last couple of years...

Then for JavaScript....of course I had seen JavaScript HTTP requests in action in some commercial products but I never got around experimenting with the functionality myself. Tuesday evening, I finally managed to look at Google Suggest and the analysis done by Chris Justus. Impressive stuff! And it immediately solves some complex UI-problems I am facing in a client project right now.

Based on Chris' analysis I did a complete rewrite trying to make it work like the bash autocompletion functionality: if you hit TAB, you are given the list of possible completions for the prefix you typed. The particular use case I have is to enter a URI in a text box to create a link between two 'business entities'. I used to open a popup window and have the user navigate or search a thesaurus and then click on a button to select a particular concept (by putting its URI in the parent frame's text field in the background).

With the autocompletion text box this is now much more elegant.
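Stripped of the xmlhttp and DOM plumbing, the core of such a completion box is a prefix filter plus the longest-common-prefix step that makes TAB behave like in bash. A minimal sketch (function names are mine, not taken from tb_fish.js):

```javascript
// Return all candidates that start with the typed prefix.
function completions(candidates, prefix) {
  var out = [];
  for (var i = 0; i < candidates.length; i++) {
    if (candidates[i].indexOf(prefix) === 0) out.push(candidates[i]);
  }
  return out;
}

// bash-style TAB behaviour: extend the typed prefix to the longest
// common prefix of all matching candidates.
function extendPrefix(candidates, prefix) {
  var matches = completions(candidates, prefix);
  if (matches.length === 0) return prefix;
  var lcp = matches[0];
  for (var i = 1; i < matches.length; i++) {
    var j = 0;
    while (j < lcp.length && lcp.charAt(j) === matches[i].charAt(j)) j++;
    lcp = lcp.substring(0, j);
  }
  return lcp;
}
```

In the real widget, the candidate list would of course come from an xmlhttp request for the typed prefix; the server only needs to return a plain list of matching URIs.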

The complete source code is in tb_fish.js, along with a basic description of how to use it. Together with Chris' article, the code should be understandable enough to extend. I am thinking about generalizing it further (by letting the user supply the function that parses the server's response and populates the select menu); if you have any suggestions, please let me know. I tested it with Firefox and Safari, but not yet with IE. Feedback welcome. UPDATE: Try tb_fish.js.

I am also working on a mod_perl backend but that's not ripe for publication. If you need it, let me know.

What would be really nice to have is some kind of JavaScript widget toolbox covering the functionality that is likely to be needed for browser based GUIs for Web 2.0 applications.

Finally... there is a very weak point in my implementation: the communication between browser and server uses the MIME type text/plain, which makes the message rather meaningless and decreases extensibility. I am thinking that RDF would maybe be better (performance considerations set aside); at least it will be worth giving an 'embedded' JavaScript RDF parser a try.

permanent link

xmlhttp and HTTP Authentication

I just noticed that (at least with Firefox) something interesting happens when accessing protected Web resources with JavaScript xmlhttp-based requests: when you supply different credentials to xmlhttp.send() than the ones supplied by the user, Firefox silently switches to those credentials from then on.

If this works with all browsers that support xmlhttp, we might have a nice option here for a logout mechanism for HTTP authentication: simply call xmlhttp.send() with credentials that definitely belong to no user, and access to the realm will cease.
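As a sketch (with the XMLHttpRequest construction injected so the idea is visible outside a browser; URL and dummy credentials are invented):

```javascript
// 'Log out' of an HTTP-authenticated realm by replaying a request with
// credentials that belong to no user; the browser then caches the bogus
// credentials for the realm, so subsequent requests fail with 401.
// makeXhr is a factory, e.g. function () { return new XMLHttpRequest(); }
function httpLogout(makeXhr, url) {
  var xhr = makeXhr();
  // open(method, url, async, user, password): the optional user/password
  // arguments override whatever credentials the browser has cached.
  xhr.open("GET", url, true, "logout", "logout");
  xhr.send(null);
  return xhr;
}
```

In a browser one would call httpLogout(function () { return new XMLHttpRequest(); }, "/protected/").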

If anyone tries this, or knows how I can make Firefox ask me for credentials again instead of keeping on displaying that 'Authorization required' page :-(, I would like to hear about the results.

permanent link

Second Try!

After my first blog was spammed by the Viagra-maniacs, and after reconfiguring my server, I'll give blogging a second try.

This time I use Blosxom and I must say that I am deeply impressed. What an elegant piece of software and it took me only 10 minutes to be ready. Thanks a lot Rael.

permanent link

Redesigned my blog

I finally managed to do some work on my sort of abandoned Website and the blog. I dug up some pictures that friends of ours took back in 2001 and thought they might make the pages a little nicer.

The downside of this work is that I accidentally changed the modification times of the earlier postings, and as blosxom relies on that time for the ordering, they are now, well, out of order. I don't think I was too good at explaining what I wanted to say back then anyhow, so I decided to just leave them as they are.

Besides intensive client-related work, I have done some serious research on data models and REST during the last months, and I think I finally have my story together, meaning that I now have something to blog about. The working title (and therefore the new blog title) is Applying the Web to Enterprise IT.

permanent link

Das Keyboard - It says who you are!

Only For The Best
If you are an elite programmer who can write sophisticated code under tight deadlines, someone who makes impossible projects possible; or a Silver Web Surfer your colleagues turn to when they need IT advice: this keyboard is for you.

Das Keyboard.

Duh... and I thought I already owned the coolest keyboards ever... OTOH, am I truly ready for that Übergeek device?

Yeah! Gotta get those 100% blank :o)

Update: But then... Shouldn't your keyboard reflect your status as one of the elite?. Hmm....:-).

permanent link

Decentralized IT Governance

Regarding the emerging notion of decentralizing IT management, Charlie asks: "How do you govern something distributed?".

First, I'd use the term 'decentralized' and not 'distributed', because 'decentralized' puts a greater emphasis on the lack of a single point of control.

"How do you govern something decentralized?" - Well, by providing enough guidelines to achieve useful results while leaving enough freedom and simplicity to enable growth. It is a matter of trading central control for the sake of network effects. Exactly what the Web did successfully with HTTP/URI/HTML and what the Semantic Web is trying to achieve with RDF/OWL.

Todd Stevens had related thoughts a while ago.

permanent link

High Level Data Concepts for Banking

This posting to dm-discuss lists high level data concepts needed for data modeling in a banking context. The list is:
  • Involved Parties
  • Products
  • Resources
  • Arrangements
  • Conditions
  • Events
  • Accounts
  • Systems
  • Addresses
permanent link


A recent thread on rest-discuss made me think about what it actually means in practice to use REST for enterprise application integration. With REST being an architectural style, the simple answer would be: "Use a software architecture for your integration solution that adheres to the architectural constraints prescribed by REST!" The problem is, though, that this doesn't really tell me anything useful. Another approach would be: "Apply REST's principles to your integration task!" Hmmm... better? worse? Focusing even more on the "what do you have to do" aspect, I suggest this: "Use a REST software architecture and make your applications RESTful (or expose them as such)!"

Immediately the question pops up: what is a RESTful application? Roy Fielding recently provided a lucid explanation:

In short, if you can draw a state machine in which each state is self-described (resident on the client), the transitions from that state are also self-described (instructions for the client to initiate), and each transition is invoked using a self-descriptive message, then you have a RESTful application. All of the rest of the constraints fall out from the need to be self-descriptive (i.e., generic methods are necessary because resource-specific behavior is too complex to be self-descriptive).
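As a toy illustration of that definition: each state carries its own transitions, and a generic client needs nothing but the ability to follow a named transition (the order states below are invented for the example):

```javascript
// A 'hypermedia' application as a state machine: every state is
// self-described and lists the transitions a client may invoke.
var app = {
  "order-new":       { transitions: { submit: "order-submitted" } },
  "order-submitted": { transitions: { pay: "order-paid", cancel: "order-new" } },
  "order-paid":      { transitions: {} }
};

// A generic client operation: it only knows how to follow a named
// transition offered by the current state, nothing order-specific.
function follow(app, state, transitionName) {
  var next = app[state].transitions[transitionName];
  if (next === undefined) {
    throw new Error("transition not offered by current state: " + transitionName);
  }
  return next;
}
```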

For obvious reasons, it won't in the general case be an option to change the applications themselves to become RESTful. Instead, one will have to create adapters that expose them as RESTful applications. Some of the kinds of adapters needed will be familiar patterns (see for example: (33) Active Domain Object) already in use when decreasing network usage is an issue. Others will probably seem more complex at first because they might involve multiple steps and multiple parties. (The lifecycle of a ticket in a trouble ticket system is an example of this.)

There are some interesting questions arising which I will cover in a follow up:

  • how can common enterprise applications be categorized so that they can be mapped to certain kinds of HTTP resources (e.g. the ones described in Mark's Abstract Model for HTTP Resource State)?
  • to what extent can the wrapper code be auto-generated from descriptions of the resources?
  • REST EAI patterns?
  • what communication patterns are useful (e.g. publish/subscribe, messaging) for the interaction between the applications, and when should the applications themselves act as user agents?

Umm..almost forgot: how does all this relate to pi-calculus?

permanent link

Don't Assume A Closed World on the Web!

While digging around for description logics (as with OWL DL) vs. deductive databases (as with Datalog) material I found this lucid explanation of the Closed World Assumption:
I see it in a pragmatic way. When you're in the open then the Open World Assumption is really, guess what, the only sane thing to do. You don't really give your apartment key to a stranger because googling him up found no pages where he/she was mentioned as a criminal.
(From Two Semantic Webs.)
permanent link

RDF Stores in the Real World

On the semantic-web mailing list there is an interesting thread on experiences people have made using RDF stores as the primary backend storage in data intensive applications.
permanent link


Powered by Blosxom.