
We moved!

Thank you for following our blog.

We just moved to a new website. All of the previous posts have been transferred to the blog on the new site:

Please let us know if you encounter any issues with the new website/blog! We hope we get to continue the conversation!


Another Anecdote Supporting the Thick Database Approach (and some other lessons learned)

Dulcian’s BRIM-Objects includes an object modeling tool based on UML class diagrams. We built this tool over 10 years ago using BC4J (which evolved into ADF BC). It takes about a minute to load the Swing application and another minute to load a big diagram (300 classes, 3000 attributes). Since this was in line with what we had seen in other tools, we never gave its performance much thought.

When we heard about HTML5, we decided to rewrite the tool as a web application. The Formspider IDE is already a web application, so we assumed that this would be possible. We performed some tests and found that we could create a full UML class diagrammer as a web tool.

After a few months, we had it all working. Then we tried to load the same big diagram, and it took about 40 seconds.  The UI tests of hundreds of components were all sub-second, so something smelled fishy.

The new architecture is 100% thick database.  This means that all of the calculations are done in the database and then an XML representation of the whole screen is sent to the browser, which then parses and renders it on the client. There is exactly one round trip from the client to the application server to the database and back again.
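
To make that concrete, here is a minimal sketch of the pattern in PL/SQL. The table, column, and function names are hypothetical (this is not the actual BRIM-Objects code): a single call computes everything and returns the whole screen as one XML document.

-- Minimal sketch of the thick-database pattern (hypothetical names):
-- one call computes the whole diagram and returns it as a single XML
-- document that the client only has to parse and render.
CREATE OR REPLACE FUNCTION get_diagram_screen_xml (
    p_diagram_id IN NUMBER
) RETURN XMLTYPE IS
    v_screen XMLTYPE;
BEGIN
    SELECT XMLELEMENT("screen",
               XMLAGG(
                   XMLELEMENT("class",
                       XMLATTRIBUTES(c.class_id AS "id", c.class_name AS "name"),
                       (SELECT XMLAGG(XMLELEMENT("attribute", a.attr_name))
                          FROM class_attributes a
                         WHERE a.class_id = c.class_id)
                   )
               )
           )
      INTO v_screen
      FROM diagram_classes c
     WHERE c.diagram_id = p_diagram_id;

    RETURN v_screen;
END get_diagram_screen_xml;
/

The client receives this one document, parses it, and renders the screen; there is nothing else to fetch.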

We asked the obvious timing questions: “How much time is spent in the database? How much time is spent in the client?”  The results were as follows:

  • Time spent in the database: 38 seconds
  • Time spent in the client: 2 seconds …. HMMMMMMMM

In the next hour, we placed simple tracing on the database side and found our bad query.  We rewrote it and got somewhat better results:

  • Time spent in the database: 2 seconds
  • Time spent in the client: 2 seconds …. Much better!!!
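
For the record, the “simple tracing” was nothing fancy. A rough sketch of the idea in PL/SQL, with hypothetical procedure and table names, looks something like this:

-- Rough sketch of simple database-side tracing (hypothetical names):
-- bracket each suspect call with timestamps and log the elapsed time.
DECLARE
    v_start NUMBER;
BEGIN
    v_start := DBMS_UTILITY.GET_TIME;      -- hundredths of a second

    load_diagram_xml(p_diagram_id => 42);  -- hypothetical suspect call

    INSERT INTO perf_trace (step_name, elapsed_sec, traced_at)
    VALUES ('load_diagram_xml',
            (DBMS_UTILITY.GET_TIME - v_start) / 100,
            SYSTIMESTAMP);
    COMMIT;
END;
/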

If we had made the same change in the old 3-tier architecture, would we have realized the same improvement? The answer is yes, mostly. Would it have been as easy to figure out?  Not quite, but still pretty easy.

So why didn’t we ever ask the question and do the analysis in the old architecture? The reason was that there were so many moving parts in the application server (using ADF BC) that it was not at all obvious that there was a problem or where the problem might be. We knew that there were performance issues caused by all of the round trips among the application server, the database, and the Swing components, and by the hundreds of queries needed to load the diagram, so we just suffered in silence every time someone loaded a big diagram.

There are a number of lessons learned here:

  1. The “thick database” approach is again demonstrated as “cool.”
  2. System architectures that do not provide dead simple ways of figuring out where your performance problems lie are “not cool.”
  3. Performance that is never measured never gets improved.
  4. Fewer moving parts in an architecture make everything better.


The BEST IT Project Management Book Ever is: “If Disney Ran Your Hospital”

If Disney Ran Your Hospital: 9 1/2 Things You Would Do Differently (Fred Lee, 2004, Second River Healthcare) is not even an IT book. As the title suggests, it is about hospital administration.  But no book has influenced my thinking about project management more than this one.  I will probably write several posts about this topic, but I will discuss the biggest idea here today.

In the interest of full disclosure, this is my translation of the book into IT terms. I make no claims that the author would endorse or even agree with my interpretation.

“You can’t have more than one primary goal.”

If you ask people to guess the number one core value of a Disney park, most will say something like “the happiest place on earth” (their marketing slogan) or “extract as much money as possible from each person” (the cynical answer).  Few will guess that the true number one core value is “safety.”  Of course, if you think about it, it makes perfect sense.  If a kid is about to get run over by the Matterhorn Bobsleds, you can shove an old lady aside, jump onto the track, and save the kid. Nothing else matters at that moment.

The author makes the point that you can’t have lots of objectives in your head and have a coherent strategy at the same time. He ridicules hospital administration plans built around multiple competing priorities such as “revenue,” “quality patient care,” “minimizing legal liability,” “staff satisfaction,” etc.

This got me thinking. What is Dulcian’s core value? What are we really about? Historically, I know the answer to this question. Our core value was “to build great software and build systems that were exactly what the customer wanted/needed.”

However, SHOULD this be our core value? I now think that, all this time, I was wrong. Building great software was not the right core value.  See if you can guess the right core value for a project-centric software consulting firm.

What got me thinking was something we had done almost accidentally that generated far more positive feedback than any software we ever built.  The best example involved how we handled users who encountered a failed web service called by our application. Users would try to call the web service, but the service provider would be down, so they would try again every few minutes. We eventually put up a message saying that the service was down and that we would notify them when it was back up.  They still kept hitting the button to call the service every few minutes. Nothing we did stopped them. We could have disabled the button, but that would have made the users very unhappy.

The technical staff (including me) received emails every time there was a failed call to the service. (We really wanted to know when it was down.)  Therefore, looking at my email in the evening, I could easily see that some user was hitting the button over and over again. (I could imagine their frustration just by looking at the emails.) We placed all of the user’s contact information in the email so we could talk to users if needed about why the service was failing. So, I just picked up the phone and called the user.  I told them that the web service was down and that there was nothing I could do about it until the morning. The user was very pleased and surprised to get a call from the development staff at 10 PM just because they had been unable to get through to a web service. We smiled about it as we imagined a user working with an application and then getting a phone call out of the blue to help them with their problem. This seemed to be a cool thing to do for the users and it cost us next to nothing. Therefore, we made it a policy that if we saw someone repeatedly trying to access a down service, we would telephone them.

You have no idea how much impact this little policy had. At a conference where all of the application users were gathered, these phone calls were probably discussed more than any aspect of the software itself.

As a result of this experience, we developed a new core vision: “Make our customers happy.” We are not a software company; we are a service company. We do whatever it takes to make our customers happy.  This shift in thinking allowed us to see mistakes that we made in the past which may have resulted in the software being built cheaper, faster, and better, but made the client less happy.

Systems Analyst Techniques for Interviewing Users

About a million years ago, I wrote my doctoral dissertation about how to talk to computer system users. I videotaped 50 systems analysts talking to 50 different real estate brokers discussing the requirements for a system. I did formal protocol analysis on each of the tapes. Then I started testing a bunch of hypotheses:

1) The syntax of the question does not matter.  The conventional wisdom is to ask open-ended questions because these provide better feedback.  This “wisdom” is false.  All of the evidence for that advice comes from “adversarial interviews” such as police or legal interrogations.  Since users want to tell the analyst what they would like to see in the system being built, the questions simply open up topics for discussion.

2) The more feedback you give a user, the more information they will give you. Every time a user tells you something, parrot it back to make sure they feel understood. Every few minutes you should have a whole part of the conversation that starts with “What I hear you saying is that…”

3) People-oriented professionals are better analysts.  There are many psychology inventories that attempt to measure this phenomenon.  I gave several of them to the participants. Analysts who are more “people-oriented” get more information.

But none of these factors really explained the HUGE differences between good analysts and poor ones.  A good analyst will extract about ten times as much information from a user in the same amount of time as a poor analyst. After I graduated, I got some really good advice from Dr. Iris Vessey when we were both teaching at Penn State.  She told me to go home and watch all of the tapes to see if I could discern what made the good analysts different. That’s when I saw the big difference between a good analyst and a poor one.

Good analysts come to the table with an organized set of topics in their heads.  Think of it as a tree of topics and subtopics. The analyst traverses that tree asking questions about each topic and subtopic. The tree dynamically changes and expands over the course of the interview.

At the end of the exploration of each topic, good analysts ask: “Is there anything else you want to say about X?” After they hear about several subtopics, they ask if there are more: “You told me that what is really important about this topic is X1, X2, X3, and X4. Is there anything else we need to keep in mind?”

You can think of the first kind of question as a “vertical terminator” of the tree navigation and the second kind of question as a “horizontal terminator”.

At the end of the process, the analyst has constructed a very detailed tree of information.

EVERY really good analyst used this technique. When I talked to them about it, some were aware of what they were doing, but most just looked at me like I was dense and said something like… “Well how else would you do it?”  It just came so naturally to them that they didn’t even think about it.

I provided training about these discoveries to groups of analysts for a while. It is a funny thing to train people to do.  The really good analysts already do it naturally, so they think that I am totally wasting their time. The really bad analysts feel as though I am trying to make them do something that is artificial.

I would think that the prospect of doing your job ten times more efficiently would be a good thing… Maybe they all bill hourly.

The Challenge of Converting Oracle Forms to ADF (or anything in the Java EE world)


“How do I convert my Oracle Forms applications to…?” I have been getting this kind of question ever since JDeveloper 3.0 was released. As soon as BC4J (now ADF BC) was created, we started seeing the handwriting on the wall. The era of Oracle Forms was coming to an end.

However, here we are ten years later and we still do not have a capable converter from Forms to anything. Why not???

Converting from one platform to another is hard work.  Development architectures have different capabilities and very different ways of doing things.  Things that are dead simple in one environment may be nearly impossible to do in another. The user experience is influenced by what is easy to do in the environment. Oracle Forms had a lot of interesting features that are very hard to replicate in a web environment.

The following lists some of the interesting things about Forms that make converters challenging:

  1. Forms uses PL/SQL for all scripting.  Java is about as far as you can get from PL/SQL.  PL/SQL is tightly coupled with the database. Java and JavaScript are explicitly database-agnostic.
  2. In Forms, each user has his/her own Oracle schema. On the internet, we have single sign-on. Things like package variables and database objects can’t be expected to persist across UI events.
  3. Forms has no real model layer.  Blocks are both UI grids and data sources. Migrating that into an MVC architecture is very difficult.
  4. Forms has no notion of canvases nested inside other canvases.  There is the base canvas, and then you can place other canvases (stacked canvases) on top of it. Translating this into any kind of rational nested-container system is going to be challenging.
  5. Forms code is interpreted.  SHOW_ALERT actually pauses the code and waits for user input.  If the SHOW_ALERT command is hidden inside some kind of complex logic, conversion is going to be a nightmare (see the sketch after this list).
  6. Forms is client/server based, so quick round trips are simple.  You can easily write code to populate a city based on a zip code as soon as the field loses focus. Doing that kind of thing in a web environment is usually not trivial.
  7. Forms uses x/y layout. Many web environments do not even support pixel-level x/y layout.
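
To illustrate point 5, here is a sketch of the kind of Forms trigger logic that is painful to convert (the alert, block, and item names are made up). SHOW_ALERT suspends execution in the middle of the logic and waits for the user’s answer, which a stateless request/response web model cannot do directly:

-- Forms-style trigger logic (hypothetical alert/block/item names).
-- SHOW_ALERT pauses execution and waits for the user's answer.
DECLARE
    v_choice NUMBER;
BEGIN
    IF :orders.total_amount > 10000 THEN
        v_choice := SHOW_ALERT('confirm_large_order');   -- execution stops here
        IF v_choice = ALERT_BUTTON1 THEN
            COMMIT_FORM;
        ELSE
            RAISE FORM_TRIGGER_FAILURE;
        END IF;
    END IF;
END;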

These challenges have not slowed down the converter companies.  For the last decade they have come and gone.  When they work at all, they typically work as follows:

  • The applications that the converter generates are totally different from any application you would have written if you had worked in the target environment from the start.
  • The converters do not get you all the way there.  There is some amount of manual work required post-generation in order to get your system working.
  • The user experience is about the same as what Forms provided. In the era of modern web applications, no one is going to be very happy with a web application that looks exactly like an Oracle Forms application written ten years ago.

The Formspider™ architecture is closer to Oracle Forms than any product on the market for the following reasons:

  1. Formspider supports x/y layout.
  2. Because of its lightweight network round trips, it can support traditional client/server-style interaction with little effort.
  3. We use PL/SQL as the scripting language. All of the standard Forms APIs exist as APIs in our supplied packages.
  4. We use a similar event model.  For example, you attach code to the “WhenButtonPressed” event on a button.

We acknowledge that it is possible to write a converter from Forms to Formspider.  We even spent several months trying to make it work.  But we eventually found that it is a Sisyphean problem (if you forgot your Greek mythology, Sisyphus was the guy condemned to roll a boulder up a hill for eternity, only to watch it roll back down every time he neared the top).  It is relatively easy to solve the problem badly (giving you just enough positive feedback to suck you in deeper) until you discover that there are about a million little things that you have to deal with.  Then you get a couple of clients to try it out and find that each client used a different subset of the Forms features.

If we could find a client with 800 or more forms willing to dump about $1,000,000 into solving the problem, we could write a really good converter… that would work for that one client. I am not so sure that it would work for anyone else very well.  It is just that big of a problem.

As for writing a converter from Forms to Java EE (including ADF), APEX, .NET, Perl, Flash, Flex, Swing, or whatever… I wouldn’t even want to try.

Case Study: “The Origin of the Object Interaction Repository”


Dulcian was building the recruiting system for the United States Air Force Reserve. Most of the core user interface screens needed to be images of the paper forms that applicants fill out as part of enlisting in the Air Force. Those screens were mandated to be delivered using PureEdge Forms (now IBM Workflow Forms). This software was designed to support forms management.  Users would fill out and sign forms then send them around in a workflow.

PureEdge was not really intended as an interface product for a database. However, they did have an interface that would read and write the data from a form as XML.

Many of the same data elements existed on multiple forms and there was a requirement to update overlapping information on one form and have it automatically update on all of the forms. The way we handled this was to read the data from a centralized database every time a form was opened.  The data would be formatted as an XML document and then parsed on the client and loaded into the form. When the form was saved, the data would be extracted and formatted as an XML file and then sent to the database where it was parsed and the database updated as appropriate.
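
A minimal sketch of the database side of that save path, with hypothetical form structure and table names (not the actual Air Force code): the XML posted back from a form is parsed in the database and merged into the central tables.

-- Sketch of the save path (hypothetical names): parse the XML sent back
-- from a form and merge the overlapping data into the central tables.
CREATE OR REPLACE PROCEDURE save_applicant_form (p_form_xml IN XMLTYPE) IS
BEGIN
    MERGE INTO applicants a
    USING (SELECT x.applicant_id, x.last_name, x.first_name
             FROM XMLTABLE('/form/applicant'
                      PASSING p_form_xml
                      COLUMNS applicant_id NUMBER       PATH '@id',
                              last_name    VARCHAR2(60) PATH 'lastName',
                              first_name   VARCHAR2(60) PATH 'firstName') x) src
       ON (a.applicant_id = src.applicant_id)
    WHEN MATCHED THEN UPDATE
        SET a.last_name  = src.last_name,
            a.first_name = src.first_name
    WHEN NOT MATCHED THEN INSERT (applicant_id, last_name, first_name)
        VALUES (src.applicant_id, src.last_name, src.first_name);
END save_applicant_form;
/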

As you can imagine, the structure of the forms was very different from the structure of the database. There were also lots of forms (initially 40 for the Reserve, and now 200 to support the entire Air Force). Generating the XML for a single form may require up to 15,000 lines of code, so it was not practical to code all of this by hand. In addition, new forms are added over time and existing forms change, so we really needed an efficient way to handle this task.

This is a classic ETL type of problem.  Naturally, we started looking at ETL mapping tools like Oracle Warehouse Builder and Informatica.  We were not happy with either of these repositories because they did not approach the problem in the same way that I did.  Traditional ETL tools tend to think in terms of populating one table at a time.  When I build something like this by hand, I tend to think in terms of the whole object in a hierarchical structure that then writes to another (perhaps very different) hierarchical structure.

We ended up building our own repository to support this that looked something like the following:
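
(What follows is only a rough sketch with hypothetical table and column names, not the actual repository design.) The core idea is a hierarchy of target elements, each tied to the source table and column, or to a parent element, that it is populated from:

-- Hypothetical sketch of the mapping repository structure:
-- a hierarchical "map" of target elements, each tied to the source
-- table/column (or parent element) it is populated from.
CREATE TABLE map_document (
    map_doc_id   NUMBER PRIMARY KEY,
    doc_name     VARCHAR2(100),   -- e.g. the form or web service name
    root_table   VARCHAR2(30)     -- driving database table
);

CREATE TABLE map_element (
    map_elem_id    NUMBER PRIMARY KEY,
    map_doc_id     NUMBER REFERENCES map_document (map_doc_id),
    parent_elem_id NUMBER REFERENCES map_element (map_elem_id),  -- builds the hierarchy
    elem_name      VARCHAR2(100),  -- XML element (or flat-file field) name
    source_table   VARCHAR2(30),
    source_column  VARCHAR2(30),
    transform_expr VARCHAR2(4000)  -- optional SQL expression to apply
);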


We then wrote a small generator (about 1,000 lines of PL/SQL code) that produces the PL/SQL packages needed to generate the XML for each form.
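
To give a flavor of the generator idea, here is a rough sketch that builds on the hypothetical repository tables above (again, not the actual Dulcian generator). It recursively walks the element hierarchy for one document and prints the SQL/XML expression that a generated package would contain:

-- Rough sketch of a repository-driven generator (hypothetical names):
-- walk the element hierarchy and emit the XMLELEMENT expression that the
-- generated package would use to build the document.
CREATE OR REPLACE PROCEDURE print_xml_expr (p_map_doc_id IN NUMBER) IS

    -- Recursively build the expression for one element and its children.
    FUNCTION element_expr (p_elem_id IN NUMBER,
                           p_name    IN VARCHAR2,
                           p_column  IN VARCHAR2) RETURN VARCHAR2 IS
        v_children VARCHAR2(32767);
    BEGIN
        FOR r IN (SELECT map_elem_id, elem_name, source_column
                    FROM map_element
                   WHERE parent_elem_id = p_elem_id
                   ORDER BY map_elem_id)
        LOOP
            v_children := v_children || ', ' ||
                element_expr(r.map_elem_id, r.elem_name, r.source_column);
        END LOOP;

        RETURN 'XMLELEMENT("' || p_name || '"'
               || CASE WHEN p_column IS NOT NULL THEN ', ' || p_column END
               || v_children || ')';
    END element_expr;

BEGIN
    FOR root IN (SELECT map_elem_id, elem_name, source_column
                   FROM map_element
                  WHERE map_doc_id = p_map_doc_id
                    AND parent_elem_id IS NULL)
    LOOP
        DBMS_OUTPUT.PUT_LINE(
            element_expr(root.map_elem_id, root.elem_name, root.source_column));
    END LOOP;
END print_xml_expr;
/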

When we first used this repository for reading and writing XML, we generated about 50,000 lines of code.  The productivity was wonderful.  Two developers entering data were able to specify repository entries that generated about 3000 lines of code per day.

This was over 10 years ago. Since then, we have written lots of different generators that work on the same repository for different purposes:

  1. Generate XML for on-screen forms
  2. Generate XML for web services
  3. Generate views for UI screens
  4. Read XML, parse it and write into the database (we have a few different types of this generator)
  5. Build ETL scripts for batch database migration
  6. Read flat files
  7. Write flat files
  8. A Java version of one of the generators for a non-Oracle client

The repository itself has changed very little over time. We have added an attribute here and there when we needed to expand the capability or when using the Mapper for a different purpose, but it has been surprisingly stable.

We are still working on the Air Force Recruiting project which now includes over 1,000,000 lines of code generated by this single repository.

Where the Mapper has proven most useful is in generating code to support reading and writing of XML for web services. The XML required to support a web service can be completely different from how the data is stored in the database. Having a nice tool that lets you point and click your way through that transformation has saved us months of time each time we need to support a new web service.


What is the role of the database in web architecture?

Many web developers think that the answer to this question is: “As a useless artifact fit for a museum”. 

Probably the most common perspective among JavaEE developers is: “As a place to store persistent copies of our classes”. 

Organizations having a history with databases might answer: “We have the same kind of database design that we always had. Then the developers go write everything else in the middle tier.”

Organizations that still think databases are useful might answer: “We write our batch code as stored procedures, but all of our UI code is in the middle tier.”

I have a more “Thick Database” perspective. My answer to the role of the database in web architecture is: “Anything to do with the business rules associated with the business objects or their interaction should be stored as closely as possible to the objects themselves (in the database). Things having to do with the UI should be stored with the UI code. Interfaces between the database and the UI could be stored either with the UI or with the objects, but given the tools that Oracle provides, those interfaces should be database objects.”

There are a number of ideas in my answer to be discussed in more detail:

  1. Store business object code with the objects.  I think this position is pretty hard to argue against, yet I see many shops that do not even think about object logic as a concept. It is all just “code.”  I find this ironic because the whole concept behind OO thinking involves encapsulation, yet few self-identified OO professionals think this way.  If the code is near the objects (in the database), performance is better, fewer context switches are required, and lots of other nice things are possible.
  2. UI code stays with the UI. Similarly, this one is a no-brainer.  Keep all of the UI stuff in one place. However, in my world, I do all of my UI work in the database as well, but I keep it in its own area for management purposes. Formspider™ lets me use PL/SQL for UI scripting.  I appreciate not having to use different programming languages in different parts of my system.
  3. Interfaces could be near the UI code (e.g., ADF BC, Hibernate, TopLink) but are better coded in the database. I prefer that the database be isolated from the developers. I do this by building INSTEAD OF trigger views to present the data in the way that it needs to be displayed on the screens (see the sketch after this list). Then the screens can simply point to these UI views. All of the complexity (and even the physical database design) is hidden from the UI developers.
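
Here is a minimal sketch of that third idea, with hypothetical table, view, and column names. The screen reads and writes a UI view, and an INSTEAD OF trigger routes the writes to the real tables, so the UI developer never sees the physical design:

-- Sketch of a UI view plus INSTEAD OF trigger (hypothetical names):
-- the screen sees only the view; the trigger hides the real tables.
CREATE OR REPLACE VIEW v_customer_screen AS
SELECT c.customer_id,
       c.last_name || ', ' || c.first_name AS display_name,
       a.city
  FROM customers c
  JOIN addresses a ON a.customer_id = c.customer_id
 WHERE a.address_type = 'PRIMARY';

CREATE OR REPLACE TRIGGER v_customer_screen_iou
INSTEAD OF UPDATE ON v_customer_screen
FOR EACH ROW
BEGIN
    -- Route the screen's update of the editable column to the base table.
    UPDATE addresses
       SET city = :NEW.city
     WHERE customer_id = :OLD.customer_id
       AND address_type = 'PRIMARY';
END;
/

The screen simply issues ordinary SELECT and UPDATE statements against v_customer_screen; the underlying tables can be redesigned later without touching the UI.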