Thursday, December 27, 2007

Fact Tables and Dimension Tables

By Ralph Kimball

Dimensional modeling is a design discipline that straddles the formal relational model and the engineering realities of text and number data. Compared to entity/relation modeling, it's less rigorous (allowing the designer more discretion in organizing the tables) but more practical because it accommodates database complexity and improves performance. Contrasted with other modeling disciplines, dimensional modeling has developed an extensive portfolio of techniques for handling real-world situations.

Measurements and Context

Dimensional modeling begins by dividing the world into measurements and context. Measurements are usually numeric and taken repeatedly. Numeric measurements are facts. Facts are always surrounded by mostly textual context that's true at the moment the fact is recorded. Facts are very specific, well-defined numeric attributes. By contrast, the context surrounding the facts is open-ended and verbose. It's not uncommon for the designer to add context to a set of facts partway through the implementation.

Although you could lump all context into a wide, logical record associated with each measured fact, you'll usually find it convenient and intuitive to divide the context into independent logical clumps. When you record facts — dollar sales of a grocery store purchase of an individual product, for example — you naturally divide the context into clumps named Product, Store, Time, Customer, Clerk, and several others. We call these logical clumps dimensions and assume informally that these dimensions are independent. Figure 1 shows the dimensional model for a typical grocery store fact.

In truth, dimensions rarely are completely independent in a strong statistical sense. In the grocery store example, Customer and Store clearly will show a statistical correlation. But it's usually the right decision to model Customer and Store as separate dimensions. A single, combined dimension would likely be unwieldy with tens of millions of rows. And the record of when a given customer shopped in a given store would be expressed more naturally in a fact table that also showed the Time dimension.

The assumption of dimension independence would mean that all the dimensions, such as Product, Store, and Customer, are independent of Time. But you have to account for the slow, episodic change of these dimensions in the way you handle them. In effect, as keepers of the data warehouse, we have taken a pledge to faithfully represent these changes. This predicament gives rise to the technique of slowly changing dimensions, the subject of the next column in this series.

Dimensional Keys

If the facts are truly measures taken repeatedly, you find that fact tables always create a characteristic many-to-many relationship among the dimensions. Many customers buy many products in many stores at many times.

Therefore, you logically model measurements as fact tables with multiple foreign keys referring to the contextual entities. And the contextual entities are each dimensions with a single primary key. (See Figure 1.) Although you can separate the logical design from the physical design, in a relational database fact tables and dimension tables are most often explicit tables.

Actually, a real relational database has two levels of physical design. At the higher level, tables are explicitly declared together with their fields and keys. The lower level of physical design describes the way the bits are organized on the disk and in memory. Not only is this design highly dependent on the particular database, but some implementations may even "invert" the database beneath the level of table declarations and store the bits in ways that are not directly related to the higher-level physical records. What follows is a discussion of the higher level only.

A fact table in a pure star schema consists of multiple foreign keys, each paired with a primary key in a dimension, together with the facts containing the measurements. In Figure 1, the foreign keys in the fact table are labeled FK, and the primary keys in the dimension tables are labeled PK. (The field labeled DD, a special degenerate dimension key, is discussed later in this column.)

I insist that the foreign keys in the fact table obey referential integrity with respect to the primary keys in their respective dimensions. In other words, every foreign key in the fact table has a match to a unique primary key in the respective dimension. Note that this design allows the dimension table to possess primary keys that aren't found in the fact table. Therefore, a product dimension table might be paired with a sales fact table in which some of the products are never sold. This situation is perfectly consistent with referential integrity and proper dimensional modeling.
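The FK-PK discipline described above can be sketched in a few lines of SQL run from Python. This is only an illustration: the table and column names are invented, not taken from Figure 1.

```python
import sqlite3

# Minimal star schema sketch: one fact table whose foreign keys
# reference the primary keys of two dimension tables.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
conn.executescript("""
CREATE TABLE product_dim (product_key INTEGER PRIMARY KEY, product_name TEXT);
CREATE TABLE store_dim   (store_key   INTEGER PRIMARY KEY, store_name   TEXT);
CREATE TABLE sales_fact (
    product_key INTEGER NOT NULL REFERENCES product_dim(product_key),
    store_key   INTEGER NOT NULL REFERENCES store_dim(store_key),
    dollar_sales REAL,
    unit_sales   INTEGER
);
""")
conn.execute("INSERT INTO product_dim VALUES (1, 'Milk'), (2, 'Bread')")
conn.execute("INSERT INTO store_dim VALUES (10, 'Downtown')")
# A fact row must match existing dimension keys...
conn.execute("INSERT INTO sales_fact VALUES (1, 10, 3.98, 2)")
# ...while a dimension row with no fact rows (Bread) is perfectly legal.
try:
    conn.execute("INSERT INTO sales_fact VALUES (99, 10, 1.0, 1)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The rejected insert is exactly the referential-integrity violation the text forbids; the unsold Bread row shows that the reverse situation is allowed.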

In the real world, there are many compelling reasons to build the FK-PK pairs as surrogate keys that are just sequentially assigned integers. It's a major mistake to build data warehouse keys out of the natural keys that come from the underlying data sources. I discuss this fascinating and intricate topic in detail in a pair of Intelligent Enterprise columns, "Surrogate Keys" and "Pipelining Your Surrogates," which you can find in my article archive at www.kimballuniversity.com or at www.intelligententerprise.com.
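As a rough sketch of the idea (not the pipeline from the columns cited above), a surrogate key generator simply hands out sequential integers and remembers which natural key each one replaced:

```python
# Sketch of surrogate key assignment during dimension loading: natural
# keys from the source system are mapped to sequentially assigned
# integers, which become the warehouse's FK-PK pairs. Names are
# illustrative only.
class SurrogateKeyGenerator:
    def __init__(self):
        self.next_key = 1
        self.lookup = {}  # natural key -> surrogate key

    def key_for(self, natural_key):
        """Return the surrogate key, assigning a new one on first sight."""
        if natural_key not in self.lookup:
            self.lookup[natural_key] = self.next_key
            self.next_key += 1
        return self.lookup[natural_key]

gen = SurrogateKeyGenerator()
print(gen.key_for("SKU-4411"))  # 1
print(gen.key_for("SKU-9002"))  # 2
print(gen.key_for("SKU-4411"))  # 1 again: same product, same surrogate
```

Because the warehouse key is just an integer, it stays stable even if the source system recycles or reformats its natural keys.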

Occasionally a perfectly legitimate measurement will involve a missing dimension. Perhaps in some situations a product can be sold to a customer in a transaction without a store defined. In this case, rather than attempting to store a null value in the Store FK, you build a special record in the Store dimension representing "No Store." Now the No Store condition has a perfectly normal FK-PK representation in the fact table.

Logically, a fact table doesn't need a primary key because, depending on the information available, two different legitimate observations could be represented identically. Practically speaking, this is a terrible idea because normal SQL makes it very hard to select one of the records without selecting the other. It would also be hard to check data quality if multiple records were indistinguishable from each other.

Relating the Two Modeling Worlds

Dimensional models are full-fledged relational models, where the fact table is in third normal form and the dimension tables are in second normal form, confusingly referred to as denormalized. Remember that the chief difference between second and third normal forms is that repeated entries are removed from a second normal form table and placed in their own "snowflake." Thus the act of removing the context from a fact record and creating dimension tables places the fact table in third normal form.

I resist the urge to further snowflake the dimension tables and am content to leave them in flat second normal form because the flat tables are much more efficient to query. In particular, dimension attributes with many repeated values are perfect targets for bitmap indexes. Snowflaking a dimension into third normal form, while not incorrect, destroys the ability to use bitmap indexes and increases the user-perceived complexity of the design. Remember that in the presentation system in the data warehouse, you don't have to worry about enforcing many-to-one data rules in the physical table design by demanding snowflaked dimensions. The staging system has already enforced those rules.

Declaring the Grain

Although theoretically any mixture of measured facts could be shoehorned into a single dimension table, a proper dimensional design allows only facts of a uniform grain (the same dimensionality) to coexist in a single fact table. Uniform grain guarantees that all the dimensions are used with all the fact records (keeping in mind the No Store example), and it greatly reduces the possibility of application errors due to combining data at different grains. For example, it's usually meaningless to blithely add daily data to yearly data. When you have facts at two different grains, you place the facts in separate tables.

Additive Facts

At the heart of every fact table is the list of facts that represent the measurements. Because most fact tables are huge, with millions or even billions of rows, you almost never fetch a single record into your answer set. Rather, you fetch a very large number of records, which you compress into digestible form by adding, counting, averaging, or taking the min or max. But for practical purposes, the most common choice, by far, is adding. Applications are simpler if they store facts in an additive format as often as possible. Thus, in the grocery example, you don't need to store the unit price. You merely compute the unit price by dividing the dollar sales by the unit sales whenever necessary.
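The unit-price arithmetic can be made concrete with a tiny sketch (the sample rows are invented):

```python
# Additive facts: dollar_sales and unit_sales can be summed across any
# slice of the fact table; unit price is derived at query time, never
# stored as a fact.
rows = [
    {"dollar_sales": 3.98, "unit_sales": 2},
    {"dollar_sales": 5.97, "unit_sales": 3},
]
total_dollars = sum(r["dollar_sales"] for r in rows)
total_units = sum(r["unit_sales"] for r in rows)
unit_price = total_dollars / total_units  # computed on demand
print(round(unit_price, 2))  # 1.99
```

Had unit price been stored directly, summing it across rows would have produced a meaningless number; storing the two additive components avoids that trap.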

Some facts, like bank balances and inventory levels, represent intensities that are awkward to express in an additive format. You can treat these semiadditive facts as if they were additive — but just before presenting the results to the end user, divide the answer by the number of time periods to get the right result. This technique is called averaging over time.
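A minimal illustration of averaging over time, using made-up balance snapshots:

```python
# Semiadditive fact: a bank balance summed across three monthly
# snapshots is meaningless as a balance, but dividing that sum by the
# number of time periods yields the average balance over the quarter.
monthly_balances = [1000.0, 1200.0, 800.0]   # illustrative snapshots
summed = sum(monthly_balances)                # 3000.0 -- not a real balance
average_balance = summed / len(monthly_balances)
print(average_balance)  # 1000.0
```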

Some perfectly good fact tables represent measurements that have no facts! This kind of measurement is often called an event. The classic example of such a factless fact table is a record representing a student attending a class on a specific day. The dimensions are Day, Student, Professor, Course, and Location, but there are no obvious numeric facts. The tuition paid and grade received are good facts but not at the grain of the daily attendance.

Degenerate Dimensions

In many modeling situations where the grain is a child, the natural key of the parent winds up as an orphan in the design. In the grocery example, the grain is the line item on a sales ticket, but the ticket number is the natural key of the parent ticket. Because you have systematically stripped off all the ticket context as dimensions, the ticket number is left exposed without any attributes of its own. You model this reality by placing the ticket number by itself, right in the fact table. We call this key a degenerate dimension. The ticket number is useful because it's the glue that holds the child records together.
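A small sketch of the "glue" role: grouping invented line-item rows back into their parent tickets by the degenerate dimension alone:

```python
from collections import defaultdict

# The degenerate dimension (ticket number) sits in the fact table with
# no dimension table of its own, yet it is what reassembles the line
# items of one ticket. The sample rows are illustrative.
line_items = [
    {"ticket_number": "T-1001", "product_key": 1, "dollar_sales": 1.99},
    {"ticket_number": "T-1001", "product_key": 2, "dollar_sales": 2.49},
    {"ticket_number": "T-1002", "product_key": 1, "dollar_sales": 1.99},
]
tickets = defaultdict(list)
for item in line_items:
    tickets[item["ticket_number"]].append(item)

print(len(tickets["T-1001"]))  # 2 line items on the first ticket
```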

In the next issue, the sixth column in this Fundamentals series will detail the latest thinking on how to handle slowly changing dimensions.

Ralph Kimball co-invented the Star Workstation at Xerox and founded Red Brick Systems. He has three best-selling data warehousing books in print, including The Data Warehouse Toolkit, Second Edition (Wiley, 2002). He teaches dimensional data warehouse design through Kimball University and critically reviews large data warehouse projects. You can reach him through his Web site, www.ralphkimball.com.

RESOURCES

This is Ralph Kimball's fifth column in his Fundamentals series. The previous four are:

Part 1: "An Engineer's View," July 26, 2002

Part 2: "Design Constraints and Unavoidable Realities," Sept. 3, 2002

Part 3: "Two Powerful Ideas," Sept. 17, 2002

Part 4: "Divide and Conquer," Oct. 30, 2002

Friday, December 21, 2007

A Step-by-Step Guide to Starting Up SaaS Operations

Introduction

Faced with intensifying competition, as well as a desire for more stable revenue streams and stronger customer relationships, software companies are increasingly turning to the Software as a Service (SaaS) delivery model.

Adoption of SaaS is driven in part by end-users, who benefit from access to any application, from anywhere, on virtually any web-enabled device; better licensing and cost control; and assurance that the most recent version of the application is in use. SaaS is built on the legacy of the ASP model, but modernized and enhanced by today’s robust web services integration capabilities, increased bandwidth and bandwidth availability, and more mature infrastructure.

As a strategic offering, SaaS has already shown that it can:

- Open new markets, revenue streams, and distribution channels

- Provide a stable, recurring revenue model

- Afford consolidation of development and support efforts around single versions of code

Now, software companies are facing the complex issues involved with building the service delivery capabilities necessary to support SaaS offerings. Building an SaaS infrastructure is a complex undertaking, requiring a committed team and a focused effort. End-users demand 100% uptime, appropriate Service Level Agreements, and 24x7 call center support. Meeting those demands requires 24x7 application and systems management, hosting, networking and security infrastructures, disaster recovery capabilities, change management policies and procedures, and more.

The Steps

  1. Understand your business objectives and definition of a successful outcome
  2. Select and staff your services delivery team
  3. Define and understand the infrastructure needed to deliver your SaaS application
  4. Select your hosting facility and Internet Service Providers (ISPs)
  5. Procure the infrastructure and software required to deliver your SaaS application
  6. Deploy your SaaS delivery infrastructure
  7. Implement disaster recovery and business continuity planning
  8. Integrate a monitoring solution
  9. Establish a Network Operations Center (NOC), Client Call Center and ticketing system
  10. Design and manage Service Level Agreements
  11. Document and manage the solution
In the following sections, we describe this high-level, step-by-step methodology for successfully starting operations with SaaS.

1: Discovery - Understand the Objectives for Your SaaS Offering

To successfully deploy a Software as a Service offering, you must be guided by clearly defined business requirements, objectives, and timelines. It is critical that these objectives are identified before the process begins. A detailed investigation and discovery process will set the direction for the subsequent decisions related to deploying the offering.

Among the areas for investigation:

- How is the on-demand application designed to run?

- How is the on-demand application designed to be accessed? Where are the users of this application located when accessing the systems?

- Is the on-demand application designed to handle multiple users? If so, how?

- Is the on-demand application designed to meet scalability, security, and failover requirements? If so, how?

It is essential to understand the intricacies of your on-demand offering, and the challenges of transitioning applications traditionally operated by your clients' IT organizations onto an SaaS platform, before starting infrastructure design and component selection.

2: Designate the Operations Team

The SaaS task force then designates the Operations Team, composed of seasoned veterans with both engineering and operational expertise, to design a scalable architecture for hosting the SaaS platform based on the application's requirements. To be successful, the Operations Team will need expertise in multiple technologies, including system and application management; network and security management; change control; infrastructure design; and deployment.

The Operations Team is tasked with developing design solutions that meet the stated objectives. Typically, this is approached in one of two ways: from a bottom-up, cost-based perspective, or from a top-down, maximum-needs perspective.

Unfortunately, neither of these approaches will result in an optimized, competitive offering. A cost-based approach may result in an under-built infrastructure that may not be as effective, efficient, scalable, or secure as is required. A maximum needs approach may result in an over-built infrastructure that is never fully utilized, incurring unnecessary costs and dragging down profits. A middle ground is best, resulting in a salable solution that is logically linked to revenues.

3: Conceive and Design Scalable Infrastructure and Services

With a clear understanding of the application(s) and the service offering, the next step for the Operations Team is to architect a comprehensive infrastructure and its supporting components. These infrastructure components include:

- Data center

- Network components and connectivity

- Security

- Hardware – systems

- Hardware – storage

- Storage tape backup

- Monitoring tools

- Systems management tools

Internal reviews should critically examine cost-benefit issues related to building the infrastructure to support today’s business and application(s) requirements, as compared to short- and long-term architectural considerations for scalability and expanded services offerings.

Final decisions must include strategies for:

- SLA creation and management

- Scalable 24x7x365 systems and application management

- End-user call center support

- Disaster recovery

- Scalability of web, application and database servers

- Performance and availability commitments

- Network and bandwidth capacities

- Security and security management

- Monitoring management and reporting

Obviously, these considerations must also be examined within the context of available budgets, while factoring in ongoing operational expenses to update and maintain the infrastructure.

4: Determine Your Bandwidth Requirements and Select Your Hosting Facility

Hosting your infrastructure behind appropriate public connectivity and in a facility that is best suited to your needs is key to a consistently positive end-user experience. When reviewing bandwidth, you must understand the demographics related to your application(s) by identifying where the majority of your network connections come from. End-users who will access your application from home-based desktop computers will require a different approach, compared to those in corporate offices with dedicated high-speed Internet connections.

Placing your infrastructure as close as possible to the end-user community will reduce network hops and increase performance. Using multiple network connections to your application(s) from tier one providers will eliminate bottlenecks and ensure fast application response times.

If you determine that you will host your infrastructure in a third-party data center, there are some key components to review. Questions include:

- Are the data centers staffed 24x7x365?

- Are there redundant systems for power and cooling? What is the testing frequency?

- What physical security measures are in place?

- How many Internet Service Providers (ISPs) are available for purchasing connectivity?

Once the facility is selected, the Operations Team will need to ensure that the facility and ISP will meet the build and deployment timelines discussed during contract negotiations. It is important not to let this critical component slip during the build phase.

5: Procure the Infrastructure Components

With the overall infrastructure design complete, components with proven reliability and functionality are selected for the actual production infrastructure. A core set of these components will include:

- Firewall / IDS devices

- VPN and SSL acceleration units

- Load Balancers

- Servers

- Storage devices

- Software

- Support contracts

Selecting the right equipment is critical to meeting scalability and business requirements, as well as guaranteed uptime commitments. Equipment should be deployed in a high-availability scheme and, for most production infrastructures, platinum-level support contracts should be executed with vendors to ensure immediate (maximum four-hour) response should any key component fail. During this period, care should be taken to ensure that the selected hardware will be delivered within timelines that meet your deployment master schedule.

6: Deploy the SaaS Delivery Infrastructure

With the arrival of infrastructure components, the Operations Team enters the build phase, deploying the infrastructure in accordance with set specifications. During this hands-on effort, network equipment is racked, burned in, and updated with the latest firmware versions prior to being configured. Configurations are placed on the networking infrastructure that appropriately manage multiple ISP connections for redundancy and segregate traffic between public (customer-facing) and private (administrative and backup) networks. Security devices are updated with the most current intrusion detection system (IDS) software, and firewall rule sets are established that allow your customers access to the systems while keeping unwanted intruders out.

Servers are racked and configured to support overall application(s) requirements. Operating systems are installed and brought up to the appropriate patch levels. Hotfixes specific to your application are installed and utilities that are needed to administer the systems and applications are put into place.

Systems and networks will then need to be tied into your disaster recovery solution. Network device configurations, system configurations, and all data should be backed up nightly to off-site tape facilities, and off-site tapes in storage should be kept available for 3 to 6 months.

7: Implement Disaster Recovery and Business Continuity Planning

With a live application now ready for delivery via SaaS, the task force must focus on business continuity issues. Key questions must be answered:

- What happens in the event of a disaster?

- How quickly can the application be up and running following such an event?

With widely varying disaster preparedness options, from off-site tape backups to leading edge global load balancing technologies across multiple geographic locations, selected solutions should be based on business requirements relative to budget limitations.

8: Integrate a Monitoring Solution

To ensure that all infrastructure components are both working, and working with each other, a monitoring solution is essential. Key components that must be periodically checked include:

- Hardware: memory, CPU, hard drives

- Operating Systems: event logs, process lists, key services

- Application Layer: process, TCP ports, web service checks
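One of the application-layer checks listed above can be sketched in a few lines; a real monitoring solution would layer scheduling, alerting, and escalation on top of probes like this, and the host/port values would come from your own deployment:

```python
import socket

# Minimal TCP port probe that a monitoring loop could run on a
# schedule against each service endpoint.
def check_tcp_port(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Similar one-function checks (disk usage, process presence, HTTP response codes) are then aggregated by the monitoring system into the alerts the NOC responds to.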

9: Establish a Network Operations Center (NOC), Client Call Center, and Ticketing System

Always focused on your service delivery infrastructure, the NOC is the central monitoring station that performs correlations between triggered alerts and appropriate responses. Fully staffed and on alert 24x7x365, the NOC is also your product’s eyes and ears for monitoring system health and performance. Based on set policies and procedures, the NOC must validate an alert, determine the appropriate response, and set the response in motion. Failure to resolve the issue requires the NOC to further escalate the response by assigning responsibility to an on-call engineer.

End-user support is a key component in successfully deploying SaaS. End-users that encounter application-related issues must have a primary point of contact for escalating issues. Responsible for receiving and processing all support calls, 24x7x365, a call center must have policies and procedures in place designed to help end-users who call in with issues (application access issues, feature requests, bug reports), and a clear path for escalating the issue to an appropriate resource for resolution. The call center must also be responsible for monitoring the response to ensure that the problem is fixed in a timely manner, and that the end-user is satisfied with the result.

In addition to serving end-users, the call center provides another conduit for business intelligence generated as a result of daily operations, identifying usability issues and requested enhancements, and disseminating them to the appropriate development groups within the company. Recording calls from end-users also provides an excellent vehicle for continuous improvement and training for call center employees. The call center is also typically responsible for generating customer satisfaction surveys.

To support the infrastructure, a ticketing system is required that connects the human components across the organization (NOC, operations delivery team, etc.) to issues management. An automated system with centralized communication provides the greatest opportunity for efficient issue management. Email, instant messaging, and phone calls, while inexpensive, have all proven to be inefficient and ineffective alternatives. A robust ticketing system also provides the organization with a consistent view into the issues impacting SaaS delivery, from end-user support to application development.

10: Design and Manage a Service Level Agreement (SLA)

Based on the components that comprise your SaaS offering, the task force should then work with the marketing team to develop a comprehensive SLA that meets end-user expectations. Key SLA elements include:

- Application availability

- Infrastructure alert response time

- Call center response time


The task force must then ensure that the infrastructure components and supporting policies and procedures are in place to meet these benchmark agreements, and that they deliver a satisfactory end-user experience. SLAs must be proactively monitored and managed to ensure that SLA-triggered events are tracked and resolved. The system must also identify SLA failures and specify financial credits to end-users for failure to deliver to benchmark levels.
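The arithmetic behind availability targets and credits is worth making explicit. The sketch below translates an availability percentage into allowed downtime per 30-day month and applies an invented credit schedule; any real SLA would define its own tiers:

```python
# Back-of-the-envelope SLA math. The 5%-per-point credit schedule is
# purely illustrative, not an industry standard.
def allowed_downtime_minutes(availability_pct, days=30):
    """Downtime budget implied by an availability percentage."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

def service_credit_pct(actual_pct, target_pct):
    """Illustrative credit: 5% of the bill per whole point missed."""
    if actual_pct >= target_pct:
        return 0.0
    return min(100.0, 5.0 * (target_pct - actual_pct))

print(round(allowed_downtime_minutes(99.9), 1))       # 43.2 minutes/month
print(round(service_credit_pct(99.5, 99.9), 2))       # 2.0 (% credit)
```

Numbers like these make clear why "100% uptime" demands translate into redundant infrastructure: even "three nines" leaves only about 43 minutes of monthly downtime.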

11: Document and Manage the Solution

Once deployed, the Operations Team must document the entire infrastructure, noting any nuances or areas of concern in regards to custom components. The documentation should take advantage of automated tools and be available within a centralized knowledge base.

To be successful, documentation should include information on all aspects of the SaaS environment, including information on: the data center; bandwidth providers; network and security components and configuration; system components and configuration; disaster recovery activities and plans; and business continuity planning.

Once your infrastructure is in place, all components are working together, and your SaaS offering is bringing in revenues every day, ongoing success will result from diligent management. To that end, daily, weekly, and monthly maintenance task lists should be produced for every device in the infrastructure. Regular maintenance windows should also be used to address infrastructure hotspots, in order to remedy issues before they become problems.

System logs should be methodically reviewed for error/warning messages, and response scenarios updated as needed.

Summary

This high-level overview describes a proven methodology for successfully starting operations with SaaS. It is important to note, however, that there is no substitute for domain expertise. Therefore, the most critical element to have in place, before taking on this challenge, is a team of experts in operations and engineering who have previously designed, built, and managed complex infrastructures.

Thursday, December 13, 2007

Getting From Requirements to Code
Model-driven development provides the technology link between business requirements and production applications.
by Mike Sawicki

It's Monday morning, and you have just completed a grueling meeting with your supervisors, who had a difficult time understanding your explanations of how your development team will implement the latest business requirements. They really started to glaze over while reading the latest specification that was laced with too much of what they call jargon and references to industry standards that simply aren't of interest to them.

You need to do two things: 1) improve communication and understanding of the project, so that the complexity of implementing the technical aspects of the application is reduced; and 2) eliminate construction errors that result from a lack of understanding or outright omission.

Let's face it. Given the variety of technology and design decisions that must be mastered, building enterprise-scale Java applications that deliver business value is difficult work. We are constantly challenged to solve complex business problems with very complex technology. Getting from requirements to code, and getting it correct, is a challenge for many an overburdened development staff. The ideal solution to this problem is a development approach that improves communication and understanding, reduces technical complexity, and eliminates errors of omission or lack of expertise. These are exactly the problems model-driven development (MDD) solves.

Simply stated, MDD uses three simple steps to take you from requirement to code. The model, transform, and elaborate work cycle transforms visual models describing the structure, behaviors, and necessary processes of your application into working systems (see Figure). It's not magic, as you will see, but it's an innovative approach that contributes to better understanding, simplification, and consistency (and therefore quality) in your application. There are three important aspects to MDD: abstraction, frameworks, and automation.



Business Value Delivery
Abstraction. As you might guess, models and modeling are central to the activities that lead to the development and deployment of good applications. These applications can deliver business value to your organization (by improving revenue, saving money, reducing cycle time, or improving customer satisfaction) and are reliable (by delivering correct information to the correct people when needed). Modeling helps development teams improve their understanding of the problem to be solved and shields them from the complexity of repetitive coding tasks and component assembly. In addition to giving a really clear picture of the structure, behaviors, and processes of the application, abstraction improves understanding in three areas: reducing the complexity of the business aspects of the system, creating a separation of concerns, and managing both technical and functional architectures.

Frameworks. Having made your platform decision, there are some aspects of your system that can be addressed using frameworks. Frameworks reduce the complexity of the technology aspects of the system. For example, suppose you decide to build an application with a Web-based front end and an Enterprise JavaBeans (EJB) back end. This platform decision means that you must provide the logic that organizes and exchanges data between the client interface and back-end components, as well as the logic for data acquisition and maintenance.

For those client-interface issues, a well-known open source framework like Apache Struts might make sense for addressing the presentation-layer mechanics. Model-view-controller (MVC) is a fairly popular (and proven) approach for managing the control flow of Web applications. Apache Struts has a number of features with which you can define the look and feel of particular fields, the flow of the Web application, the type of Web application generated, and so on.

This approach helps simplify much of the implementation of these technical aspects. The mechanics of MVC can be applied using automated technology transformations, thereby simplifying the implementation. A similar approach can be employed for implementing back-end components with a persistence framework such as EJB entity beans or Hibernate.

Automation. The main challenge with models is that for many organizations they are considered not much more than an artistic venture and do not directly contribute to code. That is, it is difficult to see how a Unified Modeling Language (UML) class model or a use case directly relates to the code artifacts that we need to build. Oftentimes, developers have real difficulty synchronizing the code they have written with the models they've built.

In this case, the model becomes outdated or out of synch with the code because there is no link from the model to the code. Soft linkage of models to the code by using automated transformations makes modeling an active part of the development process. Automated transformations, then, in addition to providing the linkage, are the real strength behind automation. Automated transformations help us to eliminate errors of omission or lack of knowledge; enforce best practices for architecture and coding as it relates to the problem domain; accelerate, with high predictability, the steps from requirements to design architecture to code; and deliver business value consistently faster and continuously.

Abstractions, frameworks, and automation are the essence of MDD. We build models and use frameworks to visualize and communicate the desired structure and behavior of our system. We use automated transformations to ensure quality, improve delivery capacity, and manage risk.

Models, Frameworks, and Transformations
The domain model, application model, and code represent different layers of abstraction (see Figure). The domain model is a technology-agnostic representation of the system. It describes the classes, behaviors, and processes that comprise the application. The application model represents the technology-specific architecture with which we will implement our system, and the code—well, this is the end game. These are the objects that we build to implement our system.


Each of the different layers is expressed in a different fashion. Our domain models, for example, can be expressed in the form of UML diagrams. Class diagrams provide a serviceable picture of structure and the relationships among the entities. Use cases convey information regarding the behaviors or process tasks that represent specific requirements in our systems.

The application model represents the platform architecture that we employ. If we decide that the implementation platform will be Web-based Java, we will likely need to follow the rules for building and deploying this type of application. We are a bit closer to code, but not quite there. At the application layer we might still use UML diagrams to represent architecture decisions, but we have not yet reached the code.

Getting from requirements to business model to application model to code requires a series of translations. Naturally, we can do these translations manually, but manual translation is where errors or omissions can infect the application. For productivity and quality, an automated approach makes better sense; this is where automated transformations come into play.

The technology and the implementation transformations are the mappings that automate the domain model-to-application model and the application model-to-code translations. In other words, the technology and implementation transformations take you from a semantically rich, yet nontechnical, level of abstraction in the domain to a specific platform architecture and then to the syntactically precise code implementation.

A technology pattern describes how one model can be mapped to another. Technology patterns can simplify a couple of tasks for you. First, a pattern can define rules that map the structure and behavior of the application (DomainClasses to EJBEntityComponents and DomainOperations to BusinessMethods) at whatever granularity is required. Second, these mappings can create or update the presentation, business logic, and database model elements required for a specific technology platform (such as J2EE) based on the object definitions found in the domain model.
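As an illustration only, a technology pattern's mapping rules could be sketched as plain Java functions. The naming conventions here (the "EJB" suffix, the business-method signature shape) are assumptions for the sketch, not the rules of any particular MDD tool:

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of a technology pattern: rules that map technology-agnostic
// domain model elements onto platform-specific (EJB-style) elements.
public class TechnologyPattern {

    // Rule 1: a DomainClass maps to an EJB entity component name.
    static String toEntityComponent(String domainClass) {
        return domainClass + "EJB";
    }

    // Rule 2: each DomainOperation maps to a business method signature.
    static List<String> toBusinessMethods(List<String> domainOperations) {
        return domainOperations.stream()
            .map(op -> "public void " + op + "();")
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(toEntityComponent("Order"));  // OrderEJB
        System.out.println(toBusinessMethods(List.of("ship", "cancel")));
    }
}
```

A real transformation would of course emit full model elements rather than strings, but the principle is the same: the mapping is a deterministic rule, applied mechanically.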

Implementation patterns map the application model to the code. Implementation transformations are not a new concept; they are an abstraction of many established developer practices. In completing a development task, a developer will oftentimes reuse or reapply what was learned from a previously successful project, relevant standards, or best practices. Implementation patterns collect this information so it can be reused: you capture the knowledge in a code template that can be used with a code generator. In one quick step you can apply the knowledge, practice, and code reuse captured in your implementation pattern. Patterns can be implemented in a variety of general-purpose languages or in a specialized pattern language.
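A minimal sketch of a code template applied by a generator, using plain string substitution; the ${...} placeholder syntax and the property-accessor template are illustrative assumptions, not a specific pattern language:

```java
import java.util.Map;

// Sketch of an implementation pattern: a captured code template
// (here, a class with one property and its accessors) expanded by
// substituting placeholder bindings taken from the application model.
public class TemplateGenerator {

    static final String TEMPLATE =
        "public class ${className} {\n" +
        "    private ${type} ${property};\n" +
        "    public ${type} get${Property}() { return ${property}; }\n" +
        "    public void set${Property}(${type} v) { this.${property} = v; }\n" +
        "}\n";

    // Expand the template by replacing each ${key} with its bound value.
    public static String generate(Map<String, String> bindings) {
        String out = TEMPLATE;
        for (Map.Entry<String, String> e : bindings.entrySet()) {
            out = out.replace("${" + e.getKey() + "}", e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(generate(Map.of(
            "className", "Customer",
            "type", "String",
            "property", "name",
            "Property", "Name")));
    }
}
```

A production generator would add parsing, validation, and merge-with-existing-code behavior, but the essence is the same: knowledge captured once in the template is applied consistently everywhere.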

MDD means that we model what we want to build. We use automated transformations to implement solution frameworks for translation from business model to application model and from application model to the code.

Modeling visually captures important facts about the business requirements and provides a vehicle by which information can be viewed and exchanged between the business stakeholders and the development teams. Models provide a clearer picture of what the business needs and what IT needs to build. The automated transformations of business model to application model and application model to code improve what has previously been a series of manual processes and help align the end results with best-practice design and coding standards embodied in frameworks. MDD makes those Monday morning meetings a bit easier to take.
