Last year, I worked on a law enforcement project in which a central database was created as a repository for information about all violent crimes committed in the United States. Once the database was created, sophisticated data mining techniques could be employed to find patterns in that information for the purpose of apprehending serial criminals. The project involved various law enforcement agencies sharing data that until then had been kept only by the National Crime Information Center, more commonly known as NCIC. To even attempt such an endeavor, a normalized vocabulary describing violent crimes had to be created.
Many Codes, Many Tables
The first phase of the project was developing the normalized vocabulary, which required that law enforcement personnel perform the tedious task of researching thousands of violent crimes. This research yielded thousands of descriptive traits related to violent crime, which were grouped into categories; each category became a code table.
The data in these tables was to be used by the application as taxonomy codes (also known as lookup values) for new cases being entered. The ActsDoneToBody table, for instance, contained such values as BodyFrozen and BodyBuried. When a new case was entered, the data entry clerk would check off all the taxonomy codes that matched that particular crime. The case, as entered in the system, would be highly normalized, thereby making it searchable and analyzable.
There were over a hundred of these code tables, detailing everything from firearms and vehicles to the voice and facial hair of a suspect to the activity conducted by a victim when assaulted. I have included samples from two code tables in Figure 1. The following SQL command can be used to create the FacialHair table:
CREATE TABLE dbo.FacialHair (
    codIDCode   varchar(255) NOT NULL,
    codCodeDesc varchar(255),
    PRIMARY KEY (codIDCode)
)
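Once created, the table would be seeded with the lookup values shown in Figure 1. A minimal sketch of what that load might look like (the codes and descriptions come straight from the figure; the real table held the full list):

-- Sample rows taken from Figure 1
INSERT INTO dbo.FacialHair (codIDCode, codCodeDesc) VALUES ('F1', 'Clean Shaven')
INSERT INTO dbo.FacialHair (codIDCode, codCodeDesc) VALUES ('F2', 'Van Dyke')
INSERT INTO dbo.FacialHair (codIDCode, codCodeDesc) VALUES ('F6', 'Goatee')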
Denormalization
The second phase of the project involved the software architects. I was to create an entire solution around the normalized taxonomy codes. This involved designing and
implementing the database and the application. I was given the taxonomy code tables in Microsoft Excel format. Each Excel sheet contained a set of taxonomy codes and became a table. At this point, according to the rules of normalization, the database was correctly modeled.
I found myself unhappy with having to manage so many tables, and every combo box in the application was going to need its own select statement. Worse yet, if the system were ever internationalized, I would have to go through each table and add language identifier columns.
I decided to denormalize the tables. Denormalization is the deliberate breaking of normal form, which typically means that some data is duplicated. A denormalized structure would resemble the table depicted in Figure 2. This table, which I called MasterCode, can be created with the following SQL code:
CREATE TABLE dbo.MasterCode (
    codIDCodeTable varchar(255) NOT NULL,
    codIDCode      varchar(255) NOT NULL,
    codCodeDesc    varchar(255),
    PRIMARY KEY (codIDCodeTable, codIDCode)
)
Let me digress briefly to explain why the three-character prefix cod appears on every column in the SQL listed above. You should get into the habit of assigning each table a unique three-character prefix and then prefixing every column with it. If you use prefixes, you will not be forced to alias your joins, because all the column names will be unique. This can save you time, especially if you later decide to write your own SQL generators (programs whose output is SQL to be used in other programs).
Figure 2 illustrates how I consolidated two code tables by adding a column called codIDCodeTable. The primary key is a composite key that includes the codIDCodeTable field and the codIDCode field. I have encountered implementations in which the primary key is simply the codIDCode field. This is advisable if you are in a position in which you can decide to give every code value a system-wide unique identifier. The reason I have not adopted this strategy in my example is that, if you are migrating from an existing system, you may not have the flexibility of changing all your lookup identifiers to unique values. Thus, the composite key will ease your migration woes!
The table is considered denormalized because there are duplicate values in the codIDCodeTable column. This violates the Second Normal Form (2NF), which applies only to tables with composite keys. Second Normal Form requires that each non-key column be fully functionally dependent on the entire key, not just a part of it. In this case, codCodeDesc is dependent only on the codIDCode field, not on the entire key.
When you denormalize, you should question your reasons for doing so. So I now pose the question: is this a valid denormalization? It is a popular opinion that the only reason to denormalize data is to achieve performance improvements. It is also thought that denormalization incurs additional programming costs because you have to write code to maintain the denormalization. While these statements are mostly correct, there are some exceptions, which I will explore.
By adding the codIDCodeTable column to the MasterCode table, I have indeed denormalized, but not for the purpose of improving performance. In fact, denormalization would slightly degrade performance, owing to the extra time needed to filter the MasterCode table for the desired codes. Nor will I experience additional programming costs; I will reduce them. This contradicts the idea that denormalizing decreases programmer productivity.
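To illustrate the savings: instead of a separate select statement for each code table, every lookup combo box in the application can now be filled by one parameterized query against MasterCode. A minimal sketch, using the column names defined above (the code-set name would be supplied as a parameter):

-- One query serves every lookup list; only the code-set name changes
SELECT codIDCode, codCodeDesc
FROM dbo.MasterCode
WHERE codIDCodeTable = 'FACIAL HAIR'
ORDER BY codCodeDesc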
Risks
Before you commit to applying this technique indiscriminately, let me briefly discuss some situations in which you may not want to use it, opting instead to use multiple code tables. If, for example, your database has only a few code tables and the likelihood of new code tables seems small, you would not benefit much in terms of productivity and maintenance by using a master code table. Or, if any performance hit whatsoever would severely affect your system, you may want to consider either not employing this technique or buying more hardware. (You may want to buy the hardware anyway; if even a small degradation would impair the system, you are probably already exceeding its safe limits.)
As I mentioned, grouping all codes into one table can begin to degrade performance as the code table starts to expand, but the actual degradation becomes almost insignificant after indexing the codIDCodeTable column. If you want to further reduce performance degradation after indexing, you may want to avoid putting all your codes into one table and use a rule of thumb to decide whether a certain set of lookup values should have its own table. A colleague has adopted the rule that, if a set of lookup values has more than 51 records, it gets its own table. This allows him to include the states of the United States in the master code table. My own rule is, if a set has more than a few hundred records and it appears that more may be added over time, I create a table for it.
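For what it is worth, the composite primary key defined earlier already gives SQL Server an index whose leading column is codIDCodeTable, so filtering by code set is cheap; if your key or platform differs, an explicit index is a one-line addition. A sketch, with a hypothetical index name:

-- Hypothetical index name; speeds up WHERE codIDCodeTable = '...' filters
CREATE INDEX IX_MasterCode_CodeTable
    ON dbo.MasterCode (codIDCodeTable)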
Because the vehicle model table for the violent crime database contained about 4,700 records and, of course, new models appear yearly, I created a VehicleModel table.
Migrating Existing Systems?
Assume that you have decided to merge all your code tables into one. In other words, you have adopted a taxonomy consolidation initiative. (Or at least that is what you will tell your boss if he asks why you are having so much fun!) What is the best way to modify a production system with all of its associated applications? Well, you may not have access to the application source code, which would mean that you cannot modify the applications themselves. How, then, can you make the application continue to function correctly once you start deleting code tables?
The solution is simple. First, create the MasterCode table and populate it with all the data from all the code tables. Then create views on the MasterCode table that are named the same as your existing code tables. For example, the following code creates a view named FacialHair:
CREATE VIEW dbo.FacialHair AS
SELECT
    codIDCode   AS IDFacialHair,
    codCodeDesc AS FacialHairDesc
FROM dbo.MasterCode
WHERE codIDCodeTable = 'FACIAL HAIR'
Before you can create this view, you must drop or rename the existing FacialHair table; otherwise, you will get a message saying that there is already an object named FacialHair in the database.
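For the population and cleanup steps, one possible sequence for a single code table is sketched below. It assumes the legacy table exposes the column names shown in Figure 1 (as the view's aliases suggest); substitute your actual column names, and repeat (or generate) the statements for each code table:

-- Copy the existing lookup values into MasterCode, tagging each row with its code-set name
INSERT INTO dbo.MasterCode (codIDCodeTable, codIDCode, codCodeDesc)
SELECT 'FACIAL HAIR', IDFacialHair, FacialHairDesc
FROM dbo.FacialHair

-- Drop (or rename) the old table so the view of the same name can be created
DROP TABLE dbo.FacialHair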
Notice that I aliased the column names to make the view appear identical in structure to the old FacialHair code table. This tricks the application into continuing to access the object named FacialHair; however, it will now be a view instead of a table. This strategy allows you to migrate databases to the proposed data model.
Many software development companies use this technique. In a recent engagement with Microsoft, I found innumerable instances of the master code table concept. They had used a similar table structure to store product types, customer segment types, and company types. I have seen projects in the human resources sector that use master code tables to house employee job titles, job codes, department codes, state codes, and ZIP codes.
Rewards
So, what is the real benefit of denormalization? The most significant benefit is decreased time to market. You will save yourself at least several days by not having to create 50 tables with similar structure but varied field names. Admittedly, this savings would be reduced if you are migrating from a production system, since you would have to create the corresponding views.
Furthermore, you will know exactly where all your data lies; that is to say, maintenance will be much easier, and you will need only one screen to maintain all of your code tables! New hires will be able to understand the data model quickly, since they will not need to learn the dozens of tables that would otherwise have housed all the taxonomy values. And, finally, when you are ready to put your database on the Internet with multilanguage support, you will have fewer tables to modify.
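As a rough illustration of that last point, adding language support becomes a single schema change to MasterCode rather than one change per code table. The column name below is hypothetical, and in practice the primary key would also need to be extended to include it so that each code can carry one description per language:

-- Hypothetical language column; one ALTER statement instead of dozens
ALTER TABLE dbo.MasterCode
    ADD codLanguage varchar(10) NOT NULL DEFAULT 'EN'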
In an ideal situation, you will have just begun to develop a new application, so you will not be faced with the onerous task of modifying an existing system. Since such a situation is not very likely, I am well versed in the philosophy, "If it ain't broke, don't fix it!" What you will find is that, when you adopt this strategy for your code tables, you will have time to take care of all those other things that are broken!
IDFacialHair  FacialHairDesc
F1            Clean Shaven
F2            Van Dyke
F3            Fu Manchu
F4            Full Beard
F5            Fuzz
F6            Goatee
F7            Lower Lip
F8            Mustache
F9            Sideburns
F10           Unshaven

IDVoice  VoiceDesc
V1       Disguised
V2       Soft
V3       High Pitch
V4       Loud
V5       Low Pitch
V6       Medium
V7       Monotone
V8       Nasal
V9       Pleasant
V10      Raspy

Figure 1: The normalized database contained many small tables.
codIDCodeTable  codIDCode  codCodeDesc
FACIAL HAIR     F1         Clean Shaven
FACIAL HAIR     F2         Van Dyke
...
VOICE           V1         Disguised
VOICE           V2         Soft
...

Figure 2: In a denormalized database, many code tables can be consolidated into one.