The Need for Denormalization of Tables in a Database
Denormalization is the process of storing one fact in multiple places. It speeds data retrieval at the expense of data modification. Only one valid reason exists for denormalizing a relational design: to enhance performance. However, several indicators help identify systems and tables that are potential denormalization candidates. These are:
- Many critical queries and reports rely upon data from more than one table, and these requests often need to be processed in an on-line environment.
- Repeating groups exist which need to be processed as a group rather than individually.
- Many calculations must be applied to one or more columns before queries can be answered.
- Tables need to be accessed in different ways by different users during the same timeframe.
- Many large primary keys exist which are clumsy to query and consume a large amount of DASD (direct access storage) when carried as foreign key columns in related tables.
- Certain columns are queried a large percentage of the time; consider 60% or greater a cautionary threshold flagging denormalization as an option.
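The first indicator above, critical queries spanning several tables, is the classic case. The trade-off can be sketched with a minimal example using Python's built-in `sqlite3` module; the table and column names (`customer`, `orders`, `orders_denorm`) are hypothetical, chosen only for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized design: the customer name lives in exactly one place.
cur.execute("CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE orders (order_id INTEGER PRIMARY KEY,
               cust_id INTEGER REFERENCES customer(cust_id), amount REAL)""")
cur.execute("INSERT INTO customer VALUES (1, 'Acme Ltd')")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(10, 1, 99.0), (11, 1, 25.0)])

# Reading the customer name for an order requires a join:
row = cur.execute("""
    SELECT o.order_id, c.name
    FROM orders o JOIN customer c ON c.cust_id = o.cust_id
    WHERE o.order_id = 10
""").fetchone()
print(row)  # (10, 'Acme Ltd')

# Denormalized design: the name is copied into every order row,
# so the same question becomes a single-table lookup ...
cur.execute("""CREATE TABLE orders_denorm (order_id INTEGER PRIMARY KEY,
               cust_id INTEGER, cust_name TEXT, amount REAL)""")
cur.execute("""
    INSERT INTO orders_denorm
    SELECT o.order_id, o.cust_id, c.name, o.amount
    FROM orders o JOIN customer c ON c.cust_id = o.cust_id
""")
row = cur.execute(
    "SELECT order_id, cust_name FROM orders_denorm WHERE order_id = 10"
).fetchone()

# ... but data modification pays the price: a name change must now
# touch every copied row instead of the single customer row.
cur.execute("UPDATE orders_denorm SET cust_name = 'Acme Plc' WHERE cust_id = 1")
```

The faster read and the heavier update are two sides of the same coin, which is why the indicators above emphasize how often the data is queried versus how often it changes.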
Be aware that each new RDBMS release usually brings enhanced performance and improved access options that may reduce the need for denormalization. However, most of the popular RDBMS products will still, on occasion, require denormalized data structures. There are many different types of denormalized tables that can resolve the performance problems caused by accessing fully normalized data. The following topics detail these types and give advice on when to implement each.
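As a small preview of one such type, a summary table addresses the "many calculations" indicator by storing a pre-computed aggregate. The sketch below, again with hypothetical names (`sale`, `sale_summary`), shows the idea; in practice the summary must be refreshed (by batch job or trigger) whenever the base table changes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sale (sale_id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
cur.executemany("INSERT INTO sale VALUES (?, ?, ?)",
                [(1, 'EAST', 100.0), (2, 'EAST', 50.0), (3, 'WEST', 70.0)])

# Without denormalization, every report recomputes the aggregate:
totals = cur.execute(
    "SELECT region, SUM(amount) FROM sale GROUP BY region ORDER BY region"
).fetchall()

# A summary table stores the calculated result once, so reports
# become simple single-row lookups at query time.
cur.execute("CREATE TABLE sale_summary (region TEXT PRIMARY KEY, total REAL)")
cur.execute("INSERT INTO sale_summary SELECT region, SUM(amount) FROM sale GROUP BY region")
fast = cur.execute("SELECT total FROM sale_summary WHERE region = 'EAST'").fetchone()[0]
print(fast)  # 150.0
```

The cost, as with all denormalization, is that the summary can drift out of date between refreshes, so it suits reporting workloads that tolerate slightly stale figures.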