Datafile fragmentation has many causes. Fragmentation occurs when unused gaps of free space accumulate inside the data file, making the file larger and less efficient than it needs to be.
Fragmentation stems from how the datafile is constructed and how data is stored in it. The datafile is divided into blocks of 128 bytes. Data stored in the datafile must occupy whole blocks, so a record of 129 bytes uses two blocks. When data is added, records are written consecutively, using the next group of free blocks.
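The whole-block rule above is simple ceiling division. A minimal sketch (the 128-byte block size is from this article; the function name is illustrative only):

```python
import math

BLOCK_SIZE = 128  # bytes per block, per the datafile layout described above

def blocks_needed(record_bytes: int) -> int:
    """Number of whole blocks a record occupies; a partial block still counts."""
    return math.ceil(record_bytes / BLOCK_SIZE)

print(blocks_needed(128))  # 1 block: fits exactly
print(blocks_needed(129))  # 2 blocks: one byte spills into a second block
```

This is why even a one-byte overflow doubles the storage cost of a small record.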
Fragmentation can occur when data is deleted or modified. When data is deleted, its blocks are freed, but those blocks may never be reused if later records do not fit in them. When a record is modified so that it needs more blocks, it is moved to the next available run of consecutive blocks large enough to hold it. For example, a 100-byte record fits in a single block. If a field is updated so the record grows to 200 bytes, it now needs two blocks; if the next block is already occupied, the record is moved elsewhere, leaving its original single block behind as a gap.
Other changes can also cause this behavior, such as structure changes in which a field is modified, added, or deleted.
Many databases perform modifications and deletions, along with other operations that touch the datafile. The more frequently these operations occur, the more beneficial it is to run a compact. When compacting, enabling the Advanced compact options "Force updating of records" and "Compact address table" is recommended for the best results.