How T24 Data Lifecycle Management Improves Database Performance and Efficiency

  • Writer: Josef Mayrhofer
  • 4 hours ago
  • 2 min read

As transactional systems evolve, databases accumulate large volumes of historical data that are rarely accessed but still required for compliance and analysis. Keeping this data in the primary database can impact performance and increase costs, making data archiving a critical practice.


Data Lifecycle Management (DLM)


Temenos Transact addresses this through Data Lifecycle Management (DLM), an archiving solution that introduces a secondary database dedicated to historical data. In this model, the operational database is referred to as LIVEDB, while the archive database is DLMDB.


Tables selected for archiving in LIVEDB have corresponding read-only tables in DLMDB, identified by the suffix #RO. Data is moved to these tables according to business rules and retention policies, yet it remains fully accessible to the application, which does not need to know the data's physical location.


Once archived data exceeds its retention period, it is transferred to #ARC tables and eventually purged. This approach allows Transact to maintain optimal performance while ensuring long-term data availability and compliance.



Architecture and Workflow

Implementing DLM adds a second database, DLMDB, to the Transact database layer. Each table in LIVEDB selected for archiving has a one-to-one counterpart in DLMDB with the same name plus a #RO suffix, where the archived data is stored.
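To make the one-to-one mapping concrete, here is a minimal sketch using two SQLite databases as stand-ins for LIVEDB and DLMDB. The table name, columns, and naming helper are invented for illustration and are not the actual Temenos DLM schema:

```python
import sqlite3

# Illustrative sketch only: table and column names are hypothetical,
# not the real Temenos DLM schema.
livedb = sqlite3.connect(":memory:")   # stands in for LIVEDB
dlmdb = sqlite3.connect(":memory:")    # stands in for DLMDB

# An archivable table in LIVEDB ...
livedb.execute(
    'CREATE TABLE "FUNDS_TRANSFER" '
    '(recid TEXT PRIMARY KEY, booking_date TEXT, payload TEXT)')

# ... and its one-to-one counterpart in DLMDB: same name plus the #RO suffix.
dlmdb.execute(
    'CREATE TABLE "FUNDS_TRANSFER#RO" '
    '(recid TEXT PRIMARY KEY, booking_date TEXT, payload TEXT)')

mirrored = dlmdb.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchone()[0]
print(mirrored)  # FUNDS_TRANSFER#RO
```

The point is only the naming convention: every archivable LIVEDB table is mirrored in DLMDB under the same name with `#RO` appended.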



Process Workflow


DLM continuously identifies aged data in LIVEDB and archives it to DLMDB through a structured process.


Data selection is handled by the ARC.GENERIC tSA service, which uses ARC.GENERIC.REQUEST and the ARCHIVE application to determine which table groups are eligible for archiving based on retention rules. It generates key lists with table names and primary keys, stored in RO.COPY.KEYLIST.
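The selection step can be pictured as follows. This is a hedged sketch, not the ARC.GENERIC service itself: the table name, the booking-date column, and the five-year retention rule are all assumptions made for illustration. The output mimics the spirit of a RO.COPY.KEYLIST entry, i.e. a table name plus the primary keys eligible for archiving:

```python
import sqlite3
from datetime import date, timedelta

# Hypothetical schema and retention rule, invented for this sketch.
livedb = sqlite3.connect(":memory:")
livedb.execute(
    'CREATE TABLE "FUNDS_TRANSFER" (recid TEXT PRIMARY KEY, booking_date TEXT)')
livedb.executemany('INSERT INTO "FUNDS_TRANSFER" VALUES (?, ?)', [
    ("FT001", "2015-03-01"),              # old enough to archive
    ("FT002", "2016-07-15"),              # old enough to archive
    ("FT003", date.today().isoformat()),  # still current, stays in LIVEDB
])

RETENTION_DAYS = 365 * 5  # assumed retention rule: keep five years live
cutoff = (date.today() - timedelta(days=RETENTION_DAYS)).isoformat()

# Build a key list per table: the table name plus the primary keys of
# every row whose booking date falls before the retention cutoff.
key_list = {
    "FUNDS_TRANSFER": [
        recid for (recid,) in livedb.execute(
            'SELECT recid FROM "FUNDS_TRANSFER" '
            'WHERE booking_date < ? ORDER BY recid', (cutoff,))
    ]
}
print(key_list)  # {'FUNDS_TRANSFER': ['FT001', 'FT002']}
```

Only keys are collected at this stage; the actual row data stays put until the copy step runs.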


Data archiving happens in two steps: copying data to DLMDB and purging it from LIVEDB. Since R25, both steps are managed by DL.COPY.PROCESS. In earlier versions, copying and purging were handled by separate services. After a successful copy, key lists move to RO.PURGE.KEYLIST for cleanup.
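The copy-then-purge sequence can be sketched like this. DL.COPY.PROCESS is a Temenos service, not this code; the tables, key lists, and ordering below are assumptions that only illustrate the two-step discipline, purging from LIVEDB strictly after a successful copy:

```python
import sqlite3

# Invented tables standing in for LIVEDB and the DLMDB #RO counterpart.
livedb = sqlite3.connect(":memory:")
dlmdb = sqlite3.connect(":memory:")
livedb.execute(
    'CREATE TABLE "FUNDS_TRANSFER" (recid TEXT PRIMARY KEY, payload TEXT)')
dlmdb.execute(
    'CREATE TABLE "FUNDS_TRANSFER#RO" (recid TEXT PRIMARY KEY, payload TEXT)')
livedb.executemany('INSERT INTO "FUNDS_TRANSFER" VALUES (?, ?)',
                   [("FT001", "a"), ("FT002", "b"), ("FT003", "c")])

copy_keylist = ["FT001", "FT002"]  # stand-in for RO.COPY.KEYLIST
purge_keylist = []                 # stand-in for RO.PURGE.KEYLIST

# Step 1: copy eligible rows into the read-only archive table.
for recid in copy_keylist:
    row = livedb.execute(
        'SELECT recid, payload FROM "FUNDS_TRANSFER" WHERE recid = ?',
        (recid,)).fetchone()
    dlmdb.execute('INSERT INTO "FUNDS_TRANSFER#RO" VALUES (?, ?)', row)

# Step 2: only after a successful copy, purge the rows from LIVEDB and
# hand the keys over to the purge key list for cleanup.
for recid in copy_keylist:
    livedb.execute('DELETE FROM "FUNDS_TRANSFER" WHERE recid = ?', (recid,))
    purge_keylist.append(recid)

live_left = livedb.execute(
    'SELECT COUNT(*) FROM "FUNDS_TRANSFER"').fetchone()[0]
archived = dlmdb.execute(
    'SELECT COUNT(*) FROM "FUNDS_TRANSFER#RO"').fetchone()[0]
print(live_left, archived)  # 1 2
```

Keeping copy and purge as separate phases means a failed copy leaves LIVEDB untouched, which is the safety property the two-step design buys.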


Data access remains transparent. Archived data in the RO tables can be accessed using standard JBC functions (SELECT or READ), either directly from DLMDB or by combining it with LIVEDB data.
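Location transparency can be sketched as a READ-style lookup that tries the live table first and falls back to the archive, so the caller never needs to know where the record physically lives. The helper function and table names are hypothetical, not the actual jBC runtime behavior:

```python
import sqlite3

# Invented stand-ins for LIVEDB and the DLMDB #RO table.
livedb = sqlite3.connect(":memory:")
dlmdb = sqlite3.connect(":memory:")
livedb.execute(
    'CREATE TABLE "FUNDS_TRANSFER" (recid TEXT PRIMARY KEY, payload TEXT)')
dlmdb.execute(
    'CREATE TABLE "FUNDS_TRANSFER#RO" (recid TEXT PRIMARY KEY, payload TEXT)')
livedb.execute("INSERT INTO \"FUNDS_TRANSFER\" VALUES ('FT003', 'current')")
dlmdb.execute("INSERT INTO \"FUNDS_TRANSFER#RO\" VALUES ('FT001', 'archived')")

def read_record(recid):
    """READ-style lookup: try the live table first, then the archive."""
    row = livedb.execute(
        'SELECT payload FROM "FUNDS_TRANSFER" WHERE recid = ?',
        (recid,)).fetchone()
    if row is None:
        row = dlmdb.execute(
            'SELECT payload FROM "FUNDS_TRANSFER#RO" WHERE recid = ?',
            (recid,)).fetchone()
    return row[0] if row else None

print(read_record("FT003"), read_record("FT001"))  # current archived
```

A combined SELECT would work the same way conceptually: union the live and #RO result sets so archived rows appear alongside current ones.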


Data destruction occurs when archived data exceeds its retention period. Instead of moving rows individually, entire partitions are transferred from the #RO to the #ARC tables using database-specific PL/SQL procedures before final purging.
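Conceptually, the partition move looks like the sketch below: a whole date slice shifts from the #RO table to the #ARC table in set-based statements rather than row-by-row deletes. Real DLM does this with database-specific PL/SQL against true partitions; the monthly "partition" column and names here are invented for illustration:

```python
import sqlite3

# Invented #RO and #ARC tables; booking_month plays the role of a
# partition key in this simplified sketch.
dlmdb = sqlite3.connect(":memory:")
dlmdb.execute(
    'CREATE TABLE "FUNDS_TRANSFER#RO" '
    '(recid TEXT PRIMARY KEY, booking_month TEXT)')
dlmdb.execute(
    'CREATE TABLE "FUNDS_TRANSFER#ARC" '
    '(recid TEXT PRIMARY KEY, booking_month TEXT)')
dlmdb.executemany('INSERT INTO "FUNDS_TRANSFER#RO" VALUES (?, ?)', [
    ("FT001", "2014-01"), ("FT002", "2014-01"), ("FT003", "2021-06"),
])

expired_partition = "2014-01"  # slice that has exceeded its retention period

# Move the entire slice with one statement each, instead of per-row moves;
# final purging would later remove it from #ARC as well.
dlmdb.execute(
    'INSERT INTO "FUNDS_TRANSFER#ARC" '
    'SELECT * FROM "FUNDS_TRANSFER#RO" WHERE booking_month = ?',
    (expired_partition,))
dlmdb.execute(
    'DELETE FROM "FUNDS_TRANSFER#RO" WHERE booking_month = ?',
    (expired_partition,))

ro_left = dlmdb.execute(
    'SELECT COUNT(*) FROM "FUNDS_TRANSFER#RO"').fetchone()[0]
arc_rows = dlmdb.execute(
    'SELECT COUNT(*) FROM "FUNDS_TRANSFER#ARC"').fetchone()[0]
print(ro_left, arc_rows)  # 1 2
```

Working at partition granularity is what keeps destruction cheap: dropping or exchanging a partition is a metadata operation, whereas deleting millions of rows individually is not.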


If you want to learn more about the efficiency and best practices for this data maintenance process in Transact, feel free to contact Performetriks. Our T24 experts and performance specialists are always available to ensure that all your data processing is secure and highly efficient.

Happy Performance Engineering! #T24 #Transact #CoreBanking #DataManagement

