Temenos Data Hub Explained: Architecture, Data Types, and Use Cases
- Josef Mayrhofer

The Temenos Data Hub (TDH) was originally created for data extraction and has since evolved into a real-time data management solution that can integrate with external tools.
Banks can obtain Transact data from TDH in four primary forms:
Streaming Data: Real-time data events and integration messages, accessed through a stream processing platform such as Apache Kafka, Amazon Kinesis, or Azure Event Hubs.
Near Real-time Relational Data: A relational replica of Transact data, kept up to date in near real time via the streaming messages.
End-of-Day Relational Data: A snapshot replica of Transact data taken once end-of-day business processes have completed, i.e., in its "end of day" state.
Temporal Snapshot Data: Historical end-of-day snapshots, retained to meet regulatory, audit, and reconciliation needs.
Components of TDH:
Event Streaming captures and processes Transact data events, both in near real time and during the initial load.
The Extract Data Store (EDS) retains an unparsed version of current or historical Transact End-of-Day data in CSV format, which can be relocated to various locations after extraction.
The Operational Data Store (ODS) holds a relational copy of the current day’s Transact data, which is continuously updated and curated in near real-time. This relational data is stored and refreshed through streaming messages.
The Snapshot Data Store (SDS) preserves a relational replication of Transact data, akin to the ODS, but it also features an incremental historical snapshot for each business day. This snapshot captures essential data points for regulatory, reconciliation, and audit purposes.
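The temporal-snapshot idea behind the SDS can be sketched in plain SQL. This is an illustrative model only: the table name, columns, and sample data below are assumptions, not the actual SDS schema.

```python
import sqlite3

# Minimal sketch of a temporal snapshot table: one row per account per
# business day, as produced at close of business (schema is hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE account_snapshot (
        business_date TEXT,   -- end-of-day date of the snapshot
        account_id    TEXT,
        balance       REAL,
        PRIMARY KEY (business_date, account_id)
    )
""")
conn.executemany(
    "INSERT INTO account_snapshot VALUES (?, ?, ?)",
    [("2024-03-01", "ACC-1", 1000.00),
     ("2024-03-02", "ACC-1", 1250.00)],
)
# "As-of" query for audit/reconciliation: the latest snapshot on or
# before a given business date.
row = conn.execute(
    """SELECT balance FROM account_snapshot
       WHERE account_id = ? AND business_date <= ?
       ORDER BY business_date DESC LIMIT 1""",
    ("ACC-1", "2024-03-01"),
).fetchone()
print(row[0])  # 1000.0
```

Because each day appends rather than overwrites, historical states stay queryable indefinitely, which is what the regulatory and reconciliation use cases rely on.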
Temenos Data Engineering (TDE) oversees the configuration, creation, and scheduling of diverse dataflows that integrate data into TDH, ensuring effective data quality management across all operations.
The Semantic Query Layer is a GraphQL-based semantic layer that facilitates the creation and management of efficient APIs directly from the Operational Data Store (ODS).
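A call to such a GraphQL layer might look roughly as follows. The endpoint URL, type name, and field names are invented for illustration and are not the actual TDH schema.

```python
import json
from urllib import request

# Hypothetical endpoint; a real deployment would expose its own URL.
TDH_GRAPHQL_URL = "https://tdh.example.bank/graphql"

# Assumed query shape: an "account" type with a few illustrative fields.
query = """
query ($id: String!) {
  account(accountId: $id) {
    accountId
    currency
    workingBalance
  }
}
"""
payload = json.dumps({"query": query, "variables": {"id": "ACC-1"}}).encode()

# GraphQL over HTTP is conventionally a POST with a JSON body.
req = request.Request(
    TDH_GRAPHQL_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)
# resp = request.urlopen(req)                     # uncomment against a live server
# account = json.loads(resp.read())["data"]["account"]
```

The point of the semantic layer is that such queries resolve directly against the ODS, so the API shape is defined once and served from the relational replica rather than from the Transact OLTP database.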
TDH interacts with the following components to replicate Transact data into the near-real-time data stores:
IRIS – For interaction with Transact via IRIS REST API calls.
Enterprise Streaming – For interaction with Enterprise Streaming through REST API calls.
Schema Registry – For accessing schema-related information from the schema registry via REST API calls.
Kafka – For retrieving event messages from the streaming platform through Kafka.
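Retrieving event messages from Kafka can be sketched as below. The envelope format, topic name, and broker address are assumptions for illustration, not the actual TDH message contract; the consumption loop is shown in comments because it needs the third-party confluent-kafka package and a live broker.

```python
import json

def parse_transact_event(raw: bytes) -> tuple:
    """Split a (hypothetical) event envelope into (table, record)."""
    event = json.loads(raw)
    return event["table"], event["record"]

# Consuming from the streaming platform would look roughly like this:
#
# from confluent_kafka import Consumer
# consumer = Consumer({"bootstrap.servers": "broker:9092",
#                      "group.id": "tdh-reader",
#                      "auto.offset.reset": "earliest"})
# consumer.subscribe(["transact-events"])   # topic name is an assumption
# while True:
#     msg = consumer.poll(1.0)
#     if msg is not None and msg.error() is None:
#         table, record = parse_transact_event(msg.value())

sample = b'{"table": "ACCOUNT", "record": {"id": "ACC-1", "balance": 1000}}'
print(parse_transact_event(sample))  # ('ACCOUNT', {'id': 'ACC-1', 'balance': 1000})
```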
TDH Integration:
Integrating Keycloak into TDE enables single sign-on across all modules: the Keycloak server runs inside the existing Docker container, and a Keycloak database is created on the MySQL server.
Temenos Data Hub (TDH) works with Temenos Enterprise Pricing (TEP) alongside Transact by employing a shared outbox: each system has its own IRIS configuration and schema, and both post data to a single Kafka topic.
TDH accepts Outbox events originating from Transact.
The Temenos Data Engineering (TDE) application uses the REST APIs of the DW interface to enable and disable Temenos Transact applications.
Out of the box, TDH comes with the following functionality:
Pre-configured data flows that export 122 tables from Transact and populate two optimized data stores (the Operational Data Store and the Snapshot Data Store).
The Operational Data Store replicates Transact data into a relational database format in near real time, with multi-value, multi-language, and local reference fields already prepared for relational access.
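As a rough illustration of how a multi-value field can be prepared for relational access, the sketch below flattens a delimited multi-value string into one child row per value. The delimiter and field names are assumptions, not the actual ODS mapping.

```python
# Stand-in for the multi-value delimiter used in the source record.
VALUE_MARK = "|"

def flatten_multivalue(parent_id: str, field: str) -> list:
    """Turn one multi-value field into one relational child row per value,
    preserving the value's position so ordering is not lost."""
    return [
        {"parent_id": parent_id, "position": i + 1, "value": v}
        for i, v in enumerate(field.split(VALUE_MARK))
    ]

# A customer record with three language values becomes three child rows.
rows = flatten_multivalue("CUST-7", "en|de|fr")
print(rows)
```

Flattening like this is what lets ordinary SQL (and the semantic query layer) join and filter on values that Transact stores inside a single field.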
The Snapshot Data Store offers a full or incremental snapshot of Transact data at the end of the day, including fields such as financial accounts, accumulated interest, and effective interest rates that are computed at close of business. The Snapshot Data Store also retains the history of these end-of-day snapshots.
175 banking APIs that are served from these data stores.
TDH does not encompass the following:
Reporting or Analytical Content - TDH does not include reporting, dashboards, BI applications, or analytics. These can be added by acquiring Temenos Analytics, which gives the bank the Analytical Data Store, the Analytics web application, and various reporting and analytics content packs.
The capability to integrate and process "big data" such as unstructured data sources like video, audio, images, etc. This feature can be obtained by upgrading to Temenos Data Lake.
Direct integration with XAI. The Data Engineering components that connect to XAI for training or scoring are available with an upgrade to the complete Temenos Data Lake product.
Direct integration with Infinity.
The ability to integrate non-Transact data. TDH is designed exclusively for Transact data. The capability to integrate non-Transact data is included with an upgrade to the complete Temenos Data Lake product.
Use Cases of TDH:
Transact Enquiry Performance – Some Transact enquiries experience slow performance on the primary Transact (OLTP) database. The TDH incorporates the Operational Data Store, which is designed to enhance query performance and can improve enquiry performance by as much as 10 times.
Efficient Banking APIs - Some Transact APIs operate at a reduced speed on the primary Transact (OLTP) database. These APIs can now be directly accessed by the Operational Data Store via a semantic query layer, potentially enhancing performance by as much as 10 times.
Data Extraction – Financial institutions need the data held in Transact to populate various other systems across the bank. TDH supports real-time extraction, preparation, and optimization of Transact data for a wide range of extraction scenarios.
CSV file creation – TDH can generate CSV files from Transact data, which is frequently required for batch integration with both internal banking systems and external vendors.
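A batch CSV extract of this kind can be sketched as follows. The column names and rows are invented for illustration; TDH's actual extracts come from the Extract Data Store.

```python
import csv
import io

# Hypothetical records as they might arrive from a data store query.
records = [
    {"account_id": "ACC-1", "currency": "EUR", "balance": "1000.00"},
    {"account_id": "ACC-2", "currency": "USD", "balance": "250.50"},
]

# Write a header row plus one line per record; in practice this would
# target a file handed to the downstream system or vendor.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["account_id", "currency", "balance"])
writer.writeheader()
writer.writerows(records)
extract = buf.getvalue()
```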
Populating the bank's Data Warehouse – Bank data warehouse interfaces are often intricate and require significant transformation to match the target schema. TDH, together with Data Engineering, can feed data into bank data warehouses; alternatively, banks can use their own ETL tools to connect to the data streams or the TDH data stores.
Reporting – The TDH data stores, including the Operational Data Store and the Snapshot Data Store, serve as potential sources for bank reporting. Nevertheless, the Analytical Data Store, which is offered alongside Temenos Analytics, enhances Transact data for various forms of bank reporting and analytics.
Legacy Core Data – TDH is capable of importing data from the legacy (pre-conversion) core banking system into a TDH data repository. This functionality supports various use cases, including historical customer statements, reconciliation, post-live access to legacy core data within Transact, and internal audit assessments of historical fraud, among others.
AI – Data scientists can link to TDH data streams or data repositories to obtain the Transact data necessary for integration into their feature engineering environments and model training settings.
Data Archiving – TDH maintains historical daily snapshots of Transact data that can be utilized for any scenario requiring historical end-of-day data, including reporting, auditing, integration, customer inquiries, and more.
Internal Audit – Bank audit teams frequently require access to historical data while examining different fraud cases or other audit situations.
Implementing TDH, for both cloud and on-premises deployments:
Phase | Activities | Duration
1 | Discovery: identify TDL candidates, health checks, apply data profiling | 1 week
2 | Build: infrastructure setup, deploy TDH, configuration | 2 weeks
3 | Execution | 3 weeks
4 | SIT and UAT testing | 3 weeks
  | Total | 3 months
If you are exploring options to implement TDH, need T24-related training, or want to check T24 performance, please contact us. It would be a pleasure to assist you.
Happy Performance Engineering!



