
More than a Partner


Presentation Transcript


  1. More than a Partner – t-SAS: T24 Scramble, Archiving and Subset. QUALITY PARTNER FOR YOUR EXPANSION

  2. - Reduce storage costs by moving old data offline to lower-cost data stores.
     - Increase efficiency by reducing the active data footprint.
     - Improve backup scenarios: smaller data sets are faster to back up.
     - Speed recovery times: smaller data sets are more portable and faster to restore.
     - Improve performance: smaller data sets index faster and reduce retrieval overhead.
     - Meet compliance needs: data is still preserved for later access and backed up.
     - Keep performance (enquiry and COB) under control after go-live, as data gradually grows over time.
     - Duplicate environments (production copies) faster and at a reduced size.
     - Reduce upgrade and data conversion time.

  3. Why archiving? 40% CAGR may be a conservative estimate! “With growth rates exceeding 125%, organizations face two basic options: continue to grow the infrastructure or develop processes to separate dormant data from active data.” Source: Meta Group 2008

  4. Why archiving?

  5. As the production database grows, it continues to add people, processes and technology, and it continues to decrease performance, availability and time for other projects.

  6. - Up to 80% of data in OLTP databases over 2 years old is no longer needed for daily business activity.
     - Administration costs for 1 TB of storage are five to seven times higher than the storage costs themselves (Dataquest/Gartner).
     - Removing old transactions from the active database will reduce costs and increase the performance of mission-critical applications.
     - Most banks don't monitor database growth on a daily basis.
     - Archiving is difficult to introduce after 2 years of database growth, as the archiving process needs more downtime on a large database.

  7. Diagram: actual data requirement = size of the production database + all replicated clones. A 200 GB production database replicated to Test, Development, Backup, Disaster Recovery, Quality Control and Training environments adds up to roughly 1200 GB in total.

  8. Diagram: data lifecycle. Current data (up to about a year) and active historical data (2-3 years) stay in the production database; older data moves to the archive, which feeds the archive reporting database, the archived/purged database and the test databases.

  9. Diagram: data is purged from the production database into archive files, with data access (locate, browse, query, report) running against the archive database.
     - Reduce the amount of data in the application database.
     - Remove obsolete or infrequently used data.
     - Maintain the "business context" of archived data.
     - Purge archived files regularly to keep control.
     - Enable easy user access to archived information: view, research and restore as needed.
     - Support the bank's data and storage management strategies.

  10. Archiving is a T24-oriented solution: it reduces the number of records in live or HIS files by moving them to the $ARC file. The existing standard archiving process and parameters cover only part of the core T24 files; archiving any local files, or core files not included in standard archiving (fees-related tables, AM-related tables, AA tables, etc.), needs additional local development. Tables are classified into 3 categories:
     - Core tables (covered by the core archival process)
     - Core new (not covered by the core archival process; new local development)
     - Local tables (not covered by the core archival process; new local development)

  11. Diagram: records are moved from the live DE.O.HISTORY file to ../bnk.arc/de/DE.O.HISTORY$ARC; the on-line file keeps the remaining data plus the empty space left behind until it is resized. The move is sketched below.
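The mechanics can be illustrated with a minimal sketch (Python purely for illustration; the real process runs inside T24/jBASE, and the field name, date format and retention value below are assumptions):

```python
from datetime import date, datetime

RETENTION_DAYS = 365  # hypothetical retention period

def archive_pass(live: dict, arc: dict, today: date) -> int:
    """Move records older than the retention period from the live
    file to its $ARC counterpart; returns the number moved."""
    moved = 0
    for key in list(live):
        # BOOKING.DATE is an assumed field name in YYYYMMDD format
        booking = datetime.strptime(live[key]["BOOKING.DATE"], "%Y%m%d").date()
        if (today - booking).days > RETENTION_DAYS:
            arc[key] = live.pop(key)  # the record now exists only in $ARC
            moved += 1
    return moved
```

Note that, as the diagram shows, moving records does not shrink the live file by itself: the freed space stays allocated until a resize/reorg reclaims it.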

  12. - The current archiving process doesn't cover all the core files that grow, and functionality is limited in old versions of T24.
     - The existing archiving process is a standalone application that moves data from LIVE/HIS tables to $ARC and does nothing else. It is a standalone process, not a solution, so banks cannot use it as-is.
     - Archiving doesn't reduce the database size by default; it actually increases it, because the empty space is not given back automatically.
     - Accessing archived data is not transparent for the user: specific enquiries have to be written to read data from the $ARC tables.
     - Parameterisation is limited (based only on DATE), which is a huge limitation for tables without a DATE.TIME field (concat files, some live files, etc.).
     - No database techniques are used in the whole archiving process.
     - The whole archiving process is manual (configuring, running, moving tables, running resize/reorg to reclaim space, accessing ARC data, etc.).
     - $ARC files are not supported during a T24 software upgrade, so these files become obsolete after the upgrade.

  13. Parameter fields: the retention period is the number of days, months or years the data has to be kept; the age of each record is calculated from the TODAY date and the date supplied by the FILTER.EXPR field (the age test is sketched below). A related file is the name of a T24 file linked with SOURCE.FILE, i.e. a file that has a relation with the main file currently being archived.

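As a rough illustration of the age test, here is a sketch in Python (the unit codes, the YYYYMMDD format and the function names are assumptions, not the actual parameter syntax):

```python
from datetime import date, datetime, timedelta

def cutoff_date(today: date, amount: int, unit: str) -> date:
    """Cut-off for a retention of N days ('D'), months ('M') or years ('Y')."""
    if unit == "D":
        return today - timedelta(days=amount)
    if unit == "M":
        months = today.year * 12 + today.month - 1 - amount
        year, month = divmod(months, 12)
        return date(year, month + 1, min(today.day, 28))
    if unit == "Y":
        return date(today.year - amount, today.month, min(today.day, 28))
    raise ValueError(f"unknown retention unit {unit!r}")

def is_archivable(record_date: str, cutoff: date) -> bool:
    """A record qualifies when the date supplied by FILTER.EXPR
    (assumed YYYYMMDD here) falls before the cut-off."""
    return datetime.strptime(record_date, "%Y%m%d").date() < cutoff
```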

  15. Optimum usage of storage:
     - Keep only the necessary data in expensive high-speed storage (e.g. SSD).
     - Retire less-accessed data to low-cost disk storage.
     - Control database growth and spend less on storage infrastructure.
     Software upgrade - supported:
     - Automatic run-time conversion converts old-format records to the current T24 release, so no separate, time-consuming conversion is needed.
     - A standard API is provided to read archive data, which can be used in local development.
     - There are no big performance issues; it is a simple field mapping from the old T24 release to the new one.

  16. Diagram: tablespace reorg. Tables A-D are copied through a temporary datafile and rewritten contiguously, so the free space scattered across the datafiles is consolidated and can be released.
     - Use reorg to optimise tables, indexes and tablespaces after archiving.
     - Most relational databases support ONLINE reorg.
     - Oracle supports online shrinking for XML tables and their related objects (indexes, LOB segments, etc.).
     - DB2 supports online REORG for XML databases from DB2 9.7.
     - There is no need to shut databases down during a REORG, and big tables can be reorganised with a parallel degree option to speed things up.
     - In a jBASE database, a resize can be run to downsize the tables.
     - After an effective reorg, performance will improve for these tables.

  17. Accessing archived data:
     - A standard build routine is provided to plug into existing enquiries; it reads archived or live data depending on the date given in the selection.
     - Standard enquiries written specifically for archived data are provided.
     - A full API set is provided, with clear documentation, to accommodate local development needs.
     Interface files / application housekeeping:
     - There is no default automatic housekeeping for T24 interface files / log files / OFS files.
     - Currently, the data accumulating in a number of folders shared by the T24 application servers (NFS/GPFS) is not treated in any way, so the total size and number of inodes grow considerably. An archiving mechanism is needed to prevent uncontrolled growth of these files.
     - The archive service scans specific folders based on a set of rules defined in a parameter file, with options to delete old files or create compressed archives. Run on a daily basis, it stabilises the amount of data in the selected folders; a sketch follows below.
     - This keeps application files clean and controlled, which helps reduce backup and restore times as well as storage needs.
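A minimal sketch of such a housekeeping pass (Python for illustration only; the folder names, rule format and thresholds are assumptions, not the actual parameter file):

```python
import gzip
import shutil
import time
from pathlib import Path

# Illustrative rules: (folder, glob pattern, max age in days, action)
RULES = [
    ("/t24/interface/ofs.out", "*.log", 30, "compress"),
    ("/t24/interface/tmp", "*", 7, "delete"),
]

def housekeeping(now: float | None = None) -> None:
    """Scan the configured folders and delete or gzip files older
    than each rule's age, so folder sizes stabilise over time."""
    now = now or time.time()
    for folder, pattern, days, action in RULES:
        for path in Path(folder).glob(pattern):
            if not path.is_file() or now - path.stat().st_mtime < days * 86400:
                continue
            if action == "delete":
                path.unlink()
            elif action == "compress":
                with open(path, "rb") as src, gzip.open(f"{path}.gz", "wb") as dst:
                    shutil.copyfileobj(src, dst)
                path.unlink()  # the original is replaced by the .gz archive
```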

  18. ARCHIVED DATA STORAGE - our solution offers several options for storing archived data:
     - Archived data can be kept as jBASE hash files in a separate file system; that directory can be ignored during backup and restore.
     - Archived data can be kept in a different TABLESPACE when T24 runs on ORACLE/DB2/SQL SERVER; the tablespace can sit on less expensive disks and be excluded from backup and restore.
     - Archived data files can be kept in a different schema when T24 runs on ORACLE/DB2/SQL SERVER; the schema can sit on less expensive disks and be excluded from backup and restore.
     - Archived data files can be kept in a different database altogether and accessed via a separate T24 environment.

  19. Diagram: normal scenario. Production users connect through WAS over MQ/TCP to T24, with all data in the PROD tablespace of the production database.

  20. Diagram: after-archive scenario. The production user (restricted menu, live data) and the ARC user (restricted menu, archive data) each connect through WAS over MQ/TCP to T24; the production database now holds a slimmer PROD tablespace with freed space, plus a separate ARC tablespace.

  21. Creating another tablespace does not jeopardise database security in any way, since the setup relies entirely on database functionality and follows the design of T24. A tablespace is a set of volumes on disks that hold the data sets in which tables are actually stored; all tables are kept in tablespaces, and a tablespace can hold one or more tables.
     Benefits:
     - The whole T24 data set resides in the database, with inherited resilience to data corruption.
     - Database features such as tablespace compression/encryption can be used for a smaller database size.
     - The tablespace can reside on separate, cheaper disks (SATA), so it will not affect the performance of the main database.
     - The daily backup will not include this tablespace.
     Drawbacks:
     - The tablespace will keep growing, so another purging procedure, based on the retention period of archive data, has to be developed to keep the tablespace size stable in the long term.
     - The current backup script must be modified to exclude the ARC tablespace from the daily backup process.

  22. Diagram: after-archive scenario with a separate archive database. Production and ARC users connect as before, but the ARC tablespace lives in a dedicated ARCHIVE DB reached from the production database via a DBLINK / federated server.

  23. The ARC files can be kept on a remote DB server and accessed from the production server via database connect options such as DBLINK, the DB2 federated server concept or JRFS (jBASE Remote File System).
     Benefits:
     - No change to the current production DB backup and restore scripts.
     - It works as an independent database, so DBA administration is easier.
     - Data security is not compromised, as both the production DB and the archive DB sit on the same production DB instance.
     - It works seamlessly on DB2/ORACLE/jBASE; no major patches, setup or changes are required to set up the federated server/DBLINK/JRFS.
     - The archive DB sits on a separate file system on cheaper disk volumes (SATA disks), making it a low-cost system.
     - The same archive DB can be shared between many T24 environments, which avoids dedicating storage to each environment.
     Drawbacks:
     - The T24 data structure becomes fragmented: part is stored in the production DB, part in the other DB.
     - Creating new tables in the remote DB from T24 is not possible, so tables need to be moved to the remote DB manually and a link established from the production DB.
     - Performance is affected (around 5 times slower compared to keeping data in the same DB).
     - Index creation does not work from the federated server to remote DB tables.

  24. ARCHIVE STORAGE – AS J4/JR FILES. Diagram: the production user (restricted menu, live data) and the ARC user (restricted menu, archive data) connect through WAS over MQ/TCP to T24; the ARC files are stored as J4/JR hash files on GPFS / NFS / JRFS, leaving freed space in the PROD tablespace of the production database.

  25. Archiving Data Storage – as J4/JR files.
     Benefits:
     - A file-based structure for each ARC table, making it easier and quicker to restore and work with only the data the business requested.
     - jBASE file distribution is possible. For example, a file can be distributed by date (by year for the STMT and Delivery files), so each part file contains the records of one year. The oldest part files can be put on tape and removed from disk; on request, they can easily be restored and used by T24 to query old data. Up to 254 part files can be defined, which also makes purging of archived files easier. A routing sketch follows below.
     - File sizes are around 50% smaller compared to DB tables, since there is no XML format.
     - A simple hash-file mechanism: no extra licence is required, and it is easy to administer.
     - No change to the DB backup and restore scripts; the files can easily be managed with tar/gz.
     Drawbacks:
     - The T24 data structure becomes fragmented: part is stored in the DB, part in file-based tables.
     - For file distribution and managing part files, a procedure must be developed that includes: preparing DISTRIB routines for the ARC files; a yearly script that compresses and puts on tape the oldest part files of all ARC files concerned, based on the archive retention period, replacing them on disk with empty part files; and a procedure to restore specific part files temporarily to disk so users can query old data from T24.
     - Risk of data corruption.
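The distribution idea can be sketched as a simple routing function (real DISTRIB routines are written in jBASE BASIC; the Python below, the base year and the date format are illustrative assumptions):

```python
BASE_YEAR = 2005  # hypothetical first year of archived data

def part_file(record_date: str, parts: int = 254) -> int:
    """Map a YYYYMMDD date to a yearly part file (1..254), mirroring
    a DISTRIB routine that groups STMT/Delivery records by year."""
    year = int(record_date[:4])
    index = year - BASE_YEAR + 1
    if not 1 <= index <= parts:
        raise ValueError(f"year {year} is outside the configured range")
    return index
```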

  26. Diagram: reporting against the archive. The production user keeps a restricted menu over live data, while the ARC user reaches archive data through reporting tools (EXCEL / CRYSTAL REPORTS / BUSINESS OBJECTS / TODD) via a driver / ETL, with T24 reading the separate ARCHIVE DB over a jRFS connection.

  27. SCRAMBLING

  28. Data Masking - Scrambling: Introduction.
     The main purpose of this utility is to scramble the T24 database while maintaining the data integrity of the platform. It anonymises sensitive data (client address, name, KYC fields, recognisable portfolios) in various tables. The objective of such an exercise is to provide a "scrambled" database outside the host country, or to host the scrambled environment while allowing cross-border access to it. The utility is developed in the T24 programming language (jBASIC) and is designed to run in multi-threaded mode, maximising server resource utilisation; the fan-out is sketched below.
     - Data masking is a prebuilt mechanism which can run during the SUBSETTING process or STANDALONE.
     - Masks sensitive data in non-production databases.
     - Completely parameter driven.
     - Scrambles or encrypts any field data in any T24 table (standard and non-standard).
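The multi-threaded mode can be pictured as a per-table fan-out (a minimal Python sketch; the function names, thread count and table names are assumptions, and the real utility runs as jBASIC threads):

```python
from concurrent.futures import ThreadPoolExecutor

def scramble_table(table: str) -> str:
    """Placeholder for one table's scramble pass: in the real utility
    this reads the profile, walks the records and masks the
    configured fields."""
    return f"{table}: scrambled"

def run_scramble(tables: list[str], threads: int = 8) -> None:
    """Fan the per-table work out across worker threads so the
    server's resources are kept busy."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        for result in pool.map(scramble_table, tables):
            print(result)

run_scramble(["FBNK.CUSTOMER", "FBNK.ACCOUNT"])  # illustrative table list
```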

  29. Data Masking - Scrambling: Masking Methods.
     - Built-in encryption and a simple scrambling algorithm (e.g. masking customer names with XXX), with field lengths adjusted automatically.
     - Updating the related concat files for sensitive fields is easily configurable.
     - Easily extended with algorithms of your own.
     - De-identifies sensitive data: employee, customer and supplier information.
     - Perfect for training, testing and development databases, and good for offshore development.
     Pre-built package:
     - Based on best practices followed by most banks, all major sensitive application fields are mapped by default in the package; only additional fields, such as LOCAL.REF fields, still need to be mapped.
     - Covers all major T24 modules.

  30. SAS.SCRAMBLE.PROFILE:
     - Automatically identifies all associated files like $NAU, $HIS, $ARC or $SIM and scrambles them too.
     - Supports random values for DATE, MONTH or YEAR, or any LIST of values from SAVEDLISTS.
     - Supports encryption; the default encryption is XOR, and complex encryption can be plugged in via the SUBROUTINE option.
     - Adjusts the length automatically, per the dictionary or per the length of the original value.
     - If a PC file exists for a given application, its content is scrambled as well.
     - Concat file configuration, so related concat files are updated automatically.
     Examples of these methods are sketched below.
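Three of the methods, sketched in Python (the key, list values and helper names are illustrative assumptions; the real XOR and SUBROUTINE hooks live inside T24):

```python
import random

def mask_name(value: str) -> str:
    """Simple scrambling: replace letters with X/x while preserving
    length, case and spacing (e.g. 'John Doe' -> 'Xxxx Xxx')."""
    return "".join("X" if c.isupper() else "x" if c.islower() else c
                   for c in value)

def xor_scramble(value: str, key: bytes = b"demo-key") -> bytes:
    """Default-style XOR encryption over the raw bytes; applying the
    same key again restores the original value."""
    data = value.encode("utf-8")
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def random_from_list(choices: list[str]) -> str:
    """Random replacement drawn from a SAVEDLIST-style list of values."""
    return random.choice(choices)
```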

  31. SCRAMBLING - SAMPLE

  32. Data Masking – Dynamic Scrambling.
     - Dynamic scrambling makes it possible to MASK/SCRAMBLE sensitive information on the fly in the production database.
     - Based on a user-level flag, it scrambles sensitive fields in SEE mode.
     - It uses the same SAS.SCRAMBLE.PROFILE functionality; with a simple API call, dynamic scrambling can be enabled in all VERSION screens.
     - Perfect for controlling the access of external users, contractors, employees and account managers to sensitive data in the production database. The display-time hook is sketched below.
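A minimal sketch of the display-time hook (Python for illustration; the field set, flag name and record shape are assumptions, and in T24 this would sit behind the VERSION API call):

```python
SENSITIVE_FIELDS = {"SHORT.NAME", "NAME.1", "ADDRESS"}  # illustrative profile

def display_record(record: dict, user_is_restricted: bool) -> dict:
    """If the user carries the restriction flag, mask the profiled
    fields on the fly; the stored data itself is never changed."""
    if not user_is_restricted:
        return record
    return {field: ("X" * len(str(value)) if field in SENSITIVE_FIELDS else value)
            for field, value in record.items()}
```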

  33. jBASE Audit and Access Control.
     Monitor privileged access: there are many unknown privileged users in a bank, such as system administrators and users with access to sensitive content, especially in accounts whose activity should be closely monitored, like the production environment. To know what they are really doing with their access, tracking their activity is mandatory.
     Track outsourced partners: some banks have more outsourced users than internal ones and cannot control them with their standard policies; tracking and controlling their activities is a must.
     Comply with industry and international standards: a compliance audit is one of the most painful events in many banks. External regulators from the government or international organisations audit compliance with the mandatory standards; sensitive data must be protected, and any backend access to data sources needs close monitoring and control.

  34. jBASE Audit and Access Control - cont...
     T24 access: T24 application access is completely controlled by the application-level access control system (SMS). Setting up USER.SMS.GROUP controls the access method for each user defined in the system, and enabling logging at USER level ensures that all user activity is logged in the F.PROTOCOL table, which can later be used for audit purposes.
     TAFC / jBASE access: currently there is no default access control or logging system for TAFC/jBASE. jBASE acts both as the database and as the runtime environment for T24, so allowing access to jshell means users can do anything to T24 data from the backend; this gets especially tricky when external databases like Oracle, DB2 or MSSQL are used, since no Unix commands can protect the data. This is one of the major risks for the bank, and audit will also object to it. To overcome this, ITSS developed a wrapper for jshell that controls access restrictions for users and logs all activities done at jshell level.

  35. Access Control Setup: F.TAFC.ACCESS.PARAM – SYSTEM record.
     The SYSTEM record lists the commands that are logged by default. Full access remains available to the user, but all these operations are logged anyway.
     - Field 1: a comma-separated list of commands to be logged in the F.TAFC.AUDIT.LOG table, e.g. JED,BASIC,CATALOG.
     - If the SYSTEM record is missing, the default list is used: ED,JED,EED,EJED,EEDIT,EDIT,BASIC,CATALOG,DECATALOG,COPY,LOGOFF,JKILL,CREATE,SELECT,LIST,CLEAR,DELETE
     - It is advisable to add only sensitive commands to the list, to avoid unnecessary logging.

  36. Access Control Setup: F.TAFC.ACCESS.PARAM.
     The record ID is the login name of the Unix user. If no record is found, the command is not checked for restrictions: full access is available to the user, but all operations are still logged by default. One plausible check over these fields is sketched below.
     - Field 1: restricted files, separated by commas (a single field can also be restricted), e.g. FBNK.ACCOUNT,FBNK.CUSTOMER or FBNK.CUSTOMER>SHORT.NAME,FBNK.ACCOUNT>CURRENCY. Can be set to ALL so that the user is not allowed to touch any table defined in the VOC.
     - Field 2: restricted commands, separated by commas, e.g. JED,LIST,SELECT. Can be set to ALL so that no command is allowed for this user.
     - Field 3: exception files, separated by commas, e.g. FBNK.STMT.ENTRY,FBNK.CATEG.ENTRY. Used when field 1 is ALL.
     - Field 4: exception commands, separated by commas, e.g. LIST,SELECT,SORT. Used when field 2 is ALL.
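A sketch of one plausible reading of these rules (Python for illustration; the slide leaves the exact combination of the file and command checks open, so treating either hit as a denial is an assumption):

```python
from dataclasses import dataclass, field

@dataclass
class AccessParam:
    """One F.TAFC.ACCESS.PARAM record, fields 1-4."""
    restricted_files: set = field(default_factory=set)
    restricted_commands: set = field(default_factory=set)
    exception_files: set = field(default_factory=set)
    exception_commands: set = field(default_factory=set)

def is_denied(p: AccessParam, command: str, filename: str) -> bool:
    """Deny when the target file or the command hits a restriction,
    honouring ALL plus the corresponding exception lists."""
    file_hit = (filename in p.restricted_files
                or ("ALL" in p.restricted_files
                    and filename not in p.exception_files))
    cmd_hit = (command in p.restricted_commands
               or ("ALL" in p.restricted_commands
                   and command not in p.exception_commands))
    return file_hit or cmd_hit
```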

  37. Sample TAFC.ACCESS.PARAM records (the record ID is the Unix user name):
     t24oper
       001 FBNK.CUSTOMER>SHORT.NAME,FBNK.ACCOUNT>CURRENCY
       002 LIST,JED
       003
       004
     t24admin
       001 FBNK.ACCOUNT
       002 ALL
       003
       004 SELECT,JED
     t24support
       001 ALL
       002
       003 FBNK.STMT.ENTRY,FBNK.CATEG.ENTRY
       004

  38. Audit Log - F.TAFC.AUDIT.LOG. This is a log file with the following details:
     - ID: format is Date_Time_PortNumber_LoginName
     - Field 1: routine name
     - Field 2: record key
     - Field 3: date & time
     - Field 4: actual command executed
     - Field 5: user login id
     - Field 6: IP address
     - Field 7: host name
     - Field 8: terminal
     - Field 9: environment
     A sketch of how such an entry is assembled follows below.
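A hedged Python sketch of the assembly (the ID format follows the slide; how the port, terminal and omitted fields are obtained here is an assumption):

```python
import getpass
import os
import socket
from datetime import datetime

def audit_entry(command: str, routine: str, record_key: str, port: int) -> dict:
    """Build one F.TAFC.AUDIT.LOG-style entry keyed as
    Date_Time_PortNumber_LoginName."""
    now = datetime.now()
    user = getpass.getuser()
    return {
        "id": f"{now:%Y%m%d}_{now:%H-%M-%S}_{port}_{user}",
        "routine": routine,                               # field 1
        "record.key": record_key,                         # field 2
        "date.time": f"{now:%d %b %Y %H:%M:%S}".upper(),  # field 3
        "command": command,                               # field 4
        "login": user,                                    # field 5
        "host": socket.gethostname(),                     # field 7
        "terminal": os.environ.get("SSH_TTY", ""),        # field 8 (assumption)
    }
```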

  39. Sample TAFC.AUDIT.LOG records:
     20120309_11-35-10_4080_t24support
       001 jsh.b -> jsh.b -> jsh.b
       002 jutil_jsh__dev_pts_106
       003 09 MAR 2012 11:35:10
       004 LIST-ITEM F.TAFC.ACCESS.PARAM 't24oper'
       005 t24support
       006 (P06412)
       007 t24prodserver
       008 /dev/pts/106
       009 t24prod
     20120309_12-08-36_4081_t24oper
       001 jsh.b -> jsh.b -> jsh.b
       002 jutil_jsh__dev_pts_99
       003 09 MAR 2012 12:08:36
       004 LIST FBNK.ACCOUNT
       005 t24oper
       006 192.168.56.101
       007 t24prodserver
       008 /dev/pts/99
       009 t24test
     Both F.TAFC.ACCESS.PARAM and F.TAFC.AUDIT.LOG are kept in jBASE hash files, with write permission given only to ADMIN or ROOT. This prevents other Unix users from tampering with these configuration tables.

  40. SUBSETTING

  41. Archiving – Our approach.
     Scoping - 5 days:
     - Analyse the database for growth and big tables.
     - Analyse application-level directories and interface files.
     - Decide where archived data will reside.
     - Estimate storage savings, archiving timing, retention period, TCO and ROI.
     Conference room pilot - 10 days:
     - Configure the parameter table and do a trial run.
     - Correct and re-run archiving / reclaiming / scrambling.
     - Create data access to the archive.
     - Train the administrators on archiving.
     Refine/UAT - 10 days:
     - UAT testing with business user validation; approve the parameter setup.
     - Validate COB / online.
     - Month-end / year-end testing.
     Deploy - 5 days:
     - Production deployment, production run and support.

  42. Methodology (diagram): scoping → data archive strategy and storage → configure the parameter table and trial run → correct and re-run archiving / reclaiming / scrambling → UAT testing with business user validation → validate COB / online → production deployment / training → post-live support → happy client.

  43. Contact us:
     ITSS
     109, rue du Pont du Centenaire
     CH – 1228 Plan-les-Ouates
     Switzerland
     Tel: +41 22 706 20 80
     www.itssglobal.com
     Thank you for your attention! QUALITY PARTNER FOR YOUR EXPANSION
